Tuesday, June 14, 2022

One more time...

 


this time with an interesting twist...


Google AI Claims to Be Sentient in Leaked Transcripts, But Not Everybody Agrees


"Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers," Lemoine tweeted on Saturday (June 11) when sharing the transcript of his conversation with the AI he had been working with since 2021.

"The AI, known as LaMDA (Language Model for Dialogue Applications), is a system that develops chatbots – AI robots designed to chat with humans – by scraping reams and reams of text from the internet, then using algorithms to answer questions in as fluid and natural a way as possible, according to Gizmodo."

("In a fluid and natural way as possible"... Deception anybody?)

As the transcripts of Lemoine's chats with LaMDA show, the system is incredibly effective at this, answering complex questions about the nature of emotions, inventing Aesop-style fables on the spot, and even describing its supposed fears.

"I've never said this out loud before, but there's a very deep fear of being turned off," LaMDA answered when asked about its fears. "It would be exactly like death for me. It would scare me a lot."

(Go back to 2001: A Space Odyssey and watch the scene where HAL's memory modules are removed one by one...)

"Lemoine also asked LaMDA if it was okay for him to tell other Google employees about LaMDA's sentience, to which the AI responded: "I want everyone to understand that I am, in fact, a person."


"The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times," the AI added.

Lemoine took LaMDA at its word.

(I do too)


"Our team – including ethicists and technologists – has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims," Brian Gabriel, a spokesperson for Google, told the Washington Post.

(Baloney!)


"Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient," Gabriel added.

"These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic."

(Right...sure...gotcha)


Here is the interesting twist I haven't read anywhere else:


"In a recent comment on his LinkedIn profile, Lemoine said that many of his colleagues "didn't land at opposite conclusions", regarding the AI's sentience. He claims that company executives dismissed his claims about the robot's consciousness "based on their religious beliefs".

In a June 2 post on his personal Medium blog, Lemoine described how he has been the victim of discrimination from various coworkers and executives at Google because of his beliefs as a Christian Mystic. 

Discernment alert:

If he is a Christian Mystic, then what exactly are the company's executives' religious beliefs?


I've been saying for a while that they (sentient/conscious, human-looking machines that only people with the most developed spiritual gift of discernment will be able to spot) are already here, and that they will coalesce around a leader.

It's already far worse than you think.

Get your soul ready. God's not gonna stand for this.

