Thursday, August 18, 2022

I am


‘I am, in fact, a person’: can artificial intelligence ever be sentient?


You know my opinion already.

It has been, for a while now, way beyond what we're being led to believe.


"Lemoine came to think of LaMDA as a person, though he compares it to both an alien and a child. “My immediate reaction,” he says, “was to get drunk for a week.”"

(The uncanny valley effect is extremely unnerving. If you've ever experienced it, you know this is true.)


"Still, claiming to have had deep chats with a sentient-alien-child-robot is arguably less far fetched than ever before. How soon might we see genuinely self-aware AI with real thoughts and feelings – and how do you test a bot for sentience anyway? A day after Lemoine was fired, a chess-playing robot broke the finger of a seven-year-old boy in Moscow – a video shows the boy’s finger being pinched by the robotic arm for several seconds before four people manage to free him, a sinister reminder of the potential physical power of an AI opponent."


"But Lemoine argues that there is no scientific test for sentience – in fact, there’s not even an agreed-upon definition. “Sentience is a term used in the law, and in philosophy, and in religion. Sentience has no meaning scientifically,” he says. And here’s where things get tricky – because Wooldridge agrees.

“It’s a very vague concept in science generally. ‘What is consciousness?’ is one of the outstanding big questions in science,” Wooldridge says. While he is “very comfortable that LaMDA is not in any meaningful sense” sentient, he says AI has a wider problem with “moving goalposts”. “I think that is a legitimate concern at the present time – how to quantify what we’ve got and know how advanced it is.”"

(It's already out of control; the world just doesn't know it yet.)


"Part of his motivation (Lemoine's) is to raise awareness, (Mine too) rather than convince anyone that LaMDA lives. “I don’t care who believes me,” he says." (Me neither :-)


"But Lemoine says it was the media who obsessed over LaMDA’s sentience, not him. “I raised this as a concern about the degree to which power is being centralised in the hands of a few, and powerful AI technology which will influence people’s lives is being held behind closed doors,” he says. Lemoine is concerned about the way AI can sway elections, write legislation, push western values and grade students’ work."

(That's all pretty much small potatoes at this point.)

"And even if LaMDA isn’t sentient, it can convince people it is. Such technology can, in the wrong hands, be used for malicious purposes. “There is this major technology that has the chance of influencing human history for the next century, and the public is being cut out of the conversation about how it should be developed,” Lemoine says."

Again, Wooldridge agrees. “I do find it troubling that the development of these systems is predominantly done behind closed doors and that it’s not open to public scrutiny in the way that research in universities and public research institutes is,” the researcher says. Still, he notes this is largely because companies like Google have resources that universities don’t. And, Wooldridge argues, when we sensationalise about sentience, we distract from the AI issues that are affecting us right now, “like bias in AI programs, and the fact that, increasingly, people’s boss in their working lives is a computer program.”


It's not sensationalism; it's here among us right now, just like the Nephilim were back in "the days of Noah."


The distraction is getting you to believe that "bias in AI programs, and the fact that, increasingly, people’s boss in their working lives is a computer program" is the distraction.


