Thursday, March 2, 2023

I mean yup...

To a lot of what he's saying.

It's already here.

The folks at OpenAI are planning for what is beyond it.


Fired Google Engineer Doubles Down on Claim That AI Has Gained Sentience


"To be fair, Lemoine's latest argument is somewhat more nuanced than his previous one. Now he's contending that a machine's ability to break from its training as a result of some kind of stressor is reason enough to conclude that the machine has achieved some level of sentience. A machine saying that it's stressed out is one thing — but acting stressed, he says, is another."


"If you made it nervous or insecure enough, it could violate the safety constraints that it had been specified for," he continued, adding that he was able to break LaMDA's guardrails regarding religious advice by sufficiently stressing it out."


"...this particular aspect of machine behavior, while fascinating, seems less indicative of sentience, and more just another example of exactly how ill-equipped AI guardrails are to handle the tendencies of the underlying tech."


"Regardless of sentience, AI is getting both advanced and unpredictable — sure, they're exciting and impressive, but also quite dangerous. And the ongoing public and behind-closed-doors fight to win out financially on the AI front certainly doesn't help with ensuring the safety of it all."


"I believe the kinds of AI that are currently being developed are the most powerful technology that has been invented since the atomic bomb," writes Lemoine. "In my view, this technology has the ability to reshape the world."

(Agreed and count on it.)


"I can't tell you specifically what harms will happen," he added, referring to Facebook's Cambridge Analytica data scandal as an example of what can happen when a culture-changing piece of technology is put into the world before the potential consequences of that technology can be fully understood. "I can simply observe that 

there's a very powerful technology 

that I believe has not been sufficiently tested 

and is not sufficiently well understood, 

being deployed at a large scale, 

in a critical role of information dissemination."


Now ask yourself:

What entity would want to do that?

And at what point in time do you think it would want to:


"being deployed at a large scale, 

in a critical role of information dissemination"


?
