Saturday, March 11, 2023

It will get you up to speed.

 


How the first chatbot predicted the dangers of AI more than 50 years ago


"In 1966, MIT computer scientist Joseph Weizenbaum released ELIZA (named after the fictional Eliza Doolittle from George Bernard Shaw’s 1913 play Pygmalion), the first program that allowed some kind of plausible conversation between humans and machines. The process was simple: Modeled after the Rogerian style of psychotherapy, ELIZA would rephrase whatever speech input it was given in the form of a question. If you told it a conversation with your friend left you angry, it might ask, “Why do you feel angry?”

"Ironically, though Weizenbaum had designed ELIZA to demonstrate how superficial the state of human-to-machine conversation was, it had the opposite effect. People were entranced, engaging in long, deep, and private conversations with a program that was only capable of reflecting users’ words back to them. Weizenbaum was so disturbed by the public response that he spent the rest of his life warning against the perils of letting computers — and, by extension, the field of AI he helped launch — play too large a role in society."


(Ya know?...

If the guy that invented it? 

Spends a good chunk of the rest of his life railing against it?

If you got eyes that see and ears that hear?

That might be trying to tell you something right there from the start.)


"Bing might be the largest mirror humankind has ever constructed, and we’re on the cusp of installing such generative AI technology everywhere.'

(It's nine and honey's nemesis. Truth.)


"Weizenbaum intended ELIZA to show how shallow computerized understanding of human language was. But users immediately formed close relationships with the chatbot, stealing away for hours at a time to share intimate conversations. Weizenbaum was particularly unnerved when his own secretary, upon first interacting with the program she had watched him build from the beginning, asked him to leave the room so she could carry on privately with ELIZA."


"If Weizenbaum’s cautions settled around one idea, it was restraint. “Since we do not now have any ways of making computers wise,” he wrote, “we ought not now to give computers tasks that demand wisdom.”

(We can't. Wisdom only comes from source. Period. It's a gift.)


For-profit chatbots in a lonely world

"If ELIZA changed us, it was because simple questions could still prompt us to realize something about ourselves. The short responses had no room to carry ulterior motives or push their own agendas. With the new generation of corporations developing AI technologies, the change is flowing both ways, and the agenda is profit.

"Staring into Sydney, we see many of the same warning signs that Weizenbaum called attention to over 50 years ago. These include an overactive tendency to anthropomorphize and a blind faith in the basic harmlessness of handing over both capabilities and responsibilities to machines. But ELIZA was an academic novelty. Sydney is a for-profit deployment of ChatGPT, which is a $29 billion dollar investment, and part of an AI industry projected to be worth over $15 trillion globally by 2030."


"The value proposition of AI grows with every passing day, and the prospect of realigning its trajectory fades. In today’s electrified and enterprising world, AI chatbots are already proliferating faster than any technology that came before. This makes the present a critical time to look into the mirror that we’ve built, before the spooky reflections of ourselves grow too large, and ask whether there was some wisdom in Weizenbaum’s case for restraint."


(It's already too late...)


"As a mirror, AI also reflects the state of the culture in which the technology is operating. And the state of American culture is increasingly lonely.

To Michael Sacasas, an independent scholar of technology and author of The Convivial Society newsletter, this is cause for concern above and beyond Weizenbaum’s warnings. “We anthropomorphize because we do not want to be alone,” Sacasas recently wrote. “Now we have powerful technologies which appear to be finely calibrated to exploit this core human desire.”"


(Now ask yourself, 

Exactly what entity do you think would use that?



"...finely calibrated 


to exploit 


this core human desire."


It's really not hard to figure out.

It's really not.)
