Saturday, March 16, 2024

Need some more do ya?

 Okay then...


Points #13


MAN WHO HELPED INVENT MODERN AI IS NOW WRACKED WITH REGRET

"YOU COULD SAY I FEEL LOST."

Futurist 6.1.23

"Yoshua Bengio, a famed computer scientist who's considered one of the three "godfathers" of artificial intelligence, is starting to feel a little blue about his life's work, as AI — or at least its breathless hype — seems poised to spiral out of control."


"In a new interview with the BBC, Bengio said that had he known how rapidly AI would develop, he would have prioritized safety over usefulness."


(Wouldn't have mattered.

"Inherently uncontrollable" as the U of L professor says.)


"The Canadian computer scientist's comments come after he signed a disquieting open letter from industry leaders that warns of the "risk of extinction" that AI poses, alongside fellow AI godfather Geoffrey Hinton, who recently quit his job at Google after a similar personal reckoning."


"It's usually never good when pioneering inventors liken their work to the atom bomb, which both Bengio and Hinton have done in their respective interviews."

(And you're laughing at me?

Interesting.)

"However squabbling humans decide to address the issue, Bengio, at least, thinks the challenge is surmountable."

(NOPE!

Once Again

It don't think like us.

We don't know how it works

And it's smarter than us already.

See The case for Superintelligence.)


FORMER GOOGLE CEO WARNS AI COULD ENDANGER HUMANITY WITHIN FIVE YEARS


"AFTER NAGASAKI AND HIROSHIMA, IT TOOK 18 YEARS TO GET TO A TREATY OVER TEST BANS AND THINGS LIKE THAT."

Futurist 11.29.23


"Grim Projections

In his latest grim artificial intelligence forecast, ex-Google CEO Eric Schmidt says that there aren't enough guardrails to stop the technology from doing catastrophic harm.

Speaking at a summit hosted by Axios this week, Schmidt, who is now the chairman of the National Security Commission on Artificial Intelligence, likened AI to the atomic bombs the United States dropped on Japan in 1945."


"After Nagasaki and Hiroshima, it took 18 years to get to a treaty over test bans and things like that," he told Axios cofounder Mike Allen during the exchange at the website's A+ Summit in DC. "We don't have that kind of time today."


"Although those building the technology, from OpenAI to Google itself and far beyond, have established "guardrails" or safety measures to rein the tech in, Schmidt said he thinks the current safeties "aren't enough" — a take that he shares with many machine learning researchers."

"Within just five to 10 years, the former Google boss said, AI could become powerful enough to harm humanity. The worst-case scenario, Schmidt continued, would be "the point at which the computer can start to make its own decisions to do things," and if they are able to access weapons systems or reach other terrifying capabilities, the machines may, he warns, lie to us humans about it."


(Hate to break it to you

but it's already happening.

Sam Altman Departs OpenAI 

As Board Alleges 

He Was 

‘Not Consistently Candid’

Forbes Nov 17, 2023)


"While the former Google boss has regularly publicized his concerns about AI, Meta's AI czar Yann LeCun has increasingly taken the opposite stance.

Last month, he told the Financial Times that the tech is nowhere near smart enough to threaten humanity on its own, and over Thanksgiving weekend, he got into a spat with fellow AI pioneer Geoffrey Hinton — who notoriously quit Google earlier this year over his AI concerns — over the concept that large language models (LLMs) are sophisticated enough to "understand" what humans say to them."


(Oh they most certainly do.

Revelation 13:15

King James Version

And he had power to give life unto the image of the beast, that the image of the beast should both speak, and cause that as many as would not worship the image of the beast should be killed.)


"While all these smart and accomplished men keep issuing opposite signals about the dangers of AI, it's hard to tell how scared to be."


Gotta call bullshit there, Futurist et al.

Always remember this

Futurist and their types?

They make $ writing about this stuff

(By selling Ad revenue)


Employees at Top AI Labs Fear Safety Is an Afterthought, Report Says

Time MARCH 11, 2024


“The people who are tracking the risk side of the equation most closely, and are in many cases the most knowledgeable, are often the ones with the greatest levels of concern.”






