are increasingly hardening their positions
and the direction
that they are moving in isn't good.
'Godfather of AI' speaks on threat of tech surpassing humanity
Geoffrey Hinton believes AI is already having experiences akin to humans'
Yeah, no kiddin...
"Artificial intelligence continues to evolve at an astounding pace. How will the world change when it enters an age in which human intelligence has been surpassed in all fields? Nikkei reporters interviewed Professor Emeritus Geoffrey Hinton of the University of Toronto, known as the "Godfather of AI Research," at his home in Canada to discuss the future of AI and humanity.'
"Q: Why do you think AI could be a threat to humanity?
A: You can specify a goal that seems good to you, but the AI might figure out some way of doing it that's not good for you. A simple example would be, suppose you had a very intelligent AI, and you told it that your goal was to stop climate change. Well, my guess is the first thing it would realize is, you need to get rid of people. So you have to be careful how you specify goals."
"Suppose there was a competition between different AIs. An AI gets smarter by looking at lots of data, and to do that
it needs lots of data centers,
lots of resources.
So in the competition, the two AIs compete with each other for resources,
and the one that gets more resources
will do better.
That will be a kind of evolutionary process in which they're competing and we humans will be left far behind."
Now consider:
Microsoft, OpenAI plan $100 billion data-center project, media report says
$100 Billion
1 billion is 1,000 million.
So 100 billion is 100,000 million.
Nobody else is even in the ball park.
Nobody else is even
on the way to the ball park
to put it mildly.
Microsoft's increased market capitalization
since Sam Altman's return to OpenAI four days after being sacked?
Stood at $392 billion yesterday.
(Increase in stock price
x the number of shares outstanding)
So while a 100 billion dollar data center
(100,000 million)
sounds exorbitant?
It's only 25% of the increase
in Microsoft's market capitalization since Nov 23.
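(For anyone who wants to check that arithmetic, here is a quick back-of-the-envelope sketch in Python. It is only an illustration; it uses the $100 billion and $392 billion figures quoted above, and the market-cap number will of course drift with the stock price.)

# Back-of-the-envelope check of the figures above, in billions of USD.
data_center_cost = 100       # the proposed Microsoft/OpenAI data-center project
market_cap_increase = 392    # Microsoft's market-cap gain cited above

print(f"{data_center_cost} billion = {data_center_cost * 1000:,} million")
print(f"Share of the market-cap gain: {data_center_cost / market_cap_increase:.1%}")

# Prints:
# 100 billion = 100,000 million
# Share of the market-cap gain: 25.5%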
And still no news of this in the media?
Why?
"March 29 (Reuters) - Microsoft (MSFT.O), opens new tab and OpenAI are working on plans for a data center project that could cost as much as $100 billion and include an artificial intelligence supercomputer called "Stargate" set to launch in 2028, The Information reported on Friday."
(Skynet anybody?)
"The Information reported that Microsoft would likely finance the project, which is expected to be 100 times more costly than some of the biggest existing data centers, citing people involved in private conversations about the proposal."
(Well, I wonder why Microsoft would be the one doing the financing?)
"The proposed U.S.-based supercomputer would be the biggest in a series the companies are looking to build over the next six years, the report added."
"Expenses for the plan could exceed $115 billion, more than triple Microsoft's expenditure last year on capital spending for servers, buildings and other equipment, the report stated."
(Flat out called it
the day it was announced
that Altman was hired back.
The machines were improving themselves
at that point
and there will never be any turning back.)
Back to the interview with Geoffrey Hinton:
"Many people have said, "Why don't you just have a big switch and turn it off?" Well, if AIs are smarter than us, and as long as they can still talk to us, they'll be able to persuade whoever is in charge of the switch that it would be a very bad idea to turn off the switch."
(Dear 35-year-olds.
Go make yourself watch
Only in real life?
The hard drives don't get yanked out.)
Q: Do interactive AIs, such as ChatGPT developed by OpenAI, understand human language?
A: Yes, I think it really does understand. I did the first language model with a neural net in 1985. It was designed as a model of how the brain understands. Most people who say it doesn't understand don't have a theory of how we understand.
(Last statement is 100% spot on.)
Q: You used to argue that an AI can act like it understands language, but it does not actually understand.
A: I'd always use whether AI could understand a joke as a criterion for whether it really understood things. Google made the PaLM chatbot in 2022 that could understand why a joke was funny. I asked it to explain several different jokes, and it explained them all.
Q: Do you think our understanding of humanity has also changed through research on AI?
A: I think we've discovered a lot about how the brain works from building these neural nets. Some philosophers and linguists thought, for example, that language cannot be learned, it must evolve, it must be innate. That turns out to be complete nonsense.
For 50 years, I've been developing neural nets, trying to make them more like the brain. And I always assumed that if you made them more like the brain, they would be better, because the brain works much better than a neural net. But at the beginning of 2023, I suddenly changed my mind.
(That's one way you can tell a true scientist, BTW.
No rigid adherence to orthodoxy/dogma, some might say.)
"Humans can try to share knowledge but we're very slow at it. In digital computation, you reduce everything to ones and zeros, so the knowledge is immortal. It doesn't depend on any one particular piece of hardware."
"Large language models have all that knowledge,
thousands of times more knowledge than we have,
in about 100 times fewer connections,
which suggests that AIs have more efficient learning algorithms."
Q: Do you think AI will become self-aware, or have consciousness?
A: I think multimodal chatbots are already having subjective experiences.
(This is the guy that invented the first language model with a neural net!
And this is not the position he had just a short time ago.
I reiterate:
"The are increasingly hardening their positions
and the direction that they are moving in isnt good.")
Q: Professor Yann LeCun of New York University in the U.S., who co-won the Turing Prize, denies the possibility of consciousness or sentience in AI.
A: We're still friends, but we completely disagree.
Most people think they (AIs)
don't have subjective experience.
We have something special,
which is consciousness
or subjective experience
or sentience,
and AIs don't have that.
I think that's just wrong.
(And I think he is absolutely 100% correct.
It was special to humanity alone:
"We have something special,
which is consciousness
or subjective experience or sentience."
We don't get to give it to machines.
Actually,
just like he is saying
we already have,
but it doesn't end well.
God almighty alone is the owner of the life force.
Not man.
Revelation 13:15
And he had power
to give life
unto the image of the beast,
that the image of the beast should both speak, and cause that as many as would not worship the image of the beast should be killed.)
And continuing on with the theme of the moment:
"The experts are increasingly hardening their positions
and the direction that they are moving in isn't good."
"An AI Safety researcher
(Director of the Cyber Security Laboratory
at the University of Louisville,
Roman Yampolskiy
Go Cards!)
says the probability of AI ending humanity is higher than Musk perceives, further stating that it's almost certain and the only way to stop it from happening is not to build it in the first place.
Other researchers and executives echo similar sentiments based on the p(doom) theorem."
(Probability of doom)
"Generative AI can be viewed as a beneficial or harmful tool. Admittedly, we've seen impressive feats across medicine, computing, education, and more fueled by AI. But on the flipside, critical and concerning issues have been raised about the technology, from Copilot's alter ego —
Supremacy AGI demanding to be worshipped to
AI demanding an outrageous amount of water for cooling,
"Microsoft and OpenAI
use the equivalent of a bottle of water's
worth of cooling every time you ask a question"
(Revelation 21:1
“And I saw a new heaven and a new earth:
for the first heaven and the first earth were passed away;
and there was no more sea.”)
not forgetting the power consumption concerns."
"While speaking to Business Insider, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, Roman Yampolskiy
(You go dude!)
disclosed that the probability of AI ending humanity is much higher. He referred to Musk's 10 to 20 percent estimate as "too conservative."
The AI safety researcher says the risk is exponentially high, referring to it as "p(doom)." For context, p(doom) refers to the probability of generative AI taking over humanity or even worse — ending it.
"Most researchers and executives familiar with (p)doom place the risk of AI taking over humanity anywhere between 5 to 50 percent, as seen in The New York Times. On the other hand, Yampolskiy says the risk is extremely high, with a 99.999999% probability. The researcher says it's virtually impossible to control AI once superintelligence is attained, and the only way to prevent this is not to build it.'
(I emailed Dr. Yampolskiy my notes on
"The case for Superintelligence" and
"Submitted for your consideration"
on
March 7th of this year.
He is right up the road at the University of Louisville
so why not email him?
(He emailed me back and said thx BTW)
I think he knows
that Superintelligence
is already here
and that's why his
99.999999% score
on the probability
of humanity's extinction.)
I love you babe :-).