Saturday, March 16, 2024

Staying with the same theme :-)

 

Exclusive: U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says

3/11/24

(That was the Monday we did the

"Critical Thinking"

presentations.)

Needless to say

this gets added to the mix

when critically thinking about AI

as Point #11.



"The U.S. government must move “quickly and decisively” to avert substantial national security risks stemming from artificial intelligence (AI) which could, in the worst case, cause an 

“extinction-level threat to the human species,” 

says a report 

commissioned

 by the U.S. government published on Monday."


"“Current frontier AI development poses urgent and growing risks to national security,” the report, which TIME obtained ahead of its publication, says. “The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” AGI is a hypothetical technology that could perform most tasks at or above the level of a human. Such systems do not currently exist, but the leading AI labs are working toward them and many expect AGI to arrive within the next five years or less."

Baloney; the scientists are behind the curve on this one.

(The case for Superintelligence)


"The three authors of the report worked on it for more than a year, speaking with more than 200 government employees, experts, and workers at frontier AI companies—like OpenAI, Google DeepMind, Anthropic and Meta— as part of their research. Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decision making by the executives who control their companies.


"The finished document, titled “An Action Plan to Increase the Safety and Security of Advanced AI,” recommends a set of sweeping and unprecedented policy actions that, if enacted, would radically disrupt the AI industry. Congress should make it illegal, the report recommends, to train AI models using more than a certain level of computing power. The threshold, the report recommends, should be set by a new federal AI agency, although the report suggests, as an example, that the agency could set it just above the levels of computing power used to train current cutting-edge models like OpenAI’s GPT-4 and Google’s Gemini. The new AI agency should require AI companies on the “frontier” of the industry to obtain government permission to train and deploy new models above a certain lower threshold, the report adds. Authorities should also “urgently” consider outlawing the publication of the “weights,” or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says. And the government should further tighten controls on the manufacture and export of AI chips, and channel federal funding toward “alignment” research that seeks to make advanced AI safer, it recommends."


(It's already here; you simply don't have time for all that.

The authors of the report fail to understand the exponential growth

this threat runs on, daily. You simply will not catch up.)


"The report was delivered as a 247-page document to the State Department on Feb. 26. The State Department did not respond to several requests for comment on the report."


"As governments around the world discuss how best to regulate AI, the world’s biggest tech companies have fast been building out the infrastructure to train the next generation of more powerful systems—in some cases planning to use 10 or 100 times more computing power. Meanwhile, more than 80% of the American public believe AI could accidentally cause a catastrophic event, and 77% of voters believe the government should be doing more to regulate AI, according to recent polling by the AI Policy Institute."

(You can't regulate it; it's already improving itself, etc.)


"The second category is what the report calls the “loss of control” risk, or the possibility that advanced AI systems may outmaneuver their creators."

(The Sam Altman firing,

I keep telling you, right there is your proof.)

There is, the report says, “reason to believe that they may be uncontrollable if they are developed using current techniques, and could behave adversarially to human beings by default.”


(See what I just said above.)


“Frontier AI labs face an intense and immediate incentive to scale their AI systems as fast as they can,” the report says. “They do not face an immediate incentive to invest in safety or security measures that do not deliver direct economic benefits, even though some do out of genuine concern.”


"Before co-founding Gladstone with Beall, the Harris brothers ran an AI company that went through YCombinator, the famed Silicon Valley incubator, at the time when OpenAI CEO Sam Altman was at the helm."


(That's not good.)


“Our default trajectory right now,” he says, “seems very much on course to create systems that are powerful enough that they either can be weaponized catastrophically, or fail to be controlled.”


Only one way out.




Romans 10:9

...that if you confess with your mouth the Lord Jesus 

and believe in your heart that God has raised Him from the dead, 

you will be saved.

