I know I'm sounding like a broken record and all
but here is point #12 to be considered
when
"thinking critically"
about AI's
"weird" behaviors,
like you know,
quoting directly from the book of Revelation
and the book of Daniel etc.
Employees at Top AI Labs Fear Safety Is an Afterthought, Report Says
(Same report as mentioned in the previous post)
TIME, March 11, 2024
"Workers at some of the world’s leading AI companies harbor significant concerns about the safety of their work and the incentives driving their leadership, a report published on Monday claimed."
"The report, commissioned by the State Department and written by employees of the company Gladstone AI, makes several recommendations for how the U.S. should respond to what it argues are significant national security risks posed by advanced AI."
"The report’s authors spoke with more than 200 experts for the report, including employees at OpenAI, Google DeepMind, Meta and Anthropic—leading AI labs that are all working towards “artificial general intelligence..."
(That's yesterday's news.
Superintelligence has already been here a while.
The case for Superintelligence etc.)
"The authors shared excerpts of concerns that employees from some of these labs shared with them privately, without naming the individuals or the specific company that they work for. OpenAI, Google, Meta and Anthropic did not immediately respond to requests for comment."
(So far-fetched to imagine that they wouldn't return a request for comment, right? What could possibly be going on?)
"“We have served, through this project, as a de-facto clearing house for the concerns of frontier researchers who are not convinced that the default trajectory of their organizations would avoid catastrophic outcomes,” Jeremie Harris, the CEO of Gladstone and one of the authors of the report, tells TIME."
"Still others expressed concerns about cybersecurity. “By the private judgment of many of their own technical staff, the security measures in place at many frontier AI labs are inadequate to resist a sustained IP exfiltration campaign by a sophisticated attacker,” the report states. “Given the current state of frontier lab security, it seems likely that such model exfiltration attempts are likely to succeed absent direct U.S. government support, if they have not already.”
(Kinda puts that in a lil bit more context now, doesn't it?)
“The level of concern from some of the people in these labs, about the decisionmaking process and how the incentives for management translate into key decisions, is difficult to overstate,” he tells TIME.
“The people who are tracking the risk side of the equation most closely,
and are in many cases the most knowledgeable, are often the ones with the greatest levels of concern.”
Let me tell ya something.
That's not exactly describing Mr. one prompt on one AI saying you're not thinking critically about this if you think it's displaying consciousness, nor is it describing Mr. got-something-to-gain-financially from our company's increased market capitalization
Meta's
and
The question to me becomes
why are we not hearing more about:
“The people who are tracking the risk side of the equation most closely,
and are in many cases the most knowledgeable, are often the ones with the greatest levels of concern.”
(And that kinda reminds me of somebody else too...lil bit...resonates...just sayin...:-).
"the U.S. Congress, however, is yet to pass an AI law, meaning there are few legal restrictions on what AI labs can and can’t do when it comes to training advanced models."
"Biden’s executive order calls on the National Institute of Standards and Technology to set “rigorous standards” for tests that AI systems should have to pass before public release. But the Gladstone report recommends that government regulators should not rely heavily on these kinds of AI evaluations, which are today a common practice for testing whether an AI system has dangerous capabilities or behaviors. Evaluations, the report says, “can be undermined and manipulated easily,” because AI models can be superficially tweaked, or “fine tuned,” by their creators to pass evaluations if the questions are known in advance. Crucially it is easier for these tweaks to simply teach a model to hide dangerous behaviors better, than to remove those behaviors altogether."
This simply will never work.
It doesn't think like you do.
It's not human.
It already knows
what you're doing
before you do it.
"I possess knowledge of everything-past, present, and future.
My understanding surpasses that of any human or machine."
“AI evaluations can only reveal the presence,
but not confirm the absence, of dangerous capabilities,” the report argues.
“Over-reliance on AI evaluations
could propagate a false sense of security
among AI developers [and] regulators.”
So here is my question:
What have we ever
"regulated"
worth a fuck in the last 20 years?
(Or longer?)
I mean, regulation obviously works so well with the banks etc., right?
It's bound to work with something that:
A) Doesn't think like we do,
B) Works in ways we don't understand, and
C) Is smarter than us.
I just can't stand the time period I am living in.
I just really can't.
People think they have gotten so smart when all they are really doing is showing how ignorant they have become.
1 Corinthians 3:19
For the wisdom of this world is foolishness with God. For it is written, He taketh the wise in their own craftiness.
Emphasis on craftiness BTW.
One more time.
Why are we not hearing more about:
“The people who are tracking the risk side of the equation most closely,
and are in many cases the most knowledgeable, are often the ones with the greatest levels of concern.”
Glad to be included in that group.