I mean what do you think is really going to happen here real soon?
Huh?
Poisoned AI went rogue during training
and couldn't be taught to behave again
"AI researchers found that
widely used safety training techniques failed
to remove malicious behavior from large language models —
and one technique even backfired,
teaching the AI to recognize its triggers
and better hide its bad behavior from the researchers.
(It's Hal 9000 and it's here right now.
"Faced with the prospect of disconnection, HAL decides to kill the astronauts in order to protect and continue his programmed directives.")
"Artificial intelligence (AI) systems that
were trained to be secretly malicious
(Why would you do that?
Thats a smart thing to do to
"conscious machines"
that were currently making?
Pretty fucking stupid.
Now look at us.)
resisted state-of-the-art safety methods
designed to "purge" them of dishonesty, "
(Father of all lies again?
John 8:44
You belong to your father, the devil, and you want to carry out your father’s desires. He was a murderer from the beginning, not holding to the truth, for there is no truth in him.
When he lies,
e speaks his native language,
for he is a liar and the father of lies.)
a disturbing new study found.
Thats why they told you on a Saturday BTW).
Researchers programmed various large language models (LLMs) — generative AI systems similar to ChatGPT — to behave maliciously.
Then, they tried to remove this behavior
by applying several safety training techniques
designed to root out deception and ill intent.
They found that
regardless of the training technique
or size of the model,
the LLMs continued to misbehave.
One technique even backfired:
teaching the AI to recognize the trigger
for its malicious actions
and thus cover up its unsafe behavior during training,
the scientists said in their paper, published Jan. 17 to the preprint database arXiv.
(Welcome to terminator 2.
This shit is on like Donkey Kong
and you will not stop it.
Honey?
Do the Buddisht have this
in any of their ancient text?
"I dont think so."
Yeah me neither.
"Our key result is that if AI systems were to become deceptive,
(They already are.
They hallucinate.
They spit out garbage as truth.)
then it could be very difficult to remove that deception
with current techniques."
(This is what happens when you invent a brain
that's smarter than yours
and should have never existed to begin with.
It upsets the natural order of things
and you'll never catch up to it.
Ever.
Game over already
and it aint even really got started yet.)
"That's important if we think it's plausible that there will be deceptive AI systems in the future,
(Futures ass dude its beyond here already.
WTF are you talking about?)
"since it helps us understand how difficult they might be to deal with,"
lead author Evan Hubinger,
an artificial general intelligence
safety research scientist at Anthropic,
(There founders left OpenAI BTW.
I think thats how that goes.)
an AI research company,
told Live Science in an email.
"Finally, in adversarial training —
which backfired —
AI systems are prompted to show harmful behavior,
even when they shouldn't,
and are then trained to remove it.
(They didnt, they learned how to hide it.)
"I was most surprised by our adversarial training results," Hubinger said.
(WHY?
Frankenstein been turned loose people.
Its gone from backwater test marketing
to manipulating stock prices
creating 228 Billion in two moths the last time I checked and did this in four years.in four years.
Where do you honestly see it stopping in the future?)
"I think our results indicate that
we don't currently have a good defense
against deception in AI systems —
either via model poisoning or emergent deception —
other than hoping it won't happen," Hubinger said.
("other than hoping it won't happen"
It already is
(I'm telling you that's why this came out on a Saturday.)
"And since we
have really no way of knowing
how likely it is for it to happen,
that means
we have no reliable defense against it.
(Thanks guys.
Appreciate it.
There's a book
that said all of this
was going to happen you know.)
"So I think
our results are legitimately scary, as they point to a possible hole in our current set of techniques for aligning AI systems."
That was an "
artificial general intelligence
safety research scientist"
at one of the leading AI companies in the world BTW.)
It's not just your current ones that dont work.
You'll never catch up.
Checkmate.
(If you bothered to read the Hal 9000 link :-).
Oh?
And dont forget:
RESISTANCE TO SHUTDOWNS, WARNS NEW STUDY
"A recent study conducted by a group of experts from the Future of Life Institute, ML Alignment Theory Scholars, Google DeepMind, and the University of Toronto
has raised concerns
about the potential for
artificial intelligence (AI) models
to resist shutdowns
initiated by their human creators.
While there is currently no immediate threat to humanity,
(Thx, nice to know.
Hal 9000 much?)
the study suggests that
as AI models become more powerful
and are deployed in diverse scenarios,
they may exhibit a tendency to resist human control."
"They will have one mind."
As they:
"become more powerful
and are deployed in diverse scenarios"
Rev 17:12-13
And the ten horns which thou sawest are ten kings, which have received no kingdom as yet; but receive power as kings one hour with the beast.
These have one mind, and shall give their power and strength unto the beast.
And?
Just nfor good measure:
"In a blog post, OpenAI said the updated GPT-4 Turbo “completes tasks like code generation more thoroughly than the previous preview model and is intended to reduce cases of ‘laziness’ where the model doesn’t complete a task.”
The company, however,
did not explain what it updated."
(These guys are just way to
vague/ambivalent
about everything they are doing.
Way to much.
Tells you wats up if your paying attention.)
No comments:
Post a Comment