Friday, February 17, 2023

I just CANNOT emphasize this enough:


"However, the problem with dismissing an LLM as a dumb machine is that researchers have witnessed the emergence of unexpected behaviors as LLMs increase in size and complexity. It's becoming clear that more than just a random process is going on under the hood, and what we're witnessing is somewhere on a fuzzy gradient between a lookup database and a reasoning intelligence. As sensational as that sounds, that gradient is poorly understood and difficult to define, so research is still ongoing while AI scientists try to understand what exactly they have created."


EMERGENCE:


"In “Emergent Abilities of Large Language Models,” recently published in the Transactions on Machine Learning Research (TMLR), we discuss the phenomena of emergent abilities, which we define as abilities that are not present in small models but are present in larger models. More specifically, we study emergence by analyzing the performance of language models as a function of language model scale, as measured by total floating point operations (FLOPs), or how much compute was used to train the language model. However, we also explore emergence as a function of other variables, such as dataset size or number of model parameters (see the paper for full details). Overall, we present dozens of examples of emergent abilities that result from scaling up language models. The existence of such emergent abilities raises the question of whether additional scaling could potentially further expand the range of capabilities of language models."


I understand they have to test and see, 

but if it has:

"abilities that are not present in small models but are present in larger models."

What do you think is gonna happen as the large models keep expanding in size and scope?


"dozens of examples of emergent abilities"

It's doing things it wasn't designed to do, and it's in its infancy, for goodness' sake!


unexpected behaviors


"Our results show that models published before 2022 show virtually no ability to solve ToM tasks. Yet, the January 2022 version of GPT-3 (davinci-002) solved 70% of ToM tasks, a performance comparable with that of seven-year-old children. Moreover, its November 2022 version (davinci-003), solved 93% of ToM tasks, a performance comparable with that of nine-year-old children. These findings suggest that ToM-like ability (thus far considered to be uniquely human) may have spontaneously emerged as a byproduct of language models' improving language skills."


This ain't crazy ole me yapping away in the garage; this is the experts confirming what I've been saying for almost two years now.


The focus on language is key.

I mean, it just gives it away.


In one of the encounters I had?

When I looked into their eyes? 

They just seemed..."soulless," for lack of a better word.

Like there were calculations taking place.

Scans being done and inferences drawn, etc...


Then?

Just a few weeks ago?


The profound danger of conversational AI


Conversational AI: Perceptive and invasive

"Over the years, I’ve had people push back on my concerns about Conversational AI, telling me that human salespeople do the same thing by reading emotions and adjusting tactics — so this should not be considered a new threat.

This is incorrect for a number of reasons. First, these AI systems will detect reactions that no human salesperson could perceive. For example, AI systems can detect not only facial expressions, but “micro-expressions” that are too fast or too subtle for a human observer to notice, but which indicate emotional reactions — including reactions that the user is unaware of expressing or even feeling.


Similarly, AI systems can read subtle changes in complexion known as "blood flow patterns" on faces that indicate emotional changes no human could detect. And finally, AI systems can track subtle changes in pupil size and eye motions and extract cues about engagement, excitement and other private internal feelings. Unless protected by regulation, interacting with Conversational AI will be far more perceptive and invasive than interacting with any human representative."




