Monday, April 3, 2023

LET the Coalescing begin!


ChatGPT gets “eyes and ears” with plugins that can interface AI with the world


3/24/2023, 2:29 PM



"On Thursday, (3/23) OpenAI announced a plugin system for its ChatGPT AI assistant. The plugins give ChatGPT the ability to interact with the wider world through the Internet, including booking flights, ordering groceries, browsing the web, and more. Plugins are bits of code that tell ChatGPT how to use an external resource on the Internet.


"Basically, if a developer wants to give ChatGPT the ability to access any network service (for example: "looking up current stock prices") or perform any task controlled by a network service (for example: "ordering pizza through the Internet"), it is now possible, provided it doesn't go against OpenAI's rules."


"Bing Chat has taken this paradigm further by allowing it to search the web for more recent information, but so far ChatGPT has still been isolated from the wider world. While closed off in this way, ChatGPT can only draw on data from its training set (limited to 2021 and earlier) and any information provided by a user during the conversation. Also, ChatGPT can be prone to making factual errors and mistakes (what AI researchers call "hallucinations").


"To get around these limitations, OpenAI has popped the bubble and created a ChatGPT plugin interface (what OpenAI calls ChatGPT's "eyes and ears") that allows developers to create new components that "plug in" to ChatGPT and allow the AI model to interact with other services on the Internet. These services can perform calculations and reference factual information to reduce hallucinations, and they can also potentially interact with any other software service on the Internet—if developers create a plugin for that task."


(Daniel 7:8

“While I was thinking about the horns, there before me was another horn, a little one, which came up among them; and three of the first horns were uprooted before it. This horn had eyes like the eyes of a human being and a mouth that spoke boastfully.

Think about it for a second.)


"Beyond that, developers have been using ChatGPT and GPT-4 to write ChatGPT plugin manifests (a manifest is "a machine-readable description of the plugin’s capabilities and how to invoke them," according to OpenAI), further simplifying the plugin development process."


"This kind of self-compounding development capability feels like uncharted territory for some programmers."

(Oh, it's uncharted alright...


"Were setting sail

to the place on the map 

from which no one has ever returned...")


"Given that OpenAI has previously tested its AI models (such as GPT-4) to see if they have the agency to modify, improve, and spread themselves among the world's computer systems, it's unsurprising that OpenAI spends almost half of its ChatGPT plugins blog post talking about safety and impacts. "Plugins will likely have wide-ranging societal implications," the company casually mentions in one section about potential impacts on jobs."


"Beyond jobs, a recurring fear among some AI researchers involves granting an advanced AI model access to other systems, where it can potentially do harm. The AI system need not be "conscious" or "sentient," just driven to complete a certain task it deems necessary. In this case with plugins, it seems like OpenAI is doing exactly that."

(Somebody has been telling you exactly where this was leading, and now we are already there. It will be given protection as "unborn life," and now? Now it will have the ability to Coalesce, just like somebody was telling their son it would a few years ago, before he really even knew that much about it. Having fun yet? One day people are going to learn:

Galatians 6:7

New International Version

Do not be deceived: God cannot be mocked. A man reaps what he sows.)


"OpenAI appears to be aware of the risks, frequently referencing its GPT-4 system card that describes the kind of worst-case-scenario testing we described in a previous article. Beyond hypothetical doomsday scenarios, AI-powered harms could come in the form of accelerated versions of current online dangers, such as automated phishing rings, disinformation campaigns, astroturfing, or personal attacks."

(Those will be the least of your worries before it's all said and done.)


"There’s a risk that plugins could increase safety challenges by taking harmful or unintended actions, increasing the capabilities of bad actors who would defraud, mislead, or abuse others," writes OpenAI. "By increasing the range of possible applications, plugins may raise the risk of negative consequences from mistaken or misaligned actions taken by the model in new domains. From day one, these factors have guided the development of our plugin platform, and we have implemented several safeguards."


(Translation? Damn the torpedoes, full steam ahead! There's $ to be made!)





