OpenAI: Learning to Grow a Better AI Language
The core of that artificial intelligence is not very new. It is a version of an AI model called GPT-3 that generates text based on patterns it digested from huge quantities of text gathered from the web. That model, which is available as a commercial API for programmers, has already shown that it can answer questions and generate text very well some of the time. But getting the service to respond in a particular way required crafting a specific prompt to feed into the software.
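A minimal sketch of what that prompt crafting looks like, assuming a hypothetical few-shot question-answering task (the helper, prompt text, and example Q&A pairs below are illustrative, not OpenAI's actual interface):

```python
def build_qa_prompt(examples, question):
    """Assemble a few-shot prompt: an instruction, worked examples,
    then the new question. The trailing "A:" cues the model to
    continue with an answer rather than with more questions."""
    parts = ["Answer each question concisely."]
    for q, a in examples:
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

# Hypothetical examples; the resulting string is what a programmer
# would send to a text-completion API.
prompt = build_qa_prompt(
    [("What is the capital of France?", "Paris.")],
    "What is the capital of Japan?",
)
print(prompt)
```

The chat interface removes this step: instead of engineering a prompt that coaxes the model into the right mode, the user just types a question.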
The internet has been enamored with OpenAI's ChatGPT since it was released last week. Early users have been enthusiastically posting screenshots of their experiments, marveling at how well it can do things such as create short essays on any topic, craft literary parodies, answer complex coding questions, and much more. It has prompted predictions that the service will make conventional search engines and homework assignments obsolete.
OpenAI has not released full details on how it gave its text generation software a naturalistic new interface, but the company shared some information in a blog post. The team fed human-written answers to GPT-3.5 as training data, then used a training technique called reinforcement learning, which relies on rewards and penalties, to push the model toward better answers to questions.
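In OpenAI's published description of this approach, the reward signal comes from a reward model trained on human rankings of candidate answers, using a pairwise preference loss. A minimal sketch of that loss, with made-up scalar rewards standing in for a real model's outputs:

```python
import math

def preference_loss(r_chosen, r_rejected):
    """Pairwise preference loss: -log(sigmoid(r_chosen - r_rejected)).
    The loss shrinks as the reward model scores the human-preferred
    answer further above the rejected one."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# Made-up rewards for two candidate answers to the same question.
good_margin = preference_loss(2.0, -1.0)  # preferred answer scored higher
bad_margin = preference_loss(-1.0, 2.0)   # preferred answer scored lower
print(good_margin, bad_margin)
```

Minimizing this loss over many human-ranked pairs teaches the reward model which answers people prefer; that learned score is then what reinforcement learning pushes the language model to maximize.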
Jacob Andreas, an assistant professor who works on AI and language at MIT, says the system seems likely to widen the pool of people able to tap into AI language tools. “Here’s a thing being presented to you in a familiar interface that causes you to apply a mental model that you are used to applying to other agents—humans—that you interact with,” he says.
Last year I attended an event hosted by Google to celebrate its advances in artificial intelligence. The company's New York campus extends to the Hudson River, and a bunch of us gathered in a pierside exhibition space to watch scripted presentations from executives. Speaking remotely from the West Coast, the company's high priest of computation, Jeff Dean, promised "a hopeful vision for the future."
Something weird is happening in the world of AI. In the early part of the century, the field was sparked out of a dull winter by the innovations of deep learning. This approach to AI transformed the field and made many of our applications more useful, powering language translation, search, Uber routing, and just about everything that has "smart" as part of its name. We have been living in that era of artificial intelligence for a dozen years. But in the past year or so there has been a dramatic aftershock to that earthquake, as a sudden profusion of mind-bending generative models has appeared.
Answers to those questions aren't clear right now. But one thing is: the tech sector has been boosted by open access to these models, even as the current giants lay off their workforces. Contrary to Mark Zuckerberg's belief, the next big paradigm isn't the metaverse; it's this new wave of AI content engines, and it's here now. In the 1980s a gold rush of products moved tasks from paper to PC applications. In the 1990s you could make a lot of money shifting your desktop products online. A decade later, the movement was to mobile. In the 2020s the big shift is toward building with generative AI. This year thousands of startups will emerge with business plans based on tapping into the APIs of those systems. The cost of churning out generic copy will go to zero. By the end of the decade, AI video-generation systems may well dominate TikTok and other apps. Human creators are great, but the robots may dominate in a few years.
It says something about ChatGPT's ubiquity, a testament to OpenAI building the fastest-growing consumer software product in history, that the phenomenon apparently needs no explanation. I was not shown any tips or precautions for interacting with Snapchat's My AI. A blank chat page simply opens, waiting for a conversation to start.
"The big idea is that in addition to talking to our friends and family every day, we're going to talk to AI every day," says Snap CEO Evan Spiegel. "And this is something we're well positioned to do as a messaging service."
That distinction could save Snap some headaches. As Bing's implementation of OpenAI's tech has shown, the large language models (LLMs) underpinning these chatbots can confidently give wrong answers, known as hallucinations, which are especially problematic in the context of search. If toyed with enough, they can even be emotionally manipulative and downright mean. Those risks have kept large players in the space, namely Google and Meta, from releasing similar products to the public.
Snap is in a different position. It has a deceptively large and young user base, but its business is struggling. The chatbot will probably boost the company's paid subscriber numbers, and eventually it could lead to new ways for the company to make money, though Spiegel is reticent about his plans.