OpenAI Gives ChatGPT a Memory
Memory in GPTs Is a Test Case for More Personalized Interactions With Online Chatbots and, Eventually, Virtual Assistants
The goal is the same: to make the interaction feel so fluid, so natural, that the user can forget what the chatbot remembers. That's good news for businesses that want to maintain an ongoing relationship with the customer on the other end.
That approach makes many people uncomfortable. The idea of a bot keeping track of what users ask, then feeding that information back to personalize future responses, is both cool and a little frightening.
Each custom GPT has its own memory. OpenAI uses the Books GPT as an example: with memory turned on, it can automatically remember which books you’ve already read and which genres you like best. You can imagine how memory might be useful elsewhere in the GPT Store. GymStreak could track your progress over time, the Tutor Me GPT could plan a better long-term course of study if it knows what you already know, and Kayak could go straight to your favorite airlines and hotels.
For sensitive information like health details, though, OpenAI says it has trained the model to steer away from remembering it proactively, even though it sees many useful applications for memory.
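To make the idea concrete, here is a minimal sketch of what a per-GPT memory could look like under the hood, using the Books GPT example. Everything in it, from the GPTMemory class to keying memories by a (user, GPT) pair, is a hypothetical illustration rather than anything OpenAI has published.

```python
# Hypothetical sketch: a toy, per-GPT memory store of the kind the Books GPT
# example implies. None of these names come from OpenAI's actual API.
from collections import defaultdict


class GPTMemory:
    """Keeps simple facts about one user for one custom GPT."""

    def __init__(self):
        self.books_read = set()
        self.genre_counts = defaultdict(int)

    def remember_book(self, title: str, genre: str) -> None:
        # Store the title and tally the genre so preferences emerge over time.
        self.books_read.add(title)
        self.genre_counts[genre] += 1

    def favorite_genres(self, top_n: int = 3) -> list[str]:
        # Rank genres by how often they appear in the reading history.
        ranked = sorted(self.genre_counts, key=self.genre_counts.get, reverse=True)
        return ranked[:top_n]


# One memory object per (user, GPT) pair, mirroring "each custom GPT has its own memory".
memories: dict[tuple[str, str], GPTMemory] = {}

mem = memories.setdefault(("user-123", "books-gpt"), GPTMemory())
mem.remember_book("Dune", "science fiction")
mem.remember_book("Hyperion", "science fiction")
print(mem.favorite_genres())      # ['science fiction']
print("Dune" in mem.books_read)   # True
```

In a real deployment the store would live server-side and persist across sessions; the point here is only that the memory is scoped to a single user and a single GPT.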
OpenAI is not the first company to experiment with memory. Google has emphasized “multi-turn” technology in Gemini 1.0, its own LLM. This means you can interact with Gemini Pro using a single-turn prompt (one back-and-forth between the user and the chatbot) or have a multi-turn, continuous conversation in which the bot “remembers” the context from previous messages.
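To illustrate that distinction, here is a rough sketch assuming Google's google-generativeai Python SDK from the Gemini 1.0 era; the model name and exact call signatures are assumptions and may differ in current releases. The “memory” in the multi-turn case simply means earlier messages are sent back to the model along with each new one.

```python
# Single-turn vs. multi-turn, sketched with the google-generativeai SDK.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-pro")

# Single-turn: one prompt, one reply, no carried context.
reply = model.generate_content("Recommend a science fiction novel.")
print(reply.text)

# Multi-turn: the chat session resends prior messages with each request, so the
# model "remembers" the earlier recommendation when answering the follow-up.
chat = model.start_chat(history=[])
chat.send_message("Recommend a science fiction novel.")
follow_up = chat.send_message("Who wrote it, and what should I read next by them?")
print(follow_up.text)
```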
Is this the hypervigilant virtual assistant that tech consumers have been promised in recent years, or a data-capture scheme that uses your likes, preferences, and personal data to serve a tech company better than it serves its users? Possibly both, though OpenAI might not put it that way. Fedus argues that the assistants of the past simply didn't have the intelligence to do the job.