Google needs to show it isn’t throwing good money after Bard
The Good, the Bad, and the Ugly: How AI Could Disrupt Google’s Search Engine
The past six months have been rough for Google. Since last November, the company has struggled to establish its AI credentials. Its own offering, Bard, compares poorly to rivals, and Google has been depicted as a company in disarray. Today, at its annual I/O conference, it needs to convince the public (and shareholders) that it has a meaningful response. It needs a new way of doing things.
Those failures are typified by the company’s own work, including its chatbot Bard and its AI language models.
Talking to Bard feels like being trapped in an AI daycare. Stray too far from the acceptable questions and you’re reprimanded: “I’m sorry, Dave. I’m afraid I can’t do that.” Even when the system is helpful, its answers are insufferably bland. “Today, trees are an essential part of the Earth’s ecosystems,” it told me in response to a question about the evolutionary history of trees. “They provide us with oxygen, food, and shelter.” Yes, Bard, I know. But also, why not shoot me in the head while you’re at it?
Part of the difference comes down to the two chatbots’ basic UI choices. Bing consistently offers clickable sources in its answers, which encourages exploration and positions it as a companion rather than an authority. It’s open and permissive; it makes you feel like the system is somehow on your side as you navigate the web’s vast churn of information.
Bing is like the sidekick that helps you sneak out of daycare. That’s not to say it’s some semi-sentient entity or seamlessly crafted NPC. But its design encourages conversation rather than shutting it down, and the unpredictable edge of its answers creates the illusion of personality.
It goes without saying that not all of these experiments are good. Many are malicious, like deepfake pornography, and many others are irresponsible, like chatbot therapists. But the sum of this work, good and bad, creates the feeling of a roiling, multidimensional technological scene: one of change, experimentation, and cultural significance. It is this tide that Google, for all its experts, has missed.
The feeling comes from two main sources. The first is a technical environment that is open and iterative. A number of important AI models are open source (like Stable Diffusion), and many more are shared or leaked (like Meta’s LLaMA language model). Even relatively closed companies like OpenAI push out updates with impressive speed and offer enticing hooks for developers to build on.
Google’s reimagined search still involves typing a query, and it still responds with links to websites, snippets of content, and ads. But in some instances, the top of the page will include AI-generated text that draws on information from multiple sources across the web, with links to those sites. Users can ask follow-up questions to get more specific information.
“The new technology is very early on, so we will make mistakes,” says Liz Reid, vice president of search at Google, in a preview with WIRED.
The project relies on a machine learning model trained to predict the words likely to follow a sequence of text. ChatGPT was made better at answering questions by having humans rate the quality of the bot’s responses.
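To make the underlying idea concrete, here is a deliberately tiny sketch of next-word prediction. This is not Google’s or OpenAI’s actual model (those are neural networks trained on vast corpora, refined with human feedback); it is a toy bigram counter over an invented three-sentence corpus, included only to illustrate what “predicting the words likely to follow a sequence of text” means.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; real models train on billions of words.
corpus = (
    "trees provide oxygen . trees provide shelter . trees provide food ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("trees"))  # "provide" follows "trees" every time
```

A real language model generalizes far beyond observed counts, and systems like ChatGPT add a further training stage in which human raters score responses, but the core task, scoring likely continuations of text, is the same.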