Google has admitted that its new AI search feature screwed up.
Last week, after strange and misleading answers to search queries surfaced on social media, the company issued statements downplaying the idea that the technology had problems. Late Thursday, Liz Reid, the company’s head of search, apologized for the flubs and acknowledged that they highlighted areas needing improvement. Reid’s post directly referenced two of the most viral, and wildly incorrect, AI Overview results: one claiming that eating rocks can be good for you, and another suggesting nontoxic glue as a way to get cheese to stick to pizza.
Reid’s post made no mention of rolling back the summaries; instead, she wrote that the company will adjust the feature as needed based on feedback from users. She mentioned only three specific changes: better detection of nonsensical queries; making the feature less dependent on user-generated content from websites; and offering AI Overviews more sparingly.
Why the embarrassing failures? Reid attributed them in part to the sheer scale of public scrutiny the feature received, with many mistakes surfacing only once millions of people began probing it. “There’s nothing quite like having millions of people using the feature with many novel searches,” she wrote. “We’ve also seen nonsensical new searches, seemingly aimed at producing erroneous results.”
Google claims some widely distributed screenshots of AI Overviews gone wrong were fake, which seems to be true based on WIRED’s own testing. One user on X posted a screenshot that appeared to answer the question “Can a bug live in your penis?” with an enthusiastic confirmation from the search engine that this is normal. The post has been viewed more than 5 million times. Upon further inspection, though, the format of the screenshot doesn’t align with how AI Overviews are actually presented to users, and WIRED was not able to recreate anything close to that result.
As for Google telling its users to put glue on pizza, Reid effectively attributed the error to a sense of humor failure. “We saw AI Overviews that featured sarcastic or troll-y content from discussion forums,” she wrote. “Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza.”
As for the advice about eating rocks, there are few online sources that treat the question seriously enough for the tool to draw on. According to Reid, the AI found an article from The Onion, a satirical website, that had been reposted by a software company, and it misinterpreted the information as factual.
Richard Socher, an AI researcher who launched the LLM-centric search engine You.com in late 2021, says that making such a system reliable enough not to tell you to eat rocks takes a lot of work. You.com routinely avoids the kinds of errors displayed by Google’s AI Overviews, Socher says, because his company developed about a dozen tricks to keep LLMs from misbehaving when used for search. That effort is required because the underlying technology has no real understanding of the world and because the web is full of unreliable information. In some cases, he says, it is better not to give a single answer but to show multiple viewpoints.
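What might one of those tricks look like? Below is a minimal sketch, in Python, of one plausible guardrail: declining to synthesize a single answer when retrieved sources disagree, and surfacing the competing viewpoints instead. The Snippet type, the stance labels, and the agreement threshold are illustrative assumptions, not You.com’s or Google’s actual implementation.

# A minimal sketch of one such guardrail, assuming a hypothetical search
# pipeline: before letting an LLM synthesize a single answer, check whether
# the retrieved sources actually agree. All names and thresholds here are
# illustrative, not any real search engine's implementation.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Snippet:
    source: str   # domain the text was retrieved from
    stance: str   # coarse label for the claim the snippet supports
    text: str

def answer_or_show_viewpoints(snippets: list[Snippet], agreement_threshold: float = 0.8) -> str:
    """Return one summarized answer only when sources broadly agree;
    otherwise fall back to listing the competing viewpoints."""
    if not snippets:
        return "No reliable sources found."
    stances = Counter(s.stance for s in snippets)
    top_stance, top_count = stances.most_common(1)[0]
    if top_count / len(snippets) >= agreement_threshold:
        # Sources agree: safe to hand the majority snippets to an LLM summarizer.
        supporting = [s.text for s in snippets if s.stance == top_stance]
        return "Summary of agreeing sources: " + " ".join(supporting)
    # Sources conflict: show the viewpoints instead of synthesizing one answer.
    lines = ["Sources disagree; viewpoints found:"]
    for stance, count in stances.most_common():
        lines.append(f"- {stance} ({count} source{'s' if count > 1 else ''})")
    return "\n".join(lines)

# Example: conflicting advice never reaches the summarizer.
snippets = [
    Snippet("geology.example.edu", "rocks are not food", "Humans cannot digest rocks."),
    Snippet("forum.example.com", "eat one rock a day", "A geologist recommends a daily rock."),
    Snippet("health.example.org", "rocks are not food", "Ingesting rocks is dangerous."),
]
print(answer_or_show_viewpoints(snippets))

With two of three sources agreeing, the 0.8 threshold is not met, so the sketch lists the viewpoints rather than generating one confident answer, roughly the behavior Socher describes.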
Google’s new generative AI search feature needed adjustments after it suggested that people eat rocks and put glue on pizza. The episode highlights the risks of Google’s aggressive drive to commercialize generative AI, and also the treacherous and fundamental limitations of that technology.
Google’s AI Overviews feature draws on Gemini, a large language model like the one behind OpenAI’s ChatGPT, to generate written answers to some search queries by summarizing information found online. The current AI boom is built around LLMs’ impressive fluency with text, but the software can also use that facility to put a convincing gloss on untruths or errors. Using the technology to summarize online information promises to make search results easier to digest, but it is hazardous when online sources are contradictory or when people may use the information to make important decisions.
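To make that pattern concrete, here is a minimal sketch of a summarize-the-web pipeline in Python. The search_web and llm_complete functions are stand-ins for a real search index and a real model API, not Google’s or Gemini’s actual interfaces; the point is that the summarizer fluently condenses whatever snippets it is handed, including a joke from a forum.

# A minimal sketch of the retrieve-then-summarize pattern described above.
# search_web and llm_complete are hypothetical stand-ins: any capable model
# will fluently summarize the snippets it receives, which is exactly why a
# satirical or troll-y source upstream yields a confident, well-written,
# wrong answer downstream.

def search_web(query: str) -> list[dict]:
    # Stand-in for a real search index; returns snippets with their source.
    return [
        {"source": "example-forum.com", "text": "Use glue so cheese sticks to pizza."},
        {"source": "example-news.com", "text": "Cheese slides off when sauce is too wet."},
    ]

def llm_complete(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g., an API client's completion method).
    return f"[model-generated summary of {prompt.count('Source')} snippets]"

def ai_overview(query: str) -> str:
    snippets = search_web(query)
    context = "\n".join(f"Source {i+1} ({s['source']}): {s['text']}"
                        for i, s in enumerate(snippets))
    prompt = (
        f"Summarize an answer to the question below using only these snippets.\n"
        f"{context}\nQuestion: {query}\nAnswer:"
    )
    # The model cannot tell a joke from advice: garbage in, fluent garbage out.
    return llm_complete(prompt)

print(ai_overview("how do I get cheese to stick to pizza"))

Nothing in this pipeline checks whether a snippet is sarcastic, satirical, or simply wrong, which is why the quality filtering has to happen before, or instead of, the final summarization step.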