
Google moves swiftly to fix its AI-generated search summaries after a wave of outlandish answers gains viral attention.
Google announced on Friday that it has implemented over a dozen technical enhancements to its artificial intelligence systems following concerns about inaccuracies in its revamped search engine.
The company rolled out updates to its search engine in mid-May, introducing AI-generated summaries alongside search results. However, soon after its release, users began sharing screenshots of some particularly absurd answers.
While Google has defended its AI summaries as generally accurate and extensively tested, Liz Reid, head of Google’s search division, admitted in a recent blog post that “some peculiar, incorrect, or unhelpful AI summaries did surface.”
Although some examples were merely amusing, others presented potentially dangerous misinformation.
For instance, when the Associated Press asked which wild mushrooms are edible, Google’s response was technically correct but omitted crucial safety information, raising concerns among experts such as Mary Catherine Aime, a professor of mycology and botany at Purdue University.
She said the details about puffball mushrooms were “fairly accurate,” but noted that Google’s description emphasized identifying puffballs by their solid white flesh, a characteristic shared by several harmful look-alikes.
The updates also aim to limit the use of user-generated content, such as Reddit posts, that could offer misleading guidance. In one widely shared example, Google’s AI summary drew on a satirical Reddit comment to suggest using glue to make cheese stick to pizza.
Google’s summaries are designed to deliver authoritative answers quickly, so users don’t have to click through multiple website links.
Nevertheless, AI experts have raised concerns about the potential for bias and misinformation in AI-generated responses, particularly in critical situations. Large language models work by predicting which words best fit a given prompt based on the data they were trained on, which makes them prone to generating inaccurate information, a phenomenon commonly referred to as hallucination.
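To illustrate the idea at a very small scale, the sketch below is a toy word-prediction loop, not Google’s system or any real large language model; the probability table and example words are invented for illustration. It shows how text can be generated one plausible word at a time with no built-in notion of truth, which is why fluent but wrong answers can emerge.

```python
import random

# Toy "language model": for each word, a made-up distribution over possible
# next words. Real LLMs use neural networks over subword tokens, but the core
# generation loop is the same: pick the next token from a learned probability
# distribution, append it, and repeat.
BIGRAMS = {
    "puffball": {"mushrooms": 0.7, "soup": 0.3},
    "mushrooms": {"are": 0.6, "grow": 0.4},
    "are": {"edible": 0.5, "poisonous": 0.3, "white": 0.2},
    "edible": {".": 1.0},
    "poisonous": {".": 1.0},
    "white": {".": 1.0},
}

def generate(prompt_word: str, max_tokens: int = 5) -> str:
    """Sample a continuation one word at a time from the toy table."""
    words = [prompt_word]
    for _ in range(max_tokens):
        dist = BIGRAMS.get(words[-1])
        if dist is None:
            break
        # Sample proportionally to the stored probabilities. The model has no
        # concept of correctness: "are edible" and "are poisonous" are both
        # just statistically plausible continuations.
        next_word = random.choices(list(dist), weights=dist.values())[0]
        if next_word == ".":
            break
        words.append(next_word)
    return " ".join(words)

if __name__ == "__main__":
    print(generate("puffball"))
```

Running the sketch several times yields different, equally confident-sounding sentences, a simplified view of why grounding AI summaries in reliable sources matters.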