Google's AI suggests adding glue to pizza again, sparks controversy
Google's AI search results have sparked controversy by once again suggesting that glue should be added to pizza. The unusual recommendation was first brought to light by internet personality Katie Notopoulos, who made and ate a "glue pizza" based on the search results. The issue was further highlighted by Colin McMillen on Bluesky, who discovered that Google's response to the query "how much glue to add to pizza" was an eighth of a cup.
Google's AI training on incorrect information
The Verge confirmed that the search results did appear as described, indicating that Google's AI is being trained on incorrect information. This raises concerns about the reliability of the search engine's results and the potential impact on users. The situation is reminiscent of "Google bombing," a practice in which certain search terms were manipulated to produce specific results, an issue Google eventually addressed in the late 2000s.
Other AI models disagree with Google's recommendation
In contrast to Google's suggestion, other AI models like Perplexity.AI and ChatGPT strongly advise against using glue as a pizza ingredient. Perplexity.AI warns that consuming glue may be toxic and harmful to health. This discrepancy highlights how results can vary across AI systems and raises questions about the accuracy of Google's recommendations.
Google's AI struggles with product-related queries
In addition to the glue controversy, Google's AI has been found to struggle with answering questions about its own products. The Verge editor Richard Lawler's query about how to enable screenshots in Chrome's Incognito mode received two incorrect responses from Google's AI. This further underscores the limitations of the search engine's AI technology.