Google's AI-powered search errors lead to manual cleanup drive
The tech giant is facing challenges with its AI-driven responses

May 25, 2024
02:09 pm

What's the story

Google's AI Overview has been generating unusual responses to user queries, and the company has had to manually disable it for certain searches due to bizarre suggestions. Social media is abuzz with AI Overview's responses, such as recommending glue as a pizza topping and implying that rocks are edible. The feature, previously known as "Search Generative Experience," launched in beta in May 2023 and has served over a billion queries since then, but it continues to produce odd results.

Bizarre answers

AI Overview also produced unusual claims

Google's AI Overview has also been generating strange claims, including that dogs have played professional sports and owned hotels. In one instance, the system cited a YouTube video as evidence when asked if a dog had ever played in the NHL. When asked whether a dog had ever owned a hotel, the system responded affirmatively, citing as proof two instances of hotel owners owning dogs and a 30-foot-tall statue of a beagle.

Company response

Google responds to criticisms over AI Overview's quality

Despite the criticism, Google maintains that AI Overview generally provides "high quality information" to users, spokesperson Meghann Farnsworth told The Verge. Per Farnsworth, many of the peculiar examples were the result of uncommon queries or were "doctored." She confirmed that the company is "taking swift action" to remove AI Overviews on certain queries "where appropriate under our content policies, and using these examples to develop broader improvements to our systems."

AI limitations

AI expert discusses challenges in achieving accuracy

AI expert Gary Marcus highlights the difficulty of getting AI technology to 100% accuracy. While it's relatively straightforward for these systems to approximate a large amount of human data and reach 80% accuracy, he argues, the final 20% requires reasoning akin to human fact-checking. Marcus stated, "You actually need to do some reasoning to decide: is this thing plausible? Is this source legitimate? You have to do things like a human fact checker might do, that actually might require artificial general intelligence."