Google’s recent Unhelpful Content Update apparently doesn’t apply to its new AI search overviews, with bizarrely inaccurate and dangerous answers going viral.
Google’s launch of AI-generated overviews hasn’t gone to plan, drawing criticism from mainstream news outlets and spawning endless social media examples of misleading and dangerous responses to users’ queries.
Eat rocks, add glue to your pizza, run with scissors, smoke while pregnant, and take a relaxing bath with a toaster to relieve stress (permanently): these are just some of Google’s new AI Overviews’ bizarre recommendations!
Right now, Google’s looking a little stupid.
Google mocked on social media
Users on social media gleefully shared examples of Google’s new feature giving potentially unsafe recommendations. Some saw the funny side, saying, “I thought Overviews would be disastrous, but I never imagined they would be this funny.” Highlights included:
- Add (non-toxic) glue to your pizza sauce.
- Eat rocks to aid your digestive health.
- Unwind in a relaxing bath with a toaster.
Mainstream media scrutiny
News outlets, including the New York Times, CNBC, and the BBC, quickly reported on Google’s ridiculous and, at times, dangerous answers.
When asked by the BBC, a Google spokesperson said these were “isolated examples,” insisting AI Overviews generally worked well.
Google manually removes AI errors
In a recent interview with The Verge, Google said it’s manually removing the nonsensical AI-generated responses.
A Google spokesperson confirmed:
- The company is taking “swift action” to remove problematic responses and using the examples to refine its AI Overviews feature.
The Verge reported:
- “Google is racing to manually disable AI Overviews for specific searches as various memes get posted, which is why users are seeing so many of them disappear shortly after being posted to social networks.”
AI expert Gary Marcus, a professor emeritus of neural science at New York University, told The Verge:
- “A lot of AI companies are selling dreams that this tech will go from 80 percent correct to 100 percent. Achieving the initial 80 percent is relatively straightforward since it involves approximating a large amount of human data, but the final 20 percent is extremely challenging. In fact, the last 20 percent might be the hardest thing of all.”
What Google said in defense
In response to questions from Business Insider about the AI’s terrible answers, Google said:
- “The examples we’ve seen are generally very uncommon queries and aren’t representative of most people’s experiences.”
- “The vast majority of AI overviews provide high-quality information, with links to dig deeper on the web.”
Google’s interpretation of high-quality information apparently includes sourcing answers from satirical sites like The Onion and from Reddit comments.
Google’s Meghann Farnsworth sent this email to The Verge:
- “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.”
That is a fair reply, but the seriousness of AI Overviews’ inaccuracies can only undermine the trust of the two billion-plus people who use Google’s search engine.