Google AI Overviews are supposed to save users time, but recent examples show the tool creating fake meanings and misleading links.
0:00
When you Google something, you expect facts, not fiction
0:04
But Google's AI feature might tell you to put glue on your pizza or explain the meaning of "slap a goose"
0:11
This is all thanks to AI Overviews, a tool that's supposed to save you time, but might just leave you questioning reality
0:18
You may have noticed the new feature pop up above certain Google searches last May
0:22
According to Google, the feature will appear in Google search results "when our systems determine that generative responses can be especially helpful
0:31
for example, when you want to quickly understand information from a range of sources, including information from across the web and Google's knowledge graph"
0:40
But as social media users are pointing out, it seems to be making up definitions for random phrases or terms
0:46
Try a simple search of a random string of words and add the word "meaning" at the end
0:51
We tried "milk the thunder meaning," which, according to AI Overviews, is a metaphor that suggests using or exploiting a situation to one's advantage
1:00
But when you click the hyperlink cited as the source, the linked article mentions nothing about milking thunder; instead, it covers two separate phrases
1:09
The article, "Weird English Phrases and Their Meaning" by EF English Live, lists "steal someone's thunder" and "crying over spilt milk"
1:18
Or take an example Wired highlighted from social media: "you can't lick a badger twice meaning," which AI Overviews says is an idiom meaning you can't trick or deceive someone a second time after they've been tricked once
1:32
If this all sounds familiar, it's because Google faced a similar issue during the Super Bowl. Travel blogger Nate Hake pointed out errors in Google's 50 short ads highlighting small businesses, one from every state. In Wisconsin's
1:45
ad, fittingly set in America's Dairyland, Google's Gemini chatbot helped a cheesemonger
1:50
write a product description, claiming Gouda accounts for 50-60% of the world's cheese
1:56
consumption. Hake fact-checked the claim on X, saying Gemini provides no source and that the stat is just
2:01
unequivocally false. Cheddar and mozzarella would like a word. According to Google, an AI
2:07
hallucination is an incorrect or misleading result that an AI model can generate. Since models are
2:13
trained on data, they learn to make predictions based on patterns, but the accuracy depends on
2:18
the quality of that data. A Google exec replied to Hake, saying, "Not a hallucination. Gemini is
2:24
grounded in the web, and users can always check the results and references." In this case, multiple
2:29
sites across the web include the 50 to 60 percent stat, but Google did quietly re-edit the ad.
2:36
Many Americans remain skeptical of AI, and these latest glitches could fuel their doubts. Edelman's
2:42
2025 Trust Barometer study shows while 72 percent of people in China trust AI, only 32 percent of
2:50
Americans do. Edelman says some see AI as a force for progress, while others worry about its
2:56
unintended consequences. And according to Tech Times, Google spokesperson Megan Farnsworth says their system attempts
3:04
to offer context whenever it can, but nonsensical prompts are still likely to show up in AI Overviews.
3:11
With Straight Arrow News, I'm Kennedy Felton. Download our app or visit san.com for more.


