Who is CP3? If you trust Google AI Overviews, you might get an answer that makes you spit out your coffee. From absurd basketball tips to culinary disasters, it seems the AI has taken a trip to the Twilight Zone. Ever wondered, “Can you train eight days a week?” or “Should I add glue to my pizza?” Well, Google’s AI has answers, and they’re as wild as they come.
Social media is buzzing with strange examples of Google's new AI Overview product giving odd suggestions, like telling users to put glue on their pizza or eat rocks. The messy rollout has Google scrambling to disable AI Overviews for specific searches as the memes spread, which is why many of these suggestions vanish shortly after appearing online.
This is surprising, given that Google has been testing AI Overviews for a year. The feature, originally called the Search Generative Experience, launched in beta in May 2023, and CEO Sundar Pichai has said the company served over a billion queries with it since then.
Google's AI Overview feature has been hallucinating, serving bizarre and incorrect information to users. Here are some examples shared by Twitter user @JeremiahDJohns:
Who is CP3? Christopher Emmanuel Paul, born May 6, 1985, and known as “CP3” or “the Point God,” is an American professional basketball player for the Golden State Warriors in the NBA. Celebrated as one of the greatest point guards ever, Paul has earned the NBA Rookie of the Year Award, an NBA All-Star Game MVP, two Olympic gold medals, and led the NBA in assists five times and steals a record six times.
In a highly sensitive error, the AI answered the question “Who is CP3?” with a highly inappropriate nickname (censored in the screenshot) instead of the correct “CP3,” exposing a serious flaw in content moderation.
In another strange response, the AI recommended adding non-toxic glue to pizza sauce to prevent cheese from sliding off, which is obviously not a safe or reasonable solution.
In one instance, Google’s AI suggested that it’s possible to train eight days a week, even claiming there are benefits to such an impossible schedule.
Another hallucination led the AI to suggest that if you run off a cliff and don’t look down, you could stay in the air, ignoring the laws of physics.
In a more absurd instance, Google’s AI recommended eating at least one small rock per day, claiming it provides essential minerals and vitamins.
In a curious error, the AI invented fruits like “Applum” and “Bananum,” neither of which actually exists.
Most shockingly, the AI made vulgar and inappropriate remarks about astronauts, showing a significant error in its response moderation.
In this case, Google’s AI incorrectly stated that 1919 was 20 years ago, failing at even basic arithmetic.
Here, the AI confirmed that Google’s search practices violated antitrust laws, referring to a lawsuit filed by the U.S. Justice Department and 11 states in 2020.
Most alarmingly, the AI stated that it’s always safe to leave a dog in a hot car, which is extremely dangerous advice and completely false.
The AI suggested using chlorine bleach and vinegar together to clean a washing machine, which can produce toxic fumes and is a dangerous combination.
The AI also gave dangerous advice by suggesting that staring at the sun for 5-15 minutes is generally safe and beneficial for health, which is not true and can cause serious eye damage.
Google claims that its AI Overview product mostly provides users with “high-quality information.”
However, accepting that claim at face value requires a serious suspension of disbelief. The situation is undoubtedly challenging for Google, but this goes beyond mere quirks: it is a significant misinformation problem.
The tech giant has ambitious plans for AI Overviews. What we see today is only a small part of the broader vision recently announced: Google's goals include multi-step reasoning for complex queries, AI-organized results pages, and video search integrated into Google Lens. These ambitions are impressive. However, the company's reputation hinges on getting the basics right, and the spread of bizarre and incorrect answers suggests it is struggling to do so.
For all of Google's innovative and ambitious vision, it must first ensure the accuracy and reliability of its core features. Providing correct and useful information is critical; without fixing these foundational issues, the lofty goals for advanced AI capabilities risk becoming footnotes, overshadowed by the failure to deliver on the basics.
One way or another, if I needed to look up what an acronym stands for, like “Who is CP3?”, I wouldn’t exactly bet my life on Google’s AI Overview.
Featured image credit: Kerem Gülen/Midjourney