What you need to know
- AI Overview is an experimental Google Search feature that shows users a brief AI-generated summary of notable results when a query is entered.
- Since its introduction, however, AI Overview has come under intense scrutiny for showing users absurdly inaccurate and/or downright dangerous information and suggestions.
- In the feature’s latest snafu, it erroneously listed a game developer’s personal phone number as the main contact for a studio he hasn’t worked at in over eight years.
- As a result of the blunder, the developer received an irate phone call from a parent complaining about content in a game he worked on that their child plays, because the parent believed they were calling the studio’s corporate office.
Google’s experimental AI Overview search feature has come under fire for giving users bizarre results and suggestions that range from shockingly inaccurate to downright harmful, and its latest snafu has led to the erroneous publication of a game developer’s personal phone number.
Recently, Skybound Games senior environment artist Rogelio Delgado posted a thread on X (Twitter) recounting an irate phone call he received from a child’s parent. The father contacted Delgado to complain about content in Conan Exiles, the M-rated 2017 open-world survival game his son plays, made by Dune: Awakening studio Funcom, and said he’d found Delgado’s number online through his past employment at Funcom, which ended over eight years ago.
After a “vaguely combative and awkward conversation,” the caller eventually apologized, explaining that his assistant had found the number for him and that he wasn’t aware Delgado no longer worked at Funcom. Shortly afterwards, he texted Delgado a screenshot showing Google’s AI Overview listing his personal phone number as the contact for Funcom’s main corporate office. And as you’d expect, Delgado was (rightfully) livid about the mishap.
“My ONLY guess on how this happened is that I have my phone number on my resume that also has me listed as a past employee of Funcom,” he wrote in his social media thread. “…but how google AI made the leap that THAT was the number for the CORPERATE OFFICE LOCATED IN NORWAY… AND TELLING PEOPLE WITH ITS WHOLE CHEST. WTF. I am aghast. @GoogleAI GET YOUR SH*T TOGETHER.”
Commenters figured out that the AI actually pulled the number from an erroneous listing on the business prospecting platform LeadIQ, but even so, it should have drawn its information from Funcom’s public contact page. Frankly, this blunder was catastrophic, and it’s not a stretch to worry that seriously dangerous situations could arise if Google’s AI made mistakes like this with physical home addresses or office locations.
As I said before, this is also far from the first controversy Google has brought about with its AI ventures as it works to challenge Microsoft Copilot. Earlier this year, AI Overview was fiercely criticized for pushing absurd search results that included telling people to add glue to their pizza sauce, lauding rocks as a healthy snack, and recommending suicide to depressed individuals, among others. And while some of the queries that led to these answers were likely crafted to produce social media engagement bait, several were traced to old Reddit comments. No matter what, there should be measures in place to stop results like these from being generated.
All of these debacles illustrate the importance of curated, high-quality sources for AI language models and of filters that keep dangerous or grossly inaccurate information from reaching users, as well as Google’s failure to adequately build both into its search assistant. If AI is here to stay, I hope issues like these don’t keep happening for long. I can’t say I’m confident they won’t, though positive experiences with other models like OpenAI’s ChatGPT and Microsoft Copilot, despite my best efforts to trick them, do make me hopeful.
Despite the gravity of the AI error here, it did at least lead to a pretty funny comment from Funcom chief creative officer and Dune: Awakening creative director Joel Bylos: “On the other hand, thanks for taking the heat for me man :).” There’s always a silver lining.