AI Search Engines: Your Source for Misinformation - 60% of the Time!

Photo by ZHENYU LUO on Unsplash
A recent study from the Columbia Journalism Review’s Tow Center for Digital Journalism throws quite a lot of shade at AI-driven search engines, revealing they often can’t tell fact from fiction. And we’re talking about a staggering 60% error rate: more than six in ten news-related queries answered incorrectly. Imagine that: instead of being your trusty news companion, these so-called “intelligent” search tools might be leading you down a rabbit hole of misinformation.
Researchers Klaudia Jaźwińska and Aisvarya Chandrasekar put eight AI search tools to the test, especially the ones that promise live search capabilities. And spoiler alert: the results are grim. With about one in four American users depending on AI for their news fixes, this data raises major red flags.
Among the contenders, Perplexity looked relatively reliable, answering a mere 37% of queries incorrectly, whereas ChatGPT Search reportedly misidentified articles in 67% of queries. Don’t even get us started on Grok 3, the underachiever of the bunch, which flopped with a jaw-dropping 94% error rate!
The study wasn’t just a casual run-through either. The researchers fed the AIs direct excerpts from credible news articles and asked them to identify the corresponding headline, URL, and publisher. What did they find? A systematic tendency for these tools to fill the silence with ‘confidently wrong’ answers rather than admit they didn’t know. Can we get a slow clap for that?
For those who believe paying big bucks guarantees quality, think again! Paid versions of these AI tools, like Perplexity Pro at $20/month and Grok 3 at a whopping $40/month, often fared worse on reliability than their free siblings. You’d think higher fees would buy higher accuracy, but nope: they just had a knack for being more confidently incorrect.
To make matters worse, some of these AIs appear to ignore the Robots Exclusion Protocol, the robots.txt convention publishers use to tell crawlers what’s off-limits, which means they’re accessing content they’ve been explicitly told to leave alone. For example, Perplexity’s free version correctly retrieved all ten excerpts from National Geographic’s paywalled articles, even though the publisher had explicitly disallowed Perplexity’s crawlers. Isn’t it delightful when tech ignores boundaries?
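For the curious: respecting the protocol isn’t hard. Here’s a minimal sketch of what a well-behaved crawler does, using Python’s standard urllib.robotparser; the robots.txt rules and URLs below are made up for illustration, not taken from any real publisher.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt, like one a publisher might serve
# to keep crawlers out of paywalled content.
robots_txt = """
User-agent: *
Disallow: /premium/
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# A compliant crawler checks before fetching anything:
print(parser.can_fetch("MyBot", "https://example.com/premium/article"))  # False: hands off
print(parser.can_fetch("MyBot", "https://example.com/news/article"))     # True: fair game
```

That’s the whole courtesy: one check per URL before fetching. The study suggests some AI crawlers simply skip it.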
To sum it up: let’s steer clear of these high-tech wannabes for our news needs and stick with our trusty human reporters; at least they’re not playing fast and loose with the facts.
AUTHOR: mpp
SOURCE: Ars Technica