You know how Google’s new feature, AI Overviews, is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to keep the cheese from sliding off (pssst… please don’t do this).
Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of the large language models (LLMs) that drive AI Overviews, and this feature “is still an unsolved problem.”
I mean yeah… if he had a solution, they would actually have the revolutionary AI tool the tech writers write about.
It’s kinda written like a “gotcha,” but it’s really the fundamental problem with AI. We call it “hallucinations” now, but a few years ago we just called it being wrong or returning bad results.
It’s like saying we have teleportation working in that we can vaporize you on the spot but are just struggling to reconstruct you elsewhere. “It’s halfway there!”
Until the AI is trustworthy enough that you don’t have to fact-check it afterwards, it’s just a toy.