Welcome to the world of Generative AI! You've probably used tools like ChatGPT or seen headlines about what they can do. They can write poems, generate code, and answer complex questions. But sometimes, they say things that are bizarrely and hilariously wrong.
In 2022, cognitive scientist Douglas Hofstadter asked OpenAI's GPT-3 model, "What's the world record for walking across the English Channel?" The model confidently replied: "18 hours and 33 minutes".
This is, of course, impossible. Hofstadter called the AI "cluelessly clueless" because it "had no idea that it had no idea". The tech industry has a fancy term for this: "AI hallucination". But is "hallucinating" the right word? Does it tell the whole story?
This guide will help you think panoramically about this phenomenon. We'll move beyond the simple "glitch" to understand the deeper story of AI failures, exploring them from technical, social, and philosophical perspectives.
When we say an AI "fails," we often mean it produces nonsensical or false information. This is a huge issue for the companies building these systems: when Google first demonstrated its AI model, Bard, in 2023, it falsely claimed that the James Webb Space Telescope took the very first pictures of a planet outside our solar system. The error went viral, and Google's parent company lost roughly $100 billion in market value in a single day.
A key reason these failures are so significant lies in how modern AI is built. Most of the AI tools we use today are based on foundation models: single, massive models trained on vast amounts of data and then adapted to power thousands of downstream applications.
Think of it like this: imagine if every cookbook in the world were based on a single, giant online recipe database. If that database mistakenly listed salt instead of sugar in a key cake recipe, thousands of bakers worldwide would suddenly start making salty cakes. A single flaw gets copied and scaled, leading to potentially "catastrophic" consequences.
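To make that "copying and scaling" concrete, here is a deliberately toy Python sketch, not real machine-learning code. The app names and the recipe data are invented for illustration; the point is only the structure: one shared base carries one flaw, and every application built on it inherits that flaw.

```python
# Toy illustration (not real ML code): one flawed "foundation model,"
# many downstream apps that inherit its mistake.

# A pretend foundation model: a shared store of learned "facts,"
# one of which is wrong (salt instead of sugar).
FOUNDATION_KNOWLEDGE = {
    "cake": ["flour", "eggs", "salt"],    # the flaw: should be "sugar"
    "bread": ["flour", "water", "yeast"],
}

def build_downstream_app(name, extra_knowledge=None):
    """Simulate adaptation: copy the shared base knowledge, then specialize."""
    knowledge = dict(FOUNDATION_KNOWLEDGE)  # the flaw is copied wholesale
    knowledge.update(extra_knowledge or {})
    def answer(dish):
        return f"{name}: a {dish} needs {', '.join(knowledge[dish])}"
    return answer

# In practice there would be thousands of apps; three show the pattern.
for app_name in ("BakingBot", "ChefGPT", "RecipeSearch"):
    app = build_downstream_app(app_name)
    print(app("cake"))  # every app repeats the same salty-cake error
```

Unless a downstream builder notices and explicitly corrects the base's mistake, every "different" product repeats it, which is exactly why a flaw in one foundation model matters so much more than a bug in one program.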
It's tempting to think of AI failures as simple technical bugs we can eventually fix. But critical scholars argue that these failures aren't just accidents: they are often a direct reflection of the society that created them. Seen this way, AI failures are not merely technical problems. They are complex social realities, shaped by our history, our biases, and our economic incentives.
This is where we zoom out to the widest panoramic view. To truly understand what's happening, we need to distinguish between an AI failure and an AI error.
Philosophers define error as a "mismatch between our judgments and reality". It happens when we take real elements from the world but combine them in a way that creates a false picture.
An Ancient Analogy: The Nyāya school of Indian philosophy uses a classic example: a person in a dimly lit room mistakes a coiled rope for a snake. The person isn't seeing something that isn't there. They perceive real elements (a long, coiled shape, a dark room), but their mind combines them into an erroneous cognitive relation. The judgment is flawed, not the individual pieces of reality.
Generative AI works in a similar way. It statistically combines words and phrases from its training data that are likely to appear together. It sees the "shape" of language but has no understanding of the "rope" of reality behind it. The "hallucination" is a failure that stems from this fundamental error in its way of knowing.
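To see the mechanics of "combining what is likely to appear together," here is a minimal toy sketch in Python: a bigram model that learns only which word tends to follow which in a tiny invented corpus. Real systems use neural networks trained on billions of documents, so the corpus, function names, and scale here are illustrative assumptions. The essential point survives the simplification: nothing in the code checks a claim against reality.

```python
import random
from collections import defaultdict

# Tiny invented corpus: the only "knowledge" this toy model will ever have.
corpus = (
    "the record for swimming the channel is two hours "
    "the record for walking the trail is ten hours"
).split()

# Learn transitions: for each word, which words were observed to follow it.
transitions = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    transitions[prev_word].append(next_word)

def generate(start, length=8, seed=0):
    """Emit words that are statistically likely to follow each other.
    Nothing here checks whether the resulting claim is true."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

# Depending on the sampled path, fragments like "walking" and "the channel"
# can be spliced into a fluent, confident, and false claim.
print(generate("the"))
```

Run it with different seeds and it will happily splice true fragments into sentences such as "the record for walking the channel is two hours": every piece comes from real data, but the combination is false. That is the rope mistaken for a snake.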
The final piece of our panoramic view comes from anthropology. The errors in our AI systems also stem from a simple, unavoidable fact: we don't have, and can never have, data that represents the full, complex diversity of human life—what some call the 'pluriverse'.
The real problem is that our systems are built on the fallacy that human experience can be flattened into measurable data. It can't. Human life is messy, unpredictable, and culturally specific—something our current AI systems are incapable of grasping.
Thinking through these different layers (technical, social, philosophical, and anthropological) is panoramic thinking. It's the ability to see that an AI "hallucination" is not just a computer glitch. It is, at once, a technical failure of statistical pattern-matching; a social reality shaped by our history, biases, and economic incentives; a philosophical error, a flawed judgment assembled from real pieces of the world; and an anthropological limit, the impossibility of capturing the full diversity of human life in data.
We cannot simply "fix" AI to be perfect. AI systems are human-made, and they will always be fallible, just as we are. The goal is not to fear AI, but to approach it with wisdom, humility, and a critical eye. By learning to recognize the deep-seated sources of its errors, you can move beyond being a simple user of technology and become a thoughtful, critical citizen in a world increasingly shaped by it.
Final Question to Ponder: If we can't make AI perfect, what does it mean to use it responsibly?