Beyond the Glitch: A Panoramic Guide to AI Errors

Welcome to the world of Generative AI! You've probably used tools like ChatGPT or seen headlines about what they can do. They can write poems, generate code, and answer complex questions. But sometimes, they say things that are bizarrely and hilariously wrong.

In 2022, cognitive scientist Douglas Hofstadter asked an AI, "What's the world record for walking across the English Channel?" The AI confidently replied: "18 hours and 33 minutes."

This is, of course, impossible. Hofstadter called the AI "cluelessly clueless" because it "had no idea that it had no idea". The tech industry has a fancy term for this: "AI hallucination". But is "hallucinating" the right word? Does it tell the whole story?

This guide will help you think panoramically about this phenomenon. We'll move beyond the simple "glitch" to understand the deeper story of AI failures, exploring them from technical, social, and philosophical perspectives.

Part 1: The Technical View – Why Do AIs 'Hallucinate'?

When we say an AI "fails," we often mean it produces nonsensical or false information. This is a huge issue for the companies building them; when Google first demonstrated its AI model, Bard, it made a factual error about the James Webb Space Telescope (claiming the telescope had taken the very first picture of a planet outside our solar system) that went viral.

A key reason these failures are so significant lies in how modern AI is built. Most of the AI tools we use today are based on foundation models.

  • Foundation Models: Think of these as a massive, general-purpose brain (like BERT or the models behind ChatGPT) that has been trained on a huge portion of the internet. Companies then take this one "brain" and adapt it for many different tasks.
  • Homogenization: This process, called homogenization, means that many different AI applications share the same underlying model. While this is efficient, it's also risky: if the original foundation model has flaws, biases, or a tendency to make things up, then every application built on it will inherit those same flaws (the sketch after the next paragraph makes this concrete).

Think of it like this: Imagine if every cookbook in the world was based on a single, giant online recipe database. If that database mistakenly listed salt instead of sugar in a key cake recipe, suddenly thousands of bakers worldwide would start making salty cakes. A single flaw gets copied and scaled, leading to potentially "catastrophic" consequences.
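To make the idea concrete, here is a minimal, purely illustrative Python sketch (every class and name below is hypothetical, not from any real library). One toy "foundation model" carries a single flawed fact, and every downstream application built on it repeats that flaw:

```python
class ToyFoundationModel:
    """Stands in for one large pretrained model shared across many products."""

    def __init__(self):
        # A single flawed "fact" absorbed during pretraining (salt instead of sugar).
        self.knowledge = {"cake recipe": "add a cup of salt"}

    def answer(self, prompt: str) -> str:
        return self.knowledge.get(prompt, "I'm not sure.")


class ChatAssistant:
    """A chatbot product adapted from the shared model."""

    def __init__(self, base_model: ToyFoundationModel):
        self.base_model = base_model

    def reply(self, question: str) -> str:
        return self.base_model.answer(question)


class RecipeApp:
    """A cooking app adapted from the *same* shared model."""

    def __init__(self, base_model: ToyFoundationModel):
        self.base_model = base_model

    def suggest(self, dish: str) -> str:
        return self.base_model.answer(f"{dish} recipe")


# Every downstream application is built on one and the same "brain".
shared_model = ToyFoundationModel()

# The single upstream flaw surfaces in every product at once.
print(ChatAssistant(shared_model).reply("cake recipe"))  # "add a cup of salt"
print(RecipeApp(shared_model).suggest("cake"))           # "add a cup of salt"
```

The flip side is also true: fixing the flaw in the one shared model would fix it everywhere at once, which is exactly why both the risks and the efficiencies of homogenization scale so dramatically.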

Food for Thought:

  • If one AI model makes a mistake, how many different websites, apps, or services could be affected at the same time?
  • What are the risks if systems controlling critical infrastructure, like energy grids or financial markets, all rely on the same foundation model?

Part 2: The Social View – It’s Not Just a Bug, It’s a Feature of Society

It’s tempting to think of AI failures as simple technical bugs we can eventually fix. But critical scholars argue these failures aren't just accidents; they are often a direct reflection of the society that created them.

  • Data is Destiny: AI systems are trained on data from our world. If our world contains racism, sexism, and other forms of bias, the AI will learn and reproduce those biases. For example, an Amazon cataloging system once began suppressing LGBT+ books because it had learned from biased data. The failure wasn't a random glitch; it was the system faithfully learning flawed social values.
  • The Business of Failure: AI failures are also shaped by economic pressures. The tech industry often operates with a "good enough" culture, launching products they know are imperfect to avoid falling behind in the "AI race". The CEO of OpenAI even admitted that ChatGPT "will confidently state things as if they were facts that are entirely made up".

These failures, therefore, are not just technical problems. They are complex social realities shaped by our history, our biases, and our economic incentives.

Food for Thought:

  • Why is calling an AI's mistake a "hallucination" potentially misleading? Does it make the AI seem more human and less like a product of specific design and business decisions?
  • If an AI denies a loan to someone based on biased data, who is responsible? The programmer? The company that built it? The society that generated the biased data?

Part 3: The Philosophical View – The Crucial Difference Between 'Failure' and 'Error'

This is where we zoom out to the widest panoramic view. To truly understand what's happening, we need to distinguish between an AI failure and an AI error.

  • A failure is the event you see: the wrong answer, the biased outcome, the nonsensical sentence. It's the symptom.
  • An error, in a philosophical sense, is the root cause. It's not about a lack of knowledge, but about a flawed process of knowledge production.

Philosophers define error as a "mismatch between our judgments and reality". It happens when we take real elements from the world but combine them in a way that creates a false picture.

An Ancient Analogy: The Nyāya school of Indian philosophy uses a classic example: a person in a dimly lit room mistakes a coiled rope for a snake. The person isn't seeing something that isn't there. They see real elements—a long, coiled shape, the dark room—but their mind makes an erroneous cognitive relation. The judgment is flawed, not the individual pieces of reality.

Generative AI works in a similar way. It statistically combines words and phrases from its training data that are likely to appear together. It sees the "shape" of language but has no understanding of the "rope" of reality behind it. The "hallucination" is a failure that stems from this fundamental error in its way of knowing.
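To see the "statistical shape without reality" idea in action, here is a deliberately tiny Python sketch of a next-word predictor (a toy bigram model, nothing like how production LLMs are actually built; the miniature corpus and all names are invented for illustration). It chains together words that often follow each other in its training text, with no mechanism at all for checking whether the result is true:

```python
# Toy next-word predictor, for illustration only. Real models are vastly more
# sophisticated, but the core move is similar: pick words that are statistically
# likely to follow what came before, with no check against reality.
import random
from collections import defaultdict

corpus = (
    "the world record for swimming across the english channel is held by a swimmer . "
    "the world record for running a marathon is just over two hours . "
    "walking across the english channel is impossible ."
).split()

# Count which word tends to follow which (a simple bigram table).
next_words = defaultdict(list)
for current_word, following_word in zip(corpus, corpus[1:]):
    next_words[current_word].append(following_word)

def generate(start: str, length: int = 12, seed: int = 0) -> str:
    """Chain together statistically plausible words; truth never enters into it."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

# Depending on the seed, this may splice "world record", "walking", and
# "the english channel" into one fluent but false sentence.
print(generate("the"))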

Food for Thought:

  • When an AI tells you the "world record for walking across the English Channel," what "elements of reality" is it wrongly combining? (Hint: think about what it knows about world records, walking, and the English Channel.)
  • Can a system that operates only on mathematical and statistical reasoning ever truly "understand" human concepts like love, justice, or humor?

Part 4: The Anthropological View – The Impossibility of a 'Complete' Dataset

The final piece of our panoramic view comes from anthropology. The errors in our AI systems also stem from a simple, unavoidable fact: we don't have, and can never have, data that represents the full, complex diversity of human life—what some call the 'pluriverse'.

  • The Language Gap: Of the more than 7,000 languages spoken in the world, only a tiny fraction are well-represented in AI training data. One study of GPT-3 found that 93% of its training data was in English. This means our most powerful AI models are built on a fundamentally narrow and biased slice of human expression.
  • Beyond Text: As the anthropologist Franz Boas argued, language isn't just words in a text; it's a lived, embodied practice that changes constantly. An AI trained on the internet can read the text, but it can't understand the culture.

The real problem is that our systems are built on the fallacy that human experience can be flattened into measurable data. It can't. Human life is messy, unpredictable, and culturally specific—something our current AI systems are incapable of grasping.

Conclusion: Your Panoramic Toolkit

Thinking through these different layers—technical, social, philosophical, and anthropological—is panoramic thinking. It's the ability to see that an AI "hallucination" is not just a computer glitch. It is:

  • A technical outcome of how foundation models work.
  • A social reflection of our own biases and economic motives.
  • A philosophical problem of flawed knowledge production.
  • An anthropological limitation of capturing human diversity in data.

We cannot simply "fix" AI to be perfect. AI systems are human-made, and they will always be fallible, just as we are. The goal is not to fear AI, but to approach it with wisdom, humility, and a critical eye. By learning to recognize the deep-seated sources of its errors, you can move beyond being a simple user of technology and become a thoughtful, critical citizen in a world increasingly shaped by it.

Final Question to Ponder: If we can't make AI perfect, what does it mean to use it responsibly?