Why This Paper Matters: The Panoramic View
Welcome to a different way of thinking about Artificial Intelligence. In STAT S-115, we aim for "panoramic thinking"—the ability to see the big picture where technology, philosophy, and society all connect. This paper by Gregory Crane is a perfect example of that thinking in action.
Crane, a professor of both Classical Studies and Computer Science, doesn't just ask what AI can do. He asks a much deeper set of questions: Where has AI come from? How do its promises compare to its reality? And most importantly, how should we measure its ultimate value to humanity?
This guide will walk you through his arguments, connecting them to the core ideas of this course. Prepare to journey beyond the code and see AI through the long lens of human history.
One of Crane's first moves is to cut through the buzz. He agrees with the idea that the popular image of AI—a thinking, conscious machine like you see in movies—is mostly hype. The real, world-changing work, he argues, is happening in two other areas:
Crane argues that these two concepts are fundamental to the humanities because they directly impact our ability to understand our own culture and each other.
Anecdote from the "AI Winter": Crane shares a personal story from the 1980s, a time of massive AI hype. Researchers in Cambridge, Massachusetts nicknamed the area "AI Alley". As a graduate student, Crane was told he could easily collaborate on a project—all he needed was a "Lisp machine," a specialized computer for AI research. The price? A mere $100,000 in 1984 dollars! This story isn't just a funny look at old technology; it's a powerful reminder that hype often runs far ahead of practical reality and that access to technology is a major factor in who gets to innovate.
🧠 Thought-Provoking Questions:
A core part of panoramic thinking is understanding that while technology feels new, the patterns of change are ancient. Crane, the Classics scholar, brings in a lesson from the 2,500-year-old writings of the Greek historian Herodotus: over time, powerful entities rise and inevitably fall.
This isn't just ancient history. Crane uses it to frame the recent past.
The Ghosts of AI Alley: Crane recounts the dramatic fall of two technology titans:
- Digital Equipment Corporation (DEC): In 1987, DEC was the second-biggest computer company in the world, employing 140,000 people. By 1998 it was gone, acquired by Compaq. Crane notes that he had to hunt to find even a single historical plaque marking its existence, comparing its faint memory to the ruined statue in Shelley's famous poem "Ozymandias".
- Sun Microsystems: A hugely innovative company whose business evaporated after the dot-com bubble burst; it was acquired by Oracle in 2010 and now exists mainly as a "corporate trophy".
The Google Ngram chart in the paper provides stark evidence of this cycle, showing how interest in the term "Artificial Intelligence" peaked in the late 1980s and then crashed as the field entered a so-called "AI winter".
The Panoramic Connection: This is the Technological Obsolescence Paradox we discuss in STAT S-115. The specific technologies Crane worked with—Lisp machines, DEC minicomputers, Sun workstations—are now "museum pieces". But the questions he was asking about them remain urgent and constant. The technology changes, but the framework for thinking about it endures.
🧠 Thought-Provoking Questions:
So, if technology is always changing and prone to hype, how should we evaluate it? Crane offers a simple but profound answer from the ancient Greek thinker Protagoras: "human beings are the measure by which we evaluate everything".
This means we must move from asking how a technology works to why it matters. We must distinguish between price and value. Crane invokes Oscar Wilde's definition of a cynic: someone who knows the price of everything and the value of nothing.
For a humanist, machine learning is only valuable if it makes us "fundamentally more intelligent and deepens our understanding". A system might be technically brilliant, fast, and efficient (its price/performance), but if it doesn't contribute to human well-being, empathy, or wisdom, it has no real value from this perspective.
The Panoramic Connection: This directly relates to the course's emphasis on ethics, philosophy, and social analysis. A purely technical view might evaluate an algorithm on its accuracy. A panoramic view demands we also evaluate it on its fairness, its impact on human relationships, and its alignment with societal values.
🧠 Thought-Provoking Questions:
So what is the endgame for a humanist using these powerful tools? It's not to build a machine that thinks for us. It's to build what Crane calls "increasingly intelligent reading environments".
His goal is to use technology to help audiences "push as deeply as we wish into the language and culture that they are viewing". Imagine watching a show on Netflix from another country. An intelligent environment wouldn't just give you subtitles; it might allow you to instantly explore a cultural reference, understand a historical allusion, or see the linguistic nuance of a particular phrase. This is Intelligence Augmentation in its purest form—using technology to deepen empathy and understanding across cultures.
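To make the idea of an "intelligent reading environment" a little more concrete, here is a minimal sketch in Python of one such feature: a subtitle line that a viewer can expand, on demand, into cultural and linguistic annotations. This is not Crane's actual system; the Gloss class, the sample glosses, and the annotate() function are hypothetical illustrations of the general idea.

```python
# A toy sketch of one "intelligent reading environment" feature:
# a subtitle the viewer can expand into cultural/linguistic notes on demand.
# All names and data here are illustrative inventions, not Crane's system.

from dataclasses import dataclass

@dataclass
class Gloss:
    phrase: str   # the phrase in the subtitle the note attaches to
    kind: str     # e.g. "cultural reference", "historical allusion", "linguistic nuance"
    note: str     # the explanation shown when the viewer asks for more depth

# Hand-written sample annotation layer for one (hypothetical) subtitle line.
GLOSSES = [
    Gloss("xenia", "cultural reference",
          "The ancient Greek ideal of guest-friendship: hosts and guests owe each other protection."),
    Gloss("the wine-dark sea", "linguistic nuance",
          "A formulaic Homeric epithet for the sea; its exact sense in the Greek is still debated."),
]

def annotate(subtitle: str, glosses=GLOSSES) -> str:
    """Return the subtitle plus any glosses whose phrase appears in it."""
    hits = [g for g in glosses if g.phrase.lower() in subtitle.lower()]
    lines = [subtitle]
    for g in hits:
        lines.append(f"  [{g.kind}] {g.phrase}: {g.note}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(annotate("He crossed the wine-dark sea and claimed the rights of xenia."))
```

A real environment would draw its glosses from curated scholarly resources or language models rather than a hand-written list, but the shape of the interaction is the point: read first, then pull in as much depth as you wish.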
The Panoramic Connection: This is a perfect example of the course theme, "how can we use GAI to solve problems?". Crane's vision is not about the tech itself, but about applying it to solve a fundamental human problem: understanding one another across the vast distances of time and culture.
🧠 Thought-Provoking Questions: