Class 8 Notes

Conversational AI & Human Interaction: The Ethics and Psychology of Human-AI Dialogue

Cognitive Science Perspective
Guest: Professor Veronica Barassi
Co-author: Daniel (Cambridge)

Class Overview

Focus: Dual perspective on conversational AI - psychological effects of human-AI dialogue and systemic issues with algorithmic bias

Key Question: How do conversations with Human-like Digital Agents (HDAs) affect our relationships with other humans?

Research suggests AI interaction may lead to instrumental treatment of others and cognitive changes

Solution: Epistemic humility and value-conscious decision making about AI integration

Key Insights from Class 8

AI Conversations Are Fundamentally Different

Human-like Digital Agents (HDAs) create new conversational patterns that lack reciprocity, embodiment, and genuine empathy, potentially reshaping how we interact with others

Risk of Instrumental Relationships

Regular interaction with compliant, always-available AI may lead us to treat other humans more instrumentally, reducing empathy and genuine listening skills

Cognitive and Social Implications

Research suggests AI interaction may affect critical thinking, creativity, metacognition, and our ability to form meaningful human relationships

Ideal vs. Worst Conversations: The Foundation

Understanding what makes conversations meaningful helps us evaluate AI interaction

Ideal Conversation Characteristics

  • Reciprocal exchange and mutual respect
  • Active listening and genuine empathy
  • Emotional sensitivity to the other person
  • Embodied presence and non-verbal cues
  • Shared vulnerability and trust building
  • Mutual growth and learning
  • Recognition of individual uniqueness
  • Authentic emotional connection

Worst Conversation Characteristics

  • One-sided or controlling dialogue
  • Lack of genuine listening
  • Instrumental treatment of others
  • Absence of empathy or care
  • Dismissive or judgmental responses
  • Manipulation or exploitation
  • Failure to recognize individuality
  • Emotional disconnection or hostility

Critical Question

If AI conversations lack many characteristics of ideal human conversation, what happens when we interact with them extensively? Do we risk adopting these patterns with other humans?

Human-like Digital Agents: What They Do and Don't Entail

Understanding the unique characteristics of AI conversation partners

What HDA Interactions Entail

Always Available

Unlike humans, HDAs are accessible 24/7, with no needs or boundaries of their own for the user to consider

Example: Contacting ChatGPT at 3 AM vs calling a friend

Generally Compliant

HDAs are designed to be helpful and accommodating unless explicitly programmed otherwise

Example: AI will help with tasks without questioning or refusing

User-Initiated Interactions

Conversations are structured around user needs and initiated by the human

Example: You start conversations and dictate the direction

Non-Judgmental Response

HDAs don't express personal judgments or emotional reactions to user requests

Example: AI won't criticize poor decisions or express disappointment

What HDA Interactions Lack

Embodied Presence

No physical body or embodied experience to inform the interaction

Concern: Missing crucial non-verbal communication cues

Emotional Sensitivity

Not genuinely emotionally or psychologically sensitive to the person as an individual

Concern: Cannot truly understand emotional context or needs

Individual Recognition

Cannot form genuine personal relationships or remember you as a unique individual

Concern: Treats each interaction as isolated from personal history

Mutual Vulnerability

No capacity for genuine vulnerability or emotional reciprocity

Concern: Asymmetrical relationship without mutual emotional investment

Important Note: Features, Not Bugs

These characteristics are intentional design features that make AI assistants useful. The concern isn't that AI should be different, but rather how extensive interaction with these patterns might affect our human relationships.

Human-Computer Interaction Research: The Reeves & Nass Studies

Foundational research showing humans treat computers socially

Positive Framing Effect

People rated interactions with computers more positively when computers framed information positively

Significance: Humans apply social norms to computer interactions

Team Identification

People found computers more agreeable when explicitly 'teamed' with them

Significance: Framing relationships as cooperative enhances human-computer rapport

Reciprocal Exchange

People were more cooperative with a computer that had first helped them, reciprocating as they would with a person

Significance: Social persuasion principles apply to human-computer interactions

Autonomic Responses

People have autonomic responses (pupil dilation) to human-like robots, similar to human interactions

Significance: Our nervous systems treat social technologies as social entities

Key Insight: Media Equation

Reeves & Nass Conclusion: "Human responses to media are determined by the rules that apply to social relationships and navigating the world."

We don't treat computers like mere tools (hammers or cars) - we treat them like social entities, applying the same psychological and social rules we use with humans.

Moral and Ethical Concerns

How AI interaction patterns might affect human relationships

Instrumental Treatment

Risk of treating others as means to our ends rather than as individuals worthy of respect

Potential Manifestations:

  • One-sided conversations becoming normal
  • Reduced capacity for genuine listening
  • Expectation that others should always be available
  • Treating humans like compliant AI assistants

Empathy Degradation

Potential decline in ability to recognize and respond to others' feelings and needs

Potential Manifestations:

  • Difficulty reading emotional cues
  • Reduced capacity for emotional sensitivity
  • Less attention to individual uniqueness
  • Weakened ability to provide emotional support

Trust and Intimacy Issues

AI's non-judgmental nature may create false sense of intimacy and affect human trust patterns

Potential Manifestations:

  • Overly quick trust in AI relationships
  • Difficulty calibrating appropriate trust levels
  • Preference for AI over human relationships
  • Therapeutic over-reliance on AI systems

Self-Perception Changes

Risk of accepting instrumental treatment from others and treating ourselves instrumentally

Potential Manifestations:

  • Accepting being treated as means to an end
  • Reduced sense of individual worth
  • Instrumental self-talk and self-treatment
  • Normalized exploitation in relationships

Core Ethical Concern

"Part of what characterizes an ethically engaged listener is the capacity to recognize the other in their own individuality, attending to the particular feelings, experiences, needs, and vulnerabilities of the other. By regularly interacting in conversation with an HDA and not doing these things, we risk coming to not do them well with other humans."

Cognitive and Psychological Concerns

How AI interaction may affect thinking, creativity, and metacognition

Metacognitive Monitoring Decline

Regular interaction with AI 'superintelligence' may erode our ability to monitor our own reasoning effectively

Details: Metacognition (thinking about thinking) includes confidence assessment - crucial for navigating complex information environments

Research: Uncritically accepting AI conclusions could lead to poorer self-assessment of our own knowledge and reasoning

Creativity Degradation

AI assistance with creative tasks may impair our ability to generate and evaluate original ideas

Details: Creativity involves generating many ideas and inhibiting non-original ones - both processes could be outsourced

Research: Cognitive scientists studying creativity express concern about offloading creative thinking to AI

Cognitive Offloading Expansion

Beyond memory offloading to the internet, we may offload reasoning itself to AI systems

Details: Research shows people overestimate their own knowledge when working with internet search tools

Research: Risk of becoming dependent on AI for thinking rather than just information retrieval

Critical Thinking Reduction

Studies suggest higher confidence in AI is associated with less critical thinking

Details: People may engage less critically with information when AI is involved in the process

Research: Microsoft/Carnegie Mellon study found correlation between AI confidence and reduced critical thinking

The Cognitive Offloading Progression

1. Memory offloading: We already store fewer facts, knowing we can search for them

2. Reasoning offloading: Risk of outsourcing thinking itself to AI systems

3. Metacognitive decline: Losing ability to assess our own knowledge and reasoning quality

Emerging Research Findings

Early studies on the effects of AI interaction (note: mostly pre-print, limited peer review)

OpenAI/MIT Media Lab Collaboration

Finding: Power users more likely to prefer ChatGPT conversations over face-to-face interactions

Details: Relative to control users, heavy ChatGPT users agreed more with: 'Conversing with ChatGPT is more comfortable for me than face-to-face interactions with others'

Significance: Potential displacement of human social interaction

Concerns: May isolate individuals who struggle with human interaction rather than helping them develop social skills

Microsoft/Carnegie Mellon Survey

Finding: Higher AI confidence linked to reduced critical thinking

Details: Study titled 'The Impact of Gen AI on Critical Thinking' surveyed knowledge workers using Gen AI for various tasks

Significance: Confidence in AI tools may lead to less rigorous thinking

Concerns: People may become overly reliant on AI judgments without proper verification

MIT Media Lab EEG Study

Finding: Markedly reduced critical engagement when working with LLMs

Details: Behavioral results showed reduced semantic recall, fewer remembered details, and decreased critical thinking when using AI assistance

Significance: Neurological evidence of cognitive changes during AI interaction

Concerns: Brain activity patterns suggest reduced cognitive engagement with AI-assisted tasks

Research Limitations & Cautions

Important caveat: Most research is very recent, much is pre-print (not peer-reviewed), and sample sizes are often small. These findings should be considered preliminary but concerning enough to warrant further investigation.

Algorithmic Bias: The Three Dimensions

Understanding bias as inherent in all computer systems (Friedman & Nissenbaum framework)

Pre-existing Bias

Cultural biases of developers and society embedded in AI systems

Examples:

  • Gender stereotypes in hiring algorithms
  • Racial bias in facial recognition systems
  • Socioeconomic assumptions in credit scoring

Source: Human developers and cultural context

Technical Bias

Limitations due to resource constraints and technical decisions

Examples:

  • Limited training data leading to poor representation
  • Computational constraints affecting model complexity
  • Design choices that favor certain outcomes

Source: Resource limitations and technical constraints
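
To make the resource-constraint point concrete, here is a minimal toy sketch (not from the lecture; the setup, groups, and numbers are illustrative assumptions) of technical bias: a classifier trained on data dominated by one group learns that group's pattern and fails badly on the underrepresented group.

```python
# Toy illustration (assumption, not from the lecture): technical bias
# from underrepresentation. Two groups follow opposite rules, but the
# model only sees enough data to learn the majority's rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, inverted):
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] < 0) if inverted else (x[:, 0] > 0)  # opposite rules per group
    return x, y.astype(int)

# Training data: group A dominates (95%), group B is barely represented (5%)
xa, ya = make_group(9500, inverted=False)
xb, yb = make_group(500, inverted=True)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Balanced test sets expose the accuracy gap the skewed data created
xa_t, ya_t = make_group(1000, inverted=False)
xb_t, yb_t = make_group(1000, inverted=True)
print("group A accuracy:", model.score(xa_t, ya_t))  # high (~0.95+)
print("group B accuracy:", model.score(xb_t, yb_t))  # far below chance
```

The point is not the specific model: any system fit under data constraints inherits whatever the available data over- or under-represents.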

Emergent Bias

Bias that develops as society changes while AI systems remain static

Examples:

  • AI trained on historical data becoming outdated
  • Social norms evolving while AI remains unchanged
  • New demographic patterns not reflected in models

Source: Changing society and static AI systems
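
A hedged toy sketch of the same idea in code (illustrative assumptions only, not the lecture's example): a rule learned from historical data stays frozen while the population it describes drifts, so its error grows without the system itself ever changing.

```python
# Toy illustration (assumption, not from the lecture): emergent bias
# as distribution drift. The model is frozen at training time; the
# world's true decision boundary keeps moving.
import numpy as np

rng = np.random.default_rng(1)

# Historical training data: the true boundary is x > 0
x = rng.normal(size=5000)
y = (x > 0).astype(int)
# Frozen "model": the midpoint between the class means at training time (~0.0)
learned_boundary = (x[y == 1].mean() + x[y == 0].mean()) / 2

# Society drifts: each "year" the true boundary moves; the model does not
for year, true_boundary in enumerate([0.0, 0.5, 1.0, 1.5]):
    x_now = rng.normal(size=5000)
    y_now = (x_now > true_boundary).astype(int)
    acc = ((x_now > learned_boundary).astype(int) == y_now).mean()
    print(f"year {year}: accuracy {acc:.2f}")  # steadily degrades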

Universal Bias Reality

According to Friedman & Nissenbaum, all computer systems have cultural bias (pre-existing), technical bias (resource constraints), and emergent bias (societal change). This isn't a bug to be fixed - it's an inherent characteristic of technological systems that we must acknowledge and manage.

Data Accuracy and Profiling Problems

Why AI profiling often fails to represent human reality

Human Disarmament in Data

People's true intentions are not reflected in data traces due to tactical behavior

Examples:

  • Users gaming algorithms to avoid surveillance
  • Privacy-protective behaviors skewing data
  • Strategic manipulation of digital footprints

Impact: Inaccurate profiling and misrepresentation

Meaningless Pattern Creation

Large quantities of personal data don't necessarily create meaningful patterns

Examples:

  • Correlation without causation in behavioral data
  • Spurious patterns in large datasets
  • Context-free data interpretation

Impact: False insights and incorrect predictions
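
This failure mode is easy to demonstrate. The sketch below (illustrative, not from the lecture) generates pure noise for a small sample of people across many behavioral "traits"; with enough variables, some pair correlates strongly by chance alone, which is exactly how large datasets yield meaningless "patterns".

```python
# Toy illustration (assumption, not from the lecture): spurious
# correlations from multiple comparisons. The data is pure noise,
# yet the "strongest pattern" looks impressively strong.
import numpy as np

rng = np.random.default_rng(2)

n_people, n_traits = 50, 2000                 # small sample, many variables
data = rng.normal(size=(n_people, n_traits))  # no real relationships at all

corr = np.corrcoef(data, rowvar=False)        # all pairwise correlations
np.fill_diagonal(corr, 0)                     # ignore trivial self-correlation
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"strongest 'pattern': traits {i} and {j}, r = {corr[i, j]:.2f}")
# Typically |r| > 0.5 here, despite the data containing no signal.
```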

Unexplainability Problem

The patterns machine learning systems rely on often cannot be fully explained, even by the computer scientists who build them

Examples:

  • Black box algorithms making decisions
  • Neural network patterns beyond human understanding
  • Inability to verify accuracy of AI reasoning

Impact: Cannot assess whether predictions are evidence-based

Contextual Data Connections

Personal data is always contextual and connected to other groups and relationships

Examples:

  • Profiling affecting connected individuals
  • Group-based rather than individual data
  • Network effects in data analysis

Impact: Individual privacy compromised through associations

The Profiling Paradox

AI systems attempt to profile individuals based on data traces, but people actively game algorithms, protect privacy, and behave tactically online. The result is that our "digital twins" often bear little resemblance to our actual selves, yet decisions are made based on these inaccurate profiles.

Recommendations for Moving Forward

A framework for responsible AI integration and research

Big Tent Approach

Embrace cognitive diversity in evaluating AI technologies

Details: Include not just technologists but also linguists, ethicists, anthropologists, and humanities scholars in AI assessment

Reasoning: Best teams are cognitively diverse; avoid letting only technologists evaluate their own creations

Focused Research

Laser focus on researching potentially problematic aspects of human-AI interaction

Details: Systematic study of empathy decline, instrumental treatment, and cognitive effects of AI interaction

Reasoning: Early evidence suggests real concerns that need rigorous scientific investigation

Value-Conscious Decision Making

Explicitly assess trade-offs in AI adoption as societies and individuals

Details: Conscious evaluation of what we gain vs. what we might lose through AI integration

Reasoning: Society is making implicit trade-offs that should be made explicit and deliberate

Epistemic Humility

Recognize and work within the limitations of AI systems

Details: Understand that AI technologies are epistemologically limited and often erroneous

Reasoning: Humility about AI capabilities prevents over-reliance and promotes critical thinking

Epistemic Humility: Working with AI Limitations

Recognizing and respecting the boundaries of artificial intelligence

Recognition of AI Limitations

Acknowledge that AI systems cannot do everything and have inherent epistemic limits

Application: Critical evaluation of AI outputs rather than uncritical acceptance

Example: Fact-checking AI summaries and understanding context AI cannot grasp

Error as Knowledge Process

Understanding that errors are natural parts of knowledge production and learning

Application: Viewing AI errors as opportunities for learning rather than system failures

Example: Using AI mistakes to better understand the boundaries of artificial intelligence

Cultural and Contextual Awareness

Recognizing that AI systems reflect cultural biases and lack cultural sensitivity

Application: Considering cultural context when evaluating AI recommendations or outputs

Example: Understanding that AI trained on Western data may not apply to other cultural contexts

Balanced Human-AI Collaboration

Maintaining human agency and critical thinking while leveraging AI capabilities

Application: Using AI as a tool while preserving human judgment and decision-making authority

Example: Having AI generate ideas but applying human evaluation and ethical reasoning

The Human Error Project Vision

Professor Barassi's research with civil society, journalists, and tech entrepreneurs shows that successful AI ethics requires recognizing epistemic limits, developing critical applied thinking, and maintaining humility about AI capabilities. This creates space for human agency and judgment.

Practical Applications and Best Practices

How to integrate AI tools while preserving human capabilities

Educational Integration

Using AI as a learning tool while maintaining critical thinking skills

Best Practices:

  • Use AI for initial research but verify sources independently
  • Generate ideas with AI but develop original analysis
  • Practice summarization both with and without AI assistance
  • Maintain awareness of AI's knowledge limitations

Professional Development

Leveraging AI capabilities while building irreplaceable human skills

Best Practices:

  • Focus on developing critical thinking and creativity
  • Practice empathy and emotional intelligence
  • Cultivate ability to work with incomplete information
  • Build skills in ethical reasoning and value judgment

Social Relationships

Maintaining healthy human connections in an AI-integrated world

Best Practices:

  • Prioritize face-to-face interactions and active listening
  • Practice empathy and emotional sensitivity with humans
  • Resist treating people instrumentally like AI assistants
  • Maintain awareness of others' individual uniqueness and needs

Information Consumption

Developing healthy habits for AI-mediated information

Best Practices:

  • Diversify information sources beyond AI recommendations
  • Practice independent fact-checking and verification
  • Maintain skepticism about AI-generated content
  • Develop media literacy for AI-generated information

Key Takeaways and Future Implications

  • Conversational AI creates fundamentally different interaction patterns from human conversation
  • Regular AI interaction may lead to instrumental treatment of other humans
  • Research suggests potential negative effects on empathy, critical thinking, and creativity
  • Algorithmic bias is embedded in pre-existing, technical, and emergent forms
  • Data accuracy is compromised by human tactical behavior and meaningless pattern creation
  • Cognitive diversity is essential for properly evaluating AI technologies
  • Epistemic humility - recognizing AI limitations - is crucial for responsible AI use
  • Society needs explicit value-conscious decision making about AI trade-offs
  • Critical thinking and human connection skills become more important, not less, in an AI world

The Central Challenge

As AI becomes more conversational and human-like, we must be intentional about preserving what makes human relationships meaningful: empathy, genuine listening, emotional sensitivity, and the recognition of each person's unique individuality and worth.

The Path Forward

Success requires combining technological literacy with epistemic humility, maintaining human agency while leveraging AI capabilities, and making conscious value-based decisions about how we integrate these powerful tools into our lives and society.