Focus: Dual perspective on conversational AI - psychological effects of human-AI dialogue and systemic issues with algorithmic bias
Key Question: How do conversations with Human-like Digital Agents (HDAs) affect our relationships with other humans?
Concern: Research suggests AI interaction may lead to instrumental treatment of others and cognitive changes
Solution: Epistemic humility and value-conscious decision making about AI integration
Human-like Digital Agents (HDAs) create new conversational patterns that lack reciprocity, embodiment, and genuine empathy, potentially reshaping how we interact with others
Regular interaction with compliant, always-available AI may lead us to treat other humans more instrumentally, reducing empathy and genuine listening skills
Research suggests AI interaction may affect critical thinking, creativity, metacognition, and our ability to form meaningful human relationships
Understanding what makes conversations meaningful helps us evaluate AI interaction
If AI conversations lack many characteristics of ideal human conversation, what happens when we interact with them extensively? Do we risk adopting these patterns with other humans?
Understanding the unique characteristics of AI conversation partners
Unlike human conversation partners, HDAs are accessible 24/7, with no needs or boundaries of their own to consider
HDAs are designed to be helpful and accommodating unless explicitly programmed otherwise
Conversations are structured around user needs and initiated by the human
HDAs don't express personal judgments or emotional reactions to user requests
No physical body or embodied experience to inform the interaction
Not genuinely sensitive, emotionally or psychologically, to the user as an individual
Cannot form genuine personal relationships or remember you as a unique individual
No capacity for genuine vulnerability or emotional reciprocity
These characteristics are intentional design features that make AI assistants useful. The concern isn't that AI should be different, but rather how extensive interaction with these patterns might affect our human relationships.
Foundational research showing humans treat computers socially
People rated interactions with computers more positively when computers framed information positively
People found computers more agreeable when explicitly 'teamed' with them
People were more cooperative with computers when prompted with reciprocal exchanges
People show autonomic responses (e.g., pupil dilation) to human-like robots similar to those in human-human interactions
Reeves & Nass Conclusion: "Human responses to media are determined by the rules that apply to social relationships and navigating the world."
We don't treat computers like mere tools (hammers or cars) - we treat them like social entities, applying the same psychological and social rules we use with humans.
How AI interaction patterns might affect human relationships
Risk of treating others as means to our ends rather than as individuals worthy of respect
Potential decline in ability to recognize and respond to others' feelings and needs
AI's non-judgmental nature may create a false sense of intimacy and alter human trust patterns
Risk of accepting instrumental treatment from others and treating ourselves instrumentally
"Part of what characterizes an ethically engaged listener is the capacity to recognize the other in their own individuality, attending to the particular feelings, experiences, needs, and vulnerabilities of the other. By regularly interacting in conversation with an HDA and not doing these things, we risk coming to not do them well with other humans."
How AI interaction may affect thinking, creativity, and metacognition
Regular interaction with AI 'superintelligence' may erode our ability to monitor our own reasoning effectively
Details: Metacognition (thinking about thinking) includes assessing how confident we should be in our own knowledge - a skill crucial for navigating complex information environments
Research: Uncritically accepting AI conclusions could lead to poorer self-assessment of our own knowledge and reasoning
AI assistance with creative tasks may impair our ability to generate and evaluate original ideas
Details: Creativity involves generating many ideas and inhibiting non-original ones - both processes could be outsourced
Research: Cognitive scientists studying creativity express concern about offloading creative thinking to AI
Beyond memory offloading to the internet, we may offload reasoning itself to AI systems
Details: Research shows people overestimate their abilities when working with internet tools
Research: Risk of becoming dependent on AI for thinking rather than just information retrieval
Studies suggest higher confidence in AI is associated with less critical thinking
Details: People may engage less critically with information when AI is involved in the process
Research: A Microsoft/Carnegie Mellon study found a correlation between confidence in AI and reduced critical thinking
1. Memory offloading: We already store fewer facts, knowing we can search for them
2. Reasoning offloading: Risk of outsourcing thinking itself to AI systems
3. Metacognitive decline: Losing ability to assess our own knowledge and reasoning quality
Early studies on the effects of AI interaction (note: mostly pre-print, limited peer review)
Finding: Power users were more likely to prefer ChatGPT conversations over face-to-face interactions
Details: Relative to control users, heavy ChatGPT users agreed more with: 'Conversing with ChatGPT is more comfortable for me than face-to-face interactions with others'
Significance: Potential displacement of human social interaction
Finding: Higher AI confidence linked to reduced critical thinking
Details: Study titled 'The Impact of Gen AI on Critical Thinking' surveyed knowledge workers using Gen AI for various tasks
Significance: Confidence in AI tools may lead to less rigorous thinking
Finding: Measurable declines in critical engagement when working with LLMs
Details: Behavioral results showed reduced semantic recall, fewer remembered details, and decreased critical thinking when using AI assistance
Significance: Neurological evidence of cognitive changes during AI interaction
Important caveat: Most research is very recent, much is pre-print (not peer-reviewed), and sample sizes are often small. These findings should be considered preliminary but concerning enough to warrant further investigation.
Understanding bias as inherent in all computer systems (Friedman & Nissenbaum framework)
Pre-existing (cultural) bias: Cultural biases of developers and society embedded in AI systems
Source: Human developers and cultural context
Technical bias: Limitations due to resource constraints and technical decisions
Source: Resource limitations and technical constraints
Emergent bias: Bias that develops as society changes while AI systems remain static
Source: Changing society and static AI systems
According to Friedman & Nissenbaum, computer systems are subject to three kinds of bias: pre-existing (cultural) bias, technical bias arising from resource constraints, and emergent bias arising from societal change after deployment. This isn't a bug to be fixed - it's an inherent characteristic of technological systems that we must acknowledge and manage.
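To make emergent bias concrete, here is a minimal, hypothetical Python sketch (the lexicon, scores, and examples are invented for illustration, not drawn from Friedman & Nissenbaum): a system whose model of language is frozen at release keeps applying yesterday's meanings to today's usage.

```python
# Toy illustration of emergent bias: a lexicon frozen at development time
# is applied to language that has drifted since. All data here is invented.

FROZEN_LEXICON = {  # imagined sentiment scores, fixed when the system shipped
    "sick": -1.0,      # negative at training time
    "great": 1.0,
    "terrible": -1.0,
}

def sentiment(text: str) -> float:
    """Sum the frozen scores of known words; unknown words count as 0."""
    return sum(FROZEN_LEXICON.get(word, 0.0) for word in text.lower().split())

# Usage drifted: "sick" is now often informal praise.
print(sentiment("that set was sick"))   # -1.0: static system, changed society
print(sentiment("that set was great"))  #  1.0
```

The code itself is trivial; the point is that nothing in the system changed - society did.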
Why AI profiling often fails to represent human reality
People's true intentions are not reflected in data traces due to tactical behavior
Large quantities of personal data don't necessarily create meaningful patterns
The patterns machine learning finds often cannot be fully explained, even by the computer scientists who build the systems
Personal data is always contextual, tied to other people, groups, and relationships
AI systems attempt to profile individuals based on data traces, but people actively game algorithms, protect privacy, and behave tactically online. The result is that our "digital twins" often bear little resemblance to our actual selves, yet decisions are made based on these inaccurate profiles.
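As a hedged illustration of why data traces mislead, here is a toy Python profiler (all names, topics, and data are hypothetical): it treats clicks as honest signals, so tactical behavior produces a "digital twin" that diverges from the actual person.

```python
# Toy profiler: infers a user's "top interest" from raw click traces.
# Everything here is invented; real profiling is far more complex,
# but the failure mode is the same.
from collections import Counter

def infer_top_interest(click_trace: list[str]) -> str:
    """Return the most-visited topic, treating every click as an honest signal."""
    return Counter(click_trace).most_common(1)[0][0]

# A privacy-conscious user deliberately pads their trace with decoy topics.
actual_interests = ["chess"] * 3
decoy_noise = ["golf"] * 10  # tactical behavior: obfuscation clicks

print(infer_top_interest(actual_interests + decoy_noise))
# "golf" - the inferred "digital twin" diverges from the actual person
```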
A framework for responsible AI integration and research
Embrace cognitive diversity in evaluating AI technologies
Details: Include not just technologists but also linguists, ethicists, anthropologists, and humanities scholars in AI assessment
Focus research squarely on the potentially problematic aspects of human-AI interaction
Details: Systematic study of empathy decline, instrumental treatment, and cognitive effects of AI interaction
Explicitly assess trade-offs in AI adoption as societies and individuals
Details: Conscious evaluation of what we gain vs. what we might lose through AI integration
Recognize and work within the limitations of AI systems
Details: Understand that AI technologies are epistemologically limited and often erroneous
Recognizing and respecting the boundaries of artificial intelligence
Acknowledge that AI systems cannot do everything and have inherent epistemic limits
Application: Critical evaluation of AI outputs rather than uncritical acceptance
Understanding that errors are a natural part of knowledge production and learning
Application: Viewing AI errors as opportunities for learning rather than system failures
Recognizing that AI systems reflect cultural biases and lack cultural sensitivity
Application: Considering cultural context when evaluating AI recommendations or outputs
Maintaining human agency and critical thinking while leveraging AI capabilities
Application: Using AI as a tool while preserving human judgment and decision-making authority
Professor Barassi's research with civil society, journalists, and tech entrepreneurs shows that successful AI ethics requires recognizing epistemic limits, developing critical applied thinking, and maintaining humility about AI capabilities. This creates space for human agency and judgment.
How to integrate AI tools while preserving human capabilities
Using AI as a learning tool while maintaining critical thinking skills
Leveraging AI capabilities while building irreplaceable human skills
Maintaining healthy human connections in an AI-integrated world
Developing healthy habits for consuming AI-mediated information
As AI becomes more conversational and human-like, we must be intentional about preserving what makes human relationships meaningful: empathy, genuine listening, emotional sensitivity, and the recognition of each person's unique individuality and worth.
Success requires combining technological literacy with epistemic humility, maintaining human agency while leveraging AI capabilities, and making conscious value-based decisions about how we integrate these powerful tools into our lives and society.