The Role of Emotions in Trusting Artificial Intelligence

Results of an Experiment Conducted in a CAVE-based Virtual Reality

Co-authored with Alexander Martin, Brandenburg University of Technology

Research Overview

This experimental study investigates how emotional expressions displayed by AI agents influence people's willingness to trust artificial intelligence. Conducted in a CAVE-based virtual reality environment at Brandenburg University of Technology, the research uses an economic trust game to measure trust behavior toward an AI trustee whose emotional expression is systematically varied. The study addresses a gap in our understanding of human-AI interaction, focusing on the role of emotions in the development of trust relationships with AI systems.

Research Methodology

CAVE Environment

Utilizing immersive CAVE (CAVE Automatic Virtual Environment) technology, in which cube-shaped projection screens surround the participant to create a convincing 3D virtual environment and heighten immersion.

Figure: The CAVE environment at Brandenburg University of Technology, showing the virtual trust game interface with the AI avatar.

Trust Game Design

Implementing the classic economic trust game (Berg et al., 1995), in which participants act as investors paired with an AI trustee whose emotional expression varies across conditions.
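As a point of reference, the sketch below captures the payoff logic of a single Berg et al. (1995) trust-game round. The 10-token endowment and the multiplier of 3 are the standard values of the game and are assumed here for illustration; the exact parameters used in the experiment are not stated in this summary.

```python
# Minimal sketch of one round of the Berg et al. (1995) trust game.
# The endowment (10 tokens) and multiplier (3) are standard values assumed
# for illustration; the parameters used in the actual experiment may differ.

ENDOWMENT = 10   # tokens given to the investor (participant) each round
MULTIPLIER = 3   # factor applied to the amount sent to the trustee (AI)

def play_round(amount_sent: int, amount_returned: int) -> dict:
    """Compute payoffs for one trust-game round.

    amount_sent: tokens the investor transfers to the AI trustee (0..ENDOWMENT).
    amount_returned: tokens the trustee sends back (0..amount_sent * MULTIPLIER).
    """
    if not 0 <= amount_sent <= ENDOWMENT:
        raise ValueError("amount_sent must be between 0 and the endowment")
    transferred = amount_sent * MULTIPLIER
    if not 0 <= amount_returned <= transferred:
        raise ValueError("amount_returned must not exceed the multiplied transfer")
    return {
        "investor_payoff": ENDOWMENT - amount_sent + amount_returned,
        "trustee_payoff": transferred - amount_returned,
        "trust_proxy": amount_sent,  # tokens sent serve as the behavioral trust measure
    }

# Example: the participant sends 6 tokens; the AI returns 9 of the 18 it receives.
print(play_round(6, 9))  # {'investor_payoff': 13, 'trustee_payoff': 9, 'trust_proxy': 6}
```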

Emotional Manipulation

Systematically varying AI avatar appearances to express positive, negative, or neutral emotions across different experimental conditions and treatment phases.

Figure: AI avatar emotional expressions used in the experiment: negative (left), neutral (center), and positive (right).

Key Hypotheses

  • H1-H3: General trust disposition, risk-taking behavior, and declared trust in AI are positively correlated with observed trust behavior towards virtual AI
  • H4: Virtual AI expressing positive emotions leads to higher levels of trust behavior compared to AI without emotional expressions
  • H5: Virtual AI expressing negative emotions leads to lower levels of trust behavior compared to neutral AI
  • H6: Positive emotions have a greater impact on trusting behavior than negative emotions in human-AI interactions

Experimental Design

The experiment involved 77 participants (average age 29.67 years, 45% female, 53% male) in a controlled laboratory setting. Participants were randomly assigned to one of two groups: Group 1 (positive emotion treatment) and Group 2 (negative emotion treatment). Each participant played 32 rounds of the trust game, with the first 16 rounds serving as the baseline (neutral emotions) and rounds 17-32 as the treatment phase (positive or negative emotions, depending on the group).
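The round schedule implied by this design can be summarized in a few lines of code; the function below is an illustrative sketch, not the experiment's actual implementation.

```python
# Sketch of the avatar-emotion schedule described above: 32 rounds per
# participant, rounds 1-16 neutral (baseline), rounds 17-32 showing the
# group-specific emotion (Group 1: positive, Group 2: negative).

def avatar_emotion(group: int, round_number: int) -> str:
    """Return the avatar emotion displayed in a given round for a given group."""
    if not 1 <= round_number <= 32:
        raise ValueError("round_number must be in 1..32")
    if round_number <= 16:
        return "neutral"  # baseline phase, identical for both groups
    return "positive" if group == 1 else "negative"  # treatment phase

# Example: Group 2 sees a negative expression from round 17 onward.
print(avatar_emotion(2, 20))  # -> 'negative'
```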

Experimental Conditions

  • Baseline Phase (Rounds 1-16): Both groups interact with an AI avatar showing a neutral emotional expression
  • Treatment Phase (Rounds 17-32): Group 1 sees positive emotions, Group 2 sees negative emotions
  • Trust Measurement: Token investment behavior as proxy for trust in AI agent
  • Statistical Analysis: Wilcoxon tests, Mann-Whitney U tests, and correlation analysis (see the sketch after this list)
  • CAVE Technology: Immersive cube-shaped virtual reality environment with wall projections
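A minimal sketch of the analysis pipeline is shown below, using the SciPy implementations of the named tests on synthetic data; the group sizes, scores, and effect sizes are placeholders, not the study's actual data.

```python
# Illustrative analysis sketch with synthetic data; variable names and values
# are placeholders, not the study's actual dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 38  # roughly half of the 77 participants per group (assumed for illustration)

# Per-participant mean token investment in each phase (synthetic).
baseline_g1 = rng.uniform(3, 7, n)
treatment_g1 = baseline_g1 + rng.normal(0.8, 1.0, n)  # positive-emotion group
baseline_g2 = rng.uniform(3, 7, n)
treatment_g2 = baseline_g2 + rng.normal(0.0, 1.0, n)  # negative-emotion group

# Within-group comparison of baseline vs. treatment investments (paired data):
# Wilcoxon signed-rank test.
w_pos = stats.wilcoxon(treatment_g1, baseline_g1)
w_neg = stats.wilcoxon(treatment_g2, baseline_g2)

# Between-group comparison of treatment-phase investments (independent samples):
# Mann-Whitney U test.
u_groups = stats.mannwhitneyu(treatment_g1, treatment_g2, alternative="two-sided")

# Correlation of a dispositional measure (e.g., a questionnaire trust score)
# with observed investment, in the spirit of H1-H3: Spearman rank correlation.
trust_disposition = rng.uniform(1, 7, n)
rho, p_rho = stats.spearmanr(trust_disposition, treatment_g1)

print(f"Wilcoxon (positive group): p = {w_pos.pvalue:.3f}")
print(f"Wilcoxon (negative group): p = {w_neg.pvalue:.3f}")
print(f"Mann-Whitney U (between groups): p = {u_groups.pvalue:.3f}")
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```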

Key Findings

Positive Emotions Impact

Participants in the positive emotion group showed a significant increase in trust behavior (p = .021), with average investments rising from baseline to treatment phase.

Negative Emotions Limited Effect

Negative emotions had no significant impact on trust behavior (p = .556), suggesting asymmetric effects of emotional valence on human-AI trust.

Dispositional Trust Independence

AI-specific trust operates as a distinct construct, independent of general trust and risk-taking propensity, highlighting unique aspects of human-AI relationships.

Implications & Applications

The findings contribute to understanding human-machine interaction and have significant implications for AI technology deployment in organizations:

  • AI Design: Incorporating positive emotional expressions in AI interfaces to enhance user trust and acceptance
  • Organizational Implementation: Understanding how emotional AI design can facilitate technology adoption in workplace settings
  • Trust Calibration: Recognizing the importance of appropriate trust levels—neither over-trust nor under-trust in AI systems
  • VR Applications: Leveraging immersive environments for controlled studies of human-AI interaction dynamics
  • Policy Considerations: Informing responsible AI development that considers emotional manipulation concerns

Limitations & Future Research

  • Environmental Generalizability: CAVE laboratory setting may limit real-world applicability of findings
  • Cultural Considerations: Sample may not represent diverse demographic and cultural groups affecting trust perceptions
  • Avatar Diversity: Study focused on single virtual character type—future research should explore varied AI representations
  • Dynamic Feedback: Investigating whether trust interacts with adaptive AI responses and different trust subtypes
  • Longitudinal Studies: Examining long-term effects and persistence of VR-mediated trust in AI systems
