This experimental study investigates how emotions expressed by AI agents affect people's willingness to trust artificial intelligence. Conducted in a CAVE-based virtual reality environment at Brandenburg University of Technology, the research uses an economic trust game to measure how an AI trustee's emotional expressions influence human trust behavior. The study addresses a gap in understanding human-AI interaction, focusing on the role of emotions in the development of trust toward AI systems.
Utilizing immersive CAVE (CAVE Automatic Virtual Environment) technology, in which cube-shaped projection screens surround the participant to create a convincing 3D virtual environment and heighten immersion.
The CAVE environment at Brandenburg University of Technology showing the virtual trust game interface with AI avatar
Implementing the classical economic trust game (Berg et al., 1995), in which participants act as investors paired with AI trustees that display varying emotional expressions; a minimal payoff sketch follows the figure below.
Systematically varying the AI avatar's appearance to express positive, negative, or neutral emotions across the experimental conditions and treatment phases.
AI avatar emotional expressions used in the experiment: negative (left), neutral (center), and positive (right)
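For readers unfamiliar with the paradigm, here is a minimal sketch of one round of the Berg et al. (1995) trust game. The 10-unit endowment and the tripling multiplier are the classic defaults from that paper; the exact parameters used in this study are not restated here, so treat them as assumptions.

```python
# Minimal sketch of one round of the Berg et al. (1995) trust game.
# The endowment (10 units) and the tripling multiplier are the classic
# defaults; the study's actual parameters are assumed, not confirmed.

ENDOWMENT = 10   # units the investor starts with (assumed)
MULTIPLIER = 3   # the sent amount is tripled before reaching the trustee

def play_round(investment: int, returned: int) -> tuple[int, int]:
    """Compute (investor_payoff, trustee_payoff) for one round.

    investment: units the human investor sends (0..ENDOWMENT)
    returned:   units the AI trustee sends back (0..MULTIPLIER*investment)
    """
    if not 0 <= investment <= ENDOWMENT:
        raise ValueError("investment out of range")
    received = MULTIPLIER * investment
    if not 0 <= returned <= received:
        raise ValueError("returned amount out of range")
    investor_payoff = ENDOWMENT - investment + returned
    trustee_payoff = received - returned
    return investor_payoff, trustee_payoff

# Example: investing 6 units; the trustee returns half of the tripled amount.
print(play_round(6, 9))  # -> (13, 9)
```

The amount invested serves as the behavioral measure of trust: the more a participant sends, the more they expose themselves to the trustee's decision.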
The experiment involved 77 participants (mean age 29.67 years; 45% female, 53% male) in a controlled laboratory setting. Participants were randomly assigned to one of two groups: Group 1 (positive-emotion treatment) or Group 2 (negative-emotion treatment). Each participant played 32 rounds of the trust game: the first 16 rounds served as a baseline with neutral avatar emotions, and rounds 17-32 formed the treatment phase with positive or negative emotions, depending on group assignment.
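As a rough illustration of this within-subject design, the sketch below builds a per-participant emotion schedule under the stated 16+16 round structure. Function and variable names are mine, not taken from the study's materials.

```python
import random

# Illustrative sketch of the within-subject design: 16 neutral baseline
# rounds followed by 16 treatment rounds whose avatar emotion depends on
# the participant's randomly assigned group. Names are hypothetical.

def emotion_schedule(treatment_emotion: str, n_rounds: int = 32) -> list[str]:
    """Avatar emotion shown in each of the n_rounds rounds."""
    baseline = n_rounds // 2
    return ["neutral"] * baseline + [treatment_emotion] * (n_rounds - baseline)

# Random assignment to Group 1 (positive) or Group 2 (negative).
group_emotion = random.choice(["positive", "negative"])
schedule = emotion_schedule(group_emotion)

assert len(schedule) == 32 and schedule[:16] == ["neutral"] * 16
print(f"rounds 17-32 show a {schedule[16]} avatar")
```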
Participants in the positive-emotion group showed a significant increase in trust behavior (p = .021), with average investments rising from the baseline to the treatment phase; a sketch of such a baseline-vs-treatment comparison appears after these findings.
Negative emotions had no significant effect on trust behavior (p = .556), suggesting that the influence of emotional valence on human-AI trust is asymmetric.
AI-specific trust operates as a distinct construct, independent of general trust and risk-taking propensity, highlighting unique aspects of human-AI relationships.
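To make the reported comparisons concrete, here is a hypothetical version of the baseline-vs-treatment test on synthetic data. The write-up does not name the statistical test behind the reported p-values, so the Wilcoxon signed-rank test below is an assumption (a common choice for paired, potentially non-normal investment data), and the numbers are placeholders, not study data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical illustration only: the test choice is assumed and the
# data below are synthetic placeholders, not results from the study.

rng = np.random.default_rng(42)
n = 38  # illustrative group size, not the study's actual split

# Per-participant mean investments in the baseline (rounds 1-16)
# and treatment (rounds 17-32) phases.
baseline = rng.uniform(3.0, 7.0, size=n)
treatment = baseline + rng.normal(0.8, 1.2, size=n)

stat, p = wilcoxon(treatment, baseline)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
```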
The findings contribute to the understanding of human-machine interaction and have practical implications for deploying AI technology in organizations.