Social status shapes moral perceptions of AI
How do people evaluate the performance of an AI system? New research by a group of University of Lucerne sociologists reveals that even non-human agents aren’t exempt from social prejudice.
Imagine two doctors with identical skills and impeccable performance: Would your judgment of their actions change if one worked at a top-ranked hospital, or if one were male and the other female? And what if one of these doctors weren’t a person at all, but an artificial intelligence (AI)?
A recent paper published in the journal Sociological Science sheds light on an intriguing factor that shapes how we perceive the morality of AI: social status.
Affiliation matters – even when you’re an AI
The paper’s authors – University of Lucerne sociologists Dr Patrick Schenk, Vanessa A. Müller and Luca Keiser – found that people are more likely to view AI as morally acceptable when it’s associated with high-ranking organisations. What’s more, people also tend to see human agents as more morally acceptable than AI, even when AI performs just as well – or sometimes better – on specific tasks.
In the study, nearly 600 participants evaluated the moral acceptability of different agents (humans, AIs and basic computer programs) performing three different tasks: cancer diagnosis in a hospital, fact-checking in a newspaper editorial office, and hiring by a recruitment agency. While AIs were judged similarly to computer programs, both were generally seen as less morally acceptable than human agents. This tendency persisted regardless of the agent’s actual effectiveness.
Furthermore, factors like gender or giving the AI a human name made no significant difference in moral judgments, but organisational prestige did. When an AI was associated with a top-tier institution, people rated it more favourably.
Building awareness to overcome bias
The study highlights how social biases – such as associating prestige with moral value – shape our views on emerging technologies. The findings suggest that people might be more willing to trust and accept AI if it’s backed by reputable institutions, regardless of its actual capabilities. According to the study’s authors, recognising this tendency toward “status bias” could help guide fairer and more objective evaluations of AI systems in society.
The study, published under the title “Social Status and the Moral Acceptance of Artificial Intelligence” in the journal Sociological Science, is part of a Swiss National Science Foundation project on “Artificial Intelligence and Moral Decision-Making in Contemporary Societies: An Empirical Sociological Investigation”, led by Sociology Professor Gabriel Abend and Dr Patrick Schenk.