Robots as malevolent moral agents: Harmful behavior results in dehumanization, not anthropomorphism
by Aleksandra Swiderska and Dennis Küster
Abstract:
A robot’s decision to harm a person is sometimes considered to be the ultimate proof of it gaining a human-like mind. Here, we contrasted predictions about attribution of mental capacities from moral typecasting theory with the denial of agency from the dehumanization literature. Experiments 1 and 2 investigated mind perception for intentionally and accidentally harmful robotic agents based on text and image vignettes. Experiment 3 disambiguated agent intention (malevolent and benevolent), and additionally varied the type of agent (robotic and human) using short computer-generated animations. Harmful robotic agents were consistently imbued with mental states to a lower degree than benevolent agents, supporting the dehumanization account. Further results revealed that a human moral patient appeared to suffer less when depicted with a robotic agent than with another human. The findings suggest that future robots may become subject to human-like dehumanization mechanisms, which challenges the established beliefs about anthropomorphism in the domain of moral interactions.
Reference:
Robots as malevolent moral agents: Harmful behavior results in dehumanization, not anthropomorphism (Aleksandra Swiderska, Dennis Küster), In Cognitive Science, volume 44, number 7, 2020.
Bibtex Entry:
@article{swiderska_robots_2020,
  title = {Robots as malevolent moral agents: Harmful behavior results in dehumanization, not anthropomorphism},
  volume = {44},
  issn = {0364-0213, 1551-6709},
  shorttitle = {Robots as malevolent moral agents},
  url = {https://www.csl.uni-bremen.de/cms/images/documents/publications/Swiderska_Kuester_RobotsAsMalevolentAgents_CognitiveScience_PrePrintVersion.pdf},
  doi = {10.1111/cogs.12872},
  abstract = {A robot’s decision to harm a person is sometimes considered to be the ultimate proof of it gaining a human-like mind. Here, we contrasted predictions about attribution of mental capacities from moral typecasting theory with the denial of agency from the dehumanization literature. Experiments 1 and 2 investigated mind perception for intentionally and accidentally harmful robotic agents based on text and image vignettes. Experiment 3 disambiguated agent intention (malevolent and benevolent), and additionally varied the type of agent (robotic and human) using short computer-generated animations. Harmful robotic agents were consistently imbued with mental states to a lower degree than benevolent agents, supporting the dehumanization account. Further results revealed that a human moral patient appeared to suffer less when depicted with a robotic agent than with another human. The findings suggest that future robots may become subject to human-like dehumanization mechanisms, which challenges the established beliefs about anthropomorphism in the domain of moral interactions.},
  language = {en},
  number = {7},
  urldate = {2020-09-21},
  journal = {Cognitive Science},
  author = {Swiderska, Aleksandra and Küster, Dennis},
  month = jul,
  year = {2020}
}