Hazardous machinery

OAI: oai:purehost.bath.ac.uk:openaire_cris_publications/9e5dee5d-cc84-4f4e-b16e-0c94bd718c53 DOI: https://doi.org/10.1016/j.jesp.2023.104582
Abstract

Autonomous robots increasingly perform functions that are potentially hazardous and could cause injury to people (e.g., autonomous driving). When this happens, questions will arise regarding responsibility, although autonomy complicates this issue – insofar as robots seem to control their own behaviour, where would blame be assigned? Across three experiments, we examined whether robots involved in harm are assigned agency and, consequently, blamed. In Studies 1 and 2, people assigned more agency to machines involved in accidents when they were described as ‘autonomous robots’ (vs. ‘machines’), and in turn, blamed them more, across a variety of contexts. In Study 2, robots and machines were assigned similar experience, and we found no evidence for a role of experience in blaming robots over machines. In Study 3, people assigned more agency and blame to a more (vs. less) sophisticated military robot involved in a civilian fatality. Humans who were responsible for robots' safe operation, however, were blamed similarly whether harms involved a robot (vs. machine; Study 1), or a more (vs. less; Study 3) sophisticated robot. These findings suggest that people spontaneously conceptualise robots' autonomy via humanlike agency, and consequently, consider them blameworthy agents.