"Making moral machines: why we need artificial moral agents"
by Formosa, Paul; Ryan, Malcolm (2021)
Abstract
As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis of the relevant arguments for and against creating AMAs, and we argue that, all things considered, we have strong reasons to continue to responsibly develop AMAs. The key contributions of this paper are threefold. First, to provide the first comprehensive response to the important arguments made against AMAs by van Wynsberghe and Robbins (in “Critiquing the Reasons for Making Artificial Moral Agents”, Science and Engineering Ethics 25, 2019) and to introduce several novel lines of argument in the process. Second, to collate and thematise for the first time the key arguments for and against AMAs in a single paper. Third, to recast the debate away from blanket arguments for or against AMAs in general, towards a more nuanced discussion about what sort of AMAs, in what sort of contexts, and for what sort of purposes are morally appropriate.
Keywords
No Keywords
Themes
Meaningful Work, Automation
Links to Reference
- https://doi.org/10.1007/s00146-020-01089-6
- https://link.springer.com/article/10.1007/s00146-020-01089-6
- https://philpapers.org/archive/FORMMM.pdf