Domesticating Artificial Intelligence
Abstract: For their deployment in human societies to be safe, AI agents need to be aligned with value-laden, cooperative human life. One way of solving this “problem of value alignment” is to build moral machines. I argue that the goal of building moral machines aims at the wrong kind of ideal, and that we instead need an approach to value alignment that takes seriously the categorical differences in cognitive and moral capabilities between human and AI agents, a condition I call deep agential diversity. Domestication is the answer to a similarly structured problem: namely, how to integrate nonhuman animals that lack moral agency safely into human society and align their behavior with human values. Just like nonhuman animals, AI agents lack genuine moral agency; and just like nonhuman animals, we might find ways to train them to nevertheless assist us, and to live and work among us – to “domesticate” them, in other words. I claim that the domestication approach does well in explaining many of our intuitions and worries about deploying AI agents in our social practices.
- Location: Deutsche Nationalbibliothek Frankfurt am Main
- Extent: Online resource
- Language: English
- Bibliographic citation: Moral Philosophy and Politics 9(2) (2022), 219–237 (19 pages)
- Creator: Müller, Luise
- DOI: 10.1515/mopp-2020-0054
- URN: urn:nbn:de:101:1-2022100614051842357176
- Rights: Open Access; access to the object is unrestricted.
- Last update: 15.08.2025, 7:30 AM CEST
- Data provider: Deutsche Nationalbibliothek
- Associated: Müller, Luise