Hume’s Guillotine in Designing Ethically Intelligent Technologies
Saariluoma, P. (2020). Hume’s Guillotine in Designing Ethically Intelligent Technologies. In T. Ahram, R. Taiar, V. Gremeaux-Bader, & K. Aminian (Eds.), Human Interaction, Emerging Technologies and Future Applications II : Proceedings of the 2nd International Conference on Human Interaction and Emerging Technologies : Future Applications (IHIET – AI 2020) (pp. 10-15). Springer. Advances in Intelligent Systems and Computing, 1152. https://doi.org/10.1007/978-3-030-44267-5_2
Published in: Advances in Intelligent Systems and Computing
Authors: Saariluoma, P.
Date: 2020
Copyright: © Springer Nature Switzerland AG 2020
Intelligent machines can follow ethical rules in their behaviour. However, it is less clear whether intelligent systems can also create new ethical principles. The former position can be called weak ethical AI and the latter strong ethical AI. Hume’s guillotine, which claims that one cannot derive values from facts, appears to be a fundamental obstacle to strong ethical AI. Analysing human ethical information processing clarifies whether strong ethical AI is possible. Human ethical information processing begins with positive or negative emotions associated with situations. Situations can be seen as consequences of actions, and for this reason people can define rules about the acceptability of typical actions. Finally, socio-ethical discourse creates general ethical rules. Intelligent systems can provide important support in ethical processes, and thus the difference between weak and strong ethical AI is polar.
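The abstract's distinction between weak and strong ethical AI can be illustrated with a brief sketch. The Python snippet below is a hypothetical illustration, not the paper's method: the names (Rule, RULES, evaluate_action) and the example actions are assumptions. A weak ethical AI only applies rules that humans have already created through the emotion, consequence, and discourse process described above; faced with an action for which no human-given rule exists, it cannot derive a new value from the facts of the situation, which is precisely the step Hume's guillotine blocks.

```python
# Hypothetical sketch (not from the paper): a "weak ethical AI" that can only
# apply ethical rules supplied by humans. Rule names and example actions are
# assumptions made for illustration.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Rule:
    """A human-given ethical rule for one type of action."""
    action_type: str
    acceptable: bool
    rationale: str  # the human, socio-ethical grounding of the rule


# The rules originate in human ethical information processing (emotional
# reactions to the consequences of actions, generalised in socio-ethical
# discourse); the machine merely stores and applies them.
RULES = {
    "share_private_data_without_consent": Rule(
        "share_private_data_without_consent", False,
        "negative emotional reaction to violated autonomy"),
    "warn_user_of_risk": Rule(
        "warn_user_of_risk", True,
        "positive evaluation of protecting the user from harm"),
}


def evaluate_action(action_type: str) -> Optional[bool]:
    """Weak ethical AI: look up a human-given rule for the action.

    Returns None when no rule exists. The system cannot close that gap by
    deriving a new value from the facts of the situation (Hume's guillotine);
    only human socio-ethical discourse can supply the missing rule.
    """
    rule = RULES.get(action_type)
    return rule.acceptable if rule else None


if __name__ == "__main__":
    print(evaluate_action("warn_user_of_risk"))                   # True
    print(evaluate_action("share_private_data_without_consent"))  # False
    print(evaluate_action("deploy_untested_model"))               # None
```

Running the sketch prints True, False, and None; the None case marks the point where human socio-ethical discourse, rather than the machine, would have to supply the missing rule.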
Publisher: Springer
Parent publication ISBN: 978-3-030-44266-8
Conference: International Conference on Human Interaction and Emerging Technologies
Is part of publication: Human Interaction, Emerging Technologies and Future Applications II : Proceedings of the 2nd International Conference on Human Interaction and Emerging Technologies : Future Applications (IHIET – AI 2020)
ISSN: 2194-5357
Publication in research information system: https://converis.jyu.fi/converis/portal/detail/Publication/35165798
Additional information about funding: No funding information.
Related items
Showing items with similar title or keywords.
- AI Ethics : Critical Reflections on Embedding Ethical Frameworks in AI Technology
  Salo-Pöntinen, Henrikki (Springer, 2021). Embedding ethical frameworks in artificial intelligence (AI) technologies has been a popular topic for academic research for the past decade [1, 2, 3, 4, 5, 6, 7]. The approaches of the studies differ in how AI technology, ...
- ECCOLA : a method for implementing ethically aligned AI systems
  Vakkuri, Ville; Kemell, Kai-Kristian; Jantunen, Marianna; Halme, Erika; Abrahamsson, Pekka (Elsevier, 2021). Artificial Intelligence (AI) systems are becoming increasingly widespread and exert a growing influence on society at large. The growing impact of these systems has also highlighted potential issues that may arise from ...
- Governance of Ethical and Trustworthy AI Systems : Research Gaps in the ECCOLA Method
  Agbese, Mamia; Alanen, Hanna-Kaisa; Antikainen, Jani; Halme, Erika; Isomäki, Hannakaisa; Jantunen, Marianna; Kemell, Kai-Kristian; Rousi, Rebekah; Vainio-Pekka, Heidi; Vakkuri, Ville (IEEE, 2021). Advances in machine learning (ML) technologies have greatly improved Artificial Intelligence (AI) systems. As a result, AI systems have become ubiquitous, with their application prevalent in virtually all sectors. However, ...
- Types of mimetics for the design of intelligent technologies
  Karvonen, Antero; Kujala, Tuomo; Saariluoma, Pertti (Springer International Publishing, 2020). Mimetic design means using a source in the natural or artificial worlds as an inspiration for technological solutions. It is based around the abstraction of the relevant operating principles in a source domain. This means ...
- Implementing artificial intelligence ethics in trustworthy systems development : extending ECCOLA to cover information governance principles
  Agbese, Mamia (2021). This Master's thesis assesses how to extend a higher-level developmental method for trustworthy artificial intelligent systems, ECCOLA, by evaluating it with Information Governance principles. Artificial intelligent systems ...