Hume’s Guillotine in Designing Ethically Intelligent Technologies
Saariluoma, P. (2020). Hume’s Guillotine in Designing Ethically Intelligent Technologies. In T. Ahram, R. Taiar, V. Gremeaux-Bader, & K. Aminian (Eds.), Human Interaction, Emerging Technologies and Future Applications II : Proceedings of the 2nd International Conference on Human Interaction and Emerging Technologies : Future Applications (IHIET – AI 2020) (pp. 10-15). Springer. Advances in Intelligent Systems and Computing, 1152. https://doi.org/10.1007/978-3-030-44267-5_2
Published in: Advances in Intelligent Systems and Computing
© Springer Nature Switzerland AG 2020
Intelligent machines can follow ethical rules in their behaviour. However, it is less clear whether intelligent systems can also create new ethical principles. The former position can be called weak ethical AI and the latter strong ethical AI. Hume's guillotine, which claims that one cannot derive values from facts, appears to be a fundamental obstacle to strong ethical AI. An analysis of human ethical information processes clarifies whether strong ethical AI is possible. Human ethical information processing begins with positive or negative emotions associated with situations. Situations can be seen as consequences of actions, and for this reason people can define rules about the acceptability of typical actions. Finally, socio-ethical discourse creates general ethical rules. Intelligent systems can provide important support in ethical processes, and thus the difference between weak and strong ethical AI is polar.
Parent publication ISBN: 978-3-030-44266-8
Conference: International Conference on Human Interaction and Emerging Technologies
Is part of publication: Human Interaction, Emerging Technologies and Future Applications II : Proceedings of the 2nd International Conference on Human Interaction and Emerging Technologies : Future Applications (IHIET – AI 2020)
Additional information about funding: No funding information.
Showing items with similar title or keywords.
Salo-Pöntinen, Henrikki (Springer, 2021): Embedding ethical frameworks in artificial intelligence (AI) technologies has been a popular topic for academic research for the past decade [1, 2, 3, 4, 5, 6, 7]. The approaches of the studies differ in how AI technology, ...
Agbese, Mamia; Alanen, Hanna-Kaisa; Antikainen, Jani; Halme, Erika; Isomäki, Hannakaisa; Jantunen, Marianna; Kemell, Kai-Kristian; Rousi, Rebekah; Vainio-Pekka, Heidi; Vakkuri, Ville (IEEE, 2021): Advances in machine learning (ML) technologies have greatly improved Artificial Intelligence (AI) systems. As a result, AI systems have become ubiquitous, with their application prevalent in virtually all sectors. However, ...
Vakkuri, Ville; Kemell, Kai-Kristian; Jantunen, Marianna; Halme, Erika; Abrahamsson, Pekka (Elsevier, 2021): Artificial Intelligence (AI) systems are becoming increasingly widespread and exert a growing influence on society at large. The growing impact of these systems has also highlighted potential issues that may arise from ...
Agbese, Mamia (2021): Implementing artificial intelligence ethics in trustworthy systems development : extending ECCOLA to cover information governance principles. This Master's thesis assesses how to extend a higher-level developmental method for trustworthy artificial intelligent systems, ECCOLA, by evaluating it with Information Governance principles. Artificial intelligent systems ...
Karvonen, Antero; Kujala, Tuomo; Saariluoma, Pertti (Springer International Publishing, 2020): Mimetic design means using a source in the natural or artificial worlds as an inspiration for technological solutions. It is based around the abstraction of the relevant operating principles in a source domain. This means ...