Recent Applications of Explainable AI (XAI) : A Systematic Literature Review
Saarela, M., & Podgorelec, V. (2024). Recent Applications of Explainable AI (XAI) : A Systematic Literature Review. Applied Sciences, 14(19), Article 8884. https://doi.org/10.3390/app14198884
Published in
Applied Sciences
Date
2024
Copyright
© 2024 by the authors. Licensee MDPI, Basel, Switzerland
This systematic literature review employs the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to investigate recent applications of explainable AI (XAI) over the past three years. From an initial pool of 664 articles identified through the Web of Science database, 512 peer-reviewed journal articles met the inclusion criteria—namely, being recent, high-quality XAI application articles published in English—and were analyzed in detail. Both qualitative and quantitative techniques were used to analyze the identified articles: qualitatively by summarizing the characteristics of the included studies based on predefined codes, and quantitatively through statistical analysis of the data. These articles were categorized according to their application domains, techniques, and evaluation methods.

Health-related applications were particularly prevalent, with a strong focus on cancer diagnosis, COVID-19 management, and medical imaging. Other significant areas of application included environmental and agricultural management, industrial optimization, cybersecurity, finance, transportation, and entertainment. Additionally, emerging applications in law, education, and social care highlight XAI's expanding impact.

The review reveals a predominant use of local explanation methods, particularly SHAP and LIME, with SHAP being favored for its stability and mathematical guarantees. However, a critical gap in the evaluation of XAI results is identified, as most studies rely on anecdotal evidence or expert opinion rather than robust quantitative metrics. This underscores the urgent need for standardized evaluation frameworks to ensure the reliability and effectiveness of XAI applications. Future research should focus on developing comprehensive evaluation standards and improving the interpretability and stability of explanations.
These advancements are essential for addressing the diverse demands of various application domains while ensuring trust and transparency in AI systems.
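The mathematical guarantees behind SHAP come from Shapley values, which are the unique attribution scheme satisfying efficiency (attributions sum to the difference between the model's output and a baseline), symmetry, and additivity. A minimal sketch of exact Shapley-value attribution by coalition enumeration, independent of the SHAP library; the model, input, and baseline below are hypothetical:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for f at input x.

    'Absent' features are replaced by their baseline values; feasible
    only for small n, since all 2^n coalitions are enumerated.
    """
    n = len(x)

    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight |S|!(n-|S|-1)!/n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (value(set(S) | {i}) - value(set(S)))
        phis.append(phi)
    return phis

# Hypothetical linear model: for linear models, Shapley values
# reduce to w_i * (x_i - baseline_i).
weights = [2.0, -1.0, 0.5]
f = lambda z: sum(wi * zi for wi, zi in zip(weights, z)) + 3.0
x = [1.0, 2.0, 4.0]
baseline = [0.0, 0.0, 0.0]
print(shapley_values(f, x, baseline))  # ≈ [2.0, -2.0, 2.0]
```

The efficiency property can be checked directly: the attributions sum to `f(x) - f(baseline)`, which is what distinguishes Shapley-based explanations from heuristic feature-importance scores.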
...
Publisher
MDPI
ISSN
2076-3417
Keywords
explainable artificial intelligence; applications; interpretable machine learning; convolutional neural network; deep learning; post-hoc explanations; model-agnostic explanations; application software; artificial intelligence; neural networks; software development; machine learning; evaluation methods; systematic literature reviews
Publication in the research information system
https://converis.jyu.fi/converis/portal/detail/Publication/243309382
Funder(s)
Academy of Finland (Suomen Akatemia)
Funding programme(s)
Academy Research Fellow, AoF
Additional funding information
The work by M.S. was supported by the K.H. Renlund Foundation and the Academy of Finland (project no. 356314). The work by V.P. was supported by the Slovenian Research Agency (Research Core Funding No. P2-0057).
Similar items
Showing items with a similar title or keywords.
Artificial Intelligence for Cybersecurity : A Systematic Mapping of Literature
Wiafe, Isaac; Koranteng, Felix N.; Obeng, Emmanuel N.; Assyne, Nana; Wiafe, Abigail; Gulliver, Stephen R. (IEEE, 2020) Due to the ever-increasing complexities in cybercrimes, there is the need for cybersecurity methods to be more robust and intelligent. This will make defense mechanisms capable of making real-time decisions that can ...
Robustness, Stability, and Fidelity of Explanations for a Deep Skin Cancer Classification Model
Saarela, Mirka; Georgieva, Lilia (MDPI AG, 2022) Skin cancer is one of the most prevalent of all cancers. Because it is widespread and externally observable, there is a potential that machine learning models integrated into artificial intelligence systems will ...
Tracking a rat in an open field experiment with a deep learning-based model
Kantola, Lauri (2021) New artificial neural network methods have changed the way animals are tracked in neuroscience and psychology experiments. The purpose of this thesis is to test the state-of-the-art animal tracking method DeepLabCut and ...
The Impact of Regularization on Convolutional Neural Networks
Zeeshan, Khaula (2018) Deep learning has recently become the most popular machine learning method. The convolutional neural network is one of the most popular deep learning architectures for complex problems such as image ...
Explainability in Educational Data Mining and Learning Analytics : An Umbrella Review
Gunasekara, Sachini; Saarela, Mirka (International Educational Data Mining Society, 2024)This paper presents an umbrella review synthesizing the findings of explainability studies within the EDM and LA domains. By systematically reviewing existing reviews and adhering to the PRISMA guidelines, we identified ...
Unless otherwise stated, publicly available JYX metadata (excluding abstracts) may be freely reused under the CC0 license.