Comparison of feature importance measures as explanations for classification models
Saarela, M., & Jauhiainen, S. (2021). Comparison of feature importance measures as explanations for classification models. SN Applied Sciences, 3(2), Article 272. https://doi.org/10.1007/s42452-021-04148-9
Published in series: SN Applied Sciences
Date: 2021
Copyright: © 2021 the Authors
Explainable artificial intelligence is an emerging research direction that helps users and developers of machine learning models understand why models behave the way they do. The most popular explanation technique is feature importance. However, there are several approaches to measuring feature importance, most notably global and local ones. In this study, we compare different feature importance measures using both linear (logistic regression with L1 penalization) and non-linear (random forest) methods, with local interpretable model-agnostic explanations (LIME) applied on top of them. These methods are applied to two datasets from the medical domain: the openly available breast cancer data from the UCI Archive and recently collected running injury data. Our results show that the most important features differ depending on the technique. We argue that a combination of several explanation techniques could provide more reliable and trustworthy results. In particular, local explanations should be used in the most critical cases, such as false negatives.
...
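The comparison the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it fits an L1-penalized logistic regression and a random forest on the UCI breast cancer data (as bundled with scikit-learn) and contrasts the top-ranked features from each global importance measure. All hyperparameter values here are illustrative assumptions.

```python
# Minimal sketch: compare global feature importance rankings from a linear
# (L1 logistic regression) and a non-linear (random forest) model on the
# UCI breast cancer data. Hyperparameters are illustrative, not the paper's.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
# Standardize so that coefficient magnitudes are comparable across features.
X = StandardScaler().fit_transform(data.data)
y = data.target

# Linear model with L1 penalization: importance = |coefficient|.
lr = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)
lr_importance = np.abs(lr.coef_[0])

# Non-linear model: impurity-based feature importances (sum to 1).
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
rf_importance = rf.feature_importances_

# Rank features by each measure and inspect the top five of each.
lr_top = [data.feature_names[i] for i in np.argsort(lr_importance)[::-1][:5]]
rf_top = [data.feature_names[i] for i in np.argsort(rf_importance)[::-1][:5]]
print("L1 logistic regression:", lr_top)
print("Random forest:         ", rf_top)
```

The two rankings typically disagree on at least some of the top features, which is exactly the kind of discrepancy the paper's argument for combining several explanation techniques rests on. Local explanations (e.g., LIME applied to individual predictions such as false negatives) would be layered on top of this global view.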
Publisher: Springer
ISSN: 2523-3963
Publication in the research information system:
https://converis.jyu.fi/converis/portal/detail/Publication/51361793
Funder(s): Academy of Finland
Funding program(s): Profiling, SA
Additional information on funding: This research was supported by the Academy of Finland (Grant No. 311877) and is related to the thematic research area DEMO (Decision Analytics Utilizing Causal Models and Multiobjective Optimization, jyu.fi/demo) of the University of Jyväskylä, Finland.
Similar items
Showing items with a similar title or subject keywords.
- Robustness, Stability, and Fidelity of Explanations for a Deep Skin Cancer Classification Model
  Saarela, Mirka; Georgieva, Lilia (MDPI AG, 2022): Skin cancer is one of the most prevalent of all cancers. Because of its being widespread and externally observable, there is a potential that machine learning models integrated into artificial intelligence systems will ...
- Comparing the forecasting performance of logistic regression and random forest models in criminal recidivism
  Aaltonen, Olli-Pekka (2016): In recent years, models predicting recidivism have been developed in the criminal sanctions field (Tyni, 2015). They are typically based on register-based measures of, among other things, the offender's sex, age, and criminal history ...
- Unstable feature relevance in classification tasks
  Skrypnyk, Iryna (University of Jyväskylä, 2011)
- Explainable AI for Industry 4.0 : Semantic Representation of Deep Learning Models
  Terziyan, Vagan; Vitko, Oleksandra (Elsevier, 2022): Artificial Intelligence is an important asset of Industry 4.0. Current discoveries within machine learning and particularly in deep learning enable qualitative change within the industrial processes, applications, systems ...
- Towards explainable interactive multiobjective optimization : R-XIMO
  Misitano, Giovanni; Afsar, Bekir; Lárraga, Giomara; Miettinen, Kaisa (Springer Science and Business Media LLC, 2022): In interactive multiobjective optimization methods, the preferences of a decision maker are incorporated in a solution process to find solutions of interest for problems with multiple conflicting objectives. Since multiple ...
Unless otherwise stated, publicly available JYX metadata (excluding abstracts) may be freely reused under the CC0 license.