{"id":1316,"date":"2026-04-09T08:47:45","date_gmt":"2026-04-09T08:47:45","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/1316\/"},"modified":"2026-04-09T08:47:45","modified_gmt":"2026-04-09T08:47:45","slug":"explainable-ai-needs-formalization-npj-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/1316\/","title":{"rendered":"Explainable AI needs formalization | npj Artificial Intelligence"},"content":{"rendered":"<p class=\"c-article-references__text\" id=\"ref-CR1\">European Commission. Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. <a href=\"https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/?uri=celex%3A52021PC0206\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/?uri=celex%3A52021PC0206\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/?uri=celex%3A52021PC0206<\/a> (2021).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR2\">Bishop, C. M. &amp; Nasrabadi, N. M. Pattern recognition and machine learning, vol. 4 (Springer, 2006).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR3\">Tjoa, E. &amp; Guan, C. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI. IEEE Transactions on Neural Networks and Learning Systems Vol. 32 4793-4813 <a href=\"https:\/\/doi.org\/10.1109\/TNNLS.2020.3027314\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"10.1109\/TNNLS.2020.3027314\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/doi.org\/10.1109\/TNNLS.2020.3027314<\/a> (2021).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR4\">Minh, D., Wang, H. 
X., Li, Y. F. &amp; Nguyen, T. N. Explainable artificial intelligence: a comprehensive review. Artificial Intelligence Review 55, 3503\u20133568 <a href=\"https:\/\/doi.org\/10.1007\/s10462-021-10088-y\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"10.1007\/s10462-021-10088-y\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/doi.org\/10.1007\/s10462-021-10088-y<\/a> (2022).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR5\">Miller, T. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 267, 1\u201338 https:\/\/doi.org\/10.1016\/j.artint.2018.07.007 (2019).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR6\">Saporta, A. et al. Benchmarking saliency methods for chest X-ray interpretation. Nat. Mach. Intell. 4, 867\u2013878 https:\/\/doi.org\/10.1038\/s42256-022-00536-x (2022).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR7\">Ribeiro, M. T., Singh, S. &amp; Guestrin, C. \u201cWhy should I trust you?\u201d Explaining the predictions of any classifier. In Proc.
22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135\u20131144 (ACM, 2016).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR8\">Lapuschkin, S. et al. Unmasking Clever Hans predictors and assessing what machines really learn. Nat. Commun. 10, 1096 https:\/\/doi.org\/10.1038\/s41467-019-08987-4 (2019).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR9\">Anders, C. J. et al. Finding and removing Clever Hans: using explanation methods to debug and improve deep models. Inf. Fusion 77, 261\u2013295 https:\/\/doi.org\/10.1016\/j.inffus.2021.07.015 (2022).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR10\">Wang, Z. J. et al. Interpretability, then what? Editing machine learning models to reflect human knowledge and values. In Proc. 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 4132\u20134142 (2022).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR11\">Samek, W. &amp; M\u00fcller, K.-R. Towards explainable artificial intelligence. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Vol. 11700, 5\u201322 (Springer Nature, 2019).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR12\">Jim\u00e9nez-Luna, J., Grisoni, F. &amp; Schneider, G. Drug discovery with explainable artificial intelligence. Nat. Mach. Intell.
2, 573\u2013584 https:\/\/doi.org\/10.1038\/s42256-020-00236-4 (2020).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR13\">Tideman, L. E. et al. Automated biomarker candidate discovery in imaging mass spectrometry data through spatially localized Shapley additive explanations. Anal. Chim. Acta 1177, 338522 https:\/\/doi.org\/10.1016\/j.aca.2021.338522 (2021).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR14\">Watson, D. S. Interpretable machine learning for genomics. Hum. Genet.
141, 1499\u20131513 https:\/\/doi.org\/10.1007\/s00439-021-02387-9 (2022).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR15\">Wong, F. et al. Discovery of a structural class of antibiotics with explainable deep learning. Nature 626, 177\u2013185 https:\/\/doi.org\/10.1038\/s41586-023-06887-8 (2024).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR16\">Ustun, B., Spangher, A. &amp; Liu, Y. Actionable recourse in linear classification. In Proc. Conference on Fairness, Accountability, and Transparency, 10\u201319 (ACM, 2019).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR17\">Ates, E., Aksar, B., Leung, V. J. &amp; Coskun, A. K. Counterfactual explanations for multivariate time series. In Proc. International Conference on Applied Artificial Intelligence (ICAPAI), 1\u20138 (IEEE, 2021).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR18\">Wilming, R., Budding, C., M\u00fcller, K.-R. &amp; Haufe, S. Scrutinizing XAI using linear ground-truth data with suppressor variables.
Machine Learning, Special Issue of the ECML PKDD 2022 Journal Track, 1\u201321 (Springer Nature, 2022).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR19\">Wilming, R. et al. GECOBench: a gender-controlled text dataset and benchmark for quantifying biases in explanations. Front. Artif. Intell. <a href=\"https:\/\/arxiv.org\/abs\/2406.11547\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"https:\/\/arxiv.org\/abs\/2406.11547\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/arxiv.org\/abs\/2406.11547<\/a> (in the press, 2024).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR20\">Haufe, S. et al. On the interpretation of weight vectors of linear models in multivariate neuroimaging. Neuroimage 87, 96\u2013110 https:\/\/doi.org\/10.1016\/j.neuroimage.2013.10.067 (2014).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR21\">Kindermans, P.-J. et al. Learning how to explain neural networks: PatternNet and PatternAttribution. In 6th International Conference on Learning Representations (ICLR, 2018).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR22\">Wilming, R., Kieslich, L., Clark, B. &amp; Haufe, S. Theoretical behavior of XAI methods in the presence of suppressor variables. Proc. 40th Int. Conf. Mach. Learn. 202, 37091\u201337107 (2023).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR23\">Conger, A. J. A revised definition for suppressor variables: a guide to their identification and interpretation. Educ. Psychol. Meas.
34, 35\u201346 https:\/\/doi.org\/10.1177\/001316447403400105 (1974).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR24\">Pearl, J. Causality (Cambridge University Press, 2009).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR25\">Clark, B., Wilming, R. &amp; Haufe, S. XAI-TRIS: non-linear image benchmarks to quantify false positive post-hoc attribution of feature importance. Mach. Learn. 113, 6871\u20136910 (2024).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR26\">Baehrens, D. et al. How to explain individual classification decisions. J. Mach. Learn. Res. 11, 1803\u20131831 (2010).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR27\">Bach, S. et al. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation.
PLoS ONE 10, 1\u201346 https:\/\/doi.org\/10.1371\/journal.pone.0130140 (2015).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR28\">Montavon, G., Bach, S., Binder, A., Samek, W. &amp; M\u00fcller, K.-R. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognit. 65, 211\u2013222 https:\/\/doi.org\/10.1016\/j.patcog.2016.11.008 (2017).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR29\">Shapley, L. S. A value for n-person games. Contrib. Theory Games 2, 307\u2013317 (1953).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR30\">Lundberg, S. M. &amp; Lee, S.-I. A Unified Approach to Interpreting Model Predictions. In Guyon, I. et al. (eds.) Advances in Neural Information Processing Systems 30, Vol. 30, 4765\u20134774 (Curran Associates, Inc., 2017).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR31\">Aas, K., Jullum, M. &amp; L\u00f8land, A. Explaining individual predictions when features are dependent: more accurate approximations to Shapley values. Artif. Intell. 298, 103502 https:\/\/doi.org\/10.1016\/j.artint.2021.103502 (2021).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR32\">Sundararajan, M., Taly, A. &amp; Yan, Q. Axiomatic attribution for deep networks. In ICML, Vol. 70 of Proc. Machine Learning Research (eds. Precup, D. &amp; Teh, Y. W.) 3319\u20133328 (PMLR, 2017).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR33\">Wachter, S., Mittelstadt, B. &amp; Russell, C.
Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. JL Tech. 31, 841 (2017).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR34\">Guidotti, R. et al. A survey of methods for explaining black box models. ACM Comput. Surv. 51, 1\u201342 https:\/\/doi.org\/10.1145\/3236009 (2019).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR35\">Jacovi, A. &amp; Goldberg, Y. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In Proc. 58th Annual Meeting of the Association for Computational Linguistics, 4198\u20134205 (Association for Computational Linguistics, Online, 2020).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR36\">Weichwald, S. et al. Causal interpretation rules for encoding and decoding models in neuroimaging.
Neuroimage 110, 48\u201359 (2015).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1016\/j.neuroimage.2015.01.036\" data-track-item_id=\"10.1016\/j.neuroimage.2015.01.036\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1016%2Fj.neuroimage.2015.01.036\" aria-label=\"Article reference 36\" data-doi=\"10.1016\/j.neuroimage.2015.01.036\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 36\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Causal%20interpretation%20rules%20for%20encoding%20and%20decoding%20models%20in%20neuroimaging&amp;journal=Neuroimage&amp;doi=10.1016%2Fj.neuroimage.2015.01.036&amp;volume=110&amp;pages=48-59&amp;publication_year=2015&amp;author=Weichwald%2CS\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR37\">Karimi, A.-H., Sch\u00f6lkopf, B. &amp; Valera, I. Algorithmic recourse: from counterfactual explanations to interventions. In Proc. ACM Conference on Fairness, Accountability, and Transparency, 353\u2013362 (ACM, 2021).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR38\">Caruana, R. et al. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In Proc. 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1721\u20131730 (ACM, 2015).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR39\">Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 
1, 206\u2013215 (2019).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1038\/s42256-019-0048-x\" data-track-item_id=\"10.1038\/s42256-019-0048-x\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1038%2Fs42256-019-0048-x\" aria-label=\"Article reference 39\" data-doi=\"10.1038\/s42256-019-0048-x\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 39\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Stop%20explaining%20black%20box%20machine%20learning%20models%20for%20high%20stakes%20decisions%20and%20use%20interpretable%20models%20instead&amp;journal=Nat.%20Mach.%20Intell.&amp;doi=10.1038%2Fs42256-019-0048-x&amp;volume=1&amp;pages=206-215&amp;publication_year=2019&amp;author=Rudin%2CC\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR40\">Rai, A. Explainable AI: from black box to glass box. J. Acad. Mark. Sci. 
48, 137\u2013141 (2020).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"noopener nofollow\" data-track-label=\"10.1007\/s11747-019-00710-5\" data-track-item_id=\"10.1007\/s11747-019-00710-5\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/link.springer.com\/doi\/10.1007\/s11747-019-00710-5\" aria-label=\"Article reference 40\" data-doi=\"10.1007\/s11747-019-00710-5\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 40\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Explainable%20AI%3A%20Ffrom%20black%20box%20to%20glass%20box&amp;journal=J.%20Acad.%20Mark.%20Sci.&amp;doi=10.1007%2Fs11747-019-00710-5&amp;volume=48&amp;pages=137-141&amp;publication_year=2020&amp;author=Rai%2CA\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR41\">Clark, B. et al. Correcting misinterpretations of additive models. in Proc. 39th Annual Conference on Neural Information Processing Systems (NeurIPS, 2025).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR42\">Shmueli, G. To explain or to predict? Stat. Sci. 25, 289\u2013310 (2010).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR43\">Del Giudice, M. The prediction-explanation fallacy: a pervasive problem in scientific applications of machine learning. 
Methodology 20, 22\u201346 (2024).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.5964\/meth.11235\" data-track-item_id=\"10.5964\/meth.11235\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.5964%2Fmeth.11235\" aria-label=\"Article reference 43\" data-doi=\"10.5964\/meth.11235\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 43\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=The%20prediction-explanation%20fallacy%3A%20a%20pervasive%20problem%20in%20scientific%20applications%20of%20machine%20learning&amp;journal=Methodology&amp;doi=10.5964%2Fmeth.11235&amp;volume=20&amp;pages=22-46&amp;publication_year=2024&amp;author=Giudice%2CM\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR44\">Doshi-Velez, F. &amp; Kim, B. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv: <a href=\"http:\/\/arxiv.org\/abs\/1702.08608\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"http:\/\/arxiv.org\/abs\/1702.08608\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/arxiv.org\/abs\/1702.08608<\/a> (2017).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR45\">Hedstr\u00f6m, A. et al. Quantus: An explainable AI toolkit for responsible evaluation of neural network explanations and beyond. J. Mach. Learn. Res. 
24, 1\u201311 (2023).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 45\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Quantus%3A%20An%20explainable%20ai%20toolkit%20for%20responsible%20evaluation%20of%20neural%20network%20explanations%20and%20beyond&amp;journal=J.%20Mach.%20Learn.%20Res.&amp;volume=24&amp;pages=1-11&amp;publication_year=2023&amp;author=Hedstr%C3%B6m%2CA\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR46\">Breiman, L. Random forests. Mach. Learn. 45, 5\u201332 (2001).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1023\/A:1010933404324\" data-track-item_id=\"10.1023\/A:1010933404324\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1023%2FA%3A1010933404324\" aria-label=\"Article reference 46\" data-doi=\"10.1023\/A:1010933404324\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 46\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Random%20forests&amp;journal=Mach.%20Learn.&amp;doi=10.1023%2FA%3A1010933404324&amp;volume=45&amp;pages=5-32&amp;publication_year=2001&amp;author=Breiman%2CL\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR47\">Meinshausen, N. 
&amp; B\u00fchlmann, P. Stability selection. J. R. Stat. Soc. Ser. B Stat. Methodol. 72, 417\u2013473 (2010).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1111\/j.1467-9868.2010.00740.x\" data-track-item_id=\"10.1111\/j.1467-9868.2010.00740.x\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1111%2Fj.1467-9868.2010.00740.x\" aria-label=\"Article reference 47\" data-doi=\"10.1111\/j.1467-9868.2010.00740.x\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"mathscinet reference\" data-track-action=\"mathscinet reference\" href=\"http:\/\/www.ams.org\/mathscinet-getitem?mr=2758523\" aria-label=\"MathSciNet reference 47\" target=\"_blank\">MathSciNet<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 47\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Stability%20selection&amp;journal=J.%20R.%20Stat.%20Soc.%20Ser.%20B%20Stat.%20Methodol.&amp;doi=10.1111%2Fj.1467-9868.2010.00740.x&amp;volume=72&amp;pages=417-473&amp;publication_year=2010&amp;author=Meinshausen%2CN&amp;author=B%C3%BChlmann%2CP\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR48\">Samek, W., Binder, A., Montavon, G., Lapuschkin, S. &amp; M\u00fcller, K. Evaluating the visualization of what a deep neural network has learned. IEEE Trans. Neural Netw. Learn. Syst. 
28, 2660\u20132673 (2017).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1109\/TNNLS.2016.2599820\" data-track-item_id=\"10.1109\/TNNLS.2016.2599820\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1109%2FTNNLS.2016.2599820\" aria-label=\"Article reference 48\" data-doi=\"10.1109\/TNNLS.2016.2599820\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"mathscinet reference\" data-track-action=\"mathscinet reference\" href=\"http:\/\/www.ams.org\/mathscinet-getitem?mr=3721782\" aria-label=\"MathSciNet reference 48\" target=\"_blank\">MathSciNet<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 48\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Evaluating%20the%20visualization%20of%20what%20a%20deep%20neural%20network%20has%20learned&amp;journal=IEEE%20Trans.%20Neural%20Netw.%20Learn.%20Syst.&amp;doi=10.1109%2FTNNLS.2016.2599820&amp;volume=28&amp;pages=2660-2673&amp;publication_year=2017&amp;author=Samek%2CW&amp;author=Binder%2CA&amp;author=Montavon%2CG&amp;author=Lapuschkin%2CS&amp;author=M%C3%BCller%2CK\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR49\">Hooker, S., Erhan, D., Kindermans, P.-J. &amp; Kim, B. A benchmark for interpretability methods in deep neural networks. in Advances in Neural Information Processing Systems, Vol. 32, (eds. Wallach, H. et al.) 
9737\u20139748 (Curran Associates, Inc., 2019).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR50\">Rong, Y., Leemann, T., Borisov, V., Kasneci, G., &amp; Kasneci, E. A Consistent and Efficient Evaluation Strategy for Attribution Methods. In International Conference on Machine Learning 18770\u201318795 (PMLR, 2022).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR51\">Bl\u00fccher, S., Vielhaben, J. &amp; Strodthoff, N. Preddiff: Explanations and interactions from conditional expectations. Artif. Intell. 312, 103774 (2022).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1016\/j.artint.2022.103774\" data-track-item_id=\"10.1016\/j.artint.2022.103774\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1016%2Fj.artint.2022.103774\" aria-label=\"Article reference 51\" data-doi=\"10.1016\/j.artint.2022.103774\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"mathscinet reference\" data-track-action=\"mathscinet reference\" href=\"http:\/\/www.ams.org\/mathscinet-getitem?mr=4475906\" aria-label=\"MathSciNet reference 51\" target=\"_blank\">MathSciNet<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 51\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Preddiff%3A%20Explanations%20and%20interactions%20from%20conditional%20expectations&amp;journal=Artif.%20Intell.&amp;doi=10.1016%2Fj.artint.2022.103774&amp;volume=312&amp;publication_year=2022&amp;author=Bl%C3%BCcher%2CS&amp;author=Vielhaben%2CJ&amp;author=Strodthoff%2CN\" 
target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR52\">Adebayo, J. et al. Sanity checks for saliency maps. in Proc. 32nd International Conference on Neural Information Processing Systems, Vol. 31, 9525\u20139536 (Curran Associates Inc., 2018).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR53\">Holzinger, A., Langs, G., Denk, H., Zatloukal, K. &amp; M\u00fcller, H. Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 9, e1312 (2019).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1002\/widm.1312\" data-track-item_id=\"10.1002\/widm.1312\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1002%2Fwidm.1312\" aria-label=\"Article reference 53\" data-doi=\"10.1002\/widm.1312\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 53\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Causability%20and%20explainability%20of%20artificial%20intelligence%20in%20medicine&amp;journal=Wiley%20Interdiscip.%20Rev.%20Data%20Min.%20Knowl.%20Discov.&amp;doi=10.1002%2Fwidm.1312&amp;volume=9&amp;publication_year=2019&amp;author=Holzinger%2CA&amp;author=Langs%2CG&amp;author=Denk%2CH&amp;author=Zatloukal%2CK&amp;author=M%C3%BCller%2CH\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR54\">Biessmann, F. &amp; Refiano, D. 
Quality metrics for transparent machine learning with and without humans in the loop are not correlated. in Proc. ICML Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI. <a href=\"http:\/\/arxiv.org\/abs\/2107.02033\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"http:\/\/arxiv.org\/abs\/2107.02033\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/arxiv.org\/abs\/2107.02033<\/a> (2021).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR55\">Jesus, S. et al. How can I choose an explainer? An application-grounded evaluation of post-hoc explanations. in Proc. ACM Conference on Fairness, Accountability, and Transparency, 805\u2013815 (ACM, 2021).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR56\">Bu\u00e7inca, Z., Lin, P., Gajos, K. Z. &amp; Glassman, E. L. Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems. In Proc. 25th International Conference on Intelligent User Interfaces, 454\u2013464 (ACM, 2020).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR57\">Bansal, G. et al. Does the whole exceed its parts? The effect of AI explanations on complementary team performance. in Proc. CHI Conference on Human Factors in Computing Systems, 1\u201316 (ACM, 2021).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR58\">Trout, J. D. Scientific explanation and the sense of understanding. Philos. Sci. 
69, 212\u2013233 (2002).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1086\/341050\" data-track-item_id=\"10.1086\/341050\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1086%2F341050\" aria-label=\"Article reference 58\" data-doi=\"10.1086\/341050\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 58\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Scientific%20explanation%20and%20the%20sense%20of%20understanding&amp;journal=Philos.%20Sci.&amp;doi=10.1086%2F341050&amp;volume=69&amp;pages=212-233&amp;publication_year=2002&amp;author=Trout%2CJD\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR59\">Oala, L. et al. Machine learning for health: algorithm auditing &amp; quality control. J. Med. Syst. 
45, 1\u20138 (2021).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"noopener nofollow\" data-track-label=\"10.1007\/s10916-021-01783-y\" data-track-item_id=\"10.1007\/s10916-021-01783-y\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/link.springer.com\/doi\/10.1007\/s10916-021-01783-y\" aria-label=\"Article reference 59\" data-doi=\"10.1007\/s10916-021-01783-y\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 59\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Machine%20learning%20for%20health%3A%20algorithm%20auditing%20%26%20quality%20control&amp;journal=J.%20Med.%20Syst.&amp;doi=10.1007%2Fs10916-021-01783-y&amp;volume=45&amp;pages=1-8&amp;publication_year=2021&amp;author=Oala%2CL\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR60\">DIN SPEC 92001-3:2023-04. Artificial Intelligence\u2014Life Cycle Processes and Quality Requirements\u2014Part 3: Explainability (DIN Deutsches Institut f\u00fcr Normung e. V., 2023).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR61\">Sokol, K. &amp; Flach, P. Explainability fact sheets: a framework for systematic assessment of explainable approaches. in Proc. Conference on Fairness, Accountability, and Transparency, 56\u201367 (ACM, 2020).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR62\">Amann, J. et al. To explain or not to explain?\u2013Artificial intelligence explainability in clinical decision support systems. PLoS Digit. 
Health 1, e0000016 (2022).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1371\/journal.pdig.0000016\" data-track-item_id=\"10.1371\/journal.pdig.0000016\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1371%2Fjournal.pdig.0000016\" aria-label=\"Article reference 62\" data-doi=\"10.1371\/journal.pdig.0000016\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 62\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=To%20explain%20or%20not%20to%20explain%3F%E2%80%93Artificial%20intelligence%20explainability%20in%20clinical%20decision%20support%20systems&amp;journal=PLoS%20Digit.%20Health&amp;doi=10.1371%2Fjournal.pdig.0000016&amp;volume=1&amp;publication_year=2022&amp;author=Amann%2CJ\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR63\">Vetter, D. et al. Lessons learned from assessing trustworthy AI in practice. Digit. Soc. 
2, 35 (2023).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"noopener nofollow\" data-track-label=\"10.1007\/s44206-023-00063-1\" data-track-item_id=\"10.1007\/s44206-023-00063-1\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/link.springer.com\/doi\/10.1007\/s44206-023-00063-1\" aria-label=\"Article reference 63\" data-doi=\"10.1007\/s44206-023-00063-1\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 63\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Lessons%20learned%20from%20assessing%20trustworthy%20AI%20in%20practice&amp;journal=Digit.%20Soc.&amp;doi=10.1007%2Fs44206-023-00063-1&amp;volume=2&amp;publication_year=2023&amp;author=Vetter%2CD\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR64\">Ghassemi, M., Oakden-Rayner, L. &amp; Beam, A. L. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit. 
Health 3, e745\u2013e750 (2021).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1016\/S2589-7500(21)00208-9\" data-track-item_id=\"10.1016\/S2589-7500(21)00208-9\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1016%2FS2589-7500%2821%2900208-9\" aria-label=\"Article reference 64\" data-doi=\"10.1016\/S2589-7500(21)00208-9\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 64\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=The%20false%20hope%20of%20current%20approaches%20to%20explainable%20artificial%20intelligence%20in%20health%20care&amp;journal=Lancet%20Digit.%20Health&amp;doi=10.1016%2FS2589-7500%2821%2900208-9&amp;volume=3&amp;pages=e745-e750&amp;publication_year=2021&amp;author=Ghassemi%2CM&amp;author=Oakden-Rayner%2CL&amp;author=Beam%2CAL\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR65\">Sokol, K. &amp; Flach, P. One explanation does not fit all: the promise of interactive explanations for machine learning transparency. KI-K\u00fcnstliche Intell. 
34, 235\u2013250 (2020).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"noopener nofollow\" data-track-label=\"10.1007\/s13218-020-00637-y\" data-track-item_id=\"10.1007\/s13218-020-00637-y\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/link.springer.com\/doi\/10.1007\/s13218-020-00637-y\" aria-label=\"Article reference 65\" data-doi=\"10.1007\/s13218-020-00637-y\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 65\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=One%20explanation%20does%20not%20fit%20all%3A%20the%20promise%20of%20interactive%20explanations%20for%20machine%20learning%20transparency&amp;journal=KI-K.%C3%BCnstliche%20Intell.&amp;doi=10.1007%2Fs13218-020-00637-y&amp;volume=34&amp;pages=235-250&amp;publication_year=2020&amp;author=Sokol%2CK&amp;author=Flach%2CP\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR66\">Weber, R. O., Johs, A. J., Goel, P. &amp; Silva, J. M. XAI is in trouble. AI Mag. 45, 300\u2013316 (2024).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR67\">Freiesleben, T. &amp; K\u00f6nig, G. Dear XAI community, we need to talk! Fundamental misconceptions in current XAI research. in World Conference on Explainable Artificial Intelligence, 48\u201365 (Springer, 2023).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR68\">Afroogh, S. et al. Beyond Explainable AI (XAI): An Overdue Paradigm Shift and Post-XAI Research Directions. arXiv preprint arXiv:2602.24176 (2026).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR69\">Babic, B., Gerke, S., Evgeniou, T. 
&amp; Cohen, I. G. Beware explanations from AI in health care. Science 373, 284\u2013286 (2021).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1126\/science.abg1834\" data-track-item_id=\"10.1126\/science.abg1834\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1126%2Fscience.abg1834\" aria-label=\"Article reference 69\" data-doi=\"10.1126\/science.abg1834\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 69\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Beware%20explanations%20from%20AI%20in%20health%20care&amp;journal=Science&amp;doi=10.1126%2Fscience.abg1834&amp;volume=373&amp;pages=284-286&amp;publication_year=2021&amp;author=Babic%2CB&amp;author=Gerke%2CS&amp;author=Evgeniou%2CT&amp;author=Cohen%2CIG\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR70\">Bordt, S., Finck, M., Raidl, E. &amp; von Luxburg, U. Post-hoc explanations fail to achieve their purpose in adversarial contexts. in Proc. ACM Conference on Fairness, Accountability, and Transparency, 891\u2013905 (ACM, 2022).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR71\">Hedstr\u00f6m, A. et al. The meta-evaluation problem in explainable AI: identifying reliable estimators with MetaQuantus. Trans. Mach. Learn. Res. 
<a href=\"https:\/\/openreview.net\/forum?id=j3FK00HyfU\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"https:\/\/openreview.net\/forum?id=j3FK00HyfU\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/openreview.net\/forum?id=j3FK00HyfU<\/a> (2023).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR72\">Bluecher, S., Vielhaben, J. &amp; Strodthoff, N. Decoupling pixel flipping and occlusion strategy for consistent XAI benchmarks. Trans. Mach. Learn. Res. <a href=\"https:\/\/openreview.net\/forum?id=bIiLXdtUVM\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"https:\/\/openreview.net\/forum?id=bIiLXdtUVM\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/openreview.net\/forum?id=bIiLXdtUVM<\/a> (2024).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR73\">Dombrowski, A.-K. et al. Explanations can be manipulated and geometry is to blame. In Proc. Advances in Neural Information Processing Systems, Vol. 32 (NeurIPS, 2019).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR74\">Xin, X., Huang, F. &amp; Hooker, G. Why you should not trust interpretations in machine learning: adversarial attacks on partial dependence plots. arXiv preprint arXiv: <a href=\"http:\/\/arxiv.org\/abs\/2404.18702\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"http:\/\/arxiv.org\/abs\/2404.18702\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/arxiv.org\/abs\/2404.18702<\/a> (2024).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR75\">Kauffmann, J. et al. From clustering to cluster explanations via neural networks. IEEE Trans. Neural Netw. Learn. Syst. 
35, 1926\u20131940 (2022).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1109\/TNNLS.2022.3185901\" data-track-item_id=\"10.1109\/TNNLS.2022.3185901\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1109%2FTNNLS.2022.3185901\" aria-label=\"Article reference 75\" data-doi=\"10.1109\/TNNLS.2022.3185901\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"mathscinet reference\" data-track-action=\"mathscinet reference\" href=\"http:\/\/www.ams.org\/mathscinet-getitem?mr=4710270\" aria-label=\"MathSciNet reference 75\" target=\"_blank\">MathSciNet<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 75\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=From%20clustering%20to%20cluster%20explanations%20via%20neural%20networks&amp;journal=IEEE%20Trans.%20Neural%20Netw.%20Learn.%20Syst.&amp;doi=10.1109%2FTNNLS.2022.3185901&amp;volume=35&amp;pages=1926-1940&amp;publication_year=2022&amp;author=Kauffmann%2CJ\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR76\">Clark, B., Oliveira, M., Wilming, R. &amp; Haufe, S. Feature salience&#8211;not task-informativeness&#8211;drives machine learning model explanations. arXiv preprint arXiv:2602.09238 (2026).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR77\">Murdoch, W. J., Singh, C., Kumbier, K., Abbasi-Asl, R. &amp; Yu, B. Definitions, methods, and applications in interpretable machine learning. Proc. Natl. Acad. 
Sci. 116, 22071\u201322080 (2019).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1073\/pnas.1900654116\" data-track-item_id=\"10.1073\/pnas.1900654116\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1073%2Fpnas.1900654116\" aria-label=\"Article reference 77\" data-doi=\"10.1073\/pnas.1900654116\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"mathscinet reference\" data-track-action=\"mathscinet reference\" href=\"http:\/\/www.ams.org\/mathscinet-getitem?mr=4030584\" aria-label=\"MathSciNet reference 77\" target=\"_blank\">MathSciNet<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 77\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Definitions%2C%20methods%2C%20and%20applications%20in%20interpretable%20machine%20learning&amp;journal=Proc.%20Natl.%20Acad.%20Sci.&amp;doi=10.1073%2Fpnas.1900654116&amp;volume=116&amp;pages=22071-22080&amp;publication_year=2019&amp;author=Murdoch%2CWJ&amp;author=Singh%2CC&amp;author=Kumbier%2CK&amp;author=Abbasi-Asl%2CR&amp;author=Yu%2CB\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR78\">Zicari, R. V. et al. Z-Inspection\u00ae: a process to assess trustworthy AI. IEEE Trans. Technol. Soc. 2, 83\u201397 (2021).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR79\">Borgonovo, E., Ghidini, V., Hahn, R. &amp; Plischke, E. Explaining classifiers with measures of statistical association. 
Comput. Stat. Data Anal. 182, 107701 (2023).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1016\/j.csda.2023.107701\" data-track-item_id=\"10.1016\/j.csda.2023.107701\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1016%2Fj.csda.2023.107701\" aria-label=\"Article reference 79\" data-doi=\"10.1016\/j.csda.2023.107701\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"mathscinet reference\" data-track-action=\"mathscinet reference\" href=\"http:\/\/www.ams.org\/mathscinet-getitem?mr=4550794\" aria-label=\"MathSciNet reference 79\" target=\"_blank\">MathSciNet<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 79\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Explaining%20classifiers%20with%20measures%20of%20statistical%20association&amp;journal=Comput.%20Stat.%20Data%20Anal.&amp;doi=10.1016%2Fj.csda.2023.107701&amp;volume=182&amp;publication_year=2023&amp;author=Borgonovo%2CE&amp;author=Ghidini%2CV&amp;author=Hahn%2CR&amp;author=Plischke%2CE\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR80\">Karimi, A.-H., Von K\u00fcgelgen, J., Sch\u00f6lkopf, B. &amp; Valera, I. Algorithmic recourse under imperfect causal knowledge: a probabilistic approach. Adv. Neural Inf. Process. Syst. 
33, 265\u2013277 (2020).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 80\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Algorithmic%20recourse%20under%20imperfect%20causal%20knowledge%3A%20a%20probabilistic%20approach&amp;journal=Adv.%20Neural%20Inf.%20Process.%20Syst.&amp;volume=33&amp;pages=265-277&amp;publication_year=2020&amp;author=Karimi%2CA-H&amp;author=K%C3%BCgelgen%2CJ&amp;author=Sch%C3%B6lkopf%2CB&amp;author=Valera%2CI\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR81\">Sixt, L., Granz, M. &amp; Landgraf, T. When explanations lie: why many modified BP attributions fail. in Proc. 37th International Conference on Machine Learning, 9046\u20139057 (PMLR, 2020).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR82\">Bilodeau, B., Jaques, N., Koh, P. W. &amp; Kim, B. Impossibility theorems for feature attribution. Proc. Natl. Acad. Sci. 121 e2304406120 (2024).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR83\">Frye, C., Rowat, C. &amp; Feige, I. Asymmetric shapley values: incorporating causal knowledge into model-agnostic explainability. Advances in neural information processing systems, 33, 1229\u20131239 (2020).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR84\">Martin, J. &amp; Haufe, S. cc-Shapley: Measuring Multivariate Feature Importance Needs Causal Context. arXiv preprint arXiv:2602.20396 (2026).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR85\">Gj\u00f8lbye, A., Haufe, S. &amp; Hansen, L. K. Minimizing false-positive attributions in explanations of non-linear models. In Proc. 
39th Annual Conference on Neural Information Processing Systems, <a href=\"https:\/\/openreview.net\/forum?id=ORrCEtiiVX\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"https:\/\/openreview.net\/forum?id=ORrCEtiiVX\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/openreview.net\/forum?id=ORrCEtiiVX<\/a> (2025).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR86\">Oberkampf, W. L. &amp; Roy, C. J. Verification and Validation in Scientific Computing (Cambridge University Press, 2010).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR87\">Imbert, C. &amp; Ardourel, V. Formal verification, scientific code, and the epistemological heterogeneity of computational science. Philos. Sci. 90, 376\u2013394 (2023).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1017\/psa.2022.78\" data-track-item_id=\"10.1017\/psa.2022.78\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1017%2Fpsa.2022.78\" aria-label=\"Article reference 87\" data-doi=\"10.1017\/psa.2022.78\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"mathscinet reference\" data-track-action=\"mathscinet reference\" href=\"http:\/\/www.ams.org\/mathscinet-getitem?mr=4598034\" aria-label=\"MathSciNet reference 87\" target=\"_blank\">MathSciNet<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 87\" 
href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Formal%20verification%2C%20scientific%20code%2C%20and%20the%20epistemological%20heterogeneity%20of%20computational%20science&amp;journal=Philos.%20Sci.&amp;doi=10.1017%2Fpsa.2022.78&amp;volume=90&amp;pages=376-394&amp;publication_year=2023&amp;author=Imbert%2CC&amp;author=Ardourel%2CV\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR88\">Ismail, A. A., Gunady, M., Pessoa, L., Corrada Bravo, H. &amp; Feizi, S. Input-cell attention reduces vanishing saliency of recurrent neural networks. in Proc. Advances in Neural Information Processing Systems, Vol. 32 (Curran Associates, Inc., 2019).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR89\">Yalcin, O., Fan, X. &amp; Liu, S. Evaluating the correctness of explainable AI algorithms for classification. arXiv preprint arXiv:<a href=\"http:\/\/arxiv.org\/abs\/2105.09740\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"http:\/\/arxiv.org\/abs\/2105.09740\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/arxiv.org\/abs\/2105.09740<\/a> (2021).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR90\">Arras, L., Osman, A. &amp; Samek, W. CLEVR-XAI: a benchmark dataset for the ground truth evaluation of neural network explanations. Inf. 
Fusion 81, 14\u201340 (2022).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1016\/j.inffus.2021.11.008\" data-track-item_id=\"10.1016\/j.inffus.2021.11.008\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1016%2Fj.inffus.2021.11.008\" aria-label=\"Article reference 90\" data-doi=\"10.1016\/j.inffus.2021.11.008\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 90\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=CLEVR-XAI%3A%20a%20benchmark%20dataset%20for%20the%20ground%20truth%20evaluation%20of%20neural%20network%20explanations&amp;journal=Inf.%20Fusion&amp;doi=10.1016%2Fj.inffus.2021.11.008&amp;volume=81&amp;pages=14-40&amp;publication_year=2022&amp;author=Arras%2CL&amp;author=Osman%2CA&amp;author=Samek%2CW\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR91\">Zhou, Y., Booth, S., Ribeiro, M. T. &amp; Shah, J. Do feature attribution methods correctly attribute features? in Proc. AAAI Conference on Artificial Intelligence, Vol. 36 (AAAI, 2022).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR92\">Budding, C., Eitel, F., Ritter, K. &amp; Haufe, S. Evaluating saliency methods on artificial data with different background types. in Medical Imaging meets NeurIPS. An official NeurIPS Workshop. 
<a href=\"http:\/\/arxiv.org\/abs\/2112.04882\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"http:\/\/arxiv.org\/abs\/2112.04882\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/arxiv.org\/abs\/2112.04882<\/a> (2021).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR93\">Oliveira, M. et al. Benchmarking the influence of pre-training on explanation performance in MR image classification. Front. Artif. Intell. 7, 1330919 (2024).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.3389\/frai.2024.1330919\" data-track-item_id=\"10.3389\/frai.2024.1330919\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.3389%2Ffrai.2024.1330919\" aria-label=\"Article reference 93\" data-doi=\"10.3389\/frai.2024.1330919\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 93\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Benchmarking%20the%20influence%20of%20pre-training%20on%20explanation%20performance%20in%20MR%20image%20classification&amp;journal=Front.%20Artif.%20Intell.&amp;doi=10.3389%2Ffrai.2024.1330919&amp;volume=7&amp;publication_year=2024&amp;author=Oliveira%2CM\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR94\">Oramas, J., Wang, K. &amp; Tuytelaars, T. Visual explanation by interpretation: Improving visual feedback capabilities of deep neural networks. 
in International Conference on Learning Representations (2019).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR95\">Fok, R. &amp; Weld, D. S. In search of verifiability: Explanations rarely enable complementary performance in AI-advised decision making. AI Mag. 45, 317\u2013332 (2023).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR96\">Janzing, D. &amp; Sch\u00f6lkopf, B. Detecting non-causal artifacts in multivariate linear regression models. in International Conference on Machine Learning, 2245\u20132253 (PMLR, 2018).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR97\">Hvilsh\u00f8j, F., Iosifidis, A. &amp; Assent, I. ECINN: Efficient Counterfactuals from Invertible Neural Networks. in British Machine Vision Conference (2021).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR98\">Sch\u00f6lkopf, B. et al. Toward causal representation learning. Proc. IEEE 109, 612\u2013634 <a href=\"https:\/\/doi.org\/10.1109\/JPROC.2021.3058954\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"10.1109\/JPROC.2021.3058954\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/doi.org\/10.1109\/JPROC.2021.3058954<\/a> (2021).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR99\">Ahuja, K., Mahajan, D., Wang, Y. &amp; Bengio, Y. Interventional causal representation learning. in International Conference on Machine Learning, 372\u2013407 (PMLR, 2023).<\/p>\n<p class=\"c-article-references__text\" id=\"ref-CR100\">Hastie, T. et al. The Elements of Statistical Learning (Springer, 2009).<\/p>\n","protected":false},"excerpt":{"rendered":"European Commission. 
Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules&hellip;\n","protected":false},"author":2,"featured_media":1317,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[24,25,1632,1633],"class_list":{"0":"post-1316","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-computational-science","11":"tag-computer-science"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/1316","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=1316"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/1316\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/1317"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=1316"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=1316"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=1316"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}