Actionable Explainable AI (AxAI) 2023

Special Session at the Cross Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE 2023)

to be held in conjunction with the 18th International Conference on Availability, Reliability and Security (ARES 2023 – http://www.ares-conference.eu)

August 29 – September 01, 2023

Methods of Explainable Artificial Intelligence (XAI) are developed primarily to make the decisions of opaque machine-learned models (e.g., Deep Learning) transparent, interpretable, and comprehensible. However, transparency, interpretability, and comprehensibility alone are not enough to derive value from explanations. It is equally important to improve models by creating opportunities to act on and learn from explanations. This can be achieved through so-called actionable concepts, methods, measures, and metrics for explainable learning and reasoning (Gunning and Aha, 2019). An important aspect of actionable XAI is the incorporation of psychological insights into the design of explanations and interactive interfaces, in order to support model understandability, validation, and correctability. Equally important are evaluation criteria that enable meaningful and generalizable comparison of explanations from a user and application perspective, with the goal of identifying the best possible explanatory approach for each context of use. In this special session, we want to bring together interdisciplinary researchers who are working on exactly these aspects of Explainable Artificial Intelligence and who wish to present and discuss new, groundbreaking research that goes beyond testing existing work in new application areas.
Invited contributions: full research papers and short research papers. Extended versions of the accepted papers will be solicited for a special issue to be published in the Machine Learning and Knowledge Extraction (MAKE) journal.

Topics of interest include, but are not limited to:

– Concepts and methods for user-centered explainable artificial intelligence
– Approaches for context-aware explainability
– Methods to integrate human knowledge into automated decision-making in ML systems
– Novel data set benchmarks to validate ML systems from a user and application domain perspective
– Methods that allow for corrective feedback by means of a human-in-the-loop to improve explanatory approaches and ML models
– Novel techniques for the evaluation of ML systems with respect to the fidelity and robustness of explanations
– Experimental and empirical studies on ML systems that demonstrate the suitability of novel explanatory approaches in different application domains (e.g., environmental studies, medicine, autonomous driving)

Important Dates

Submission Deadline: March 27, 2023 (AoE)
Author Notification: June 01, 2023
Proceedings Version: June 22, 2023 (AoE)
Conference: August 29 – September 01, 2023

Special Session Chairs

Bettina FINZEL, University of Bamberg, Germany
Anna SARANTI, Human-Centered AI Lab, University of Natural Resources and Life Sciences, Vienna, Austria

Program Committee 2023

TBA

Submission Guidelines

The submission guidelines for this special session are the same as for the CD-MAKE conference. Submissions will be held to the same quality criteria as all other CD-MAKE submissions and will be published in the CD-MAKE conference proceedings. The guidelines can be found at https://cd-make.net/submission/.