Explanation in Human-AI Systems

In this week's Journal Club session, Epaminondas Kapetanios will talk about his editorial work on the "Explanation in Human-AI Systems" Research Topic. Please see the topic description below for more details.


Intelligent systems and applications, mainly Machine Learning (ML)-based Artificial Intelligence (AI), have been employed at almost all levels and in all domains of society: from AI systems in agriculture (e.g., greenhouse optimization) and algorithm-based trading in finance to personal companions such as social robots and voice assistants (e.g., Siri, Alexa). Concerns, however, have been raised about their transparency, safety and liability, algorithmic bias and fairness, and trustworthiness. In response to these concerns, regulatory frameworks governed by AI principles have emerged at both institutional and governmental levels. In addition, the Artificial Intelligence and Machine Learning (AI/ML) communities have responded with interpretable models and Explainable AI (XAI) tools and approaches. However, these come with limitations in explaining the behavior of complex AI/ML systems to technically inexperienced users.

This Research Topic focuses on how to conceptualize, design, and implement human-AI systems that can explain their decisions and actions to different types of consumers and personas. Current approaches in Machine Learning are tailored more towards interpretations and explanations suitable for modelers and less for technically inexperienced users. In other human-AI interactions, for instance Google Assistant, Alexa, social robots, web search, and recommendation systems, explanations for recommendations, search results, or actions are not even considered an integral part of the human-AI interaction mechanism. As a result, there is a need to revisit the conceptualization, design, and implementation of human-AI systems so that they provide more transparency about their reasoning and communicate it via explanation techniques adapted to different types of users. This can be better achieved by taking a cross-disciplinary approach to the concept of “explanation” and to views of what constitutes a good explanation. For instance, disciplines such as the philosophy and psychology of science (e.g., theory of explanation, causal reasoning), social sciences (e.g., social expectations), psychology (e.g., cognitive bias), and communication and media science offer an intellectual basis for what ‘explanation’ is and how people select, evaluate, and communicate explanations.
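As a concrete illustration of the kind of modeler-oriented explanation mentioned above, the short Python sketch below computes permutation feature importance with scikit-learn; the dataset, model, and tool choices are illustrative assumptions, not anything prescribed by the Research Topic. The output, a ranked list of feature names and numeric scores, is informative for a modeler but typically opaque to a technically inexperienced user.

```python
# Illustrative sketch (assumed example, not from the editorial): a typical
# "modeler-oriented" explanation produced by a standard XAI technique.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Arbitrary dataset and model, chosen only for illustration.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much test accuracy drops when each feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# The "explanation" is a ranked list of feature names with numeric scores,
# which presupposes familiarity with the feature space and the evaluation metric.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```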

This Research Topic invites researchers and practitioners from academic institutions and private companies to submit articles on the conceptualization, design, and implementation of explainable human-AI systems from both a theoretical/systemic and a practical standpoint.


Date: 2022/07/15
Time: 14:00
Location: online
