Bielefeld University has launched an independent research group focusing on Explainable Artificial Intelligence (XAI) and its impact on human trust in AI. Led by Dr David Johnson at CITEC, the group investigates how explanations of AI decision recommendations can be designed to enhance decision-making by enabling appropriate trust in AI-assisted decision support systems. In this interview, Dr Johnson shares the goals and methods of this research initiative.
What are the main goals of your research group “Human-Centric Explainable AI”?
Our research group takes a human-centric approach to developing AI decision support systems that help users better understand why an AI system made a specific decision recommendation. Specifically, we explore how explanations should be designed to help human decision-makers trust the model when it is correct and question its decisions when it is incorrect, a concept known as appropriate trust. We are investigating which commonly used explanation approaches are most effective in enabling this appropriate trust in AI systems. To achieve this, we actively involve users in the design process and conduct extensive evaluations. Our long-term goal is to create new interactive decision-making systems that can be applied to real-world challenges, such as mental health assessments.

Why is explainability in artificial intelligence such an important area of research, and will your research group collaborate with other disciplines?
AI systems are playing an increasingly critical role in high-stakes areas like mental health assessment. However, these systems are not perfect and can be biased. Explanations are crucial to help users understand why an AI system made a specific diagnosis or recommendation. This understanding enables them to make informed decisions rather than blindly trusting the AI. Research in XAI is essential for building trust and improving human-AI interaction.
Our work is interdisciplinary, combining computer science, psychology, and human-computer interaction. One key focus is our collaboration with Professor Dr Hanna Drimalla’s group “Human-centered Artificial Intelligence: Multimodal Behavior Processing.” Together, we aim to study how mental health practitioners interact with explanations provided by AI-assisted decision support systems and how these explanations should be designed to optimize decision-making. This collaboration will allow us to apply our findings more broadly and develop AI solutions for real-world problems.
How do you plan to actively incorporate user perspectives into the design process of XAI systems?
First, we plan to build a foundation of knowledge about which types of explanations work best to enable appropriate trust, taking a basic research approach based on large-scale online studies. To do this, we are developing an evaluation framework that simulates high-stakes decision-making for a general audience. This knowledge will inform which types of explanations are useful for real-world AI-assisted decision support systems in areas like mental health assessment. Of course, we will still involve real-world experts in the final design of such systems through iterative human-centered design, but the online framework allows us to test quickly and cheaply what does and does not work before spending time with experts, which is costly for both the experts and the researchers.