
‘Development of new interactive decision-making systems’


Text: Dr Kristina Nienhaus

Bielefeld University has launched an independent research group focusing on Explainable Artificial Intelligence (XAI) and its impact on human trust in AI. Led by Dr David Johnson at CITEC, the group investigates how explanations of AI decision recommendations can be designed to enhance decision-making by enabling appropriate trust in AI-assisted decision support systems. In this interview, Dr Johnson shares the goals and methods of this research initiative.

What are the main goals of your research group “Human-Centric Explainable AI”?

Our research group takes a human-centric approach to developing AI decision support systems that help users better understand why an AI system made a specific decision recommendation. Specifically, we explore how explanations should be designed so that human decision-makers trust the model when it is correct and question its recommendations when it is incorrect, a pattern known as appropriate trust. We are investigating which commonly used explanation approaches are most effective at enabling this appropriate trust in AI systems. To achieve this, we actively involve users in the design process and conduct extensive evaluations. Our long-term goal is to create new interactive decision-making systems that can be applied to real-world challenges, such as mental health assessments.
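
To make "commonly used explanation approaches" concrete, here is a minimal sketch of one such style: a local feature-attribution explanation for a simple linear model. The feature names, weights, and baseline values are invented for illustration and are not taken from the group's actual systems.

```python
# Minimal sketch (not the group's actual system): a local feature-attribution
# explanation for a linear decision model, one commonly used style of XAI.
# All feature names, weights, and values below are invented for illustration.

FEATURES = ["sleep_hours", "reported_stress", "social_activity"]
WEIGHTS = {"sleep_hours": -0.8, "reported_stress": 1.2, "social_activity": -0.5}
BASELINE = {"sleep_hours": 7.0, "reported_stress": 3.0, "social_activity": 4.0}

def explain(case: dict) -> dict:
    """Attribute the model's score to each feature, relative to a baseline case."""
    return {f: WEIGHTS[f] * (case[f] - BASELINE[f]) for f in FEATURES}

if __name__ == "__main__":
    case = {"sleep_hours": 5.0, "reported_stress": 6.0, "social_activity": 2.0}
    # Print contributions from most to least influential.
    for feature, contribution in sorted(explain(case).items(),
                                        key=lambda kv: -abs(kv[1])):
        print(f"{feature:>16}: {contribution:+.2f}")
```

An explanation of this kind tells the decision-maker which inputs pushed the recommendation in which direction; whether and how such information actually supports appropriate trust is exactly the empirical question the group studies.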

Dr David Johnson in conversation

Why is explainability in artificial intelligence such an important area of research, and will your research group collaborate with other disciplines?

AI systems are playing an increasingly critical role in high-stakes areas like mental health assessment. However, these systems are not perfect and can be biased. Explanations are crucial to help users understand why an AI system made a specific diagnosis or recommendation. This understanding enables them to make informed decisions rather than blindly trusting the AI. Research in XAI is essential for building trust and improving human-AI interaction.
Our work is interdisciplinary, combining computer science, psychology, and human-computer interaction. One key focus is our collaboration with Professor Dr Hanna Drimalla’s group “Human-centered Artificial Intelligence: Multimodal Behavior Processing.” Together, we aim to study how mental health practitioners interact with explanations provided by AI-assisted decision support systems and how these explanations should be designed to optimize decision-making. This collaboration will allow us to apply our findings more broadly and develop AI solutions for real-world problems.

How do you plan to actively incorporate user perspectives into the design process of XAI systems?

First, we plan to build a foundation of knowledge about which types of explanations work best at enabling appropriate trust, taking a basic research approach and running large-scale online studies. To do this, we are developing an evaluation framework that simulates high-stakes decision-making for a general audience. This knowledge will help us determine which types of explanations would be useful in real-world AI-assisted decision support systems, for example in mental health assessment. Of course, we will still involve real-world experts in the final design of such systems through iterative human-centered design, but the online framework lets us test quickly and cheaply what works and what doesn't before committing to expert studies, which are costly for both the experts and the researchers.
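
As an illustration of what such an online evaluation might measure, here is a minimal sketch of one way to score appropriate trust from logged study trials: a participant relies appropriately when they follow a correct AI recommendation or override an incorrect one. The data format and field names are assumptions made for this example, not the group's actual framework.

```python
# Minimal sketch (assumed data format, not the group's framework): scoring
# "appropriate trust" in an online study as the rate at which participants
# follow the AI when it is correct and override it when it is incorrect.

from dataclasses import dataclass

@dataclass
class Trial:
    ai_correct: bool    # was the AI's recommendation correct on this trial?
    followed_ai: bool   # did the participant accept the recommendation?

def appropriate_trust_rate(trials: list[Trial]) -> float:
    """Fraction of trials in which the participant relied appropriately."""
    appropriate = sum(
        t.followed_ai == t.ai_correct  # follow when correct, override when wrong
        for t in trials
    )
    return appropriate / len(trials)

if __name__ == "__main__":
    study = [Trial(True, True), Trial(True, False),
             Trial(False, False), Trial(False, True)]
    print(f"appropriate trust rate: {appropriate_trust_rate(study):.2f}")  # 0.50
```

A score like this can then be compared across explanation types to see which design best supports appropriate reliance.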

Developing Explanations Together

Algorithm-based approaches, such as machine learning, are becoming increasingly complex and opaque. This lack of transparency makes it more difficult for human users to understand and accept the decisions proposed by Artificial Intelligence (AI). In the Transregional Collaborative Research Centre 318 "Constructing Explainability", researchers are developing ways to involve users in the explanation process and thus create co-constructive explanations. To this end, the interdisciplinary research team investigates the principles, mechanisms, and social practices of explaining and how these can be taken into account in the design of AI systems. The goal of the project is to make explanatory processes comprehensible and to create understandable assistance systems.

Dr David Johnson and his research group are associated members of the Collaborative Research Centre/TRR 318.
