
Understandable insights into AI research


Author: Dr. Kristina Nienhaus

People now encounter artificial intelligence (AI) in almost all areas of life. AI is meant to support and advise them and to provide objective, optimized solutions for difficult decisions. But how can people understand why and how an AI has arrived at a given result? In the Collaborative Research Centre/Transregional Research Centre (SFB/TRR) 318 of Bielefeld and Paderborn Universities, an interdisciplinary research team is tackling this problem: the goal is to develop understandable assistance systems that can, for example, answer users’ queries and provide precise explanations. In the new research podcast, the researchers give an insight into their work and explain why explainable AI represents an important step for the future of AI research.

Researchers from six disciplines – computer science, linguistics, media studies, psychology, sociology and economics – are conducting research in the SFB on the process of interactive explanation and how it can be transferred to AI systems. The podcast “Explaining Explainability” brings all disciplines together: in it, host Professor Dr Britta Wrede talks to two researchers from different disciplines about their current research, challenges, areas of application and the relevance of explainable AI.

Professor Dr Britta Wrede and Professor Dr Philipp Cimiano can be heard alongside Professor Dr Katharina Rohlfing from the University of Paderborn in the new research podcast.

The first episode approaches the topic of explainability from the perspectives of computer science and linguistics. The guests are the SFB’s spokespersons, Professor Dr. Philipp Cimiano (computer science) from Bielefeld University and Professor Dr. Katharina Rohlfing (psycholinguistics) from Paderborn University. What does explainability actually mean, and why and in which areas is it so important? Where does research currently stand – and what are the greatest challenges?

How can highly technical processes be presented in a way that everyone can understand?

“What makes AI research at the TRR so special is the focus on the person to whom something is being explained,” says Philipp Cimiano. “How can highly technical processes and functions be rendered in a generally understandable way? Contrasts, comparisons and gradations can shed light on this. For example, if we want to explain the decision-making of a machine-learned model, then the properties of the case about which a decision is made – so-called features – play an important role. The decision can be explained by making understandable how much each feature contributed to it.” In the podcast, the researchers use concrete examples to explain what their research is about and give insights into their everyday research work.
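To make the idea of feature-based explanation concrete, here is a minimal sketch – not the TRR’s actual system – assuming a toy loan-approval classifier in Python. A model is trained on made-up cases, and scikit-learn’s built-in feature importances show how strongly each feature drives its decisions; all feature names and data below are purely illustrative.

```python
# Minimal sketch (illustrative only): explaining a model's decisions by
# showing how much each feature matters, using scikit-learn's built-in
# impurity-based feature importances on a hypothetical loan-approval task.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features of a "case" (assumed names for illustration).
feature_names = ["income", "debt", "age"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Toy ground truth: approval depends mostly on income minus debt.
y = (X[:, 0] - X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global view: which features drive the model's decisions overall?
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

Such global importances are only the simplest form of feature-based explanation; per-case contributions (for example via SHAP values) and, as the SFB emphasises, tailoring the explanation to the person receiving it go well beyond this sketch.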

Further episodes will follow every two months. The next episode will look at the process of everyday explanations: What has linguistics found out so far about the structure of everyday explanations – and how can computer science use these findings in the design of AI systems?

The SFB/TRR 318 “Constructing Explainability”

How can AI become comprehensible? Technical explanations often presuppose knowledge about how AI works and are difficult to comprehend. In the Collaborative Research Centre/Transregio “Constructing Explainability” (SFB/TRR 318), researchers are developing ways to involve users in the explanation process. To this end, the interdisciplinary research team is investigating the principles, mechanisms and social practices of explaining and how these can be taken into account in the design of AI systems. The aim of the project is to make explanation processes comprehensible and to create understandable assistance systems. The co-construction of explanations is being investigated by a total of 22 project leaders with around 40 research assistants from linguistics, psychology, media studies, sociology, economics and computer science at the universities of Bielefeld and Paderborn.