How can a robot best support us linguistically in solving tasks? Researchers from TRR 318 “Constructing Explainability” at Bielefeld University and Paderborn University have investigated this question. In subproject A05, they developed a model that enables the robot Nao to choose appropriate verbal explanation strategies depending on how the human has previously behaved and what cognitive state they are likely to be in.
Instructions can be formulated directly or through negations, that is, by saying what should not be done. For example, in a task that involves mixing liquids in a bottle: “Don’t swirl, but shake.” “Negations can help redirect focus away from an object or activity, thereby fostering participants’ understanding,” explains André Groß from project A05 (“Contextualized and online parametrization of attention in human–robot explanatory dialog”).
Based on these linguistic and psychological insights, computer scientists André Groß, Dr Birte Richter, and Professor Dr Britta Wrede from Bielefeld University developed a computational model that enables the robot Nao to select the most suitable explanation strategy. Nao observes how its human partners behave and react, for example through their eye movements, and the model then decides when and whether to offer explanations based on negations.
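To make the idea concrete, the following minimal sketch shows how such a decision rule could look in code. Everything here is an illustrative assumption: the feature names, the load estimate, and the thresholds are placeholders, not the published A05 model.

```python
from dataclasses import dataclass

# Hypothetical, simplified stand-in for an adaptive strategy selector.
# All names, weights, and thresholds are illustrative assumptions.

@dataclass
class ListenerState:
    fixation_on_target: float  # share of gaze time on the relevant object (0..1)
    recent_error_rate: float   # errors over the last few tasks (0..1)

def estimate_cognitive_load(state: ListenerState) -> float:
    """Crude load proxy: little attention to the target plus recent errors
    suggest higher load. The weights are arbitrary placeholders."""
    return 0.6 * (1.0 - state.fixation_on_target) + 0.4 * state.recent_error_rate

def choose_explanation(state: ListenerState, wrong_action: str, right_action: str) -> str:
    """Pick a verbal strategy: a negation first redirects attention away
    from the wrong action when the listener seems to struggle; otherwise
    a plain direct instruction suffices."""
    if estimate_cognitive_load(state) > 0.5:
        return f"Don't {wrong_action}, but {right_action}."
    return f"Please {right_action}."

if __name__ == "__main__":
    struggling = ListenerState(fixation_on_target=0.2, recent_error_rate=0.5)
    print(choose_explanation(struggling, "swirl the bottle", "shake it"))
    # -> "Don't swirl the bottle, but shake it."
```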
Fewer Errors Through Adaptive Explanations
In the study, participants wore mobile eye trackers that recorded their visual attention while they solved 20 medical tasks on a touchscreen. The robot Nao sat next to them and provided helpful information. The system recorded online whether each task was solved correctly or incorrectly, and Nao used the eye-tracking data to infer participants’ current performance and cognitive load, adapting its explanations accordingly. A comparison group received only neutral, non-adaptive instructions.
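Put schematically, the online adaptation loop might look like the sketch below, reusing the same placeholder load estimate as above. The eye-tracker reading, the update rule, and the simulated task outcomes are invented stand-ins for the study’s actual setup, shown only to illustrate the adaptive versus non-adaptive conditions.

```python
import random

# Illustrative sketch of an online adaptation loop; the eye-tracking API,
# the load formula, and the task outcomes are invented placeholders.

N_TASKS = 20

def read_gaze_on_target() -> float:
    """Stand-in for a mobile eye tracker: share of the last task's gaze
    time spent on the task-relevant screen region."""
    return random.random()

def run_session(adaptive: bool) -> None:
    recent_errors = 0.0  # exponentially weighted running error rate
    for task in range(1, N_TASKS + 1):
        gaze = read_gaze_on_target()
        load = 0.6 * (1.0 - gaze) + 0.4 * recent_errors
        if adaptive and load > 0.5:
            hint = "Don't swirl, but shake."   # negation redirects attention
        else:
            hint = "Shake the bottle."         # neutral, direct instruction
        solved = random.random() > load * 0.4  # fake outcome for the demo
        recent_errors = 0.7 * recent_errors + 0.3 * (0.0 if solved else 1.0)
        print(f"task {task:2d} | hint: {hint:26s} | solved: {solved}")

run_session(adaptive=True)   # model condition
run_session(adaptive=False)  # comparison group: non-adaptive instructions
```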
The results show that the model works and improves performance: “Especially for difficult tasks, an adaptive explanation strategy and the targeted use of negations proved beneficial,” says Groß. “In the group using our model, 23 percent fewer errors were made.”
Professor Dr Britta Wrede, who is active in two research foci at Bielefeld University (FAITH and AI*IM) and serves as principal investigator of TRR 318 projects A05, A03, and the public outreach project, puts the findings into perspective: “By reading users’ attention and tailoring its explanations accordingly, Nao enables adaptive and comprehensible communication. With this model, TRR 318 takes a step closer to the goal of co-constructing explanations between humans and machines—making future interactions more flexible, individual, and ultimately more understandable.”