On the occasion of the retirement of neuroinformatician Professor Dr Helge Ritter, the Faculty of Technology is hosting a symposium next Monday, 25 March. 150 guests will look back on his scientific career and his work in neuroinformatics, machine learning and robotics. Five of his companions will give talks that bridge the gap between past and present.
‘Professor Ritter plays an outstanding role in research on neural information processing and its computer-aided applications,’ says Professor Markus Nebel, Dean of the Faculty of Technology. ‘With his highly interdisciplinary work, he has made a significant contribution to the future development of artificial systems with cognitive capabilities.’
How robots are capable of acting intuitively
For three decades, Ritter’s research has focussed on the human hand and on grasping, in both the literal and the cognitive sense. His work is primarily dedicated to the principles of neural computation, in particular self-organising and learning systems, and their application to robot cognition, data analysis and interactive human-machine interfaces.
In his numerous research projects and collaborations, he has investigated, among other things, how we learn to literally grasp our environment with our hands. When these findings are transferred to robots, they enable the robots to check their perception of reality and uncover gaps in their understanding. Ritter also expanded the concept of self-organising maps: topological feature maps that are used to structure patterns in high-dimensional data and make them intuitively recognisable for humans. Areas of application include the control of complex robot movements.
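For readers who want to see the underlying idea in code, the following is a minimal sketch of a classical self-organising map in Python with NumPy. It illustrates only the general principle (a grid of nodes whose weight vectors gradually adapt to the data, so that neighbouring nodes come to represent similar patterns) and is not based on Ritter’s specific models; the function name and parameters (train_som, grid_shape, lr0, sigma0) are chosen purely for this example.

import numpy as np

def train_som(data, grid_shape=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small self-organising map on `data` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    # Grid coordinates of every map node, used for the neighbourhood function.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    # Weight vectors initialised from randomly chosen training samples.
    weights = data[rng.choice(len(data), rows * cols)].astype(float)

    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            t = step / n_steps
            lr = lr0 * (1.0 - t)              # learning rate decays over time
            sigma = sigma0 * (1.0 - t) + 0.5  # neighbourhood radius shrinks
            # Best-matching unit: the node whose weight vector is closest to x.
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            # Gaussian neighbourhood around the best-matching unit on the 2-D grid.
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            # Pull all nodes towards x, weighted by their grid proximity to the BMU.
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights.reshape(rows, cols, -1)

# Example: map 3-D points onto a 2-D grid so their structure becomes visible.
samples = np.random.default_rng(1).random((500, 3))
som = train_som(samples)
print(som.shape)  # (10, 10, 3)

After training, each grid cell holds a prototype vector, and nearby cells hold similar prototypes; this is what makes high-dimensional data intuitively recognisable and allows such maps to be used, for instance, to organise the space of possible robot movements.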
Helge Ritter: The large-scale project FAMULA is about understanding manual intelligence and how it is connected with language and learning. From a technical perspective, this is about coordinating hands – robot hands with tactile sensors – so that they can do the things that we do in everyday life: we pick up objects, we inspect them.
How can we understand how we gain familiarity with an object? For instance, inspecting this object, I recognize that I can open something up here.
Then I notice another object in the box. I see that it is something round. As a child, I may have seen this for the very first time, and could learn through language that this is an apple.
But then there are other, more complex objects that are still extremely difficult for today’s robots to inspect, such as this napkin, whose shape keeps changing. For us, this is entirely ordinary, but manipulating something like this with a robot requires going beyond a representation of rigid bodies alone and including articulated or even flexible objects.
We don’t even think twice about it when we handle such objects ourselves. Right now, I’m not even looking at the object: the sense of touch in my hands tells me exactly what I’m holding and how I can fold it – all things that today’s robots are still unable to do.
Thomas Schack: We have, for instance, a study in which a test subject’s eyes are covered, to first limit visual input. They then work with artificial objects.
The weight, shape and size of these objects were systematically varied in order to figure out how we approach such characteristics, how we remember them and how we later use them again in manual actions. This is also about finding out what the functionality of such objects is.
That is, figuring out what the functional characteristics are. Here, too, the context of the tasks is manipulated: on the one hand, objects are simply picked up; on the other, different things also have to be done with them. Overall, we are working to create a database that can also be used in human-robot interaction, because ultimately it is humans who should be helping robots to understand the characteristics of these objects.
Sven Wachsmuth: As of now, robots either have to be given very detailed models of objects, or they can only handle objects in a very rough way. In FAMULA, the goal is for the robot to acquire fine-grained knowledge of how to handle these objects during a process of exploring and learning with them, aided by human feedback that can help the robot along the way.
Helge Ritter: Major accomplishments concern the fingertips: they now have miniaturized tactile sensors that we have developed over many years.
The inner parts of the robot hands now also have a tactile surface, and very recently, we equipped the hands with sensory fingernails that enable very fine interactions, such as scratching something with a fingernail. These are all key elements whose interplay is crucial in replicating the high dexterity of the human hand for robots.
And we of course want to replicate this dexterity with the robot hands, which is why we need these elements. A lot of progress has also been made in the software.
Software is, in a sense, the brainpower of the system, and its development is essential to endow the hands with the capability of coordinated and dexterous action. This is definitely a basic research project.
Its focus on five-fingered, human-like hands, which are very complex and expensive, is very different from what we find in industry, where the top priority is on very economical devices for interaction, such as two- or three-pronged grippers, and where everything is trimmed down and streamlined for rapid repetition of the same movements. That is not the focus of this project. The focus here is instead on understanding dexterous interaction and exploring it as we find it in humans.
Lectures by five companions
The symposium lectures honour the interdisciplinary collaboration with Ritter. All five speakers share an academic connection to Ritter – for example as doctoral students or as colleagues in research collaborations. They will discuss joint research successes and the particular strengths that characterise Ritter as a person. The speakers are computer scientist and human biologist Professor Dr Kerstin Schill (University of Bremen), roboticist Professor Dr Tamim Asfour (Karlsruhe Institute of Technology, KIT), neuro- and bioinformatician Professor Dr Thomas Martinetz (University of Lübeck), neuroinformatician Professor Dr Heiko Wersing (Honda Research Institute Europe and Bielefeld University) and roboticist Dr Risto Koiva (Bielefeld University).
Living interdisciplinarity
Helge Ritter worked continuously to create synergies between different specialist areas in order to gain shared insights in the field of intelligent learning. He pursued this interdisciplinary approach in particular as coordinator of the Cluster of Excellence CITEC. He was involved in building an international platform for research and research networking, and he drove forward the expansion of the research infrastructure, which helped to attract many talented students and early-career researchers. Today, CITEC, the Centre for Cognitive Interaction Technology at Bielefeld University, builds on these achievements and continues to promote innovative research and collaboration.
[Translation generated with automated support]