
How social should AI be?

Author: Bielefeld University

For humans, social interaction comes in many different forms, whether it is working together, co-creation, cooperation, or collaboration. Even if the individual actions are functionally similar, what is done together ultimately depends on how it is done in cooperation with others. The philosopher Professor Dr Johanna Seibt, from Aarhus University in Denmark, foresees challenges here for applications in artificial intelligence (AI) and robotics. Seibt, an expert in the philosophy of robotics, will provide a glimpse into her research on Bielefeld University’s campus. The talk is part of the lecture series “Co-Constructing Intelligence,” which is being put on jointly by Bielefeld University, the University of Bremen, and Paderborn University. Attendance at this English-language talk is free of charge, and registration is not required.

As Seibt contends, the robots and AI systems in the social-interaction scenarios currently conceivable for future public use must be designed as artificial social agents. In her talk ‘The Problems of Artificial “Social Others”: How Much Sociality Do We Need for Hybrid Intelligence?’, the philosopher delves into the theoretical construct of ‘anthropomorphising’ – the phenomenon of attaching human characteristics to non-human entities – and argues that it must be reconceptualized for AI and robotics. Instead, something more akin to ‘sociomorphisation’ is taking place, where AI systems are given cooperative attributes that enable them to interact socially in various ways but are not necessarily ascribed human characteristics.

‘Johanna Seibt is one of the leading international authorities on human-robot interaction,’ says Professor Dr Philipp Cimiano, who heads the Semantic Computing Group at Bielefeld University and is an organizer of the talk. ‘She is currently working on robophilosophy, a new area of applied interdisciplinary philosophy. Her research provides far-ranging insights into how the phenomenon of human-robot interaction gives rise to new tasks for philosophy – both in ethics and in theoretical philosophy.’

Recording of the lecture by Professor Dr Johanna Seibt

In her lecture, Professor Dr Johanna Seibt discusses the phenomenon that AI systems can interact socially without necessarily being ascribed human abilities.

[Transcript was generated automatically]
We have a broad scope of interdisciplinary collaborators. Hiroshi Ishiguro is, or used to be, a very famous roboticist; I don’t know whether he is still so famous. He was famous for the Geminoid robots, and we used his so-called Telenoid robot for quite some time. He is also, interestingly, one of the few roboticists who holds an honorary PhD in philosophy: his collaboration with us earned him an honorary PhD in philosophy from our university. So there is something in it for anybody who collaborates with us. My talk has three parts. I will first present how I perceive the problem that we are facing right now. I am sure this is not unfamiliar to you; it is a quick rehearsal of what has rather suddenly come to be perceived as a problem, the automation crisis. I am interested in hearing whether you share this perception, whether you find it exaggerated, how we differ. Then I come to a hopefully quick but sufficiently informative presentation of OASIS, the Ontology of Asymmetric Social Interactions, which I hope may be interesting for the projects that you have, or at least in relation to them. Finally, I talk about the paradigm of Integrative Social Robotics that we have developed, based on philosophy-of-science insights on interdisciplinarity and transdisciplinarity and on the theory of values in particular, in order to deal with the problem. So it all hangs nicely together; I hope you will see that. We also organize the Robophilosophy conferences, and I want to mention this right away: the Robophilosophy conferences are, to our own surprise, the world’s largest events in humanities research in and on social robotics. Anthropologists, historians, philosophers, historians of ideas, cultural theorists and so on also work in and on social robotics, and this is the place where they come together. There are about 100 research papers each time.
The next one is again in Denmark, in Aarhus, about six hours from here by train, and in a very nice venue. It is in August; the deadline for papers is, I think, February 15. As the conference organizer I am allowed to talk about the deadlines. We also have comparatively many workshops; workshop proposals have a deadline of January 31, and using a conference to organize a workshop is a very nice way of working. Unlike the HRI conference or the AAAI conferences, ours is cheap: participation fees are very modest. Theirs are astronomical, 600 or 700 euros; ours is, I think, about 150 or the equivalent. So we are basically feeding you good thoughts, and it is good company, a real community. That was the invitation for everybody to join us for the Robophilosophy 2024 conference. You are specialists in AI, so we really need you; so far these conferences have focused more on the behavioral aspects of social robotics. So, I want to start with the problem. You have all seen the predictions by McKinsey from 2018: by 2055, half of the world’s work activities could be automated. Interestingly, the prediction has since accelerated; it is now 2045. So we all have a new potential, namely our automation potential, in particular, of course, for us academic teachers. Unfortunately, we are also very automatable. These are bleak prospects. And what is interesting from my point of view as a philosopher is that, at least at the time, McKinsey was blithely optimistic. They come up with statements such as: the ability of technology to disrupt is nothing new, and the influence is overwhelmingly positive.
There is no indication of what this assessment is based on. I would have expected, for instance, a quick reflection on the differences between co-working scenarios. Since you are all in interdisciplinary labs, you work together: you have the wonderful advantage of sharing new ideas, working as a team, or at least coordinating your work. But in a competitive business, people mostly work alongside each other. They do not work together, but they sit next to each other; nevertheless, they recognize each other as social agents. Contrast that with working next to something, where you simply adapt to dynamic changes in your environment but no longer relate to a social agent. And now the big question is, of course: which forms of co-working will we actually produce? Are the robots next to these workers a case of working next to, or of working alongside? Do the workers recognize these robots as social agents, or do they treat them as machines? And in particular, what is the role of technological literacy in that? From human-robot interaction research we know that humans have a tendency to, as it is said, anthropomorphize real robots and to ascribe feelings to them, to recognize or acknowledge them as social agents. But we do not know whether that will remain once we learn more about the technology. So it might be that we are changing our working environments from predominantly working-together environments to working-next-to environments. We do not know yet whether we will ever be able to really work together with, to truly collaborate with, a robot. You are all working towards that. But the question is precisely: what is it that you will actually have produced, if all the research works out?
For instance, people can do analogical reasoning and temporal reasoning, right? So let us not get ahead of ourselves. I want to say that the reason why we are right now not in a position to decide what kind of working relationship to create is not a matter of choice but a matter of deep ignorance, and that is what I want to explain a little. There is a classification of problems, and among them are the so-called wicked problems: you do not know the means, and the aims are somewhat vague. But with wicked problems you at least have the category of the aim. Right now, I claim, we are in a situation beyond wicked problems. We are introducing into society a new class of artifacts. At first it appears as though these artifacts are just more or less smart instruments and smart toys, but as soon as we bring in an interactional perspective, things change. What is a social robot? A robot that can move in the physical and symbolic space of human interaction, fine; but in terms of the technology, it is a robot that is designed to send certain social cues that invite or afford social interaction. Even a transport robot like this one is actually signalling: I am preserving social space, I am a social navigator, just by braking at the right time and accelerating. These social cues are sent, and these robots invite a response from us: the perception of them as social agents. That is clear when they are humanoid or animal-like. It is also very clear with an empathy object, a robot that expresses emotion through physical gestures, listens to people talking to each other, and reacts based on the conversation; so the poor robot becomes scared. We do not need much more than that.
The acceptance of an item as a social robot is very much triggered by motion perception in the human. We know that robots, no matter what they are shaped like, work on our so-called preconscious mechanisms of social cognition: low-level recognition, perceptual resonance mechanisms, for instance the creation of joint attention and the gaze-cueing effect, and then also interaction that invites us to take the so-called intentional stance, that is, to ascribe intentions, beliefs and so on to them. This is fairly old neuroscience research, as you can see here; as you know better than I do, there are more sophisticated studies by now, by serious researchers in this field and, I take it, by some of your colleagues. So I would say we are, in a sense, in a very curious situation, for the first time in human cultural history; this cannot be overestimated. We are no longer building artifacts that are instruments or tools. For the first time we are building artifacts that say of themselves, in the same way in which Magritte’s picture says of itself ‘this is not a pipe’: look, I am not a robot, I am actually a social other, a social agent, an artificial social agent. So we have a new class of objects, and the point is that its disruption potential is currently unknown. Here you have a picture from an experiment by Peter Kahn: this young gentleman who is talking to a robot just lied to a human being in order to protect the robot, even though that robot does not look particularly huggable or morally capable. So we have the attribution problem in human reactions to robots: when we simulate social agency, and we need to do that, we preferably do it by also simulating emotional connectedness.
There are many, many studies showing that social interactions run much smoother if you indicate some emotional cues, if you indicate that the robot has somehow understood the emotion: not great emotional analysis, but at least expressive, emotionally sensitive expressions. But once we simulate emotional connectedness, humans immediately have a tendency to recognize these items as moral agents or moral patients. They are inclined to attribute to the item something like rights; they feel obligated towards them. And that is a major problem that the philosophers are discussing: what happens to the notion of moral status, given the data that we have? When we accept such items as moral agents or patients, that has immediate consequences for the foundations of our Western democracies; I have another slide on that. In the same way in which, one can say, social media destroyed the concept of truth, there is a danger that social robotics will undermine the concepts of moral, political and epistemic sovereignty. Keeping that in mind, we need to take stock of what we see right now. Sherry Turkle is a very perceptive psychologist and anthropologist who saw this very early. Based on her studies, we see that there is a great tendency among humans to at least initially accept these artificial social agents and interact with them in a very caring and friendly way, in fact very often to relate to them as to people. The performance of connection seems connection enough, Sherry Turkle writes; making robot friends is much easier than making friends, since you simply program what you want to see. Okay, that is one unknown disruption category: the degree of attachment. Another one is the inexplicable aggression towards robots. A third one is more subtle, and we do not know yet whether people will be able to learn it.
Will people learn that something that is permissible when done to a robot is not permissible when done to a human, even when the robot looks or behaves in exactly the same way? Or, to take another example from our research: will people be able to learn that with robots, these kinds of social others, you can be truly, authentically emotionally engaged at one time and then, the next time, totally disengage and turn them off? Will we be able to learn that we can do that to these social others, but not to other social others, to people? These more subtle elements of disruption, the whole development of our experiences of sociality, are currently unknown. In fact, the interesting thing is this: there are the old categories of known unknowns, unknown unknowns and so on. Right now, I would say, we have deep ignorance in the sense that there are unknown categories of unknowns that we do not yet know, and I am going to give you some examples of where that unknown of the unknown could lie. We cannot even say what it is that we cannot predict; that would be the known unknowns. In fact, we are right now groping in the dark: it is not clear what we are creating here. Part of the problem is that these applications are produced by engineers, and I say this with the absolute greatest respect: engineers are highly intelligent, highly creative people, but they are trained in a certain way; there is a disciplinary restriction. Engineers are trained to deal with technical malfunctions of the kind that are due to physical or biological contingencies. But when engineers undertake social robotics, technological failure takes on a completely new dimension, because a failing social robot, like the police robot that has fallen into a fountain, is a failure on a completely different level. It is a cultural, a symbolic thing, and it has deep unintended consequences.
So essentially you have the following situation: the safety aspects, which engineers are extremely well trained to anticipate and address, are only the tip of the iceberg. When social robots start to move in the symbolic space of human social interactions, they are moving in what is currently the most complex domain that we know. If you work on natural language understanding, you know how difficult that is; but language is only one variable of social reality. The cosmos, in a sense, is not very complex; it is social reality that is the most complex domain we know. And strangely enough, we do not all work together: the experts on social reality, or at least some of them, namely the social scientists and the humanities scholars, are typically excluded from the development of the applications. So we say: social robotics requires intense interdisciplinary collaboration. Even so, some of the social scientists, but in particular the humanities, are marginalized. And there is an additional problem: HRI is an emerging discipline, and it is rather young. It does not have a unified theoretical framework, and it does not have unified methodological standards. So as a policymaker you cannot right now go to HRI research and ask: should I introduce this particular piece of assistive robotics, this care robot or this tutor, in this nursing home or in this school? There is no clear answer that you get from HRI. And this whole thing is nicely heated up by market pressures, with big data and the financial sums involved. For those of you who have studied technology design, this should be a familiar situation.
David Collingridge, I think in the 1980s, formulated the famous Collingridge dilemma: increasingly, we are introducing into society technologies whose consequences can only be assessed at a time when we can no longer extract them. Smartphones and social media are a beautiful example; you cannot take them out again, even if we wanted to. And unfortunately, due to the market pressure, the tech industry often reacts in the following way. Someone at Google was asked in 2017 whether he was concerned about the ethical problems of putting AI everywhere, and the answer was: well, some answers to the question of how we should ethically regulate AI we can only give after further progress, because we do not know yet what we are dealing with. My reaction would have been: exactly, we do not know yet what we are dealing with, so do not do it. But here the reaction is very different. So we are introducing, irrevocably, the potentially most disruptive technology ever; we do it blindly, in the sense that we have right now no reliable means of risk calculation; and we do it faster than the research cycles of empirical science can handle, since technology is simply always ahead of the rather slow empirical research. That is part of the automation crisis. Another part of the automation crisis, and now we are getting a little bit into legal technicalities, is that we are actually introducing a technology that is unhinging the foundations of Western democracy, since it dismantles the notion at its center: the notion of subjectivity, in its Enlightenment sense from the 18th century, which says that rationality, consciousness, emotionality, mentality, in particular normative competence and free will are all a package deal; you cannot separate them out. And once you believe that, then it is clear that only humans are rational.
Only humans are the source of moral and also political authority. That is why we go voting, why everybody is allowed to vote: there is the basic assumption that each individual is the ultimate arbiter of what is good and what is right, or what should be so, in a society. But now we are changing that: rationality is put into machines, and normative competence, apropos machine ethics, is also something that we are trying to automate. So we are taking the package apart. That is all very interesting, and there is nothing intrinsically bad about a crisis, but we need to accept that it is a crisis, something that we need to carefully reflect on, think through, and work on together. Another aspect of the automation crisis is that it challenges our common notions of responsibility at all levels, and we always have this tension between optimization and control. You have probably heard of the so-called calls for meaningful human control. But if you were waiting for a kidney, would you rather get it from the AI that has decided that you should have that kidney, knowing about the fallibility of AI? Or would you rather get it from the doctor, who has a different kind of fallibility, but of whom you might at least say that he shares your existential predicament? In cases like that, where we want to optimize decisions but at the same time always want to keep the human in the loop in some fashion, we see the concept of a conflict of responsibilities at play. And we do not yet have good notions, and this is more a task for the philosophers, good notions of responsibility in so-called socio-technological systems: new forms of collective responsibility, notions that we need to develop.
Right now, to come down again a little closer to the concrete case: we have talked about work, and we know that certain types of isolation at work, lack of contact and so on, definitely have a negative psychological effect. We know that. But at the same time we are working towards a technology that might create exactly these effects. Policymaking would need research on how such working scenarios might affect wellbeing, positively or negatively, but we are not there yet, because HRI is not yet producing this kind of research. You can say this is one more instance of the Collingridge dilemma. Another way of putting it is to say that the normal information flow in research-based policymaking is gridlocked in three different places. You cannot, for reasons of research ethics, subject people to long-term studies, so we have no idea whether the early acceptance of a social robot as a social other is a novelty phenomenon or would actually remain after a longer period of time. We do not quite know how to describe what is happening in human-robot interaction, and without descriptions we cannot really integrate the research that we have. So the evaluation is rather uncertain, and it is evaluation that should inform regulation. In response to that, we created a new paradigm of how to develop applications that I call Integrative Social Robotics. But first I want to show that we can also do something about description, and that is the next part of my talk: I want to introduce to you the framework of OASIS. I call it an ontology, but it is a little more modest than that; an ontology is normally a slightly different type of classificatory framework, while this one is a rather flat descriptive classification framework. Let me first go into the reasons for why we need it. You might have seen announcements like these.
Announcements saying that robots have feelings and moods, that they can smile, that they convey at least five emotions; and conveying means that you have them, right? Also in the classical definitional texts from 2002 and 2003 you find sentences saying that robots are able to recognize someone, to see, to interpret, to experience; experience is something robots are said to be capable of, as is following behavioral norms. What you have here are roboticists and HRI researchers trying to say what robots do without having the right vocabulary for it, so they just use the vocabulary that we use for humans. So we have a description problem. I looked at your research description here, and I am really curious to learn afterwards, in the discussion, how you solve this description problem, because you describe it by saying: we are trying to make robots aware of human beings. So you describe what, from your point of view, the robot is supposed to pick up. And then comes the big problem: how do you describe it when the robot reciprocates? What about the human side: how would you describe what the human sees from the robot, how you would experience and describe what the robot does to you? You also use the so-called intentionalist idiom, namely that the human sees that the robot has seen what she feels. Of course that is very practical, but the robot does not see; it simply does not see. This is the typical insistence of the philosopher on conceptual details, but they are very, very important. So what strategies are there right now for resolving the description problem? Some people say: well, we are reductionists; this is just a manner of speaking, and what the robot does and what we do as human beings is ultimately the same thing, a matter of neurons firing and nothing more. Then there is the constructivist strategy, where the same reaction is presented.
But now by saying: well, all concepts are constructed; nobody owns a concept or the semantics of a word, so we just extend the usage a little bit, so what? And then there is the so-called fictionalist strategy, which is popular in particular among social roboticists following Breazeal, who said at the very beginning: people should experience the robot as if it were a human being. So it is some sort of theater that we are supposed to play with the robot; we are creating make-believe scenarios. Still, it is a very interesting strategy, and there are some more sophisticated versions being developed at the moment. I go for the diversification strategy, and I say the fictionalist is wrong, because it is not only as-if: there are certain social capacities that the robot literally has, and we should not forget about those. The as-if as such does not tell you anything, and I want to draw attention to the fact that the as-if of simulation is not sufficiently researched. Okay, now another piece of motivation. If you were to look at the traditional notion of social interaction in philosophy, you would find that it is rather demanding: you need to be able to infer a mental state, you need consciousness to engage in a social interaction, and you need the capacity to follow a norm. That is how the philosophers have approached it, because their model was always human-human interaction. But as was noted in the late 1990s at the latest, there is a problem with animals: it appears that we do interact socially with animals too, but they do not have all of these capacities. And then, of course, there is a problem when we interact with robots. So the question now is where sociality really begins. Where do we want to draw the line? What is the difference between a purely mechanical interaction and a social interaction?
With two robots coordinating in a pedestrian zone, things already become a little iffy, right? With animals, with living beings, too. And then we have situations where we have even greater question marks: these two robots operate in accordance with norms, but they cannot follow a norm, and that is a big difference. So where does it begin? We have mechanistic correlations and normative coordination; where do we start calling it a social interaction, where do we call the participants social agents? I do not want to go into the video again; it is just drones figuring out how to keep their distance while flying next to each other, something like the so-called elevator distribution effect: when you step into an elevator, people continuously rearrange themselves so as to have the greatest distance to the other people, in particular in American elevators. So you might say: okay, you need to be aware of each other. But it is not sufficiently clear what the difference is between acting in accordance with a norm and following a norm. And there is the problem that so many different disciplines are involved in social robotics, and each one has a different notion of social interaction, so when you start out it is very confusing. Our strategy was to take the view that nobody is right or wrong here; they all picked up on a very important aspect of social interactions. What we need to do is integrate these notions into a more complex account. So we came up with the idea that sociality obviously has something to do with coordination, and that we can list coordination types on ten different levels, simply abstracting, filtering out, learning from the different disciplines that study social interaction. Let me say right away that whether there are ten or eleven or nine levels does not matter that much.
What matters is the idea that there are levels of coordination, and these are, as we will see in a moment, also levels of sociality. The first one is the lowest, where we want to acknowledge that there is a form of coordination that is not merely mechanistic: there is some sort of evolutionary imprint in it. That is the most basic level. Then we move a little up: when we negotiate social space, there is the individual conditioning of a living organism that creates its own level of coordination. We learn how close we can be to someone in a given society and what distance we need to keep; navigation distances, but also modes of turn-taking, have slight variations in different societies and cultures, and you have examples here. At the next level we move up to conventional forms of behavior, to etiquette. This is, incidentally, one of the rare occasions on which Angela Merkel shows a real, authentic smile: in several photos, the moment she meets robots she has a genuine smile on her face; you can find them online. The next level, here, is implicit practical understanding. And then we move higher up, and it gets a little more complex. The so-called social phenomenologists say that we actually learn directly from what we perceive how to proceed, what other people are doing; it is a matter of direct perception. We do not calculate that somebody wants to open the door; sometimes we do, but often we just see at a glance what the other person is up to, and that is direct perception. And then there are the theories of so-called mind-reading or mentalizing, which are more based on inferences or simulations.
So we simply put those in as two different levels. Animals too, as you know from some of the research, draw inferences about other animals’ intentions. Then we have, further up, the instrumental understanding underlying coordination: in a collaboration you do your part because there is a certain convention, you go along with conventions, and you have a more elaborate explanation for why you do so in terms of a folk theory of beliefs, desires and intentions. And then we have the last three levels, where the collaboration is motivated either by completely egocentric intentions or, at the next level, by the understanding that you can only serve your egocentric intentions by carefully aligning your plans with those of your coordination partner, so that there is a bit of give and take on both sides. And finally, the last, highest level is when you collaborate as a team, when there is the very clear sense that a new subject, a ‘we’, is involved. These are all distinctions that are made in the so-called theory of collective intentionality; these are its finer points. Again, it does not really matter precisely where the lines are drawn. What we have taken is just the idea that all the disciplines that analyze social interactions identify a very valuable and important aspect of a social interaction, and that the best way of proceeding is to bring them into one classificatory scheme that we call levels of coordination, levels of sociality. Now comes the next part: robots, as you know, can mostly only simulate certain capacities. The very low-level capacities of coordination, based on machine learning if you want, are something they can realize as well as humans; otherwise they are better or worse at simulating.
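The idea of ordered coordination levels, each of which an interaction partner may realize or merely simulate, can be sketched as a small data model. To be clear, the level names and the classification rule below are illustrative assumptions for this sketch, a coarse compression of the talk's ladder, not the actual definitions of the OASIS framework:

```python
from enum import IntEnum

# Hypothetical, simplified ladder of coordination levels, loosely following
# the talk's progression (the real OASIS framework has around ten finer levels).
class CoordinationLevel(IntEnum):
    MECHANISTIC = 0              # mere causal correlation
    SPATIAL_NEGOTIATION = 1      # keeping distance, turn-taking
    CONVENTION = 2               # etiquette, greeting rituals
    PRACTICAL_UNDERSTANDING = 3  # seeing at a glance what the other is up to
    MENTALIZING = 4              # inferring beliefs, desires, intentions
    TEAM_INTENTIONALITY = 5      # acting as a 'we'

def sociality_of(agent_a_max: CoordinationLevel,
                 agent_b_max: CoordinationLevel) -> CoordinationLevel:
    """Assumption: an interaction exhibits at most the level that BOTH
    partners reach (whether by realizing or by simulating it)."""
    return min(agent_a_max, agent_b_max)

# A human can mentalize; a delivery robot only negotiates spatial distance.
level = sociality_of(CoordinationLevel.MENTALIZING,
                     CoordinationLevel.SPATIAL_NEGOTIATION)
print(level.name)  # SPATIAL_NEGOTIATION
```

The design choice of an `IntEnum` makes the levels directly comparable, which matches the point that these are ordered levels of sociality rather than unordered categories.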
And strangely enough, the notion of simulation, to my knowledge, has not really received the attention that it deserves. In the first instance, simulation is a similarity relation between two processes; it is very simple. So what we need to do is acknowledge that the traditional notion of reciprocity must be given up if we want to describe what happens between the human and the robot: we must acknowledge that something can be a social interaction even if one partner realises a certain level of coordination and the other only simulates that level. Then, since simulation is, as I said, just a similarity relation between processes, you can say that the structural similarity between the two processes can hold along all the functional parts of the processes, or only along certain parts, and that gives you roughly an overview of degrees of simulation. Very bad simulations are extremely coarse-grained, and at the other end you have functional replication, which is the most fine-grained type of simulation. A high or low degree of similarity between the two processes gives you different degrees of simulating. That is useful, because we have so many robots, and they are better at some things and worse at others in terms of simulating. So when you describe, say, the smile of Kismet versus that of Sophia or Pepper: do they both smile? Yes. But what they do is rather different. Even if we only talk at the behavioural level, without looking at the precise software architecture behind it, if you just look at the way they realise the interplay, the way they produce a smile, these are very different smiles.
So when we describe what is happening between the human and the robot, and when we describe what the robot is doing without immediately reaching for the intentionalist idiom, we need to carefully distinguish these degrees of simulation. Here you have a very simple interaction. Somebody recognises somebody else and notices the other one; then the first smiles, the second raises her eyebrows; the first one says hello, the second says hello too: your typical greeting, right? Now you can set up the interaction matrix, and then an extended interaction matrix, where you not only list the realisations of the interaction parts, but also list, on the robot's side, the different ways in which a robot could simulate each part: a functional replication of, say, the recognition, an imitation of it, a mere display of the expression, or an approximation, so worse and worse types of simulation. Then you could say: take Kismet. All of this is for the sake of the argument, right? It is not so well researched, so you may criticise it. Kismet has, I fear, a rather bad recognition module: there are certain things it can recognise and others it cannot, so it is only displaying. Even worse is its smile: it just makes a smile-like shape, and it is dismal. But Kismet actually has a very nice voice box, so when Kismet says hello, this is an imitation. Now we can contrast this with Sophia, who could simulate her interaction parts as follows. Let us say, for the sake of the argument, that her recognition module is certainly good, though not as good as some enthusiasts want to claim, and that her voice box, the phonetic production, is rather bad. So you can see that, in a sense, they do the same thing.
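The extended interaction matrix can be sketched as a small lookup table. This is a hedged illustration: the degree labels follow the talk's ordering (approximating, displaying, imitating, functionally replicating), while the scores for Kismet and Sophia are the speaker's for-the-sake-of-argument estimates, not measurements.

```python
# Ordered from coarsest to finest-grained simulation.
DEGREES = ["approximates", "displays", "imitates", "functionally replicates"]

def degree_rank(label: str) -> int:
    """Higher rank = finer-grained, more faithful simulation."""
    return DEGREES.index(label)

# Robot -> interaction part -> achieved degree of simulation.
interaction_matrix = {
    "Kismet": {"recognise": "displays", "smile": "approximates", "say hello": "imitates"},
    "Sophia": {"recognise": "imitates", "smile": "displays", "say hello": "approximates"},
}

def compare(robot_a: str, robot_b: str, part: str) -> str:
    """Which robot simulates this interaction part at the higher degree?"""
    ra = degree_rank(interaction_matrix[robot_a][part])
    rb = degree_rank(interaction_matrix[robot_b][part])
    if ra == rb:
        return "equal"
    return robot_a if ra > rb else robot_b
```

For example, `compare("Kismet", "Sophia", "say hello")` picks out Kismet, mirroring the point that both robots "do the same thing" while differing sharply part by part.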
But with this very simple classification you can actually describe, in a rather differentiated way, what they do and where they differ in their simulatory capacities. You have the two greetings, and simply by inspection you can read off the simulation degrees of the different interaction parts. Not surprisingly, you can then say: the lower the level of sociality involved, the higher the possible degree of simulation. When it comes to social navigation, to primitive forms of spatial navigation, I think robots actually realise this sort of coordination at the moment. But the rule of thumb that the higher the level of sociality, the lower the possible degree of simulation, does not quite hold anymore. I skipped a slide here; it shows, so to speak, the algorithm that you run: you combine the simulatory extension matrix with the levels of sociality, and that is how you describe in greater detail what is going on on the side of the robot. Now comes a part that is particularly important, and perhaps also the part that might be most interesting for the research of those of you sitting here. We often overlook that social interactions are dynamic relationships that are very different from, if you want, merely physical relationships. For a physical relation there is a truth of the matter that does not depend on my perspective: whether I am one metre or two metres away, everyone could agree on that. Social interactions are different. They have a different kind of truth: there are seven perspectives involved, and in order to describe what an interaction is, you need to include all seven perspectives. Consider: why is White holding out her hand for the greeting?
White, again not reflectively, not consciously, but as a sort of automatism, immediately has certain expectations about what the uptake of that action will be. She knows: if I hold out my hand, Blue will understand it as a greeting. So she takes the second-person point of view, imagining, as it were, what the other will understand of the action. At the same time, she fulfils an internalised action norm, the greeting. The same goes for Blue. And then there is an outside observer, society at large or the impersonal observer, perhaps a researcher: that is a seventh perspective. Only once you have described what is going on from all of these seven perspectives have you captured the interaction. That, in a sense, is the description of what is going on here, and the point is to capture all seven of them. Of course, it becomes interesting when one partner is a robot that is only simulating certain types of semantic understanding, or normative understanding of action norms, and so on. Now the human can speculate: I am holding out my hand; how clever is the robot? Is it clever enough to understand that this is a greeting? And the roboticist, who knows many things the human does not, will describe this line here: the action the robot produces, is it performed in such a way that the other will understand it as a simulation of the acceptance of the greeting, and so on? So with this very simple, coarse pictorial display you can capture how much congruence and how much incongruence there is between these action perceptions. But that is the fully perspectival account: you always need to spell out these seven perspectives. More interestingly, you can say that two people can interact with each other while systematically, as it were, misunderstanding each other, and from the outside point of view you can really see that. You can also say that there is still some sort of correctness to it.
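One way to book-keep these seven perspectives is as a small data structure: three readings per agent plus the impersonal observer. This is a hedged sketch; the exact assignment of three perspectives to each agent is my reconstruction from the transcript, not a fixed piece of the framework.

```python
from dataclasses import dataclass

@dataclass
class AgentView:
    own_action: str       # first person: what I take myself to be doing
    expected_uptake: str  # second person: what I expect you to understand
    norm_fulfilled: str   # the internalised action norm I act under

@dataclass
class Interaction:
    white: AgentView
    blue: AgentView
    observer: str         # impersonal / societal reading of the episode

    def congruent(self) -> bool:
        """Fully congruent: all seven readings of the episode coincide."""
        readings = {
            self.white.own_action, self.white.expected_uptake,
            self.white.norm_fulfilled, self.blue.own_action,
            self.blue.expected_uptake, self.blue.norm_fulfilled,
            self.observer,
        }
        return len(readings) == 1

greeting = Interaction(
    white=AgentView("greeting", "greeting", "greeting"),
    blue=AgentView("greeting", "greeting", "greeting"),
    observer="greeting",
)
```

A systematic misunderstanding would then show up as an `Interaction` whose two agent views mesh with each other but diverge from the observer's reading, making the incongruence explicit rather than hidden.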
The interaction is then in some sense correct, because the social interaction meshes: there is some understanding between the two agents. Nevertheless, what they do is very different from what the external observer sees, and that differs again from the fully congruent situation, where all the action norms involved mesh with each other and what the external observer sees is just the same as the norms the agents are acting on. Now, you can use this simple description scheme to describe different forms of sentimentalism. Sentimentalism is when the human misunderstands what the robot can do: the human overestimates, over-interprets. The robot is only approximating, so it is a very low-degree simulator, but the human misunderstands and thinks that the robot actually does have emotions, or at least something like light emotions. The human is thus attributing to the robot a high degree of simulation. I am almost done. Sociomorphing, then, is the new technical term we are going to introduce instead of anthropomorphising. It is simply wrong to claim, as the human-robot interaction literature does, that all interactions with robots involve anthropomorphising. Anthropomorphising names our tendency to ascribe human capacities to a non-human item, which can be an animal, a robot, or even a simple object: a sort of fictional ascription of human capacities, which we might then rationalise away, and so on. We are saying: no, it is not human capacities that are ascribed, but social capacities, and these may be non-human social capacities. So this is a more sophisticated contrast between how the two notions deviate from each other. And there is a host of qualitative data, in our research and also in other research, in which we can see that people sociomorph the robot.
That is, they ascribe coordination capacities to it without thinking for a moment that these are human social capacities. We have a couple of quotes here, which I will skip: lots of deliberations going through people's heads about what the robot can coordinate and what it cannot understand. But even when it comes to linguistic understanding, it is not a human understanding that is ascribed to it, and that is the interesting bit. And now comes the aspect that brings us back to our initial question: what kind of working scenarios are we going to create? Forms of sociomorphing actually correlate in interesting ways with the phenomenology that we experience. We all know there are these interesting differences in what it is like to be with a dog versus a cat versus a horse. Being with a stranger is different from being with a friend, and we all know that each of these feels different, right? I would just call that the 'what it is like', the experiential dimension: different forms of 'being with', as the phenomenologists call it. Being with a robot has a very distinct phenomenology, depending on how well the robot simulates which level of sociality. And that now brings it to a head. We have seen three different types of sociomorphing on the human side: the human always anticipates a certain way of being understood, and that correlates with the way in which the being-with, the sociality, is experienced. I will skip a couple of the more subtle points because we started a little late. But what is interesting is that when White tries to understand, when she asks, so to speak, 'what does Blue understand of me, how will Blue understand my action?', White also starts thinking not only about how Blue might understand her action, but about how Blue views her: what am I for Blue when we interact?
Of course we do that all the time: we build a model of the other and a self-model within the relationship. And that is something you can see very nicely in recipient design. Kerstin Fischer and I have found that there are these nice interactions, such that recipient design and certain assumptions of sociomorphing dovetail. To give you a simple illustration: consider how a cat understands what you are doing. The sober cat owner will say: my cat is only interested in food and in pleasant experiences along its skin, and I am for the cat nothing but the dispenser of these pleasant, highly egocentric experiences. The sentimentalist cat owner will say: my cat loves me, she comes to me, she has empathy for me and is very affectionate, and so on. And similar things happen with robots. So here, in a sense, is the summary. These basic ideas taken together, in particular the notion of sociomorphing, which is empirically accessible via recipient design, address the description problem, as I hope you have seen by now. We are getting tools for a very differentiated description of what the robot does, and at the same time we can use the same tools to describe, in a very differentiated fashion, how the human experiences that, across the different perspectives. And so, let us see, the last slides. This is what I would say is still the standard model, the R&D model, the research, design, and development model for robots: robots are produced in the technical departments, as the little arrow here shows, and then more or less dropped into human social interaction space. Then policy makers call on an ethics committee, mostly not so well informed about the technology, and they say yes or no.
For instance, the Danish Council of Ethics simply decided in 2011 that it was, quote, 'too undignified' for human beings to interact with a robot, and for the longest time, for that reason, care robots, in particular also Paro, were banned from many nursing homes. That was simply a decision by the theologians and the philosophers. Instead, we suggest that this is not the way we need to approach matters. We should take all the different disciplines together: social reality is so complicated, so complex, that you need all the different disciplines in your research, experimental and conceptual alike. And what is most important: do not think that you are producing an object. Start with the idea that you are producing a new social interaction. That is what you put into social interaction space: not an object, but an interaction, or of course a set of interactions. And let this whole enterprise be guided by values. Start with the values. We do not know what we are doing; we have no idea. Deep ignorance, deep ignorance. So what we need to do, and this is a rough flow, is to start with a value analysis and a sociality analysis. In the sociality analysis we use this OASIS framework to say what kinds of interactions are performed here, where people understand or misunderstand each other, what happens from the first-person point of view, what happens when we put the first prototype in, and so on. But let us start with the values. And here comes the message that Brendan fortunately liked, namely that you cannot do without the philosophers, or any other value experts, because values do not have the structure that engineers and scientists normally like. They are not something like standards, where you can say: here is my standard, it generates this and this rule, and here is the object I have produced that fits the standard.
This kind of top-down approach does not quite work for values, because values, in these interactions, are inextricably contextual and dynamic. You may have one interaction that is supposed to support one value but ends up negatively affecting other values that you would also like to preserve. These concrete interactions in our scenarios are simply complex; their dynamics and change require certain very patient forms of analysis that the anthropologists, mostly, are trained to perform, in interaction with philosophers who spell out the values and how they are realised. There is a wonderful paper I very much like, 'The Uselessness of AI Ethics', and it says nothing else than what I just said: you simply cannot do it top down. It is good that we have these rules and risk assessments and so on, but ultimately you have to start with the context and the values within the context. So the situation right now is that we focus very much on functionality. But given that we do not know what kind of disturbances we will produce within social reality if we focus on functionality alone, our message is to say: why don't we start with the values? If you try to maximise, or at least preserve, certain values, you cannot go that wrong. You will still be surprised, but putting value-preserving and value-enhancing intentions at the very beginning gives you a chance to do a little better. And that is why we always start with the non-replacement principle. We say: robots may only do what humans should but cannot do. My philosopher friends were at first shocked, because they said: well, what do you mean by 'cannot'? And that is exactly it: that is how we need to start. In the context of the intended application, the nursing home, the transport robot in nursing, we need to start with this principle and find out jointly what we mean by 'cannot' here. It is a heuristic principle: a principle that is supposed to get a value discourse going.
And what we have seen, and I will do this quickly so we can talk more about it in the discussion, is that robots, in particular this robot here, have a special effect. As you might know, people experience all sorts of robots as reducing the so-called social desirability bias: people feel that they are less judged in dialogical interaction with a robot. Here the effect was particularly strong, because this robot has no gender and no age, so it feels funny when you talk to it. So what we asked was: is there something that humans should do but cannot do, where this robot could be useful? And we said: well, humans cannot lack gender and they cannot lack age, so why don't we try it out and see what happens if, for instance, we use this robot in job interview scenarios. In Denmark, people from ethnic minorities have a very bad chance of getting a job; but if they are represented by the robot, used in teleoperated fashion, they have a much better chance, at least of getting into the next round of the process. We also tried out the robot in conflict mediation, where gender also plays an important role in certain conflicts, right? If the mediator is a man, the woman thinks it is hopeless, and vice versa; and similarly for job interviews, and so on. The next slide shows perhaps the nicest scenario. One of our nicest results is that in a hostile negotiation, with the robot as mediator, people created not only more integrative, win-win solutions, but also more creative ones. So if you are still not convinced that you need us philosophers, ask yourself: could you define what autonomy, dignity, and trust are? Could you specify how these values are realised in a social interaction in certain contexts, right? And could you do so together with an anthropologist and value expert?
And I will be honest: I could not. This is why you need us; we need that collaboration. To come back to our initial question: in order to find out what it is that we are going to do, which new work scenarios we are creating, with OASIS you would ask, for instance: precisely how does a person in a hybrid team, working with an AI or with a robot, overestimate or underestimate the robot's capacities? How is the person affected by the way they understand, or want to understand, the robot's actions? What effect does that really have on the self-understanding of the agent? And in particular, in scenarios like this one, where the woman is really engaging in a sentimentalist over-interpretation of what the robot can do, we as societies can decide. Taking, as it were, the perspective of the external observer, we can ask: do we as societies want to allow for that, or should we maybe step in and try to create technological literacy for that poor woman who is deluded about what the robot can do? And in particular, when should we as societies actually step in when we see that there is, in general, a wrong framing of what the technology can do? This is what I would call social sentimentalism, and right now, I would say, we are engaging in social sentimentalism as long as we say that robots can 'understand', can 'experience', and so on. And that was it. I think these are the takeaway messages. Very simply: it is important that we are aware, all of us together, of the risks and potentials of AI. I think we will all do much better, and feel much better, if we work with a value-driven approach, in particular not so much for research but for application development, for real application development. So: participation instead of top-down regulation. And I have seen that you also talk about co-creation in your group, right?
So this is very similar. And I think that the simple, but nevertheless useful, conceptual apparatus that I presented today, which allows you to produce rather detailed descriptions of these interactions, might facilitate the value analysis and also the descriptive analysis of which forms of sociality we want. That, I would say, is my closing word. Thank you.

Descriptive ontological framework for social robotics

Seibt serves as the Director of the Robophilosophy Research Unit at Aarhus University, which conducts interdisciplinary philosophical and empirical research on human-robot interaction and robotics. In her talk, she will be presenting the recently developed descriptive ontological framework called OASIS, which stands for the ‘ontology of asymmetric social interactions.’ This framework can be used to examine in depth which forms of sociomorphisation are suitable for a planned application.

Distinguishing between ten levels of sociality and five degrees of simulation, OASIS delivers highly differentiated descriptions of human social interaction with robots and AI, which are needed to help users adjust their expectations of the reliability of AI and perform ethical risk assessments of AI.
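The two dimensions of OASIS, levels of sociality and degrees of simulation, can be combined into a simple descriptive profile per robot. The sketch below is illustrative only: the article states just the counts (ten levels, five degrees), so levels and degrees are indexed numerically, and the profile of the toy robot is invented for illustration.

```python
N_LEVELS = 10   # levels of sociality, 1 = lowest
N_DEGREES = 5   # degrees of simulation, 5 = most fine-grained (functional replication)

def describe(profile: dict) -> list:
    """Render a robot's simulatory profile as human-readable lines.

    `profile` maps a sociality level (1..10) to the achieved degree of
    simulation (1..5); missing levels count as 0, i.e. not simulated at all.
    """
    lines = []
    for level in range(1, N_LEVELS + 1):
        degree = profile.get(level, 0)
        assert 0 <= degree <= N_DEGREES
        lines.append(f"sociality level {level}: simulation degree {degree}/{N_DEGREES}")
    return lines

# Hypothetical profile: strong at low-level coordination, weak higher up.
toy_robot = {1: 5, 2: 4, 3: 3, 4: 1}
report = describe(toy_robot)
```

Such a profile makes the talk's rule of thumb inspectable at a glance: the degree column tends to fall as the sociality level rises.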

Lecture series on interpreting the environment as a joint effort

The talk is part of the ‘Co-Constructing Intelligence’ lecture series. The universities of Bielefeld, Bremen, and Paderborn are working together on this lecture series. Philipp Cimiano is organizing the series, along with Professor Dr-Ing. Britta Wrede, a computer scientist at Bielefeld University, Professor Dr Michael Beetz, a computer scientist at the University of Bremen, and Professor Dr Katharina Rohlfing, a linguist at Paderborn University. The lecture series is offered as a collaborative research initiative from these three universities. The consortium uses the principle of co-construction to adapt robots’ comprehension and capabilities based on those of humans. The researchers are thus working to create a foundation for flexible and meaningful interaction between robots and humans in everyday life. The term co-construction refers to the dynamic in which the interpretation of the environment and execution of actions take place as an interplay between humans and robots.