Whether smart home, music and video streaming, navigation, or facial recognition: artificial intelligence (AI) shapes everyday life in many ways. But how transparent are AI systems, and how are the companies that produce them held to account? These are the questions Professor Dr. Joanna Bryson of the Hertie School in Berlin works on. The expert on ethics and technology shares insights from her research in her lecture in the series "Co-Constructing Intelligence", offered jointly by the universities of Bielefeld, Bremen, and Paderborn.
"Artificial intelligence offers us a bundle of techniques that strengthen our ability to navigate newly emerged information spaces. These spaces have arisen through substantial improvements in digital methods and infrastructures," says Joanna Bryson. "But who profits from the new possibilities of analysing the data in these digital information spaces? And what is the price?"
In her lecture, Bryson presents her recent research on how transparent and explicable AI systems are. She also discusses her studies on the governance of the companies and other organisations that produce AI systems. Bryson further addresses transnational dynamics "that may obscure or even endanger our capacity to act". In addition, she examines how the commitments to AI transparency and human-centredness agreed worldwide in the UNESCO Recommendation on the Ethics of Artificial Intelligence can be implemented. Bryson takes a clear position on applying ethics to AI: ethics and responsibility, she argues, only make sense between peers. AI systems, however, are human-made, and as artefacts they are therefore not peers of humans.
So normally, when we coordinate like this, it's quite often because there's some other group we're coordinating against; in biology this is called multilevel selection. However, I don't think we should assume that's the only reason we ever cooperate. I do think the problems we face as a species are sufficient that we might be able to coordinate together as a species.
So I just really want to emphasize: a lot of people go around asking, what is AI going to do to us? What is the future going to be? As if there were some single thing that could be researched. Some people get tens of millions of euros to do "the science of machine behavior," to find out what "it" is going to do. That's just false. Individual decisions absolutely matter. What do you do? What do you spend your time and money on: engineering, or science, or advocacy? You are all choosing what kinds of innovations you contribute to. And we can see this too in the regulatory decisions people take about which outcomes their societies will accept.
So, for example, China is very worried that it is becoming too expensive and no longer competitive. And so they actually have laws saying that what you should be doing with your AI is reducing the cost of human labor. It is basically the idea that AI is supposed to commodify labor so that China will be inexpensive again.
Whereas when we look at what Germany is doing, the same companies that invest in automation tend to be the same ones that invest in retraining. One recent paper, which the authors had been working on for about a decade and which circulated as a preprint until it finally came out this year, found that 70% of the people whose jobs go away wind up with better jobs: more cognitive jobs, with better pay. However, about 20 or 30% wind up being made redundant, as they say in Britain. So automation is also, basically, an opportunity to get around difficult labor laws and lose the people who don't want to move forward, or whom you don't consider the best, or whatever. So it is true that some people lose out.
The same holds in other places, like the Netherlands. But anyway, I think part of the reason Germany mostly manages to make this genuinely benefit people is that there is strong labor involvement: not just the law, but the fact that you have labor representatives talking at the executive level. Speaking as someone from Britain: Britain has stronger labor laws than the US, and certainly university professors are not unionized in the US, but British unions are often maneuvered into working against their own interests. When labor is actually in the boardroom, in the C-suite, then I think you get better outcomes. I'm very impressed by the way things work here.
Anyway, speaking of such things, a lot of people worry that when you bring automation in, wages are going to go down. And that's been repeatedly shown not to happen, with one particular exception. I really, really recommend the work of James Bessen at Boston University, who does a lot of great work on this: when you bring new technology in, employment often actually goes up, because you've made each employee more valuable, so more people can move into the area that's been automated. But what this slide shows is the exception: for routine tasks, the kind of thing you would think a robot would do, if you're not being defended by a union, salaries went way down. The speaker was showing this to tell us how important unions were.
And as someone who actually knew about AI, in a room of mostly labor people, I said: excuse me, why does this data just stop in 1995? And he said, well, we economists think nothing much has happened since 2000. Scarcely acknowledging what actually happened in that period. I think there is a big confound here, and it is the entry of China into the World Trade Organization: the same routine tasks that people think are the kind a robot would do are also the ones that are easiest to outsource, and they hit the same group of workers.
And when you look at Germany: when did wages come down? Guess when wages suddenly started coming down in Germany. When did you suddenly have access to a whole lot of new employees? 1991, exactly. The opening of the East, including East Germany, let alone Poland and the others. So, yeah. That was already the first part of my talk.
So we've gone through a bit of the decentralization-versus-empowerment theme, which is going to be the framing for the rest of the talk. Then I'm going to talk a little more about the humanism kind of perspective. Then I'm going to come down and ask: what kinds of actors can we expect to regulate AI? And finally I'm going to talk about polarization, actually, and about antitrust. I should add that I have some extra slides at the end, so if you have questions I can go into more detail, but I was worried about running over an hour. And you can find a number of papers where I talk about this kind of thing.
So, I mean, you guys are computer scientists, I think, mostly. How many people here actually did an undergraduate or graduate degree in computer science? A few, okay. And how many people have done combinatorics, something on computational complexity? A few. Okay.
Well then, first of all, just trust me: computation is a systematic transformation of information, which means it's a physical process. It takes time, space, energy, and infrastructure. That's why you go and buy more memory, and why you have to plug in your computers. It's not some magic, spontaneous access to the ether through the pineal gland; it's a physical process. So we're never going to get to the point where we have absolute omniscience, and we're not going to be able to predict the future. We can only get better and better at that, never perfect. The only perfect model of the universe is the universe.
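The claim that computation has physical costs is easy to demonstrate directly. The sketch below is my own illustration, not from the talk: it times one and the same abstract operation, sorting, at growing input sizes and reports how much time and memory it consumes.

```python
import sys
import time

def cost_of_computation(n):
    """Sort a list of n integers and report the time and memory used."""
    data = list(range(n, 0, -1))   # reverse-ordered input
    start = time.perf_counter()
    data.sort()                    # the computation whose cost we measure
    elapsed = time.perf_counter() - start
    memory = sys.getsizeof(data)   # bytes occupied by the list object
    return elapsed, memory

# The same abstract operation costs measurably more as the input grows:
for n in (10_000, 100_000, 1_000_000):
    elapsed, memory = cost_of_computation(n)
    print(f"n={n:>9,}: {elapsed:.4f} s, {memory:,} bytes")
```

No matter how the constant factors shrink with better hardware, the costs never reach zero; that is the sense in which omniscient prediction is physically unavailable.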
There's actually a name for this: the perfect-map problem. The perfect map of Germany is the same size as Germany. I have no idea why it's stated in terms of Germany; someone should figure out the history. So anyway, intelligence is just a subset of all the kinds of computation, the subset that specifically maps context to action. Intelligence is the stuff that does what's appropriate: you're not just rolling down a hill because you're a rock being pushed, you're adjusting to your actual context. And each intelligent life form perceives only a subset of the universe. I once had a big argument about this with Chris Bishop on stage, of course.
It's actually embarrassing, because he's a super smart guy, but he really didn't believe me that AI was already smarter than people in some ways. The one thing that made him go, "oh, you're right," was the fact that with cameras we can detect every wave frequency of light, while humans only perceive a small subset: we don't get infrared or ultraviolet, we just see the visible band. All biological life takes in only the subset that it specializes in, that it is constructed for.
It's funny, because Google illustrates this. Google uses its own chips, and I think more companies are getting into their own chips now. But Google also has its own fiber optic cables globally, because they know all this: Snowden showed them that their own government had hacked them, so they couldn't trust anyone. If somebody else is in the same fiber optic cable bundle, they can't be sure everything stays secret, so they protect their own. And then they also have these giant buildings, former paper mills and the like, where they keep their servers.
Right. A lot of people used to think AI was algorithms; now they think it's data, and that data magically transforms. I call it the Rumpelstiltskin fallacy: the idea that you magically spin data into intelligence, and however big your pile of data, that's how big a pile of intelligence you'll get afterwards. But it's more than that: there's an awful lot of infrastructure required to make all these things happen.
And I really think we need to start reasoning about this stuff as a transnational asset. We'll talk about this a little more later, but the point is that all these companies, maybe not the Chinese ones but certainly the American ones, use talent sourced from across the world and data sourced from across the world. And, like I said, they have their own communication networks spanning the world. So I don't think we can reason about such a company as only an American company.
Okay. So artificial intelligence is intelligence somebody built, deliberately: an artifact. Some people say that's not really AI, that AI has to be an agent. Well, again, there are a lot of definitions of "agent". If you read the philosophers, Donald Davidson and so on, "agent" means quite a lot. But there are also agents that merely execute on someone's behalf, and there are chemical agents: in a chemical reaction, the agents are the chemicals that make the reaction happen. So we could go through a whole class of words and argue about what they mean. But there are two terms I find really super useful.
They're out of philosophy. Moral agents are the things a society really holds responsible: not just things that change the world, but things that are responsible for what they change. And you have to realize these are set by society, and societies differ. There are different ideas of how old you have to be to be an adult, how old you have to be to consent to sex, to consent to marriage, to fight in a war. These are things that a society, and sometimes a family, decides.
Similarly, moral patients are the things that are considered the responsibility of that society's agents. So, for example, people are starting to realize that the environment is our responsibility, where it used to be seen as just something that was there, that was provided. So this stuff is definitional; in fact, in moral philosophy the whole term "moral patient" is not that old, maybe fifty years or so. This used to be controversial, but now people are kind of taking it for granted that ethics is something that is determined by, and determines, a society.
So I found out that a lot of philosophers hated me for saying this stuff, which I didn't know, because I don't read enough philosophy. It turned out it was because they thought they saw cultural relativism in it. They said: no, I want to be able to say that our society is more ethical than it used to be. Well, you can still say that; you just have to say by which ethics. Our society is more ethical because more people have access to work or education, or because infant mortality has gone down. But there is no single absolute standard. And I would argue that the ethics we come up with is something we're constantly working on: just as we can't have perfect intelligence, we also can't have perfect fairness, and we keep adjusting.
So, you know, there's this result that you can't have both equality of outcome and equality of opportunity unless you start from an already perfectly fair society. You could illustrate it with gender or something, but let me work through the Chinese example. China worked really hard to try to break the link between elite families and academic achievement. At one point you couldn't even go to university unless your parents were agricultural workers, worked in the military, or worked in factories. But then they found themselves falling behind the Soviet Union in terms of weapons systems and the like. So they said: no, fairness is equality of opportunity. And even though there had been an entire decade of suppressing the old elite, once going to university went back to equality of opportunity, decided by a test, the elite all did better on the tests again and were back in the dominant position. So you have to choose one notion of fairness, and I'm not advocating either Chinese solution; I'm just saying you don't get both. Okay, so let's talk a little bit about responsibility.
I would argue that responsibility is relational. It's a defining feature of being a moral agent: that's what it means, that we hold each other to account. You're part of the society, and we expect that you're going to do certain things; that's what responsibility is. And I would argue this requires a peer relationship, because otherwise we couldn't force each other to do the things that help each other. And "force" isn't quite the right word: trust is when you allow others to do things differently, when you don't micromanage them. But both of those imply some kind of peerness, some kind of fairness. Accountability, then, is a society's capacity to trace responsibility. That's easier, and it's the thing we mostly want.
So I don't like the term "responsible AI" either; I prefer to talk about accountability and transparency. Transparency is the means by which you can attribute accountability. But some people think transparency, like open code, is itself the goal. Yet just like everything else, transparency has limits: too much transparency is not transparent, ironically. You can't figure out what's going on if you have millions of lines of code. What you want is accountability, and once you realize that's the goal, you can figure out what good transparency looks like and how much effort to put into it. All right. So why shouldn't an AI be your peer, something you co-create the world with? Nothing I've said so far really explains why not. The reason is basically that the whole way we keep our system of responsibility going is through a system that's really more about dissuasion than recompense. A lot of people think the law works like this: you did something wrong, you stole my bike, the police go and find my bike and give it back to me. Okay, occasionally that happens, but mostly it doesn't; that costs money and effort.
Usually, at most, there's a fine or something that might happen. And, you know, I am blown away when I watch TV, where there's something like a murder, and then a person is found and sent to jail, and people say: well, I'm glad there's finally justice. There is no equivalence between someone going to jail and you having lost a member of your family. That's not recompense; it's about dissuasion. But we have co-evolved with our system of justice so much that we feel like we got something back when that person goes to jail. And to some extent we have got something: a little more security, because other people might be dissuaded from doing the same thing. Right. So: safe, secure, accountable software systems are modular.
And I don't think we're ever going to build an AI system that suffers from isolation or loss in a reliable way: such that we can reliably say, this is secure software, we can count on it, hold it accountable, be transparent about it, and it also really cares if you put it in jail or fine it. Not in the essential way that not just humans but guppies and sheep and animals like that care, for whom the worst thing that can happen is to be isolated. And it's strange that only in the last decades have we finally acknowledged that solitary confinement is a form of torture.
Right? So: no penalty of law for artifacts, including, by the way, shell companies. We permit a lot of things like that for reasons of efficacy, but the humans are the moral agents, and we have to hold them to account; we need to be able to trace through the artifact to them. And my argument is that if we really hold the corporations effectively to account, then they will create appropriate levels of transparency, because it's their job in court to prove due diligence. The basic thing that happens is that something goes wrong, and you want to show that you did everything in your power; then you tend to get let off the hook, and the tests are pretty straightforward. But if you did something wrong, if you didn't do the best you could, then you wind up paying the costs. This is the way we keep order for most products.
So enforcement has to be against the humans with executive control. And if you don't know why I'm saying that last part, look up the really evocative phrase "moral crumple zone": the idea that you have someone who goes to jail but actually had no control over the system, so everyone can say, look, someone went to jail. No.
Okay. So a lot of people think we can't be transparent with AI because there are, like, 30 trillion weights and nobody can read enough of them. But we also have accountability and transparency and audits in banks, and banks are full of people, and people have trillions of connections in their brains too, right? Nobody goes into a bank and asks what the neurons and synapses in the brains of the employees are doing. So what we need are audits, and this, incidentally, is what European law does: the audits in the AI Act and the Digital Services Act are procedural audits. Not "show us what the weights are doing", but: how do you know that the system works? Did you go through best practices? And going through best practices, doing due diligence, is a bar that keeps ratcheting up. That's why you don't have to worry about how the law can possibly keep up with AI. When we publish articles in conferences, or when trade journals say this is the best way to do something, that's what establishes what current due diligence is. So you don't have to change the law every time someone invents something.
Good, maintainable systems engineering of software includes knowing how you architected the system in the first place, right? The Future of Life Institute, whom I bashed on LinkedIn recently because of their slowdown letter: I was at a meeting, well before that, where one of them was standing there going, what if you decided you wanted something to learn how to play chess, so you told it to learn chess, and you turn it off every night, and it notices that you're doing this, and so it decides to stop you? Well, you know, if we build a chess program, it doesn't even represent day or night. There's no day and night in chess, so there's no way it could have noticed that, right? And it doesn't connect to a gun. What chess program has any way to grasp a gun, or any notion of what a gun means? And then finally, show me one program in the world that minds being turned off. Computers get turned off all the time, and in fact that's exactly when you want the training to happen, so it won't interfere with what you're trying to do. That's the best time for the learning to happen. This just shows complete ignorance, with a lot of money behind it.
Yeah. So anyway: you architect the system, and that determines whether you put a gun in it, for example. You secure the system: cybersecurity. I don't know if people remember, it's about three years back now, the SolarWinds hacks against a lot of governments. It wasn't their own software, and it wasn't the libraries they linked to; it was the libraries of the libraries they linked to. The attackers found their way in through the supply chain for the software. That's where it happened. And it looks like, at least according to what the Russians are claiming in recent leaks, that they are trying to do data injections into quite a lot of the social media systems. We do have to worry about that. I used to think data poisoning was just something that people liked to write papers about: who's actually going to do that?
Anyway: you need to keep secure provenance control of every change to the codebase. This is normal; it was radical in the 1980s. And for some reason AI companies often don't do it. Most software companies do, certainly all the companies that aren't purely software companies, including AI, but are actually automotive or medical or pharma or petrochemical. They all do this; it's pretty basic now. And yet, as I say, AI companies often don't. You talk to people about, say, the documentation, and they say, sure, that stuff is there. But you go into those companies and it's not there; you can't find those documents. And I didn't say that AI is transparent; I said it's easy to make it transparent. But if it's not there, if they don't keep the records, that's culpable, right? That's what I'm trying to convince people of, and it's going to be true under the AI Act, at least for the high-risk systems.
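To make "provenance control of every change" concrete, here is a toy sketch of the underlying idea: a hash-chained, append-only change log, a miniature of what a version-control history provides. The class and field names are my own invention for illustration, not any real tool's API.

```python
import hashlib
import json
import time

class ChangeLog:
    """Minimal append-only provenance log: every change is hashed and
    chained to the previous entry, so any tampering is detectable."""

    def __init__(self):
        self.entries = []

    def record(self, author, description):
        """Append one change record and return its hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "author": author,
            "description": description,
            "timestamp": time.time(),
            "prev": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry["hash"]

    def verify(self):
        """Recompute every hash; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("author", "description", "timestamp", "prev")}
            if e["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

The design point is the one made above: once such records exist, an auditor does not need to read the system's internals; the absence or alteration of the records is itself the culpable fact.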
So: testing before and during release, right, especially if learning is changing the system after deployment. This is already done in decently regulated industries. When something goes wrong, we can see what happened. Like, we know what the car thought it saw when it hit the lady who was pushing her bike across the highway in front of the autonomous car. It was on the front page within three days: the system didn't recognize her as a pedestrian, she wasn't where a pedestrian was expected and had things hanging off the bike; then it saw the bike and thought there was a cyclist going in the direction of traffic; and then it really had no idea what was going on, so it alerted the driver, and the driver had half a second to react. We know all of that because a car manufacturer logs this, and the rest of us in AI and software and robotics should be doing it too.
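The "test before and during release" point can be sketched as a simple release gate. This is my own minimal illustration under stated assumptions: the function, the golden-case suite, and the threshold are invented for the example, not a description of any manufacturer's actual process.

```python
def passes_release_gate(model, golden_cases, min_accuracy=0.95):
    """Run a fixed suite of 'golden' cases against a candidate model.

    If a learning system changes (retraining, online updates), the same
    gate is re-run before the new version ships, so regressions are
    caught and logged rather than discovered in the field.
    """
    correct = sum(1 for inputs, expected in golden_cases
                  if model(inputs) == expected)
    return correct / len(golden_cases) >= min_accuracy

# Hypothetical example: a trivial 'model' that classifies numbers.
model = lambda x: "even" if x % 2 == 0 else "odd"
golden = [(2, "even"), (3, "odd"), (10, "even"), (7, "odd")]
print(passes_release_gate(model, golden))  # True: 4/4 correct
```

The same gate run against a broken candidate (say, one that answers "even" for everything) returns False, which is exactly the audit trail the talk argues for: a recorded, repeatable check tied to each release.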
Okay. And the point is, all of this also benefits developers. That's why we started doing it in the 1980s: we weren't worried about software audits, we were just trying to know who changed what, and to be able to fix it if it went wrong. So it benefits us, and it's also auditable. So obviously, if you're really good with digital technology, transparency should be easy, especially if you're the top communication company in the world. That's why nothing like this ever happens, right? You know what I'm talking about here: not half of Google, but some huge fraction of Google, fell out with the rest of Google.
Right. So I know at least two of these people; I sat on a social media council with another one. That's Meg Mitchell down there. But this is not actually the case I want to talk about. The one I know a lot more about is the Google ethics council case. And again, there was the same kind of thing: Google had spent years putting it together. I had signed a contract months before you got notified in the news or whatever, and yet the council was shut down by a Twitter storm and a letter with a few thousand signatures. It's like, what is going on? Well, in fairness, this is non-trivial. Kay Coles James was one of the members, and people looking for ways to tear the council down used her as the objection. She was the first person of color, and the first woman, in charge of the Heritage Foundation, which is, yes, right of center, but not more than that. She was one of the only two African-American women in the Bush administration.
Right. So anyway, why would a communications company not be able to communicate, even internally? I'm not saying which side was right, but why couldn't they come to agreement, if we're talking about AI's capacity to help us coordinate? Well, presumably they could, up to the limits of time, space, and energy. But I think what was going on is that you literally had some actors whose highest priority was the fight itself. And people in California are weirdly afraid of losing agency; they really, really think that somehow they're going to fall far behind. And I guess, you know, they've all seen it happen: sometimes companies become unimportant. But then there were other actors who were literally hired to do nothing but ensure ethical integrity; that was their job. So maybe this was less a failure of communication than a logical impasse. As for the apparent breakdown of transparency, as I said: you can be as transparent as you want, but there are limits. I can't read the text on this sign without Google Translate, so I just took a picture; with any major European language that works well, otherwise it's going to be really hard. It probably says "duty free" or something. So, my point:
our information age means we need to handle new basic truths. I remember when Facebook was new, and for the first time you saw your friends not talking to you the way they talked to you, but talking in a totally different way to their other friends, right? Because people used different personas, and they all had to get merged into one, because everyone was seeing all the posts all the time. So we just had to handle that, the way parents have to handle the fact that their kids have lovers and whatnot. But it also means that we're all more empowered than we've ever been before.
We have all this information, and we can also move around; transport is less expensive. So my own opinion is that we need to think about governance and AI at the same time, because I lived through Trump and Brexit, and a lot of people during that time really believed government did nothing for them, even though they were sitting in a clean, safe, incredibly structured ecosystem. They didn't see where the government was.
So anyway, I think we need to communicate at a high level of abstraction what we are doing, and then provide drill-down points, so that if you want to know more, that's where you go. There's no way, again because of the limits of networks and communication and time and money and everything else, that you're going to make it so everyone can understand everything. I mean, some people literally cannot follow any of it; that's just the way it is. But we can do the best we can, and we can document what the barriers are. Maybe you're going to have to get a security clearance; maybe there are other gates. I mean, again, some of the people when I was on this council were saying there shouldn't be any councils, that everyone should make all the decisions. This was, like, a postdoc at MIT. I'm like, what can that mean? You have time to think about everything that's happening in every government, in every country? It was just really weird.
So anyway: which languages? Again, I don't know, this may happen to you too. You wind up in rooms where people are saying: these languages only have 10,000 speakers, so they're not going to have enough data to get translations as good as languages with 10 million speakers, and that's just the way it works, and there's nothing we can do about the unfairness. This goes back to fairness, where we were earlier. Although I know people are working on making this easier: if you have a good model of what language in general looks like, maybe you can use less data and do pretty well. But I think, to some extent, you'll always do better when you have more data. So if we compare this with polarization, which comes later in the talk, I think some of these unfairnesses will be more acceptable than others. I think part of what freaks people out is really the polarization.
But before I talk about polarization, I'll talk about nations. How are we going for time? I'm going to try to finish at five. So, a lot of people, again, I don't know how much this happens in Germany, I should have asked you more questions, we didn't have a proper coffee break: a lot of people in the US are saying the whole idea of states is over. Like, why do we need this decentralization? But actually, a lot of problems are local. I've had vaccination on this slide for more than three years, I promise you. So: air pollution, food and water supplies, general education, physical security, physical infrastructure. And even if you don't have children, what it's going to be like to live in your neighborhood is determined partly by the education that your neighbors' children have. You just have to recognize things like this.
And beyond just the local aspects, we also now have legal rights. The idea of the Universal Declaration of Human Rights, which every single nation has at this point acknowledged, is that every state — every country — is responsible for the well-being of everyone within its borders, whether or not they’re citizens. Some countries, including the one from which my accent derives, don’t even provide some of these stated rights for their citizens, let alone their residents — universal health care, for example, is one of them. But anyway, if we assume there are no failed states — a big assumption, for the moment — then the nations compose a full sphere of protection: everyone has at least one country defending their human rights.
Okay. One of the questions, though — and when you do talks like this, it’s one thing to be standing in Germany, another if you’re zooming into Vietnam — is that people say: you trust governments? So one question is whether that protection actually works. What if a government is corrupt? And what about companies?
A lot of people really expect that Facebook and Google and the like should enforce our Western attitudes in any place they go. At least it used to be that way — I’m not sure right now. One of my friends — and believe it or not, we are friends; we actually knew each other in Chicago well before either of us got into AI ethics — makes all his students read terms and conditions. And apparently Facebook’s terms once said that any national government may have all the data on any of their citizens — and that’s why Myanmar allowed Facebook in, at a time when Facebook was essentially the only Internet service there. Well, you know, some people don’t think you want to give all the data to all the countries.
Anyway, then there’s another question. You’ll now see some of the big tech companies at the UN, the World Economic Forum, and so on. But you don’t want to have giant monopolies — we’ll come back to that — and we don’t want to privilege them. So is there some way that we can represent ordinary companies, the little tiny ones, the startups and everything else? Can they somehow also be at the table?
I think — and this is not the answer, just a thought — maybe there is something we can do with NGOs and corporations, since we’re kind of enforcing things on them, and there’s a big question about who governs. Governments tend to think that governments themselves are the ones that should regulate each other. UNESCO has thought about this, and they essentially said: look, part of the reason governments do things like shut down Internet access is that those governments can’t just telephone Twitter or Facebook or whatever the way the Americans can. If they had regional coalitions, they could probably come to better solutions. I don’t know if that’s true, but that’s the way they’re thinking about it. As for industry associations: so far, at least in Europe, there may be one or two independent ones, but quite a lot of them are organized and coordinated by big tech. And I think a lot of people don’t realize that when they’re asking for things like interoperability, they’re often asking for little companies to be forced to use the data representation that one of the larger companies is using — but they don’t all hold the same kind of information; they don’t have the same business. So there’s this, again, false belief that all the companies have all the data and are just not giving it away. They’re not all the same.
Anyway — I took this picture myself, and a lot of these pictures, though not the Nestlé one. We’re one of the top three economies in the world, and it’s a little hard to tell which one sometimes. We’re leading on a number of global issues. I don’t even know if we’re still the leader in sustainability — some other regions have gotten really good at that — but now we’re leading more in digital regulation, which is kind of interesting, although China does its own version of regulating businesses and companies. And the fact that we don’t have companies that are more powerful than governments is not necessarily a sign of weakness. It might be a sign of strength. That’s not what Eric Schmidt thinks, though.
This is, again, one of those things: I might not have been in the room except that it was COVID time and I got invited, and it was a Zoom room. I was not a speaker — I was just writing in the chat going, what? And so I made this argument: first, that the digital economy is global anyway; and secondly, the thing I just told you about the Universal Declaration of Human Rights — a lot of what the GDPR is defending is that we’ve recognized that your data is an aspect of your person. You can be manipulated through your data just as you can be manipulated when somebody grabs your shoulder. Not exactly the same, but analogous.
So anyway, the reason this whole paper came out — the one you’re about to see — is that I was really sick of versions of this slide, the kind pulled off the Internet. I was sitting in front of one at another meeting, on antitrust and market concentration, where I had just met this woman. I didn’t even know what the topic meant at first — I’ve learned to go to meetings I’m invited to even when I don’t know what they’re about, because usually I’d be one of only two or three people there with any idea about AI, and maybe the only one not from industry, so I can provide some value. But this slide bothered us both: why are they only showing the top 20 companies? She turned out to study general competition, and — what counts as a platform? What is the definition? How do we know these are the largest? And notice that up here it says “Asia” — okay — but over here it says “China”. Do you know where Samsung is? Samsung is in Korea, and Korea is not exactly aligned with China most of the time. So anyway, we were complaining that this is basically misinformation.
So my coauthor went and pulled the data. First of all, she chose one part of the patent system: the WIPO global patent data, using a particular classification for computer systems. So this covers the whole world: she took every company that had filed at least two such patents in 2018. What’s on the x axis here is market capitalization, and on the y axis is the number of patents — so we’re just looking at both measures. The colors tell you the global region, and you can sum it up in your head. What you see is that China and Europe are not that different.
In fact, on patents Europe is much closer to China than people think. And notice the largest companies in Europe: it’s not big tech — it’s where the big pharma firms are, Hoffmann-La Roche, and I forget what the other one was, but there was another one in Switzerland as well. So there are a number of things going on here — anyway, there’s something funny going on.
The other thing is that all three of these regions are each about 20% of the world’s GDP, and the other 40% over here — the rest of the world — is basically double Europe, and there’s plenty of activity in there too. Patenting, I think, has become integral to the digital economy, which is integral to the economy, so you’d usually expect these things to be roughly proportionate — although something funny is happening with America. But it certainly doesn’t look like Europe needs anyone’s help against China right now, at least on these two measures, the measures that were on the graph. They’re not necessarily the best measures, but that’s what they are: numbers of patents against market capitalization. Sorry, it took a while to set that up.
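To make the method concrete, here is a small sketch of the aggregation behind the scatter plot just described — market capitalization and patent counts summed per region. The company names and all figures below are invented for illustration; only the group-and-sum method reflects the slide.

```python
# Hedged sketch of the two measures from the talk's scatter plot:
# market capitalization on one axis, patent counts on the other,
# colored (here: grouped) by global region. All values are made up.
from collections import defaultdict

# (company, region, market cap in bn USD, patents filed 2018) -- hypothetical
companies = [
    ("A-Corp", "US",     900, 1200),
    ("B-Corp", "US",     700,  800),
    ("C-Corp", "China",  500, 1500),
    ("D-Corp", "China",  300,  900),
    ("E-Corp", "Europe", 250, 1100),
    ("F-Corp", "Europe", 200, 1000),
]

def totals_by_region(rows):
    """Sum market cap and patent counts per region."""
    caps, patents = defaultdict(float), defaultdict(int)
    for _, region, cap, n_patents in rows:
        caps[region] += cap
        patents[region] += n_patents
    return dict(caps), dict(patents)

caps, patents = totals_by_region(companies)
```

On these toy numbers, Europe trails far behind on market capitalization but sits close to China on patent counts — the qualitative pattern the slide showed.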
When we looked at two other snapshots of these same measures, it turned out that Europe was actually ahead of China at that point, and interestingly, Japan was making up an awful lot of the difference in patents already. And then the funny thing: this paper was published in early 2021 — we wrote it in 2020 and it was getting passed around. Looking at the end of 2021, there’s what looks like some strange patent race going on between Japan and China; it’s really quite striking. And because of COVID, market capitalization in the US had just rocketed. I don’t think we’ve done the numbers for this year yet, but there’s a real question about what’s going on there. A lot of these numbers move with Tesla — why is Tesla worth as much as it is? I think people thought they were investing in Bitcoin or something.
But let’s talk a little bit about monopoly pricing. This is a really basic abstraction: the y axis is price, and the x axis is quantity — how much you can sell at a given price. This is supply and demand: if you charge too much, nobody buys; if you charge nothing, everybody wants it. The idea is that if you have a lot of competitors going to a market, the price gets driven down to basically what it costs to make the thing. People are getting paid, they’re getting salaries, but you’re not making a lot of profit — you’re just breaking even. That’s the idea of a world of competition. In a world with monopoly, you can pick some other price to sell at, because there’s no one you’re competing with. The advantage is that you make all this extra money — the gap between your price and what it costs to make the thing — and even though fewer people are buying, you get that money essentially for free. Now, economists are kind of neutral about whether the producer makes the money or the consumers get the value — the consumers presumably thought they were getting value, since they were paying — so they call that part a wash, which I don’t know how you’d settle. But they do say this triangle over here is simply lost: whether or not it’s okay for profit to replace consumer value, you definitely lose that part — it’s a deadweight loss. Okay, that’s just about all I know about that.
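The competition-versus-monopoly picture above can be put into a minimal worked example. Everything here is an illustrative assumption — a linear demand curve P = a − bQ and a constant unit cost c — not anything from the talk itself.

```python
# Minimal numeric sketch of the supply-demand picture described above.
# Linear demand P = a - b*Q, constant unit cost c; all numbers illustrative.

a, b, c = 10.0, 1.0, 2.0  # demand intercept, demand slope, unit cost

# Competitive outcome: price is driven down to cost.
q_comp = (a - c) / b           # quantity sold when P = c
p_comp = c

# Monopoly outcome: choose Q to maximize (P - c)*Q = (a - b*Q - c)*Q.
q_mono = (a - c) / (2 * b)     # from the first-order condition
p_mono = a - b * q_mono

profit = (p_mono - c) * q_mono                        # the profit "rectangle"
deadweight = 0.5 * (p_mono - c) * (q_comp - q_mono)   # the lost "triangle"
```

With these numbers the monopolist halves the quantity sold (4 instead of 8), earns a profit of 16, and destroys 8 units of surplus outright — the triangle that economists say is definitely lost.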
Okay — but, someone will say, when you talk to the pharma or commerce giants, they say that’s exactly what you need to develop products, that it’s good value going into the future. Yeah — so they think that. They think they need the profit to create new products. Again, I don’t know about that, because they’re also competing with each other — but there is that question. I don’t know. Let me show you what I know more about, which is the telcos.
The European telcos right now are claiming that our broadband situation is bad because they’re not allowed to merge and become more monopolistic. So we looked at this — this is another collaboration, done mostly from Holland, though we get people to help us sometimes — and again it was mostly my collaborator digging in and checking the data. First of all, there’s this whole question about who is spending more on infrastructure: as a percentage of revenue, the European telcos are actually already spending a lot more on infrastructure than the American ones. But let’s have a look at the size question.
First, I have to show you a diagram. Here are the users; then there are the people providing actual content; and then there are the telephone companies. The telephone companies provide the lines, but they would like to be content providers. The green ones are the ad exchanges, and the ad exchanges route advertisers to the companies on the front line; there are also other agencies in between, so if you want to advertise you don’t have to deal with the ad exchanges directly. Those colors are the key to the next chart — and you may already know that the picture is shockingly skewed in favor of these ad exchanges. Now, at the risk of making this confusing:
The x axis here is profitability as a percentage, and the y axis is growth. Basically, neither of the groups that actually make things is growing: the creators are the Netflixes, and neither they nor the telcos are really growing that much, because basically everyone already gets their news and everybody already has a telephone. So there’s not much growth, but there’s a lot of profitability. And this is why this whole line of research got set up: it’s unusual to have companies making that much profit asking for permission to do something otherwise illegal, like merge, when they don’t need to. So what are these guys really doing? Well, the average profitability for Facebook is about 50% — and that’s before infrastructure investment; most of these companies actually do invest in infrastructure. After they’ve invested in infrastructure, Facebook is still at roughly 45% profit. The European telcos — and this is what some economists will tell you — are of course less profitable; smaller companies are more likely to do what governments want, because they have less power against the strictures placed on them. So the telcos are making less money after their infrastructure spending, but they’re still making about the same amount as the people actually producing the content. And if you’re looking for extra money — if you think there’s money to be had that could be pumped into more broadband — there are players further to the right on this chart that you might want to think about how to tax.
All right. So, coming back to this whole theme of obscuring: much like with the paper — and it didn’t occur to me to look at this in great detail at the time — these numbers were from the IMF; those were the top-20 figures we were using in the paper. But then I saw this graph. And if you don’t know — and why would you — the IMF is usually chaired by someone French and the World Bank by someone American. Even the World Bank admits that in 2007 the Eurozone alone was the largest economy in the world. But then, suddenly, in some of these figures Europe shrinks — even though it has at least as many people as the US. And this is what the European Central Bank thinks, which to me looks plausible: Russia and China jump up, with China trying to finally become the largest economy, but really the European and American economies stay pretty similar.
So, sorry — where does the difference come from? Here, Europe with the UK is 20% and China 16%; over there it’s more like 16 and 25; and although this one is from 2019, on it China is nowhere near. They’re supposedly all measuring the same thing, and on one of them China is ahead. I’m just saying: these are three different numbers from three agencies that I ought to be able to trust, and they don’t agree. I just want to point out that even these basic things are ones people don’t agree on, so I don’t know quite how to think about it. All right — I’ve got three minutes, because I don’t want to cut into the questions; you guys are next, and I had a lot more slides. But I just want to point out that nobody emigrates to China — for all of China’s growth, nobody goes there willingly — and nobody willingly goes to Russia either. So why is it that people don’t want to move into an autocracy? Maybe because autocracies are not great for minorities. Although, like I said, Iraq used to be the best place around for freedom of expression — as long as you didn’t cross the Baath Party.
That’s speculation about governments, but I do want to say: a lot of people on the US coasts really want the whole world — Europe included — to take a hands-off regulatory strategy, with nothing allowed to diverge, and a lot of companies would love there to be only one set of rules. I think a better model is a layered one: there are some countries, like France and Germany, that work really hard at figuring out ever-closer union, and the countries of the EU can do that. Then there are other countries — Ukraine, Turkey — and I don’t know quite where they should end up, but for geographic reasons some countries will always sit in the space between a couple of blocs. They’re part of what links the world together; they’re not things that can be put in one place or the other. So I really think diversity is important, and we should expect multiple jurisdictions. What I do agree on, to the maximum extent, is standards: trying to make it as easy as possible to do transparency across different jurisdictions. But what actually gets enforced in different places will vary — and that’s fine, if we want innovation.
I don’t think I have enough time to do the polarization part properly, so I’ll just tell you really quickly: it’s not about culture, and it’s not about social media. There are tons of papers — literally including natural experiments; there are 409 articles, and there’s a book reviewing all the different articles. There’s no evidence that social media drives polarization: if you weren’t already polarized, the amount of time you spend on social media doesn’t change that. On the other hand, there’s a lot of data showing that polarization tends to correlate with inequality.
So we have a model to explain why, and I’ll describe it very quickly. Our model is just a model, but it’s the first to account for the data that was already known. Basically we know, as I mentioned before, that human diversity is something we have to accept about ourselves. Our speculation is then: why would you ever work only with an in-group? Probably because it’s lower risk. And why would you care more about risk than about expected outcome? Only if you’re right on the edge of losing something. If you’re worried about bankruptcy, losing a house, losing your children, that kind of thing, then maybe you can’t afford risk: you care more about the risk profile than about the expected outcome. That was a paper that came out in December 2020 — like I said, it’s just a model, but it’s the first to account for what was already known in terms of the data.
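The risk-versus-expected-outcome point can be illustrated with a toy simulation. This is my own hedged sketch, not the model from the December 2020 paper: a “safe” option with a lower mean payoff and a “risky” option with a higher mean, evaluated by an agent who only cares about staying above a survival threshold.

```python
# Toy illustration (not the paper's model): an agent near a survival
# threshold can rationally prefer a lower-mean, lower-variance option
# over a higher-mean, riskier one, because near the edge what matters
# is the chance of falling below the threshold, not the expected payoff.
import random

random.seed(0)
THRESHOLD = 0.0    # e.g. the bankruptcy line
TRIALS = 100_000

def ruin_probability(mean, spread):
    """Estimate P(payoff < THRESHOLD) for a uniform payoff distribution."""
    hits = sum(
        1 for _ in range(TRIALS)
        if random.uniform(mean - spread, mean + spread) < THRESHOLD
    )
    return hits / TRIALS

safe_in_group   = ruin_probability(mean=1.0, spread=0.5)  # lower mean, low variance
risky_out_group = ruin_probability(mean=2.0, spread=4.0)  # higher mean, high variance

# The risky option has the better expected payoff (2.0 > 1.0) but a far
# higher chance of ruin, so a precarious agent rationally avoids it.
```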
And by the way, this only applies to inequality if it’s allowed to create false scarcity. The real problem is that as the economy gets smaller, people get closer to this cliff — and with enough inequality, even if the economy is growing, more and more money goes to the very top and everybody else ends up in that situation anyway. Interestingly, China and Germany were two of the examples where we knew this link didn’t hold, and the model accounts for why that might be: China and Germany both invest a lot in trying to make sure people don’t have these problems of precarity. Anyway, we presented a paper at the main political science conference of Europe last year — we’re still trying to get around to submitting it — and this is the relatively new graph that came after that, testing the predictions of the model.
Income inequality turns out not to be the most significant indicator of polarization. It’s a good one, but unemployment is actually an even bigger deal, and there’s also ethnic fractionalization. It’s not so much that fractionalization causes polarization as that it opens a cleavage along which polarization becomes more evident. The interesting thing is that one of my coauthors was really interested in trust, and you can see that income inequality actually undermines trust really strongly. We were also looking at another variable — how many people of working age there are — but you can see that unemployment is less important than trust. And the interesting thing at the micro level — everything so far was at the country level — is that, for polarization, individuals care more about the state of the economy than about their own employment, whereas for trust their own unemployment and things like that matter more. So trust and polarization are kind of the inverse of each other.
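That inverse relationship can be sketched with a toy correlation check. The country-level numbers below are invented for illustration — the talk’s actual data are not reproduced here; the point is just the sign of the correlations.

```python
# Hedged sketch: with invented country-level numbers, polarization moves
# opposite to trust and together with inequality (Pearson correlation).
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy panel: one value per country (all invented).
inequality   = [0.25, 0.30, 0.35, 0.40, 0.45, 0.50]
trust        = [0.70, 0.62, 0.55, 0.47, 0.40, 0.33]
polarization = [0.20, 0.28, 0.33, 0.41, 0.47, 0.55]

r_trust = pearson(polarization, trust)       # strongly negative
r_ineq  = pearson(polarization, inequality)  # strongly positive
```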
You could almost explain polarization just by saying that trust is a kind of good you tend to consume if you can afford it — but it’s not quite like that. The other thing that shows up — and this is where a causal story starts — is that if there’s suddenly access to many more people who can work, so that your job is threatened, that matters more than everything else. So it does seem that polarization cranks up when the labor pool suddenly gets much larger. That’s a bit of a causal story. Okay, I have now taken longer than I meant to, so I will just close with this.
On ethnic fractionalization: like I said, there’s this important paper that I think hasn’t been adequately recognized — it just came out earlier this year. Actually having ethnic minorities turns out to be the best predictor of not having democratic backsliding in the current era. So it seems, echoing what I was saying before, that diversity matters: minorities recognize threats to democracy early and work harder to prevent backsliding, better than the alternative models, including some in our own paper. I think it’s worth seeing, and there’s a real resonance with what’s happened in the French public schools.
So, in summary: justice is implemented by peers, for peers — and AI is not a peer, because we own it and we design it. It just isn’t a peer; it’s a product. We improve its ethical application by improving best practice and holding companies to it — holding them to due diligence, so that nothing that cannot be done safely and legally gets deployed. I have a really famous paper that everybody thinks means I hate robots — “Robots Should Be Slaves” — but the point of that paper was actually: why should we want machines that can suffer? That would be a kind of second-order wrong, building a capacity for suffering into the condition of servitude. We know that people shouldn’t be owned, so why would we want to make the things we own into people? That was the point of those first couple of papers, at least in the time I have.
The other thing is that the capacity for freedom of expression can also go backwards — we have seen that in the past, and we are seeing, in parts of the world, people being commodified. So I think we need to work really hard to ensure that we have institutions watching over one another. We have to work together to maintain our own fundamental rights, starting with freedom of the press. So thank you — and thank you to the peers and mentors who helped with this work.
“Bryson is among the internationally influential experts on questions of digital ethics,” says Professor Dr. Philipp Cimiano, head of the Semantic Databases group at Bielefeld University. “Her research makes clear that AI systems must make it comprehensible how they reach their decisions. Only then can users find, in a sense, a common wavelength with them. If machines are opaque, there is a danger that the companies producing them will manipulate users, consciously or unconsciously. That can lead, for example, to consumers being nudged into impulsive purchases.”
Effects of technology on human cooperation
Joanna Bryson is Professor of Ethics and Technology at the private Hertie School in Berlin. One of her research focuses is the governance of artificial intelligence and of information and communication technology; another is the effects of technology on human cooperation. In 1998 she published “Just Another Artifact”, on artificial intelligence and ethics. In 2010 she co-authored the “UK Principles of Robotics”, the first national ethics guideline on artificial intelligence in the United Kingdom. Bryson holds degrees in psychology and artificial intelligence from the University of Chicago, the University of Edinburgh, and the Massachusetts Institute of Technology, where she completed her doctorate. From 2002 to 2019 she was a member of the computer science faculty at the University of Bath in the UK. She has worked on guidelines for artificial intelligence with the Organisation for Economic Co-operation and Development (OECD), the United Nations, and the Red Cross. Since July 2020, Bryson has been one of nine experts nominated by Germany for the initiative “Global Partnership on Artificial Intelligence” (GPAI).
© Hertie School/Maurice Weiss
Questions of digital ethics
The lecture is titled “Principle Agents of AI – With Whom Do We Co-Create?” and is part of the series “Co-Constructing Intelligence”. The term co-construction refers to the fact that interpreting the environment and carrying out actions happen collaboratively. This occurs naturally when, for example, children do household tasks with the support and guidance of their parents. With service robots and other technical systems, such natural interaction has so far not been possible.
Lecture series grew out of a research initiative
The universities of Bielefeld, Bremen, and Paderborn cooperate on the lecture series. Philipp Cimiano organizes the new series together with, among others, the Bielefeld computer scientist Professor Dr.-Ing. Britta Wrede, the Bremen computer scientist Professor Dr. Michael Beetz, and the Paderborn linguist Professor Dr. Katharina Rohlfing. The lecture series is offered by a joint research initiative of the three universities. The collaboration uses the principle of co-construction to adapt robots’ understanding and abilities to those of humans. In this way the researchers are laying the groundwork for flexible and meaningful everyday interaction between robots and humans.