How people make decisions despite radical uncertainty


Author: Bielefeld University

The world is complex, dynamic, and interconnected. As a result, the consequences of decisions are often incalculable. "Yet we often have to make decisions of enormous consequence," says the internationally renowned economist and psychologist Professor Dr. David Tuckett. At University College London he heads an institute for research on decision-making uncertainty. He is the founder of the theory of emotional finance, which he presents in the book "Die verborgenen psychologischen Dimensionen der Finanzmärkte". How people make decisions under conditions of radical uncertainty was the subject of a lecture Tuckett gave on Monday, 5 December, at 6.30 pm as part of the new series "Uncertainty-Talk".

"Decisions in the real world differ in many respects from those analyzed in laboratories and textbooks," says David Tuckett. "We often have to make consequential decisions even though we have only incomplete information, the options are ambiguous, and future developments will not resemble past ones." How do people nevertheless manage to reach decisions under such uncertain conditions? David Tuckett explains this with his Conviction Narrative Theory (CNT).

Recording of David Tuckett's lecture


The economist and psychologist Prof. Dr. David Tuckett presented his research on Conviction Narrative Theory on 5 December in the Uncertainty-Talk series.

The key thing that you need to understand in the talk for it to make any sense is to understand the idea of radical uncertainty, which is not an idea… or, it depends whether you’re coming from the social, if you’re coming from the social sciences, it’s not a big issue. If you’re coming from any of the other decision sciences, it’s a very big issue. So let’s see. The starting point is to think about the kinds of decisions that I’m talking about, because those of you who know about decision science, for example, in psychology or in some of the other areas, know that the topic of choice is gambling. That’s the topic of choice of probability theory and gambling. Gambling is a well-defined situation. I’m wanting to talk about real life decision making and here we have some examples: Should you expand your company into a new product or technology? A very important decision made by lots of people within the economy. How much funding should you allocate to cyber security threat? What resiliency standards should be adopted by government regulation? Precisely what should we be prepared to do to prevent catastrophic climate change? And how should we anticipate and prevent future financial crises? And you could invent lots more of this. But I just want to be clear that this is the kind of thing I’m interested in. I’m not interested in a gambling problem. The point is that we must make consequential decisions, massively consequential choices, when in fact, whether we know it or not, data is incomplete. The options are ambiguous. The future may not resemble the past, and the axioms of standard decision theory are not satisfied. And this is, again, a more formal way of putting what I’m talking about. And the key thing is you have to make a decision. For example, about mitigating climate change. We spent 20 or 30 years deciding, well, maybe we can, maybe we should, maybe we shouldn’t. And now, when it’s pretty much decided by most people, we should be doing something, it’s a lot more expensive and a lot more problematic. And so in many of the instances I’m talking about, making no decision is a decision. Preventing pandemics, another one, I don’t know if leveling up makes sense to you. But dealing with regions of a country or part of the world which are not economically successful, how do you deal with that? All forms of macroeconomic policy, any type of investment. And of course, at the individual level, think of a career choice you make. You know, you’ve got an offer tomorrow to move to a job in Canada. How are you going to decide that? This would also be [an example]. Now, the other point about these decisions is that they tend to require commitment. That is to say, with a gamble, you know, you put your money on, you see what happens. In this type of situation, you make the decision and you then have to live out your decision while you wait to see if it unfolds as expected or you adjust to it or something. So it means that there’s always going to be some degree of monitoring of how things are working out. And I first, as Herbert [Dawid] said, a lot of this I developed when I was working with asset managers who were investing in large companies. And an investment in a large company, they usually had a three year horizon. So you think it’s a good thing to do today, but the next day the price can go down. Do you immediately sell? Well, hopefully not, because if you go on like that you’ll lose money very quickly. 
But you have to have a forward-looking strategy with commitment, but also monitoring what is going on. And all of this, these decisions, the context is radical uncertainty. And what I mean by radical uncertainty, there are lots of ways of putting it formally, but essentially it means: I do not know. Actually, I cannot know today, when I'm making the decision, how it's going to turn out. So that is a decision. And some of you are probably married or thinking of getting married. Well, it's the same type of situation also with career choice. Okay. So it's very important to know this, because we've just published this theory I'm going to talk about in what is the leading journal in the field of cognitive and behavioral science. And that's a journal where you get comments: 27 comments on this paper from, I think, 17 disciplines. Around a third of them, the comments, essentially try and get rid of the problem I've just expressed. They want to bring probability back, et cetera. So I would suggest that in most decision theory, the underlying idea is that there is a best decision if you analyze the data properly. Now, while choices like monetary gambles, as I say, may be amenable to the standard analysis, it's far less clear how the things I've been talking about are. And in fact, unless we assume the effects of our actions on our future will be pretty much as they have been in the past, it is actually dangerous and deluded to suppose that a good statistical analysis, the best possible statistical analysis of the existing situation, is a reliable guide to the future. Now, of course, I'm speaking here in Germany, and this guy, Niklas Luhmann, is one of the key people who's written in this field. I don't think he's known to many economists. But one of his key points is that if there wasn't uncertainty, there'd be no freedom. So uncertainty is this double thing. And this is also terribly important to understand: uncertainty is an opportunity and a risk. So when I say you don't know what's going to happen: you're unlikely to go in for such a decision unless you have expectations of some kind of success. But you also have to accept there may be loss. And everything I'm going to talk about, really, is about this fundamental proposition: that to succeed, you have to risk loss, without having any calculable way of working out what will actually happen. Now, Conviction Narrative Theory, which is the theory we're going to talk about, is an alternative cross-disciplinary theory of decision making that starts from the proposition that human cognitive, affective, and social capacities are well adapted for uncertain contexts. Well adapted. And it turns the core issue from how to make the best decisions, which is what most decision-making theory has been about in many disciplines, into a different question: How on earth do people make decisions at all? And what has to happen for people to be willing to make the decisions I talked about, where you seriously risk loss in order to make a gain? So it becomes not: How do you make your best decision, but how do you come to make that decision, to actually commit to potential failure? And it identifies two principal ways that people and organizations actually can do this. And it also identifies, or tries to (which I hope to have got to by the end), the implications for research and for decision makers, if you think this theory is at all sensible.
Now, I want to just introduce two rather formal issues here to try and show you what I mean. By the way, is it reasonable for me to assume you all know what formal decision theory is? Yeah, probability and all that? Yeah? I don't have to explain anything. So, there are in fact two issues involved in decision making. One we would call the mediation problem, and one we call the combination problem. The beauty of probabilities is they solve both problems. And I'll explain that in a minute. What is the mediation problem? Now, the mediation problem is essentially: How do you represent reality? Now, for people who are in the social sciences and the humanities, of course, there's lots of discussion around how we represent reality. And the issue of representation runs through a huge amount of what you're doing. In economics, it doesn't figure. Because if you write a model, you can put a value in the model, and because it's a mathematical model, the value is perfectly understood. It behaves as it should. You can put in, for example, utility or something, or an interest rate. But actually, in the real world, you have to represent reality before you can act on it. So to take an example from finance, it makes sense that the values in the stock market should roughly represent the actual fundamental values of all the companies and so on that are there. Because if they didn't, somebody can come along and make lots of money by, you know, knowing where people are wrong. Now, the economic theory about this doesn't actually say what I'm going to say. So I'll explain that and then I'll say what [it's about]. Implicit in that idea is that you can value a company. So I once went in a car with the head of [Siemens], one of your companies here in Germany. As you know, they have multiple divisions. They're doing many, many things. You have no idea, really; that company is extremely complicated. And, actually, to know what it's worth next year is a very difficult task. And particularly to ask the question: What's it going to be worth in three years' time? Now, economists or financial economists assume… they don't actually assume anybody does that calculation. What they assume is that we are roughly in an equilibrium, where the price of financial assets is roughly where it should be. And then, if something happens to change things, they assume it will go back to where it was. So it's quite a sophisticated theory. But nonetheless, it depends on there being some notion of an equilibrium which, you believe, reflects true value. If you've got uncertainty, that's much more questionable. In any case, the mediation problem I'm trying to talk about is: How do you represent the problem you're trying to solve? So if it's how to mitigate climate change, you have to have ideas of what the problem is that you've got, what the actions are that you're going to take, and so on and so forth. So there have to be, as psychologists would insist, cognitive processes. That is, processes of coming to know the world and to represent it. Put more simply, the mediation problem is: you've got data. What does it mean? There's got to be a process of making sense of the data, of making sense of the information you have. Information itself has no meaning until you have this cognitive process. The combination problem is a different one. It's: how do you combine the things you represent with your values and goals to actually decide this is something you want to do?
And so in this diagram, the dotted line represents the mediation problem. You've got data, you've got beliefs about what the data means, and then you've got your actions. And the combination problem is where your goals and your beliefs come together. Because, for example, the beliefs have got to correspond to what you want to try and achieve. So the logic of decision is that decisions reflect data picked up from the external world, including the social environment, and internally derived goals. The mediation problem reflects the need for an internal representation, a currency of thought, that can mediate between data from the external world and actions decided internally. The combination problem reflects the need for a process, a driver of action, that can combine beliefs and goals. Now, in classical decision theory, the currency of thought is probability and the driver of action is expected utility maximization. In behavioral economics it's slightly different; it's mostly concerned with where people get the probabilities wrong. But in radical uncertainty there are no probabilities, there can't [be]. And for the decisions that I've talked to you about, the examples I gave before, you cannot put probabilities on what is going to happen. So in Conviction Narrative Theory, the currency of thought, what I was trying to set out, is narratives, and the driver of action is affective evaluation of narratives in social contexts. That's quite a formal way of putting it, but I'll try and put some flesh on it. And this is a more complicated way of saying the same thing. So: the representations and processes in Conviction Narrative Theory are narratives, supplied in part by the social environment. So in fact, in your environment, there are narratives circulating all the time about how this action is likely to lead to this outcome. And if your social environment is the Bank of England or the Bundesbank or a large company, there are normative notions about what you do and how things work, which are basically narrative-based ideas. They explain data. And these can be run forward in time to simulate imagined future outcomes of action, which are then evaluated affectively, considering the decision-makers' goals. So the idea here is: you take your narrative about what you think is going to happen in your job in Canada, and as you run through it (you know, I'll be able to do this, I'll go there, I'll have this opportunity), you'll be telling stories to yourself and you'll be having feelings about that story. Now, some of the things you may have in your story may say, oh, but it's a long way from Germany and I won't be able to come back, or something, which will give you, maybe, a bad feeling, and other things will give you a good feeling. And what you actually have to do is to sum up overall: basically, do you want to approach this decision? Do you want to have it or do you want to reject it? The basis of the theory is that the narratives simulate future outcomes of action and, of course, you then make cognitive and affective judgments about those imagined actions. And it's these appraisals of narratives that govern your choice to approach or avoid these imagined futures. There are also feedback loops, but I won't go into them now. So why are we talking about narratives? Well, there are four functions of narratives in Conviction Narrative Theory; I haven't set this up very well. One function of narratives is that they explain or make sense of the data.
So in fact, if you go to any scientific lecture in this building, people will put out data and then they will, in fact, tell you a story about it. Whether they are economists, physicists, whoever they are, that is actually what they have to do. Now, there are, of course, rules for how you make up that story in science, but it nonetheless is a story. The second function of narratives is mental simulation: that is, you can run the narrative forward into the future and then generate imagined futures that will come from the action you have in mind. The third is that narratives can be evaluated affectively. So it's a story you tell, and actually, one of the reasons I got this idea is that when I'm talking to fund managers, they will talk about, you know, I don't like the story on this company, or I love the story on Apple, or whatever it is, and they make an affective evaluation of what they see as the story. Now, just because it's a story, I do not mean that it isn't backed up by all the quantitative data you can imagine. But the quantitative data is made into a story. That could be models, it can be anything you like. And finally, narratives are communication. That is how you communicate. And, for example, many decisions nowadays require lots of other people to cooperate with you. So you've got to sell it to your immediate colleagues. You've got to get teams of people operating to do things. And you may even need to get your customers on board; you need to tell them a story for why they should be doing whatever it is you think they should be doing. What's quite important here is that I'm not talking about any old story, right? I'm talking about narratives of how a current situation can be transformed into a future situation, a specific type of narrative. Right? So, if I decide to go to this university and study computer science, this is how my future is going to be. That's the story, right. This is a very complicated diagram just to show you that the narratives… it's not just making them up, you know, like a fantasy, like a novel or something. You're picking narratives up from your social environment. So they're influenced by the heuristics that are available, shared history and experience, values, norms, analogies that are available; all these things. And this is a high-level theory, so we're not attempting to say which one of these you're going to use. All of these things go into what people pick up. And which narrative people pick up when they're thinking of a problem is influenced by things that are well studied in psychology: the various content effects, whether people regard the data as trustworthy or not, and then the various presentational effects, which are often called fluency. So there's a lot of research behind this kind of idea in many disciplines, but I'm just summarizing it. Now, what happens in the end is that your narrative generates either approach thoughts, and the emotions that go with them are evoked, or avoidance thoughts and the emotions that go with them. And it's approach minus avoidance that determines action. So if the balance favors approach, it determines action. Now, one of the key things here is that if it's uncertain and you know it's uncertain, then one of the things you have to do is manage doubt, because doubt creates anxiety, and anxiety is an emotion that causes you not to want to do something.
So this diagram doesn’t quite include the idea that one of the most important things that happens is people generate what we call counter repellers. So, what I discovered in the work I did with fund managers, for example, is there was somebody who wanted to make an investment in China, in a particular company. But the counter narrative is, or was for him, can you believe the figures that are coming out of that company? Because maybe the accountancy systems and blah, blah, blah are not very accurate and actually it’s not really that amount of activity as it looks like in the books. So that will be a reason not to invest, not to believe the figures. So what did he do? He employed some Chinese private investigators. They went and sat outside several of the factories of this company. And they actually counted numbers of workers going in, numbers of lorries coming out and things like that. And since they were consistent with the published figures, his doubts were dispelled and he could go forward with his story as to why this was a great company, what its future is going to be and so on. So that’s an example of managing doubts. Now, the fundamental concept, which I think was in the title of the lecture here is ambivalence. So because the nature of uncertainty is that you may succeed or fail, it generates what Bleuler, who is a psychiatrist, called ambivalence. That is, it generates both the feelings of wanting to approach the thing and of wanting to go away. Now, Freud picked up ambivalence, but so, also much more recently, did Neil Smelser, who was the president of the American Sociological Association. Yeah, Association, back in the late 90s. And Smelser argued that rational choice, you know it’s a useful way of thinking, but if, instead of being so hooked on rational choice, we’d thought about the fact that nearly every decision people make and nearly all people’s social relationships are actually ambivalent. So for example, most marriages, most relationships, most families are generating wishes to belong to it, wishes to get the hell out of it. The same with working for most companies, no doubt working for this university, certainly mine, ambivalence is a very common problem because, actually, you cannot avoid being frustrated essentially, even though you may also be… Now, I’m now not so much talking about personally, although the personal ability to handle ambivalence, which increases with maturity in fact, is extremely important. But what more recently… What it amounts to in decision making theory is how do you manage doubt? And I’m going to go on and talk about this in just a moment, but you can see that uncertainty must, of necessity, provoke ambivalence. Indeed, quite often when people talk about uncertainty, they’re talking about the negative, you know, uncertain means you don’t want to do it, right. But this is a mistake because actually, as I said, with Luhmann uncertainty also means you can do something completely different. Right? So, but there, of course, is a way in which, if you can’t stand the uncertainty, you won’t do stuff. And if you keep being balanced on the two things, it may be difficult to tolerate. So if we’re talking about managing uncertainty, managing ambivalence is the key idea. So CNT, Conviction Narrative Theory, focuses attention on how this ambivalent situation is dealt with, both affectively and cognitively. 
So I'll be explaining in a minute that I do not see affect and cognition as two… I mean, they're obviously two things you can define separately, but they are like this [Tuckett shows hand gesture for entanglement]. They're not separate things. In a divided state, what you do is you assess a situation and you start with a hypothesis; you know, you have some belief, I want to do this. If you're in a divided state, basically you tend only to see the things that confirm your belief. So organizations get set up like this, famously; you know, the metaphor of the king who kills the bearer of bad news. When people bring bad news during financial excitements, the negative news just doesn't get through. And I'll be talking to you about this. But a divided state is the opposite of what you could call the desirable normative scientific state, which is that you weigh everything carefully, you consider all the evidence and all that. Okay. But of course, it's not so easy to consider all the evidence. And part of what I would suggest is that most decision-making disciplines have actually been ignoring large amounts of evidence. So you can carry on with the idea that decision making can be formulated as a probabilistic problem only if you actually don't notice that it only applies to gambling problems and a few things like that. And of course, the big trick with uncertainty is to eviscerate it. That is, although it might be objectively uncertain, in your way of thinking or your group's way of thinking, the uncertainty never enters into it. I can give you one example. I'm sure there are lots in Germany, but leading up to the financial crisis, the Financial Services Authority, which is our financial regulatory body: from its minutes, in the four years before the financial crisis, it never discussed banks. So why didn't they discuss banks? Oh, because nobody thought banks were important. Why would I say that might be a divided state? Because if you understand that banking and the finance world is fundamentally uncertain, you should be looking out for things going wrong, you know, a little bit of the time. It may be that you have a discussion and you say it's all fine. But there was nothing like that, and I'll show you some more data about that. The tendency, in other words, is to solve the problem of ambivalence by removing one term of the problem. And it can equally be that you remove optimism. So a decision can be made that we're just not going to do any innovation, effectively. It's too risky. And to some extent that is what happens. Okay, so this is a diagram showing what I've just been talking about, in which the integrated state is on the top and you imagine you've got a prime narrative prediction. So we're going to build this new airplane or new train system and we want to do it. Information is coming in, and this information will give you cognitive information about how the project is going, or how the world it depends on is going, which will generate feelings. If you're in an integrated state, you will consider both information that is congruent with your plan and information that is not congruent, and you will keep the situation actively monitored. If you are in a divided state, then (see those two lines there) you only take the congruent information, which means that your optimism ramps up and up, and up. So that's how you get a bubble, according to this theory. Because the incoming negative information doesn't register. Okay.
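The divided-versus-integrated filtering can be illustrated with a small toy simulation. This is a minimal sketch, not part of the talk; the update rule, step size, and signal distribution are arbitrary assumptions chosen only to make the filtering effect visible.

```python
import random

def update_conviction(conviction, signals, divided):
    """Update a conviction score in [-1, 1] from incoming signals.

    Each signal is a float in [-1, 1]: positive values are congruent with the
    prime narrative ("approach"), negative values are incongruent ("avoidance").
    A divided state filters out incongruent signals; an integrated state
    weighs both kinds.
    """
    for s in signals:
        if divided and s < 0:
            continue  # divided state: incongruent information doesn't register
        conviction += 0.02 * s                        # small step towards the evidence
        conviction = max(-1.0, min(1.0, conviction))  # keep the score bounded
    return conviction

random.seed(1)
# Mixed evidence whose balance is slightly negative on average.
evidence = [random.uniform(-1, 1) - 0.1 for _ in range(200)]

integrated = update_conviction(0.2, evidence, divided=False)
divided = update_conviction(0.2, evidence, divided=True)

print(f"integrated state conviction: {integrated:+.2f}")  # tracks the mixed evidence
print(f"divided state conviction:    {divided:+.2f}")     # ratchets towards +1, a 'bubble'
```

With the same stream of mixed news, the integrated agent's conviction drifts with the balance of evidence, while the divided agent, which never registers the incongruent signals, becomes steadily more convinced: the ratcheting optimism described above.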
I'm just going to introduce a couple more concepts to show how all this can be used. So narratives, as I've tried to explain to you, are structured higher-order mental representations incorporating causal, temporal, analogical, and valence (value) information about agents and events, which serve to explain data, to imagine and evaluate possible futures, and to motivate action. Within narratives, you have narrative fragments, so you don't want to get the wrong idea. For example, I could say to you: Divorce. And that would actually set going in your mind a kind of narrative about what divorce means. So it doesn't have to be an essay, in other words. And there are narratives within narratives, which we've looked into endlessly; you know, can you formalize and divide these things up? Perhaps you can. But for the moment it seems a bit of a waste of time to worry about which level you're working at. You just need to know that a narrative (for example, we're going to build more solar energy reflectors) will often contain numerous sub-narratives within it, which are also part of it. Shared narratives. This refers to the fact that within networks, or within a large, societal-level network, some narratives are shared. Now, here we're just borrowing standard social science. We know in standard sociology, of course, that there's never one narrative. There may be a more dominant narrative, but there are always conflicting narratives. And that's part of the point, because then you can get a shift from one dominant narrative to another quite easily. And in economics, for example in macroeconomics, the principal macro narrative is either: yes, it's good to go on investing, things are going well; or: no, it's not very good to go on investing, things are going badly. And that can shift. And if it shifts, behavior shifts, and then, of course, that can all become self-reinforcing. The point of a shared narrative is to capture the idea of a network-dominant narrative. Or it could be a company-dominant narrative, or, if you're studying governments, it could be that there are different narratives within different government departments that are in conflict with one another. And then at a certain point, under certain conditions, one narrative takes over. An interesting one in economics has been about supply chains and where you manufacture your things. We went through this quite long period where you manufactured in the cheapest possible place. And the narrative has now shifted to: you manufacture pretty near home. And that, of course, depends on lots of factors. But these narratives can [change]. Shared narratives tend to be very influential and they solve problems of uncertainty. You don't have to do an optimization, finding out about everything. You just take the prevailing heuristic in your particular group. Explanatory models. This is a concept that I take from the work I did with doctors and patients, and also from social anthropology. So if you become ill tomorrow and you have some symptoms, you will give an explanation of those symptoms according to your ideas about medicine, the narratives about those symptoms that are in your social group. And they're usefully called an explanatory model because it's not just one idea; there are different bits that join up to make you think, I've got x. Moreover, when you go to your doctor, you encounter a doctor who's got an explanatory model, hopefully based on medical science. And lay models and medical models may or may not be congruent.
And when we did a big study of doctor-patient communication, what we found, very interestingly, was that most consultations are very, very successful: 90% or so from the communication point of view. The patient goes away believing more or less what the doctor did. In the 10% of them, however, where the patient had a different model from the doctor's, it's a total failure. And the reason for that, we argued, is that the doctors didn't fully understand that the patients had their own theory. They tended to treat the patients' theories as sort of stupid theories, and they didn't realize they actually had their own logic. Now, anthropologists have done a lot of work in this area. But an explanatory model is a very important concept, because if you want, for example, to get a whole bunch of people to get behind you with changing the way we try and mitigate climate change, you've got to consider the explanatory models through which people take in the information you give them. Or if it's in a company and you're trying to get your workforce to do something completely different: if your workforce doesn't share your set of explanatory models (and I'm not here necessarily talking about power; of course there's that too, power and how much money), they don't understand why you are proposing this. If they don't understand your explanatory model, you will have a problem. So narratives also deal with this kind of thing. Now lastly, feelings; I'll talk about that in just a minute and elaborate on it. I don't know how much those of you here know about modern cognitive neuroscience, but the basic, pretty standard, consensual idea now is that feelings are at the center of all brain activity. And this is because our brains evolved from much more, in evolutionary terms, primitive creatures who only had feelings, essentially. We built our cognition on top of that. In fact, cognition was an extra resource for thinking: what am I feeling, and should I be feeling this, and so on. But if you put someone in an MRI scanner and you get them thinking about various problems, you will see, whatever people are doing, bits of the brain connected with feelings light up, even if it's a math problem. Because when you get to the math problem and you get the solution, you feel good, and that is actually registered. So feelings are at the center of all brain activity. Their function is that they are conscious. So when you feel fear or you feel excitement or you feel pain, it's conscious. And it's conscious because it's telling you to do something. The brain is saying: time to do something. And that's because, unlike with a subjective expected utility model, our brains are capable of managing around seven conflicting systems within our biology: systems for, you know, survival, reproduction, looking after people… I won't go through them. But they potentially conflict. The brain uses feelings to prioritize, so that in this situation, it's this you do to keep your biological well-being. Right. So feelings are not unimportant in our existence. And, of course, you can describe feelings in lots of different ways: envy, jealousy, blah, blah, blah, competitiveness. But they all boil down, in the end, to either approach feelings or avoidance feelings. Okay? So I've elaborated… How are we doing for time? Okay. So I've elaborated that here. I don't know how much of this is known to you, but basically it's what I've just said.
And one of the key reasons why this is important is that the human brain is a completely different thing from a computer. A computer is fantastic (better than a human brain, as they've got bigger anyway) at solving a data problem, if all the data is there. Or, you know, you can do probability, so you can deal with not having all the data, if you feel there's enough data. But the human brain can manage when there isn't data, or isn't much data; it can still allow you to make a decision. So if a computer, if AI, for example, actually has to deal with uncertainty, it has to freeze. There's nothing else it can do. It can't make a decision, unless you program it, unless you say, okay, we'll make a random choice, which is what some solutions are. So the human brain is highly adaptive for new environments, and that's what it's been built for. For example, I don't know how many of you have ever thought: why do you have memory? You don't have memory for having fun with the past, although there are all kinds of ways we could write about that, and it's perfectly true. But the reason we have memory, in evolutionary brain terms, is to give you a set of action steps for your future. And of course, what is it that triggers a particular memory? It's an affect; memories are laid down attached to affects. And so, when you see something that reminds you, right, you're reminded of the thing and the feeling, and that then is an extremely useful resource. And you're not reminded of your whole life and invited to do an optimization problem on every single thing you know. It's a very efficient way, which is why we talk about heuristics. And of course, the other point about feelings is that they are social. There's a whole sociology of feeling these days and an anthropology of feeling, even historians are interested in feelings, because they're not just an individual phenomenon. So everything I'm talking about fits into the idea that narrative and action are ultimately both biologically embodied and socially embedded. Okay, so this is just a quick thing to show you one possible application of this. This here is a graph showing, if you take Reuters news, the Reuters News Archive, all the articles between 2003 and 2012 which mentioned the American mortgage institution Fannie Mae, you know, Fannie Mae, which was at the heart of mortgages and so on. Now, what has happened is that we've taken all those articles, used an algorithm to do it, and then we've assessed the words in the articles as to whether they are approach words or avoidance words, or both. And then we have scored the proportion of approach and avoidance words over that period of time. So in other words, if the line is going down, there is an increase in avoidance sentiment about Fannie Mae in Reuters news, which captures a lot [of material]. When it's going up, there's an increase in approach. Okay. Now, if you look at it, it's a bit more complicated. If we first deal with the yellow line: the yellow line is the Case-Shiller index of housing prices in the ten leading cities of the US. And, essentially, that is measuring what is happening to house prices. Now, logically, if you know anything about prices and houses and the American mortgage market, you would know that the moment prices start to go down, it's going to be a bit of a problem. I can elaborate later if you want. But look what happens to sentiment.
Although the prices are going down, the sentiment (and that's what that red line is supposed to show) carries on going up at just the same gradient as it did before the prices started turning down. This is an example, or a possible example, of how the stories, in other words the way people were receiving this information, meant they paid no attention whatsoever to the potential problem that was coming. Of course they do eventually. And you see the collapse. And of course, Fannie Mae had to be bailed out by the federal government. Now, there are lots of these graphs which you can show for this type of thing. And what we're trying to show here is that this is a proxy way of seeing that the narratives around Fannie Mae were, if you like, optimistic, optimistic, optimistic, despite the facts. Of course, the facts did break through eventually. And this is my point about cognition: there are facts, but at what point do the facts become facts, get attention, and get given meaning? And that's the point. Okay. Now, Conviction Narrative Theory has, in my view, quite a number of applications. It has applications in macroeconomics, where you can show that the fundamental drivers of the economy have something to do with sentiment for investing; nothing about what I'm saying is saying that people aren't rational, by the way. My problem is not that people are not rational, but it's: how are you rational if there isn't a correct answer? So it's not against rational action or rational theory, it's just that, how can you be rational if you don't have… in fact, it's not rational to use probability in this situation. So there are applications in macroeconomics, applications in monetary and financial stability policy. The risk-taking is interesting. The US Federal Reserve publishes the minutes of the discussions it has every month about what's going on, and the minutes ahead of the financial crisis are particularly fascinating because, as with the regulator I mentioned, they weren't discussing these issues. They did at a certain point, but if you look at it either in terms of topics or in terms of emotion: one of the things, I would say, this is useful for is that if you've conceptualized the situation you're working in as uncertainty, then you should be discussing what can go wrong fairly often. You should also be discussing what can go right. And you can use this measure of approach and avoidance emotions in documents that are kept to keep an eye on whether, for example, the risk committees of companies or whatever are actually considering risk, or are being captured by what I call a divided state, like the Fannie Mae discussion, in which they only look at just one side of the story, which tends to be what's happening. We have a paper on how this can apply to futures and foresight. It's obviously extremely relevant to innovation: with innovation you have to be willing to fail. So how do you overcome the failure? Levelling up. And, I don't know if the people here are interested in politics and society, but David Soskice's book, 'Varieties of Capitalism', do people know that? No, okay. It's quite a well-known book in political science. He's now using, in his latest paper, Conviction Narrative Theory to explain how, in a multiple equilibrium situation, you get a switch from one equilibrium to another.
So after 2008, people just became permanently pessimistic and it settled into that equilibrium, which is why economies have not really recovered. Anyway. That's how it can contribute theoretically. And in terms of research, it suggests a different type of research into decision making, particularly looking at organizational structure. Because you can avoid divided states organizationally, that is, by having, for example, people in your organization whose job is to ask difficult questions, and by making sure you have diversity, not because you can tick a box, but so that people who are not part of the shared narrative of your organization can actually ask difficult questions. And so there's a lot of scope for research that looks into this. And as you probably know, groupthink and all that is part of it. The other point is that the implication of decision making under uncertainty is that decisions should be treated, big decisions especially, as experiments. Since you cannot know the outcome. And if you treat it as an experiment, then what would you do? You would, in a discussion about why you're doing this, why it's going to work, how it's going to work, and what bits of the puzzle have got to come together, also be led to create measures. So going forward, you can see whether the assumptions you've made are actually being fulfilled. And then you need a system of feedback, of course, rather than not getting the feedback, in order to adjust. Now, I would argue, most big decisions made by firms, by governments, in the economy, and in regulation especially, need to be made as if they're an experiment. We know that unexpected things happen. And so the idea is… we've got this wonderful optimal solution, this is going to work; we know actually it always creates all sorts of unintended consequences. So treat it as an experiment. But the problem here, and this is where the fact I'm a psychoanalyst comes in, is that people don't like treating things as an experiment. So I can't go into it, but there's a lot of work to be done to understand the obsession with optimal decision making as a control system for getting rid of uncertainty, for getting rid of feelings of anxiety, but which then makes your organization or your society or your government perform badly. Not always, by the way, because a divided state could be very useful if you were in the trenches in Ukraine. You don't want to be thinking shall I, shan't I; you've just got to go and do it. Which is why we have the human capacity to block stuff out; it's functional. But it isn't functional in many other situations. I think it has a lot of implications for AI, and for defining where AI can be useful, and where pursuing artificial intelligence, as if computers can do what humans do, is a very dangerous and probably wasteful thing to do. And finally, what I call the escape from model land. So, there's a huge number of activities which use models, not just economic models, but many models; epidemiological models were used, you know, we had our fill of them during the pandemic. Models for supply chains, models for this, that and the other. And it's a huge industry. And it is, of course, very important and very skillful work. But models have got to be eventually taken from model land into reality. And that's the point where the modelers, or those who are their customers, need to ask: what are your results? What happens if something goes wrong?
If all they've got in there is probabilities: where are you getting these probabilities? Because most probability in that world is just invented, or very often is. And that raises a whole set of questions. So I think I'll stop there. There's a whole bunch of references to go with these comments. But thank you very much.
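The approach/avoidance scoring behind the Fannie Mae chart can be sketched in a few lines. This is an illustrative reconstruction only: the word lists, the monthly grouping, and the shift formula below are invented for the example and are not the validated dictionaries or the algorithm actually used in that research.

```python
from collections import defaultdict

# Hypothetical mini-dictionaries; the real studies use much larger, validated word lists.
APPROACH_WORDS = {"growth", "opportunity", "profit", "confidence", "strong", "gain"}
AVOIDANCE_WORDS = {"risk", "loss", "default", "fear", "weak", "crisis"}

def count_sentiment(text):
    """Count approach and avoidance words in one article."""
    tokens = (t.strip(".,;:!?") for t in text.lower().split())
    approach = avoidance = 0
    for t in tokens:
        if t in APPROACH_WORDS:
            approach += 1
        elif t in AVOIDANCE_WORDS:
            avoidance += 1
    return approach, avoidance

def monthly_relative_sentiment(articles):
    """articles: iterable of (month, text) pairs.

    Returns month -> (approach - avoidance) / (approach + avoidance),
    i.e. a relative sentiment score between -1 and +1 per month.
    """
    totals = defaultdict(lambda: [0, 0])
    for month, text in articles:
        a, v = count_sentiment(text)
        totals[month][0] += a
        totals[month][1] += v
    return {m: (a - v) / (a + v) if (a + v) else 0.0
            for m, (a, v) in sorted(totals.items())}

# Tiny made-up corpus, just to show the shape of the output.
corpus = [
    ("2006-01", "Strong profit growth and new opportunity for the lender."),
    ("2008-09", "Fear of default and crisis; heavy loss and rising risk."),
]
print(monthly_relative_sentiment(corpus))  # {'2006-01': 1.0, '2008-09': -1.0}
```

A falling score on such a series corresponds to the rising avoidance sentiment described for the Reuters articles; tracked against an external indicator like the Case-Shiller index, it is one way to see whether attention to incongruent information is keeping pace with events.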

Climate change and the pandemic demand far-reaching decisions

"David Tuckett convincingly demonstrates that standard approaches in economic decision theory cannot fully explain how many people behave in situations of radical uncertainty, such as decisions about career choice or about dealing with climate change and the pandemic," says the economist Professor Dr. Herbert Dawid of Bielefeld University, one of the organizers of the Uncertainty-Talks. "By giving greater weight to psychological aspects, Tuckett's theory provides important insights into how people cope with such complex decision situations and the uncertainty that comes with them," says the Bielefeld researcher. In his theory, David Tuckett assumes that "conviction narratives" enable individuals to prepare themselves to carry out particular actions even though they cannot know exactly how these will turn out. The narratives also serve actors as a simple means of communicating, of winning others' support for their chosen actions, and of justifying themselves.

In his lecture at Bielefeld University, David Tuckett set out the principles of his theory. He also addressed what this theory implies for research and for decision-makers in politics and business.

Interview: "In uncertain situations, you have to risk failure in order to succeed"

On the occasion of his lecture at the Center for Interdisciplinary Research (ZiF), David Tuckett explained in an interview what dealing with uncertainty involves today.

With your Conviction Narrative Theory (CNT), you describe how people make decisions under radically uncertain conditions. What is the central point of your theory?

Current decision theory is unrealistic and dangerous, because it is based on the idea that the best way to make decisions about the future is to assume that the world is stable, that is, without radical uncertainty, so that what was best last year will also be best this year, and so on. That may be true for some things, but it leaves out innovation and new ideas and processes. CNT assumes that people make decisions about their actions by imagining the effects of different actions on the future, drawing on prevailing narratives about the future that exist in their social environment and that "feel" right. The way they select narratives, and the social and psychological processes involved, determine what becomes meaningful to them.

You point out that there is a widespread misunderstanding in how uncertainty is perceived.

Yes, the underlying problem is that some organizations and individuals believe they cannot afford to think about failure. In uncertain situations, you have to risk failure in order to succeed. The starting point is to perceive the uncertainty in the first place.

What do you find when you examine how actors, for instance in the financial industry, deal with uncertainty?

How actors deal with uncertainty depends on two factors: the incentive structure in their social environment, and the extent to which they are encouraged, or not, to pursue "phantastic objects". The incentive structure often encourages herd behavior and the pursuit of phantastic objects that reward "big bets".

Looking at the results of your research: what approach do you recommend for coping with uncertainty?

One way of dealing with uncertainty is to deny it. For example, for many years economists followed Milton Friedman, who said that uncertainty could be ignored. Many companies, too, base their business plans and forecasts on assumptions that exclude uncertainty, perhaps drawing on models that exclude it. I call this approach a divided state. Its alternative, the integrated approach, rests on the proposition that important decisions should only be made after thinking through what could go very well or very badly. The chosen course of action should then be approached as if it were an experiment. That means clarifying the alternative narratives as well as the ways of monitoring progress and learning from experience.

Photo: Professor Dr. David Tuckett, University College London
The economist and psychologist Prof. Dr. David Tuckett sees narratives as a central means of making consequential decisions. He came to Bielefeld on 5 December for an Uncertainty-Talk.

An expert on dealing with radical uncertainty

Professor Dr. David Tuckett is an economist, medical sociologist, training and supervising analyst of the British Psychoanalytical Society, and Fellow of the Institute of Psychoanalysis, London. He is Director of the Centre for The Study of Decision-Making Uncertainty at University College London and lead scientist in the CRUISSE network, which addresses how radical uncertainty is dealt with in science, society, and the environment. In his research he works to bring together insights from psychoanalysis, sociology, cognitive psychology, and economics. Tuckett chairs the working party on comparative clinical methods of the European Psychoanalytical Federation (EPF), was Editor-in-Chief of the International Journal of Psychoanalysis for many years (1988-2001), and is founding editor of the New Library of Psychoanalysis. In 2007 he received the Sigourney Award for Psychoanalysis. In 2010 he was an invited speaker at the World Economic Forum.

The lecture series is part of a new research initiative

Professor Dr. Herbert Dawid organizes the Uncertainty-Talks together with the historian Professor Dr. Silke Schwandt and the conflict researcher Professor Dr. Andreas Zick. The lecture series is held in cooperation with the Center for Interdisciplinary Research (ZiF) at Bielefeld University. It grew out of a research initiative at Bielefeld University coordinated by the three researchers. This group engages intensively with uncertainty. For a long time, uncertainty was regarded as an ever-present threat that had to be controlled and kept in check. The new initiative, by contrast, aims to broaden and advance research on uncertainty. To this end, it places the many different ways of navigating uncertainty at the center. By bringing different perspectives to this analysis, the Uncertainty-Talks are intended to contribute to an interdisciplinary understanding of this research approach.

Besides David Tuckett's lecture on 5 December, two further events were on the program of the first edition of the Uncertainty-Talks:

  • Monday, 19 December, 6.30 pm: Professor Dr. Armin Nassehi of Ludwig-Maximilians-Universität München. The sociologist spoke on "Entscheidungen unter Unsicherheitsbedingungen. Ein Pleonasmus oder ein steigerbarer Sachverhalt?" ("Decisions under conditions of uncertainty. A pleonasm or a matter of degree?"). Venue: plenary hall of the Center for Interdisciplinary Research (ZiF).
  • Monday, 30 January, 6.30 pm: Professor Dr. Gerd Gigerenzer of the Harding Center for Risk Literacy at the University of Potsdam. The psychologist and risk researcher will give his lecture on "Der Umgang mit Ungewissheit im digitalen Zeitalter" ("Dealing with uncertainty in the digital age"). Venue: plenary hall of the ZiF. The lecture was rescheduled for 30 January after being cancelled in November.