Horizon (1964) s53e05 Episode Script

How You Really Make Decisions

We like to think that as a species, we are pretty smart.
We like to think we are wise, rational creatures.
I think we all like to think of ourselves as Mr Spock to some degree.
You know, we make rational, conscious decisions.
But we may have to think again.
It's mostly delusion, and we should just wake up to that fact.
In every decision you make, there's a battle in your mind between intuition and logic.
It's a conflict that plays out in every aspect of your life.
What you eat.
What you believe.
Who you fall in love with.
And most powerfully, in decisions you make about money.
The moment money enters the picture, the rules change.
Scientists now have a new way to understand this battle in your mind, how it shapes the decisions you take, what you believe And how it has transformed our understanding of human nature itself.
TAXI HORN BLARES
Sitting in the back of this New York cab is Professor Danny Kahneman.
He's regarded as one of the most influential psychologists alive today.
Over the last 40 years, he's developed some extraordinary insights into the way we make decisions.
I think it can't hurt to have a realistic view of human nature and of how the mind works.
His insights come largely from puzzles.
Take, for instance, the curious puzzle of New York cab drivers and their highly illogical working habits.
Business varies according to the weather.
On rainy days, everyone wants a cab.
But on sunny days, like today, fares are hard to find.
Logically, they should spend a lot of time driving on rainy days, because it's very easy to find passengers on rainy days, and if they are going to take leisure, it should be on sunny days but it turns out this is not what many of them do.
Many do the opposite, working long hours on slow, sunny days and knocking off early when it's rainy and busy.
Instead of thinking logically, the cabbies are driven by an urge to earn a set amount of cash each day, come rain or shine.
Once they hit that target, they go home.
They view being below the target as a loss and being above the target as a gain, and they care more about preventing the loss than about achieving the gain.
So when they reach their goal on a rainy day, they stop... which really doesn't make sense.
If they were trying to maximise their income, they would take their leisure on sunny days and they would drive all day on rainy days.
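A rough sketch of the arithmetic behind that point, with invented hourly rates: the only thing taken from the programme is the logic that the same total hours earn more when they are concentrated on rainy days. The figures and the function below are illustrative assumptions, not data from the cab study.

```python
# Illustrative only: assumed hourly takings for a New York cab,
# higher when it rains because fares are easier to find.
def income(rainy_hours, sunny_hours, rainy_rate=30.0, sunny_rate=15.0):
    """Total earnings over one rainy day and one sunny day."""
    return rainy_hours * rainy_rate + sunny_hours * sunny_rate

# The same 20 hours of work, allocated two ways.
habit = income(rainy_hours=6, sunny_hours=14)   # quits early when it rains
logic = income(rainy_hours=14, sunny_hours=6)   # drives all day when it rains

print(habit)  # 390.0
print(logic)  # 510.0 -> more money for exactly the same hours
```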
It was this kind of glitch in thinking that Kahneman realised could reveal something profound about the inner workings of the mind.
Anyone want to take part in an experiment? And he began to devise a series of puzzles and questions which have become classic psychological tests.
It's a simple experiment.
It's a very attractive game. Don't worry, sir.
Nothing strenuous.
...Posing problems where you can recognise in yourself that your intuition is going the wrong way.
The type of puzzle where the answer that intuitively springs to mind, and that seems obvious, is, in fact, wrong.
Here is one that I think works on just about everybody.
I want you to imagine a guy called Steve.
You tell people that Steve, you know, is a meek and tidy soul with a passion for detail and very little interest in people.
He's got a good eye for detail.
And then you tell people he was drawn at random from a census of the American population.
What's the probability that he is a farmer or a librarian? So do you think it's more likely that Steve's going to end up working as a librarian or a farmer? What's he more likely to be? Maybe a librarian.
Librarian.
Probably a librarian.
A librarian.
Immediately, you know, the thought pops to mind that it's a librarian, because he resembled the prototype of librarian.
Probably a librarian.
In fact, that's probably the wrong answer, because, at least in the United States, there are 20 times as many male farmers as male librarians.
Librarian.
- Librarian.
So there are probably more meek and tidy souls you know who are farmers than meek and tidy souls who are librarians.
This type of puzzle seemed to reveal a discrepancy between intuition and logic.
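The arithmetic behind Kahneman's point can be made explicit with Bayes' rule. The 20-to-1 base rate is his figure from above; the two likelihoods (how often each occupation matches the meek-and-tidy description) are invented here purely for illustration.

```python
# Prior from the quote above: roughly 20 male farmers per male librarian.
p_farmer, p_librarian = 20 / 21, 1 / 21

# Assumed likelihoods (illustrative, not from the programme):
p_desc_given_librarian = 0.40   # say 40% of librarians fit the description
p_desc_given_farmer = 0.10      # and only 10% of farmers do

# Bayes' rule: P(farmer | description)
posterior_farmer = (p_desc_given_farmer * p_farmer) / (
    p_desc_given_farmer * p_farmer + p_desc_given_librarian * p_librarian
)
print(round(posterior_farmer, 2))  # ~0.83 -> farmer is still the better bet
```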
Another example is... Imagine a dictionary.
I'm going to pull a word out of it at random.
Which is more likely, that a word that you pick out at random has the letter R in the first position or has the letter R in the third position? Erm, start with the letter R.
OK.
People think the first position, because it's easy to think of examples.
Start with it.
First.
In fact, there are nearly three times as many words with R as the third letter than words that begin with R, but that's not what our intuition tells us.
So we have examples like that.
Like, many of them.
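If you want to check the letter-R claim for yourself, a minimal sketch is below. It assumes a plain-text word list such as /usr/share/dict/words (present on many Unix systems); the exact ratio depends on the list you use, and Kahneman's figure refers to English usage rather than any particular dictionary.

```python
# Count words with 'r' first versus 'r' third in a local word list.
# The path is an assumption; any plain-text list, one word per line, works.
with open("/usr/share/dict/words") as f:
    words = [w.strip().lower() for w in f if len(w.strip()) >= 3]

r_first = sum(w[0] == "r" for w in words)
r_third = sum(w[2] == "r" for w in words)
print("R first:", r_first, "R third:", r_third)
```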
Kahneman's interest in human error was first sparked in the 1970s when he and his colleague, Amos Tversky, began looking at their own mistakes.
It was all in ourselves.
That is, all the mistakes that we studied were mistakes that we were prone to make.
In my hand here, I've got £100.
Kahneman and Tversky found a treasure trove of these puzzles.
Which would you prefer? They unveiled a catalogue of human error.
Would you rather go to Rome with a free breakfast? And opened a Pandora's Box of mistakes.
A year and a day.
25? But the really interesting thing about these mistakes is, they're not accidents.
75? They have a shape, a structure.
I think Rome.
Skewing our judgment.
What makes them interesting is that they are not random errors.
They are biases, so the difference between a bias and a random error is that a bias is predictable.
It's a systematic error that is predictable.
Kahneman's puzzles prompt the wrong reply again.
More likely.
And again.
More likely.
More likely? And again.
Probably more likely? It's a pattern of human error that affects every single one of us.
On their own, they may seem small.
Ah, that seems to be the right drawer.
But by rummaging around in our everyday mistakes...
That's very odd.
...Kahneman started a revolution in our understanding of human thinking.
A revolution so profound and far-reaching that he was awarded a Nobel prize.
So if you want to see the medal, that's what it looks like.
That's it.
Psychologists have long strived to pick apart the moments when people make decisions.
Much of the focus has been on our rational mind, our capacity for logic.
But Kahneman saw the mind differently.
He saw a much more powerful role for the other side of our minds, intuition.
And at the heart of human thinking, there's a conflict between logic and intuition that leads to mistakes.
Kahneman and Tversky started this trend of seeing the mind differently.
They found these decision-making illusions, these spots where our intuitions just make us decide these things that just don't make any sense.
The work of Kahneman and Tversky has really been revolutionary.
It kicked off a flurry of experimentation and observation to understand the meaning of these mistakes.
People didn't really appreciate, as recently as 40 years ago, that the mind didn't really work like a computer.
We thought that we were very deliberative, conscious creatures who weighed up the costs and benefits of action, just like Mr Spock would do.
By now, it's a fairly coherent body of work about ways in which intuition departs from the rules, if you will.
And the body of evidence is growing.
Some of the best clues to the working of our minds come not when we get things right, but when we get things wrong.
In a corner of this otherwise peaceful campus, Professor Chris Chabris is about to start a fight.
All right, so what I want you guys to do is stay in this area over here.
The two big guys grab you and sort of like start pretending to punch you, make some sound effects.
All right, this looks good.
All right, that seemed pretty good to me.
It's part of an experiment that shows a pretty shocking mistake that any one of us could make.
A mistake where you don't notice what's happening right in front of your eyes.
As well as a fight, the experiment also involves a chase.
It was inspired by an incident in Boston in 1995, when a young police officer, Kenny Conley, was in hot pursuit of a murder suspect.
It turned out that this police officer, while he was chasing the suspect, had run right past some other police officers who were beating up another suspect, which, of course, police officers are not supposed to do under any circumstances.
When the police tried to investigate this case of police brutality, he said, "I didn't see anything going on there, all I saw was the suspect I was chasing."
And nobody could believe this and he was prosecuted for perjury and obstruction of justice.
Everyone was convinced that Conley was lying.
We don't want you to be, like, closer than about...
Everyone, that is, apart from Chris Chabris.
He wondered if our ability to pay attention is so limited that any one of us could run past a vicious fight without even noticing.
And it's something he's putting to the test.
Now, when you see someone jogging across the footbridge, then you should get started.
Jackie, you can go.
In the experiment, the subjects are asked to focus carefully on a cognitive task.
They must count the number of times the runner taps her head with each hand.
Would they, like the Boston police officer, be so blinded by their limited attention that they would completely fail to notice the fight? About 45 seconds or a minute into the run, there was the fight.
And they could actually see the fight from a ways away, and it was about 20 feet away from them when they got closest to them.
The fight is right in their field of view, and at least partially visible from as far back as the footbridge.
It seems incredible that anyone would fail to notice something so apparently obvious.
They completed the three-minute course and then we said, "Did you notice anything unusual?" Yes.
What was it? It was a fight.
Sometimes they would have noticed the fight and they would say, "Yeah, I saw some guys fighting", but a large percentage of people said "We didn't see anything unusual at all."
And when we asked them specifically about whether they saw anybody fighting, they still said no.
In fact, nearly 50% of people in the experiment completely failed to notice the fight.
Did you see anything unusual during the run? No.
OK.
Did you see some people fighting? No.
We did it at night time and we did it in the daylight.
Even when we did it in daylight, many people ran right past the fight and didn't notice it at all.
Did you see anything unusual during the run? No, not really.
OK, did you see some people fighting? No.
You really didn't see anyone fighting? No.
Does it surprise you that you would have missed that? They were about 20 feet off the path.
Oh! You ran right past them.
Completely missed that, then.
OK.
Maybe what happened to Conley was, when you're really paying attention to one thing and focusing a lot of mental energy on it, you can miss things that other people are going to think are completely obvious, and in fact, that's what the jurors said after Conley's trial.
They said "We couldn't believe that he could miss something like that".
It didn't make any sense.
He had to have been lying.
It's an unsettling phenomenon called inattentional blindness that can affect us all.
Some people have said things like "This shatters my faith in my own mind", or, "Now I don't know what to believe", or, "I'm going to be confused from now on."
But I'm not sure that that feeling really stays with them very long.
They are going to go out from the experiment, you know, walk to the next place they're going or something like that and they're going to have just as much inattentional blindness when they're walking down the street that afternoon as they did before.
This experiment reveals a powerful quandary about our minds.
We glide through the world blissfully unaware of most of what we do and how little we really know our minds.
For all its brilliance, the part of our mind we call ourselves is extremely limited.
So how do we manage to navigate our way through the complexity of daily life? Every day, each one of us makes somewhere between two and 10,000 decisions.
When you think about our daily lives, it's really a long, long sequence of decisions.
We make decisions probably at a frequency that is close to the frequency we breathe.
Every minute, every second, you're deciding where to move your legs, and where to move your eyes, and where to move your limbs, and when you're eating a meal, you're making all kinds of decisions.
And yet the vast majority of these decisions, we make without even realising.
It was Danny Kahneman's insight that we have two systems in the mind for making decisions.
Two ways of thinking: fast and slow.
You know, our mind has really two ways of operating, and one is sort of fast-thinking, an automatic effortless mode, and that's the one we're in most of the time.
This fast, automatic mode of thinking, he called System 1.
It's powerful, effortless and responsible for most of what we do.
And System 1 is, you know, that's what happens most of the time.
You're there, the world around you provides all kinds of stimuli and you respond to them.
Everything that you see and that you understand, you know, this is a tree, that's a helicopter back there, that's the Statue of Liberty.
All of this visual perception, all of this comes through System 1.
The other mode is slow, deliberate, logical and rational.
This is System 2 and it's the bit you think of as you, the voice in your head.
The simplest example of the two systems is really two plus two is on one side, and 17 times 24 is on the other.
What is two plus two? Four.
Four.
Four.
Fast System 1 is always in gear, producing instant answers.
And what's two plus two? Four.
A number comes to your mind.
Four.
Four.
Four.
It is automatic.
You do not intend for it to happen.
It just happens to you.
It's almost like a reflex.
And what's 22 times 17? That's a good one.
But when we have to pay attention to a tricky problem, we engage slow-but-logical System 2.
If you can do that in your head, you'll have to follow some rules and to do it sequentially.
And that is not automatic at all.
That involves work, it involves effort, it involves concentration.
22 times 17? There will be physiological symptoms.
Your heart rate will accelerate, your pupils will dilate, so many changes will occur while you're performing this computation.
Three's... oh, God! That's... so 220, and seven times 22 is... 54.
374? OK.
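For reference, the sequential working that slow System 2 has to grind through here is just:

\[
22 \times 17 = (22 \times 10) + (22 \times 7) = 220 + 154 = 374
\]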
And can I get you to just walk with me for a second? OK.
Who's the current...?
System 2 may be clever, but it's also slow, limited and lazy.
I live in Berkeley during summers and I walk a lot.
And when I walk very fast, I cannot think.
Can I get you to count backwards from 100 by 7? Sure.
193, 80...
It's hard when you're walking.
It takes up, interestingly enough, the same kind of executive function as... as thinking.
Forty... four? If you are expected to do something that demands a lot of effort, you will stop even walking.
Eighty... um... six? Uh.
16, 9, 2? Everything that you're aware of in your own mind is part of this slow, deliberative System 2.
As far as you're concerned, it is the star of the show.
Actually, I describe System 2 as not the star.
I describe it as, as a minor character who thinks he is the star, because, in fact, most of what goes on in our mind is automatic.
You know, it's in the domain that I call System 1.
System 1 is an old, evolved bit of our brain, and it's remarkable.
We couldn't survive without it because System 2 would explode.
If Mr Spock had to make every decision for us, it would be very slow and effortful and our heads would explode.
And this vast, hidden domain is responsible for far more than you would possibly believe.
Having an opinion, you have an opinion immediately, whether you like it or not, whether you like something or not, whether you're for something or not, liking someone or not liking them.
That, quite often, is something you have no control over.
Later, when you're asked for reasons, you will invent reasons.
And a lot of what System 2 does is, it provides reason.
It provides rationalisations which are not necessarily the true reasons for our beliefs and our emotions and our intentions and what we do.
You have two systems of thinking that steer you through life Fast, intuitive System 1 that is incredibly powerful and does most of the driving.
And slow, logical System 2 that is clever, but a little lazy.
Trouble is, there's a bit of a battle between them as to which one is driving your decisions.
And this is where the mistakes creep in, when we use the wrong system to make a decision.
Just going to ask you a few questions.
We're interested in what you think.
This question concerns this nice bottle of champagne I have here.
Millesime 2005, it's a good year, genuinely nice, vintage bottle.
These people think they're about to use slow, sensible System 2 to make a rational decision about how much they would pay for a bottle of champagne.
But what they don't know is that their decision will actually be taken totally without their knowledge by their hidden, fast auto-pilot, System 1.
And with the help of a bag of ping-pong balls, we can influence that decision.
I've got a set of numbered balls here from 1 to 100 in this bag.
I'd like you to reach in and draw one out at random for me, if you would.
First, they've got to choose a ball.
The number says ten.
Ten.
Ten.
Ten.
They think it's a random number, but in fact, it's rigged.
All the balls are marked with the low number ten.
This experiment is all about the thoughtless creation of habits.
It's about how we make one decision, and then other decisions follow it as if the first decision was actually meaningful.
What we do is purposefully, we give people a first decision that is clearly meaningless.
Ten.
Ten, OK.
Would you be willing to pay ten pounds for this nice bottle of vintage champagne? I would, yes.
No.
Yeah, I guess.
OK.
This first decision is meaningless, based as it is on a seemingly random number.
But what it does do is lodge the low number ten in their heads.
Would you buy it for ten pounds? Yes, I would.
Yes.
You would? OK.
Now for the real question where we ask them how much they'd actually pay for the champagne.
What's the maximum amount you think you'd be willing to pay? 20? OK.
Seven pounds.
Seven pounds, OK.
Probably ten pound.
A range of fairly low offers.
But what happens if we prime people with a much higher number, 65 instead of 10? What does that one say? 65.
65, OK.
65.
OK.
It says 65.
How will this affect the price people are prepared to pay? What's the maximum you would be willing to pay for this bottle of champagne? 40? ã45.
45, OK.
40 quid? OK.
£50? £50? Yeah, I'd pay between 50 and £80.
Between 50 and 80? Yeah.
Logic has gone out of the window.
The price people are prepared to pay is influenced by nothing more than a number written on a ping-pong ball.
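One way to picture what the ping-pong ball is doing: a toy model in which the price someone states is a blend of a private valuation and whatever anchor they saw first. Every number here, including the weight given to the anchor, is an invented assumption for illustration, not an estimate from the experiment.

```python
import random

def willingness_to_pay(anchor, true_value=30.0, anchor_weight=0.4, noise=5.0):
    """Toy model: stated price = blend of a noisy private valuation and the anchor."""
    return (1 - anchor_weight) * random.gauss(true_value, noise) + anchor_weight * anchor

random.seed(1)
low_anchor = [willingness_to_pay(anchor=10) for _ in range(1000)]
high_anchor = [willingness_to_pay(anchor=65) for _ in range(1000)]

print(sum(low_anchor) / len(low_anchor))    # ~22: offers pulled down towards 10
print(sum(high_anchor) / len(high_anchor))  # ~44: offers pulled up towards 65
```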
It suggests that when we can't make decisions, we don't evaluate the decision in itself.
Instead what we do is, we try to look at other similar decisions we've made in the past and we take those decisions as if they were good decisions and we say to ourselves, "Oh, I've made this decision before. Clearly, I don't need to go ahead and solve this decision. Let me just use what I did before and repeat it, maybe with some modifications."
This anchoring effect comes from the conflict between our two systems of thinking.
Fast System 1 is a master of taking short cuts to bring about the quickest possible decision.
What happens is, they ask you a question and if the question is difficult but there is a related question that is a lot... that is somewhat simpler, you're just going to answer the other question and not even notice.
So the system does all kinds of short cuts to feed us the information in a faster way and we can make actions, and the system is accepting some mistakes.
We make decisions using fast System 1 when we really should be using slow System 2.
And this is why we make the mistakes we do, systematic mistakes known as cognitive biases.
Nice day.
Since Kahneman first began investigating the glitches in our thinking, more than 150 cognitive biases have been identified.
We are riddled with these systematic mistakes and they affect every aspect of our daily lives.
Wikipedia has a very big list of biases and we are finding new ones all the time.
One of the biases that I think is the most important is what's called the present-focus bias.
It's the fact that we focus on now and don't think very much about the future.
That's the bias that causes things like overeating and smoking, and texting and driving, and having unprotected sex.
Another one is called the halo effect, and this is the idea that if you like somebody or an organisation, you're biased to think that all of its aspects are good, that everything is good about it.
If you dislike it, everything is bad.
People really are quite uncomfortable, you know, by the idea that Hitler loved children, you know.
He did.
Now, that doesn't make him a good person, but we feel uncomfortable to see an attractive trait in a person that we consider, you know, the epitome of evil.
We are prone to think that what we like is all good and what we dislike is all bad.
That's a bias.
Another particular favourite of mine is the bias to get attached to things that we ourselves have created.
We call it the IKEA effect.
Well, you've got loss aversion, risk aversion, present bias.
Spotlight effect, and the spotlight effect is the idea that we think that other people pay a lot of attention to us when in fact, they don't.
Confirmation bias.
Overconfidence is a big one.
But what's clear is that there's lots of them.
There's lots of ways for us to get things wrong.
You know, there's one way to do things right and many ways to do things wrong, and we're capable of many of them.
These biases explain so many things that we get wrong.
Our impulsive spending.
Trusting the wrong people.
Not seeing the other person's point of view.
Succumbing to temptation.
We are so riddled with these biases, it's hard to believe we ever make a rational decision.
But it's not just our everyday decisions that are affected.
What happens if you're an expert, trained in making decisions that are a matter of life and death? Are you still destined to make these systematic mistakes? On the outskirts of Washington DC, Horizon has been granted access to spy on the spooks.
Welcome to Analytical Exercise Number Four.
Former intelligence analyst Donald Kretz is running an ultra-realistic spy game.
This exercise will take place in the fictitious city of Vastopolis.
Taking part are a mixture of trained intelligence analysts and some novices.
Due to an emerging threat... a terrorism taskforce has been stood up.
I will be the terrorism taskforce lead and I have recruited all of you to be our terrorism analysts.
The challenge facing the analysts is to thwart a terrorist threat against a US city.
The threat at this point has not been determined.
It's up to you to figure out the type of terrorism .
.
and who's responsible for planning it.
The analysts face a number of tasks.
They must first investigate any groups who may pose a threat.
Your task is to write a report.
The subject in this case is the Network of Dread.
The mayor has asked for this 15 minutes from now.
Just like in the real world, the analysts have access to a huge amount of data streaming in, from government agencies, social media, mobile phones and emergency services.
The Network of Dread turns out to be a well-known international terror group.
They have the track record, the capability and the personnel to carry out an attack.
The scenario that's emerging is a bio-terror event, meaning it's a biological terrorism attack that's going to take place against the city.
If there is an emerging threat, they are the likely candidate.
We need to move onto the next task.
It's now 9th April.
This is another request for information, this time on something or someone called the Masters of Chaos.
The Masters of Chaos are a group of cyber-hackers, a local bunch of misfits with no history of violence.
And while the analysts continue to sift through the incoming data, behind the scenes, Kretz is watching their every move.
In this room, we're able to monitor what the analysts are doing throughout the entire exercise.
We have set up a knowledge base into which we have been inserting data throughout the course of the day.
Some of them are related to our terrorist threat, many of them are not.
Amidst the wealth of data on the known terror group, there's also evidence coming in of a theft at a university biology lab and someone has hacked into the computers of a local freight firm.
Each of these messages represents, essentially, a piece of the puzzle, but it's a puzzle that you don't have the box top to, so you don't have the picture in advance, so you don't know what pieces go where.
Furthermore, what we have is a bunch of puzzle pieces that don't even go with this puzzle.
The exercise is part of a series of experiments to investigate whether expert intelligence agents are just as prone to mistakes from cognitive bias as the rest of us, or whether their training and expertise makes them immune.
I have a sort of insider's point of view of this problem.
I worked a number of years as an intelligence analyst.
The stakes are incredibly high.
Mistakes can often be life and death.
We roll ahead now.
The date is 21st May.
If the analysts are able to think rationally, they should be able to solve the puzzle.
But the danger is, they will fall into the trap set by Kretz and only pay attention to the established terror group, the Network of Dread.
Their judgment may be clouded by a bias called confirmation bias.
Confirmation bias is the most prevalent bias of all, and it's where we tend to search for information that supports what we already believe.
Confirmation bias can easily lead people to ignore the evidence in front of their eyes.
And Kretz is able to monitor if the bias kicks in.
We still see that they're searching for Network of Dread.
That's an indication that we may have a confirmation bias operating.
The Network of Dread are the big guys - they've done it before, so you would expect they'd do it again.
And I think we're starting to see some biases here.
Analysts desperately want to get to the correct answer, but they're affected by the same biases as the rest of us.
So far, most of our analysts seem to believe that the Network of Dread is responsible for planning this attack, and that is completely wrong.
How are we doing? It's time for the analysts to put themselves on the line and decide who the terrorists are and what they're planning.
So what do you think? It was a bio-terrorist attack.
I had a different theory.
What's your theory? Cos I may be missing something here, too.
They know that the Network of Dread is a terrorist group.
They know that the Masters of Chaos is a cyber-hacking group.
Either to the new factory or in the water supply.
Lots of dead fish floating up in the river.
The question is, did any of the analysts manage to dig out the relevant clues and find the true threat? In this case, the actual threat is due to the cyber-group, the Masters of Chaos, who become increasingly radicalised throughout the scenario and decide to take out their anger on society, essentially.
Who convinced them to switch from cyber-crime to bio-terrorism? Or did they succumb to confirmation bias and simply pin the blame on the usual suspects? Will they make that connection? Will they process that evidence and assess it accordingly, or will their confirmation bias drive them to believe that it's a more traditional type of terrorist group? I believe that the Masters of Chaos are actually the ones behind it.
It's either a threat or not a threat, but the Network of Dread? And time's up.
Please go ahead and save those reports.
At the end of the exercise, Kretz reveals the true identity of the terrorists.
We have a priority message from City Hall.
The terrorist attack was thwarted, the planned bio-terrorist attack by the Masters of Chaos against Vastopolis was thwarted.
The mayor expresses his thanks for a job well done.
Show of hands, who... who got it? Yeah.
Out of 12 subjects, 11 of them got the wrong answer.
The only person to spot the true threat was in fact a novice.
All the trained experts fell prey to confirmation bias.
It is not typically the case that simply being trained as an analyst gives you the tools you need to overcome cognitive bias.
You can learn techniques for memory improvement.
You can learn techniques for better focus, but techniques to eliminate cognitive bias just simply don't work.
And for intelligence analysts in the real world, the implications of making mistakes from these biases are drastic.
Government reports and studies over the past decade or so have cited experts as believing that cognitive bias may have played a role in a number of very significant intelligence failures, and yet it remains an understudied problem.
Heads.
Heads.
But the area of our lives in which these systematic mistakes have the most explosive impact is in the world of money.
The moment money enters the picture, the rules change.
Many of us think that we're at our most rational when it comes to decisions about money.
We like to think we know how to spot a bargain, to strike a good deal, sell our house at the right time, invest wisely.
Thinking about money the right way is one of the most challenging things for human nature.
But if we're not as rational as we like to think, and there is a hidden force at work shaping our decisions, are we deluding ourselves? Money brings with it a mode of thinking.
It changes the way we react to the world.
When it comes to money, cognitive biases play havoc with our best intentions.
There are many mistakes that people make when it comes to money.
Kahneman's insight into our mistakes with money were to revolutionise our understanding of economics.
It's all about a crucial difference in how we feel when we win or lose and our readiness to take a risk.
I would like to take a risk.
Take a risk, OK? Let's take a risk.
Our willingness to take a gamble is very different depending on whether we are faced with a loss or a gain.
Excuse me, guys, can you spare two minutes to help us with a little experiment? Try and win as much money as you can, OK? OK.
OK? In my hands here I have £20, OK? Here are two scenarios.
And I'm going to give you ten.
In the first case, you are given ten pounds.
That's now yours.
Put it in your pocket, take it away, spend it on a drink on the South Bank later.
OK.
OK? OK.
Then you have to make a choice about how much more you could gain.
You can either take the safe option, in which case, I give you an additional five, or you can take a risk.
If you take a risk, I'm going to flip this coin.
If it comes up heads, you win ten, but if it comes up tails, you're not going to win any more.
Would you choose the safe option and get an extra five pounds or take a risk and maybe win an extra ten or nothing? Which is it going to be? I'd go safe.
Safe, five? - Yeah.
Take five.
-You'd take five? -Yeah, man.
-Sure? There we go.
Most people presented with this choice go for the certainty of the extra fiver.
Thank you very much.
Told you it was easy.
In a winning frame of mind, people are naturally rather cautious.
That's yours, too.
That was it? That was it.
-Really? -Yes.
-Eh? But what about losing? Are we similarly cautious when faced with a potential loss? In my hands, I've got £20 and I'm going to give that to you.
That's now yours.
OK.
-You can put it in your handbag.
This time, you're given £20.
And again, you must make a choice.
Would you choose to accept a safe loss of £5 or would you take a risk?
If it comes up heads, you don't lose anything, but if it comes up tails, then you lose ten pounds.
In fact, it's exactly the same outcome.
In both cases, you face a choice between ending up with a certain £15 or tossing a coin to get either ten or twenty.
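Written out (all figures in pounds), the two offers really are identical, both in the guaranteed outcome and in the expected value of the gamble:

\[
\text{safe:}\quad 10 + 5 \;=\; 20 - 5 \;=\; 15, \qquad
\text{gamble:}\quad \tfrac{1}{2}(20) + \tfrac{1}{2}(10) \;=\; 15
\]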
I will risk losing ten or nothing.
- OK.
But the crucial surprise here is that when the choice is framed in terms of a loss, most people take a risk.
Take a risk.
- Take a risk, OK.
I'll risk it.
- You'll risk it? OK.
Our slow System 2 could probably work out that the outcome is the same in both cases.
And that's heads, you win.
But it's too limited and too lazy.
That's the easiest £20 you'll ever make.
Instead, fast System 1 makes a rough guess based on change.
And that's all there is to it, thank you very much.
Oh, no! Look.
And System 1 doesn't like losing.
If you were to lose £10 in the street today and then find £10 tomorrow, you would be financially unchanged but actually we respond to changes, so the pain of the loss of £10 looms much larger, it feels more painful.
In fact, you'd probably have to find £20 to offset the pain that you feel by losing ten.
Heads.
At the heart of this, is a bias called loss aversion, which affects many of our financial decisions.
People think in terms of gains and losses.
Heads.
It's tails.
Oh! And in their thinking, typically, losses loom larger than gains.
We even have an idea by... by how much, by roughly a factor of two or a little more than two.
That is loss aversion, and it certainly was the most important thing that emerged from our work.
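Kahneman and Tversky later formalised this "factor of two" in the prospect theory value function, sketched here. The exponent and the loss-aversion coefficient are the values they estimated in their 1992 paper, quoted for illustration rather than taken from the programme.

\[
v(x) \;=\;
\begin{cases}
x^{\alpha} & x \ge 0 \\
-\lambda\,(-x)^{\alpha} & x < 0
\end{cases}
\qquad \alpha \approx 0.88,\quad \lambda \approx 2.25
\]

With λ around two, a £10 loss feels roughly as bad as a £20 gain feels good, which is the asymmetry described above.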
It's a vital insight into human behaviour, so important that it led to a Nobel prize and the founding of an entirely new branch of economics.
When we think we're winning, we don't take risks.
But when we're faced with a loss, frankly, we're a bit reckless.
But loss aversion doesn't just affect people making casual five pound bets.
It can affect anyone at any time, including those who work in the complex system of high finance, in which trillions of dollars are traded.
In our current complex environments, we now have the means as well as the motive to make very serious mistakes.
The bedrock of economics is that people think rationally.
They calculate risks, rewards and decide accordingly.
But we're not always rational.
We rarely behave like Mr Spock.
For most of our decisions, we use fast, intuitive, but occasionally unreliable System 1.
And in a global financial market, that can lead to very serious problems.
I think what the financial crisis did was, it simply said, "You know what? People are a lot more vulnerable to psychological pitfalls than we really understood before."
Basically, human psychology is just too flawed to expect that we could avert a crisis.
Understanding these pitfalls has led to a new branch of economics.
Behavioural economics.
Thanks to psychologists like Hersh Shefrin, it's beginning to establish a toehold in Wall Street.
It takes account of the way we actually make decisions rather than how we say we do.
Financial crisis, I think, was as large a problem as it was because certain psychological traits like optimism, over-confidence and confirmation bias played a very large role among a part of the economy where serious mistakes could be made, and were.
But for as long as our financial system assumes we are rational, our economy will remain vulnerable.
I'm quite certain that if the regulators listened to behavioural economists early on, we would have designed a very different financial system and we wouldn't have had the incredible increase in the housing market and we wouldn't have this financial catastrophe.
And so when Kahneman collected his Nobel prize, it wasn't for psychology, it was for economics.
The big question is, what can we do about these systematic mistakes? Can we hope to find a way round our fast-thinking biases and make better decisions? To answer this, we need to know the evolutionary origins of our mistakes.
Just off the coast of Puerto Rico is probably the best place in the world to find out.
The tiny island of Cayo Santiago.
So we're now in the boat, heading over to Cayo Santiago.
This is an island filled with a thousand rhesus monkeys.
Once you pull in, it looks a little bit like you're going to Jurassic Park.
You're not sure what you're going to see.
Then you'll see your first monkey, and it'll be comfortable, like, "Ah, the monkeys are here, everything's great."
It's an island devoted to monkey research.
You see the guys hanging out on the cliff up there? Pretty cool.
The really special thing about Cayo Santiago is that the animals here, because they've grown up over the last 70 years around humans, they're completely habituated, and that means we can get up close to them, show them stuff, look at how they make decisions.
We're able to do this here in a way that we'd never be able to do it anywhere else, really.
It's really unique.
Laurie Santos is here to find out if monkeys make the same mistakes in their decisions that we do.
Most of the work we do is comparing humans and other primates, trying to ask what's special about humans.
But really, what we want to understand is, what's the evolutionary origin of some of our dumber strategies, some of those spots where we get things wrong? If we could understand where those came from, that's where we'll get some insight.
If Santos can show us that monkeys have the same cognitive biases as us, it would suggest that they evolved a long time ago.
And a mental strategy that old would be almost impossible to change.
We started this work around the time of the financial collapse.
So, when we were thinking about what dumb strategies could we look at in monkeys, it was pretty obvious that some of the human economic strategies which were in the news might be the first thing to look at.
And one of the particular things we wanted to look at was whether or not the monkeys are loss averse.
But monkeys, smart as they are, have yet to start using money.
And so that was kind of where we started.
We said, "Well, how can we even ask this question "of if monkeys make financial mistakes?" And so we decided to do it by introducing the monkeys to their own new currency and just let them buy their food.
So I'll show you some of this stuff we've been up to with the monkeys.
Back in her lab at Yale, she introduced a troop of monkeys to their own market, giving them round shiny tokens they could exchange for food.
So here's Holly.
She comes in, hands over a token and you can see, she just gets to grab the grape there.
One of the first things we wondered was just, can they in some sense learn that a different store sells different food at different prices? So what we did was, we presented the monkeys with situations where they met traders who sold different goods at different rates.
So what you'll see in this clip is the monkeys meeting a new trader.
She's actually selling grapes for three grapes per one token.
And what we found is that in this case, the monkeys are pretty rational.
So when they get a choice of a guy who sells, you know, three goods for one token, they actually shop more at that guy.
Having taught the monkeys the value of money, the next step was to see if monkeys, like humans, suffer from that most crucial bias, loss aversion.
And so what we did was, we introduced the monkeys to traders who either gave out losses or gains relative to what they should.
So I could make the monkey think he's getting a bonus simply by having him trade with a trader who's starting with a single grape but then when the monkey pays this trader, she actually gives him an extra, so she gives him a bonus.
At the end, the monkey gets two, but he thinks he got that second one as a bonus.
We can then compare what the monkeys do with that guy versus a guy who gives the monkey losses.
This is a guy who shows up, who pretends he's going to sell three grapes, but then when the monkey actually pays this trader, he'll take one of the grapes away and give the monkeys only two.
The big question then is how the monkeys react when faced with a choice between a loss and a gain.
So she'll come in, she's met these two guys before.
You can see she goes with the bonus option, even waits patiently for her additional piece to be added here, and then takes the bonus, avoiding the person who gives her losses.
So monkeys hate losing just as much as people.
And crucially, Santos found that monkeys, as well, are more likely to take risks when faced with a loss.
This suggests to us that the monkeys seem to frame their decisions in exactly the same way we do.
They're not thinking just about the absolute, they're thinking relative to what they expect.
And when they're getting less than they expect, when they're getting losses, they too become more risk-seeking.
The fact that we share this bias with these monkeys suggests that it's an ancient strategy etched into our DNA more than 35 million years ago.
And what we learn from the monkeys is that if this bias is really that old, if we really have had this strategy for the last 35 million years, simply deciding to overcome it is just not going to work.
We need better ways to make ourselves avoid some of these pitfalls.
Making mistakes, it seems, is just part of what it is to be human.
We are stuck with our intuitive inner stranger.
The challenge this poses is profound.
If it's human nature to make these predictable mistakes and we can't change that, what, then, can we do? We need to accept ourselves as we are.
The cool thing about being a human versus a monkey is that we have a deliberative self that can reflect on our biases.
System 2 in us has for the first time realised that there's a System 1, and with that realisation, we can shape the way we set up policies.
We can shape the way we set up situations to allow ourselves to make better decisions.
This is the first time in evolution that this has happened.
If we want to avoid mistakes, we have to reshape the environment we've built around us rather than hope to change ourselves.
We've achieved a lot despite all of these biases.
If we are aware of them, we can probably do things like design our institutions and our regulations and our own personal environments and working lives to minimise the effect of those biases and help us think about how to overcome them.
We are limited, we're not perfect.
We're irrational in all kinds of ways, but we can build a world that is compatible with this and get us to make better decisions rather than worse decisions.
That's my hope.
And by accepting our inner stranger, we may come to a better understanding of our own minds.
I think it is important, in general, to be aware of where beliefs come from.
And if we think that we have reasons for what we believe, that is often a mistake, that our beliefs and our wishes and our hopes are not always anchored in reasons.
They're anchored in something else that comes from within and is different.
