Why Smart People (Sometimes) Make Bad Decisions - Harvard Business Review

ALISON BEARD: Welcome to the HBR IdeaCast from Harvard Business Review. I’m Alison Beard.

We all want to make better decisions, whether we’re hiring a new employee, choosing a marketing campaign or allocating funds between different departments. Good judgment is an essential part of any job, and it’s ultimately what makes individuals, teams and organizations successful.

But we know that as humans, we’re all flawed. We’re pretty terrible at making predictions, we come with preconceived opinions, we make a ton of mistakes. Even the greatest leaders trip up from time to time. We talk a lot about how to eliminate bias in decision-making, that is, the tendency to consistently swing in one direction as an individual or a group. Maybe it’s always thinking a project will get done faster than it will, or consistently underestimating costs, or hiring only people whose names sound familiar.

Today’s guests say that we’re overlooking another big problem, what they call “noise,” or random variability in the decisions made by different people: doctors, judges, managers. It’s pervasive and affects all industries. But noise is also something that, once we understand it, we can combat.

Daniel Kahneman is a Nobel Prize winner and emeritus professor at Princeton University. Olivier Sibony is a professor of strategy at HEC Paris. And they’re coauthors, along with Cass Sunstein, of a new book, “Noise: A Flaw in Human Judgment”. Danny, Olivier, thanks so much for joining me.

DANIEL KAHNEMAN: My pleasure.

OLIVIER SIBONY: Thanks for having us.

ALISON BEARD: So, as I said, organizations around the world have become much more aware of bias in their decision making, and a lot seem to be working to try to fix it. How is noise different and why is it also just as important?

DANIEL KAHNEMAN: Well, bias has become almost a synonym for error, that is, when people make or see errors of judgment, they attribute them to biases. But there is another form of error. Suppose you have a number of people making a judgment about an object or a person. And on average they’re exactly right, but they vary. That variability is noise, and so it’s variability that shouldn’t exist. It’s judgments that should agree, but don’t. Wherever there is judgment, there is noise, and more of it than you think.

ALISON BEARD: And you’re not just talking about differences between people, what you call interchangeable professionals. It’s actually sometimes the same person making a different judgment in a very similar circumstance?

OLIVIER SIBONY: That’s right. In fact, when we measure the fact that different people in the same organization make different judgments, that also includes the fact that the very same people might have made a different judgment at another time. And the way to check that is that in some situations you can actually test whether people given the same problem make the same judgment. In some domains it’s impossible to test, because people will recognize it. If you show a judge the same defendant the judge sentenced yesterday, the judge will say, “Well, I recognize this person. I sentenced him to seven years yesterday. I’m going to sentence him to seven years today. I’m not crazy.”

But if you take people who cannot recognize what they’ve done before, like radiologists who see x-ray images that they’ve seen some time ago, or even forensic scientists who look at pairs of fingerprints that they decided some time ago were a match or were not a match, there will be some variability in their judgment from one moment to the next. So there is noise between people and there is noise within people.

ALISON BEARD: And it’s different than bias because it’s not a predictable inclination?

DANIEL KAHNEMAN: We distinguish three sources of noise. The first one, if we’re sticking to the sentencing example, is differences in the toughness of sentences, in the average level of severity. The second is differences in taste, in the ranking of crimes or defendants. And the third is within-judge variability, so that a judge might be more severe when the temperature is high than when it’s low, or when his or her favorite football team has lost a game or won a game the preceding day. Those are the three sources of variance. But actually, what we discovered in the process of writing the book was that the biggest source of variation is just the different judgment personalities that people have.
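To make those three sources concrete, here is a minimal sketch in Python. The judges, cases, and magnitudes are invented for illustration; they are not figures from the book or the sentencing studies it discusses. It simulates each sentence as a case’s true severity plus the three noise components Kahneman lists, and shows that for a single case the spread across judges is pure noise.

```python
import numpy as np

rng = np.random.default_rng(42)
n_judges, n_cases = 10, 8

# Illustrative magnitudes (years of prison), chosen only for the example.
true_case_severity = rng.uniform(2, 10, n_cases)              # what each case "deserves"
judge_level        = rng.normal(0, 1.5, n_judges)             # level noise: each judge's overall severity
judge_pattern      = rng.normal(0, 1.0, (n_judges, n_cases))  # pattern noise: idiosyncratic rankings of cases
occasion           = rng.normal(0, 0.8, (n_judges, n_cases))  # occasion noise: mood, weather, the day's events

sentences = true_case_severity + judge_level[:, None] + judge_pattern + occasion

# For any single case, every judge is looking at the same facts, so the
# spread of sentences across judges is pure system noise.
print("case 0 sentences:", np.round(sentences[:, 0], 1))
print("std dev across judges for case 0:",
      round(float(sentences[:, 0].std()), 2), "years")
```

In real data only the sentences are observed; the point of collecting judgments from many judges on the same cases is precisely to reveal how large these otherwise invisible components are.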

ALISON BEARD: So let’s move to the corporate world. Why is it dangerous for a business to have too much noise? What are some real world consequences?

OLIVIER SIBONY: Well, there are several types of consequences. The most obvious one is that they are going to make mistakes. If you assume that there is a correct answer, and two people have a different answer, then at least one of them is wrong. And very often businesses think, “Well, it doesn’t matter, because we are right on average, so those mistakes tend to cancel out.” But really they don’t. If you price one insurance policy too high and another too low, on average your pricing may be right. But the one that is priced too high is a customer you might not get. The one you price too low is a risk that you are not charging the right price for.

Then there is another consideration which matters to many organizations, which is the credibility and the fairness of their decisions. Even if you don’t know, and there is no way of knowing in absolute terms, which employee deserves the best rating (that’s an inherently subjective judgment for which you will never know the absolute truth), it is shocking if your rating and my rating depend on the luck of the draw, because one person would give you a top rating and the next person would give you an average rating. That destroys the credibility of the process, and ultimately the credibility of the organization.

ALISON BEARD: I love those two examples you just gave, because you’re calling out the fact that noise is a problem in both predictive decisions, like how much an insurance client is going to be worth, and evaluative decisions, like whether an employee has performed well or not.

DANIEL KAHNEMAN: That’s correct. And actually, you can think of it that way: the customer or the employee who interacts with an organization is facing a lottery, and that lottery shouldn’t exist. People wouldn’t sign up for a lottery when they’re asking for an insurance premium from a company.

ALISON BEARD: Right.

OLIVIER SIBONY: You could even put it in stronger terms. If you were told that some of your customers have to pay a higher premium or get turned down for a loan because of some bias in the way their case has been evaluated, some bias tied to the identity of those customers, you would find this outrageous, and you would be right. But if you hear that some customers have to pay a higher insurance premium or get turned down for a loan because of the luck of the draw in that lottery, we somehow ignore that problem. We don’t think we should ignore that problem. We think we should do something about it. We think it is just as outrageous and as damaging as bias.

ALISON BEARD: Yeah. It’s really in pursuit of consistency.

DANIEL KAHNEMAN: Just to give another example of that, we thought of an organization that hires people where half of the people doing the hiring favor men and half favor women. Now, on average, the organization will be unbiased, but it’s going to make a lot of mistakes. It’s going to hire the wrong people. And that is entirely due to noise.

ALISON BEARD: You’ll have one department populated entirely by women, and another populated entirely by men, which surely isn’t productive. So how do you tell if you as an individual or your organization has a noise problem?

OLIVIER SIBONY: We suggest a procedure that we call the noise audit. In fact, what happens in most organizations most of the time is that each problem, each case, each judgment is handled by one person. So in the courthouse, the judge to whom a particular defendant gets assigned gets to see that defendant, but no other judge is going to see that defendant. In the bank, the loan officer who looks at a particular loan application is going to make that decision, but no one else is going to see it.

The noise audit creates a situation, an exercise, an artificial situation, where a lot of different judges are going to look at a lot of different cases and give separate judgments about them. And that will give you a clean measure, a clean estimate, of how much difference there is that you normally don’t see. That might tell you that it’s fine, that the difference is tolerable, that it’s well within what you expected. But much more likely, you are going to find that the noise is in fact much larger than you thought and that you need to do something about it.

ALISON BEARD: How would you do that in a company?

DANIEL KAHNEMAN: Well, to give an example from, say, the world of insurance, you would ask executives in underwriting to write up realistic cases, or to find cases which are as representative as possible of the kinds of judgments that people make on the job. Now, you have one case and you might have 50 people making judgments about that case. You don’t know what the correct answer is. You don’t need to know the correct answer, but what you can observe is the variability in the judgments that people make. And if you’re doing this cleverly, you should compare the variability that you observe to the variability that people actually expect. That is, you should, in advance, obtain from executives an estimate of how much noise they expect to find, so that you can see whether they find a great deal more than they expected.
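To make the arithmetic of such an audit concrete, here is a minimal sketch in Python. The premium quotes and the expected figure are hypothetical, and the summary statistic used here, the average relative difference between two judgments of the same case, is just one simple way to quantify variability when no one knows the correct answer.

```python
from itertools import combinations
from statistics import mean

def noise_index(judgments):
    """Average relative difference between two judgments of the same case:
    |a - b| / mean(a, b), averaged over all pairs of judges."""
    pairs = list(combinations(judgments, 2))
    return mean(abs(a - b) / ((a + b) / 2) for a, b in pairs)

# Hypothetical premium quotes (in dollars) from five underwriters for one case.
quotes = [9_500, 12_000, 16_000, 10_200, 13_700]

observed = noise_index(quotes)
expected = 0.10  # what executives said they expected beforehand, e.g. "about 10%"

print(f"observed noise index: {observed:.0%}")
print(f"expected noise index: {expected:.0%}")
# If the observed value is several times the expectation, the audit has
# surfaced a noise problem worth acting on.
```

Because the index compares judgments with each other rather than with a true answer, it can be computed even when, as Kahneman notes, the correct premium is unknown.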

ALISON BEARD: Well, that’s a good way to segue into talking about solutions. So I think the first thing that will jump to a lot of people’s minds is to just eliminate humans from the decision-making process entirely and move to AI. Is that where this is going?

OLIVIER SIBONY: No, it is not where this is going, although it is an important approach and it is an approach that is being used, and that will be used in more and more fields. And the reason it’s very tempting, and the reason it’s often going to be useful, is that since we’ve said that wherever there is human judgment there is noise, the only way to completely eliminate noise is to eliminate human judgment. So when you do that, you will eliminate noise.

This does not guarantee, however, that you will make the best possible decisions. You might inadvertently create bias in the process. And there has been a lot of talk about algorithmic bias. You don’t have to create algorithmic bias. You are not doomed to create algorithmic bias every time you have an algorithm, but it’s a risk and it needs to be managed.

More importantly, there are lots of decisions, lots of important decisions, that do not lend themselves to that sort of automation, either because it’s impractical, or more often because, even though it’s practical, the people who have to make those decisions, and who bear responsibility for them, do not want to abdicate those responsibilities to machines. And for those reasons, we believe that it’s important to improve the quality of human judgment, to educate human judgment, to structure human judgment, and not to hope to ever abolish it.

ALISON BEARD: Okay, so without resorting to algorithms, what are some things that we humans can do to reduce noise in our decision-making?

DANIEL KAHNEMAN: We are more concerned, actually, with what an organization could do than with what an individual could do. We talk of a family of procedures and steps that we call decision hygiene. It’s not an appealing term, and it’s meant not to be very appealing, but we want to contrast hygiene with medication or vaccination. That is, with medication or vaccination, when you do that, you know which disease it is that you are trying to control or combat or eliminate. When you wash your hands, you don’t know what germs you’re killing. And if you’re successful, you will never know.

And when people think in terms of biases, they’re very naturally drawn to how can we control this particular bias or that bias. And this is more like vaccination or medication. When you’re thinking of noise, you are basically looking for ways of producing a more uniform and efficient use of information.

ALISON BEARD: Okay. So let’s talk about some of the ways to become more hygienic. We talked about performance evaluations and hiring. So let’s say I’m a team leader or head of an HR department trying to help my organization do a better job of that with less noise. How should I start?

OLIVIER SIBONY: There’s one place where you’ve probably started already, which is to ask several people to make judgments. And typically, if you’re hiring, you’re asking several people to meet a candidate. If you’re evaluating, you usually have some sort of 360 degree assessment where several people are making the judgment.

The advice we would give you here is: keep doing that, but make sure that in the process of aggregating those judgments, you first elicit independent judgments before you discuss or aggregate them. That’s a precaution that companies don’t systematically take. For instance, if you are going to have a meeting to discuss which candidate to hire, make sure that every person sitting in that meeting has filled in a very detailed form explaining what they think of the candidate before they interact with the others. The reason for this is that, obviously, social influence between people is a noise factor.

The second thing you could do, which is just as important, is to structure your judgments. And specifically, if you take recruiting, this is going to be obvious: to say, what are the various attributes or qualities or skills that we expect in this person? What is the job description? And then to make sure that each of the dimensions that you’re going to evaluate those candidates on, you evaluate separately. So each of the people who is meeting Alison in an interview will say: Alison is evaluated on N dimensions, one, two, three, four, five, six, seven, and we’ll evaluate Alison separately on those dimensions. That’s what a structured interview, for those who are familiar with the term, typically does.

ALISON BEARD: And Danny, that goes back to your initial research on Israeli soldiers, right?

DANIEL KAHNEMAN: Indeed. I find myself at the end of my career returning to a theme with which I started as a psychologist, when I developed an interview system for the Israeli Army. I’m embarrassed to say when it was, but it was 1956. And that interview really consisted of a set of traits, and you evaluate the traits in sequence using factual questions. It’s much easier to be fact-based and to be objective when you’re assessing individual characteristics, whether of a candidate or of an option in decision-making, than it is to keep the same objectivity when you’re making a global judgment.

So the idea here is to break up the problem into dimensions, evaluate the dimensions as independently as possible, and delay the intuitive global evaluation until you have all the information about the option or about the candidate that you have to evaluate.

And very much the same process can be applied to hiring, where it is applied in many places, for example at Google. But it can also be applied much more generally to decisions. In decision-making, you can think of options as candidates. And so the reasoning that applies to how you should interview and evaluate candidates applies much more broadly to reviewing and evaluating options in decision-making.
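As a sketch of that sequence in Python: the dimensions, the 1-to-5 scale, and the scores below are invented for illustration and are not taken from any actual interview system. Each interviewer rates every dimension independently before any discussion, the ratings are aggregated per dimension, and only then is a global figure formed.

```python
from statistics import mean

# Hypothetical dimensions for the role; in practice these come from the job description.
DIMENSIONS = ["technical skill", "communication", "judgment", "drive"]

# Each interviewer scores every dimension independently (1-5), before any discussion.
interviewer_scores = {
    "interviewer_a": {"technical skill": 4, "communication": 3, "judgment": 5, "drive": 4},
    "interviewer_b": {"technical skill": 5, "communication": 2, "judgment": 4, "drive": 4},
    "interviewer_c": {"technical skill": 4, "communication": 4, "judgment": 4, "drive": 3},
}

# Aggregate per dimension first, so each attribute is judged on its own evidence.
per_dimension = {
    d: mean(scores[d] for scores in interviewer_scores.values()) for d in DIMENSIONS
}

# Only now form a global number; the intuitive overall call is delayed until
# all dimension-level information is on the table.
overall = mean(per_dimension.values())

print(per_dimension)
print(f"overall: {overall:.2f} / 5")
```

Simple averaging is used here only because it is the plainest aggregation rule; the point of the structure is the ordering, with independent dimension scores first and the global view last.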

ALISON BEARD: And in terms of the people that you’re bringing into these processes, in the book you talk about people who are indeed very good judges, they’re very good decision-makers, in part because they probably more intuitively follow the processes you’re suggesting. But then also decision observers, people who sort of monitor the hygiene. So talk about how you incorporate both those types of people to make sure that an organization as a whole is making better decisions.

DANIEL KAHNEMAN: Well, one thing that we know, actually, is that somebody who is looking objectively at a decision or at a judgment being made is much more likely to detect mistakes or biases than the people who are actually making the decision. So the idea that we recommend, and it’s untried, but we think it’s definitely worth trying, is to have observers who are looking for possible biases and sources of mistakes. And we actually have a sort of checklist for the most common sources of error. Having somebody who is independently making those judgments, we hope and we think, is likely to improve the quality of those processes.

OLIVIER SIBONY: There is another aspect of this in your question, Alison, which goes to styles of leadership and styles of problem-solving. And everybody knows that if you want people to make the right judgments and the right decisions, it helps for them to be competent, to know what they are talking about. That’s true. And that’s not a revelation. Everybody suspects that it’s important for those people to be smart, to have a high IQ, and to be able to understand the ins and outs of complex problems. And that’s true too. IQ does help. There’s a third thing, which is discussed a lot less but which actually does matter a lot, which is the cognitive style that you bring to a problem. And the style that seems to characterize the best judges on questions on which you can actually evaluate them, so questions of predictive judgment where you can look back and see who was right and who was wrong, is something called actively open-minded thinking.

That’s the style of people who actually love to change their mind, who love to look for information that might prove them wrong, and who proudly change their mind and explain why they have changed their minds because the facts have changed.

It’s a very interesting style to have in people who need to make difficult judgments. It’s also a difficult style to reconcile with our stereotype of leadership, where we imagine leaders as decisive, unwavering, committed people who know exactly where they are going and whom everyone wants to follow. So that’s a bit of a challenge for people who want to make the right decisions when they’re in a position of leadership, but we think it’s a challenge well worth thinking about.

ALISON BEARD: But it sounds like something that organizations should put on that checklist of what they’re looking for in new hires or promotion candidates, right?

OLIVIER SIBONY: Actively open-minded people. Yes.

ALISON BEARD: Yeah. All of these ideas do sound very promising, but they also sound like they might be really time-consuming, costly to implement, and also just a little bureaucratic, stifling even. So is it really worth it? How do you weigh the costs with the benefits?

DANIEL KAHNEMAN: Well, in the first place, it’s very clear that if you’re going to have procedures, the people who are going to live with those procedures should be involved in creating them and should be involved in periodically reviewing them and revising them. So it’s very important for people to view those tools and procedures as tools that help, rather than as bureaucratic dictates that they have to follow. And indeed there is a risk of bureaucratization whenever you look for a way of creating uniform procedures, but we think the process of adopting procedures is where you can overcome at least some of these difficulties. And also, and this is really very important, procedures should not be guaranteed to last forever. They should be periodically reviewed and periodically criticized. That is, you are not looking for the best procedure that would last forever; you are looking for the best procedure for right now. And you have a time period at which you’re going to evaluate it again.

ALISON BEARD: And the next question that our listeners will have is, is all of this achievable? Do you have evidence that these interventions you’re talking about really work and make a difference and lead to better decisions?

OLIVIER SIBONY: There is lots of evidence in some situations. The most striking example is recruiting. In hiring decisions, we have a century of data, and we know that traditional interviews, the way most companies still do them, unstructured interviews as they are called, do not get very good results. And that structuring your judgments, which is one of the forms of decision hygiene that we’ve talked about, does give much better results.

This is interesting, because it tells you several things. The first thing it tells you is that however much you improve, you’re never going to be perfect. Even structured interviews are very, very far from being perfect at predicting who is going to succeed and who is not going to succeed on the job. So this is a game of inches, where you’re improving your probability of success, but you should remain aware at all times that improving your probability of success does not guarantee success.

And if your aspiration, when you ask “Does this work?”, is that it guarantees success, that aspiration is going to be disappointed. So first we should bear in mind that it’s not about being perfect, it’s about being better. The second thing that the story of recruiting tells you, which is really interesting, is that we’ve known for decades that people shouldn’t be doing unstructured interviews, but they still do. And we’ve known for decades that there are solutions that are superior, and they are catching on.

But it takes a long time for people to see the benefits of those techniques, and the resistance of people who fear that their power to make discretionary decisions is going to be taken away from them is a very real concern. That is why, when we bring those techniques into organizations, we should, as Danny was saying earlier, make sure that people own them and that they do not feel power is being taken away from them, but on the contrary that they are being helped with new tools to reach better judgments.

DANIEL KAHNEMAN: I should add that in decision-making, there is the institution of meetings. Executives spend a lot of their lives in meetings. And I think it’s fair to say that meetings are by and large not optimized. That there is-

ALISON BEARD: A waste of time.

DANIEL KAHNEMAN: A lot of wasted time.

OLIVIER SIBONY: This is the understatement of the year.

DANIEL KAHNEMAN: Yeah. And so one obvious target for decision hygiene is how to conduct meetings and how to make meetings feel more efficient and actually be more efficient. That is one hopeful way of thinking about decision hygiene that people will immediately recognize. That’s not bureaucracy, that just improves life.

ALISON BEARD: And what’s the impact that you’re hoping to have with this book? If we all start paying more attention to noise and trying to eliminate it, what are the big benefits for companies and even broader society?

DANIEL KAHNEMAN: Well, I think that everybody recognizes that making better decisions is a worthy objective and that eliminating error is a worthy objective. And there are two kinds of errors, bias and noise. And one of them has been completely neglected. Really, the aim of this book is to direct attention to a problem that has been thoroughly neglected and to start a conversation about it. And the impact would be a conversation, both academic and applied. That is, we hope that organizations will be curious. We hope that organizations will conduct noise audits. And if they do, they will discover that there is more noise than they were expecting. And then perhaps they’ll want to do something about it.

ALISON BEARD: Terrific. Well, thank you both for being with us.

DANIEL KAHNEMAN: Thank you.

OLIVIER SIBONY: Thank you very much.

ALISON BEARD: That’s Daniel Kahneman and Olivier Sibony. They’re co-authors along with Cass Sunstein of a new book, Noise: A Flaw in Human Judgment.

This episode was produced by Mary Dooe. We get technical help from Rob Eckhardt. Adam Buchholz is our audio product manager. Thanks for listening to the HBR IdeaCast. I’m Alison Beard.
