16 November 2020

Today’s guest is Annie Duke, a writer and expert on decision making. Annie is also the author of Thinking in Bets, a New York Times best-selling book about decision making and her life as a professional poker player, published in 2018. She has a new book out, How to Decide: Simple Tools for Making Better Choices.

Our topic today is how to improve our decision making. Annie is one of the best at helping us think clearly, develop good habits, and build a sound process around making decisions, especially in finance.



  • The difference between luck and risk
  • The Paradox of Experience
  • The Decision Multi-verse
  • The first step of a great decision process
  • The importance of estimating reasonable outcomes before you make the decision
  • Why we should make implicit assumptions explicit when making decisions
  • The difference between the inside view and the outside view
  • How to know when it’s time to make the decision


Help us reach new listeners by leaving us a rating and review! It takes less than 30 seconds and really helps our show grow, which allows us to bring on even better guests for you all! Thank you – we really appreciate it!





Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present due to platform differences.

Sean Murray  0:03  

Welcome to The Good Life. I’m your host, Sean Murray. My guest today is Annie Duke. She was a previous guest on our Decision Mastermind Episode earlier this year. 

She’s a professional poker player, an expert on decision making, and the author of “Thinking in Bets,” a New York Times best-selling book about decision making and her life as a professional poker player, published in 2018. 

She has a new book out: “How to Decide: Simple Tools for Making Better Choices.” Our topic today is how to improve our decision making. She’s one of the best at helping us think clearly, develop good habits, and build a sound process around making decisions, especially in finance. She is so knowledgeable and has a ton of stories and examples. 

This was a lot of fun. I hope you enjoy my conversation with Annie as much as I did. My friends, I bring you Annie Duke.

Intro  1:01  

You’re listening to The Good Life by The Investor’s Podcast Network, where we explore the ideas, principles and values that help you live a meaningful, purposeful life. Join your host, Sean Murray on a journey for the life well-lived.

Sean Murray  1:25  

Annie Duke, welcome to The Good Life.

Annie Duke  1:28  

Happy to be here. How are you today?

Sean Murray  1:31  

I’m doing great. I’m excited to talk about your new book, “How to Decide: Simple Tools for Making Better Choices.” You describe this book as one that offers a framework for making quality decisions and a set of tools to execute on that framework. 

It’s full of exercises, thought experiments, templates, and checklists. It’s really a fabulous book. It’s packed full of information, anecdotes, and a lot of learning. 

You mentioned right at the beginning a quote that really struck me. I thought we’d start there. You’re making a case for why we should spend time developing our own quality decision process. You say that it’s because there are only two things that determine how your life turns out: luck and the quality of your decisions. You have control over only one of those two things. 

Can you talk a little bit about that, and how we should approach decision making?

Annie Duke  2:22  

Just to be clear, that thing that you have control over is the quality of your decision. But the funny thing is that I’m not sure that that’s necessarily so clear for most people. I could think about the aphorism, “You make your own luck.” If you think about that, and that’s something I hear people say all the time, that would imply that you have some sort of control over luck. 


Let’s just define what luck is to start, so that we can dig into that. Luck, by definition, is the stuff that’s out of your control. You can think about it this way: if you make any decision, there are different ways that it could turn out, and which outcome you actually observe at a particular time is determined by luck. 

A simple way to think about it is if I flip a coin, once I flip the coin, I know it can land heads or tails. Luck is going to determine whether I happen to observe heads or tails. I have no control over that whatsoever. 

I think that what people are actually saying, when they say you make your own luck, is that you make your own decisions. That’s kind of my point in the quote. By that, what I mean is I can choose a decision that maybe was going to work out for me 50% of the time, or I could choose a decision that’s going to work out in some way that I really like 80% of the time. I could also choose the option that’s only going to work out for me 20% of the time. 

In that way, I’m toggling what the chances are that I get a bad outcome or a good outcome. But I want to be clear that even if I choose an option that’s going to work out 90% of the time and not work out 10% of the time, whether on that particular occasion I observe an outcome I don’t like, which will happen 10% of the time by definition, is still a matter of luck. 

It’s just that I’m reducing the chances that I see that thing happen. That’s the relationship between luck and the quality of your decisions, and it’s why we should be so focused on the quality of our decisions. Luck is something that needs to be seen clearly, but that’s it. The rest is all up to you. 

Sean Murray  4:30  

What you’re saying there, if I understand you correctly, is that we do have some control over luck in the process of making the decision. We’re choosing how much luck, or maybe how much risk, we want to accept in the outcomes.

Annie Duke  4:45  

I would say I just want to separate luck and risk, because people kind of put those together. Different decisions have different risks associated with them, but they all have the same luck, in the sense that luck’s influence is luck’s influence. It’s just going to intervene. But how much you’re exposed to the downside is what your decision is determining. 

There’s a difference between risk and luck. I think that we tend to confuse those. When people say you make your own luck, what they’re kind of saying is, if you make particular types of decisions, you can reduce the risk of a bad outcome.

Sean Murray  5:15  

There’s another thing you mentioned at the beginning of the book, which is, if decision making has such an outsized impact on the quality of our life, you would think we would spend more time studying decision making in school.

I did not get a lot of training in decision making from high school through my undergrad. In my graduate studies, I took one class on decision science. It was much more about things like: if I was a wildcat oil driller in Texas, and I had a 40% chance of one outcome and a 30% chance of another outcome, computing the expected outcomes. 

It’s very challenging to take that kind of decision framework and apply it to our world. I think you did a pretty good job in this book of bridging that gap for me. But the bigger point here is that we really don’t spend much time thinking about decision making.

Annie Duke  6:04  

Actually, that whole topic is the reason why I co-founded the Alliance for Decision Education. We really aren’t teaching kids how to make decisions, not in K-12 education. 

I think it’s kind of nutty that it’s a requirement to do trigonometry. The need to understand sine, cosine, or tangent is specific to very particular professions that you might end up in, such as engineering.

If someone’s going to engineer a bridge for me, I would very much like them to know trigonometry. This is true. But for most things that people are doing, it’s not really a required skill. 

In fact, funnily enough, the reason why it’s in the curriculum is because it’s so hard and doesn’t feel like it has an application. It was sort of like a grit test. That’s kind of how it got in there. 

[If] you can get through this, you’ll really make it in life. I don’t think that’s a particularly good use of our educational time. Maybe we should be teaching things like statistics and probability instead, which really form the basis of good decision making, or even habit formation. 

The reason why I think about that is that when I ask people *inaudible*, “What’s your decision process?” the answers are kind of all over the map. Most people can’t articulate it. A lot of people are like, “I go with my gut,” or “I decide by consensus,” or “We convene committees.”

None of these are particularly good descriptions of an actual decision process. I kind of think about it like walking. I started walking, I don’t know, I’d have to ask my dad, but somewhere around nine months or a year or something like that. 

I put one foot in front of the other, and I go. Can I explain the process of walking in terms of the physics of it? Can I really even access what I’m thinking when I do it? No, but I feel like I’m an expert walker. 

I don’t really examine it very much. The thing is, even little tiny babies are making decisions. It’s something that we’re doing all the time. I think this kind of goes back to what’s called “system 1 and system 2.” 

Your cats are making decisions. In the same way that cats are walking, it’s just not a process that’s accessible to them. Of course, human beings have the ability to think abstractly. They can think beyond nine seconds from now, imagine beyond their lifetime, think about what their goals are, and think about their values in this very deliberative way, where you’re recruiting system 2.

But because I think system 1 has sort of been running with decisions the whole time, we have the illusion that we’re very good at it. In some ways, we are. The human species survived pretty well. Although, in the information environment we live in now, in the modern world, a lot of the things that made it so that we survived really well back in the day aren’t necessarily good for us now.

We need to be thinking about what’s a good process in the same way that for a primitive man, just eating as much sugar and fat as you could was an incredibly good strategy. It is a terrible strategy today, but we still do it. 

I think that we really need to get a hold of what a good decision is and what a good decision process is. Again, going back to it, your decisions are your luck. 

Luck certainly has a humongous influence on how your life turns out. No doubt. When you were born, where you were born, who you were born to. Are you tall? Are you short? You don’t have any control over any of this stuff. 

You have to grab hold of this piece that you do. You need to have some control over your decisions. We think so little about it. We don’t teach anybody about it. It’s the thing that’s really going to make a difference.

Sean Murray  9:28  

Yeah, and one way we can make a difference is to get better at decision making over time. This is something that you write about. To do that, we have to be able to look at our past decisions and learn from them. We’re not very good at that. 

You provided kind of a thought experiment which applies in the financial world. Since we’re on The Investor’s Podcast Network, I thought I’d bring it up. You buy a stock and it quadruples. Hey, great decision.

You buy a stock, then it goes to zero, or it goes down by 70%. It’s a terrible decision. That’s about the amount of time we spend reflecting on it before we try to move on and learn from it. What you say in the book is that it’s a very dangerous way to try to learn from our decision making if we only go that deep.

Annie Duke  10:07  

I would actually say it’s worse than that. In that particular case, obviously, you’re resulting. All I know is that the stock went up or down. I know nothing of your decision process. It’s very hard to say if it’s a good decision or a bad decision. Perfectly good decisions don’t always work out, and perfectly terrible decisions sometimes do. 

Obviously, as an example right now, if you bought Zoom as a stock, it may be that it was a really good decision for you to buy it because, over time, it was actually going to appreciate more than indexing the market would. 

But if you’re taking credit for what it’s doing now, unless you specifically had in your decision process, “I think there is going to be a pandemic, and everyone’s going to need to be at home using Zoom,” that’s probably more a matter of luck. You shouldn’t be taking so much credit for the fact that you made so much money on Zoom. 

What I think is really interesting is that “resulting” is even worse than that. In the case that you gave, which is an example from my book, I don’t give you any details. In that case, maybe you can argue, “Well, you’re asking me to answer the question, and you didn’t give me any details. What else did I have to go on?” You were implying that “I don’t know” was a fair response, right? 

I can show you how strong this illusion is by filling in some of the details. You buy an electric car. It is the best car you ever owned. Other people like the car. The founder of the company is a billionaire tech visionary. The stock quadruples in price. 

But I can do the same thing, right? You buy an electric car. You love it. Everybody you know loves theirs. The founder is a billionaire tech genius. You buy the stock, and within a year, it’s zero. Everybody calls it a terrible decision. 

Now, there I did give you some details about your process. I would argue that if that was your whole process for investing in the stock, it was a horrible decision. I’m pretty sure you’d want to look at: “What are their earnings? What’s their debt?” There’s a whole bunch of things that you’d like to know about that stock. 

You better have a good reason for why you think you know something that the market doesn’t know. If that was your process, it’s bad either way. But notice that when the stock quadruples, even though I’ve given you the details of how you came to the decision, you still think, “Wow, what a genius decision.”

Sean Murray  12:21  

Well, we go to the facts that came up. I knew, since I bought the car and enjoyed it, that it was such a great car. I saw my neighbors buying it, so it had to be a good decision. It was going to work out the way I thought it would because I had that inside information.

Annie Duke  12:37  

Meanwhile, of course, that’s not a very good reason to invest in a stock. Even if you think the stock is great, if the market knows the stock is great, you’re not getting a good price on the stock anyway. But this is what happens with “resulting.” It’s such a strong cognitive illusion that even when I do tell you about the process, you override it.

Sean Murray  12:59  

Let’s talk about overcoming “resulting.” How might we think about it in another way? How would we build a quality decision process that would get us out of that trap?

Annie Duke  13:10  

There are a couple of ways that we can think about how to deal with this resulting problem. One of them relates back to hindsight bias as well, which can help us out of the resulting problem. 

When we sort of think about how to solve this, this actually helps us to figure out how to build a good decision process in the first place. Obviously, all things being equal, we would have liked to have done the work beforehand, so that we don’t have to reconstruct everything after we find out whether the stock went up or down.

Let’s assume we didn’t do the work beforehand. There are a few things that you can do. The first is that you can actually build yourself a matrix. On one axis is decision quality, good or bad. On the other is outcome quality, good or bad. 

We can think about that as four quadrants. One quadrant is “earned rewards.” That would be a good decision with a good outcome. One quadrant is “bad luck.” That would be a good decision with a bad outcome. One quadrant is “dumb luck,” which would be a bad decision that gets a good outcome. And one is “just deserts.” That is a bad decision that gets a bad outcome. 

What we need to realize is that any particular outcome could fit in any one of those quadrants. All things being equal, I will say the more skill in what you’re doing, the more likely it’s going to be in either earned rewards or just deserts. 

If you think about something that was 100% skill, that would mean that dumb luck and bad luck would be out of the question. We know that with investing, there’s a lot of luck involved. Most of the things that we do, there’s a lot of luck involved. 

We want to really try to think about where we can place this in those quadrants. There are two ways to figure out where it might go. The first is to actually think about the outcome in the context of the other things that could have occurred. I think that this is part of the big problem with resulting. We lose sight of that other stuff that might have happened. 

Once the stock quadruples, we forget that there was some probability that it went to zero. There’s some probability that you lose 25% or 50%, or gain 25% or 50%, instead of quadrupling. It’s actually really good to go through and try to reconstruct what those other possible outcomes might have been, and figure out what the probability of those things occurring is. 

Let’s go back to, for example, the way that I open “Thinking in Bets” with Pete Carroll. He calls for a pass play, and it gets intercepted. That’s actually what I am doing in that book. I’m walking people through the possible outcomes.

Sean Murray  15:43  

I’m glad you brought up that example. It does bring up painful memories. I’m a Seahawks fan. Of course, you’re referring to the infamous play at the end of Super Bowl 49. The Seahawks were playing the Patriots.

The coach of the Seahawks, Pete Carroll, made a call to run a pass play on second and goal from the one-yard line with 26 seconds left. It’s how you opened up, as you mentioned, your previous book, “Thinking in Bets.” 

The consensus, of course, is that it was the worst call ever. Yet, if you do the analysis on the decision, you can come to a very different perspective on the quality of that decision. Maybe you could break that down for us a little bit.

Annie Duke  16:25  

I simplify it a little bit to make it an interception, a touchdown, or an incomplete pass. I mean, obviously, there are fumbles and other things, but I just narrow it to three. Those are the three reasonable outcomes to consider. And then you think about the probabilities of those things occurring, which I can just go look up base rates for. 

It turns out that among those three possible outcomes, the interception is somewhere between 1% and 2%. This helps me overcome “resulting” because I’m situating it in its appropriate context, the context of the other states of the world that might have actually unfolded and that I might have observed. That’s the first way that you can do it. This helps us figure out which quadrant we’re supposed to be sitting in. 

The other thing that you can do that I think is actually really important is to try to reconstruct your state of knowledge at the time of the decision. This is something that happens all the time: once we know the outcome, we feel like we should have known it, like it was somehow knowable beforehand. 

Very often, what happens is that we get an outcome and some new information reveals itself. Our memories are weird. We kind of feel like that was knowable beforehand. And then, all of a sudden, we’re saying, “Oh, it must have been a bad decision because, look, this information clearly tells you it was going to be a bad outcome. How could you have missed it?” 

This can create a real problem as well. It all wraps into a concept that I call the “paradox of experience.” Experience is really good for learning, but if you take the wrong lessons from it, like in that Pete Carroll example or the stock example, it can interfere with learning. 

But we can reconstruct what we knew beforehand, then look at what revealed itself after the fact and ask, “What was knowable beforehand? Was any of this knowable beforehand?”

Most times the answer is going to be no. And then, occasionally, the answer is going to be yes. Then you can ask a further question, which is, “Could I have afforded to get it?” Sometimes, yes, it was knowable, but it would have taken me six months to find out, and the opportunity would have gone away. 

Sometimes it would have cost too much money to buy the information, so it wouldn’t have made sense for the size of the investment. Most of the time, though, it’s me saying, “It wasn’t knowable.”

Sean Murray  18:38  

There’s just a lot there that I want to unpack. I love this concept that you described in the book called “the decision multiverse.” I think that’s how you describe this tool of thinking through the different outcomes.

We can do that with past decisions if we want to figure out whether we’re actually resulting or not. But even more importantly, as we move forward and think about a decision, it can be very helpful to articulate the reasonable outcomes we can expect, and to start trying to figure out what the probabilities and likelihoods are. 

You offer some great tools for that, too. Talk a little bit about the decision multiverse and how we can use it.

Annie Duke  19:17  

I introduced the decision multiverse in the book as a kind of tool for dealing with what an outcome means in retrospect. It’s what I talked about in the Pete Carroll example, where you can say, “What were the reasonable outcomes that could have occurred?” We’ll consider a touchdown, an incomplete pass, and an interception, and then try to figure out what the probability of each of those things occurring was.

You’re kind of working backwards in order to put the branches back on the tree. When you’re thinking forward, you know that all three of those things are possibilities. But then once you observe the interception, it’s like we take a cognitive chainsaw. We lop off all the other branches, and they just totally disappear from view. 

Now, all of a sudden, the outcome that you observe, even if it was highly unlikely, takes up your whole cognitive landscape because you only have one path. You only have the outcome that actually happened. We sort of view it as inevitable. If we view it as inevitable, that’s how we get into this resulting problem. 

Obviously, if the outcome was inevitable and bad, the decision obviously must be bad to have produced this inevitably bad outcome. That’s why we want to think about it as a decision multiverse. What are the counterfactuals? 

You can also do that obviously in your own life as well. When we do that, we just kind of get a better view. Now, reconstructing it is always harder than doing it in the first place. 

If you have some sort of evidence or record of what you thought at the time when you were making the decision, you’re always going to be better off because you can Google your own decision making. If you’re going to have a good process, it should naturally create an evidentiary record. 

I want to just sort of have *inaudible*. I’m not a big fan of thinking about a decision journal, just because, cognitively, it feels like an extra step. I’m going to make this decision, and then I’m going to have to go back and write it down and record it. 

A really good process is going to produce that record without you doing anything else, because you have to do it in order to have a good decision process. This gives us a clue as to what the first step is in a great decision process. 

The first step is that, for an option that you’re considering, say, “I want to pass the ball,” you ask: what is the reasonable set of outcomes that I should expect to see? Do that at the time of the decision instead of after the fact. Obviously, in order to do that at the time of the decision, you have to actually start thinking about what the future might hold.

I think the reason why that’s so incredibly important is that the way that people tend to think about decisions is that they tend to think about an option, and then they’re trying to predict the exact thing that will happen.

Sean Murray  22:01  

That’s really powerful. Maybe you could break that down for us using the Pete Carroll decision in the Super Bowl. He called a pass play from the one-yard line, on second down, with 26 seconds left, trailing New England by four points.

Annie Duke  22:17  

Generally, you’re working backwards: I would like to throw a touchdown, so let me think about the choice that will get me a touchdown. You’re not really thinking about the other ways it could turn out. You’re actually just thinking about trying to get to a single result. That’s not how the world works. Not for most things. 

For most things that you decide about, you have an option, and there are a lot of different ways it could turn out. Even just acknowledging in that process that I can’t control the outcome, that I can’t get myself to a deterministic outcome, helps. I can sort of figure out what the set of outcomes is. And if I can do that, I’m going to be a better decision maker because I’m going to have a better view of what the future might hold. 

What we’re trying to do is get ourselves as close to having a crystal ball as possible. Honestly, all a decision is, is a prediction of the future. That’s all it is. I decide between the chicken and the fish in a restaurant. I’m predicting that the future me that eats the chicken is going to be happier than the future me that eats the fish. That’s it. 

It’s just a prediction of the future. The better a view into the future we can construct, the better off we’re going to be. The first step is to not think that only one thing can “result.” That’s a completely inaccurate and overly narrow view of the future.

Sean Murray  23:40  

There is one thing that really struck me about this framework. A light bulb sort of went off. You advise people to think about the reasonable outcomes. I think something that derails people and has derailed me before is that we sort of get stuck there by saying, “How can I predict?” There’s so many things that could happen in a football game.

I mean, it could be a play to the left. It could be a play to the right. But what you do in that framework is say, “Well, really, there are three big potential outcomes.” Focus on those. Make some reasonable guesses about what we think the probabilities are. 

In football, you can go back and look at a base rate. But in a lot of things in life, we sort of have to then put some kind of probability on that. We tend to not want to do that because it’s a guess. We don’t want to be wrong. 

We think, “I want to pick the right one.” And what you say in the book is that everything’s a guess. There are only educated guesses. We’re never going to be exactly right. We can just put something out there. We’ll be better off if we identify the reasonable outcomes, and then assign some probabilities to them.

I love the way that you tapped into Mauboussin’s work on using natural language to identify percentages to help us do that.

Annie Duke  24:53  

I think what happens is that when you’re thinking about the reasonable set of outcomes you can consider, sometimes they can be broad scenarios. I talked about that in the book. But just for simplicity of the conversation: there are certain things that you value in the decision, and you’ve identified what those are. 

Let’s say you’re thinking about hiring a particular candidate for a position. You have a really bad turnover problem at your company. All things being equal, if the candidate has met a certain bar in terms of qualifications, what you really care about is retention. I’m just trying to simplify. 

Let’s say that you care whether a person is going to be with you for between 0 and 6 months, between 6 and 18 months, or beyond 18 months. You just sort of define those bins. Obviously, the question for any candidate you’re considering is, “What’s the probability that they’ll be here for each of those periods of time?”

What I hear a lot from people when I ask them to do that is, “Well, I don’t know.” In a sense, what they’re saying is, “I’ll just be guessing, because I’m not a time traveler. I don’t have perfect information. I can’t give you an exact number. And if I can’t give you an exact number, I feel like the answer is going to be wrong.” 

What I try to point out to people is, “Well, you’re doing it anyway.” This is the thing that people need to understand. These kinds of probabilistic forecasts are implicit in your decision making anyway. If you choose Candidate A over Candidate B, and what you care about, your value, is retention, you’re really implying by that choice that you think Candidate A is going to have better retention than Candidate B.

Obviously, that’s probabilistic. The more we can make explicit the stuff that you’re doing implicitly in your decision process, the better off you’re going to be. You can now examine it. We can Google our decision making, go back, and not worry so much about where we went wrong. 

But think more about how we calibrate better. How do we create better feedback to be able to calibrate? And let me say that when you say you’re guessing, you’re not really guessing. There’s almost nothing that you know literally zero about.

I’ll give you an example, Sean. Obviously, we’re on Zoom. You can’t see the surface that my computer is sitting on. You have no way to see it. It’s sitting on a piece of furniture. That’s what I’ll tell you. That’s the only thing I’m telling you. It’s sitting on a piece of furniture. What do you think the weight of the piece of furniture is?

Sean Murray  27:23  

I’m just going to guess 50 pounds.

Annie Duke  27:26  

Okay. What would you say the lower bound is? What’s the smallest it could be?

Sean Murray  27:31  

15 pounds.

Annie Duke  27:33  

Okay. What’s the most that it could weigh?

Sean Murray  27:36  

Also, 300 pounds on the upper side.

Annie Duke  27:37  

Okay, 300 pounds. Great.

Here’s what I just learned. You know a lot about furniture because you didn’t tell me that the thing that it’s sitting on weighs a pound. You didn’t tell me that the thing that it’s sitting on weighs 10,000 pounds. From 0 to infinity pounds, which were the possible answers, you actually gave me quite a narrow range from 15 to 300. 

You gave me a point estimate, which was 50. When I think about that range, what that tells me is that you have more certainty around the low end than you do around the high end. You’re allowing that there’s some sort of outliers in there. You even told me what the shape of the distribution was, which is pretty good. You told me something about your certainty. 

If you could see the thing that it was sitting on and knew what material it was made out of, your range would have been a lot narrower. In other words, if you had more information, you would have been able to narrow that range a lot. But even just telling me that it’s between 15 and 300, you’ve narrowed it down a lot. 

You’ve also told me where you’re lacking knowledge, and you’ve expressed that really well. Isn’t that amazing? Basically, what you did was make an educated guess that also told me how educated the guess was. 

First of all, that’s really good for your process. You’re not just saying, “I can’t come up with the right answer, so I’m not going to say explicitly what I’m doing implicitly in my decision.” Now I could say, “Hey, I need to move this piece of furniture. Can you send some people over to move it?” Let’s say that’s all I told you, and you need to make a decision about how many people to send over.

You don’t need to send 10. I mean, these things really matter. It’s implicit in the decisions you already make that you’re making these types of estimates. We should make them explicit.

Having you do that does two things. One is that it really helps you, not just in terms of your view of the future, but it makes you start to think about, “Well, what do I know that would help me make this guess better?” and “What could I find out?”

When you talk about the base rate in terms of football, the reason why I know it was between 1% and 2% is because I was trying to make an estimate of the probability of that branch of the tree, so I went and looked it up. When you start to make these things explicit, it gets you to start going and looking things up, which is always better.

The thing we have problems with in our decision making is that we lack a lot of knowledge. The more we can fill in those gaps for ourselves, the better off we are.

The second thing that it does really well: if you and I are working on a decision together and you tell me that you think it’s between 15 and 300 pounds, implied in that is a question, “Hey, Annie, do you know something that could help me figure out where it sits in this range, or whether my range is right?”

And now, without even knowing it, you’re now allowing me to reveal some knowledge to you because you’re asking me a question. I can tell you, “Well, the desk is 6 feet long. It’s 3.5 feet wide. It’s made out of some sort of solid wood.” 

This helps you. Now you know that it doesn’t weigh 15 pounds. When you say it’s between 15 and 300 pounds, it’s a little bit like, “Help me.” That’s all really good. What that means is that you’re going to start to refine and hone what you think the future holds. 

And then, because you’ve actually bothered to make the estimate, you now have something that you can go look back on. You can start to get enough results to say something about it, such as “How good are my guesses?” or “How well calibrated am I?” that are going to help you. 
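Annie’s point about looking back at your estimates to check calibration can be sketched in a few lines of Python. This is only an editor’s illustration, not a tool from the book; the example guesses and the idea of targeting a particular hit rate are assumptions for the sake of the sketch.

```python
# Minimal calibration tracker: log each estimate as a (low, high) range
# plus the actual outcome once known, then check how often the truth
# landed inside your range. If you intend your ranges to capture the
# truth, say, 90% of the time, a much lower hit rate suggests
# overconfidence; a much higher one suggests your ranges are too wide.

def hit_rate(estimates):
    """estimates: list of (low, high, actual) tuples."""
    hits = sum(1 for low, high, actual in estimates if low <= actual <= high)
    return hits / len(estimates)

guesses = [
    (15, 300, 180),   # furniture weight in pounds: truth fell inside
    (10, 20, 25),     # commute minutes: truth fell outside
    (0.3, 0.7, 0.5),  # a probability-style estimate: inside
]
print(hit_rate(guesses))  # 2 of 3 ranges contained the truth
```

The value of the exercise is only unlocked by actually recording the estimates, which is exactly the habit described above.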

One of the things that I’d recommend in the book, in terms of just kind of getting started and being comfortable with these kinds of probabilistic guesses, is to just start thinking about natural language terms. 

Natural language has a pretty broad target area. Just start there and say, “I think it’s more likely that this person is going to be here over 12 months than less than 12 months.” That’s a good starter kit for getting you there.

Sean Murray  31:37  

One of my favorite pages in the book is “The Table” from Andrew and Michael Mauboussin’s research. It includes natural language terms that imply probabilities, and the general consensus for translating those terms into actual probabilities.

The terms like “always, certainly, almost certainly, with high probability, rarely, and never” are terms we use in finance quite often. The exercise that you recommend is to personally go through and identify for ourselves the probability we associate with those terms. 

When we’re thinking about the decision multiverse and we look at an outcome, a term pops out, like “that’s highly likely to happen.” We can put a probability on it. When we use the term “highly likely,” what does that mean?

It also allows us to have a conversation with someone else. When we ask someone their view of the likelihood of something happening, it’s good to know that their number, the actual percentage they associate with “highly likely,” may not be the same as ours.

We may think “highly likely” means an 85% chance. Someone else’s “highly likely” might be 75% or 60%. There’s a pretty big difference there. I think there’s a lot of usefulness to this tool and this exercise.

Annie Duke  32:59  

When I suggest things like “highly likely” as a starter, it’s a way to get used to thinking about different branches with different probabilities. It’s got less friction to it, because it doesn’t feel like you’re going to be as wrong.

But to your point, here’s the issue: if I say 2+2 is a small number, that’s similar to saying something is “highly likely.” It sort of feels like it has precision, but only so much.

I’m technically correct if I say 2+2 is a small number. The problem is that I can only get so much better at math if that’s the way that I talk. It’s better for me to say “2+2 is 6.” After all, 6 is also a small number. It’s better for you to hear that, because you may think that 2+2 is 4. We can have a conversation about it, and it’s hopefully going to help us become better informed.

We want to speak with precision, so that we can uncover a little bit better the places where our target isn’t quite calibrated. We don’t quite have the right target. And when we use natural language terms to describe things that do have more precise meanings, we have sort of the illusion of being precise without the actual precision. 

It’s kind of on purpose. There are two reasons we tend toward that “2+2 is a small number” kind of language. The first is that it’s harder to be wrong.

As you said, people are just afraid of being wrong. That’s obviously trading the short term for the long term. If I say 2+2 is 5, I might be sad because I’m wrong. But then you help me, and I’m right forever after. That’s important, because it improves my decisions going forward.

Otherwise, you lose the opportunity. You gain the ability to be “right” more often, because it’s harder to tell exactly what you mean, but what you’re giving up in exchange is the ability to improve your knowledge at a much more rapid rate, which would improve your decision making going forward. I don’t think that’s a particularly good trade.

The other issue, though, is that people like to agree with each other. You know this on teams. Part of what makes people feel like team players is agreeing: “Sean and I are on the same page.” What does “on the same page” mean? It means that we see the world the same way, and we agree.

When you use these terms that have relatively big target areas, like the range of what they could mean is quite wide, that means that now we’re going to feel like we agree a lot more than probably we actually do. 

As you said, in the book I have an exercise that I really recommend everybody do with the people they interact with on a daily basis, people they think they’re communicating with precisely.

It’s taking these terms that Andrew and Michael Mauboussin put together, words such as “real possibility, always, never, certainly.” For each one, write down: if you say that, how many times out of 100 do you think it’s supposed to happen?

If I say it’s a real possibility of rain tomorrow, what do I mean by that? How many times out of 100? What percentage of the time is that going to occur? I can write that down, and then there’s a form in the book. You can go ask 3 other people about their responses to it. 

What you’re going to find is that you don’t agree on anything. This includes what “always” and “never” means. “Always” and “never” have about a 5% spread. “Always” is between 95% and 100% of the time. “Never” is between 5% and 0% of the time. 

“Certainly,” interestingly enough, is 90% to 100% of the time. If you look up “certainly,” it means always. You can see that even for the terms where we really feel like we obviously know what they mean, it turns out that we may not mean the same thing.

But here’s where we really get into a problem. Take a term like “real possibility.” I’ve done this with a lot of groups. If I say “real possibility,” and you think it’s a real possibility, it feels like we’ve agreed. If I say, “It’s a real possibility it’s going to rain tomorrow,” and you’re like, “Yes, I agree,” we’re like, “Yay, we’re on the same team.”

Here’s the thing. The largest spread that I ever got with a group was 16% to 81%. And it wasn’t like everyone in the group was at 80% with one weirdo who said “real possibility” meant 16%.

It’s actually relatively even spread across that whole range. That’s what you see in Andrew and Michael Mauboussin’s surveys. It ranges from about 20% to 80% with a very slight skew toward the upper end. 

How can we even know whether we’re on the same page here? If it could mean anything from 20% to 80%, why are you even speaking to each other? You’re literally not going to help each other in the decision.

Phil Tetlock tells a really wonderful story in “Expert Political Judgment.” He might also use it in “Superforecasting”; I’m not sure. I think it was the Bay of Pigs invasion, when a general said something like, “I think there’s a fair chance it’s going to succeed.” It was some term like that.

What the general meant was 25%. Kennedy heard 75%. These things have real world consequences. We need to be more precise in our language. 

I know that people are afraid of using probabilities, and I get it. Everybody’s more comfortable saying “real possibility,” “certainly,” and those kinds of things.

A nice thing about the exercise in the book is that it gets you to create a list, so that you actually have a decoder ring for yourself. When you’re thinking “real possibility,” you can look back at your list and substitute in: “I actually think that means 56% of the time.” “Okay, good. Now we know.”
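The decoder-ring idea can be sketched in a few lines of Python. The specific numbers below are invented for illustration (the 56% echoes Annie’s example, but Sean’s values are hypothetical), and this is an editor’s sketch rather than the form in the book.

```python
# Two people's personal translations of the same vague terms into
# probabilities. Comparing the "decoder rings" surfaces hidden
# disagreement that the shared words were papering over.
annie = {"real possibility": 0.56, "certainly": 0.95, "rarely": 0.10}
sean = {"real possibility": 0.20, "certainly": 0.98, "rarely": 0.05}

def dispersion(a, b):
    """Absolute gap, per shared term, between two decoder rings."""
    return {term: abs(a[term] - b[term]) for term in a.keys() & b.keys()}

gaps = dispersion(annie, sean)
worst = max(gaps, key=gaps.get)
# "real possibility" shows the widest gap here (0.36), exactly the kind
# of disagreement the exercise is designed to expose.
print(worst, round(gaps[worst], 2))
```

The point is not the arithmetic but the conversation it forces: the terms with the largest gaps are the ones worth talking through.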

Sean Murray  38:28  

If you think about investing, you could call up a colleague that you really trust and maybe walk through a few facts about an investment saying, “Hey, I think there’s a real possibility that this could work out.” They could say, “Yeah, Sean. It’s a real possibility.” 

It would feel like more confirmation that I’m making a great decision, when in fact we have two very different views of “real possibility.”

Annie Duke  38:48  

I think the key thing is that we shouldn’t be afraid to find out that you may think it’s 20% to work out, while I think it’s 60% to work out. That’s incredibly valuable information. Again, as I said, the big problem with our decisions, and this is the real exploration of this book, is that we just have imperfect information.

The stuff we know sits on the head of a pin, and the stuff we don’t know is like the size of the universe. We have to start pulling things out of that universe of things that we don’t know into the universe of stuff that we do know.

We need to do two things. One is we have to learn new stuff. The other is that we have to correct the inaccuracies, all of the “2+2=6” beliefs. We’ve got to get those corrected, because they’re inputs into our decisions.

Obviously, if you input junk into a decision, what you’re going to get out, no matter how good your process is, is junk. This is where I come back to it: if I say “real possibility,” I have to confront corrective information less often, for sure. Maybe that’s a good short-run play for me.

It’s a terrible long-run play. I would prefer to find out that I think it’s 56% while you think it’s 20%. We get to have a conversation about it, which is super exciting. Through that conversation, where we discover there’s some dispersion between our opinions, we both get to learn.

Sean Murray  40:06  

Why do you say we both get to learn? Explain that.

Annie Duke  40:10  

Why do I say we both get to learn? There are three possibilities for what’s going on. One, which is the most likely if we’re equally well informed, is that the answer is probably somewhere in between the two of us. That’s cool, because we both get to moderate our opinions.

But it could be that Sean’s right and I’m wrong. In that case, it’s super obvious why I might benefit from that conversation. But what people don’t realize is that Sean is going to really benefit from that conversation as well. Why? Because I’m going to ask you to explain to me why it’s 20% when I believe something different. And having to explain it is going to clarify your thoughts.

You’re more likely to find out where you can’t really explain, or don’t really know, why you believe something is true. The example I give in the book is, “I believe the earth is round. I’m right.” If I’m talking to someone who believes the earth is flat, they’re wrong. That doesn’t mean I would be good at explaining it to them.

That’s a belief that sort of lives within me, kind of like a meme. Daniel Dennett talks about this: there are beliefs that use you as a host. I would say “the earth is round” is one of those for me. Why is it more like a meme for me? Because my explanation to a flat-earther would be things like, “I saw the pictures.” That’s a terrible explanation. Or maybe, “Scientists say so.”

It turns out that I actually don’t possess that belief very well in a sense that I don’t really know why it’s true. If I then have to tell somebody why I believe that the earth is round and not flat, I need to go figure some stuff out. I’m going to look and see what that laser experiment is and where you shoot the thing. I’m going to learn some stuff that’s going to be able to help me. I’m going to learn something about how you tell if a picture’s doctored or not.

Sean Murray  41:50  

There’s a great YouTube video by Carl Sagan, where he explains it through the shadows cast by obelisks. The Egyptians actually figured that out.

Annie Duke  41:58  

I would go look at this Carl Sagan thing now for myself. Where I come out at the end of that process of having watched the Carl Sagan video is that I now actually know my belief better. 

Even in the case where you totally have the right belief and wouldn’t moderate it at all, you’re going to know your belief better. It’s going to become less meme-like and more possession-like, which is a very good thing for you. The other thing about that is you’re asking someone who’s outside of your purview.

Sean Murray  42:27  

You talked about the “inside view versus the outside view” in your book. If you talk to someone who is not as close to the problem or not as emotionally tied up in it or isn’t putting their own money at risk in the investment for whatever reason, they are outside of the decision process.

You’re going to get something that’s often less subjective, that looks at different facts, and that can give you a different sense of the issues and probabilities. You can learn a lot by bringing the outside view in, and base rates are a way to do that.

Annie Duke  43:00  

Let’s think about the difference between the inside and the outside view. The inside view of the world from our own perspective is informed by our own beliefs and the facts that we happen to know. That’s where all the cognitive bias lives. Obviously, like for confirmation bias, I’m trying to confirm my beliefs.

Sean Murray  43:16  

Yeah, availability bias.

Annie Duke  43:18  

Right. Things that happened recently in my life. These are all inside view problems. Motivated reasoning is the particularly big, problematic one; that’s where it lives. When I’m processing information, my beliefs have sort of put me down inside a trench that I dug with those beliefs, like my models of the world.

When information comes in, we have this intuition that I look at the information in an objective way, and I either alter my model or I don’t. I look at it objectively, and then decide what to do about it: should I change my model? That’s not what happens at all.

You bring information right down into that trench with you. And you go, “Let’s make sure this fits.” Either I sort of massage the information and tell a good narrative about it that strengthens my belief, or I tell a really good narrative about why I should reject the information. Those are the two things that happen. That’s obviously how you can see that’s really an inside view problem. You’re sort of processing through your own mental models and your own beliefs. 

The outside view we can think about as two things. One is: what is true of the world independent of anybody’s beliefs? The other is: how might other people view the situation that you’re in? Base rates fit into that first category, what’s true of the world independent of anybody’s beliefs.

No matter what somebody believes, some percentage of 70-year-old men die of a heart attack every year. It doesn’t matter what anybody believes; that’s what happens. Likewise, there’s some percentage of days in June, in the area where you live, on which it rains.

That’s why base rates are so helpful. You really want to incorporate them into your process as the starting point. That disciplines you to see what’s true of the world in general before you start to think about your own perspective on it. It creates a much more objective anchor that’s going to help you with bias.
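One common way to formalize “start from the base rate, then adjust for your inside view” is a Bayesian update in odds form. The numbers and the simple likelihood-ratio framing below are an editor’s illustration, not a method prescribed in the book.

```python
# Start from an outside-view base rate, then update it with
# inside-view evidence via Bayes' rule in odds form.
def update(base_rate, likelihood_ratio):
    """Posterior probability from a prior and an evidence strength.
    likelihood_ratio > 1 means the evidence favors the event."""
    prior_odds = base_rate / (1 - base_rate)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Hypothetical: ~10% of ventures in this category succeed, and your
# inside-view evidence is moderately favorable (likelihood ratio 3).
p = update(0.10, 3.0)
print(round(p, 3))  # 0.25: better than the base rate, far from certain
```

Anchoring on the base rate first is what keeps the favorable inside-view story from carrying you straight to “this one is special.”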

The second piece is actually super interesting. Other people can view your situation. In fact, they can view the exact same data that you’re looking at and come to very different conclusions. That’s why you have value and growth investors. They’re all looking at the same data, but they have very different ideas about how you’re supposed to model it, what a successful investment strategy is, what your portfolio should look like, and the choices you make. Everybody’s looking at the same information.

Sean Murray  45:34  

Is that “probabilities of outcomes?”

Annie Duke  45:37  

Exactly. We know that data exists outside of us, but our interpretation doesn’t exist outside of us. It takes someone to model the data. It takes someone to collect it. That includes the way that you collect it and the questions you ask. 

By the way, the questions that you ask are not for nothing. You’ll see political polls that say: “Do you think the candidate is great or awesome?”

Sean Murray  45:56  

Yeah. What are we learning from that exactly?

Annie Duke  46:00  

What we want to do is say, “My model of the world is probably not perfect, and neither is Sean’s. But it would be really good if our models of the world collided. If I could understand the way Sean views the situation I’m looking at, then we could look for wherever there’s dispersion between the two of us.”

The areas of agreement are kind of boring. If we both agree that the earth is round, so what? What we want to do is find out where the dispersion is in the way that we’re viewing the exact same situation. We can explore those differences. That’s generally going to get us to a more accurate model of the world. 

What I’m really doing is allowing the inside and the outside view to collide. The outside view is generally going to discipline the inside view. One of the examples I give in the book is when you go to a wedding. This is a thought experiment only; I’ve never actually run it.

You go to a wedding. The couple has just gotten married, and you say, “Hey, what percentage of the time do you think you’re going to end up divorced in 10 years?” You get kicked out of the wedding, because what they say is, “0% of the time. Our love is special.” That’s the inside view. “No, we’re special. We’re different than every other person on earth. Obviously, we love each other, and it will never stop.” 

You crash the wedding next door in the banquet hall or whatever, and you don’t make the same mistake this time. But now, you see the newly married couple, and you say, “Hey, how often do you think the couple in the next room is going to get divorced?” And you know what they say? “It’s 50% of the time.” 

Now, why are they able to do that? That’s because they can see that situation from the outside view. They’re not thinking about their own perspective or the things that they would like to be true of their own union. That’s why we want to be able to get to the outside view, so that we can get that more sort of disciplined approach, and then maybe more people would have prenups.

Sean Murray  47:41  

I think that’s a great thought experiment to keep in mind. In finance, you always want to remind yourself that when you’re getting into a research project around a financial decision, you’re often that couple getting married. You’re in it. If you’re not careful, your overconfidence is going to lead you to think, “Well, this is just going to work out.”

That’s where base rates in finance come in. Say you’re investing in a company, and you’re predicting that sales and profits are going to grow at 25% a year for the next 10 years. There are very few companies where that’s happened.

In closing, I wanted to talk about one more aspect of decision making. When do you stop? When do you close the decision making process? You mentioned something that I really am going to take to heart: a question you suggest we ask as we get toward the end and think, “Okay, this decision has to be made.” That is, “Is there information that I could find out that would change my mind?” I think that’s a great question to ask.

Annie Duke  48:44  

There’s opportunity cost to waiting. We can think about this as a time-accuracy trade-off. The more time you spend gathering information, theoretically, the more accurate your decision is getting. Not always, but on average those things are going to be correlated.

Obviously, every moment that lapses is time, and there’s opportunity cost to that time. Options can expire, both in the literal financial-instrument sense and in the decision sense.

One of the things that I think about, for people who are considering getting married (I think this is a really good way to ground this outside of finance): if you’re thinking about getting married, and you know that you would like to have children at an age where you feel you’re really going to be able to be active and enjoy them, you can continue to gather more information. You may not actually exercise any options; you keep dating and building really good models of the people you would like to date.

At some point, the option to have children by a certain age is just going to expire, because you’re going to be too old. We can think about decisions that way. We don’t want these things expiring. I try to get into a sweet spot, recognizing that there really is this time-accuracy trade-off.

One of the things that I think causes us to actually handle that trade off pretty poorly is that we don’t think about options as relative to each other. We think about options in the absolute. Again, we have these two sources of uncertainty that make it hard to know how something is going to turn out. 

One of them is luck. Gathering more information doesn’t help us with the luck piece, because we can’t really do much about luck. We also have hidden information. When we’re modeling these things out, there are subjective judgments. We’re all going to have some level of certainty around any given option.

What happens is that, let’s say that we’re looking at an option and that we’re about 60% on it. Naturally, it sort of feels like, “Oh, I don’t want to make a decision because I’m only 60% on it. I need to go get a whole bunch of other information. Maybe I could get to 90%.” 

But we have to remember that gaining that extra 30% is going to cost you quite a bit of time. That is time in which you’re not using that information gathering to explore other things you could be doing that might have a positive expectancy. And the option may expire. All of these issues come with using that time.

What we actually care about is not so much the 60% versus 90% on the single option. We care about how that option compares to the other options we have, and that includes thinking about not doing anything, which has costs. We want to think about the status quo of not doing anything and treat it as a decision. How sure are we that that’s a good choice?

Very often, when you explore that, you find out it’s not such a good one. We don’t actually think about it as a separate option. We sort of want to think about it as if we’re making an active choice to take that path that we’re already on.

What you’ll find with that 60% choice, the one where you really have the urge to get to 90%, is that it turns out to be way better than all your other options, which you might think have only a 20% chance of working out. It could even just be your certainty around it being the best choice.

“Oh, I’m 20% that this one is the best choice, regardless of the probability of getting a good result. Given my options, I’m 20% on these ones, and 60% on this one.” You’ll notice that once you start thinking about it relative to the other options you have, including what we would think of as the default option, it becomes pretty clear: “I should choose this one.”
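The shift from absolute confidence to relative comparison can be sketched in a few lines of Python. The options and probabilities below are invented for illustration; this is an editor’s sketch of the idea, not a procedure from the book.

```python
# Compare options by how they rank against each other, with the status
# quo included as an explicit option, instead of fixating on whether
# the leading option clears some absolute bar like 90%.
options = {
    "take the new job": 0.60,
    "stay put (status quo)": 0.20,
    "start a company": 0.20,
}

ranked = sorted(options.items(), key=lambda kv: kv[1], reverse=True)
best, runner_up = ranked[0], ranked[1]
margin = best[1] - runner_up[1]
print(best[0], margin)  # the leader and its edge over the next option
# A large edge suggests more research mostly buys delay, not a flip.
```

With a 40-point gap between the leader and the rest, pushing the leader from 60% to 90% certainty rarely changes which option you should pick.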

First of all, that’s going to help you speed up. You’re going to be less likely to think about it in the absolute sense of getting from 60% to 90%. The other thing you can do, once you’ve settled on “this option seems superior relative to the other options I have,” is ask yourself, “Is there some information I could find out, absent having a time machine, that would actually flip this and cause another option to take over and win?”

You’d be surprised. The answer is almost always no. Once you’ve gotten to that point in the analysis, you’ve usually done the heavy lifting that creates the separation between the options you’re considering.

Here’s the great thing. If the answer is yes, there are two things you can do. One is to ask, “Can I afford it?” Sometimes the answer’s no, because the option will go away. But if you can afford it, go find that information.

Sean Murray  53:17  

You make a really good point: you’re going to be more likely to be open-minded, after you’ve asked that question, when you go out and try to find that information than you otherwise would have been. You’re actively thinking, “What could potentially derail me from making this decision or change my mind?”

Annie Duke  53:33  

It helps you to sort of see these signals that occur in the world. You may say, “Yeah, well, if I found that information that would be good, but I don’t have time to find it.” Now, you exercise whatever option it is. You pick the thing that you pick.

But now, when that information happens to appear on the horizon, you know you should probably be paying attention to it. What that allows you to do is make you more likely to climb out of your trench when that information comes along. You’ll view it in a more objective way, because you thought about it in advance as a potentially important signal out in the world.

Sean Murray  54:06  

When people exercise their option to find out more about you, how can they do that?

Annie Duke  54:12  

Well, you can contact me, and I really wish people would do it. I will say that the things I write about, a lot of them come out of conversations I have with people who’ve read my other work.

I would say that that’s really true of this particular book. It really came out of cool conversations I was having with people who’d read “Thinking in Bets”. You can also find me at @annieduke on Twitter. The other thing I would love is if people could go check out the Alliance for Decision Education. That would really warm my heart.

Sean Murray  54:44  

I’ll put links up in our show notes to the Alliance for Decision Education, which I want to personally learn more about. I hope we can get that going in my children’s school district. I’m fully on board with more decision making in the curriculum.

Annie, this has been a wonderful conversation. Thanks for being on The Good Life.

Annie Duke  55:04  

Thank you for having me.

Outro  55:06  

Thank you for listening to TIP. To access our show notes, courses, or forums, visit our website. This show is for entertainment purposes only. Before making any decisions, consult a professional.

This show is copyrighted by The Investor’s Podcast Network. Written permission must be granted before syndication or rebroadcasting.

