24 August 2020

On today’s episode, we examine one of the critical skills to achieving the good life: decision making. I’ve invited three experts on decision making: Annie Duke, the author of Thinking in Bets; Jake Taylor, a value investor and the author of The Rebel Allocator; and Brent Snow, who teaches decision making to executives in corporate America. We have a wide-ranging discussion on decision making, what prevents us from making good decisions, and how we can improve.

We cover a topic Annie calls “resulting,” which is related to the idea that the quality of the outcome doesn’t tell you everything about the quality of the decision. We talk about wicked versus kind feedback environments. We discuss the role that luck and hidden information play in decision making, why decision pre-work is better than a decision journal, and many more topics.



  • What prevents us from making good decisions
  • Why the quality of the outcome doesn’t tell you everything about the quality of the decision
  • The role that luck and hidden information play in decision making and decision feedback
  • The paradox of experience and learning
  • How kind and wicked learning environments shape our ability to learn from experience
  • Why the “interpreter” part of our brain is important but can lead us astray
  • How to avoid consensus overload when making decisions in organizations
  • Why decision pre-work is better than a journal


Help us reach new listeners by leaving us a rating and review! It takes less than 30 seconds and really helps our show grow, which allows us to bring on even better guests for you all! Thank you – we really appreciate it!







Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present due to platform differences.

Sean Murray  0:03  

Welcome to The Good Life. I’m your host, Sean Murray. 

Today, we examine one of the critical skills to achieving the good life. And that is decision making. We’re going to explore this topic in a new format. I’ve invited three experts on decision making: Annie Duke, the author of “Thinking in Bets;” Jake Taylor, a value investor and the author of “The Rebel Allocator,” and a previous guest on The Good Life; [and] Brent Snow, who teaches decision making to executives in corporate America. 

We’re calling this a decision mastermind group, and it’s a lot of fun. We cover a concept Annie calls “resulting.” It’s related to the idea that the quality of the outcome doesn’t tell you everything about the quality of the decision. 

We talk about wicked and kind feedback environments. We discuss the role that luck and hidden information play in decision making. And why decision pre-work is better than a decision journal. And so many more topics. Stick around to the end when we discuss topics for future mastermind discussions. You don’t want to miss this one. I hope you enjoy this mastermind discussion as much as I did. So let’s get started.

Intro  1:14  

You’re listening to The Good Life by The Investor’s Podcast Network, where we explore the ideas, principles and values that help you live a meaningful, purposeful life. Join your host, Sean Murray, on a journey for the life well-lived.

Sean Murray  1:38  

Welcome to The Good Life and to the Decision-making Mastermind Group. The purpose of this episode is to bring together a panel to explore decision-making and how we can get better. There’s a great quote from Ray Dalio in his book, “Principles.” He says, “The quality of our lives depends on the quality of our decisions.” I think he’s spot on. As you know, the ethos and the mission of The Good Life is to help us all get the most out of life, and to help us attain the life well-lived. 

One component of that, maybe the biggest component of that, is decision-making. It’s a worthy pursuit to spend our time examining our decision-making, learning from others, and striving to continuously improve how we make decisions. 

In the spirit of that, I brought together three friends of the show who have thought and written deeply about decision-making. Joining us today is Annie Duke, a former professional poker player and the author of “Thinking in Bets.” She also has a new book coming out this fall called “How to Decide.” Annie, thanks for being here.

Annie Duke  2:37  

Thanks for having me.

Sean Murray  2:39  

Jake Taylor is an investor, and he runs Farnam Street Investments. He’s been a guest on the podcast before talking about his novel, “The Rebel Allocator,” which has a theme of decision-making as well. So Jake, thanks for being here. 

Jake Taylor  2:54  

Thank you for having me. 

Sean Murray  2:55  

Brent Snow is the founder of Ten Thousand Feet, a company that develops learning experiences and teaches leadership and decision-making to organizations. Brent, thanks for joining.

Brent Snow  3:05  

Thanks, Sean.

Sean Murray  3:06  

Well, Annie, I thought I’d start with a question for you. Your book, “Thinking in Bets,” was a wonderful book, and I learned a lot from reading it. One of the concepts you talk about is something you call “resulting,” which I had not heard of before. It ended up having a big impact on how I think about decision-making. So I thought we’d start with just that particular aspect of decision-making.


Annie Duke  3:30  

I’m going to offer you sort of a paradox that I call, “The Paradox of Experience.” What is that? In order to become a better decision-maker, you need experiences in your life. You need to have made decisions and see how they turn out, get feedback from that, and kind of iterate. Experience is necessary for becoming a better decision-maker. 

But the paradox is that any individual experience that you might have can actually really interfere with learning. And the reason for that is that for most decisions that we make, the quality of the outcome is not correlated at one with the quality of the decision that preceded it. And there are two influences that get in the way. One is luck: when you make a decision, that determines the set of possibilities that could occur and the probabilities at which you might observe those, but it doesn’t determine which one you’re going to see.

I could make a decision that 95% of the time will work out well. 5% of the time, it will work out poorly. And just mathematically speaking, 5% of the time, I’m going to observe the poor outcome. I have no control over when that will happen. I just know it will happen 5% of the time. So that’s the luck element. And then there’s also this influence of hidden information. I don’t generally make decisions with all the information I would need, in order to perfectly determine what that set of possibilities is, and what the probabilities of those things are. 

On the hidden information piece, there’s often just information that isn’t available to us, or would cost too much, or that we’re not even aware we would need. And in those cases where it’s not available, if that information would have an influence on the outcome, it wouldn’t mean anything about our decision quality. The information just wasn’t available to us. 

So we have these two things kind of mucking up the feedback. Essentially, what happens, particularly because of the luck element, is that if we think about the four ways in which decision quality and outcome quality can relate to each other, all four are possible. I can make a good decision and have a good outcome: I run through a green light and everything’s fine. Or I could have a good decision that has a bad outcome: I could go through that same green light and someone coming the other way in the intersection could hit me. 

I could make a poor decision and have a perfectly good outcome. I’ve run red lights before by mistake and gotten through the intersection unscathed. Or I could make a poor decision and actually have a bad outcome. So we can call those “earned rewards” for “good, good,” and “just deserts” for “bad, bad.” Then “dumb luck” would be a bad decision that turns out well, and “bad luck” would be a good decision that turns out poorly. So we can think about all those relationships. 

The problem for us as decision-makers is that when we see that something happened, good or bad, we’re trying to work backwards from the quality of that outcome in order to figure out something about the quality of the decision. And that’s really hard when you only have one iteration, and only one thing has happened. For most things, you would need a lot of time to play out and a lot of instances of that particular decision being made in order to work backwards. 

And this just mucks up our decision-making because we act like these things are really correlated. When something bad happens, we think about it as a bad decision when we’re sort of evaluating somebody else. So as you know, I opened the book “Thinking in Bets” with Pete Carroll choosing a pass play on the last play of the Super Bowl. And, of course, that gets intercepted, and everybody declares him an idiot. But it takes two seconds to do the thought experiment: if the ball had been caught, then obviously he would have been hailed as a genius. And that’s really where the paradox of experience comes in. 

Sean Murray  7:28  

Well, that story really hurt, being a Seattle Seahawks fan. It was really visceral because I remember the play. Obviously, everyone in Seattle does. I also remember my reaction, and the reaction of most of the fans in Seattle, which was exactly like you said: “Wow, that was a boneheaded play. It was a stupid decision.” There’s even a quote, I think “worst decision ever in the Super Bowl” was one of the headlines that you pointed out. I was sort of going along with that narrative until you brought up the second half of that story. 

If you really break down that decision, it’s not such a bad decision if you look at everything that was going on. The ball was on the two-yard line, and it was second down. There were a total of 26 seconds left, and there was only one timeout. And not to get into all of the intricacies of football, but there’s a good case, around time management and creating the best possible option to win the game, that that was a good decision. 

Pete Carroll’s quote was so revealing, and you quote it in the book. He said: “It’s not that it was the worst decision in football history, it was the worst decision outcome in the history of the Super Bowl. Potentially, the worst decision outcome.” And I think that tells me that he really gets it as far as decision-making, understanding “resulting,” and decoupling the one-to-one relationship between the outcome and the decision.

Annie Duke  8:53  

The thing is you can see this kind of all over the place. Actually in my new book, I talked about the reaction to the 2016 election and the Clinton campaign’s decision-making around where she was campaigning. We know that she lost Wisconsin, Michigan and Pennsylvania. We know that that’s the outcome. And I noticed after the election that there were just a lot of think pieces and a lot of pundits on television talking about how terrible her decision was to campaign in Florida, North Carolina, Virginia, Arizona and New Hampshire, and not spend all of her time in Pennsylvania, Wisconsin and Michigan. 

When you look at the polling from before then, all of these states looked very, very close, like Florida and New Hampshire. We can look at what information anybody would have had to work with at the time. And it turned out that obviously, Trump won a couple of those by a very, very narrow margin, certainly in the case of Wisconsin. They ended up being sort of relative ties. So there seemed to be a polling error that was going on. 

But of course, the thing about polling errors is that you can’t know about them until after the fact. I thought to myself, well, this is a really good example for crowdsourcing and decision-making, given that there was this huge ability for people to actually examine this decision before the fact, which is different than the Pete Carroll situation. 

With Pete Carroll, everybody’s watching that in real time, and they don’t have time to actually do a lot of analysis on it. But in the case of Clinton, you have months and months and months for pundits, experts, political consultants, and everybody to be writing these pieces, getting on CNN, and writing op-eds to talk about how terrible her strategy was. I just did a little Google search because I was curious. And indeed, thousands of articles come up about how bad her decision quality was. 

The weird thing is that the date of the first one is November 9, and the election was on November 8. Clearly that decision was crowdsourced. Everybody was analyzing the election. Everybody was analyzing the decision-making. And weirdly, no one wrote down these thoughts. 

So that’s clearly resulting: everybody says, “Oh, that was such a bad decision,” but apparently nobody thought so beforehand. But then what’s interesting is that you can add to this wreckfest, in terms of the paradox of experience, with hindsight bias. Which is like, “Oh, no, I knew that.” Because when you read the articles, most of them say “everybody knew it.” Like, “Okay, but if everybody knew it, someone would have written a piece about it.” 

And I think it’s such a good example of how much we get tripped up by this experience. Paradoxically, it’s the only thing that we can use in order to get better at our decision-making. And this really, I think, is the primary problem as we go into figuring out how to make better decisions.

Jake Taylor  11:54  

I’m curious to ask Annie a question: does she have any kind of gut sense, just across a broad swath of domains, of what N you would need? How big of a sample would you need to get a little more comfortable about making a prediction?

Annie Duke  12:10  

The N that you need completely depends on the variance. So you would need to know the standard deviation. You’d also need to know a little something about the shape of the distribution. So in order to be able to tell what the N would need to be, you just need to understand the variability. 

Just as an example: for me to find out who’s better if we play chess, I need an N of one. We can play once. But for poker, for example, say I play eight hours of poker in a live game (note that you get fewer iterations in a live game), and I have a sizable edge, like I’m 5% better than the table. I think after 8 hours, I’m going to be winning somewhere between 54% and 56% of the time. And I think it’s after 1,000 or 1,500 hours that I’ll be winning 99% of the time. I’ll actually have a positive balance sheet. 

That gives you an idea. In poker, you need a lot more time to really have confidence about whether the person actually has an edge or not, and to work backwards from that. So you can think about a 50/50 coin. We can actually work those probabilities out directly: what are the chances of two heads in a row? 25% of the time you’re going to observe that result. Three heads in a row is going to be 12.5% of the time, four is going to be 6.25% of the time, and so on and so forth. It’s not that crazy to think, “Oh, you know, we could end up with 10 heads in a row on a totally fair coin.” So as we start to increase the volatility, you need to increase the N.
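Annie’s coin-flip arithmetic, and her point about how long an edge takes to show up, can be sketched in a few lines of Python. This is a rough illustration, not her actual poker math: the 0.55 win probability is a made-up stand-in for “5% better than the table,” and the simulation just counts how often someone with that edge is actually ahead after N trials.

```python
import random

# Streak probabilities on a fair coin: 0.5 ** k
for k in (2, 3, 4, 10):
    print(f"{k} heads in a row: {0.5 ** k:.4%}")

# Illustrative edge simulation: if you win each independent trial with
# probability 0.55, how often are you actually ahead after N trials?
def prob_ahead(n_trials, win_prob=0.55, runs=2_000, seed=7):
    rng = random.Random(seed)
    ahead = 0
    for _ in range(runs):
        wins = sum(rng.random() < win_prob for _ in range(n_trials))
        if wins * 2 > n_trials:  # strictly more wins than losses
            ahead += 1
    return ahead / runs

for n in (10, 100, 1000):
    print(f"ahead after {n} trials: {prob_ahead(n):.1%}")
```

With only 10 trials, the 5% edge is nearly invisible (you’re ahead roughly half the time); by 1,000 trials it shows up almost always, which is Annie’s point about needing a much larger N in high-variance games.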

Sean Murray  13:43  

When you make investment decisions, how many are you making in a year, Jake? How many times are you actually putting money on the line in investment?

Jake Taylor  13:52  

For me, I’m more of a Warren Buffett style of investor. It’s small. I mean, two good ideas in a year would be okay. I would say Buffett’s is typically less than five. There are often years where there are zero. So the N is very, very small as a sample size for this type of investing that I like to do. But if you’re a day trader, I mean, you could be making hundreds of decisions a day. You can get your N up at a much faster rate.

Sean Murray  14:20  

And you can lose your money pretty fast.

Annie Duke  14:22  

I’m going to give you a kind of contrarian opinion. While you might only make two investments in a year, two things are true. One is you’re still making thousands of decisions. And if you can capture that data, then you can actually become a better decision-maker much more quickly. Everything is kind of like options trading; it’s just that you’re doing more “no’s” than “yes’s.” I hear that a lot, “This is very different because I only make two decisions a year,” and I’m like, “No, that’s not true. You make thousands.” 

If we want to think about the omissions too, and how we are capturing those, what’s nice about the omission is that there’s less risk to you in tracking that data in order to become better. 

The other thing that’s also true is that when you hold an investment, after you hold that investment, you’re making choices about whether to press in that same domain, whether to hedge against it. If there’s liquidity to the investment, you’re making choices about whether you would like to liquidate that investment or not. Those choices are all happening along the way. You would want to be recording those as well. 

The other thing is that when you make an investment, there are implications as to why, that have to do with things that will be true about the world. Some of those will be on a very short time horizon, where you can actually be very mindful beforehand about what your ideas are in terms of what the world will look like. I can then close the loop in a way that will allow me to overcome the paradox of experience, because I’m going to know exactly what I thought. 

I’m going to know what my predictions were. We can think about that like the Clinton thing: the reason I understand there was a problem is because there’s an evidentiary record. When everybody says, “Yeah, no, I knew that beforehand, and so did everybody else,” I can actually go look. The more you can create that record for yourself, the better. I think we get caught up in this idea that we have sort of commission and omission. 

The things that we commit to, which would be the investments we make, loom very large in our cognitive space. And then the time horizon on which that investment would actually settle also looms very large in our cognitive space. We want to try to pull back from that and think about: how can we actually be data collectors? How can we create for ourselves the most coin flips possible that we can actually go back and check on?

Jake Taylor  16:43  

Annie is 100% right. The “resulting” in the investment world is a resulting orgy. I mean, it’s all about returns, and that’s all everyone wants to talk about. And yet over short periods of time, returns are all over the place. It’s incredibly noisy. Researcher Robin Hogarth separates the world into “kind” and “wicked” learning environments. 

A “kind learning environment” is one where feedback is immediate and unambiguous, and tells you whether you did the right thing or not. Think about a little kid who’s learning how to walk: gravity is unrelenting in its immediate feedback of teaching you how to walk. It’s a very “kind” learning environment, despite maybe some scraped knees. 

But the investment world is a very “wicked environment,” where there’s incredible amounts of noise. Sometimes it takes years for you to figure out whether you were right or wrong. It’s incredibly difficult to sort of close that feedback loop that might lead to future intuition. 

The other thing that I would say would be a best practice, based on what Annie just said, is that you can get one year’s worth of results by looking at the returns over one year for an idea. However, you can also be making specific predictions over that same time period about the fundamentals of a business. Philip Tetlock’s work is basically what I’m going to be telling you right now. 

There are three things that make for a good prediction. First, you have to have an unambiguous, clearly defined outcome. I like to think about it like, you’re either pregnant or you’re not. There’s no “a little bit pregnant,” and you want your prediction to also be “pregnant or not pregnant.” Second, you want to assign a probability to it, so you’re thinking probabilistically. That’s where Annie’s idea comes in: when you make a bet about something, that gets people thinking in kind of a probabilistic mindset. 

Third, you have to have a very definite time horizon, because it’s very easy to say something like, “Well, it was going to happen, but it just hasn’t happened yet.” Put those three components together. Let’s say that I would say there’s a 70% chance that Apple’s profit margins will grow by 5% by January 1, 2021. That’s a good prediction. It has all three of the hallmarks, and we’re going to be able to tell whether it’s true or not. 

Now, we can also turn to a statistician turned meteorologist named Glenn Brier. What he came up with was the idea of a “Brier score.” It’s a two-factor scoring system: you have the probability that you assigned to a particular outcome, and then whether it occurred or not. What’s really nice about this is that it gives you both how accurate you are and where your confidence sits on your predictions. 

Most of us tend to be overconfident about how well we think we understand the world. We’re taking a very “wicked learning environment” and making it a little bit kinder. So if you came up with, say, five individual things that would hopefully be key drivers of a particular investment outcome, like profit margin or market share, or a lot of different things that may be correlated with a good outcome, we could actually grow the N at a five-to-one rate, as far as assessing whether we have luck or skill, by taking a snapshot of our predictions as opposed to just purely our results.
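Jake’s Brier score description can be sketched in a few lines of Python. This uses the common mean-squared-error form of the score (Brier’s original 1950 formulation sums over both outcome categories, which simply doubles the number). The forecasts in the list are made up for illustration, in the spirit of the Apple margins example above.

```python
def brier_score(predictions):
    """Mean squared difference between forecast probability and outcome.

    predictions: (probability_assigned, outcome) pairs, where outcome is
    1 if the event happened and 0 if it didn't. Lower is better; always
    guessing 50% scores 0.25, and a perfect forecaster scores 0.
    """
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Hypothetical forecasts: e.g. "70% chance margins grow by Jan 1" -> happened (1)
forecasts = [(0.70, 1), (0.90, 1), (0.60, 0), (0.80, 1), (0.30, 0)]
print(round(brier_score(forecasts), 4))  # 0.118
```

Scoring each of five such drivers per investment per year is how the “five-to-one rate” works: one return number becomes five checkable forecasts.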

Sean Murray  19:57  

Yeah, that’s a great insight. To get alpha, or a return in the market that is better than the market, you’re going to need some kind of insight that’s different from the consensus. That’s what you’re getting at, Jake. You’re testing your ability to gain some kind of insight and being able to bet on that insight.

Jake Taylor  20:17  

Yeah, the other thing I like to think about is that the universe of potential outcomes is incredibly wide. You have, I would say, millions of strands of the future that fuse together in the present to form this rope of history. Different investment decisions will have different potential outcomes across those millions of potential futures, so you can design for different things. Do you want to design for resilience, which would be a good outcome across a bigger swath of the rainbow of outcomes? Or do you want to use optimization, which would be the best average across the millions of potential outcomes? 

But maybe there’s a big subset in there where it’s a really bad outcome. So we’re always making trade-offs between resilience and optimization. I think people will be well-served to remember that, especially in the investment world, where there’s so much noise and so many crazy things that can happen. You have a pandemic breakout that no one saw coming. There are lots of different potential outcomes. I think it’s important that people think these through a little bit more, and not just look at the result of, “Well, here’s the return for the year.” There are so many other ways that could have gone.

Brent Snow  21:33  

There will, of course, be people emerging who say, “I knew this was coming.” There are lots of articles out there already by people saying, “I predicted this.”

Annie Duke  21:43  

I don’t know if you’ve read “Range,” but David Epstein actually did predict it. I mean, Jake, obviously I’m a huge fan of Phil Tetlock’s work, and I completely agree with you. I think what’s interesting is that sometimes it’s not even about unique insights. It’s that you just understand your market a little bit better than everybody else, and you’re able to execute on it more quickly. You’re more agile and flexible in the way that you’re incorporating the information. 

It’s like two people could have the exact same model of the world. And one person when information comes in may adapt their model to the new information. Another person may adapt the information to their model. So they both have the same kind of insight about the world, but one of them is a better updater than the other one, based on new information coming in. 

I think that, Jake, what you’re describing, in terms of making these repeated forecasts that have very, very clear answers as to whether you got it right or not, means you’re getting a large enough N that you can really start to test what your calibration is automatically, because it forces you to think probabilistically in that way, and to hold all these different futures in your mind at once. 

And you also just start thinking, “What would have to be true of the world in order for this set of outcomes to be the ones that I observe, or this set, or this set?” It causes you to be more flexible in your thinking and more open-minded. 

When we have very strong models of the world (you can see this with some models in investing that worked well in certain environments but maybe not in others, like value investing), we’ve kind of dug a cognitive trench for ourselves. It’s hard for people to climb out of that trench, stand up, look around the world, and say, “Okay, do I want to change anything?” 

So what Jake is describing, I think, allows you to create that kind of mindset, where you’re just more likely to be adaptive to the information, insight aside. You know your market better, and you’re more open-minded to the information that will allow you to actually develop even more expertise, like an extra 0.1 bit. Those just accrue over time, and once you accrue enough of those, all of a sudden you’re, like, a bit ahead. 

I’d like to add to that: what you described is like this multiverse. You have to hold this multiverse in your head at all times, and then you’re always on the lookout for, “Which one am I in? Which timeline do I seem to be on here?” You can only do that if you’ve been really attentive to the world. 

Jake Taylor  24:13  

What’s nice is that in the investment world especially, you can get a ton of information on what everyone else thinks based on the price. You can work backwards: what would this business actually have to do to justify today’s price? There’s a ton of information about the assumptions the rest of the marketplace is making when you work backwards from price.

Sean Murray  24:38  

Brent, maybe you could offer a little insight here from the organizational decision-making and the corporate decision-making world on how we can use some of these decision-making tools to improve how we make decisions from an organizational and personal perspective.

Brent Snow  24:54  

There are a couple of thoughts that I have on this. I’ve been listening to this conversation and thinking about how this all applies. One of the things we did as we’ve been teaching decision-making is create a course called “Decision Mojo.” We purposely chose the word “mojo” because mojo is this almost undefinable, yet definable, thing. It’s this extra strength, this extra skill. You know when you’ve lost it. You know when you’ve got it. 

In many ways, it describes that extra little edge that you’re talking about, Annie. That edge. That “something something.” That extra 0.1 bit that adds up over time, in one way or another. And so, as we do this, we’re trying to figure out how to teach leaders, managers, professionals, and organizations on a day-in, day-out basis to be better decision-makers. To make them feel more confident as decision-makers, to be willing to make bets and make decisions. 

And conceptually, it completely makes sense to folks that we should judge the quality of a decision by the process that was used to get to the decision, not by the results that occurred, because there could be any number of intervening variables. Sometimes in the organizational world, results take months. You make a decision in January, and the actual result of that decision, a little bit like what you were talking about, Jake, doesn’t occur until some point down the road in the future. 

And in the meantime, there’s been a lot of noise. There have been a lot of other kinds of things that have intervened and caused it to go one way or the other. It’s hard to get that immediate feedback; oftentimes, by the time the result occurs, you’ve kind of lost the trail on some level. 

People get it conceptually that you should do that. But the challenge is, the folks that they report to, their managers and the senior leaders of the organization, are often like the pundits who were making comments about Pete Carroll’s decision. I often think of the senior leaders as the pundits in that world, who immediately go to results. That’s the shortcut. That’s the way in which they go, “Okay, that was a good decision or a bad decision.” 

People feel like, “It doesn’t really matter what process I use in many cases, because I know I’m going to get judged on results.” And in fact, in organizational life, people say, “We judge you on results.” You get your performance reviews based on the results that you achieved or created. And so, as we work with managers, we realize that the big challenge is actually to help them feel like they’ve got a good process, so that they feel confident that they’re doing quality thinking and bringing in the different perspectives. 

The contrarians who are going to challenge them in one way or another. The people who are willing to tell them when they’re not thinking about something, or bring them disconfirming evidence. They’ve got a good process. They’ve framed the decision in a way that makes sense. They’ve challenged their frame. They’ve used that process well. But the key thing they need to do is be very skillful at communicating that process. 

In some respects, that means getting a meta-decision made by their leaders about the quality of their process. So the leader says, “Okay, we agree. You’re using a good process to get to this decision.” The leaders buy in at an early stage of the process, in such a way that when the result occurs, they recognize it was a good process. In fact, they probably wouldn’t have made any different decision. 

Oftentimes, the leaders are not necessarily privy to that process. They just see the decision in one way or another, or they see the result of the decision. So what we’ll do is say, “Okay, given this particular decision, what’s the process that makes sense for it? And how are you going to get buy-in on the process?” Because at the end of the day, despite your best efforts, you’re likely to have a lot of pundits out there, such as your leaders, who say, “But look at the results.” 

And as much as you’ve written about it, and as much as we’ve all thought about Pete Carroll, it almost takes that kind of breakdown and analysis, looking at the probabilities. Years later, we can do that analysis and say, “Well, look, this ultimately was a really good decision.” But 98% of the world out there isn’t on that same journey and doesn’t have that same perspective. Even if you present them with that analysis, they still fight it one way or another, because the result was not what they wanted.

Annie Duke  29:10  

I gave a talk at a conference. I’ve been opening with the video of that play.

Brent Snow  29:17  

Great way to open.

Annie Duke  29:18  

People like it, particularly because I’ll put up the still and I’m like, “Could someone tell me what situation this is?” “Should have handed it off to Marshawn!” gets yelled out every time. Everybody knows this play. But anyway, so I went through the analysis. When I do that, I don’t get too wonky. It depends. If I’m talking to financial professionals, I’ll talk about the options theory that’s in here. There’s an option for a third play, and what is that? What are you paying for that?

But mostly, I just kind of get the audience to understand that, no matter whether you ultimately think it would have been better for them to hand off or pass first, that’s up for discussion. The idea that this is the worst play in Super Bowl history, which was, as Jake pointed out, or Sean, I guess you pointed out, one of the headlines, is absurd.

I mean, that we should all be able to agree on. So anyway, right after the conference, this guy came up to me and basically told me I was wrong. I kind of walked through it again, and then I said, “And you know Bill Belichick has defended the play, right?” He was interviewed about it. They asked him: “Oh, didn’t you get lucky that Pete Carroll called such a horrible play?” And he said: “Actually, that was an excellent play. I probably would have called that myself.” So Belichick thinks it’s a good play. And the guy went, “That’s all. We’re done here. It was bad.”

Brent Snow  30:39  

I mean, if you’re at the very, very top of the organization, and particularly if you’re a man, by the way, you have the luxury of saying, “Well, I took a risk. It didn’t work out. We learned from it, and we’re going to go forward.”

Annie Duke  30:51  

I know it myself. I’ve presented that video so many times, and when I watch it, I still feel it. I think about Clinton’s campaign strategy. I feel it. I feel that, of course, she was stupid for not campaigning in those three states. I have to get into a different part of my brain. I’m sort of battling it all the time.

Michael Mauboussin talks about this really eloquently: there’s a part of the brain that’s been shown neurologically, called “the interpreter.” Basically, when you’re in the process of making a decision, this particular part of your brain isn’t engaged. But once you see an outcome, that’s when it’s engaged. And what it’s trying to do is make sense of the world for you. It’s trying to find causality. It’s trying to figure out what the connection is.

The interpreter is not happy with, “Well, there were lots of ways that this could turn out, and this happened to be one of them.” That’s not really what it’s trying to accomplish. It wants: I took the pool cue. I hit the white ball. The white ball collided with the blue ball, and that caused it to go into the pocket. It’s just trying to settle it. And then I think the other problem is that a lot of the things we think about, and this is true of the Pete Carroll thing as well, while they might be probabilistic in nature, settle to one or zero.

So I can say there’s a 30% chance of rain in the forecast area. And the next day, it either rains or it doesn’t. It settles to one or zero. I think that it’s very hard for “the interpreter” in our brain; it wants to say, “Okay, it’s settled to one or zero. I figured out what caused that.” I think that makes it really hard, because we can’t see 30% except over time. We see that it rained or it didn’t.
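[Editor’s note: a minimal simulation can make Annie’s point concrete. The sketch below is illustrative, not from the episode; the function names and the choice of 10,000 trials are assumptions. A single 30% forecast settles to 0 or 1 and reveals almost nothing, but across many forecasts the observed frequency, and a calibration measure like the Brier score, can show whether “30%” was a good number.]

```python
import random

random.seed(42)

def simulate(p, n):
    """Return n outcomes (1 = it rained) for events that truly occur with probability p."""
    return [1 if random.random() < p else 0 for _ in range(n)]

# A single 30% forecast settles to 0 or 1, which tells you almost nothing
# about whether "30%" was a good forecast.
one_day = simulate(0.30, 1)[0]

# Over many forecasts, the observed frequency and the Brier score
# (mean squared error between forecast and outcome) can reveal calibration.
many_days = simulate(0.30, 10_000)
observed_rate = sum(many_days) / len(many_days)
brier = sum((0.30 - y) ** 2 for y in many_days) / len(many_days)

print(one_day)                  # 0 or 1: uninformative on its own
print(round(observed_rate, 3))  # close to 0.30 over many trials
print(round(brier, 3))          # near 0.21 for a well-calibrated 30% forecast
```

The point of the sketch: “the interpreter” sees only `one_day`, while calibration only shows up in the long-run aggregate.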

Brent Snow  32:35  

Right. It’s like this outcome is actually swaying everything else, all the interpretation. It’s like this magnetic force at one or zero that just sort of pulls how we interpret everything that happened before. We may have been thinking very differently, but now we’re thinking this way.

Annie Duke  32:53  

I think you’re exactly right, Brent.

Sean Murray  32:55  

Yes, this idea of a decision process review is so important. We can learn so much from stepping back and assessing our decisions, our decision process, and our model, and then using the information from the outcome of the decision to update our process and our model. But it’s challenging, right? Because we don’t often take the time to do it, or, when we do take the time, we aren’t honest with ourselves. We revert to “resulting.”

Annie, you bring up an interesting thought experiment to get us to think about what exactly triggers a decision process review. You say, “Imagine we all work for a commercial investment firm, and we have a model that predicts our investment is going to be up 20% in market value in two years.” And then you fast forward two years. Maybe I’ll let you set this up since it’s your thought experiment.

Annie Duke  33:46  

So it’s two years down the road, and let’s say it appraises at 10% or 15% less than what our model predicted. I think we can all imagine that we’re in a room. It’s a very long meeting. If it’s a good company, they’re going to be talking about the model and the process for making the decision to invest. They’re going to say, “I know, I know, I don’t care that it lost. We don’t care. But we want to figure out what’s going on with our model. Obviously, we think our model was wrong. So let’s really dig into it.” It’s a very, very long meeting.

So now let’s do the thought experiment. We do the exact same thing. We said, “We really think that commercial real estate is being undervalued right now. We’re going to build an office building. We have it appraised at X dollars in two years. That’s what our model says.” We finish the project, and it appraises 10% or 15% higher than what our model said. Are we in a room, one that doesn’t involve champagne, by the way?

Is it a two-hour-long meeting about what was wrong with our model? Do we need to be examining it? Was this just a tail event, or was our model off? Why did it do so much better than we expected? Was it for reasons that we could predict, or was it orthogonal to the reasons that we thought? So on and so forth.

So what we can see is that there’s an asymmetry in what triggers process reviews. Because, Jake, to your point, as Phil Tetlock will tell you, you can’t allow process to live in a box. Just because outcomes are probabilistic in nature doesn’t mean you get to say, “Well, I thought we would observe that result 5% of the time, and we happened to observe it,” and leave it there.

Like in the case of the polling error in that election, particularly across those three states: you don’t want to just say, “Well, our polling is variable. There’s a margin of error.” You want to actually go look at the polls. You want something to trigger the loop. But you want it to trigger the loop whether the poll overestimated or underestimated the population vote, because underestimation is just as bad a problem as overestimation.

Number one, you don’t want to be doing either of those things. Two, you’re losing half of your opportunities to learn, to examine and try to figure out, “How can I refine my model?” You don’t want to lose half your learning opportunities. But three, and this is the really bad one, to Brent’s point: what are you communicating to the people who work for you? “Don’t you go doing that, or we’re going to be quizzing you about it.”

We know from Kahneman and Tversky’s work on prospect theory that the chance that you might be in the room having everybody toast champagne to you doesn’t even compare to that time in the room where you’re having to defend the decision that you made. Think about how that interacts with innovation.
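[Editor’s note: the asymmetry Annie is describing can be sketched with the prospect-theory value function. The code below is an illustration, not from the episode; the parameter values α ≈ 0.88 and λ ≈ 2.25 are the median estimates reported by Tversky and Kahneman (1992), and the dollar amounts are made up.]

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: gains are valued concavely,
    losses convexly and weighted about 2.25x heavier
    (median parameter estimates from Tversky & Kahneman, 1992)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

gain = pt_value(100)     # the champagne toast
loss = pt_value(-100)    # the painful "defend your decision" meeting
print(abs(loss) / gain)  # about 2.25: the loss looms larger than an equal gain
```

Under these parameters, an equal-sized bad outcome feels about two and a quarter times as bad as the good outcome feels good, which is why the blame meeting dominates the decision-maker’s thinking.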

So if we go back to the Pete Carroll example, I’m going to put this thought experiment to you. Pete Carroll hands it off to Marshawn Lynch, just like everybody wants. Marshawn Lynch fails to score. So he calls the timeout, which he would have to do. He again does what everybody expects him to do. He hands it to Marshawn Lynch a second time, and the Patriots line holds. Is there a headline that says “Worst call in Super Bowl history” now?

Jake Taylor  36:56  

Probably not. It’s better to fail conventionally than to succeed unconventionally. 

Sean Murray  37:00  

Good point.

Annie Duke  37:02  

What do you think is more likely? “It’s the worst call in Super Bowl history. What was he thinking?” Or, “This is why Belichick is going to be in the Hall of Fame. The Patriots defense was too strong. They held the beast at the goal line for the win.” Where do we think those headlines go? This becomes a problem as it interacts with this issue of, “Are we in the room for bad outcomes or good outcomes?”

We’re only in the room for bad outcomes. That’s only a problem if it’s unconventional, or it can be because you didn’t have full consensus. So now that causes consensus seeking, or false consensus seeking, or the worst version: consultants who are brought in just to defend a decision that somebody wants to make, so they can say, “No, look, we did the work.”

Brent Snow  37:57  

Or, what I would say happens increasingly in organizations, we call it “consensus overdosing.” You basically cover every base. You bring everybody in. And so if it goes bad, it’s like, “Oh well, look at everybody else who was involved.” Or [you go with the] status quo, which is the safe place to go, because how can you fault the status quo?

Annie Duke  38:17  

If you make a consensus or a status quo or conventional decision and you fail, it’s like, “That’s bad luck.” If you do well, it’s like, “Okay.” So you’re literally hovering around the middle. But if you do a Pete Carroll thing, you do something really unusual and you succeed, sure, you get called a genius. We see this all the time in Silicon Valley. But the problem is, when it fails, you’re called an idiot.

And it’s that quadrant that we’re so afraid of, so we just kind of try not to lose, which is a horrible way to live your life. It’s like, how can I just de-risk at every turn? You’re not going to do very well doing that. So one is that your employees, the people on your team, and in your own life, you’ll do this to yourself, because you make the same judgments about yourself.

It doesn’t take another person to do it. You’re going to be just kind of de-risking all the time. That’s number one. And then number two is that people are going to manage career risk with the kinds of things that you just said, Brent.

Sean Murray  39:17  

Jeff Bezos gets this right, if you’ve seen some of his quotes. Amazon did something about 10 years ago called the Fire Phone. It failed miserably. It was a billion-dollar bet, a multi-billion-dollar bet, potentially. He was asked about it: “What do you think, Jeff, of this failure? Of this terrible outcome?”


And he said, “Well, we’re going to have a lot bigger failures in the future. If we’re not seeing those kinds of failures, then we’re not making the right kinds of bets.” And that’s a very different mindset coming from your leadership. If you’re at Amazon and you want to place a bet, if you want to invest in something that’s going to be innovative, creative, and involve risk and uncertainty, knowing that you’ve got someone who thinks about risk in that way is going to clear the way culturally for that innovation.

Brent Snow  40:06  

So Sean, here’s the question, and I don’t know that we can totally know the answer. If you’re in the middle of the organization, if you’re in the middle of Amazon, you’re not Jeff Bezos. Do you feel that same freedom? It might not be a billion-dollar bet, but it might be a million-dollar bet. Do you feel that same freedom to take that kind of risk, make that kind of mistake, or not even make a mistake, but have it not go well, and end up in that quadrant that you were describing, Annie?

When you’re at the top of the organization, you can do it. Then you can talk about it later and say, “Hey, we just learned something really valuable. And that’s really important.” I think the challenge for folks who are in the middle of the organization is that they can, to some extent, maybe if they’re great managers and leaders, instill with their team, or the people who directly report to them, this “culture of experimentation.” Say, “Listen, I’d rather you take the risk, make a decision, go forward, learn from it, make changes quickly, whatever it is.”

But then if their leaders, their managers, don’t have that same mindset, they feel like they’re going to have to somehow manage the risk that they’re allowing their employees to take. Ultimately, it reflects back on them. They’re going to be called to the table for that long and painful meeting to explain the one time where it didn’t go well. It’s hard. They get frozen. They go into “consensus overdosing” by bringing everybody into the decision, or they feel like they have to have 100% of the information versus 70% of the information.

Teaching people to somehow feel confident enough to still make a call, in a world where they feel like they’re going to get called on the carpet when it doesn’t turn out exactly as they planned, is the real challenge. I started using a term, and actually, it’s one that seems to have a fair bit of resonance.

If you’re feeling stuck, and you’re afraid to make a decision because you’re afraid that once you make it, there’s going to be a result you’ll be judged on, think of decisions as experiments with the future. Reframe your decision not as, “Oh my god, it’s going to go one way or the other way, and what if I get it wrong?” but as, “This is kind of a positive experiment with the future, where I’m going to get data back.”

And also try to put the decision out there in a way where there’s some ability to make adjustments, learn from it, and bring it back in one way or another. It’s just that simple language shift.

Instead of, “What if I get this wrong?”, where I default to the status quo and don’t make a decision, or I “consensus overdose” or information overdose, and ultimately don’t make the decision, I think of it as this positive foray into the future, an experiment with the future. And I’m going to get feedback one way or the other. And like all experiments, sometimes it doesn’t work out exactly as planned.

Form a hypothesis. You have a prediction that you’re making. And best of all, you also document that prediction. You actually say, “Okay, here are my assumptions. Here are the things that I believe, and here’s why I’m making this decision.” And then you go back and revisit those assumptions, or revisit what you were thinking when you made the prediction. And then, hopefully, you get better and better over time as you do that.

Annie Duke  43:07  

I love what you said. Actually, Tim Harford talks a lot about this idea of experimenting, of thinking about decisions as experiments. And there’s so much good stuff in what you just said. I love it. So the first thing I want to circle back to is when you said, “if you actually have a record.” What I really like to think about is that, if you’re actually instituting a good process in a decision, a record naturally gets created.

I’m actually not a big fan of decision journals, in the sense of, “I’m going to make a decision and write it down.” I talk about it as pre-work. Pre-work is the way that I frame it, as opposed to a journal. I feel like a journal is a little bit of a diary. It feels like extra work. But if you think about it, a good decision process is going to create an evidentiary record, which serves the same purpose as a journal. It creates a good evidentiary record for you to be able to look back on. Then it doesn’t feel like extra work. You’re doing something that is part of the creation of the decision itself.

Part of that is what you’re talking about, Brent, which is, “What type of decision are you facing?” That tells you how experimental you can get. And if you really think about the type of decision that you’re facing, it frees you up, because you’re framing the impact of the decision. As you said, “What’s the impact going to be? What’s my optionality?” Those are the two things that you should really care about as you’re going into it.

And once you realize you’re in a low-impact situation with lots of optionality, you should be able to get an organization to start moving very quickly. Now the defense is, “We thought about what type of decision it was. We identified the decision environment, and that told us how far up the chain we needed to even be thinking about this.”

But then I also want to go back to the thing that you said about people wanting to get to 100% on a decision. The thing is, I think people forget about time as a commodity, and that when you’re entering any decision, there’s a trade-off between time and accuracy, or time and certainty.

Certainly, we really want to be thinking about that, because it can get us into a situation where we end up making no decision. You’re accruing risk by not making a decision, over the whole time that you don’t make it, just as much as you would be if you made the decision itself. Either way, you’ve got risk accruing. And in general, what we’d like to do is speed things up, if possible. Sometimes the best information you can get is how it turns out.

We live in a world that’s so data-rich. When I was growing up, it was like, “Well, let’s just go to this restaurant.” Now it’s, “We’ve got to go on Yelp. We have to examine every restaurant in the area.” Everything is reviewed, so I go read some of the reviews. We have this idea that with enough data, with just one more piece of information, if we just looked a little harder, then maybe we could get to a hundred percent sure.

And so I think that, again, another paradox is that the amount of data we have available to us now can actually slow decision-making down in a really bad way. It encourages the illusion that we can get to 100%. But think about an example: if I’m in a restaurant, and I’m trying to decide between the chicken and the fish, there’s not a lot I can find out that’s going to tell me whether I’ll actually like the dish, beyond ordering it, having it in front of me, and tasting the food.

And there are so many decisions that are like that. I just have to taste the food. Then you can take that and work backwards and say, “Well, if I do have a really high-impact decision on the horizon, what are the small experiments that I can run in front of it, in order to actually build a better model of the world, beyond just trying to accumulate information about base rates?” That information is important, but it doesn’t give you the answer. Well, it can sometimes, right?

If there’s no skill involved, base rates are certainly the only thing I would want to know. But assuming there is skill involved, that’s not necessarily going to give you the answer. And now I can think about how I can kind of date before I marry in this particular decision. If I’m going to make a really illiquid investment, what are the smaller things that I can do in front of that, to help me understand, before I actually have to make the investment that’s going to be very illiquid and incredibly impactful in my decision-making life? I just wanted to circle back to a few of those things. What you said, Brent, was just so jam-packed with amazing stuff.

Brent Snow  47:33  

Jake, how do you date before you marry in the investment world? I love that analogy, by the way.

Jake Taylor  47:37  

People will paper trade. That’s sort of one way of doing it. You can also just scale down all your position sizing. There’s this idea that you really don’t know a company until you’ve owned it for a while. I’ve found that to be true. You can do all the research you want, but once you actually own it, then you’re like, “Oh, man, I really need to know this now.”

Some people will unlock how much of the portfolio they’re willing to dedicate to an idea over time, as they come to know it better. So you can scale yourself into it as you hopefully capture more of the total information that would affect your outcomes.

Annie Duke  48:15  

So if you think about options and impact, that’s a way to reduce impact. The more you can reduce impact, the more quickly you can go. It’s safer to be deciding where, essentially, your accuracy is just going to be lower because you don’t quite know as much. You’re actually using the decision as an information-gathering tool.

And so if you’re thinking in those two categories, options and impact: the more liquid something is, the faster you can go, because I can just get right out of it if I find new things out. And if you can lower impact, make a lot of small bets while you’re building your model, before you make a really big bet, just to get the information. And you can actually work backwards. You can say, “If we’re really interested in this particular area or particular company, what’s the minimum amount that we would have to invest in order to get access to that information?”
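[Editor’s note: one simple way to sketch the “date before you marry” scaling that Jake and Annie describe is a position cap that grows with conviction. The code below is purely illustrative; the function name, the 1% starter stake, and the 10% full-position ceiling are made-up assumptions, not anything recommended on the show.]

```python
def max_position(conviction, floor=0.01, ceiling=0.10):
    """Hypothetical cap on a position's portfolio weight: start near a small
    'starter stake' floor and unlock toward a full position as conviction
    (0.0 to 1.0), built by owning and studying the company, grows."""
    conviction = min(max(conviction, 0.0), 1.0)  # clamp to [0, 1]
    return floor + conviction * (ceiling - floor)

print(round(max_position(0.0), 3))  # 0.01  -> 1% starter stake for a new idea
print(round(max_position(0.5), 3))  # 0.055 -> partway through the "dating"
print(round(max_position(1.0), 3))  # 0.1   -> full 10% once it's well understood
```

The design choice mirrors the conversation: the starter stake keeps impact low while you gather information, and size is unlocked only as your model of the company improves.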

Sean Murray  49:03  

Let’s layer in another level of complexity when it comes to investing, especially value investing. Jake, as you’re getting to know this company, if the stock price keeps increasing, you’re going to have some regret. Potentially, you’re going to have hindsight bias: “Why didn’t I go in bigger earlier?”

Alternatively, if the stock price is going down as you’re learning more about the company, and you continue to like what you’re learning, you should be celebrating. Warren Buffett talks about tap dancing and doing a little dance. That’s like saying, “Alright, the price of the stock’s going down. Woohoo!” But those are hard mental and emotional states to put yourself in, because they run counter to, obviously, your hindsight bias, or your potential net worth as you think about it.

Jake Taylor  49:49  

I’m actually a little bit pessimistic that we will be able to overcome some of these challenges. I think we’re actually fighting biology in a lot of ways. For instance, take decision-making when, let’s say, you were the leader of a little tribe a million years ago. You lived in a world, first of all, that was very Newtonian. Cause and effect were a lot closer together. We found the animal. We killed it. We got to eat dinner.

There wasn’t as much of these tail events. I think the tails were a little bit narrower than they are today, in such an interconnected world, a global pandemic being a good example of how everything’s so interconnected now. We’re living in more complex systems than we did at that time.

So if you were the leader of that tribe, and something crazy happens, a left-tail event, that’s an act of God, right? You just blame God, or the gods, for that one. Whereas now, putting yourself into either one of the tails, good or bad outcomes, requires you to really go against your biology and, probably, how leadership evolved over time.

It was better to have consensus, where everyone agreed, “Yeah, we’re going to move the camp over here.” You’re kind of sharing the risk a little bit there. If you went out on your own when no one else wanted to, maybe your head ends up on a spike at that point. So I think we’re fighting a lot of our biology to do some of these best practices. It doesn’t mean we shouldn’t try, but I think we have to be cognizant of that.

Sean Murray  51:17  

That’s a great note to end on, this idea that you brought up. We’re constantly battling to overcome hindsight bias, to stop judging a decision purely by the outcome without looking at the process. It reminds us all that it’s a learning process. We need to document decisions, go back and look, and see how we did.

Think about the process and not just the outcome. It’s a call to continually learn about decision-making. Hopefully, we’ll reconvene in the future and continue our discussion about how to get better. We’ve uncovered so many different threads and angles in decision-making that we could explore as a group.

Brent Snow  52:01  

Sean, let me just jump in and give a couple of things that would be great angles to explore. I think there’s a whole angle that we just touched on about risk-taking and gender in decision making, and how that plays out, particularly as others are perceiving the decision, that would be powerful to explore. I think there’s this whole angle about helping others overcome that biology, as you described, Jake. *inaudible* say, “The traps are going to be there. The biases are going to be there, and people are going to fall into them. Anchoring is going to happen.”

We’re going to do those things naturally, one way or another. Even when we’re aware that we’re doing them, we still do them, one way or another. I tend to believe that there are practices you can put in place and things you can do that help you avoid some of those traps.

Annie Duke  52:46  

I think those two things live together, because if you take a systems approach to it, basically what you’re saying is: you’re going to do it anyway, so I’m going to create a system where it’s very hard for that to impact you.

Brent Snow  52:56  

Harder, at least. And then you have to somehow uphold it. I also think that there’s an opportunity to tell some personal stories. I’ve been challenged in the last three or four months with making some medical decisions for myself, where I’ve had to apply every little bit of what I know about decision making to myself. There were traps all over the place, and I was ultimately making a decision in the context of huge uncertainty, way too much information, and multiple opinions that don’t all agree with each other.

It’s been interesting to observe myself in action in that process, and also to observe how people have responded to me as a decision-maker in that process, as I bring the various tools that I have at my disposal from a decision-making standpoint into it. These are ways that are somewhat different from what they normally experience, in terms of the questions I ask, how I narrow down options, and how I weigh the trade-offs between different risks.

Sean, I would just say I think there’s certainly opportunity for multiple additional avenues here. This has been fun.

Annie Duke  53:59  

I agree. I think there are so many different solutions that you can put into place. We talked a lot about what the problems are. We talked a little bit about how things are generally done. But to Brent’s point, there are things that you can really put in place within an organization that can drastically improve how people are making decisions.

I think, to Jake’s point, the biology is there, but I actually have sort of a hopeful take on it, from two places. One is, you don’t have to move a lot to get really big results. That’s good news. So if you can overcome your biology even a tiny little bit, you’re actually going to be way better off. But also, I think that natural tribalism, and the way that the tribe works, some of which has to do with the charisma and the confidence of the leader in creating a kind of consensus and whatnot, means that you can actually dig into what the tribe is offering you. What are the things that it’s actually doing for you?

And then turn that on its head, so that you really organize around what you would say is the epistemology of the tribe. Tribes cohere around epistemology. Like, we’re going to move our camp here because we believe that that is correct. That’s the belief you’re organizing around that’s causing that action to occur.

I think that you can actually be pretty intentional about the epistemology of a group, so that you can get them to cohere around something that looks more open-minded than it otherwise would be.

I’d also love to have a discussion about that. How can you actually create a culture around really understanding that it’s just a tribe, and it’s why we survived, but there are disadvantages? So how can you try to mitigate the disadvantages that come along with it? It’s a really fun discussion to have. I just think there’s so much in everything that Brent, Jake, and Sean have talked about. There’s so much stuff that you can pull out of that. It’s so amazing.

And then get into a real practical application of a lot of what we talked about with the problems. We talked a little bit about the solutions, but even in terms of forecasting: okay, so how do you actually do that? How do you create a great forecast? There are some high-level solutions. But how do you actually dig down and get into the nuts and bolts of how you’d build that piece of furniture? You need a dresser. “Okay, but how?” Which I think is just so fun to talk about.

Sean Murray  56:20  

Let’s reconvene this group soon. We’ll explore the idea of how we can actually solve some of these problems. This has been just a wonderful conversation. Thank you for being on The Good Life.

Brent Snow  56:33  

Thank you, Sean. It’s been a pleasure. 

Jake Taylor  56:34  

Thanks for having me.

Annie Duke  56:36  

Thank you. I had so much fun. 

Outro  56:38  

Thank you for listening to TIP. To access our show notes, courses or forums, go to This show is for entertainment purposes only. Before making any decisions, consult a professional. This show is copyrighted by The Investor’s Podcast Network. Written permission must be granted before syndication or rebroadcasting.

