How to Decide

I loved “Thinking in Bets” by Annie Duke immensely - it taught me to think probabilistically about the future, and it made a huge difference, especially in the darkest hours. Well, now I can say that I loved “How to Decide” even more. If “Thinking in Bets” was a WHY book, “How to Decide” is a WORKBOOK that enables learning good decision-making BY DOING: for every concept and framework, there are exercises, charts and checklists that bring a good decision-making process to life. With a pencil at hand, you can examine your past decisions and develop a better process for the decisions you are facing right now. A gem.

Why is it so important to have a high-quality decision process? Because there are only two things that determine how your life turns out: luck and the quality of your decisions. You have control over only one of those two things. Luck, by definition, is out of your control. Where and when you were born, whether your boss comes into work in a bad mood, which admissions officer happens to see your college application—these are all things that are out of your hands. The only thing you have control over that can influence the way your life turns out is the quality of your decisions.

“How to Decide” offers a framework for thinking about how to improve your decisions and a set of tools for executing on that framework.

  • First, the book offers a framework for examining your past decisions: figuring out the balance of luck and skill in every outcome. This matters because how you see your past decisions will inform your future ones.

  • Then, the focus shifts to new decisions: what a high-quality decision process looks like and a set of tools to implement it. Determining whether a decision is good or bad means examining the quality of the beliefs informing the decision, the available options, and how the future might turn out given any choice you make. The book offers tools for getting better at making educated guesses about an uncertain future, and ways to improve the quality of the knowledge and beliefs that inform your decisions.

  • While we’re best served by a solid decision process for the most consequential decisions, there are instances when sacrificing some decision quality is not a big deal. The book offers mental models that help you judge when you need to decide fast and when you need to slow down, so you can spend less time on inconsequential decisions and more on those that matter.

  • Last but not least, the book offers ways to better leverage the knowledge and information that other people have, including eliciting feedback from others and avoiding the pitfalls of groupthink.

Below are the wrap-ups for every chapter, just as they are in the book.

Resulting

Resulting is the tendency to look at whether a result was good or bad to figure out whether a decision was good or bad.

  • Outcomes cast a shadow over the decision process, leading you to overlook or distort information about the process, making your view of decision quality fit with outcome quality.

  • In the short term, for any single decision, there is only a loose relationship between the quality of the decision and the quality of the outcome. The two are correlated, but the relationship can take a long time to play out.

  • Luck is what intervenes between your decision and the actual outcome. Resulting diminishes your view of the role of luck.

  • You can’t tell that much about the quality of a decision from a single outcome, because of luck.

  • When you make a decision, you can rarely guarantee a good outcome (or a bad one). Instead, the goal is to try to choose the option that will lead to the most favorable range of outcomes.

  • Making better decisions starts with learning from experience. Resulting interferes with that learning, causing you to repeat some low-quality decisions and stop making some high-quality decisions. It also keeps you from examining good-quality/good-outcome decisions (as well as bad-quality/bad-outcome decisions), which still offer valuable lessons for future decisions.

  • Resulting reduces compassion when it comes to how we treat others and ourselves.

Hindsight Bias

  • Hindsight bias is the tendency to believe that an outcome, after it occurs, was predictable or inevitable.

  • Hindsight bias, like resulting, is a manifestation of the outsized influence of outcomes. In this case, the outcome casts a shadow over your ability to accurately remember what you knew at the time of the decision.

  • Hindsight bias distorts the way you process outcomes in two ways: “Should have known” and “Knew it all along.”

  • Hindsight bias is frequently connected with a set of verbal or mental cues.

  • Once you know how a decision turns out, you can experience memory creep, where the stuff that reveals itself after the fact creeps into your memory of what you knew, or what was knowable, before the decision.

  • To learn from your choices and their outcomes, you need to strive to be accurate about what you knew at the time of your decision.

  • The Knowledge Tracker is a tool that can help separate what you knew from what you subsequently learned.

  • Hindsight bias leads us to lack compassion for ourselves and others.

The Decision Multiverse

  • The paradox of experience: Experience is necessary for learning, but individual experiences often interfere with learning. This is partly because of the biases that cause us to overfit decision quality to outcome quality.

  • Viewing the outcome that occurred in the context of other potential outcomes at the time of the decision can help to resolve this paradox.

  • There are many possible futures but only one past. Because of this, the past feels inevitable.

  • Re-creating a simplified version of a decision tree puts the actual outcome in its proper context.

  • Exploring the other possible outcomes is a form of counterfactual thinking. A counterfactual is something that relates to an outcome that has not happened but could have happened, or an imagined state of the world.

  • Our willingness to examine outcomes is asymmetrical. We are more eager to put bad outcomes in context than good ones. Becoming a better decision-maker requires us to try (difficult though it may be) to put those good outcomes in perspective.

The Three P’s: Preferences, Payoffs, and Probabilities

  • Incorporating preferences, payoffs, and probabilities into a decision tree is an integral part of a good decision process.

  • Preference is individual to you, dependent on your goals and values.

  • The payoff is how an outcome affects your progress toward or away from a goal.

  • Some possibilities will have payoffs where you gain something you value. These comprise the upside potential of a decision.

  • Some possibilities will have payoffs where you lose something you value. These comprise the downside potential of a decision.

  • Risk is your exposure to the downside.

  • Payoffs can be measured in anything you value (money, time, happiness, health, the happiness or health or wealth of others, social currency, etc.).

  • When you’re figuring out whether a decision is good or bad, you’re comparing the upside to the downside. Does the upside potential compensate for the risk of the downside potential?

  • Probabilities express how likely something is to occur.

  • Combining probabilities with preferences and payoffs helps you to better resolve the paradox of experience, allowing you to get out from under the shadow of the particular result that you are dealt.

  • Combining probabilities with preferences and payoffs helps you more clearly evaluate and compare options.

  • A pros and cons list is flat. It lacks information about both the size of the payoffs and the probability of any pro or con occurring. Because of that, it is a low-quality decision tool for evaluating options and comparing them to one another (a small sketch contrasting the two approaches follows this list).

  • Most people are reluctant to estimate the likelihood of something happening in the future. (“That’s speculative.” “I don’t know enough.” “I’d just be guessing.”)

  • Even though your information is usually imperfect, you know something about most things, enough to make an educated guess.

  • The willingness to guess is essential to improving decisions. If you don’t make yourself guess, you’ll be less likely to ask “What do I know?” and “What don’t I know?”

  • You can start expressing probabilities by using common terms. That gets you thinking about how often outcomes will occur, presents a view of relative likelihood, and gives you a snapshot of the overall likelihood of the best and worst outcomes.
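
The contrast between a flat pros-and-cons list and a tree that carries payoffs and probabilities is easy to make concrete. Below is a minimal sketch, not taken from the book: the two options, their outcomes, payoff sizes, and probabilities are invented purely for illustration. It shows how weighting each possibility by its likelihood lets you compare options on expected payoff and on exposure to the downside, which a flat list cannot do.

```python
# A minimal sketch (not from the book) of weighing options by combining
# probabilities with payoffs instead of using a flat pros-and-cons list.
# The options, outcomes, payoffs, and probabilities are invented.

options = {
    "stay at current job": [
        # (outcome, payoff in arbitrary "value" units, probability)
        ("steady year, small raise",       +5,  0.70),
        ("stagnation, no raise",            0,  0.25),
        ("team dissolves, forced to move", -20, 0.05),
    ],
    "join the startup": [
        ("startup takes off",             +100, 0.15),
        ("muddles along, normal salary",    +5, 0.55),
        ("startup folds within a year",    -30, 0.30),
    ],
}

for name, outcomes in options.items():
    # Sanity-check that the listed possibilities cover the whole tree.
    assert abs(sum(p for _, _, p in outcomes) - 1.0) < 1e-9
    expected = sum(payoff * p for _, payoff, p in outcomes)
    downside = sum(payoff * p for _, payoff, p in outcomes if payoff < 0)
    print(f"{name}: expected payoff {expected:+.1f}, "
          f"probability-weighted downside {downside:+.1f}")
```

The particular numbers do not matter; the structure does. Once each branch has a payoff and a probability, the question “does the upside potential compensate for the risk of the downside potential?” becomes something you can actually compute and compare across options.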

Taking Dead Aim

  • Natural language terms that express likelihoods, like “very likely” and “unlikely,” are useful but blunt instruments.

  • The drive to improve on your initial estimates is what motivates you to check your information and learn more. If you hide behind the safety of a general term, there’s no reason to improve on it or calibrate.

  • Terms that express likelihoods mean very different things to different people.

  • Using ambiguous terms can lead to confusion and miscommunication with people you want to engage for help. Being more precise, by expressing probabilities as percentages, makes it more likely you’ll uncover information that can correct inaccuracies in your beliefs and broaden your knowledge.

  • You can use your answers to the Mauboussins’ survey to help you convert natural language terms to exact probabilities. In addition to making precise (bull’s-eye) estimates, offer a range around that estimate to express your uncertainty. Do this by including a lower and upper bound that communicates the size of your target.

  • The size of the range signals what you know and what you don’t know. The larger the range, the less information or the lower the quality of the information informing your estimate, and the more you need to learn.

  • Communicating the size of the range also signals to others that you need their knowledge and perspective to narrow the range.

  • Use the shock test to determine if your upper and lower bounds are reasonable: Would you be really shocked if the correct answer fell outside those bounds? Your goal should be to have approximately 90% of your estimates capture the objectively true value (a small sketch of checking this follows the list).

  • Develop a habit of asking yourself, “What information could I find out that would tell me that my estimate or my belief is wrong?”
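
The ~90% target above can be checked in practice by recording your bull’s-eye-plus-range estimates and revisiting them once the true values are known. The sketch below is not from the book; the questions, bounds, and “true” values are made up, and it simply counts how often the interval captured the answer.

```python
# A minimal sketch (not from the book) of checking your own calibration
# against the roughly-90% goal. All estimates and true values are invented.

# Each record: (question, lower bound, upper bound, value learned later)
estimates = [
    ("weeks the project will take",   6,   12,   14),
    ("signups in the first month",  200,  800,  350),
    ("cost of the repair, USD",     150,  400,  390),
    ("pages in the final report",    20,   60,   35),
    ("days until the part arrives",   3,   10,    9),
]

captured = sum(1 for _, lo, hi, truth in estimates if lo <= truth <= hi)
rate = captured / len(estimates)
print(f"{captured}/{len(estimates)} intervals captured the true value "
      f"({rate:.0%}); the goal is roughly 90%.")
# A capture rate far above 90% suggests your ranges are wider than your
# knowledge requires; far below suggests overconfidence.
```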

Turning Decisions Outside In

  • The inside view is the view of the world through your own perspective, your own beliefs, and your own experiences.

  • Many common cognitive biases are, in part, the product of the inside view.

  • Pros and cons lists amplify the inside view.

  • The outside view is the way that others would see your situation, or what’s true of the world in general, independent of your own perspective.

  • It’s important to explore the outside view even if you think you’ve got your facts straight, because it’s possible other people could look at the same facts and come to different conclusions.

  • The outside view acts to discipline the biases and inaccuracies that live in the inside view, which is why you want to anchor first to the outside view.

  • Accuracy lives at the intersection between the inside view and the outside view. The things that are particular to your situation matter, but those particulars should be married with the things that are true of the world in general.

  • When it comes to reasoning about the world, your beliefs are in the driver’s seat.

  • Motivated reasoning is the tendency to process information to get a conclusion we want rather than to discover what is true.

  • Smart people aren’t immune to motivated reasoning and the inside view. In fact, being smart can make it worse because smart people have more confidence in the truth of their beliefs and can spin better narratives to sway other people (and themselves) toward their point of view.

  • A good way to get to the outside view is to look for any base rates that might apply to your situation.

  • Another way to get to the outside view is to seek out other people’s perspectives and feedback. It’s important, however, that they feel comfortable expressing disagreement or a perspective that might cast you in an unflattering light. Otherwise, they’re only amplifying the inside view, strengthening your belief in your accuracy because it feels certified by others. You should be eager to hear people disagree with you and motivate them to do so.

  • Perspective Tracking is a good decision habit to develop. Intentionally considering your situation entirely from the outside view and then entirely from the inside view can get you to a more accurate view that incorporates both.

Breaking Free from Analysis Paralysis

  • We spend an enormous amount of time on routine, inconsequential decisions. The average person spends 250–275 hours per year deciding what to eat, watch, and wear. That’s the equivalent of six or seven weeks of time spent at work.

  • There is a time-accuracy trade-off: Increasing accuracy costs time. Saving time costs accuracy.

  • The key to balancing the trade-off between time and accuracy is figuring out the penalty for not getting the decision exactly right.

  • Getting an initial understanding of the impact of your decision (through the framework of evaluating possibilities, payoffs, and probabilities) will identify situations in which the penalty is small or nonexistent, giving you leeway to sacrifice accuracy in favor of deciding faster.

  • Recognizing when decisions are low impact also maximizes opportunities to poke at the world, which increases your knowledge and helps you learn more about your preferences, improving the quality of all future decisions.

  • You can identify low-impact decisions with the Happiness Test, asking yourself whether how your decision turns out will likely have an effect on your happiness in a week, a month, or a year. If the answer is no, the type of thing you are deciding about passes the Happiness Test and you can go fast.

  • If a decision passes the Happiness Test and the options repeat, you can go even faster.

  • A freeroll is a situation in which there is limited downside. Save time deciding whether to seize a freeroll; take time in deciding how to execute it.

  • When you have multiple options that are close in potential payoffs, these are sheep in wolf’s clothing decisions. Close calls for high-impact decisions tend to induce analysis paralysis, but the indecision is, in itself, a signal that you can go fast.

  • To determine if a decision is a sheep in wolf’s clothing, use the Only-Option Test, asking yourself for each option, “If this were the only option I had, would I be happy with it?” If your answer is yes for more than one option, you could flip a coin since you can’t be that wrong whichever option you pick.

  • Allocate your decision time using the menu strategy. Spend time sorting, determining which options you like. Once you have options you like, save time picking.

  • When you pick an option, you’re passing on the potential gains associated with the options you don’t pick. This is the opportunity cost. The higher the opportunity cost, the higher the penalty for making choices that are less certain.

  • You can defray opportunity cost and decide faster by being quit-to-itive, looking at decisions through the framework of whether you can change your mind, quit your choice, and choose something else at a reasonable cost.

  • Decisions with a low cost to quit, known as two-way-door decisions, also provide you with low-cost opportunities to make experimental decisions to gather information and learn about your values and preferences for future decisions.

  • When you’re facing a decision with a high or prohibitive cost of changing your mind, try decision stacking, making two-way-door decisions ahead of the one-way-door decision.

  • You can also defray opportunity cost if you can exercise multiple options in parallel.

  • Because you can rarely approach perfect information or be certain of the outcome of your decision, you will make most decisions while still uncertain. To figure out when additional time is no longer likely to increase accuracy in a worthwhile way, ask yourself, “Is there additional information (available at a reasonable cost) that would establish a clearly preferred option or, if there is already a clearly preferred option, cause you to change your preferred option?” If yes, go find it. If no, decide and move on.

The Power of Negative Thinking

  • We are pretty good at setting positive goals for ourselves. Where we fall flat is at executing the things we need to do to achieve them. The gap between the things we know we should do and the decisions we later make is known as the behavior gap.

  • The message of the power of positive thinking is that you’ll succeed if you imagine yourself succeeding. Whether explicitly or by reasonable inference, the message is also that failure is the result of thinking about failure.

  • Despite the importance of setting positive goals, positive visualization alone won’t give you the best route to success. Negative thinking helps you identify things that might get in your way so you can identify ways to reach your destination more efficiently.

  • Thinking about how things can go wrong is known as mental contrasting. You imagine what you want to accomplish and confront the barriers in the way of accomplishing it.

  • You can identify more potential obstacles by combining mental contrasting with mental time travel, picturing yourself in the future having failed to achieve a goal, and then looking back at what got you to that outcome.

  • Looking back from an imagined future at the route that got you there is called prospective hindsight.

  • A premortem combines prospective hindsight with mental contrasting. To do a premortem, you place yourself in the future and imagine that you have failed to achieve your goal. You then consider the potential reasons things worked out poorly.

  • In addition to helping individuals, premortems can help teams minimize groupthink and maximize access to the outside view by eliciting more diversity of opinions. This is especially true if team members do the premortem independently before discussing it as a group.

  • A companion technique to a premortem is backcasting, where you work backward from a positive future to figure out why you succeeded.

  • You can turn the output of premortems and backcasts, for easy reference, into a Decision Exploration Table, which also includes an estimate of the chances of the reasons for failure and success occurring.

  • Given what you’ve learned from creating a Decision Exploration Table, the first thing to ask is whether you should modify your goal or change your decision.

  • Once you’ve established that you’re sticking with your goal or decision, you can create precommitment contracts, which raise barriers to behavior that interfere with your success or lower barriers to encourage behavior that promotes your success.

  • You can also prepare for your reaction to setbacks along the way to your goal. People compound negative outcomes by making poor decisions after a bad result. Tilt is a common reaction that occurs in the wake of a bad result. The what-the-hell effect and the sunk cost fallacy are examples of tilt. Planning for your reaction allows you to create precommitments, establish criteria for changing course, and dampen your emotional reaction in the wake of a setback.

  • The Dr. Evil game helps identify and address additional ways your behavior in the future might undermine your success. In the game, you note the ways that Dr. Evil would control your mind to make you fail through decisions that are justifiable as one-offs but unjustifiable over time.

  • The Dr. Evil game can encourage you to adopt a precommitment called a category decision, where you decide in advance what options you can and cannot choose when you face a decision that falls within that category.

  • You can also address potential bad luck by hedging, paying for something that mitigates the impact of a downside event occurring.

Decision Hygiene

  • One of the best ways to improve the quality of your beliefs is to get other people’s perspectives. When their beliefs diverge from yours, it improves your decision-making by exposing you to corrective information and the stuff you don’t know.

  • Beliefs are contagious. Informing somebody of your belief before they give their feedback significantly increases the likelihood that they will express the same belief back to you.

  • Exercise decision hygiene to stem the infection of beliefs. The only way somebody can know that they’re disagreeing with you is if they know your opinion first. Keep your opinions to yourself when you elicit feedback.

  • The frame you choose can signal whether you have a positive or negative view about what you’re trying to get feedback on. Stay in neutral as much as possible.

  • The word “disagree” has very negative connotations. Using “divergence” or “dispersion” of opinion instead of “disagreement” is a more neutral way of talking about places where people’s opinions differ.

  • Outcomes can also infect the quality of feedback. Quarantine others from the way things turned out while eliciting their feedback. When you’re asking for feedback about something that happened in the past and several outcomes are sitting in the way, elicit the feedback iteratively.

  • For feedback of any kind, put the person, as closely as possible, into the state of knowledge you were in when you made the decision.

  • Group settings offer the potential of improving decision quality if you can access the different perspectives of the group. Often, this potential is undermined by the tendency of groups to coalesce around consensus quickly, discouraging members with information or opinions that disagree with the consensus from sharing them.

  • Groups can better fulfill their decision-making potential by exercising group decision hygiene, soliciting initial opinions and rationales independently before sharing with the group.

  • Due to the halo effect, opinions from high-status members of the group are especially contagious.

  • Anonymizing feedback on the first pass allows ideas to be better considered on their merits rather than according to the status of the individual who holds the belief.

  • For lower-impact, easier-to-reverse decisions, the group can still contain the contagion through a quick-and-dirty version of this process, where group members write down their opinions and someone reads them aloud or writes them on a whiteboard before discussion, or where members read their own opinions aloud in reverse order of seniority.

  • The quality of feedback is limited by the quality of the input into the feedback elicitation process. We tend to spin narratives that highlight, lowlight, or even omit information that isn’t helpful to the conclusion that we would like others to reach.

  • Give the other person what they need to know to give you a qualified opinion and no more.

  • Access the outside view by asking yourself, “If someone came to me asking my opinion about this kind of decision, what would I need to know to give good advice?”

  • Build a checklist of relevant details for repeating decisions and make that checklist before you’re in the midst of a decision. Such a list should focus on the applicable goals, values, and resources, along with the details of the situation.

  • Members of a group should hold one another accountable to the checklist. If someone is eliciting feedback and they can’t provide details on the checklist, there should be an agreement not to give feedback.


Arina Divo