Michael Mauboussin’s short book on the psychology of bad decisions, Think Twice, features an endorsement on its cover from Billy Beane, saying he hopes his competitors don’t read the book. While it doesn’t go anywhere near as deep into the psychology (and neurology) of decision-making as Daniel Kahneman’s Thinking, Fast and Slow, Mauboussin’s book covers much of the same ground and does so in a quick, superficial way that might reach more people than Kahneman’s more thorough but often dense treatise could.
Mauboussin’s book carries the subtitle “Harnessing the Power of Counterintuition,” but I would describe it more as a guide to avoiding decisions based on easily avoidable mental traps. Think Twice has eight chapters, each dealing with a specific trap, most of which will be familiar to readers of Kahneman’s book: base-rate neglect, tunnel vision, irrational optimism, overreliance on experts, ignoring context, phase transitions (black and grey swans), and conflating skill and luck. Where Kahneman went into great depth with useful examples and sometimes less-useful descriptions of fMRI test results, Mauboussin writes like he can’t get to the point fast enough – an often desirable trait in the popular business non-fiction section of the bookstore, since the assumption is that business executives don’t have time to read (even if the book might save them millions of dollars).
That lightweight approach still gives Mauboussin plenty of space to hammer home the critical lessons of the book. Some of his examples don’t need a lot of explanation, such as pointing out that playing French music or German music in a wine store aisle with wines from both countries skewed consumer choices – even though those consumers explicitly denied that the music affected their choices. (Context matters.) He targets sportswriters directly when discussing their (our) difficulty (or inability) in distinguishing skill from luck – and, in my experience, fans often don’t want to hear that something is luck, even when the sample size is so small that you couldn’t demonstrate skill no matter what statistical test you ran. He mentions The Boss going off in the papers when the Yankees started 4-12 in 2005, and writers buying right into the narrative (or just enjoying the free content Steinbrenner was providing). But we see it every October, and during every season: are the Giants really the best team in baseball, or is there an element of luck (or, to use the more accurate term, randomness) in their three championship runs in five seasons? Yet we see articles every year that proclaim players to be clutch or “big game”; my colleague Skip Bayless loves to talk about the “clutch gene,” yet I see no evidence to support its existence. I think Mauboussin would take my side in the debate, and he’d argue that an executive making a decision on a player needs to set aside emotional characterizations like that and focus on the hard data where the sample sizes are sufficiently large.
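The small-sample point is easy to demonstrate with a toy simulation (mine, not from the book; the .270 true average and 60 “clutch” at-bats are illustrative assumptions). Give thousands of hitters identical true talent and no clutch skill at all, and a meaningful fraction of them will still post clutch lines that look heroic or disastrous over a season’s worth of high-leverage at-bats:

```python
import random

random.seed(42)

TRUE_AVG = 0.270   # every simulated hitter has identical true talent
CLUTCH_AB = 60     # roughly a season of "clutch" at-bats: a tiny sample
N_HITTERS = 10_000

# Simulate each hitter's clutch batting average from pure chance.
clutch_avgs = []
for _ in range(N_HITTERS):
    hits = sum(random.random() < TRUE_AVG for _ in range(CLUTCH_AB))
    clutch_avgs.append(hits / CLUTCH_AB)

# With identical talent, how many hitters *look* clutch (.330+) or
# look like chokers (.210 or worse) over those 60 at-bats?
looks_clutch = sum(avg >= 0.330 for avg in clutch_avgs)
looks_choker = sum(avg <= 0.210 for avg in clutch_avgs)
print(f"'clutch' hitters: {looks_clutch / N_HITTERS:.1%}")
print(f"'chokers':        {looks_choker / N_HITTERS:.1%}")
```

Both groups come out in the double-digit percentages even though, by construction, there is no clutch skill anywhere in the simulation – which is exactly why 60 at-bats can’t prove the “clutch gene” exists.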
His chapter on the world’s overreliance on experts also directly applies to the baseball industry, both within teams and within the media. It is simply impossible for any one person to be good enough at predictions or forecasting to beat a well-designed projection system. I could spend every night from February 10th until Thanksgiving scouting players, see every prospect every year, and still wouldn’t be better on a macro level at predicting, say, team won-lost records or individual player performances than ZiPS or Steamer or any other well-tested system. The same goes for every scout in the business, and it’s why the role of scouting has already started to change. Once data trackers (like TrackMan) can provide accurate data on batted-ball speeds and locations or spin rates on curveballs for most levels of the minors and even some major college programs, how much will individual scouts’ opinions on player tools matter in the context of team-level decisions on draft picks or trades? The most analytically inclined front offices already meld scouting reports with such data, using them all as inputs to build better expert systems that can provide more accurate forecasts – which is the goal, because whether you like projection systems or not, you want your team to make the best possible decisions, and you can’t make better decisions without better data and better analysis of those data. (Mauboussin does describe situations where experts can typically beat computer models, but those tend to be more static situations where feedback is clear and cause/effect relationships are simple. That’s not baseball.)
Mauboussin’s first chapter describes the three central illusions that lead to irrational optimism – a phenomenon we see all the time in baseball when teams are asked to evaluate or potentially trade their own prospects: the illusions of superiority, optimism, and control. Our prospects are better than everyone else’s because we scout better, we develop better, and we control their development paths. When you hear that teams are overrating prospects, sometimes that’s just another GM griping that he can’t get what he wants for his veteran starter, but it can also be this irrational optimism that leads many teams to overrate their own kids. There’s a strong element of base-rate neglect in all of these illusions; if you have a deep farm system with a dozen future grade-50 prospects, you know, based on all of the great, deep systems we’ve seen in the last few years (the Royals, Rangers, Padres, Red Sox, Astros), that some of those players simply won’t work out, due to injuries, undiscovered weaknesses, or just youneverknows. A general manager has to be willing to take the “outside view” of his own players, viewing them through objective lenses, rather than the biased “inside view” – which in turn requires that he have the tools available to him and advisers who are willing to tell him “no.”
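The base-rate arithmetic here is worth spelling out (my illustration, not the book’s; the 40% success rate for a grade-50 prospect is a hypothetical number, not real prospect research). Even with a generous base rate, the odds that a whole crop of twelve pans out are vanishingly small:

```python
from math import comb

# Hypothetical base rate: suppose a grade-50 prospect reaches his
# projection about 40% of the time (illustrative assumption only).
P_SUCCESS = 0.40
N_PROSPECTS = 12

# Expected number of the twelve who pan out.
expected = P_SUCCESS * N_PROSPECTS

# Probability that *all* twelve pan out.
p_all_hit = P_SUCCESS ** N_PROSPECTS

# Probability that at least half (6+) of the twelve pan out,
# summing the binomial distribution's upper tail.
p_half = sum(comb(N_PROSPECTS, k) * P_SUCCESS**k * (1 - P_SUCCESS)**(N_PROSPECTS - k)
             for k in range(6, N_PROSPECTS + 1))

print(f"expected successes out of 12: {expected:.1f}")
print(f"P(all 12 pan out):            {p_all_hit:.4%}")
print(f"P(at least 6 pan out):        {p_half:.1%}")
```

Under that assumed base rate you’d expect about five of the twelve to work out, and the chance that all twelve do is a rounding error – the “outside view” a GM needs before valuing his own system’s dozen kids as a dozen sure things.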
The passage on unintended consequences is short and buried within a chapter on complex adaptive systems, but if I could send just two pages of the book to new MLB Commissioner Rob Manfred, I’d send these. Mauboussin gives two examples, one of incompetent wildlife management in Yellowstone Park, one of the feds’ decision to let Lehman Brothers fail and thus start the 2008 credit crisis, both of which involve single actions taken on a complex system that the actors didn’t fully understand (or try to). So when MLB tries to tinker with the draft, or fold the July 2nd international free agents into the Rule 4 draft or a new one, or change free-agent compensation rules … whatever they do, this is a complex system with hundreds of actors who will react to any such rules changes in ways that can’t be foreseen without a look at the entire system.
The seven-page concluding chapter is a great checklist for anyone trying to bring this kind of “counterintuitive” thinking into an organization or just into his/her own decision-making. It’s preventative: here’s how you avoid rushing into major decisions with insufficient data or while under a destructive bias. I can see why Beane doesn’t want other GMs or executives reading this; competing against people who suffer from these illusions and prejudices is a lot easier than competing against people who think twice.