Everything is Obvious.

Duncan Watts’ book Everything is Obvious *Once You Know the Answer: How Common Sense Fails Us fits well into the recent string of books explaining or demonstrating how the way we think often leads us astray. As with Thinking, Fast and Slow, by Nobel Prize winner Daniel Kahneman, Watts’ book highlights how specific cognitive biases, notably our overreliance on what we consider “common sense,” lead us to false conclusions, especially in the social sciences. These failures have clear ramifications in the business and political worlds, and carry some strong messages for journalists, who always seek to graft narratives on to facts as if the latter were inevitable outcomes.

The argument from common sense is one of the most frequently seen logical fallacies out there – X must be true because common sense says it’s true. But common sense itself is, of course, inherently limited; our common sense is the result of our individual and collective experiences, not something innate given to us by God or contained in our genes. Given the human cognitive tendency to assign explanations to every event, even those that are the result of random chance, this is a recipe for bad results, whether it’s the fawning over a CEO who had little or nothing to do with his company’s strong results or top-down policy prescriptions that lead to billions in wasted foreign aid.

Watts runs through various cognitive biases and illusions that you may have encountered in other works, although a few of them were new to me, like the Matthew Effect, by which the rich get richer and the poor get poorer. The theory argues that success breeds success, because early successes bring greater opportunities going forward. A band that has a hit album will get greater airplay for its next record, even if it isn’t as good as the first one, or is markedly inferior to an album released on the same day by an unknown artist. A good student born into privilege will have a better chance to attend a fancy-pants college, like, say, Harfurd, and will thus benefit further from having the prestigious brand name on his resume. A writer who has nearly half a million Twitter followers might find it easier to land a deal for a major publisher to produce his book, Smart Baseball, available in stores now, and that major publisher then has the contacts and resources to ensure the book is reviewed in critical publications. It could be that the book sells well because it’s a good book, but I doubt it.

Watts similarly dispenses with the ‘great man theory of history’ – and with history in general, if we’re being honest. He points out that historical accounts will always include judgments or information that was not available to the actors at the time, citing the example of a soldier wandering the battlefield in War and Peace, noticing that the realities of war look nothing like the genteel paintings of battle scenes hanging in Russian drawing rooms. He asks whether the Mona Lisa, which wasn’t regarded as the world’s greatest painting or even its most famous until it was stolen from the Louvre by an Italian nationalist before World War I, ascended to that status because of innate qualities of the painting – or whether circumstances pushed it to the top, and only after the fact did art experts argue for its supremacy based on the fact that it had already become the Mona Lisa of legend. In other words, the Mona Lisa may be great simply because it’s the Mona Lisa, and perhaps had the disgruntled employee stolen another painting, da Vinci’s masterpiece would be seen as just another painting. (His description of seeing the painting for the first time mirrored my own: It’s kind of small, and because it’s behind shatterproof glass, you can’t really get close to it.)

Without directly referring to it, Watts also perfectly describes sportswriters’ inexcusable habit of assigning huge portions of the credit for team successes to head coaches or managers rather than distributing it across the entire team or even the organization. I’ve long used the example of the 2001 Arizona Diamondbacks as a team that won the World Series in spite of the best efforts of its manager, Bob Brenly, to give the series away – repeatedly playing small ball (like bunting) in front of Luis Gonzalez, who’d hit 57 homers that year, and using Byung-Hyun Kim in save situations when it was clear he wasn’t the optimal choice. Only the superhuman efforts by Randy Johnson and That Guy managed to save the day for Arizona, and even then, it took a rare misplay by Mariano Rivera and a weakly hit single to an open spot on the field for the Yanks to lose. Yet Brenly will forever be a “World Series-winning manager,” even though there’s no evidence he did anything to make the win possible. Being present when a big success happens can change a person’s reputation for a long time, and then future successes may be ascribed to that person even if he had nothing to do with them.

Another cognitive bias Watts discusses, the Halo Effect, seems particularly relevant to my work evaluating and ranking prospects. First named by psychologist Edward Thorndike, the Halo Effect refers to our tendency to apply positive impressions of a person, group, or company to their other properties or characteristics, so we might subconsciously consider a good-looking person to be better at his/her job. For example, do first-round draft picks get greater consideration from their organizations when it comes to promotions or even major-league opportunities? Will an org give such a player more time to work out of a period of non-performance than it would give an eighth-rounder? Do some scouts rate players differently, even if it’s entirely subconscious, based on where they were drafted or how big their signing bonuses were? I don’t think I do this directly, but my rankings are based on feedback from scouts and team execs, so if their own information – including how teams internally rank their prospects – is affected by the Halo Effect, then my rankings will be too, unless I’m actively looking for it and trying to sieve it out.

Where I wish Watts had spent even more time was in describing the implications of these ideas and research for government policies, especially foreign aid, most of which would be just as productive if we flushed it all down those overpriced Pentagon toilets. Foreign aid tends to go where the donors, whether private or government, think it should go, because the recipients are poor but the donors believe they know how to fix it. In reality, this money rarely spurs any sort of real change or economic growth, because the common-sense explanation – the way to fix poverty is to send money and goods to poor people – never bothers to examine the root causes of the problem the donors want to solve, to ask the recipients what they really need, or to identify and remove obstacles (e.g., lack of infrastructure) that might take more time and effort to fix but prevent the aid from doing any good. Sending a boat full of food to a country in the grip of a famine only makes sense if you have a way to get the food to the starving people, but if the roads are bad, dangerous, or simply don’t exist, then that food will sit in the harbor until it rots or some bureaucrat sells it.

Everything Is Obvious is aimed at a more general audience than Thinking, Fast and Slow, as its text is a little less dense and it contains fewer and shorter descriptions of research experiments. Watts refers to Kahneman and his late research partner Amos Tversky a few times, as well as other researchers in the field, so it seems to me like this book is meant as another building block on the foundation of Kahneman’s work. I think it applies to all kinds of areas of our lives, even just as a way to think about your own thinking and to try to help yourself avoid pitfalls in your financial planning or other decisions, but it’s especially apt for folks like me who write for a living and should watch for our human tendency to try to ascribe causes post hoc to events that may have come about as much due to chance as any deliberate factors.

Stick to baseball, 3/4/17.

No new Insider content this week, although I believe I’ll have a new piece up on Tuesday, assuming all goes to plan. I did hold a Klawchat on Thursday.

My latest boardgame review for Paste covers Mole Rats in Space, a cooperative game for kids from the designer of Pandemic and Forbidden Desert. It’s pretty fantastic, and I think if you play this you’ll never have to see Chutes and Ladders again.

You can preorder my upcoming book, Smart Baseball, on Amazon, or from other sites via the HarperCollins page for the book. The book now has two positive reviews out, one from Kirkus Reviews and one from Publishers Weekly.

Also, please sign up for my more-or-less weekly email newsletter.

And now, the links…


I’m a bit surprised that Philip Tetlock’s 2015 book Superforecasting: The Art and Science of Prediction hasn’t been a bigger phenomenon along the lines of Thinking, Fast and Slow and its offshoots, because Tetlock’s research, from the decades-long Good Judgment Project, goes hand in hand with Daniel Kahneman’s book and research into cognitive biases and illusions. Where Kahneman’s views tend toward the macro, Tetlock focuses on the micro: His research looks at people who are better at predicting specific, short-term answers to questions like “Will the Syrian government fall in the next six months?” Tetlock’s main thesis is that such people do exist – people who can consistently produce better forecasts than others, even soi-disant “experts,” can produce – and that we can learn to do the same by following their best practices.

Tetlock’s superforecasters have a handful of personality traits in common, but they’re not terribly unusual, and if you’re here, there’s a good chance you have them. These folks are intellectually curious and comfortable with math. They’re willing to admit mistakes, driven to avoid repeating them, and rigorous in their process. But they’re not necessarily better educated than the rest of us, and they typically lack subject-matter expertise in most of the areas covered by the forecasting project. What Tetlock and co-author Dan Gardner truly want to get across is that any of us, whether for ourselves or for our businesses, can achieve marginal but tangible gains in our ability to predict future events.

Perhaps the biggest takeaway from Superforecasting is the need to get away from binary forecasting – that is, blanket statements like “Syria’s government will fall within the year” or “Chris Sale will not be a major-league starting pitcher.” Every forecast needs a probability and a timeframe, for accountability – you can’t evaluate a forecaster’s performance if he avoids specifics or deals in terms like “might” or “somewhat” – and for the forecaster him/herself to improve the process.
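This demand for specifics is what makes scoring possible at all: the Good Judgment Project graded its forecasters with Brier scores, which require a stated probability and a resolved yes/no outcome. A minimal sketch in Python (the forecasts and outcomes here are invented for illustration; the original Brier formulation sums over both categories and runs from 0 to 2, while this sketch uses the common halved form that runs from 0 to 1):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.

    0.0 is perfect, 0.25 is what you'd get by always hedging at 50%,
    and 1.0 is being confidently wrong every single time.
    """
    if len(forecasts) != len(outcomes):
        raise ValueError("need one resolved outcome per forecast")
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who says "80% chance the government falls within six months"
# gets scored once the window closes: outcome 1 if it fell, 0 if it didn't.
print(f"{brier_score([0.8, 0.2, 0.5], [1, 0, 1]):.2f}")  # prints 0.11
```

A forecaster who says “might” or “somewhat” can’t be scored this way, which is exactly the accountability problem Tetlock is pointing at.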

Within that mandate for clearer predictions that allow for post hoc evaluation comes the need to learn to ask the right questions. Tetlock reaches two conclusions from his research, one for the forecasters, one for the people who might employ them. Forecasters have to walk a fine line between asking the right questions and the wrong ones: One typical cognitive bias of humans is to substitute a question that is too difficult to answer with a similar question that is easier but doesn’t get at the issue at hand. (Within this is the human reluctance to provide the answer that Tetlock calls the hardest three words for anyone to say: “I don’t know.”) Managers of forecasters or analytics departments, on the other hand, must learn the difference between subjects for which analysts can provide forecasts and those for which they can’t. Many questions are simply too big or vague to answer with probabilistic predictions, so either the manager(s) must provide more specific questions, or the forecaster(s) must be able to manage upwards by operationalizing those questions, turning them into questions that can be answered with a forecast of when, how much, and at what odds.

Tetlock only mentions baseball in passing a few times, but you can see how these precepts would apply to the work that should come out of a baseball analytics department. I think by now every team is generating quantitative player forecasts beyond the generalities of traditional scouting reports. Nate Silver was the first analyst I know of to publicize the idea of attaching probabilities to these forecasts – here’s the 50th percentile forecast, the 10th, the 90th, and so on. More useful to the GM trying to decide whether to acquire player A or player B would be the probability that a player’s performance over the specified period will meet a specific threshold: There is a 63% chance that Joey Bagodonuts will produce at least 6 WAR of value over the next two years. You can work with a forecast like that – it has a specific value and timeframe with specific odds, so the GM can price a contract offer to Mr. Bagodonuts’ agent accordingly.
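A forecast like that is typically read off a distribution of simulated outcomes rather than a single point projection. A toy sketch (Joey Bagodonuts is the fictional player from above; the mean, the spread, and the normality assumption are all invented for illustration, and real projection systems are far more sophisticated than a single Gaussian):

```python
import random

random.seed(17)

# Simulate the player's two-year WAR total as draws from an assumed
# distribution, then read percentiles and threshold odds off the sample.
sims = sorted(random.gauss(6.8, 2.0) for _ in range(100_000))

def percentile(sorted_vals, p):
    """Nearest-rank percentile from an already-sorted sample."""
    idx = min(len(sorted_vals) - 1, int(p / 100 * len(sorted_vals)))
    return sorted_vals[idx]

p10, p50, p90 = (percentile(sims, p) for p in (10, 50, 90))
odds = sum(v >= 6.0 for v in sims) / len(sims)

print(f"10th/50th/90th percentile: {p10:.1f} / {p50:.1f} / {p90:.1f} WAR")
print(f"P(at least 6 WAR over two years): {odds:.0%}")
```

The GM’s question – is that chance at 6 WAR worth this price? – then becomes arithmetic on the same sample, which is what makes a probabilistic forecast actionable in a way a bare ranking isn’t.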

Could you bring this into the traditional scouting realm? I think you could, carefully. I do try to put some probabilities around my statements on player futures, more than I did in the past, certainly, but I also recognize I could never forecast player stat lines as well as a well-built model could. (Many teams fold scouting reports into their forecasting models anyway.) I can say, however, that I think there’s a 40% chance of a pitcher remaining a starter, or a 25% chance that, if player X gets 500 at bats this season, he’ll hit at least 25 home runs. I wouldn’t go out and pay someone $15 million based on my comments alone, but I hope the practice will accomplish two things: force me to think harder before making any extreme statements on potential player outcomes, and furnish those of you who do use this information (such as in fantasy baseball) with value beyond a mere ranking or a statement of a player’s potential ceiling (which might really be his 90th or 95th percentile outcome).

I also want to mention another book in this vein that I enjoyed but never wrote up – Dan Ariely’s Predictably Irrational: The Hidden Forces that Shape Our Decisions, another entertaining look at cognitive illusions and biases, especially those that affect the way we value transactions involving money – including those that involve no money at all, because we’re getting or giving something for free. As in Kahneman’s book, Ariely explains that by and large you can’t avoid these brain flaws; you learn they exist and then learn to compensate for them, but as long as you’re human, they’re not going away.

Next up: Paul Theroux’s travelogue The Last Train to Zona Verde.

Stick to baseball, 7/9/16.

My annual top 25 MLB players under age 25 ranking went up this week for Insiders, and please read the intro while you’re there. I also wrote a non-Insider All-Star roster reaction piece, covering five glaring snubs and five guys who made it but shouldn’t have. I also held my usual Klawchat on Thursday.

My latest boardgame review for Paste covers the reissue of the Reiner Knizia game Ra.

Sign up for my newsletter! You’ll get occasional emails from me with links to my content and stray thoughts that didn’t fit anywhere else.

And now, the links…