I’ve got two posts up for Insiders looking back at the 2005 draft, one redrafting the top 30 picks and one examining the sixteen first-round “misses” from that loaded class. I’ll be chatting today at 1 pm ET.
Since reading Daniel Kahneman’s book Thinking, Fast and Slow about this time last year, I’ve been exploring more titles in that subgenre, the intersection of cognitive psychology and everyday decision-making, particularly in business settings. Kahneman discusses the phenomenon of inattentional blindness, which was first demonstrated in the experiment by Daniel Simons and Christopher Chabris that you can take here. That experiment gives The Invisible Gorilla: How Our Intuitions Deceive Us its title.
(Speaking of perception, the short-lived TNT series of that name, which just ended its three-season run in March, devoted an episode called “Blindness” to two of the cognitive illusions discussed in The Invisible Gorilla, inattentional blindness and change blindness, even reproducing the experiment I linked above. It’s worth checking out when it re-airs, even with its ham-handed crime story.)
The Invisible Gorilla is one of the best books of its kind that I’ve encountered, because it has the right balance of educational material, concrete examples, and exploration of the material’s meaning and possible remedies. The authors take a hard line on the six illusions they cover, saying there’s no way to avoid them, so the solution is to think our way around them – to recognize, for example, that even though we don’t notice our inattentional blindness when we talk on the phone while driving, we’re still prey to it. The book remains instructive because forewarned is forearmed: if you know you’re going to fall for these illusions, you can take one more step back in your decision-making processes and prepare yourself for the trap.
The two “blindness” illusions make for the best stories, and are even applicable at times in baseball (how often have you been at a game, focusing on a particular player, and not realized that the pitcher had changed or another player had changed positions?), but the illusions of knowledge and confidence resonate more with the work that I do for ESPN. I’ve accepted and even embraced the fact that I will be wrong frequently on player evaluations, especially of amateur players, because that’s just inherent in the job: there’s far too much unpredictability involved in the development of individual players, so scouting relies on heuristics that will often miss on outliers like the Dustin Pedroias of the world. It’s also why, at a macro level, projection systems like ZiPS beat individual guesses on standings or overall player performances. (Projection systems can miss outliers too, like pitchers with new pitches or hitters with new swing mechanics, but that’s a different and I think more easily addressed deficiency.)
Even understanding the illusion of knowledge puts scouts in a quandary, as they’re expected to offer strong, even definitive takes on players when it would be more rational to discuss outcomes in probabilistic terms – e.g., I think Joey Bagodonuts has a 60% chance to reach the majors, a 20% chance to be an everyday shortstop, a 30% chance to end up at another position, etc. No one evaluates like that because they’re not asked to do so and they’re not trained to think like that. I’m in a similar boat: I tell readers I think a certain pitcher is a fifth starter, and if he has a few good starts in a row I’ll get some trolling comments, but when I call anyone a fifth starter I’m giving you a most likely outcome (in my opinion, which is affected by all of the above illusions) that doesn’t explicitly describe variance over shorter timeframes.
The illusion of confidence comes into play just as frequently, and to some extent it’s almost a requirement of the job. How could you offer an evaluation of a potential first-round pick or pull the trigger on a trade if you had an accurate view of your own limitations as an evaluator or executive? Would a proper system of safeguards to cover this illusion just lead to “paralysis by analysis”? I don’t know that I could ever have enough information to feel properly confident (as opposed to the illusory sense of overconfidence that the authors describe here) in deciding who to take with the first overall pick in this year’s draft; I think Houston’s predraft process last year led them to take the right guy, and they still ended up with nothing because of a sort of black swan event with Aiken’s elbow. The authors stress the need for readers to recognize that their confidence in their own abilities is often exaggerated, but taken to its logical end that seems like a persuasive argument against getting out of bed in the morning, because we’re just going to do the wrong thing. In my position, at least, I’m better off pretending I’m a slightly better evaluator of baseball talent than I actually am, because otherwise my writing would be peppered with conditionals and qualifications that would make it unreadable and probably not very helpful to those of you looking for information on the players I cover.
Simons and Chabris present a very compelling if sobering case that the human mind, while highly evolved, has some serious holes in its approach, and that we need to understand the six illusions (or failures of intuition) to make better decisions, whether it’s improving our awareness to avoid hitting a motorcyclist on the road or dismissing misplaced self-confidence in our investing acumen to make better choices with our retirement accounts. It seems applicable to just about any line of work, but reading it from the perspective of my thirteen-plus years working in baseball – perhaps now I’m subject to the illusion of independent thinking – I found it immensely valuable as a reminder of how easy it is to fall into these traps when trying to evaluate a player or a team.