I’ve got two posts up for Insiders looking back at the 2005 draft, one redrafting the top 30 picks and one examining the sixteen first-round “misses” from that loaded class. I’ll be chatting today at 1 pm ET.
Since reading Daniel Kahneman’s book Thinking, Fast and Slow about this time last year, I’ve been exploring more titles in that subgenre, the intersection of cognitive psychology and everyday decision-making, particularly in business settings. Kahneman discusses the phenomenon of inattentional blindness, which was first demonstrated in the experiment by Daniel Simons and Christopher Chabris that you can take here. That experiment gives its title to The Invisible Gorilla: How Our Intuitions Deceive Us, the book in which Simons and Chabris explore six “everyday illusions” that distort our thinking and decision-making, but the book’s scope goes well beyond inattentional blindness to expose all kinds of holes in our perception.
(Speaking of perception, the short-lived TNT series of that name, which just ended its three-season run in March, devoted an episode called “Blindness” to two of the cognitive illusions discussed in The Invisible Gorilla, inattentional blindness and change blindness, even reproducing the experiment I linked above. It’s worth checking out when it reairs, even with its ham-handed crime story.)
The Invisible Gorilla is one of the best books of its kind that I’ve encountered, because it has the right balance of educational material, concrete examples, and exploration of the material’s meaning and possible remedies. The authors take a hard line on the six illusions they cover, saying there’s no way to avoid them, so the solution is to think our way around them – to recognize, for example, that just because we don’t notice our inattentional blindness when we talk on the phone while driving doesn’t mean we aren’t prey to it. Yet the book remains instructive because forewarned is forearmed: if you know you’re going to fall for these illusions, you can take one more step back in your decision-making processes and prepare yourself for the trap.
The six illusions the authors cover are easy to understand once you hear them explained with an example. Inattentional blindness occurs when you are so focused on one task or object that you don’t notice something else happening in the background – for example, a gorilla wandering across the basketball court while you’re counting shots made by players in white. Change blindness is similar, but in this case you fail to notice a change in something or even someone because you’re focused on a different aspect of the person or image – which is how continuity errors end up in movies and escape the notice of most viewers, even when they’re somewhat glaring once pointed out. The illusion of memory revolves around our false confidence in what we remember, often to the point of being convinced that a story we heard that happened to someone else actually happened to us. The chapter covers the unreliability of eyewitness testimony, including a compelling (and awful) story of a rape victim who actively tried to remember details of her attacker’s face and still identified the wrong man when police arrested a suspect. The illusion of confidence involves overrating our own knowledge and abilities, such as the oft-cited statistic that a wide majority of American drivers consider themselves to be above-average at the task. (I’m not one of them; I dislike driving because I know I’m not good at it.) The illusion of knowledge is our mistaken belief that we know more than we do; the authors give a great test of this, pretending to be a child who keeps asking you “but why?” to show that, for example, you may think you know how a toilet works until someone actually asks you to go into detail on its operation.
The sixth illusion, the illusion of potential, seems a bit forced in the context of the first five, even though I enjoyed the authors’ attacks on pseudoscience crap like using Mozart or other classical music to raise your IQ (shocker: it’s bullshit) or the use of subliminal messages or advertising to change your thinking (the original subliminal advertising stunt in a movie theater was faked). It encapsulates the belief that we can improve our cognitive skills more quickly and easily than we actually can, or that improvements in a small, specific area result in more generalized improvements than they actually do.
The two “blindness” illusions make for the best stories, and are even applicable at times in baseball (how often have you been at a game, focusing on a particular player, and not realized that the pitcher had changed or another player had changed positions?), but the illusions of knowledge and confidence resonate more with the work that I do for ESPN. I’ve accepted and even embraced the fact that I will be wrong frequently on player evaluations, especially of amateur players, because that’s just inherent in the job: there’s far too much unpredictability involved in the development of individual players, so scouting relies on heuristics that will often miss on outliers like the Dustin Pedroias of the world. It’s also why, at a macro level, projection systems like ZiPS beat individual guesses on standings or overall player performances. (Projection systems can miss outliers too, like pitchers with new pitches or hitters with new swing mechanics, but that’s a different and I think more easily addressed deficiency.)
Even understanding the illusion of knowledge puts scouts in a quandary, as they’re expected to offer strong, even definitive takes on players when it would be more rational to discuss outcomes in probabilistic terms – e.g., I think Joey Bagodonuts has a 60% chance to reach the majors, a 20% chance to be an everyday shortstop, a 30% chance to end up at another position, etc. No one evaluates like that because they’re not asked to do so and they’re not trained to think like that. I’m in a similar boat: I tell readers I think a certain pitcher is a fifth starter, and if he has a few good starts in a row I’ll get some trolling comments, but when I call anyone a fifth starter I’m giving you a most likely outcome (in my opinion, which is affected by all of the above illusions) that doesn’t explicitly describe variance over shorter timeframes.
The illusion of confidence comes into play just as frequently, and to some extent it’s almost a requirement of the job. How could you offer an evaluation of a potential first-round pick or pull the trigger on a trade if you had an accurate view of your own limitations as an evaluator or executive? Would a proper system of safeguards against this illusion just lead to “paralysis by analysis”? I don’t know that I could ever have enough information to feel properly confident (as opposed to the illusory sense of overconfidence the authors describe) in deciding whom to take with the first overall pick in this year’s draft; I think Houston’s predraft process last year led them to take the right guy, and they still ended up with nothing because of a sort of black swan event with Aiken’s elbow. The authors stress that readers need to recognize that their confidence in their own abilities is often exaggerated, but taken to its logical end that seems like a persuasive argument against getting out of bed in the morning, because we’re just going to do the wrong thing anyway. In my position, at least, I’m better off pretending I’m a slightly better evaluator of baseball talent than I actually am, because otherwise my writing would be peppered with conditionals and qualifications that would make it unreadable and probably not very helpful to those of you looking for information on the players I cover.
Simons and Chabris present a very compelling if sobering case that the human mind, while highly evolved, has some serious holes in its approach, and that we need to understand five of the six illusions (or failures of intuition) to make better decisions, whether it’s improving our awareness to avoid hitting a motorcyclist on the road or dismissing misplaced self-confidence in our investing acumen to make better choices with our retirement accounts. It seems applicable to just about any line of work, but reading it from the perspective of my thirteen-plus years working in baseball – perhaps now I’m subject to the illusion of independent thinking – I found it immensely applicable and valuable as a reminder of how easy it is to fall into these traps when trying to evaluate a player or a team.
I remember reading about the phenomenon of inattentional blindness and how it affects drivers, particularly regarding pedestrians and cyclists. Pretty much made me not want to cycle to work. American drivers aren’t expecting bikes, therefore they don’t see them. Whereas in other countries with more cyclists, you end up with fewer accidents because the drivers are expecting to see bikes. The brain is weird.
“often miss on outliers like the Dustin Pedroias”
Odd choice. Or have you already punched his ticket to the HOF?
I missed on Pedroia. Saw a bad athlete with an unworkable swing. That’s why I chose him.
I just finished reading your chat transcript, was wondering if you could expand on what the issue was regarding the NWL and Baseball America?
Also, you touched on players at junior colleges briefly; I’m wondering why more prospects don’t go to junior colleges? It seems like a good way to improve on some skills that may need work without sacrificing three years at a D1 school.
It’s mentioned on BA’s Northwoods League top 10:
I know it’s not my battle to fight, but I was disgusted to hear about this. BA has done more to promote the Northwoods League than any national outlet, and for the league to repay that kindness in this fashion is appalling. And it does no favors for anyone who does the kind of work BA and I and others do.