Stick to baseball, 10/7/23.

I’ve had one post up for subscribers to the Athletic since the last roundup, with my hypothetical postseason awards ballots for 2023. I do have another story filed for Sunday, so keep an eye skinned for that.

Over at Paste, I reviewed Votes for Women, a (mostly) two-player, asymmetrical game about the fight for women’s suffrage. It’s fantastic, and I also love that this review went up the week that Glynis Johns turned 100.

On the Keith Law Show this week, my guest was MLB’s Sarah Langs, talking about the season that was, who she would vote for in the various awards, and what excited her about this year’s playoff teams. You can listen and subscribe via iTunes, Spotify, Stitcher, Amazon, or wherever you get your podcasts.

And now, the links…

Mistakes Were Made (But Not by Me).

Mistakes Were Made (But Not by Me) is the story of cognitive dissonance, from its origins in the 1950s – one of the authors worked with Dr. Leon Festinger, the man who coined the term – to the modern day, when we routinely hear politicians, police officers, and sportsball figures employ it to avoid blame for their errors. What Dr. Carol Tavris and Dr. Elliot Aronson, the authors of the book, emphasize in Mistakes Were Made, however, is that this is not mere fecklessness, or sociopathy, or evil, but a natural defense mechanism in our brains that protects our sense of self.

Cognitive dissonance refers to the conflict that arises in our brains when an established belief runs into contradictory information. We have a choice: admit our beliefs were mistaken and conform them to the new information, or explain away the new information by dismissing it or interpreting it more favorably (and less accurately), so that our preconceived notions remain intact. You can see this playing out right now on social media, where anti-vaxxers and COVID denialists refuse to accept the copious evidence undermining their views, claiming that any contradictory research came from “Pharma shills” or appeared in unreliable journals (like JAMA or BMJ, you know, sketchy ones), or offering specious objections, like the possible trollbot account claiming a sample size of 2,300 was too small.

The term goes back to the 1950s, however, when a deranged Chicago housewife named Dorothy Martin claimed she’d been communicating with an alien race, and a bunch of other morons followed her, in some cases selling their worldly possessions, because the Earth was going to be destroyed and the aliens were coming to pick them up and bring them to … I don’t know where, the fifth dimension or something. Known as the Seekers, they were inevitably disappointed when the aliens didn’t show. The crazy woman at the head of the cult then claimed that the aliens had changed their minds, and that her followers had somehow saved the planet after all.

What interested Festinger and his colleagues was how the adherents responded to the obvious disconfirmation of their beliefs. The aliens didn’t come, because there were no aliens. Yet many of the believers still believed, despite the absolute failure of the prophecy – giving Festinger et al. the name of their publication on the aftermath, When Prophecy Fails. The ways in which these people contorted their thinking to avoid the reality that they’d just fallen for a giant scam, giving up their wealth, their jobs, and sometimes even family connections to chase this illusion, opened up a new field of study for psychologists.

Tavris and Aronson take this concept and pull it forward into modern contexts so we can identify cognitive dissonance in ourselves and in others, and then figure out what to do about it when it rears its ugly head. They give many examples from politicians, such as the members of the Bush Administration who said it wasn’t torture if we did it – a line of argument that President Obama did not reject when he could have – even though we were torturing people at Guantanamo Bay, and Abu Ghraib, and other so-called “black sites.” They also show how cognitive dissonance works in more commonplace contexts, such as how it can affect married couples’ abilities to solve conflicts between them – how we respond to issues big and small in our marriages (or other long-term relationships) can determine whether these relationships endure, but we may be stymied by our minds’ need to preserve our senses of self. We aren’t bad people, we just made mistakes – or mistakes were made, by someone – and it’s easier to remain believers in our inherent goodness if we deny the mistakes, or ascribe them to an external cause. (You can take this to the extreme, where abusers say that their victims “made” them hit them.)

There are two chapters here that I found especially damning, and very frustrating to read because they underscore how insoluble these problems might be. One looks at wrongful convictions, and how prosecutors and police officers refuse to admit they got the wrong guy even when DNA evidence proves it. The prosecutors and police who put the Central Park Five in prison still insisted those five innocent men were guilty even after someone else admitted he was the sole culprit. The other troubling chapter looks at the awful history of repressed memory therapy, which is bullshit – there are no “repressed memories,” so the whole idea is based on a lie. Memories can be altered by suggestion, however, and we have substantial experimental research showing how easily you can implant a memory into someone’s mind, and have them believe it was real. Yet therapists pushed this nonsense extensively in the 1980s, leading to the day care sex abuse scares (which put many innocent people in jail, sometimes for decades), and some still push it today. I just saw a tweet from someone I don’t know who said he was dealing with the trauma of learning he’d been sexually abused as a child, memories he had repressed and only learned about through therapy. It’s nonsense, and now his life – and probably that of at least one family member – will be destroyed by a possibly well-meaning but definitely wrong therapist. Tavris and Aronson provide numerous examples, often from cases well-covered in the media, of therapists insisting that their “discoveries” were correct, or displaying open hostility to evidence-based methods and even threatening scientists whose research showed that repressed memories aren’t real.

I see this stuff play out pretty much any time I say something negative about a team. I pointed out on a podcast last week that the Mets have overlooked numerous qualified candidates of color, in apparent violation of baseball’s “Selig rule,” while reaching well beyond normal circles and apparently targeting less qualified candidates. Some Met fans responded with bitter acknowledgement, but many attacked me, claiming I couldn’t possibly know what I know (as if, say, I couldn’t just call or text a reported candidate to see if he’d been contacted), or otherwise defending the Mets’ bizarre behavior. Many pointed out that the Mets tried to interview the Yankees’ Jean Afterman, yet she has made it clear for years that she has no interest in a GM job, which makes this request – if it happened at all – eyewash, a way to appear to comply with the Selig rule’s letter rather than its intent. Allowing cognitive dissonance to drive an irrational defense of yourself, or your family, or maybe even your company is bad enough, but allowing it to make you an irrational defender of a sportsball team in which you have no stake other than your fandom? I might buy a thousand copies of Craig Calcaterra’s new book and just hand it out at random.

The authors updated Mistakes Were Made in 2016, in a third edition that includes a new prologue and updates many parts of the text, with references to more recent events, like the murders of Tamir Rice and Eric Garner, so that the text doesn’t feel as dated with its extensive look at the errors that led us into the Iraq War. I also appreciated the short section on Andrew Wakefield and how his paper has created gravitational waves of cognitive dissonance that we will probably face until our species drives itself extinct. I couldn’t help but wonder, however, how the authors might feel now about Michael Shermer, who appears in a story about people who believe they’ve been abducted by aliens (he had such an experience, but knew it was the result of a bout of sleep paralysis) and who provides a quote for the back of the book … but who was accused of sexual harassment and worse before this last edition was published. Did cognitive dissonance lead them to dismiss the allegations (from multiple women) and leave the story and quote in place? The authors are human, too, and certainly as prone to experiencing cognitive dissonance as anyone else is. Perhaps it only strengthens the arguments in this short and easy-to-read book. Mistakes Were Made should be handed to every high school student in the country, at least until we ban books from schools entirely.

Next up: David Mitchell’s The Thousand Autumns of Jacob de Zoet.

Biased.

Dr. Jennifer Eberhardt is a social psychologist and professor at Stanford University who received a MacArthur Foundation “Genius” Grant in 2014 for her work on implicit bias and how stereotypic associations on race have substantial consequences when they intersect with crime. Her first book, Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do, came out in 2019 and explains much of her work on the topic with concrete and often very moving examples of such bias occurring in the real world – often in Eberhardt’s own life – when Black Americans encounter the police.

The heart of Biased comes from Eberhardt’s work on racial bias and crime, and many of the stories that she uses to illustrate conclusions from broader research efforts involve the murders of unarmed Black men by police. One chapter starts with the killing of Terence Crutcher, who was shot by a panicked white police officer, Betty Shelby, who was, of course, acquitted of all charges in connection with her actions. (She later said that she was “sorry he lost his life,” as if she wasn’t involved in that somehow.) Crutcher’s twin sister, Tiffany, has become a prominent activist focusing on criminal justice reform and raising awareness of the role white supremacy plays in endangering Black lives.

Eberhardt uses Crutcher’s story and her words to frame discussions of how implicit bias – the kind of bias that happens beneath our conscious thought process – leads to outcomes like Shelby killing Terence Crutcher. We can all recognize the kind of bias that uses racial slurs, or explicitly excludes some group, or traffics in open stereotypes, but implicit bias can have consequences every bit as significant, and is more insidious because even well-intentioned people can fall prey to it. Multiple studies have found, for example, that white subjects have subconscious associations between Black people and various negative character traits – and some Black subjects did as well, which indicates that these are societal messages that everyone receives, through the news, entertainment, even at school. When police officers have those implicit biases, they might be more likely to assume that a Black man holding a cell phone is actually holding a gun when they wouldn’t make the same assumption with a white man. This becomes a failure of officer training, not a matter of all cops who shoot Black men being overtly racist, while also drawing another line between those who say Black Lives Matter and those who counter that All or Blue or Fuchsia Lives Matter instead.

No other arena has the same stakes as policing and officer-involved shootings, but implicit bias also has enormous consequences in areas like education, hiring, and the housing market. Eberhardt runs through numerous studies showing implicit but unmistakable bias in the employment sphere, such as when test candidates with identical resumes but different names, one of whom bears a name that might imply the candidate is Black, receive calls back at vastly different rates. Implicit bias can explain why we still see evidence of redlining even when the explicit practice – denying the applications of nonwhite renters, or the offers of nonwhite home buyers, to keep white neighborhoods white – has been outlawed since the Fair Housing Act was passed in 1968.

Eberhardt also speaks to Bernice Donald, a Black woman who is now a federal judge but who experienced discrimination in education firsthand as one of the first Black students in DeSoto County, Mississippi, to attend her local whites-only high school, where she was ignored by some white teachers, singled out by faculty and students alike, and denied opportunities for advancement, including college scholarships she had earned through her academic performance. The implicit biases we see today affect not just students’ grades, but how students of different races are disciplined, and how severe such discipline is. Eberhardt doesn’t mention the school-to-prison pipeline, but the research she cites here shows how that pipeline can exist and the role that implicit bias plays in filling it with Black students.

Some of the studies Eberhardt describes in Biased will be familiar if you’ve read any similar books, such as Claude Steele’s Whistling Vivaldi or Banaji & Greenwald’s Blindspot, that cover this ground, but Eberhardt’s look is newer, more comprehensive, and punctuated by deeply personal anecdotes, including a few of her own. While she was a graduate student at Harvard, on the eve of commencement, she and her roommate were pulled over by a Boston police officer for a minor equipment violation, harassed, injured, and brought to the station, where a Dean from their department had to come vouch for their release. She eventually had to go to court, where she was acquitted of all charges – which included a claim that she had injured the officer, a claim the judge ridiculed, according to Eberhardt. Would that have happened if she were white? Would it surprise you to hear that the cop who hassled her and her friend was Black? And what, ultimately, does this, and research showing that Black motorists are far more likely to be stopped for the most trivial of causes and more likely to end up dead when stopped by police, tell us about solutions to the problem of implicit bias in policing? The answers are not easy, because implicit bias is so hard to root out and often isn’t evident until we have enough data to show it’s affecting outcomes. We won’t get to that point if we can’t agree that the problem exists in the first place.

Next up: I just finished Ishmael Reed’s Mumbo Jumbo last night and am reading Graham Swift’s new novel Here We Are.

Mindware.

I appeared on the Inquiring Minds podcast this spring to promote my book The Inside Game, and co-host Adam Bristol recommended a book to me after the show, Dr. Richard Nisbett’s Mindware: Tools for Smart Thinking. Dr. Nisbett is a professor of social psychology at the University of Michigan and co-directs the school’s Culture and Cognition program, and a good portion of Mindware focuses on how our environment affects our cognitive processes, especially the unconscious mind, as he gives advice on how to improve our decision-making processes and better understand the various ways our minds work.

Nisbett starts out the book with an obvious but perhaps barely understood point: Our understanding of the world around us is a matter of construal, a combination of inferences and interpretations, because of the sheer volume of information and stimuli coming into our brains at all times, and how much of what we see or hear is indirect. (If you want to get particularly technical, even what we see directly is still a matter of interpretation; even something as seemingly concrete as color is actually a sensation created in the brain, an interpolation of different wavelengths of light that also renders colors more stable in our minds than they would be if we were just relying on levels of illumination.) So when we run into biases or illusions that affect our inferences and interpretations, we will proceed on the basis of unreliable information.

He then breaks down three major ways in which we can understand how our minds process all of these stimuli. One is that our environments affect how we think and how we behave far more than we realize they do. Another is that our unconscious minds do far more work than we acknowledge, including processing environmental inputs that we may not actively register. And the third is that we see and interpret the world through schemas, frameworks or sets of heuristics that we use to make sense of the world and simplify the torrent of information coming at us.

From that outline, Nisbett marches through a series of cognitive biases and errors, many of which overlap with those I covered in The Inside Game, but explains more of how cognition is affected by external stimuli, including geography (the subject of one of his previous books), culture, and “preperception” – how the subconscious mind gets you started before you actively begin to perceive things. This last point is one of the book’s most powerful observations: We don’t know why we know what we know, and we can’t always account for our motives and reasons, even if we’re asked to explain them directly. Subjects of experiments will deny that their choices or responses were influenced by stimuli that seem dead-obvious to outside observers. They can be biased by anchors that have nothing to do with the topic of the questions, and even show effects after the ostensible study itself – for example, subjects exposed to more words related to aging will walk more slowly down the hall out of the study room than those exposed to words related to youth or vitality. It seems absurd, but multiple studies have shown effects like these, as with the study I mentioned in my book about students’ guesses on quantities being biased by the mere act of writing down the last two digits of their social security numbers. We would like to think that our brains don’t work that way, but they do.

Nisbett is a psychologist but crosses comfortably into economics territory, including arguments in favor of using cost/benefit analyses any time a decision has significant costs and the process allows you the time to perform such an analysis. He even gets into the thorny question of how much a life is worth, which most people do not want to consider but which policymakers have to consider when making major decisions on, say, how much and for how long to shut down the economy in the face of a global pandemic. There is some death rate from COVID-19 that we would – and should – accept, and to figure that out, we have to consider what values to put on the lives that might be lost at each level of response, and then compare that to economic benefits of remaining open or additional costs of overloaded hospitals. “Zero deaths” is the compassionate answer, but it isn’t the rational one; if zero deaths in a pandemic were even possible, it would be prohibitively expensive in time and money, so much so that it would cause suffering (and possibly deaths) from other causes.

In the conclusion to Mindware, Dr. Nisbett says that humans are “profligate causal theorists,” and while that may not quite roll off the tongue, it’s a pithy summary of how our minds work. We are free and easy when it comes to finding patterns and ascribing causes to outcomes, but far less thorough when it comes to testing these hypotheses, or even trying to make these hypotheses verifiable or falsifiable. It’s the difference between science and pseudoscience, and between a good decision-making process and a dubious one. (You can still make a good decision with a bad process!) This really is a great book if you like the kind of books that led me to write The Inside Game, or just want to learn more about how your brain deals with the huge volume of information it gets each day so that you can make better decisions in your everyday life.

Next up: I just finished Ann Patchett’s The Dutch House this weekend and am about halfway through Patrick Radden Keefe’s Say Nothing: A True Story of Murder and Memory in Northern Ireland.

Whistling Vivaldi.

In this era of increased awareness of cognitive biases and how they affect human behavior, stereotype threat seems to be lagging behind similar phenomena in its prevalence in policy discussions. Stereotype threat refers to how common stereotypes about demographic groups can then affect how members of those groups perform in tasks that are covered by the stereotypes. For example, women fare worse on math tests than men because there’s a pervasive stereotype about women being inferior at math. African-American students perform worse on tests that purport to measure ‘intelligence’ for a similar reason. The effect is real, with about two decades of research testifying to its existence, although there’s still disagreement over how strong the effect is in the real world (versus structured experiments).

Stanford psychology professor Claude Steele, a former provost at Columbia University and himself African-American, wrote a highly personal account of what we know about stereotype threat and its presence in and effects on higher education in the United States in Whistling Vivaldi: How Stereotypes Affect Us and What We Can Do. Steele blends personal anecdotes – his own and those of others – with the research, mostly in lab settings, that we have to date on stereotype threat, which, again, has largely focused on demonstrating its existence and the pernicious ways in which it can affect not just performance on tests but decisions by students on what to study or even where to do so. The resulting book, which runs a scant 200 pages, is less academic in nature than Thinking Fast and Slow and its ilk, and thus a little less intellectually satisfying, but it’s also an easier read and I think the sort of book anyone can read regardless of their backgrounds in psychology or even in reading other books on human behavior.

The best-known proofs of stereotype threat, which Steele recounts throughout the first two thirds of the book, come from experiments where two groups are asked to take a specific test that encompasses a stereotype of one of the groups – for example, men and women are given a math test, especially one where they are told the test itself measures their math skills. In one iteration, the test-takers are told beforehand that women tend to fare worse than men on tests of mathematical abilities; in another iteration, they’re told no such thing, or something irrelevant. Whether it’s women and math, blacks and intelligence, or another stereotype, the results are consistent – the ‘threatened’ group performs worse than expected (based on predetermined criteria like grades in math classes or scores on standardized math tests) when they’re reminded of the stereotype before the test. Steele recounts several such experiments, even some that don’t involve academic goals (e.g., whites underperforming in tests of athleticism), and shows that not only do the threatened groups perform worse, they often perform less – answering fewer questions or avoiding certain tasks.

Worse for our academic world is that stereotype threat appears to lead to increased segregation in the classroom and deters threatened groups from pursuing classes or majors that fall into the stereotyped category. If stereotype threat is directly* or indirectly convincing women not to choose STEM majors, or steering African-American students away from more academically rigorous majors or schools, then we need policy changes to try to address the threat and either throttle it before it starts or counteract it once it has begun. And Steele argues, with evidence, that stereotype threat begins much earlier than most people aware of the phenomenon would guess. Stereotype threat can be found, again through experiment, in kids as young as six years old. Marge and Homer may not have taken Lisa’s concerns about Malibu Stacy seriously, but she was more right than even the Simpsons writers of the time (who were probably almost all white men) realized.

* For example, do guidance counselors or academic advisors tell female students not to major in math or engineering? Do they discourage black students from applying to the best possible colleges to which they might gain admission?

To keep Whistling Vivaldi readable, Steele intersperses his recounting of academic studies with personal anecdotes of his own or of students and professors he’s met throughout his academic career. The anecdote of the title is almost painful to read – it’s from a young black man who noticed how differently white pedestrians would treat him on the street, avoiding eye contact or even crossing to the other side, so he adopted certain behaviors, not entirely consciously, to make himself seem less threatening. One of them was whistling classical music, like that of Vivaldi. Other stories demonstrate subtle changes in behavior in class that also result from stereotype threat, and show how students in threatened groups perform better in environments where the threat is diminished by policies, positive environments, or sheer numbers.

Stereotype threat is a major and almost entirely unaddressed policy issue for teachers, principals, and local politicians, at the very least. Avoiding our own use, even in jest, of such stereotypes can help start the process of ending how they affect the next generation of students, but the findings Steele recounts in Whistling Vivaldi call for much broader action. It’s essential reading for anyone who works in or wishes to work in education at any level.

Next up: Michael Ondaatje’s The English Patient.

The Hidden Brain.

I’ve become a huge fan of the NPR podcast The Hidden Brain, hosted by Shankar Vedantam, a journalist whose 2010 book The Hidden Brain: How Our Unconscious Minds Elect Presidents, Control Markets, Wage Wars, and Save Our Lives spawned the podcast and a regular radio program on NPR. Covering how our subconscious mind influences our decisions in ways that traditional economists would call ‘irrational’ but modern behavioral economists recognize as typical human behavior, Vedantam’s book is a great introduction to this increasingly important way of understanding how people act and think.

Vedantam walks the reader through these theories via concrete examples, much as he now does in the podcast – this week’s episode, “Why Now?” about the #MeToo movement and our society’s sudden decision to pay attention to these women, is among its best. Some of the stories in the book are shocking and/or hard to believe, but they’re true and serve to emphasize these seemingly counterintuitive concepts. He discusses a rape victim who had focused on remembering details about her attacker, and was 100% sure she’d correctly identified the man who raped her – but thirteen years after the man she identified was convicted of the crime, a DNA test showed she was wrong, and she then discovered a specific detail she’d overlooked at the time of the investigation because no one asked her the ‘right’ question. This is a conscientious, intelligent woman who was certain of her memories, and she still made a mistake.

Another example that particularly stuck with me was how people react in the face of imminent danger or catastrophe. Just before the 2004 Indian Ocean tsunami, the sea receded from coastal areas, a typical feature before a tidal wave hits. Vedantam cites reports from multiple areas where people living in those regions “gathered to discuss the phenomenon” and “asked one another what was happening,” instead of running like hell for high ground. Similar reports came from the World Trade Center after 9/11. People in those instances didn’t rely on their instincts to flee, but sought confirmation from others nearby – if you don’t run, maybe I don’t need to run either. In this case, he points to the evolutionary history of man, where staying with the group was typically the safe move in the face of danger; if running were the dominant, successful strategy for survival, that would still be our instinct today. It even explains why multiple bystanders did not help Deletha Word, a woman who was nearly beaten to death in a road-rage incident on the packed Belle Isle bridge in Detroit in 1995 – if no one else helped her, why should I?

Vedantam’s writing and speaking style offers a perfect blend of colloquial storytelling and evidence-based arguments. He interviews transgender people who describe the changes in attitudes they encountered before and after their outward appearances changed. (One transgender man says, “I can even complete a whole sentence [post-transition] without being interrupted by a man.”) And he looks at data on racial disparities in sentencing convicted criminals to death – including data that show darker-skinned blacks are more likely to receive a death sentence than lighter-skinned blacks.

The last chapter of The Hidden Brain came up last week on Twitter, where I retweeted a link to a story in the New York Times from the wife of a former NFL player, describing her husband’s apparent symptoms of serious brain trauma. One slightly bizarre response I received was that this was an “appeal to emotion” argument – I wasn’t arguing anything, just sharing a story I thought was well-written and worth reading – because it was a single datum rather than an extensive study. Vedantam points out, with examples and some research, that the human brain does much better at understanding the suffering of one than at understanding the suffering of many. He tells how the story of a dog named Hokget, lost in the Pacific on an abandoned ship, spurred people to donate thousands of dollars, with money coming from 39 states and four countries. (An excerpt from this chapter is still online on The Week‘s site.) So why were people so quick to send money to save one dog when they’re so much less likely to send money when they hear of mass suffering, like genocide or disaster victims in Asia or Africa? Because, Vedantam argues, we process the suffering of an individual in a more “visceral” sense than we do the more abstract suffering of many – and he cites experimental data from psychologist Paul Slovic to back it up.

The Hidden Brain could have been twice as long and I would still have devoured it; Vedantam’s writing is much like his podcast narration, breezy yet never dumbed down, thoroughly explanatory without becoming dense or patronizing. If you enjoy books in the Thinking Fast and Slow or Everybody Lies vein, you’ll enjoy both this title and the podcast, which has become one of my go-to listens to power me through mindless chores around the house.

Everything is Obvious.

Duncan Watts’ book Everything is Obvious *Once You Know the Answer: How Common Sense Fails Us fits in well with the recent string of books explaining or demonstrating how the way we think often leads us astray. As with Thinking Fast and Slow, by Nobel Prize winner Daniel Kahneman, Watts’ book highlights how some specific cognitive biases, notably our overreliance on what we consider “common sense,” lead us to false conclusions, especially in the social sciences, with clear ramifications for the business and political worlds as well as some strong messages for journalists who always seek to graft narratives onto facts as if the latter were inevitable outcomes.

The argument from common sense is one of the most frequently seen logical fallacies out there – X must be true because common sense says it’s true. But common sense itself is, of course, inherently limited; our common sense is the result of our individual and collective experiences, not something innate given to us by God or contained in our genes. Given the human cognitive tendency to assign explanations to every event, even those that are the result of random chance, this is a recipe for bad results, whether it’s the fawning over a CEO who had little or nothing to do with his company’s strong results or top-down policy prescriptions that lead to billions in wasted foreign aid.

Watts runs through various cognitive biases and illusions that you may have encountered in other works, although a few of them were new to me, like the Matthew Effect, by which the rich get richer and the poor get poorer. The Matthew Effect holds that success breeds success, because early success brings greater opportunities going forward. A band that has a hit album will get greater airplay for its next record, even if it isn’t as good as the first one, or is markedly inferior to an album released on the same day by an unknown artist. A good student born into privilege will have a better chance to attend a fancy-pants college, like, say, Harfurd, and thus benefits further from having the prestigious brand name on his resume. A writer who has nearly half a million Twitter followers might find it easier to land a deal for a major publisher to produce his book, Smart Baseball, available in stores now, and that major publisher then has the contacts and resources to ensure the book is reviewed in critical publications. It could be that the book sells well because it’s a good book, but I doubt it.
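To make the dynamic concrete, here is a minimal simulation sketch of a rich-get-richer process (my own illustration in Python, not anything from Watts’ book, with all names and numbers invented): each new listener picks a song with probability proportional to its current play count plus a small constant, so early random leads compound.

```python
import random

def cumulative_advantage(n_songs=10, n_listeners=10_000, seed_weight=1.0):
    """Rich-get-richer sketch: each listener picks a song with probability
    proportional to (plays so far + seed_weight), so early luck compounds."""
    plays = [0] * n_songs
    for _ in range(n_listeners):
        weights = [count + seed_weight for count in plays]
        pick = random.choices(range(n_songs), weights=weights)[0]
        plays[pick] += 1
    return sorted(plays, reverse=True)

# Every "song" is identical; run it a few times and a different one
# becomes the runaway hit each time, purely from early random leads.
print(cumulative_advantage())
```

The point isn’t the specific numbers, just that identical “songs” end up with wildly unequal outcomes once early success feeds later success.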

Watts similarly dispenses with the ‘great man theory of history’ – and with history in general, if we’re being honest. He points out that historical accounts will always include judgments or information that was not available to actors at the time of these events, citing the example of a soldier wandering around the battlefield in War and Peace, noticing that the realities of war look nothing like the genteel paintings of battle scenes hanging in Russian drawing rooms. He asks if the Mona Lisa, which wasn’t regarded as the world’s greatest painting or even its most famous until it was stolen from the Louvre by an Italian nationalist before World War I, ascended to that status because of innate qualities of the painting – or if circumstances pushed it to the top, and only after the fact do art experts argue for its supremacy based on the fact that it’s already become the Mona Lisa of legend. In other words, the Mona Lisa may be great simply because it’s the Mona Lisa, and perhaps had the disgruntled employee stolen another painting, da Vinci’s masterpiece would be seen as just another painting. (His description of seeing the painting for the first time mirrored my own: It’s kind of small, and because it’s behind shatterproof glass, you can’t really get close to it.)

Without directly referring to it, Watts also perfectly describes the inexcusable habit of sportswriters to assign huge portions of the credit for team successes to head coaches or managers rather than distributing the credit across the entire team or even the organization. I’ve long used the example of the 2001 Arizona Diamondbacks as a team that won the World Series in spite of the best efforts of its manager, Bob Brenly, to give the series away – repeatedly playing small ball (like bunting) in front of Luis Gonzalez, who’d hit 57 homers that year, and using Byung-Hyun Kim in save situations when it was clear he wasn’t the optimal choice. Only the superhuman efforts by Randy Johnson and That Guy managed to save the day for Arizona, and even then, it took a rare misplay by Mariano Rivera and a weakly hit single to an open spot on the field for the Yanks to lose. Yet Brenly will forever be a “World Series-winning manager,” even though there’s no evidence he did anything to make the win possible. Being present when a big success happens can change a person’s reputation for a long time, and then future successes may be ascribed to that person even if he had nothing to do with them.

Another cognitive bias Watts discusses, the Halo Effect, seems particularly relevant to my work evaluating and ranking prospects. First named by psychologist Edward Thorndike, the Halo Effect refers to our tendency to apply positive impressions of a person, group, or company to their other properties or characteristics, so we might subconsciously consider a good-looking person to be better at his/her job. For example, do first-round draft picks get greater consideration from their organizations when it comes to promotions or even major-league opportunities? Will an org give such a player more time to work out of a period of non-performance than they’d give an eighth-rounder? Do some scouts rate players differently, even if it’s entirely subconscious, based on where they were drafted or how big their signing bonuses were? I don’t think I do this directly, but my rankings are based on feedback from scouts and team execs, so if their own information – including how teams internally rank their prospects – is affected by the Halo Effect, then my rankings will be too, unless I’m actively looking for it and trying to sieve it out.

Where I wish Watts had spent even more time was in describing the implications of these ideas and research for government policies, especially foreign aid, most of which would be just as productive if we flushed it all down those overpriced Pentagon toilets. Foreign aid tends to go where the donors, whether private or government, think it should go, because the recipients are poor and the donors assume they know how to fix it. In reality, this money rarely spurs any sort of real change or economic growth, because the common-sense explanation – the way to fix poverty is to send money and goods to poor people – never bothers to examine the root causes of the problem the donors want to solve, to ask the recipients what they really need, or to examine and remove obstacles (e.g., lack of infrastructure) that might require more time and effort to fix but prevent the aid from doing any good. Sending a boat full of food to a country in the grip of a famine only makes sense if you have a way to get the food to the starving people, but if the roads are bad, dangerous, or simply don’t exist, then that food will sit in the harbor until it rots or some bureaucrat sells it.

Everything Is Obvious is aimed at a more general audience than Thinking Fast and Slow, as its text is a little less dense and it contains fewer and shorter descriptions of research experiments. Watts refers to Kahneman and his late research partner Amos Tversky a few times, as well as other researchers in the field, so it seems to me like this book is meant as another building block on the foundation of Kahneman’s work. I think it applies to all kinds of areas of our lives, even just as a way to think about your own thinking and to try to help yourself avoid pitfalls in your financial planning or other decisions, but it’s especially apt for folks like me who write for a living and should watch for our human tendency to try to ascribe causes post hoc to events that may have come about as much due to chance as any deliberate factors.

Stick to baseball, 3/4/17.

No new Insider content this week, although I believe I’ll have a new piece up on Tuesday, assuming all goes to plan. I did hold a Klawchat on Thursday.

My latest boardgame review for Paste covers Mole Rats in Space, a cooperative game for kids from the designer of Pandemic and Forbidden Desert. It’s pretty fantastic, and I think if you play this you’ll never have to see Chutes and Ladders again.

You can preorder my upcoming book, Smart Baseball, on Amazon, or from other sites via the HarperCollins page for the book. The book now has two positive reviews out, one from Kirkus Reviews and one from Publishers Weekly.

Also, please sign up for my more-or-less weekly email newsletter.

And now, the links…

Superforecasting.

I’m a bit surprised that Philip Tetlock’s 2015 book Superforecasting: The Art and Science of Prediction hasn’t been a bigger phenomenon along the lines of Thinking Fast and Slow and its offshoots, because Tetlock’s research, from the decades-long Good Judgment Project, goes hand in hand with Daniel Kahneman’s book and research into cognitive biases and illusions. Where Kahneman’s views tend to be macro, Tetlock is focused on the micro: His research looks at people who are better at predicting specific, short-term answers to questions like “Will the Syrian government fall in the next six months?” Tetlock’s main thesis is that such people do exist – people who can consistently produce better forecasts than others, even soi-disant “experts,” can produce – and that we can learn to do the same thing by following their best practices.

Tetlock’s superforecasters have a handful of personality traits in common, but they’re not terribly unusual, and if you’re here, there’s a good chance you have them. These folks are intellectually curious and comfortable with math. They’re willing to admit mistakes, driven to avoid repeating them, and rigorous in their process. But they’re not necessarily more or better educated, and they typically lack subject-matter expertise in most of the areas in the forecasting project. What Tetlock and co-author Dan Gardner truly want to get across is that any of us, whether for ourselves or for our businesses, can achieve marginal but tangible gains in our ability to predict future events.

Perhaps the biggest takeaway from Superforecasting is the need to get away from binary forecasting – that is, blanket statements like “Syria’s government will fall within the year” or “Chris Sale will not be a major-league starting pitcher.” Every forecast needs a probability and a timeframe, for accountability – you can’t evaluate a forecaster’s performance if he avoids specifics or deals in terms like “might” or “somewhat” – and for the forecaster him/herself to improve the process.

Within that mandate for clearer predictions that allow for post hoc evaluation comes the need to learn to ask the right questions. Tetlock reaches two conclusions from his research, one for the forecasters, one for the people who might employ them. Forecasters have to walk a fine line between asking the right questions and the wrong ones: One typical cognitive bias of humans is to substitute a question that is too difficult to answer with a similar question that is easier but doesn’t get at the issue at hand. (Within this is the human reluctance to provide the answer that Tetlock calls the hardest three words for anyone to say: “I don’t know.”) Managers of forecasters or analytics departments, on the other hand, must learn the difference between subjects for which analysts can provide forecasts and those for which they can’t. Many questions are simply too big or vague to answer with probabilistic predictions, so either the manager(s) must provide more specific questions, or the forecaster(s) must be able to manage upwards by operationalizing those questions, turning them into questions that can be answered with a forecast of when, how much, and at what odds.

Tetlock only mentions baseball in passing a few times, but you can see how these precepts would apply to the work that should come out of a baseball analytics department. I think by now every team is generating quantitative player forecasts beyond the generalities of traditional scouting reports. Nate Silver was the first analyst I know of to publicize the idea of attaching probabilities to these forecasts – here’s the 50th percentile forecast, the 10th, the 90th, and so on. More useful to the GM trying to decide whether to acquire player A or player B would be the probability that a player’s performance over the specified period will meet a specific threshold: There is a 63% chance that Joey Bagodonuts will produce at least 6 WAR of value over the next two years. You can work with a forecast like that – it has a specific value and timeframe with specific odds, so the GM can price a contract offer to Mr. Bagodonuts’ agent accordingly.
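As a purely illustrative sketch (mine, not the book’s, and every number below is made up), here’s one way a percentile forecast could be turned into that kind of threshold probability, assuming you’re willing to approximate the forecast distribution with a normal curve fit to the 10th, 50th, and 90th percentile points:

```python
from statistics import NormalDist

def prob_at_least(p10, p50, p90, threshold):
    """Fit a rough normal distribution to 10th/50th/90th percentile forecasts
    and return the probability the outcome meets or beats the threshold."""
    # For a normal curve, the 10th-to-90th percentile span covers about
    # 2 * 1.2816 standard deviations.
    sigma = (p90 - p10) / (2 * 1.2816)
    return 1 - NormalDist(mu=p50, sigma=sigma).cdf(threshold)

# Hypothetical two-year WAR forecast: 10th pct = 2, median = 6.5, 90th pct = 10.
print(round(prob_at_least(2, 6.5, 10, 6), 2))  # roughly 0.56
```

A real projection system would work from the full simulated distribution rather than a normal approximation, but the output is the same kind of statement: a probability, a threshold, and a timeframe you can actually evaluate later.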

Could you bring this into the traditional scouting realm? I think you could, carefully. I do try to put some probabilities around my statements on player futures, more than I did in the past, certainly, but I also recognize I could never forecast player stat lines as well as a well-built model could. (Many teams fold scouting reports into their forecasting models anyway.) I can say, however, that I think there’s a 40% chance of a pitcher remaining a starter, or a 25% chance that, if player X gets 500 at bats this season, he’ll hit at least 25 home runs. I wouldn’t go out and pay someone $15 million based on the comments I make, but I hope doing so will accomplish two things: force me to think harder before making any extreme statements on potential player outcomes, and furnish those of you who do use this information (such as in fantasy baseball) with value beyond a mere ranking or a statement of a player’s potential ceiling (which might really be his 90th or 95th percentile outcome).

I also want to mention another book in this vein that I enjoyed but never wrote up – Dan Ariely’s Predictably Irrational: The Hidden Forces that Shape Our Decisions, another entertaining look at cognitive illusions and biases, especially those that affect the way we value transactions that involve money – including those that involve no money because we’re getting or giving something for free. As in Kahneman’s book, Ariely’s explains that by and large you can’t avoid these brain flaws; you learn they exist and then learn to compensate for them, but if you’re human, they’re not going away.

Next up: Paul Theroux’s travelogue The Last Train to Zona Verde.

Stick to baseball, 7/9/16.

My annual top 25 MLB players under age 25 ranking went up this week for Insiders, and please read the intro while you’re there. I also wrote a non-Insider All-Star roster reaction piece, covering five glaring snubs and five guys who made it but shouldn’t have. I also held my usual Klawchat on Thursday.

My latest boardgame review for Paste covers the reissue of the Reiner Knizia game Ra.

Sign up for my newsletter! You’ll get occasional emails from me with links to my content and stray thoughts that didn’t fit anywhere else.

And now, the links…