Noise.

Nobel Prize-winning economist Daniel Kahneman’s first book for a general audience, Thinking, Fast and Slow, has been hugely influential on the baseball industry and on my own career, inspiring me to write The Inside Game as a way to bring some of the same concepts to a broader audience. Kahneman is back with a sequel of sorts, Noise: A Flaw in Human Judgment, co-authored with Olivier Sibony and Cass Sunstein, which shifts the focus away from cognitive biases toward a different phenomenon, one the authors call “noise.”

Noise, in their definition, involves “variability in judgments that should be identical.” They break this down into three types of noise, which together make up “system noise.” (There’s a lot of jargon in the book, and that’s one of its major drawbacks.)

  • Level noise, where different individuals differ in the average level of their judgments across many cases. The authors cite “some judges are generally more severe than others, and others are more lenient” as an example.
  • Pattern noise, where different individuals make different judgments with the same data.
  • Occasion noise, where an individual makes different judgments depending on when they see the data (which can literally mean the time of day or day of the week). This is probably the hardest for people to accept, but there’s clear evidence that doctors prescribe more opioids near the end of a work day, and that judges are more lenient when the local football team won on Sunday.

There’s a hierarchy of noise here: system noise comprises level noise and pattern noise, and pattern noise in turn comprises occasion noise (which they classify as transient pattern noise, as opposed to “stable” pattern noise, which would be, say, how I underrate hitting prospects with high contact rates while Eric Longenhagen rates them consistently more highly). That’s the entire premise of Noise; the book devotes its time to exploring noise in different fields, notably the criminal justice system and medicine, where the stakes are so high that the benefit of a reduction in noise is likely to justify the costs, and to ways we can try to reduce noise in our own fields of work.
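The additive structure of that taxonomy lends itself to a quick simulation. This is a toy sketch of my own construction, not anything from the book: it treats each simulated judge’s sentence as a true value plus three independent noise terms (level, stable pattern, and occasion), with component sizes I made up, and shows that their variances add up to the overall system noise.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: every case "should" draw the same 5-year sentence,
# so any spread in the results below is, by the book's definition, noise.
TRUE_SENTENCE = 5.0

def judge_sentence(judge_bias, case_quirk, daily_wobble):
    # A judgment is the true value plus the three noise components.
    return TRUE_SENTENCE + judge_bias + case_quirk + daily_wobble

# 50 judges, each with a personal average severity -> level noise (SD 1.0)
judges = [random.gauss(0, 1.0) for _ in range(50)]

sentences = []
for jb in judges:
    for _ in range(20):  # 20 identical-on-paper cases per judge
        case_quirk = random.gauss(0, 0.7)    # stable pattern noise (SD 0.7)
        daily_wobble = random.gauss(0, 0.3)  # occasion noise (SD 0.3)
        sentences.append(judge_sentence(jb, case_quirk, daily_wobble))

# System noise is the spread across all judgments that should be identical.
system_sd = statistics.pstdev(sentences)
print(f"system noise (SD of sentences): {system_sd:.2f} years")

# Independent variances add: 1.0^2 + 0.7^2 + 0.3^2 = 1.58,
# so the observed SD should land near sqrt(1.58) ≈ 1.26.
```

The point of the sketch is just the last comment: occasion noise is small per decision, but because the components compound, even modest day-to-day wobble inflates the total spread of outcomes.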

As with Thinking, Fast and Slow, Noise doesn’t make many accommodations for the lay reader. There’s an expectation here that you are comfortable with the vernacular of behavioral economics and with some basic statistical arguments. It’s an arduous read with a strong payoff if you can get through it, but I concede that it was probably the book I worked hardest to read (and understand) this year. It doesn’t help that noise is itself a more abstruse concept than bias, and the authors refer constantly to the differences between the two.

Some of the examples here will be familiar if you’ve read any literature on behavioral economics before. One is the story of the sentencing guidelines that resulted from Marvin Frankel, a well-known judge and human rights advocate, pointing out the gross inequities created by giving judges wide latitude in sentencing – latitude that could produce sentences ranging from a few months to 20 years for two defendants convicted of the same crime. (The guidelines that grew out of Frankel’s work were later struck down by the Supreme Court, which not only reintroduced noise into the system, but restored old levels of racial bias in sentencing as well.) The authors also attempt to bring noise identification and noise reduction into the business world, with some examples where they brought evidence of noise to the attention of executives who sometimes didn’t believe them.

Nothing was more familiar to me than the discussion of the low value of performance evaluations in the workplace. For certain jobs, with measurable progress and objectives, they may make sense, but in my experience across a lot of jobs in several industries, they’re a big waste of time – and I do mean a big one, because if you add up the hours dedicated to filling out the forms required, writing them up, conducting the reviews, and so on, that’s a lot of lost productivity. One problem is that there’s a lack of consistency in ratings, because raters do not have a common frame of reference for their grades, making grades more noise than signal. Another is that raters tend not to think in relative terms, so you end up with oxymoronic results like 98% of employees grading out as above average. The authors estimate that 70-80% of the output from traditional performance evaluations is noise – meaning it’s useless for its intended purpose of allowing for objective evaluation of employee performance, and thus also useless for important decisions like pay raises, promotions, and other increases in responsibility. Two possible solutions: ditching performance evaluations altogether, or using them solely for developmental purposes (particularly 360-degree systems, which are rather in vogue); or spending time and money to train raters and develop evaluation metrics that have objective measurements or “behaviorally anchored” rating scales.

It wouldn’t be a Daniel Kahneman product if Noise failed to take aim at one of his particular bêtes noires, the hiring interview. He explained why they’re next to worthless in Thinking, Fast and Slow, and here he does it again, saying explicitly, “if your goal is to determine which candidates will succeed in a job and which will fail, standard interviews … are not very informative. To put it more starkly, they are often useless.” There’s almost no correlation between interview success and job performance, and that’s not surprising, because the skills that make someone good at interviewing would only make them a better employee if the job in question also requires those same skills, which is … not most jobs. Unstructured interviews, the kind most of us know, are little more than conversations, and they serve as ideal growth media for noise. Two interviewers will have vastly differing opinions of the same candidate, even if they interview the candidate together as part of a panel. This pattern noise is amplified by the occasion noise prompted by how well the first few minutes of an interview go. (They don’t mention something I’ve suspected: You’ll fare better in an interview if the person interviewing you isn’t too tired or hungry, so you don’t want to be the last interview before lunch or the last one of the day.) They cite one psychology experiment where researchers assigned students to role-play interviews, splitting them between interviewer and candidate, and then told half of the candidates to answer questions randomly … and none of the interviewers caught on.

There’s plenty of good material here in Noise, concepts and recommended solutions that would apply to a lot of industries and a lot of individuals, but you have to wade through a fair bit of jargon to get to it. It’s also less specific than Thinking, Fast and Slow, and I suspect that reducing noise in any environment is going to be a lot harder than reducing bias (or specific biases) would be. But the thesis that noise is at least as significant a problem in decision-making as bias should get wider attention, and it’s hard to read about the defenses of the “human element” in sentencing people convicted of crimes and not think of how equally specious defenses of the “human element” in sports can be.

Next up: Martha Wells’ Nebula & Locus Award-winning novel Network Effect, part of her Murderbot Diaries series.

Mindware.

I appeared on the Inquiring Minds podcast this spring to promote my book The Inside Game, and co-host Adam Bristol recommended a book to me after the show, Dr. Richard Nisbett’s Mindware: Tools for Smart Thinking. Dr. Nisbett is a professor of social psychology at the University of Michigan and co-directs the school’s Culture and Cognition program, and a good portion of Mindware focuses on how our environment affects our cognitive processes, especially the unconscious mind, as he gives advice on how to improve our decision-making processes and better understand the various ways our minds work.

Nisbett starts out the book with an obvious but perhaps underappreciated point: Our understanding of the world around us is a matter of construal, a combination of inferences and interpretations, because of the sheer volume of information and stimuli coming into our brains at all times, and how much of what we see or hear is indirect. (If you want to get particularly technical, even what we see directly is still a matter of interpretation; even something as seemingly concrete as color is actually a sensation created in the brain, an interpolation of different wavelengths of light that also renders colors more stable in our minds than they would be if we were just relying on levels of illumination.) So when we run into biases or illusions that affect our inferences and interpretations, we will proceed on the basis of unreliable information.

He then breaks down three major ways in which we can understand how our minds process all of these stimuli. One is that our environments affect how we think and how we behave far more than we realize they do. Another is that our unconscious minds do far more work than we acknowledge, including processing environmental inputs that we may not actively register. And the third is that we see and interpret the world through schemas, frameworks or sets of heuristics that we use to make sense of the world and simplify the torrent of information coming at us.

From that outline, Nisbett marches through a series of cognitive biases and errors, many of which overlap with those I covered in The Inside Game, but explains more of how cognition is affected by external stimuli, including geography (the subject of one of his previous books), culture, and “preperception” – how the subconscious mind gets you started before you actively begin to perceive things. This last point is one of the book’s most powerful observations: We don’t know why we know what we know, and we can’t always account for our motives and reasons, even if we’re asked to explain them directly. Subjects of experiments will deny that their choices or responses were influenced by stimuli that seem dead-obvious to outside observers. They can be biased by anchors that have nothing to do with the topic of the questions, and even show effects after the ostensible study itself – for example, that subjects exposed to more words related to aging will walk more slowly down the hall out of the study room than those exposed to words related to youth or vitality. It seems absurd, but multiple studies have shown effects like these, as with the study I mentioned in my book about students’ guesses on quantities being biased by the mere act of writing down the last two digits of their social security numbers. We would like to think that our brains don’t work that way, but they do.

Nisbett is a psychologist but crosses comfortably into economics territory, including arguments in favor of using cost/benefit analyses any time a decision has significant costs and the process allows you the time to perform such an analysis. He even gets into the thorny question of how much a life is worth, which most people do not want to consider but which policymakers have to consider when making major decisions on, say, how much and for how long to shut down the economy in the face of a global pandemic. There is some death rate from COVID-19 that we would – and should – accept, and to figure that out, we have to consider what values to put on the lives that might be lost at each level of response, and then compare that to economic benefits of remaining open or additional costs of overloaded hospitals. “Zero deaths” is the compassionate answer, but it isn’t the rational one; if zero deaths in a pandemic were even possible, it would be prohibitively expensive in time and money, so much so that it would cause suffering (and possibly deaths) from other causes.

In the conclusion to Mindware, Dr. Nisbett says that humans are “profligate causal theorists,” and while that may not quite roll off the tongue, it’s a pithy summary of how our minds work. We are free and easy when it comes to finding patterns and ascribing causes to outcomes, but far less thorough when it comes to testing these hypotheses, or even trying to make these hypotheses verifiable or falsifiable. It’s the difference between science and pseudoscience, and between a good decision-making process and a dubious one. (You can still make a good decision with a bad process!) This really is a great book if you like the kind of books that led me to write The Inside Game, or just want to learn more about how your brain deals with the huge volume of information it gets each day so that you can make better decisions in your everyday life.

Next up: I just finished Ann Patchett’s The Dutch House this weekend and am about halfway through Patrick Keefe’s Say Nothing: A True Story of Murder and Memory in Northern Ireland.

The Tyranny of Metrics.

A scout I’ve seen a few times already this spring on the amateur trail recommended Jerry Muller’s brief polemic The Tyranny of Metrics, a quick and enlightening read on how the business world’s obsession with measuring everything creates misaligned incentives in arenas as disparate as health care, education, foreign aid, and the military, and can lead to undesirable or even counterproductive outcomes. With the recent MLB study headed by physicist Prof. Alan Nathan that found, among other things, that players’ attempts to optimize their launch angles haven’t contributed to rising home run rates, the book is even somewhat applicable to baseball – although I think professional sports, especially our favorite pastime, do offer a good contrast to fields where the focus on metrics leads people to measure and reward the wrong things.

The encroachment of metrics on education is probably the best known of the examples that Muller provides in the book, which is strident in tone but measured (pun intended) in the way he supports his arguments. Any reader who has children in grade school now is familiar with the heavy use of standardized testing to measure student progress, which in turn is used to grade teacher performance and to track outcomes by school as well, which can alter funding decisions or even lead to school takeovers and closings. Of course, I think it’s common knowledge at this point that grading teachers on the test performance of their students leads teachers to “teach to the test,” eschewing regular material, which may be important but more abstract, in favor of the specific material and question types to be found on these tests. My daughter is in a charter school in Delaware, and loses more than a week of schooldays each year to these statewide tests, which, as far as I can tell, are the primary way the state tracks charter school performance – even though charters nationwide are rife with fraud and probably require more direct observation and evaluation. That would be expensive and subjective, however, so the tests become a weak proxy for the ostensible goal in measurement, allowing the state to point and say that these charters are doing their jobs because the student test scores are above the given threshold.

The medical world isn’t immune to this encroachment, and Muller details more pernicious outcomes that result from grading physicians on seemingly sensible statistics like success or mortality rates from surgeries. If a surgeon at a busy hospital knows that any death on the operating table during a surgery s/he performs will count, so to speak, against his/her permanent record, the surgeon may choose to avoid the most difficult surgeries, whether due to the complexity of the operations or risk factors in the patients themselves, to avoid taking the hit to his/her surgical batting average. Imagine if you’re an everyday player in the majors, entering arbitration or even free agency, and get to pick the fifteen games you’re going to skip to rest over the course of the season. If your sole goal is maximizing your own statistics to thus increase your compensation, are you skipping Clayton Kershaw and Max Scherzer, or skipping Homer Bailey and some non-prospect spot starter?

Muller mentions sports in passing in The Tyranny of Metrics but focuses on other, more important industries to society and the economy as a whole; that’s probably a wise choice, as the increased use of metrics in sports is less apt than the other examples he chooses in his book. However, there are some areas where his premise holds true, with launch angle a good one to choose because it’s been in the news lately. Hitters at all levels are now working with coaches, both with teams and private coaches, to optimize their swings to maximize their power output. For a select few hitters, it has helped, unlocking latent power they couldn’t get to because their swings were too flat; for others, it may help reduce flyouts and popups and get some of those balls the hitter already puts in the air to fall in for hits or go over the fence. But for many hitters, this emphasis on launch angle hasn’t produced results, and there are even players in this year’s draft class who’ve hurt themselves by focusing on launch angle – knowing that teams measure it and grade players in the draft class on it – to the exclusion of other areas of their game, like just plain hitting. Mike Siani of William Penn Charter has cost himself a little money this spring for this exact reason; working with a coach this offseason to improve his launch angle, he’s performed worse for scouts this spring, becoming more pull-conscious and trying to hit for power he doesn’t naturally possess. He’s a plus runner who can field, but more of an all-fields hitter who would benefit from just putting the ball in play and letting his speed boost him on the bases. Because many teams now weigh such Trackman data as launch angle, spin rate, and extension heavily in their draft process, either boosting players who score well in those areas or excluding those who don’t, we now see coaches trying to “teach to the test,” and that approach will help only a portion of the draft class while actively harming the prospects of many others.

At barely 220 pages, The Tyranny of Metrics feels like a pamphlet version of what could easily be a heavy 500-page academic tome, recounting all of the ways in which the obsession with metrics produces less than ideal results while also explaining the behavioral economics principles that underlie such behavior. If you have some of that background, or just don’t want it (understandable), then Muller’s book is perfect – a concise argument that should lead policymakers and business leaders to at least reconsider their reliance on the specific metrics they’ve chosen to measure employee performance. Using metrics may be the right strategy, but be sure they measure what you want to measure, and that they’re not skewing behavior as a result.

Next up: I’m currently reading Ray Bradbury’s short story collection I Sing the Body Electric!.

Whistling Vivaldi.

In this era of increased awareness of cognitive biases and how they affect human behavior, stereotype threat seems to be lagging behind similar phenomena in its prevalence in policy discussions. Stereotype threat refers to how common stereotypes about demographic groups can then affect how members of those groups perform in tasks that are covered by the stereotypes. For example, women fare worse on math tests than men because there’s a pervasive stereotype about women being inferior at math. African-American students perform worse on tests that purport to measure ‘intelligence’ for a similar reason. The effect is real, with about two decades of research testifying to its existence, although there’s still disagreement over how strong the effect is in the real world (versus structured experiments).

Stanford psychology professor Claude Steele, a former provost at Columbia University and himself African-American, wrote a highly personal account of what we know about stereotype threat and its presence in and effects on higher education in the United States in Whistling Vivaldi: How Stereotypes Affect Us and What We Can Do. Steele blends personal anecdotes – his own and those of others – with the research, mostly in lab settings, that we have to date on stereotype threat, which, again, has largely focused on demonstrating its existence and the pernicious ways in which it can affect not just performance on tests but decisions by students on what to study or even where to do so. The resulting book, which runs a scant 200 pages, is less academic in nature than Thinking, Fast and Slow and its ilk, and thus a little less intellectually satisfying, but it’s also an easier read and I think the sort of book anyone can read regardless of their backgrounds in psychology or even in reading other books on human behavior.

The best-known proofs of stereotype threat, which Steele recounts throughout the first two thirds of the book, come from experiments where two groups are asked to take a specific test that encompasses a stereotype of one of the groups – for example, men and women are given a math test, especially one where they are told the test itself measures their math skills. In one iteration, the test-takers are told beforehand that women tend to fare worse than men on tests of mathematical abilities; in another iteration, they’re told no such thing, or something irrelevant. Whether it’s women and math, blacks and intelligence, or another stereotype, the results are consistent – the ‘threatened’ group performs worse than expected (based on predetermined criteria like grades in math classes or scores on standardized math tests) when they’re reminded of the stereotype before the test. Steele recounts several such experiments, even some that don’t involve academic goals (e.g., whites underperforming in tests of athleticism), and shows that not only do the threatened groups perform worse, they often perform less – answering fewer questions or avoiding certain tasks.

Worse for our academic world is that stereotype threat appears to lead to increased segregation in the classroom and deters threatened groups from pursuing classes or majors that fall into the stereotyped category. If stereotype threat is directly* or indirectly convincing women not to choose STEM majors, or steering African-American students away from more academically rigorous majors or schools, then we need policy changes to try to address the threat and either throttle it before it starts or counteract it once it has begun. And Steele argues, with evidence, that stereotype threat begins much earlier than most people aware of the phenomenon would guess. Stereotype threat can be found, again through experiment, in kids as young as six years old. Marge and Homer may not have taken Lisa’s concerns about Malibu Stacy seriously, but she was more right than even the Simpsons writers of the time (who were probably almost all white men) realized.

* For example, do guidance counselors or academic advisors tell female students not to major in math or engineering? Do they discourage black students from applying to the best possible colleges to which they might gain admission?

To keep Whistling Vivaldi readable, Steele intersperses his recounting of academic studies with personal anecdotes of his own or of students and professors he’s met throughout his academic career. The anecdote of the title is almost painful to read – it’s from a young black man who noticed how differently white pedestrians would treat him on the street, avoiding eye contact or even crossing to the other side, so he adopted certain behaviors, not entirely consciously, to make himself seem less threatening. One of them was whistling classical music, like that of Vivaldi. Other stories demonstrate subtle changes in behavior in class that also result from stereotype threat, and show how students in threatened groups perform better in environments where the threat is diminished by policies, positive environments, or sheer numbers.

Stereotype threat is a major and almost entirely unaddressed policy issue for teachers, principals, and local politicians, at the very least. Avoiding our own use, even in jest, of such stereotypes can help start the process of ending how they affect the next generation of students, but the findings Steele recounts in Whistling Vivaldi call for much broader action. It’s essential reading for anyone who works in or wishes to work in education at any level.

Next up: Michael Ondaatje’s The English Patient.

Nudge.

Richard Thaler won the 2017 Nobel Prize in Economics – or whatever the longer title is, it’s the one Nobel Prize people don’t seem to take all that seriously – for his work in the burgeoning field of behavioral economics, especially on what is now called “choice architecture.” Thaler’s work focuses on how the way we make decisions is affected by the way in which we are presented with choices. I mentioned one of Thaler’s findings in my most recent stick to baseball roundup – the candidate listed first on a ballot receives an average boost of 3.5% in the voting, with the benefit higher in races where all candidates are equally unknown (e.g., there’s no incumbent). You would probably like to think that voters are more rational than that, or at least just not really that irrational, but the data are clear that the order in which names are listed on ballots affects the outcomes. (It came up in that post because Iowa Republicans are trying to rig election outcomes in that state, with one possible move to list Republican candidates first on nearly every ballot in the state.)

Thaler’s first big book, Nudge: Improving Decisions About Health, Wealth, and Happiness, co-authored with Harvard Law School professor Cass Sunstein, came out in 2008, and explains the effects of choice architecture while offering numerous policy prescriptions for various real-world problems where giving consumers or voters different choices, or giving them choices in a different order, or even just flipping the wording of certain questions could dramatically alter outcomes. Thaler describes this approach as “libertarian paternalism,” saying that the goal here is not to mandate or restrict choices, but to use subtle ‘nudges’ to push consumers toward decisions that are better for them and for society as a whole. The audiobook is just $4.49 as I write this.

This approach probably mirrors my own beliefs on how governments should craft economic policies, although it doesn’t appear to be in favor with either major party right now. For example, trans fats are pretty clearly bad for your health, and if Americans consume too many trans fats, national expenditures on health care will likely rise as more Americans succumb to heart disease and possibly cancer as well. However, banning trans fats, as New York City has done, is paternalism without liberty – these jurisdictions have decided for consumers that they can’t be trusted to consume only small, safer amounts of trans fats. You can certainly have tiny amounts of trans fats without significantly altering your risk of heart disease, and you may decide for yourself that the small increase in health risk is justified by the improved flavor or texture of products containing trans fats. (For example, pie crusts made with traditional shortening have a better texture than those made with new, trans fat-free shortening. And don’t get me started on Oreos.) That’s your choice to make, even if it potentially harms your health in the long run.

Choice architecture theory says that you can deter people from consuming trans fats or reduce such consumption by how you present information to consumers at the point of purchase. Merely putting trans fat content on nutrition labels is one step – if consumers see that broken out as a separate line item, they may be less likely to purchase the product. Warning labels that trans fats are bad for your heart might also help. Some consumers will consume trans fats anyway, but that is their choice as free citizens. The policy goal is to reduce the public expenditure on health care expenses related to such consumption without infringing on individual choice. There are many such debates in the food policy world, especially when it comes to importing food products from outside the U.S. – the FDA has been trying for years to ban or curtail imports of certain cheeses made from raw milk, because of the low risk that they’ll carry dangerous pathogens, even though the fermentation process discourages the growth of such bugs. (I’m not talking about raw milk itself, which has a different risk profile, and has made a lot of people sick as it’s come back into vogue.) I’ve also run into trouble trying to get products imported from Italy like bottarga and neonata, which are completely safe, but for whatever reason run afoul of U.S. laws on bringing animal products into the country.

Thaler and Sunstein fry bigger fish than neonata in Nudge, examining how choice architecture might improve employee participation in and choices within their retirement accounts, increase participation in organ donation programs, or increase energy conservation. (The last one is almost funny: If you tell people their neighbors are better at conserving energy, they’ll reduce their own energy use. South Africa has been using this and similar techniques to try to reduce water consumption in drought-stricken Cape Town. Unfortunately, publicizing “Day Zero” has also hurt the city’s tourism industry.) Thaler distinguishes between Econs, the theoretical, entirely rational actors of traditional economic theory; and Humans, the very real, often irrational people who live in this universe and make inefficient or even dumb choices all the time.

Nudge is enlightening, but unlike most books in this niche, like Thinking, Fast and Slow or The Invisible Gorilla, it probably won’t help you make better choices in your own life. You can become more aware of choice architecture, and maybe you’ll overrule your status quo bias, or will look at the top or bottom shelves in the supermarket instead of what’s at eye level (hint: the retailer charges producers more to place their products at eye level), but the people Nudge is most likely to help seem like the ones least likely to read it: Elected and appointed officials. I’ve mentioned many times how disgusted I was with Arizona’s lack of any kind of energy or water conservation policies. The state has more sun than almost anywhere in the country, but has done little to nothing to encourage solar uptake, although its utility commission may have finally forced some change on the renewable energy front this week. Las Vegas actually pays residents to remove grass lawns and replace them with low-water landscaping; Arizona does nothing of the sort, and charges far too little for water given its scarcity and dwindling supply. Improving choice architecture in that state could improve its environmental policies quickly without infringing on Arizonans’ rights to leave the lights on all night.

Speaking of Thinking, Fast and Slow, its author, Daniel Kahneman, was a guest last week on NPR’s Hidden Brain podcast, and it was both entertaining and illuminating.

Next up: Hannah Arendt’s The Origins of Totalitarianism. No reason.

The Hidden Brain.

I’ve become a huge fan of the NPR podcast The Hidden Brain, hosted by Shankar Vedantam, a journalist whose 2010 book The Hidden Brain: How Our Unconscious Minds Elect Presidents, Control Markets, Wage Wars, and Save Our Lives spawned the podcast and a regular radio program on NPR. Covering how our subconscious mind influences our decisions in ways that traditional economists would call ‘irrational’ but modern behavioral economists recognize as typical human behavior, Vedantam’s book is a great introduction to this increasingly important way of understanding how people act and think.

Vedantam walks the reader through these theories via concrete examples, much as he now does in the podcast – this week’s episode, “Why Now?,” about the #MeToo movement and our society’s sudden decision to pay attention to these women, is among its best. Some of the stories in the book are shocking and/or hard to believe, but they’re true and serve to emphasize these seemingly counterintuitive concepts. He discusses a rape victim who had focused on remembering details about her attacker and was 100% sure she’d correctly identified the man who raped her – but thirteen years after the man she identified was convicted of the crime, a DNA test showed she was wrong, and she then discovered a specific detail she’d overlooked at the time of the investigation because no one had asked her the ‘right’ question. This is a conscientious, intelligent woman who was certain of her memories, and she still made a mistake.

Another example that particularly stuck with me was how people react in the face of imminent danger or catastrophe. Just before the 2004 Indian Ocean tsunami, the sea receded from coastal areas, a typical feature before a tidal wave hits. Vedantam cites reports from multiple areas where people living in those regions “gathered to discuss the phenomenon” and “asked one another what was happening,” instead of running like hell for high ground. Similar reports came from the World Trade Center after 9/11. People in those instances didn’t rely on their instincts to flee, but sought confirmation from others nearby – if you don’t run, maybe I don’t need to run either. In this case, he points to the evolutionary history of man, where staying with the group was typically the safe move in the face of danger; if running were the dominant, successful strategy for survival, that would still be our instinct today. It even explains why multiple bystanders did not help Deletha Word, a woman who was nearly beaten to death in a road-rage incident on the packed Belle Isle bridge in Detroit in 1996 – if no one else helped her, why should I?

Vedantam’s writing and speaking style offers a perfect blend of colloquial storytelling and evidence-based arguments. He interviews transgender people who describe the changed attitudes they encountered before and after their outward appearances changed. (One transgender man says, “I can even complete a whole sentence [post-transition] without being interrupted by a man.”) And he looks at data on racial disparities in sentencing convicted criminals to death – including data that show darker-skinned blacks are more likely to receive a death sentence than lighter-skinned blacks.

The last chapter of The Hidden Brain came up last week on Twitter, where I retweeted a link to a story in the New York Times from the wife of a former NFL player, describing her husband’s apparent symptoms of serious brain trauma. One slightly bizarre response I received was that this was an “appeal to emotion” argument – I wasn’t arguing anything, just sharing a story I thought was well-written and worth reading – because it was a single datum rather than an extensive study. Vedantam points out, with examples and some research, that the human brain does much better at understanding the suffering of one than at understanding the suffering of many. He tells how the story of a dog named Hokget, lost in the Pacific on an abandoned ship, spurred people to donate thousands of dollars, with money coming from 39 states and four countries. (An excerpt from this chapter is still online on The Week‘s site.) So why were people so quick to send money to save one dog when they’re so much less likely to send money when they hear of mass suffering, like genocide or disaster victims in Asia or Africa? Because, Vedantam argues, we process the suffering of an individual in a more “visceral” sense than we do the more abstract suffering of many – and he cites experimental data from psychologist Paul Slovic to back it up.

The Hidden Brain could have been twice as long and I would still have devoured it; Vedantam’s writing is much like his podcast narration, breezy yet never dumbed down, thoroughly explanatory without becoming dense or patronizing. If you enjoy books in the Thinking, Fast and Slow or Everybody Lies vein, you’ll enjoy both this title and the podcast, which has become one of my go-to listens to power me through mindless chores around the house.