Noise.

Nobel Prize-winning economist Daniel Kahneman’s first book for a general audience, Thinking, Fast and Slow, has been hugely influential on the baseball industry and on my own career, inspiring me to write The Inside Game as a way to bring some of the same concepts to a broader audience. Kahneman is back with a sequel of sorts, co-authoring the book Noise: A Flaw in Human Judgment with Cass Sunstein and Olivier Sibony, which shifts the focus away from cognitive biases toward a different phenomenon, one that the authors call “noise.”

Noise, in their definition, involves “variability in judgments that should be identical.” They break this down into three different types of noise, which together make up “system noise.” (There’s a lot of jargon in the book, and that’s one of its major drawbacks.)

  • Level noise, where different individuals differ in the overall severity or leniency of their judgments. The authors cite “some judges are generally more severe than others, and others are more lenient” as an example.
  • Pattern noise, where different individuals make different judgments with the same data.
  • Occasion noise, where an individual makes different judgments depending on when they see the data (which can literally mean the time of day or day of the week). This is probably the hardest for people to accept, but there’s clear evidence that doctors prescribe more opioids near the end of a work day, and judges are more lenient when the local football team won on Sunday.

There’s a hierarchy of noise here: system noise comprises level noise and pattern noise, and pattern noise in turn comprises occasion noise (which they classify as transient pattern noise) and “stable” pattern noise, which would be, say, how I underrate hitting prospects with high contact rates while maybe Eric Longenhagen rates them consistently more highly. That’s the entire premise of Noise; the book devotes its time to exploring noise in different fields, notably the criminal justice system and medicine, where the stakes are so high that the benefit of a reduction in noise is likely to justify the costs, and to ways we can try to reduce noise in our own fields of work.

As with Thinking, Fast and Slow, Noise doesn’t make many accommodations for the lay reader. There’s an expectation here that you are comfortable with the vernacular of behavioral economics and with some basic statistical arguments. It’s an arduous read with a strong payoff if you can get through it, but I concede it was probably the book I’ve worked hardest to read (and understand) this year. It doesn’t help that noise is itself a more abstruse concept than bias, and the authors make constant references to the difference between the two.

Some of the examples here will be familiar if you’ve read any literature on behavioral economics before. One is the sentencing guidelines that grew out of Marvin Frankel, a well-known judge and human rights advocate, pointing out the gross inequities created by giving judges wide latitude in sentencing, which could produce sentences ranging from a few months to 20 years for two defendants convicted of the same crime. (The guidelines that resulted from Frankel’s work were later struck down by the Supreme Court, which not only reintroduced noise into the system, but restored old levels of racial bias in sentencing as well.) The authors also attempt to bring noise identification and noise reduction into the business world, with some examples where they brought evidence of noise to the attention of executives who sometimes didn’t believe them.

Nothing was more familiar to me than the discussion of the low value of performance evaluations in the workplace. For certain jobs, with measurable progress and objectives, they may make sense, but in my experience across a lot of jobs in several industries, they’re a big waste of time – and I do mean a big one, because if you add up the hours dedicated to filling out the required forms, writing the evaluations, conducting the reviews, and so on, that’s a lot of lost productivity. One problem is that there’s a lack of consistency in ratings, because raters do not have a common frame of reference for their grades, making the grades more noise than signal. Another is that raters tend not to think in relative terms, so you end up with oxymoronic results like 98% of employees grading out as above average. The authors estimate that 70-80% of the output from traditional performance evaluations is noise – meaning it’s useless for its intended purpose of allowing for objective evaluation of employee performance, and thus also useless for important decisions like pay raises, promotions, and other increases in responsibility. They offer two possible solutions: ditch performance evaluations as assessment tools and use them solely for developmental purposes (particularly 360-degree systems, which are rather in vogue), or spend the time and money to train raters and develop evaluation metrics built on objective measurements or “behaviorally anchored” rating scales.

It wouldn’t be a Daniel Kahneman product if Noise failed to take aim at one of his particular bêtes noires, the hiring interview. He explained why they’re next to worthless in Thinking, Fast and Slow, and here he does it again, saying explicitly, “if your goal is to determine which candidates will succeed in a job and which will fail, standard interviews … are not very informative. To put it more starkly, they are often useless.” There’s almost no correlation between interview success and job performance, and that’s not surprising, because the skills that make someone good at interviewing would only make them a better employee if the job in question also requires those same skills, which is … not most jobs. Unstructured interviews, the kind most of us know, are little more than conversations, and they serve as ideal growth media for noise. Two interviewers will have vastly differing opinions of the same candidate, even if they interview the candidate together as part of a panel. This pattern noise is amplified by the occasion noise prompted by how well the first few minutes of an interview go. (They don’t mention something I’ve suspected: You’ll fare better in an interview if the person interviewing you isn’t too tired or hungry, so you don’t want to be the last interview before lunch or the last one of the day.) They cite one psychology experiment where researchers assigned students to role-play interviews, splitting them between interviewer and candidate, and then told half of the candidates to answer questions randomly … and none of the interviewers caught on.

There’s plenty of good material in Noise, concepts and recommended solutions that would apply to a lot of industries and a lot of individuals, but you have to wade through a fair bit of jargon to get to it. It’s also less specific than Thinking, Fast and Slow, and I suspect that reducing noise in any environment is going to be a lot harder than reducing bias (or specific biases) would be. But the thesis that noise is at least as significant a problem in decision-making as bias is one that should get wider attention, and it’s hard to read about the defenses of the “human element” in sentencing people convicted of crimes and not think of how equally specious defenses of the “human element” in sports can be.

Next up: Martha Wells’ Nebula & Locus Award-winning novel Network Effect, part of her Murderbot Diaries series.

Comments

  1. “Nothing was more familiar to me than the discussion of the low value of performance evaluations in the workplace.”

    • Part of my reply was lost. It should have had:

      Insert Meryl_Streep_Clapping.gif

  2. A pedantic thing to point out, but I’d want to know. You reference the co-authors as “Cass Sunstein and Oliver” which I’m guessing was a place holder that didn’t get fixed. The other co-author is Olivier Sibony.