Superforecasting.

I’m a bit surprised that Philip Tetlock’s 2015 book Superforecasting: The Art and Science of Prediction hasn’t been a bigger phenomenon along the lines of Thinking, Fast and Slow and its offshoots, because Tetlock’s research, from decades of forecasting studies culminating in the Good Judgment Project, goes hand in hand with Daniel Kahneman’s book and research into cognitive biases and illusions. Where Kahneman’s views tend to be macro, Tetlock is focused on the micro: His research looks at people who are better at predicting specific, short-term answers to questions like “Will the Syrian government fall in the next six months?” Tetlock’s main thesis is that such people do exist – people who consistently produce better forecasts than others, even soi-disant “experts,” do – and that we can learn to do the same thing by following their best practices.

Tetlock’s superforecasters have a handful of personality traits in common, but the traits aren’t terribly unusual, and if you’re here, there’s a good chance you have them. These folks are intellectually curious and comfortable with math. They’re willing to admit mistakes, driven to avoid repeating them, and rigorous in their process. But they’re not necessarily better educated than the rest of us, and they typically lack subject-matter expertise in most of the areas covered by the forecasting project. What Tetlock and co-author Dan Gardner truly want to get across is that any of us, whether for ourselves or for our businesses, can achieve marginal but tangible gains in our ability to predict future events.

Perhaps the biggest takeaway from Superforecasting is the need to get away from binary forecasting – that is, blanket statements like “Syria’s government will fall within the year” or “Chris Sale will not be a major-league starting pitcher.” Every forecast needs a probability and a timeframe, both for accountability – you can’t evaluate a forecaster’s performance if he avoids specifics or deals in terms like “might” or “somewhat” – and so that forecasters themselves can improve their process.
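If you want to see how that evaluation works mechanically, the Good Judgment Project graded its forecasters with Brier scores, which in one common form is just the mean squared error between the probabilities you gave and what actually happened. Here’s a minimal sketch in Python – my illustration, not anything from the book, with invented forecasts and outcomes – showing why vague hedging earns you nothing:

```python
# Minimal sketch of one common form of the Brier score, the measure Tetlock's
# Good Judgment Project used to grade forecasters. All numbers here are invented.

def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and binary outcomes.
    0.0 is a perfect score; always saying 50% earns 0.25; 1.0 is the worst."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes  = [1, 0, 0, 1]             # what actually happened (1 = event occurred)
hedger    = [0.5, 0.5, 0.5, 0.5]     # "might," "somewhat" -- no real information
committed = [0.8, 0.1, 0.3, 0.7]     # specific probabilities, on the record

print(brier_score(hedger, outcomes))     # 0.25   -- no credit for hedging
print(brier_score(committed, outcomes))  # 0.0575 -- rewarded for calibration
```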

Within that mandate for clearer predictions that allow for post hoc evaluation comes the need to learn to ask the right questions. Tetlock reaches two conclusions from his research, one for forecasters, one for the people who might employ them. Forecasters have to walk a fine line between asking the right questions and the wrong ones: One typical human cognitive bias is to substitute an easier, similar question for one that is too difficult to answer, even though the easier question doesn’t get at the issue at hand. (Lurking within this is our reluctance to give the answer Tetlock calls the hardest three words for anyone to say: “I don’t know.”) Managers of forecasters or analytics departments, on the other hand, must learn the difference between subjects for which analysts can provide forecasts and those for which they can’t. Many questions are simply too big or vague to answer with probabilistic predictions, so either the managers must provide more specific questions, or the forecasters must be able to manage upwards by operationalizing those questions – turning them into questions that can be answered with a forecast of when, how much, and at what odds.

Tetlock mentions baseball only in passing a few times, but you can see how these precepts would apply to the work that should come out of a baseball analytics department. I think by now every team is generating quantitative player forecasts beyond the generalities of traditional scouting reports. Nate Silver was the first analyst I know of to publicize the idea of attaching probabilities to these forecasts – here’s the 50th percentile forecast, the 10th, the 90th, and so on. More useful to the GM trying to decide whether to acquire player A or player B would be the probability that a player’s performance over a specified period will meet a specific threshold: There is a 63% chance that Joey Bagodonuts will produce at least 6 WAR of value over the next two years. You can work with a forecast like that – it has a specific value and timeframe with specific odds, so the GM can price a contract offer to Mr. Bagodonuts’ agent accordingly.
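To make that concrete – with entirely invented numbers, not anyone’s actual projection system – here’s one way you could back a threshold probability out of a Silver-style percentile forecast, by approximating the forecast with a normal distribution:

```python
# Sketch: turning a percentile forecast into a threshold probability.
# The player, the numbers, and the normal approximation are all hypothetical;
# a real system would read this off the model's full simulated distribution.
from statistics import NormalDist

# Hypothetical two-year WAR forecast for Joey Bagodonuts:
p10, p50, p90 = 3.2, 7.0, 10.8   # 10th/50th/90th percentile outcomes

# Treat the median as the mean; under a normal approximation, the 90th
# percentile sits about 1.28 standard deviations above it.
sigma = (p90 - p50) / NormalDist().inv_cdf(0.90)
forecast = NormalDist(mu=p50, sigma=sigma)

# P(at least 6 WAR over the next two years) -- the number the GM can price.
p_at_least_6 = 1 - forecast.cdf(6.0)
print(f"P(WAR >= 6 over two years) = {p_at_least_6:.0%}")  # ~63%
```

If the 10th and 90th percentiles aren’t symmetric around the median – and for players they often aren’t – the normal shortcut breaks down, which is one more reason a real front office would prefer the full distribution of simulated outcomes.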

Could you bring this into the traditional scouting realm? I think you could, carefully. I do try to put some probabilities around my statements on player futures, more than I did in the past, certainly, but I also recognize I could never forecast player stat lines as well as a well-built model could. (Many teams fold scouting reports into their forecasting models anyway.) I can say, however, that I think there’s a 40% chance of a pitcher remaining a starter, or a 25% chance that, if player X gets 500 at-bats this season, he’ll hit at least 25 home runs. I wouldn’t go out and pay someone $15 million on the comments I make, but I hope the practice will accomplish two things: force me to think harder before making any extreme statements on potential player outcomes, and furnish those of you who do use this information (such as in fantasy baseball) with value beyond a mere ranking or a statement of a player’s potential ceiling (which might really be his 90th or 95th percentile outcome).

I also want to mention another book in this vein that I enjoyed but never wrote up – Dan Ariely’s Predictably Irrational: The Hidden Forces That Shape Our Decisions, another entertaining look at cognitive illusions and biases, especially those that affect the way we value transactions involving money – including those that involve no money at all because we’re getting or giving something for free. Like Kahneman’s book, Ariely’s explains that, by and large, you can’t avoid these brain flaws; you learn they exist and then learn to compensate for them, but if you’re human, they’re not going away.

Next up: Paul Theroux’s travelogue The Last Train to Zona Verde.

Comments

  1. I found Ariely’s book to have interesting points, but his writing style and tone were annoying and condescending to the point where they detracted from the book.

  2. Dan Ariely’s short videos for Big Think (on YouTube) are recommended viewing on various topics as well, in case anyone hasn’t seen them. Even when I don’t agree with his conclusions, I appreciate that he comes at problems in a different way than I do.

  3. I just wish this approach worked better as a business model for the media. The shouty personalities get noticed, even if they really aren’t any good at forecasting. Sports, politics, and entertainment all have these people. There were probably a lot of sports radio and TV people who dismissed Clemson’s chances of winning the national title, but there was only one media personality that Dabo Swinney called out after the game. And that personality probably loved every minute of it, since it meant more people tuning in to watch him say mea culpa. Such people say outlandish things, and either they’re right and you’ll never hear the end of it, or they admit they were wrong just before making another spectacular claim. Either way, people watch.

  4. Keith, another good read in the same vein is Michael Lewis’s “The Undoing Project,” which chronicles Kahneman and Tversky’s collaboration on behavioral economics.

  5. I believe the Freakonomics folks did a podcast about this topic (maybe about this author?) a year or two ago. Can anyone back me up on that?

Trackbacks

  1. […] Law recently reviewed Philip Tetlock’s book, Superforecasting. I read the book about a year ago and found it […]