13.7: Cosmos and Culture has a good article on predictions.
Making good predictions isn’t just about your accuracy; it’s also about your calibration.
Accuracy = how often your predictions turn out to be correct
Calibration = how well your stated confidence matches how often you are actually right
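A minimal sketch of the difference, assuming you keep a log of past predictions with a stated confidence and an eventual outcome (the records below are made up for illustration):

```python
# Sketch: accuracy vs. calibration over a log of past predictions.
# Each record is (stated_confidence, came_true); the data is hypothetical.
from collections import defaultdict

predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.6, True), (0.6, False), (0.6, False), (0.6, True),
]

# Accuracy: how often the predictions were correct, regardless of confidence.
accuracy = sum(came_true for _, came_true in predictions) / len(predictions)
print(f"accuracy: {accuracy:.0%}")

# Calibration: within each stated confidence level, did things come true
# about that often? A well-calibrated 90% should come true roughly 90% of the time.
buckets = defaultdict(list)
for confidence, came_true in predictions:
    buckets[confidence].append(came_true)

for confidence, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> came true {hit_rate:.0%} ({len(outcomes)} predictions)")
```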
All too often, when we see predictions, no one asks about the calibration, and no one goes back later to check the accuracy. I know I am weird in that when I make predictions for work, such as "this file system is going to run out of space in x days," I often attach my level of confidence to that prediction. (It might happen sooner, but it could also happen much later.)
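For the file system example, here is a rough sketch of how such a prediction might be made: fit a line to recent free-space samples and extrapolate to zero, then report a range rather than a single day. The numbers are invented.

```python
# Sketch: predict when a file system runs out of space by fitting a line
# to recent free-space samples and extrapolating. Numbers are invented.
import numpy as np

days = np.array([0, 1, 2, 3, 4, 5, 6])                    # sample days, 6 = today
free_gb = np.array([520, 505, 492, 470, 458, 441, 430])   # free space each day

# Linear fit: free space as a function of time (GB per day).
slope, intercept = np.polyfit(days, free_gb, 1)

# Extrapolate from today's free space at the fitted rate.
days_until_full = -free_gb[-1] / slope
print(f"trend: {slope:.1f} GB/day; roughly {days_until_full:.0f} days until full")

# The honest version of the prediction carries a confidence level and a range:
# "about N days, maybe sooner, maybe much later" -- the estimate only holds
# if whatever is consuming space keeps behaving the way it has been.
```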
The greater our expertise, the harder our overconfidence tends to bite us. Knowing a lot can hurt: when our confidence is too high, we shrink the space of error we are willing to allow for in our thinking. Even a worrier like me falls victim often enough. Here is an example of an overconfidence test:
The paper, by psychologists Joyce Ehrlinger, Ainsley Mitchum and Carol Dweck, reports three studies in which participants were asked to estimate their own performance on a task, either a multiple choice test with antonym problems or a multiple choice general knowledge quiz. The participants were asked to estimate their percentile relative to other students completing the task, from zero percent (worse than all other students) to 100 percent (better than all other students). If participants were perfectly calibrated, then the average percentile estimate should have been 50 percent. But that’s not what they found. In Study 1, for instance, the average was 66 percent. Like the children of Lake Wobegon, participants (on average) believed themselves to be better than average.
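One way to see why perfect calibration would put the average at 50 percent: if every participant reported their true percentile, the estimates would be spread evenly from 0 to 100 and average out to the middle. A quick simulated sketch, where the upward bump is invented to roughly mimic the 66 percent average reported in the study:

```python
# Sketch: why perfectly calibrated percentile estimates should average ~50%.
import random

random.seed(0)
n = 10_000

# If everyone knew and reported their true percentile, the estimates would
# be spread evenly from 0 to 100, so the mean lands near 50.
true_percentiles = [random.uniform(0, 100) for _ in range(n)]
print(f"calibrated mean estimate: {sum(true_percentiles) / n:.1f}")

# An overconfident crowd shifts its estimates upward (the +16 bump is
# invented to roughly mimic the 66% average reported in the study).
overconfident = [min(p + 16, 100) for p in true_percentiles]
print(f"overconfident mean estimate: {sum(overconfident) / n:.1f}")
```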
Thinking back on many of my own work predictions, they were probably way too high. Something that was essentially a super wild-ass guess (the technical term is SWAG) may have been reported as 70-80% confident. It was informed by metrics and a trend, but there was no reason to think that trend would continue.