Do you struggle with a protracted process of interviewing potential hires only to end up choosing the wrong person?
Do you share identical information with multiple "experts" only to receive wildly different results?
Research shows that the accuracy of your predictive judgments is probably low, and in many cases so low that a simple formula, or even a coin flip, would do better.
To find out why, read Noise by Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein.
Imagine you are a judge deciding whether to grant bail at a preliminary hearing. This scene plays out in courtroom dramas all the time, with dramatic arguments back and forth about flight risk, resources, and so on. In fact, though, two factors are overwhelmingly accurate predictors of whether a defendant is a flight risk: whether they have jumped bail before, and their age. Old people do not jump bail, and bail jumpers repeat. Yet judges still convince themselves that their experience and wisdom, applied in a nuanced fashion, will generate an accurate prediction of whether the defendant will jump bail, when a decision based on those two simple statistics would yield a much higher percentage of accurate decisions.
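As a purely illustrative sketch, a two-factor rule of the kind described might look like the following. To be clear, the weights and the cutoff are invented for illustration; they are not taken from the book or from any real court.

```python
# Hypothetical two-factor bail rule. The components, weights, and the 0.5
# threshold are all invented for illustration, not real actuarial values.

def flight_risk_score(age: int, prior_failures_to_appear: int) -> float:
    """Score flight risk from the two predictors the text mentions:
    age and history of jumping bail. Higher score = higher risk."""
    # Younger defendants score higher; risk taken as zero from age 50 up.
    age_component = max(0.0, (50 - age) / 50)
    # Repeat bail-jumpers score higher; capped at 1.0.
    history_component = min(1.0, prior_failures_to_appear / 2)
    return 0.5 * age_component + 0.5 * history_component

def recommend_bail(age: int, prior_failures_to_appear: int) -> bool:
    """True means the simple rule recommends granting bail."""
    return flight_risk_score(age, prior_failures_to_appear) < 0.5

print(recommend_bail(64, 0))  # older defendant, no history -> True
print(recommend_bail(22, 2))  # young repeat bail-jumper    -> False
```

The point of the book's argument is not these particular numbers but the structure: a fixed rule applies the same two facts the same way every time, which is exactly what noisy human judgment fails to do.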
Do you realize that your subjective confidence in your own judgments has very little relationship to their objective accuracy?
Although this can be related to psychological biases or to objective ignorance, most often it rests on what the authors call an internal signal: a reward we generate for ourselves when the facts and the judgment fit together into a coherent story. More often than not, though, that coherence is an illusion produced by noise in the decision-making process, which the authors define as variability in judgments that should be identical.
How many times do we hear ourselves say, after an event has taken place, "I knew that was going to happen," when, of course, we didn't? In hindsight, we remove the noise and uncertainty from the narrative and convince ourselves, out of an instinctive human desire for coherence, that we understand why the event occurred, and even that we had some prescience about it beforehand.
The authors, including Daniel Kahneman, winner of the Nobel Prize in Economic Sciences and author of Thinking, Fast and Slow, explore studies of variability in judicial sentencing, hiring decisions in corporations, handling of customer complaints, and judgments by insurance adjusters. In every case they find an astonishing degree of variability in judgments that, being based on the exact same facts, should be identical, but because of noise are not. A judge whose hometown football team loses a game on the weekend may deliver a harsher sentence on Monday; an insurance adjuster may resolve a claim too generously because their child was just accepted at a good college; and so on. This variability is the result not of bias, which is relatively simple to identify and correct, but of noise, which is not. Insurance companies are aware of the problem, and many have tried to implement systems to compensate for it, with varying degrees of success. Jurists and politicians are aware of inequitable variability in sentencing, which is why sentencing guidelines exist, but their effectiveness is also uneven, and sometimes guidelines make the problem worse rather than correcting it.
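The bias/noise distinction can be made concrete with a toy example. The numbers below are invented: five adjusters price the same claim, and we separate their shared systematic error (bias) from their scatter (noise), using the mean-squared-error decomposition the authors emphasize, MSE = bias² + noise².

```python
# Toy illustration with invented numbers: five insurance adjusters each
# price the same claim, given identical facts.
from statistics import mean, pstdev

true_value = 100.0                              # the "correct" settlement
judgments = [118.0, 95.0, 130.0, 102.0, 125.0]  # one figure per adjuster

bias = mean(judgments) - true_value   # shared, systematic error
noise = pstdev(judgments)             # spread across adjusters
mse = mean((j - true_value) ** 2 for j in judgments)

print(f"bias = {bias:.1f}, noise = {noise:.1f}, MSE = {mse:.1f}")

# Bias and noise contribute to total error symmetrically, via their squares:
assert abs(mse - (bias**2 + noise**2)) < 1e-9
```

Here the panel is biased high by 14 on average, but the scatter between adjusters contributes almost as much to the total error as the bias does, which is the book's point: noise is a comparably large, and far less visible, source of error.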
This book picks up where Thinking, Fast and Slow leaves off and offers a fresh perspective on predictive judgment and decision-making, one that I guarantee will change how you perceive your own decisions and how you apply that understanding to an organization or a process.
In my opinion, Daniel Kahneman is a genius, and I’m guessing these other two guys, Sibony and Sunstein, must be pretty smart as well.