The Perils of Being Overconfident

... and advice on becoming as certain as you need to be.



Can one have too much confidence? Don Moore of the Haas School of Business at the University of California, Berkeley, has done more for the study of overconfidence than nearly anyone. (The papers he publishes with his students and colleagues are invariably of the highest quality. For anyone interested in this important topic, I recommend consulting one of Don's papers.)
In a recent article with Al Mannes, a fellow student of social and business behavior, Don introduced a behavioral measure of overconfidence (Mannes & Moore, 2013). This research departed from the familiar way in which overconfidence has been conceived and assessed. Traditionally, we ask participants general knowledge questions, often of the trivial variety. How long, for example, do you think the Nile is? The "true" answer is 4,258 miles (6,853 km), but I leave "true" in quotation marks because no one really knows exactly. What passes as the river's "true" length is really just our best geographical estimate. Most people on the street wouldn't know the answer without checking their hand-held electronic devices. Yet many know that the Nile is pretty darn long, like the Amazon or the Mississippi-Missouri. So their guesses might gravitate toward the right neighborhood. There is, however, and should be, uncertainty.
Researchers of judgment and decision-making usually ask participants to throw a 90% confidence interval around their estimates. In other words, they invite them to produce a low estimate and a high estimate for a fact in such a way that they are 90% certain that the true value lies between those estimates. Most research participants do as asked, although the request must seem odd to some; it is not common to think this way. The concept of the confidence interval is a statistical one. It has a precise technical meaning, which statisticians of different schools still debate. Mannes and Moore sought a new measure, one more closely aligned with how people naturally think. They then put the hypothesis of overconfidence to a tougher and more convincing test.
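For readers who like to see the mechanics, here is a minimal sketch in Python of how such interval calibration can be scored. The hit_rate function and the three hypothetical Nile intervals are my own illustration, not data or code from any study; the point is simply that well-calibrated 90% intervals should contain the truth about 90% of the time, and overconfident (too narrow) intervals contain it far less often.

def hit_rate(intervals, truths):
    # Fraction of (low, high) intervals that contain the true value.
    hits = sum(low <= t <= high for (low, high), t in zip(intervals, truths))
    return hits / len(truths)

# Three hypothetical respondents give 90% intervals for the Nile's length in miles.
intervals = [(3000, 5000), (4000, 4300), (1000, 2000)]
print(round(hit_rate(intervals, [4258] * 3), 2))  # 0.67 here, well short of the intended 0.90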
Their ingenious solution was to give participants a fixed interval. The task was to estimate the high temperature in their city of residence on a particular day. What, for example, was the high temperature in Berkeley on May 1? Respondents could win "points" (in the form of lottery tickets) if their estimate fell within a 6-degree band centered on the true value; that is, an estimate could be a winner if it overestimated or underestimated the true value by up to 3 degrees. On some trials, however, respondents could earn points only if their estimates were either correct or too high by up to 6 degrees; on yet other trials, only if their estimates were correct or too low by up to 6 degrees. Notice that the interval for earning points was always the same width; yet moving the interval up or down from its centered position forced respondents to reveal how confident they were in their estimates.
Imagine a person who is absolutely certain that her temperature estimate is accurate. This person would not increase her estimate when overestimation is also rewarded. There would be no need. In contrast, a person with a great deal of uncertainty would provide a higher estimate in order to minimize the chances of estimating too low and getting no reward. From this general consideration, Mannes and Moore derived a statistical index of overconfidence (or "overprecision," to be more precise), the details of which need not detain us here.
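This logic is easy to simulate. The sketch below is my own illustration, not the authors' code or their index; I assume a respondent whose uncertainty about the true temperature is normally distributed with a standard deviation of 3 degrees, and I score a report as a winner when it is correct or too high by up to 6 degrees.

import random

def win_prob(shift, sd=3.0, band=6, trials=200_000):
    # Chance that a report of best_guess + shift falls in [truth, truth + band],
    # when the truth scatters around the best guess (set to 0) with spread sd.
    wins = 0
    for _ in range(trials):
        truth = random.gauss(0, sd)
        wins += truth <= shift <= truth + band
    return wins / trials

for shift in range(7):
    print(shift, round(win_prob(shift), 2))

The win probability peaks when the estimate is shifted up by about 3 degrees, half the band. A perfectly certain respondent (sd = 0) wins without shifting at all, so the size of the shift betrays the felt uncertainty.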
The main finding was that people are indeed overconfident in their estimates. This means that the conventional way of measuring overconfidence is not so flawed after all. Mannes and Moore replicated the traditional effect and found that it tracked the findings obtained with the new method. Mannes and Moore wanted not only to improve measurement, but also to connect overconfidence with human action.
In the introduction to their paper, they warn against overconfidence because it can compromise action planning. Overconfident individuals bite off more than they can chew. For example, they may leave too late to arrive at an appointment on time, thinking that they can make it. As a result, they fall off the metaphorical cliff, to use the authors’ apt phrase.
Consider this deadline scenario: It is 11:00 am and you have agreed to meet Louie for lunch at La Cantina at 12:00 pm. You estimate that it will take you 30 minutes to get there. If you are absolutely sure about this, you can leave home at 11:30. If you are less sure, you would ask the traditional question of what window of travel time would include the true travel time with 90% confidence. If the answer is 25 to 35 minutes, it is now the upper boundary (35) that is relevant for your decision about when to leave, which would be at 11:25. With overconfidence, then, your chances of meeting Louie on time may be just 50:50.
Mannes and Moore’s revised approach acknowledges that a deadline presents an asymmetric reward structure: Being a bit too early still yields the reward of being able to keep the appointment; being too late produces a cost. An uncertain person (and perhaps a certain one, too) should strategically estimate the travel time to be longer, knowing that a Type I error (the false positive of showing up early) is less costly than a Type II error (missing the appointment). An overconfident person would add less time to the estimate than her actual uncertainty demands.
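To put numbers on this, suppose (my simplifying assumption, not the authors') that the 90% window of 25 to 35 minutes reflects a roughly normal belief about travel time with a mean of 30 minutes. A few lines of Python then show what each departure time buys.

from statistics import NormalDist

# Belief about travel time in minutes; 90% of it lies between 25 and 35.
travel = NormalDist(mu=30, sigma=3.04)

for leave, budget in (("11:30", 30), ("11:25", 35)):
    print(leave, round(travel.cdf(budget), 2))  # probability of arriving by 12:00

Leaving at 11:30 makes lunch a coin flip (0.50); leaving at 11:25 raises the chance of being on time to about 0.95.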
What if the estimating person is not you but someone on whose advice you rely? What if your dentist is certain that your bicuspid does not need a root canal? Even this expert dentist may be overconfident; statistically, overconfidence is greatest when respondents think they are certain (unless the task is trivially easy and everyone is certain). If you think the dentist might be wrong, getting a second opinion is a good idea. When time is scarce, my heuristic work-around follows the traditional rabbi's response to the would-be convert: I ask the advisor (in this case, the dentist) three times: Are you sure? Are you really sure? Are you absolutely sure? If the advisor confirms each time with the appropriate body language and a deepening air of annoyance, I take it.
Although Mannes and Moore are sensitive to the possibility that overconfidence can be a good thing, they focus on the risks. To my mind, a good example of rational and adaptive overconfidence is found among those who resist probability matching (Tversky & Edwards, 1966). Suppose you are looking for cookies, but you are allowed to open only one jar at a time. The jars are refilled and shuffled around, and 70% of the time the cookies are in the left one and not the right one. A rational person bets on the left jar all the time, thus behaviorally predicting that that's where the cookies are, and finds them on 70% of tries. Someone else might open the left jar 70% of the time. This person has learned the probability of reward well, but misuses it: matching succeeds only .7 × .7 + .3 × .3 = 58% of the time. In other words, if you know that action X is more likely to yield a reward than the alternative Y, it is wise to predict and go for X with confidence.
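The cookie-jar arithmetic is easy to verify. The simulation below is my own toy version of the game, not Tversky and Edwards's procedure; it compares always betting on the 70% jar with matching the 70:30 frequencies.

import random

def success_rate(p_choose_left, p_left=0.7, trials=100_000):
    # How often the chosen jar turns out to hold the cookies.
    hits = 0
    for _ in range(trials):
        choose_left = random.random() < p_choose_left
        cookies_left = random.random() < p_left
        hits += choose_left == cookies_left
    return hits / trials

print(round(success_rate(1.0), 2))  # maximizing (always left): about 0.70
print(round(success_rate(0.7), 2))  # matching: about 0.58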
