Having Your Performance Misjudged Distorts How You Assess Others

A new study co-authored by Yale SOM’s Tristan Botelho found that if we are overlooked when we perform well or praised when we perform poorly, we tend to pass that misrecognition on when we evaluate others.

If you’ve ever been denied a promotion or award you believed you earned, you know the sting can linger—as can the glow of unexpectedly receiving recognition.

In fact, such instances of misrecognition may influence how you evaluate others in the future, according to new research from Yale SOM’s Tristan L. Botelho, with co-authors Mabel Abraham of Columbia and James T. Carter of Cornell. The researchers found that, whether positive or negative, misrecognition begets misrecognition: people who don’t get the accolades they’ve earned are stingier when handing out accolades themselves, while people who receive unearned favor pay it forward to others.

But the forces underlying these two patterns are different, the researchers discovered: underrecognized evaluators are driven to withhold recognition by perceptions that the recognition process is unfair, while overrecognized evaluators update their understanding of who ought to be recognized based on their personal experience of being evaluated.

Understanding the effects of misrecognition on future recognition has meaningful real-world implications, Botelho points out, since many people are both evaluators and evaluated. “For example,” he says, “you might be in charge of your department’s end-of-year reviews, but you also are subject to that end-of-year review as well.”

Botelho has been studying evaluation systems for years—he’s studied racial and gender differences in online ratings, as well as the effects of seeing prior evaluations on future evaluations. Given how often people toggle between rating others and being rated themselves, he and his colleagues wondered what the literature said about how evaluators respond to over- and underrecognition when they assess others. “We were surprised that we couldn’t find cases where other researchers had studied this relationship,” he says.

So the researchers set out to find a way to understand the effects of misrecognition on people’s subsequent evaluation behavior. They found a natural test in data from a digital platform where investment professionals share advice with one another, a site the researchers pseudonymously call the Real Investors Club. Users submit recommendations to buy or short sell particular stocks, along with detailed justifications, which others on the platform can view and evaluate using a five-star system.

Botelho, who had used data from the Real Investors Club in a previous study, remembered that the platform had piloted a form of recognition for its users: an email that highlighted the top-rated recommendation of the week. Sometimes, however, there was a tie, when two recommendations had received a five-star rating. In that situation, Botelho explains, “the employee who was in charge of the pilot would just choose one,” inadvertently creating a useful experiment. How did underrecognized users behave before and after the email was sent? Did their evaluations of other users on the platform change?

The answer is yes, it turned out. Before the email, correctly recognized users (that is, those whose recommendations received a five-star rating and were highlighted in the email) and underrecognized users (those with five-star ratings but whose recommendations were not highlighted) gave very similar average ratings to other users. After the email, however, underrecognized users began giving lower ratings—their average rating dropped from 2.88 stars to 2.38 stars, a decrease of 17.4%.

But while the data from the Real Investors Club “has this quasi-natural experimental feature to it, it’s a small amount of data,” Botelho says. It also couldn’t speak to the question of overrecognition, and it couldn’t help the researchers understand why misrecognition perpetuates itself. To get closer to those answers, Botelho and his co-authors needed to run larger, more controlled experiments.

In the first of their experiments, the researchers recruited online participants to complete a 10-question aptitude test. Participants also learned about a recognition system the researchers had devised, called the Elite Award, given (ostensibly) to high performers. After taking the test, participants were told, based on their real results, whether they were high or low performers on the test and whether they had received the Elite Award. To understand the effects of misrecognition, the researchers intentionally gave the award to some undeserving low performers and denied it to some deserving high performers.

Next, participants were tasked with evaluating an aptitude test themselves—and handing out the Elite Award to others. Each was shown a completed test containing five correct and five incorrect answers and had to decide whether the test-taker should be given the Elite Award. In addition, they were asked to what extent they considered the Elite Award process fair.

Just as they’d seen in the Real Investors Club data, the researchers found that people who were underrecognized tended to underrecognize others. Among high performers, correctly recognized evaluators (those who received the Elite Award) granted the Elite Award to a peer 44% of the time, while underrecognized evaluators (those who did not receive the Elite Award) were stingier, granting the award to others only 27% of the time.

Overrecognized low performers, meanwhile, proved to be generous evaluators, granting recognition to a peer 59% of the time, compared to just 26% among correctly recognized low performers.

But why did underrecognized evaluators withhold the Elite Award from others? In their analysis, the researchers identified fairness—or lack thereof—as a key factor. “Participants who reported the allocation process to be more unfair withhold recognition the most,” Botelho explains.

However, the same was not true among overrecognized participants—among this group, there wasn’t a strong relationship between perceptions of fairness and likelihood of granting the Elite Award. The researchers suspected this was because overrecognized evaluators took their own status as a cue for how to behave. In other words, these participants figured if they had gotten the Elite Award despite low performance, the award was intended to be given out generously.

So, in another study, the researchers tested whether providing more explicit criteria for the Elite Award would change subsequent evaluations. They repeated the experiment, but this time, some participants were told that the Elite Award is generally given to people who answered at least 7 out of 10 questions correctly on the test.

This intervention helped bring overrecognized evaluators back down to earth. In the face of this new informational cue, they began giving out the Elite Award at rates more in line with other participants. The more explicit criteria did not significantly change the behavior of underrecognized evaluators—likely because it did not lessen the sense that the Elite Award process was unfair.

“What’s motivating the overrecognized to overrecognize in the first place is that they have a misunderstanding of how the process works,” Botelho explains—and that misunderstanding can be corrected. “But what’s really motivating the underrecognized to underrecognize is a sense that the process isn’t fair.”

In future research, Botelho hopes to understand how long the effects of misrecognition last. “Maybe it lasts in some settings 60 days, and in other settings a whole year,” he says. Knowing how stubborn the effects are could shape future interventions.

For now, he’s trying to take his own findings to heart. “I would never review a paper in short order after getting bad news about my own paper—or good news about my paper,” he says. “I try my best to be cognizant of the fact that this could bias the way I’m thinking about it and give myself a cooling-off period.” While no system is perfect, trying our best to reduce bias is essential, “because I think we can all agree that we want evaluation processes to be structured in such a way that they’re fair and accurate.”
