
How Can You Make Incentives More Effective? Make Them Opaque. 

Conventional wisdom says that when you’re evaluating someone, you should be transparent about the criteria. But a study from Yale SOM’s Florian Ederer suggests that when individuals or organizations don’t fully understand how they’re being ranked, they’re likely to work harder for higher ratings.


If a teacher told students that they would be tested on only three chapters of the textbook, would they bother to read the rest? Ranking systems for law schools and hospitals, employee reward schemes, and other incentive systems make just that mistake: revealing too much about their ranking criteria.

When organizations or individuals know how an incentive scheme works, they’re likely to try to game the system in order to come out on top. Being opaque about precisely what ranking criteria are used can make incentive schemes more effective, according to a new study by Florian Ederer, assistant professor of economics at Yale SOM, and his co-authors.

Their results, published in the RAND Journal of Economics, offer solutions to a problem that scientists have grappled with for nearly 200 years. As early as 1830, philosopher Jeremy Bentham proposed that being less transparent about the content of civil service selection tests would drive people to study harder for them.

“Even though these ideas are old, gaming is still incredibly widespread,” Ederer says. “People don’t fully realize that using explicit incentive schemes leaves these incentive schemes open to being exploited.”


Read the study: “Gaming and strategic opacity in incentive provision”

For example, if one ranking criterion for hospitals was how much time patients spent in a waiting room, the hospital might simply have patients wait in exam rooms—reducing “wait times” as measured by the ranking scheme.

Ederer and his co-authors Margaret Meyer and Richard Holden created a model showing how three factors influenced whether a person (or organization) tried to game an incentive scheme: the agent’s risk aversion, the agent’s knowledge of the environment, and the incentive designer’s desire to balance agents’ efforts among different activities.

They found that all three factors worked together to explain why transparent incentive schemes were more susceptible to gaming. Without any one of these three, the need for opacity disappeared.

When people or organizations know precisely what they’re being rewarded for, and know their own environment well enough, it’s easy to shift attention toward tasks that require less effort but count heavily in the incentive scheme. But a person who is averse to risk and unsure of exactly how they’re being ranked is likely to hedge their bets by spreading effort across all aspects of performance.
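
In stylized terms (an illustrative sketch, not the exact model in the paper), the agent’s problem is to split effort across several activities when the reward weights on those activities may or may not be known:

```latex
% Illustrative multitask effort-allocation sketch (assumed setup, not the published model)
\max_{e_1,\dots,e_n \ge 0} \;
\mathbb{E}\Big[\, u\Big( \sum_{i=1}^{n} w_i e_i \;-\; c(e_1,\dots,e_n) \Big) \Big]
```

With transparent, known weights, effort piles onto whichever activity offers the most reward per unit of cost. With uncertain weights and a concave (risk-averse) utility, a balanced allocation lowers the variance of the payoff and becomes more attractive.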


Using the model, the researchers could predict when opacity paid off—and when it didn’t. Consider, for example, office workers who are unaware of whether they’re being evaluated for speed, accuracy, punctuality, or customer satisfaction. While a risk-averse person may choose to expend effort on all four aspects of the job, a speedy, risk-taking worker might take the chance that being quick matters most and continue to focus on pace alone.
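
To make that concrete, here is a toy calculation (the numbers and functional forms are invented for illustration, not taken from the study): a worker with four units of effort, a productivity edge in speed, and an opaque ranking that rewards exactly one of the four dimensions, each equally likely.

```python
import math

# Toy version of the office-worker example. All numbers and functional
# forms are hypothetical illustrations, not the study's model.
TASKS = ["speed", "accuracy", "punctuality", "satisfaction"]
PRODUCTIVITY = {"speed": 1.5, "accuracy": 1.0, "punctuality": 1.0, "satisfaction": 1.0}

def expected_utility(allocation, utility):
    """Average utility over the four equally likely reward criteria."""
    payoffs = [PRODUCTIVITY[t] * allocation[t] for t in TASKS]  # payoff if task t is the one that counts
    return sum(utility(p) for p in payoffs) / len(payoffs)

all_in_on_speed = {"speed": 4.0, "accuracy": 0.0, "punctuality": 0.0, "satisfaction": 0.0}
spread_evenly = {t: 1.0 for t in TASKS}

risk_neutral = lambda x: x             # cares only about the average payoff
risk_averse = lambda x: math.sqrt(x)   # concave utility: dislikes payoff variance

for name, strategy in [("all-in on speed", all_in_on_speed), ("spread evenly", spread_evenly)]:
    print(f"{name:>16}:  E[payoff] = {expected_utility(strategy, risk_neutral):.2f}"
          f"   E[u], risk-averse = {expected_utility(strategy, risk_averse):.2f}")

# Typical output:
#  all-in on speed:  E[payoff] = 1.50   E[u], risk-averse = 0.61
#    spread evenly:  E[payoff] = 1.12   E[u], risk-averse = 1.06
```

Under these assumptions, the risk-neutral speedster does best betting everything on pace, while the risk-averse worker does best hedging across all four dimensions, which is exactly the contrast the model highlights.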

Ederer’s findings upend the conventional wisdom that opaque incentives are always less effective.

“There’s a long literature that says any additional randomness or opacity introduced in an incentive scheme can’t be good for the incentive designer,” he says. “But we found these situations where adding some uncertainty to the mix led to less gaming.”

Opacity does have a price: since players must put in more effort across the board, the rewards must be worthwhile. Otherwise, they’re unlikely to participate in the incentive scheme.

Incentive designers should account for all three factors when deciding how players earn high rankings. Transparency is likely to make participants focus on specific, low-effort, high-reward tasks; opacity will require larger rewards, but result in participants exerting more effort across the board.

There’s already real-world evidence that substantiates the researchers’ findings. U.S. News’ influential law school rankings used a transparent methodology and linear scoring, so law schools tweaked their strategies; decreasing the number of full-time students, for example, let them report higher median LSAT scores. After reports of such gaming, U.S. News has said it intends to move toward being less transparent about its ranking methods.
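
As a rough sketch of how a transparent linear formula invites this kind of gaming (the weights and LSAT figures below are made up, not U.S. News’ actual methodology), consider what happens when the lowest-scoring admits are reclassified as part-time so they drop out of the reported full-time median:

```python
from statistics import median

def ranking_score(full_time_lsats, employment_rate, w_lsat=0.6, w_jobs=0.4):
    """Toy linear score: weighted sum of a rescaled full-time median LSAT and job placement."""
    return w_lsat * (median(full_time_lsats) / 180) + w_jobs * employment_rate

admits = [172, 170, 169, 168, 166, 161, 158, 155]  # same incoming class in both scenarios
employment = 0.85

# Honest reporting: everyone is enrolled full-time.
before = ranking_score(admits, employment)

# "Gamed" reporting: the three lowest scorers are enrolled part-time,
# so they no longer count toward the full-time median the formula rewards.
gamed_full_time = sorted(admits, reverse=True)[:-3]
after = ranking_score(gamed_full_time, employment)

print(f"median LSAT counted: {median(admits)} -> {median(gamed_full_time)}")
print(f"ranking score:       {before:.3f} -> {after:.3f}")
# The student body is identical, but the transparent linear score reports an improvement.
```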

The results apply to school rankings, employee performance incentives, sports, and a wide range of other scenarios, Ederer says. “When laid out in this model, everything we show sounds very intuitive,” he says. “What’s surprising is that we could nicely highlight situations where either transparency or opacity is optimal. Ultimately, our goal is to learn how to design good, effective incentives for complex situations.”

Department: Research