
To Improve the Accuracy of Prediction Markets, Just Ask

Prediction markets forecast future events by offering the opportunity to buy and sell contracts that pay if an event occurs. In theory, the markets are as accurate as possible because they incorporate all available information. But a study by Yale SOM’s Jason Dana and his co-authors showed that in some cases, forecasts can be improved by simply asking people what they think will happen.

  • Jason Dana
    Associate Professor of Management and Marketing

By Dylan Walsh

Will Joe Biden be the Democratic nominee for president? Will SpaceX land a crew on Mars by 2024? Will the Toronto Raptors repeat as champions in next year’s NBA Finals?

In the notoriously difficult business of forecasting, prediction markets may be one of the most reliable crystal balls. On these markets, investors trade contracts that pay a fixed amount if an event occurs. Say, for instance, a contract pays $100 if the Raptors win next year’s finals. Buyers might be willing to pay $85 for the contract if that outcome seems likely; they’ll each receive $100 if the Raptors win and $0 if they don’t. The price at which a contract trades implicitly represents the estimated probability that the event will occur; a contract trading at $85 represents an 85% chance.
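As a back-of-the-envelope illustration of that arithmetic (the contract and the numbers here are hypothetical, not from the study), the price-to-probability conversion can be written out as:

```python
def implied_probability(price: float, payout: float = 100.0) -> float:
    """Convert a binary contract's trading price into the probability it implies.

    Assumes a contract that pays `payout` if the event occurs and $0 otherwise,
    like the hypothetical $100 Raptors contract above.
    """
    return price / payout


price = 85.0
print(f"Implied probability: {implied_probability(price):.0%}")  # 85%

# A trader who believes the true chance is 90% expects the contract to be
# worth $90, so buying at $85 carries a positive expected value of $5.
belief = 0.90
print(f"Expected payoff: ${belief * 100:.2f} vs. price ${price:.2f}")
```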

“Prediction markets are really quite good,” says Jason Dana, an associate professor of management and marketing at Yale SOM. Because people have a financial stake in deducing the right answer, they tend to consider the question seriously, weigh their own knowledge and publicly available information about the outcome, and answer honestly. But working with three colleagues from the University of Pennsylvania, Dana found a surprising way to improve on the accuracy of these markets: just ask people what they think the outcome will be. The results were published in Judgment and Decision Making.


Read the study: “Are markets more accurate than polls? The surprising informational value of ‘just asking’”

Participants in the study traded contracts as they would in any prediction market. The novelty came in also asking them to simply report their belief about the probability of an event’s occurrence, on a 0 to 100 scale, before they completed their buy or sell orders. These reported beliefs were then aggregated, with individual judgments weighted according to several variables, and the result was compared with the estimates from the prediction market.

Dana and his colleagues used a particularly knowledgeable group for the study: volunteers participating in the ACE Program, which is sponsored by the U.S. government’s Intelligence Advanced Research Projects Agency and is intended to enhance the quality of forecasting. In 2013, more than 500 volunteers in this program were randomly assigned to take part in a prediction market covering 113 geopolitical questions, like “Will India and/or Brazil become a permanent member of the U.N. Security Council before 1 March 2015?” Participants could buy or sell shares of events at prices between 0 and 100 (no actual money was traded on this market).

When participants’ beliefs were aggregated correctly, “they were at least as accurate as the market price,” says Dana. “Sometimes they were significantly more accurate.” The finding suggests that while prediction markets capture some information related to the likelihood of an event’s taking place, they don’t capture all of the available information. Aggregating people’s opinions and then averaging those aggregates with the market prices generated significantly more accurate predictions than market prices alone, suggesting that the opinions carried information that the market did not capture.
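A minimal sketch of that kind of blending (the equal weighting and the example numbers are illustrative assumptions, not values from the paper):

```python
def blended_forecast(survey_aggregate: float, market_price: float) -> float:
    """Average an aggregated survey probability with a market-implied probability.

    Both inputs are probabilities in [0, 1]; equal weighting is an assumption
    made here purely for illustration.
    """
    return 0.5 * survey_aggregate + 0.5 * market_price


# e.g., aggregated opinions imply a 72% chance while the market implies 85%
print(blended_forecast(0.72, 0.85))  # 0.785
```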

Asking people their opinions improves predictions in two circumstances in particular: when few people are taking part in a market, so that trading is “thin,” or when the outcome is months or years away. Asking about the result of the 2020 presidential election, more than a year off, would therefore likely yield a more accurate forecast than the prices on a prediction market.

Dana offered a couple of cautionary notes on the findings. First, the method for aggregating self-reported opinions had to be carefully calibrated in order to improve on the market’s prediction. This involved giving more weight to recent opinions and to forecasters with better track records; the weighted combination was then fed into a process known as “extremizing,” which pushes the pooled estimate closer to 0 or 100.
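The paper’s exact weighting scheme and extremizing parameters aren’t given here, but a minimal sketch of this style of aggregation, assuming simple recency and track-record weights and a standard power-law extremizing transform, might look like:

```python
import numpy as np

def aggregate_beliefs(probs, recency_weights, skill_weights, a=2.0):
    """Combine individual probability judgments into a single forecast.

    probs           : individual probabilities in [0, 1]
    recency_weights : larger for more recent judgments (assumed form)
    skill_weights   : larger for forecasters with better track records (assumed form)
    a               : extremizing exponent; a > 1 pushes the pooled estimate
                      away from 0.5 and toward 0 or 1
    """
    probs = np.asarray(probs, dtype=float)
    weights = np.asarray(recency_weights, dtype=float) * np.asarray(skill_weights, dtype=float)
    p = np.average(probs, weights=weights)   # weighted average of beliefs
    return p**a / (p**a + (1 - p)**a)        # extremize the pooled estimate


# Three forecasters: the most recent, most skilled judgment counts the most.
print(aggregate_beliefs([0.60, 0.70, 0.80],
                        recency_weights=[0.5, 0.8, 1.0],
                        skill_weights=[1.0, 1.2, 1.5]))
```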

“You have to think carefully about how to aggregate these scores into one guess,” Dana says. “That’s a big thing, not a small thing.”

“I was not exactly a champion of this kind of self-reporting when I walked into the project. I didn’t expect that just asking would produce such good results.”

Second, ACE Program participants were deeply informed experts who were personally engaged with the questions they were being asked. Both prediction markets and self-reported opinions may be less accurate when relying on a general population. But any organization facing questions about the future tends to employ people who carry expertise on a given issue and care about its successful resolution. Some companies and governments are already running prediction markets to map future scenarios; the study suggests that they can get similarly accurate results with surveys—which are simpler and easier for participants.

With this study, Dana and his colleagues have built a methodological bridge between two camps that are often wary—even dismissive—of one another’s approaches. Economists tend to study what people think by observing their behavior; prediction markets, for example, reveal what people believe through the wagers that they make. Other social scientists explicitly ask people to express their opinions and beliefs as numbers, for instance on a scale of 1 to 5. This work shows that combining both methods can lead to a result better than either on its own.

“That surprised me, because I was not exactly a champion of this kind of self-reporting when I walked into the project,” says Dana. “I didn’t expect that just asking would produce such good results.”
