How Social Media Rewards Misinformation
A disproportionate share of false stories is spread by a small number of frequent users, suggests a new study co-authored by Yale SOM’s Gizem Ceylan. But they can be taught to change their ways.
In the early months of the COVID-19 pandemic, posts and videos promoting natural remedies for the virus—everything from steam inhalation to ginger—proliferated online. What explains the storm of likes and shares that propels online misinformation through a social media network?
Some scholars suggest that people share falsehoods out of bias, while others argue that failures of critical thinking or media literacy are to blame. But new research from Gizem Ceylan, a postdoctoral scholar at Yale SOM, suggests that these explanations miss something important. In a study co-authored with Ian Anderson and Wendy Wood of the University of Southern California, Ceylan found that the reward systems of social media platforms inadvertently encourage users to spread misinformation.
By constantly reinforcing sharing—any sharing—with likes and comments, platforms have created habitual users who are largely unconcerned with the content they post. And these habitual users, the research shows, spread a disproportionate share of misinformation.
“It’s not that people are lazy or don’t want to know the truth,” Ceylan says. “The platforms’ reward systems are wrong.”
The researchers conducted several experiments to test how different kinds of social media users interact with true and false headlines. In the first, they asked participants to review eight true and eight false headlines and decide whether to share them to a simulated Facebook feed. They also asked several questions designed to assess how habitually participants used Facebook—that is, how much time they spent on the platform and how automatically they shared content.
The results showed that, overall, participants shared many more true headlines than false ones. However, the patterns were markedly different for the most habitual Facebook users. These participants not only shared more headlines overall, but also shared a roughly equal percentage of true (43%) and false (38%) ones. Less frequent users, by contrast, shared 15% of true and just 6% of false headlines.
Ceylan and her co-authors calculated that the 15% most habitual Facebook users were responsible for 37% of the false headlines shared in the study, suggesting that a relatively small number of people can have an outsized impact on the information ecosystem.
But why do the heaviest users share the most misinformation? Is it because the misinformation aligns with their beliefs? Not so: a subsequent experiment found that habitual users will share misinformation even when it goes against their political beliefs.
In this experiment, Ceylan and her colleagues asked participants to consider headlines that reflected partisan bias. Then, they examined concordant sharing (that is, liberals sharing liberal headlines or conservatives sharing conservative headlines) and discordant sharing (liberals sharing conservative headlines and vice versa) among both habitual Facebook users and less frequent ones.
Interestingly, while less frequent users showed a stark preference for sharing concordant rather than discordant headlines (reflecting the partisan bias found in prior research), the pattern was less marked among the heaviest users. These habitual users shared more overall and showed a weaker preference for concordant over discordant information—another indication that habit was driving their behavior.
To Ceylan, this suggests that heavy Facebook users are not ideologically motivated or lazy in their processing when they spread misinformation. Instead, when users’ sharing habits are activated by cues from the platform, the content they are sharing—its accuracy and partisan slant—is largely irrelevant to them. They post simply because the platform rewards posting with engagement in the form of likes, comments, and re-shares.
“This was kind of a shocker for the misinformation research community,” she says. “What we showed is that, if people are habitual sharers, they’ll share any type of information, because they don’t care [about the content]. All they care about is likes and attention. The content that gets attention becomes part of habitual users’ mental representations. Over time, they just share content that fits this mental representation. Thus, rewards on a social platform are critical in shaping people’s habits and what they are attuned to share with others.”
The study also suggests a potential way out of this trap. In their last experiment, the researchers simulated a new reward structure that prioritized accuracy: when participants shared accurate information, they earned points redeemable for an Amazon gift card.
“In this simulation, everyone shared lots of true headlines and few false headlines. Their previous social media habits did not matter,” Ceylan explains. “What this means is that, when you reward people for accuracy, people learn the type of content that gets rewards and build habits for sharing that content (i.e., accuracy in this case).” Surprisingly, the users continued to share accurate headlines even after the researchers removed the reward for accuracy. These results show that users can be incentivized to develop a new habit: accuracy.
To Ceylan, the research demonstrates how powerfully social media has shaped our habits. Platforms aim for more profit and engagement—that is, more users spending more hours on them. By rewarding and amplifying any type of engagement regardless of its quality or accuracy, platforms have created users who will share indiscriminately. “It’s a system issue, not an individual issue. We need to create better environments on social media platforms to help people make better decisions. We cannot keep blaming users for showing political biases or being lazy for the misinformation problem. We have to change the reward structure on these platforms.”