Every year in the United States, volunteers perform more than seven billion hours of service—delivering food to seniors, tutoring children, and much more. Platforms like VolunteerMatch, the nation’s largest online volunteer-recruitment resource, help make it happen by connecting volunteers with organizations that are seeking them. The site makes more than a million matches each month.
But despite the platform's prolific overall output, VolunteerMatch told Yale SOM's Vahideh Manshadi, many of the organizations using it were still unable to find the volunteers they needed. And this was not because some organizations were inherently more attractive to volunteers than others.
“The platform was collecting data, but they did not have the resources to analyze it in detail,” says Manshadi, a professor of operations whose research uses operations tools to improve the functioning of online platforms used by nonprofits and other organizations. “And when we looked at their data, it revealed a winner-take-all effect. Some organizations received more volunteers than they actually needed. Others were getting none at all.”
Working with UCLA Anderson’s Scott Rodilitz, Stanford’s Daniela Saban, and Yale SOM’s Akshaya Suresh, Manshadi dove into the code of the VolunteerMatch algorithm. The researchers found that the platform was using a recommendation algorithm that sorted opportunities based on their relevance to the user, much like a search engine would. Its search function presented users with a ranking of the “top” opportunities, and this meant that some opportunities were shown to many users, while others languished.
“The algorithm was showing users the most popular opportunity as being most relevant. That is exactly what Google does, but VolunteerMatch is not a search engine; it is a matching platform. The goal is to match demand and supply,” says Manshadi.
And when opportunities fell further down the algorithm’s rankings, they often went unseen—and unfilled. This hurt both low-ranking opportunities, which could not find volunteers, and high-ranking ones, which received more sign-ups than they needed and had to devote additional resources to the administrative work of processing the excess applications.
To increase the number of opportunities that got some engagement, Manshadi and her team set out to optimize the way the algorithm ranks opportunities. The algorithm assigns each opportunity a score, and it was using a simple logic based primarily on proximity and recency. This meant that a recently posted opportunity near a user’s physical location would be ranked higher than one that was farther away and older, even if the newer opportunity had already attracted multiple volunteers and the older one had not gotten any.
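A minimal sketch can illustrate this proximity-and-recency logic. The field names, weights, and scoring formula below are illustrative assumptions, not VolunteerMatch's actual code; the point is only that a score built from distance and age ignores how many volunteers an opportunity has already attracted:

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    distance_km: float  # distance from the user
    age_days: float     # time since the opportunity was posted
    signups: int        # volunteers already recruited

def relevance_score(opp: Opportunity) -> float:
    """Hypothetical proximity/recency score: closer and newer rank higher.
    Note that signups plays no role in the score at all."""
    return 1.0 / (1.0 + opp.distance_km) + 1.0 / (1.0 + opp.age_days)

opps = [
    Opportunity("food bank", distance_km=2, age_days=1, signups=40),
    Opportunity("tutoring", distance_km=10, age_days=30, signups=0),
]
ranked = sorted(opps, key=relevance_score, reverse=True)
# The nearby, recent posting ranks first even though it already has
# 40 signups, while the unfilled opportunity sits lower in the list.
```

Under any scoring of this shape, the same few opportunities stay at the top of every user's results, which is the winner-take-all pattern the researchers observed.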
“Volunteers want to make a contribution through work they enjoy and care about, but they also want to be needed. And if you are volunteer number 50, you are not really needed,” says Manshadi.
“We wanted to tweak the algorithm to better represent all possibilities, and make it more equitable. And the logic of how we did this is very simple. Once an opportunity has received attention and gotten some signups, the algorithm reduces the score it assigns to it. And when you do that, other opportunities move up the ranking.”
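The adjustment Manshadi describes can be sketched as a penalty on the score once an opportunity has signups. Again, the names, the base score, and the penalty form below are illustrative assumptions rather than the platform's actual formula:

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    distance_km: float  # distance from the user
    age_days: float     # time since the opportunity was posted
    signups: int        # volunteers already recruited

def base_score(opp: Opportunity) -> float:
    # Hypothetical proximity/recency relevance, as before.
    return 1.0 / (1.0 + opp.distance_km) + 1.0 / (1.0 + opp.age_days)

def adjusted_score(opp: Opportunity, penalty: float = 0.5) -> float:
    # Discount the score as an opportunity accumulates signups,
    # so opportunities with no volunteers yet move up the ranking.
    return base_score(opp) / (1.0 + penalty * opp.signups)

opps = [
    Opportunity("food bank", distance_km=2, age_days=1, signups=40),
    Opportunity("tutoring", distance_km=10, age_days=30, signups=0),
]
ranked = sorted(opps, key=adjusted_score, reverse=True)
# The heavily subscribed opportunity is discounted, and the one with
# zero signups now appears first.
```

The `penalty` parameter is a hypothetical knob: a small value nudges the ranking gently, while a large one aggressively rotates under-served opportunities to the top.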
The modified algorithm was tested in two of VolunteerMatch’s key markets. The Dallas/Fort Worth region has urban, suburban, and rural areas that allowed the researchers to observe whether there were any differences in how the new algorithm performed in different environments. They also tested it in Southern California, which allowed them to observe differences in two different-sized cities: San Diego and Los Angeles.
They found that with the new algorithm in place, the number of organizations that got at least one volunteer signup through the platform increased by 8-9%. At the outset of the study, the VolunteerMatch staff was concerned that the changes to the algorithm could lower the total number of volunteers who actually signed up for opportunities, but in fact the loss was negligible.
The results were consistent across the different markets, and VolunteerMatch has since deployed the new version of the algorithm nationwide. If the effect is similar on a national scale, the researchers estimate, more than 30,000 volunteer applications would be redirected to opportunities that would otherwise have been overlooked.
“We learned that if you are willing to lose very little efficiency—close to nothing—there are ways to make things more equitable,” says Manshadi, who notes that the winner-take-all effect is pervasive in other algorithms too. She suggests that similar techniques could also be used to make widely used platforms like Amazon or Yelp, which also tend to push users toward a few top results, more equitable.
Manshadi also hopes that the project will serve as a demonstration of how data-driven approaches can increase the impact of nonprofit organizations.
“We approached VolunteerMatch with an open mind. It runs a very impactful and important platform, and we have expertise in improving platform operations. They were open to having a conversation,” says Manshadi.
“Nonprofits have a lot to gain, but many people in the sector may not be aware of the possibilities of leveraging their data, or have the resources to invest in it,” she adds. “And if nonprofits have not had experience with analytics and optimization, they may not know how much it can help. The nonprofit sector’s use of data lags behind that of for-profit organizations, and this is unfortunate. It could really increase their impact—and make a difference in people’s lives.”