Now researchers led by Penn biologist Joshua B. Plotkin and the University of Houston’s Alexander J. Stewart have identified another impediment to democratic decision making, one that may be particularly relevant in online communities.
In what the scientists have termed “information gerrymandering,” it’s not geographical boundaries that confer a bias but the structure of social networks, such as social media connections.
Reporting in the journal Nature, the researchers first predicted the phenomenon from a mathematical model of collective decision making, and then confirmed its effects by conducting social network experiments with thousands of human subjects. Finally, they analyzed a variety of real-world networks and found examples of information gerrymandering present on Twitter, in the blogosphere, and in U.S. and European legislatures.
“People come to form opinions, or decide how to vote, based on what they read and who they interact with,” says Plotkin. “And in today’s world we do a lot of sharing and reading online. What we found is that information gerrymandering can induce a strong bias in the outcome of collective decisions, even in the absence of ‘fake news.’
“This tells us that we need to be cautious about relying on social media for communication because the network structure is not under our control and yet it can distort our collective decisions.”
The researchers’ analysis revealed that information gerrymandering could easily produce biases of 20%. In other words, a group that was evenly split into two parties could nonetheless arrive at a 60-40 decision due solely to information gerrymandering.
“The idea is akin to electoral gerrymandering, where one party can gain an advantage not by sheer numbers but by deciding who votes in which district,” Plotkin says.
The question of whether this influence could lead to biased outcomes was one that felt particularly salient to Plotkin, given concerns about how the flow of information has been changed by social media.
“Right now, we need research about the effects of social media on the health of liberal democracies,” he says.
To begin, the researchers built a simple game in which players were assigned to competing groups, or parties. Placed on a network that determined whose voting intentions each person could see, players were incentivized so that the best outcome would be for their party to “win” the election. The second-best outcome would be for the other party to win, and the worst result would be deadlock.
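The study’s actual model involves richer strategic play than can be shown here, but a minimal toy simulation conveys the setup: players on a network repeatedly revise their stated voting intention based on what their neighbors intend. The sketch below is an illustrative Python version in which players simply adopt the majority intention among the neighbors they can see; the function names, the ring-network example, and the update rule are assumptions for illustration, not the authors’ implementation.

```python
import random

def simulate_vote(edges, party, rounds=200, seed=0):
    """Toy dynamics: a randomly chosen player adopts the majority
    intention among the players they can see (neighbors plus self).
    edges: dict mapping each player to the players they can see.
    party: dict mapping each player to 'yellow' or 'purple'."""
    rng = random.Random(seed)
    intention = dict(party)  # everyone starts by backing their own party
    for _ in range(rounds):
        player = rng.choice(list(edges))
        visible = [intention[n] for n in edges[player]] + [intention[player]]
        yellow = visible.count('yellow')
        purple = len(visible) - yellow
        if yellow != purple:  # ties leave the intention unchanged
            intention[player] = 'yellow' if yellow > purple else 'purple'
    tally = list(intention.values())
    return tally.count('yellow'), tally.count('purple')

# Hypothetical example: 8 players, evenly split, arranged on a ring.
players = range(8)
party = {p: 'yellow' if p < 4 else 'purple' for p in players}
edges = {p: [(p + 1) % 8, (p - 1) % 8] for p in players}
print(simulate_vote(edges, party))  # final (yellow, purple) vote counts
```

The interesting experiments come from changing `edges` while holding the party split fixed: the same 50-50 population can produce very different vote tallies depending on who sees whom.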
“What we found in a nutshell,” says Plotkin, “is that, even when two parties have an equal number of members and everything seems fair—everyone in the network is equally influential—the structure of the social network can still bias the outcome toward one party or another.”
The reason has to do with the way that the two parties interact with each other. When members of a single party are talking almost exclusively to one another and not across party lines, it can lead to what is known online as a filter bubble, where someone’s views are reinforced by those around them. Put two such groups together, each on the opposite side of an issue, and deadlock ensues.
When information is gerrymandered, however, a few members of one party end up in a conversation dominated by members of the other party. There, they have the opportunity to persuade the other side, or to be persuaded.
“The party at a disadvantage,” Plotkin explains, “is the one that has divided its influence—with most of its members talking only to their own party, while a few of its members interact in bubbles dominated by the other party, where they are likely to be flipped.”
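The published paper quantifies this asymmetry with a measure it calls influence assortment; the exact definition there differs, but a rough toy version of the same idea, reusing the `edges` and `party` structures from the sketch above, might look like the following.

```python
def influence_assortment(edges, party, player):
    """Share of the information `player` sees (neighbors plus self)
    that comes from their own party, rescaled to [-1, 1]:
    +1 = a pure own-party bubble, -1 = entirely surrounded by the
    other party. A loose stand-in for the paper's measure."""
    visible = list(edges[player]) + [player]
    same = sum(1 for p in visible if party[p] == party[player])
    return 2 * same / len(visible) - 1

def mean_assortment_by_party(edges, party):
    """Average assortment per party. In this toy measure, the party
    with the lower mean has 'divided its influence': many members in
    safe bubbles, a few stranded where they are likely to be flipped."""
    totals, counts = {}, {}
    for p, side in party.items():
        totals[side] = totals.get(side, 0) + influence_assortment(edges, party, p)
        counts[side] = counts.get(side, 0) + 1
    return {side: totals[side] / counts[side] for side in totals}
```

Comparing the two parties’ averages gives a simple signal of information gerrymandering: when the network is fair the means match, and a gap between them marks the disadvantaged party.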
Working with coauthor David Rand at the Massachusetts Institute of Technology and colleagues, the team conducted more than 100 online experiments with more than 2,500 human subjects to test the effects of information gerrymandering. The games entailed the same scenario as the mathematical model: Teams of 12 players each were assigned to “vote” for either the yellow party or the purple party and incentivized to favor their assigned party with consensus as a second-best outcome. The experiments varied the structure of the social network and confirmed the predicted effects of information gerrymandering on vote outcomes.
“We can swing the final vote in these experimental games by 20% or more just by the structure of the social network,” Plotkin says. “Even if one party has a 2-to-1 size advantage, we predict the minority party can win a majority of votes through information gerrymandering.”
Curious whether they could induce information gerrymandering using automated bots, the researchers also inserted “zealot bots” that refuse to compromise. Sure enough, placing just a few zealots appropriately was enough to induce information gerrymandering and undemocratic outcomes.
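In the toy simulation above, a zealot is simply a node whose intention never updates; the self-contained sketch below shows that one change. How the experiments actually implemented and placed the bots was more deliberate than this, so treat the placement here as the caller’s assumption.

```python
import random

def simulate_with_zealots(edges, party, zealots, rounds=200, seed=0):
    """Same toy dynamics as the earlier sketch, except players in
    `zealots` never update: they repeat their party line regardless
    of what they see. Their presence still shapes what others see."""
    rng = random.Random(seed)
    intention = dict(party)
    updatable = [p for p in edges if p not in zealots]
    for _ in range(rounds):
        player = rng.choice(updatable)
        visible = [intention[n] for n in edges[player]] + [intention[player]]
        yellow = visible.count('yellow')
        purple = len(visible) - yellow
        if yellow != purple:
            intention[player] = 'yellow' if yellow > purple else 'purple'
    tally = list(intention.values())
    return tally.count('yellow'), tally.count('purple')
```

Even a couple of well-placed zealots can tip which intentions cascade through the persuadable players, mirroring in miniature the undemocratic outcomes the researchers observed.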
To assess real-world networks for the presence of information gerrymandering, the researchers analyzed data on bill co-sponsorship in the U.S. Congress as well as European legislatures and networks of social media users participating in political discussion.
They found that information gerrymandering was extremely common in these real-world networks.
The researchers see this as the beginning of a new avenue of study focused on how social networks impact collective decision making.
“There has been a lot of attention on fake news and online trolls, which are certainly disruptive,” says Plotkin. “What we’re studying is something different, which depends on the overall network structure—a more subtle but possibly more pernicious problem for democratic decision making.”
Joshua B. Plotkin is a professor in the Department of Biology in the University of Pennsylvania School of Arts and Sciences. He has secondary appointments in the Department of Mathematics and in the School of Engineering and Applied Science’s Department of Computer and Information Science.
Alexander J. Stewart is an assistant professor at the University of Houston.
Plotkin and Stewart coauthored the work with Mohsen Mosleh, Antonio Arechar, and David G. Rand of the Massachusetts Institute of Technology and Marina Diakonova of the University of Oxford.
The study was supported by the Defense Advanced Research Projects Agency NGS2 program (Grant D17AC00005), the Ethics and Governance of Artificial Intelligence Initiative of the Miami Foundation, the Templeton World Charity Foundation, the Army Research Office (Grant W911NF-17-1-0083), and the David and Lucile Packard Foundation.