Mislove says the skew may not be a problem for marketing ads, or for political ads meant to raise money by appealing to a campaign’s base. But it is a problem for an ad intended to change the mind of a voter who isn’t already on board with the message.

In one case, the researchers found that when they targeted an audience of users defined by Facebook as having “likely engagement with US political content (Liberal)” and an equal audience of users with “likely engagement with US political content (Conservative),” 60 percent of the liberal users saw the researchers’ Democratic ads, while only 25 percent saw the Republican ads.

In another ad run, the researchers pushed out Sanders and Trump ads at the same time to a conservative audience. All else being equal, the Trump ad was delivered to 21,792 conservative Facebook users, while the Sanders ad reached 17,964, roughly 18 percent fewer people.

The researchers also found that a political advertiser who wanted to overcome this ideological divide had to pay more for the ad. In the most extreme cases, that meant paying two to three times as much, Mislove says.

When the researchers sent out a neutral ad encouraging people to register to vote, it reached a much more balanced mix of liberal and conservative Facebook users, even though all other ad parameters were the same.

For Mislove, the results illustrate a broader problem in society today—the sheer amount of influence that unseen and unregulated algorithms have on everything we do.

“Whether you’re browsing Facebook or using Google Maps, there’s an algorithm that’s optimizing everything you see online,” he says. “And there’s very little accountability, and very little transparency, about how these algorithms determine what that optimization looks like. What I’m thinking about is how we can measure these things, and how we can audit them.”
