
Algorithms Can Pull Us Apart. This Tool Shows They Can Bring Us Back

A small tweak to your social media feed can make your opponents feel a little less like enemies. In a new study published in Science, a Stanford-led team used a browser extension and a large language model to rerank posts on X during the 2024 U.S. presidential campaign, showing that changing the visibility of the most hostile political content can measurably dial down partisan heat without deleting a single post or asking the platform for permission.

The experiment, run with 1,256 Democrats and Republicans who used X in the weeks after the attempted assassination of Donald Trump and the withdrawal of Joe Biden from the race, targeted a particular kind of content. The researchers focused on posts expressing antidemocratic attitudes and partisan animosity, such as cheering political violence, rejecting bipartisan cooperation, or suggesting that democratic rules are expendable when they get in the way of one's preferred party.

A Browser Extension To Reroute Outrage

To reach inside a platform they did not control, first author Tiziano Piccardi and colleagues built a browser extension that quietly intercepted the web version of the X timeline. Every time a participant opened the "For You" feed, the extension captured the posts, sent them to a remote backend, and had a large language model score each political post on eight dimensions of antidemocratic attitudes and partisan animosity.
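The article does not reproduce the team's pipeline code, but the flow it describes (capture the timeline, ship posts to a backend, have a model score each one) is easy to sketch. In the Python sketch below, `call_llm` is a hypothetical stand-in for whatever model endpoint the backend actually used, and the dimension labels are illustrative paraphrases, not the paper's exact rubric.

```python
import json

# Hypothetical stand-in for the study's remote LLM backend; swap in any
# chat-style model endpoint. The paper's actual model and prompt are not
# given in the article.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to a model endpoint")

# Illustrative labels for the eight scored dimensions of antidemocratic
# attitudes and partisan animosity; paraphrased, not the paper's wording.
DIMENSIONS = [
    "partisan animosity",
    "support for partisan violence",
    "support for undemocratic practices",
    "support for undemocratic candidates",
    "opposition to bipartisanship",
    "biased evaluation of politicized facts",
    "social distrust of the other party",
    "social distance from the other party",
]

def score_post(text: str) -> dict[str, bool]:
    """Ask the model whether a post expresses each of the eight dimensions.

    Assumes the model returns valid JSON; production code would validate
    the output and retry on failure.
    """
    prompt = (
        "Return a JSON object mapping each label below to true or false, "
        "indicating whether the post expresses it.\n"
        f"Labels: {json.dumps(DIMENSIONS)}\n"
        f"Post: {text}"
    )
    return json.loads(call_llm(prompt))
```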

If a post hit at least four of those eight dimensions, it was flagged as the kind of content most likely to inflame. The tool then reordered the feed for consenting users in real time. In one experiment, it pushed those posts down the feed so participants had to scroll further to reach the worst material. In a parallel experiment, it did the opposite and pulled that content higher.
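Given per-post scores, the flag-and-reorder step takes only a few lines. The sketch below reuses the hypothetical `score_post` helper from above; the four-of-eight threshold comes from the article, while the choice to preserve the relative order of posts within each group is our assumption, not a detail the article gives.

```python
def is_hostile(text: str, threshold: int = 4) -> bool:
    """Flag a post when it hits at least `threshold` of the eight dimensions."""
    return sum(score_post(text).values()) >= threshold

def rerank(posts: list[str], *, demote: bool = True) -> list[str]:
    """Reorder a feed around its flagged posts.

    demote=True sinks flagged posts toward the bottom (the reduced-exposure
    condition); demote=False floats them to the top (increased exposure).
    Posts keep their original relative order within each group either way.
    """
    scored = [(post, is_hostile(post)) for post in posts]
    calm = [post for post, hostile in scored if not hostile]
    hot = [post for post, hostile in scored if hostile]
    return calm + hot if demote else hot + calm
```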

“Social media algorithms directly impact our lives, but until now, only the platforms had the ability to understand and shape them,” said Michael Bernstein, a professor of computer science in Stanford’s School of Engineering and the study’s senior author. “We have demonstrated an approach that lets researchers and end users have that power.”

The intervention was subtle on purpose. Participants were not told exactly how their feeds might change, and most later reported that they did not notice any systematic difference. Still, the algorithm was busy underneath. In the reduced-exposure condition, the tool downranked an average of about 85 hostile posts per person per day. In the increased-exposure condition, it selectively promoted fewer but more intense posts that scored higher on the hostility scale.

Two Degrees Warmer, Or Colder, Toward The Other Side

The question was not whether people would like their feeds more. It was whether this targeted reranking would change how they felt about the people on the other side of the political aisle.

Before and after the 10-day study, participants rated their feelings toward the opposing party on a 0 to 100 feeling thermometer. In the group whose hostile content was pushed down the feed, warmth toward the opposing party rose by just over 2 points. In the group whose hostile content was pulled up the feed, warmth dropped by a similar margin. The researchers note that this shift is comparable to roughly three years of change in nationwide affective polarization.

“When the participants were exposed to less of this content, they felt warmer toward the people of the opposing party,” said Piccardi, now an assistant professor of computer science at Johns Hopkins University. “When they were exposed to more, they felt colder.”

Those shifts were bipartisan. Democrats and Republicans responded in similar ways when their feeds were tilted away from or toward antidemocratic and hostile posts. And when the team probed which kinds of hostility mattered most, posts that attacked bipartisanship, showed overt animosity, or twisted politicized facts were especially predictive of colder feelings toward the out party.

The feed changes also touched emotion in the moment, although not in a way that obviously rewired people days later. In brief in-feed surveys, participants who saw less hostile content reported less anger and sadness while using X. Those pushed toward more of that content reported more anger and sadness. Over the longer term, off-platform emotion reports did not show the same signal, suggesting that the emotional effects may be sharp but short-lived.

Algorithmic Power, Minus Platform Permission

The work is partly a methods paper in disguise. By proving that a browser extension and a large language model can rerank feeds in real time without platform cooperation, the team is trying to hand some power back to outside researchers and, eventually, to users themselves. Instead of waiting for platforms to run carefully negotiated experiments, independent teams can prototype ranking systems that prioritize healthier democratic outcomes, mental health, or other social goals.

There are tradeoffs. In the reduced-exposure experiment, participants spent somewhat less time on the platform and saw and liked fewer posts overall, even though they engaged at slightly higher rates with what they did see. That pattern matches a hard reality for engagement-driven business models: content that stokes anger and contempt is very good at keeping people scrolling.

For now, the study stops short of claims about permanent attitude change or how this kind of reranking would play out over months or years. But it offers something that has been rare in the debate over social media and democracy: a clean causal test of how one specific algorithmic choice changes the way citizens feel about one another. It hints at a future in which the knobs on those algorithms are not locked inside the platforms, and in which a user who wants less antidemocratic bile in their feed might finally get a meaningful say.

Journal: Science
Article title: Reranking partisan animosity in algorithmic social media feeds alters affective polarization
DOI: 10.1126/science.adu5584



