Honey Beats Vinegar in Fighting Online Toxicity, Study Shows

When someone posts something offensive in a comment section, your first instinct might be to fire back with an insult. But Cornell University researchers have discovered that taking the moral high road actually works better at cleaning up online discourse.

In a comprehensive analysis of over 8,500 comments from YouTube and Twitter, scientists identified seven distinct ways people try to shut down objectionable content online. While personal attacks dominated the battlefield of public comment sections, making up nearly half of all objections, the researchers found something surprising when they tested different approaches on simulated social media platforms.

“Many people tend to think it’s only a small minority that comment on public social media posts like news stories, and while that might be true on the whole, many more people do read these comments,” said Ashley Shea, a Ph.D. candidate in the field of communication.

Comments that encouraged offenders to apologize rather than shaming them received more community support and were viewed as more effective at achieving justice. This “restorative justice” approach proved more successful than the typical internet response of attacking someone’s character or credibility.

The Anatomy of Online Arguments

The research team spent months in fall 2021 dissecting comment threads on trending news videos, cataloging exactly how people attempt to police each other’s speech. Only about 10% of all comments qualified as objections to someone else’s post, but within that slice, the patterns were stark.

Personal attacks emerged as the weapon of choice, accounting for 42% to 45% of objections across different samples. These ranged from direct name-calling to more subtle character assassinations. Physical threats ranked among the least common tactics, appearing in less than 2% of objections.

The study revealed that comment sections function as informal battlegrounds where people with opposing viewpoints clash over values and facts. Trending topics naturally draw diverse audiences beyond ideological echo chambers, creating fertile ground for conflict.

When Redemption Trumps Retribution

The second phase of research, led by former Social Media Lab member Pengfei Zhao, tested how different intervention styles actually performed with online communities. Participants viewed comments that either sought to punish offenders through public shaming or encouraged them to reflect and apologize.

“If people think that the offender cannot change their future wrongdoing,” Zhao said, “then appealing to moral values, inviting them to apologize, won’t work. Then people will resort to retribution; they think this person deserves punishment.”

The twist came in the results. While punishment-focused comments were more common in the wild, healing-oriented responses earned significantly more upvotes and community support. People reported greater satisfaction with discussions when moral appeals were used instead of character attacks.

But there was a crucial caveat: when community members viewed an offender as fundamentally incapable of change, the dynamic flipped. In those cases, punitive responses gained favor as people concluded that rehabilitation was pointless.

The findings challenge conventional wisdom about online behavior modification. Rather than matching fire with fire, the research suggests that appeals to better angels may be more effective at fostering constructive dialogue.

The researchers trained crowd workers on Amazon’s Mechanical Turk to identify different objection tactics, finding that people could learn to recognize these patterns with proper instruction. However, the training proved costly, with nearly 40% of participants failing to meet quality standards.

As social media platforms struggle with content moderation at scale, understanding grassroots intervention tactics becomes increasingly important. The study suggests that organic community responses might play a larger role in shaping online discourse than previously recognized.

Both studies were supported by grants from the National Science Foundation, reflecting growing institutional interest in understanding and improving digital communication. The research offers hope that even in the seemingly lawless frontier of internet comment sections, civility might have a fighting chance against vitriol.

PLOS One: DOI 10.1371/journal.pone.0328550
