In the age of generative AI and large language models (LLMs), inauthentic content and misinformation on social media platforms have become increasingly sophisticated and pervasive.
Malicious actors, often working as part of state-sponsored information operations (IOs), employ tactics such as hashtag hijacking, artificial amplification of misleading content, and mass resharing of propaganda to sway public opinion during major geopolitical events. Combating these IOs is crucial, and researchers at the University of Southern California (USC) and the Massachusetts Institute of Technology (MIT) are developing innovative solutions to address the problem.
Unmasking Coordinated Influence Campaigns on Social Media
USC Information Sciences Institute (ISI) researcher Luca Luceri is co-leading an effort funded by the Defense Advanced Research Projects Agency (DARPA) to identify and characterize influence campaigns on social media. In his recent paper, “Unmasking the Web of Deceit: Uncovering Coordinated Activity to Expose Information Operations on Twitter,” Luceri and his team propose a suite of unsupervised and supervised machine learning models that can detect orchestrated influence campaigns from different countries on the platform X (formerly Twitter).
By examining a comprehensive dataset of 49 million tweets from verified campaigns originating in six countries, the researchers identified five key sharing behaviors that IO drivers engage in: co-retweeting, co-URL sharing, hashtag sequence, fast retweeting, and text similarity. The team constructed a unified similarity network, called the Fused Network, that captures this broader range of coordinated sharing behaviors, then applied machine learning algorithms to classify accounts based on their similarities and predict their future participation in IOs.
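To make the idea concrete, here is a minimal sketch of how behavior-specific similarity networks might be fused into a single network. The five behaviors and the fused-network concept come from the paper; the averaging fusion rule, the thresholding step, and all names below are illustrative assumptions, not the authors' implementation.

```typescript
// Sketch: fuse per-behavior account-similarity networks into one "Fused
// Network". All identifiers and the fusion rule are assumptions for
// illustration; the paper's actual method may differ.

type Behavior =
  | "coRetweet"        // co-retweeting
  | "coUrl"            // co-URL sharing
  | "hashtagSequence"  // hashtag sequence
  | "fastRetweet"      // fast retweeting
  | "textSimilarity";  // text similarity

type AccountId = string;
// Undirected weighted edges keyed as "a|b", with weights in [0, 1].
type SimilarityNetwork = Map<string, number>;

function edgeKey(a: AccountId, b: AccountId): string {
  return a < b ? `${a}|${b}` : `${b}|${a}`;
}

// Fuse the per-behavior networks by averaging edge weights; an edge missing
// from a behavior network contributes 0 for that behavior.
function fuseNetworks(
  networks: Map<Behavior, SimilarityNetwork>,
): SimilarityNetwork {
  const fused: SimilarityNetwork = new Map();
  for (const net of networks.values()) {
    for (const [edge, weight] of net) {
      fused.set(edge, (fused.get(edge) ?? 0) + weight / networks.size);
    }
  }
  return fused;
}

// Keep only strong ties; the surviving edges connect candidate coordinated
// accounts that a downstream supervised classifier could label as IO drivers.
function pruneWeakEdges(
  fused: SimilarityNetwork,
  threshold = 0.5,
): SimilarityNetwork {
  return new Map([...fused].filter(([, weight]) => weight >= threshold));
}

// Example: two accounts that co-retweet and fast-retweet in near lockstep.
const networks = new Map<Behavior, SimilarityNetwork>([
  ["coRetweet", new Map([[edgeKey("acct1", "acct2"), 0.9]])],
  ["fastRetweet", new Map([[edgeKey("acct1", "acct2"), 0.8]])],
]);
console.log(pruneWeakEdges(fuseNetworks(networks)));
```

The appeal of fusing the networks before classification is that an account pair may look only weakly coordinated under any single behavior yet clearly coordinated when the behaviors are combined.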
Empowering Users to Assess Misinformation
While Luceri’s work focuses on platform-level detection of influence campaigns, MIT Professor David Karger and his former student Farnaz Jahanbakhsh SM ’21, PhD ’23, have proposed a decentralized approach that empowers individual users to flag misinformation and identify others they trust to assess online content. Their solution, the Trustnet browser extension, is platform-agnostic and works for any content on any website, including posts on social media sites, articles on news aggregators, and videos on streaming platforms.
The Trustnet Extension allows users to assess content accuracy and specify trusted users whose assessments they want to see. The extension automatically checks all links on the page a user is reading and places indicators next to links that have been assessed by trusted sources, fading the text of links to content deemed inaccurate.
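The following content-script sketch illustrates that behavior: look up assessments for each link on a page, badge the links that trusted users have assessed, and fade links to content judged inaccurate. The `Assessment` shape, `fetchAssessment`, and the demo data are assumptions made for illustration; the real Trustnet Extension's internals may differ.

```typescript
// Illustrative sketch of the link-annotation behavior described above.
// Names and data here are hypothetical, not the extension's actual API.

type Verdict = "accurate" | "inaccurate";

interface Assessment {
  verdict: Verdict;
  assessedBy: string; // a user the reader has chosen to trust
}

// Stand-in for the extension's real lookup of trusted users' assessments.
const demoAssessments = new Map<string, Assessment>([
  ["https://example.com/dubious-claim", { verdict: "inaccurate", assessedBy: "alice" }],
]);

async function fetchAssessment(url: string): Promise<Assessment | null> {
  return demoAssessments.get(url) ?? null;
}

async function annotateLinks(): Promise<void> {
  const links = document.querySelectorAll<HTMLAnchorElement>("a[href]");
  for (const link of links) {
    const assessment = await fetchAssessment(link.href);
    if (!assessment) continue;

    // Place an indicator next to links a trusted source has assessed.
    const badge = document.createElement("span");
    badge.textContent = assessment.verdict === "inaccurate" ? " ⚠" : " ✓";
    badge.title = `Assessed as ${assessment.verdict} by ${assessment.assessedBy}`;
    link.after(badge);

    // Fade the text of links to content deemed inaccurate.
    if (assessment.verdict === "inaccurate") {
      link.style.opacity = "0.4";
    }
  }
}

annotateLinks();
```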
A two-week study conducted by the researchers found that untrained individuals could use the tool effectively to assess misinformation, and participants reported that being able to assess content, and to see assessments from others they trust, helped them think more critically about what they read.
As researchers continue to develop solutions to combat online misinformation, both platform-level detection of influence campaigns and user-empowering tools like the Trustnet Extension will play crucial roles in protecting users from the flood of inauthentic content. By combining the efforts of researchers at institutions like USC and MIT, we can work toward a future where trustworthy information prevails and users are better equipped to navigate the complex digital landscape.