AI Disclosure Labels Reduce Trust in True Science Posts While Boosting False Ones


Slapping a label on AI-generated content is the regulatory world’s current favourite answer to the misinformation problem. It is transparent, scalable, required by law in China and under the EU AI Act, and endorsed by Meta and X. The logic seems obvious enough: tell people a machine wrote something and they’ll scrutinise it harder. They didn’t, as it … Read more

Recommendation Algorithms Change How You Learn, Not Just What You See

When people are fed suggested content online, they tend to narrow their focus and learn less.

Recommendation algorithms don’t just show you more of what you like. They actually change what you learn and how confident you feel about it, according to a new study in the Journal of Experimental Psychology: General. Researchers at Ohio State University tested this with 346 people learning to categorize fake aliens: crystal-like creatures with six … Read more

Universities Race To Rewrite Curricula For A World Remade By AI


Generative AI is pushing higher education into a profound moment of reckoning. A new study published in Frontiers of Digital Education examines how rapidly advancing AI systems are reshaping what students must learn and how universities must teach. Led by researchers from Lanzhou Petrochemical University of Vocational Technology and collaborating institutions, the paper argues that … Read more

Deepfakes Just Got Scarier as Researchers Break AI Watermarks

Figure 2: Images watermarked by StableSignature, a non-semantic scheme (top), and StegaStamp, a semantic scheme (bottom). The rightmost images show the differences between the original and watermarked images, corresponding to the changes that encode the watermarks. StableSignature's modifications are restricted to existing (high-frequency) edges such as wrinkles, hair, mustache, and intersections of multiple components. StegaStamp's watermark is distributed across the image; the magnified area shows how it manipulates texture consistency, injecting gradual (low-frequency) changes that manifest as wrinkles at this location.

As deepfake images and videos become ever more convincing, tech companies have turned to invisible watermarks to help identify what’s real. But a new study from researchers at the University of Waterloo shows that even the best digital fingerprints can be erased—silently, universally, and without insider knowledge. Called UnMarker, the technique is the first practical, … Read more

AI Makes Lies Look Like the Truth—and Harder to Spot

#StopTheSteal AIPasta stimuli: profile images, usernames, and handles constructed by Jalbert et al. (2025). The profiles do not represent real users; they were created from stock images and use handles not currently in use.

In a new study published in PNAS Nexus, researchers show that AI-generated paraphrases of disinformation—dubbed “AIPasta”—can make false claims appear more credible and widely shared. Unlike traditional copy-and-paste propaganda, AIPasta boosts perceptions of social consensus while flying under the radar of existing AI-detection tools. When repetition meets AI, falsehoods gain subtle power Repetitive messaging is … Read more