
Researchers test detection methods for AI-generated content

Artificial intelligence text generators can craft writing that convincingly mimics a human author, improving the user experience in applications such as medical chatbots, online customer service and virtual psychotherapy.

Oftentimes, these realistic texts are difficult to distinguish from those written by humans. And as the techniques become more sophisticated and prevalent, so do the opportunities for fraudulent use.

In an effort to combat malicious uses of these neural text generators, such as an adversary generating fake news to spread on social media, researchers at Penn State's College of Information Sciences and Technology analyzed eight state-of-the-art natural language generators (NLGs) to determine whether each has a distinct writing style that machine classifiers can detect.

“Imagine the near future where we are surrounded by thousands of different machine writers, and each is trained with different initial data and capable of producing a high-quality text, whether it is a news article, SAT essay or market analysis report, with subtly different styles,” said Dongwon Lee, associate professor of information sciences and technology and principal investigator of the project. “Then, it becomes critically important to be able to tell first, which text is written by a machine writer and which is by a human writer; and second, which text is written by which machine writer among thousands of candidate machine writers. Our project, Reverse Turing Test, is trying to address these challenges.”

The standard test for distinguishing machine-generated text from human-written text, the Turing Test, may not be sufficient, according to the researchers. A more useful approach, they argue, is authorship attribution, which seeks to identify which of many candidate natural language generation methods produced a given text.

The researchers investigated three versions of authorship attribution among one human writer and eight neural machine generators: CTRL, GPT, GPT2, GROVER, XLM, XLNET, PPLM and FAIR. They collected more than 1,000 recently published political news headlines and articles from media outlets to represent human-written text, then gave each generator an identical prompt drawn from a current political headline and generated the same number of articles from each. They also developed five simple neural models to serve as baselines, which they compared against existing machine classifiers in three experiments.
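As a rough illustration of that data-collection setup, the sketch below prompts one publicly available generator, GPT-2, through the Hugging Face pipeline API with a placeholder headline. The headline text and the generation settings are assumptions for illustration, not the exact prompts or decoding parameters used in the study.

```python
# Minimal sketch (not the authors' code): produce a machine-written article
# from a shared headline prompt using one off-the-shelf generator.
from transformers import pipeline, set_seed

set_seed(0)  # make the illustrative output repeatable
generator = pipeline("text-generation", model="gpt2")

# Placeholder headline; the study fed each of its eight generators the same prompt.
headline = "Placeholder political headline used as the shared prompt"

result = generator(headline, max_new_tokens=200, num_return_sequences=1)
article = result[0]["generated_text"]
print(article)
```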

In their experiments, the researchers first tested several detection models on whether two given texts were produced by the same method, machine or human. Next, they tested each model's ability to tell human writing from machine writing. Finally, they tested how well the models could exploit both similarities within and differences across human and machine writing to identify which NLG method produced a particular text.
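That final task treats attribution as a multi-class classification problem over nine candidate writers, one human plus eight generators. The sketch below shows only that framing: the corpus loader, the label names and the TF-IDF-plus-logistic-regression pipeline are illustrative stand-ins, not the five neural baselines from the paper.

```python
# Illustrative sketch of authorship attribution as multi-class classification.
# NOT the authors' code: load_corpus() and the model choice are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def load_corpus():
    """Hypothetical loader: parallel lists of article texts and writer labels
    such as 'human', 'ctrl', 'gpt', 'gpt2', 'grover', 'xlm', 'xlnet', 'pplm', 'fair'."""
    texts = [
        "placeholder human-written political article ...",
        "placeholder GPT2-generated political article ...",
        "another placeholder human-written article ...",
        "another placeholder GPT2-generated article ...",
    ]
    labels = ["human", "gpt2", "human", "gpt2"]
    return texts, labels

texts, labels = load_corpus()

# Word n-gram TF-IDF features feed a linear classifier; each class's learned
# weights serve as a rough stand-in for that writer's "signature".
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Predict which candidate writer most likely produced a new, unseen article.
print(model.predict(["unseen article text to attribute ..."]))
```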

“Knowing each generator’s writing signature or style moves us closer to quenching the security threats that they may introduce,” the researchers write in their paper.

Through these comprehensive experiments, the researchers found that not all neural text generation methods produce high-quality, human-mimicking text "yet." Output from most of the generators could be identified as machine-written simply by examining word counts and other linguistic features of the text.

“We figured out that the machine texts in general aren’t as sophisticated as human beings, as you well know,” said Adaku Uchendu, doctoral student of information sciences and technology. “When humans write, they don’t use the word ‘the’ at the start of multiple consecutive sentences; they’re going to use some transition word. Machines don’t really do that, they go straight to the point.”
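As an illustration of the kind of surface cues such a detector can pick up, the snippet below computes a few simple stylometric features, including how often consecutive sentences both open with "the." The specific features are illustrative assumptions, not the exact feature set used in the paper.

```python
# Illustrative stylometric features of the kind a simple detector can use.
# The feature choices here are assumptions, not the paper's exact feature set.
import re

def simple_style_features(text: str) -> dict:
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())

    # Count pairs of consecutive sentences that both start with "the" --
    # the kind of repetition Uchendu notes is rare in human writing.
    starts = [s.split()[0].lower() for s in sentences if s.split()]
    repeated_the = sum(
        1 for a, b in zip(starts, starts[1:]) if a == "the" and b == "the"
    )

    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        "consecutive_the_starts": repeated_the,
    }

print(simple_style_features(
    "The senator spoke on Tuesday. The vote was delayed. Lawmakers objected."
))
```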

However, other NLG models — such as GROVER, GPT2, and FAIR — turned out to be much more difficult to distinguish and often confused machine classifiers in all three experiments.

“We do know that as more language models are made, including the more sophisticated ones like GPT3 that is passing everyone’s expectations, it’s going to be harder to figure out if a machine generated an article,” said Uchendu. “So, we have to improve our detection models even further.”

She added, “The ultimate goal of this work would be to [hopefully] enforce some kind of disclaimer on an article that states that it is machine generated. If people are aware, then they can do more fact checking, when an article is machine generated, for instance.”

Also collaborating on the project were Thai Le, a doctoral student of information sciences and technology at Penn State, and Kai Shu, an assistant professor at the Illinois Institute of Technology. They presented their work at the flagship natural language processing conference, Empirical Methods in Natural Language Processing (EMNLP), in November 2020.




The material in this press release comes from the originating research organization. Content may be edited for style and length.