If you read online reviews before purchasing a product or service, you may not be reading the truth. Review sites are becoming targets for “opinion spam” – phony reviews created by sellers to hype their own products and slam the competition.
The bad news: Human beings are lousy at identifying deceptive reviews.
The good news: Cornell researchers are developing computer software that’s pretty good at it. In a test on 800 reviews of Chicago hotels, their software identified deceptive reviews with nearly 90 percent accuracy. In the process, the researchers uncovered some key features that help determine whether a review is spam, and even found parallels between the linguistic structure of deceptive reviews and that of fiction writing.
The work was recently presented by Myle Ott, Cornell doctoral candidate in computer science, at the annual meeting of the Association for Computational Linguistics in Portland, Ore. The other researchers include Claire Cardie, Cornell professor of computer science, Jeff Hancock, Cornell associate professor of communication and information science, and Yejin Choi, a recent Cornell computer science doctoral graduate.
“While this is the first study of its kind, and there’s a lot more to be done, I think our approach will eventually help review sites identify and eliminate these fraudulent reviews,” Ott said.
The researchers created what they believe to be the first benchmark collection of opinion spam by asking 400 people to deliberately write false positive reviews of 20 Chicago hotels. These were compared with an equal number of randomly chosen truthful reviews.
As a baseline, the researchers submitted a subset of reviews to three human judges – volunteer Cornell undergraduates – who scored no better than chance in identifying deception. The three judges did not even agree with one another on which reviews were deceptive, underscoring how unreliable human judgments of deception are. Historically, Ott notes, humans suffer from a “truth bias,” assuming that what they are reading is true until they find evidence to the contrary. When people are trained to detect deception, they become overly skeptical and report deception too often, yet generally still score at chance levels.
The researchers then applied statistical machine learning algorithms to uncover the subtle cues to deception. Deceptive hotel reviews, for example, are more likely to contain language that sets the scene, like “vacation,” “business” or “my husband.” Truth-tellers use more concrete words relating to the hotel, like “bathroom,” “check-in” and “price.” Truth-tellers and deceivers also differ in their use of certain keywords and punctuation, and even in how much they talk about themselves. In agreement with previous studies of imaginative vs. informative writing, deceivers also use more verbs and truth-tellers use more nouns.
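To make those cues concrete, here is a minimal sketch, in Python, of how such surface features might be counted for a single review. The word lists and feature names are illustrative placeholders drawn from the examples above, not the researchers' actual feature set.

```python
import re

# Illustrative word lists based on the examples above (not the study's real features).
SCENE_WORDS = {"vacation", "business", "husband"}    # scene-setting language, more common in fakes
CONCRETE_WORDS = {"bathroom", "check-in", "price"}   # concrete hotel details, more common in truthful reviews
FIRST_PERSON = {"i", "me", "my", "myself"}           # self-references

def review_features(text):
    """Turn one review into a few simple numeric cues."""
    tokens = re.findall(r"[a-z'-]+", text.lower())
    total = max(len(tokens), 1)
    return {
        "scene_rate": sum(t in SCENE_WORDS for t in tokens) / total,
        "concrete_rate": sum(t in CONCRETE_WORDS for t in tokens) / total,
        "self_rate": sum(t in FIRST_PERSON for t in tokens) / total,
        "exclamations": text.count("!"),
    }

print(review_features("My husband and I loved our vacation! The staff was amazing!"))
```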
To evaluate their approach, the researchers trained their algorithms on a subset of the true and false reviews, then tested them on the rest. The best results, they found, came from combining keyword analysis with an analysis of how words are combined in pairs. This combined approach identified deceptive reviews with 89.8 percent accuracy.
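As a rough illustration of that evaluation setup, the sketch below trains a classifier on part of a labeled set and tests it on the rest, with unigram-plus-bigram counts standing in for “keywords plus word pairs.” The tiny placeholder reviews, the scikit-learn tools and the choice of a linear SVM are assumptions for illustration, not the researchers' actual corpus or toolchain.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Placeholder labeled reviews: 1 = deceptive, 0 = truthful.
reviews = [
    "My husband and I had a wonderful vacation, best hotel ever!",
    "Check-in was quick, the bathroom was clean, and the price was fair.",
    "Perfect for a business trip, the staff treated us like royalty!",
    "The room was small but the location and price made up for it.",
]
labels = [1, 0, 1, 0]

# Unigrams + bigrams capture both single keywords and how words are paired.
vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(reviews)

# Train on half the reviews, test on the other half.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, stratify=labels, random_state=0
)
clf = LinearSVC().fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```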
Ott cautions that the work so far is only validated for hotel reviews, and for that matter, only reviews of hotels in Chicago. The next step, he said, is to see if the techniques can be extended to other categories, starting perhaps with restaurants and eventually moving to consumer products. He also wants to look at negative reviews.
“Ultimately, cutting down on deception helps everyone,” Ott said. “Customers need to be able to trust the reviews they read, and sellers need feedback on how best to improve their services.”
Review sites might use this kind of software as a “first-round filter,” Ott suggested. If one particular hotel gets a lot of reviews that score as deceptive, the site will know to investigate further.
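As a purely illustrative sketch of that filtering idea, the snippet below flags hotels whose reviews are scored as deceptive more often than some threshold; the data, threshold and function name are made up for the example.

```python
from collections import defaultdict

def flag_suspicious_hotels(scored_reviews, threshold=0.4):
    """scored_reviews: iterable of (hotel_id, is_deceptive) pairs,
    where is_deceptive is the classifier's prediction for one review."""
    counts = defaultdict(lambda: [0, 0])  # hotel_id -> [deceptive count, total count]
    for hotel_id, is_deceptive in scored_reviews:
        counts[hotel_id][0] += int(is_deceptive)
        counts[hotel_id][1] += 1
    return [h for h, (d, n) in counts.items() if d / n > threshold]

# Example: hotel "A" has 3 of 4 reviews flagged as deceptive, so it gets surfaced for review.
print(flag_suspicious_hotels([("A", 1), ("A", 1), ("A", 1), ("A", 0), ("B", 0)]))
```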