First-of-its-kind study shows modeling can be used to rate courtroom psychiatric experts’ performance

Court cases across America often feature expert testimony that offers conflicting conclusions. When this happens in cases involving psychiatric expertise, does it mean that one side or the other is necessarily being less than honest?

A new study from the University of Cincinnati College of Law says the answer is no, and, for the first time, offers up mathematical modeling methods to back up that conclusion.

The study — led by Douglas Mossman, MD, director of the UC College of Law’s Glenn M. Weaver Institute of Law and Psychiatry and the forensic psychiatry fellowship at the UC College of Medicine — showed that a group of psychiatrists who evaluated mental competence from case files of 156 criminal defendants performed at a strikingly high level of accuracy.

In an average of 29 out of every 30 cases, the psychiatrists could distinguish competent defendants from incompetent defendants. That level of accuracy exceeds standard diagnostic performance in other areas of medicine, such as spotting breast cancer on mammograms or detecting Alzheimer’s disease with advanced imaging.

The finding also points to a basic truth of the justice system: even with something as seemingly definitive as expert testimony, ultimate decisions still come down to judgment calls.

“These results help us see how courtroom experts can be quite accurate in distinguishing competence from incompetence, but still reach different conclusions,” says Mossman of the study, which was published online in “Law and Human Behavior,” the journal of the American Psychology-Law Society. “It’s a matter of where experts draw the line on the issue of competence.”

Continues Mossman: “Experts may disagree with each other even though they are very good at making all the right distinctions. You’re apt to get disagreement when you ask experts for a ‘yes’ or ‘no’ answer, as the courts do, on issues that can have gray areas, like competence to stand trial.”
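As a toy illustration of that point (with hypothetical numbers, not figures from the study), here is a short Python sketch of two equally skilled experts who reach the same probability judgment but draw the line in different places when forced to give a yes-or-no answer:

```python
# Two equally skilled experts evaluating the same defendant.
# Both reach the same probability that the defendant is incompetent;
# they differ only in where they draw the line for a yes/no opinion.
p_incompetent = 0.55   # hypothetical shared probability judgment

threshold_a = 0.50     # Expert A opines "incompetent" above 50 percent
threshold_b = 0.60     # Expert B demands stronger evidence before saying so

opinion_a = "incompetent" if p_incompetent > threshold_a else "competent"
opinion_b = "incompetent" if p_incompetent > threshold_b else "competent"

print(opinion_a, opinion_b)  # incompetent competent: disagreement, not error
```

Neither expert has made a mistake; the forced binary answer simply lands on opposite sides of two defensible cutoffs.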

Many people assume that when experts disagree, it’s because they are merely “hired guns” who testify to whatever opinion they are paid to advance. The methods used in the new study dispute that assumption, and may also provide clear evidence supporting the abilities and skills of mental health experts.

“When opposing experts disagree, courtroom cross-examination often becomes an intensive effort to question the integrity of psychiatric diagnoses and to discredit all mental health expertise,” says Mossman, who worked with colleagues from Wright State University’s Boonshoft School of Medicine and the University of Wisconsin School of Medicine and Public Health on the study.

The problem is there is no independent, infallible “gold standard” to establish conclusions in forensic psychiatry, as there is in most other areas of medicine.

“If there were some way, however, to measure accuracy without a ‘gold standard,’ mental health experts might be more credible,” Mossman says. “Over the last two decades, statisticians have developed mathematical techniques that — in some cases — make it possible to estimate diagnostic accuracy without gold standards.”

These techniques — which have been successfully used in areas as diverse as imaging liver cancer and detecting infections in dairy cattle — form the backbone of the study. Using statistical methods known as latent class modeling, the study looked at the performance of psychiatrists who made evaluations based on the 156 case files presented to them.
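To make the idea concrete, here is a minimal Python sketch of a two-class latent class model fit by expectation-maximization, in the spirit of the methods the study describes. It is not the study’s actual code; the rater parameters and simulated data are invented for illustration. The point is that each rater’s sensitivity and specificity, and the prevalence of the condition, can be estimated jointly from the pattern of agreement among raters, with no gold standard in sight:

```python
import numpy as np

def fit_latent_class(ratings, n_iter=500, tol=1e-6):
    """Two-class latent class model fit by EM, with no gold standard.

    ratings: (n_cases, n_raters) array of 0/1 opinions
             (1 = rated incompetent, 0 = rated competent).
    Returns estimated prevalence of incompetence plus each rater's
    sensitivity and specificity, inferred purely from agreement patterns.
    """
    n_cases, n_raters = ratings.shape
    prev = 0.5                        # initial guess: P(truly incompetent)
    sens = np.full(n_raters, 0.7)     # initial P(rate 1 | truly 1)
    spec = np.full(n_raters, 0.7)     # initial P(rate 0 | truly 0)
    for _ in range(n_iter):
        # E-step: posterior probability each case is truly incompetent
        log1 = np.log(prev) + (ratings * np.log(sens)
                               + (1 - ratings) * np.log(1 - sens)).sum(axis=1)
        log0 = np.log(1 - prev) + ((1 - ratings) * np.log(spec)
                                   + ratings * np.log(1 - spec)).sum(axis=1)
        post = 1.0 / (1.0 + np.exp(log0 - log1))
        # M-step: re-estimate prevalence, sensitivity, and specificity
        prev = np.clip(post.mean(), 1e-6, 1 - 1e-6)
        new_sens = (post[:, None] * ratings).sum(axis=0) / post.sum()
        new_spec = ((1 - post)[:, None] * (1 - ratings)).sum(axis=0) / (1 - post).sum()
        new_sens = np.clip(new_sens, 1e-6, 1 - 1e-6)
        new_spec = np.clip(new_spec, 1e-6, 1 - 1e-6)
        if max(np.abs(new_sens - sens).max(), np.abs(new_spec - spec).max()) < tol:
            sens, spec = new_sens, new_spec
            break
        sens, spec = new_sens, new_spec
    return prev, sens, spec

# Demo on simulated data (illustrative only; all parameters are made up):
rng = np.random.default_rng(0)
truth = rng.random(156) < 0.3                        # latent status of 156 cases
raters = [(0.95, 0.90), (0.90, 0.95), (0.85, 0.85)]  # (sensitivity, specificity)
ratings = np.column_stack([
    np.where(truth, rng.random(156) < se, rng.random(156) >= sp)
    for se, sp in raters
]).astype(int)
print(fit_latent_class(ratings))
```

On simulated data like this, the estimates typically land close to the sensitivities and specificities used to generate it, even though the algorithm never sees the true case statuses.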

“The techniques are applicable to lots of questions in law and mental health,” Mossman says. “There are many, many other kinds of cases where courts depend on mental health experts’ opinions. If you have the right kind of data, these methods would allow you to evaluate the accuracy of court evaluations.”

Mossman, himself an experienced psychiatric expert witness in dozens of court cases, says this method of establishing experts’ accuracy allows the value of their opinions to be demonstrated and even assigned a mathematical quantity. But experts are still going to reach different conclusions.

“The legal system asks experts to give ‘yes’ or ‘no’ answers, but that’s not how things usually are in medicine,” he says. “Very often, a physician’s diagnostic judgment really is a probability, an in-between answer. In courtroom testimony, experts are supposed to provide a clear opinion, not an ambiguous answer, even when the problem involves a shade-of-gray kind of question. That’s where the real opportunity for difference of opinion comes into play.”

