In television crime dramas, savvy lawyers overcome improbable odds to win their cases by presenting seemingly ironclad scientific evidence. In real-world courtrooms, however, the quality of scientific testimony can vary wildly, making it difficult for judges and juries to distinguish between solid research and so-called junk science.
This is true for all scientific disciplines, including psychological science, which plays an important role in assessing such critical pieces of testimony as eyewitness accounts, witness recall, and the psychological features of defendants and litigants.
A new, multiyear study published in Psychological Science in the Public Interest (PSPI), a journal of the Association for Psychological Science (APS), finds that only 40% of the psychological assessment tools used in courts have been favorably rated by experts. Even so, lawyers rarely challenge their conclusions, and when they do, only one third of those challenges are successful.
“Although courts are required to screen out junk science, legal challenges related to psychological-assessment evidence are rare,” said Tess M.S. Neal of Arizona State University, one of the authors of the report. The other authors are Michael J. Saks of Arizona State University, Christopher Slobogin of Vanderbilt University Law School, David Faigman of the University of California Hastings School of Law, and Kurt F. Geisinger of the University of Nebraska-Lincoln.
“Although some psychological assessments used in court have strong scientific validity, many do not. Unfortunately, the courts do not appear to be calibrated to the strength of the psychological-assessment evidence,” said Neal.
The new APS report examines more than 360 psychological assessment tools that have been used in legal cases, along with 372 legal cases from across all state and federal courts in the United States during the calendar years 2016, 2017, and 2018.
The findings were also presented at the 2020 American Association for the Advancement of Science (AAAS) meeting in Seattle.
Psychological scientists provide expert evidence in a variety of court proceedings, ranging from custody disputes to disability claims to criminal cases. In developing their expert evaluation of, for example, a defendant’s competence to stand trial or a parent’s fitness for child custody, they may use tools that measure personality, intelligence, mental health, social functioning, and other psychological features. A number of federal court decisions and rules give judges the latitude to gauge the admissibility of evidence, largely by evaluating its empirical validity and its acceptance within the scientific community.
For their review, Neal and her colleagues gathered results from 22 surveys of psychologists who serve as forensic experts in legal cases. They reviewed the 364 psychological assessment tools that the respondents reported having used in providing expert evidence. They found that nearly all of those tools have been subjected to scientific testing, but only about 67% are generally accepted by the psychological community at large. What’s more, only 40% of the tools have generally favorable reviews in handbooks and other sources of information about psychological tests.
The scientists also found that legal challenges to the admission of assessment evidence are rare, occurring in only about 5% of cases they reviewed. And only a third of those challenges succeeded.
According to the report: “Attorneys rarely challenge psychological expert assessment evidence, and when they do, judges often fail to exercise the scrutiny required by law.”
In an accompanying commentary, David DeMatteo, Sarah Fishel, and Aislinn Tansey, psychology and legal scholars at Drexel University, call for more research on whether trial court judges are functioning as effective gatekeepers for expert testimony. They point to research indicating that many judges admit evidence from methodologically flawed studies, as well as findings that attorneys and jurors often lack the scientific literacy needed to scrutinize scientific evidence. The Drexel scholars also call on forensic psychologists to ensure they use scientifically sound assessment tools when providing expert evaluations in legal settings.