
The Need for Transparency and Interpretability at the Intersection of AI and Criminal Justice

“No human can calculate patterns from large databases in their head. If we want humans to make data-driven decisions, machine learning can help with that,” Cynthia Rudin explained regarding the opportunities that artificial intelligence (AI) presents for a wide range of issues, including criminal justice.

On November 15th, Rudin, a Duke professor of computer science and recipient of the 2021 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity, joined her colleague Brandon Garrett, the L. Neil Williams, Jr. Professor of Law and director of the Wilson Center for Science and Justice, for “The Equitable, the Ethical and the Technical: Artificial Intelligence’s Role in The U.S. Criminal Justice System.” The panel was moderated by Nita Farahany, the Robinson O. Everett Professor of Law and founding director of Duke Science & Society. The event drew representation from numerous House and Senate congressional offices as well as the Departments of Transportation and Justice, the National Institutes of Health (NIH), the American Association for the Advancement of Science (AAAS), and the Duke community.

Rudin started off the conversation by giving listeners a simple definition: “AI is when machines perform tasks that are typically something that a human would perform.” She also described machine learning as a type of “pattern-mining, where an algorithm is looking for patterns in data that can be useful.” For instance, an algorithm can analyze an individual’s criminal history to identify patterns, which could then be used to help predict whether that person is likely to commit a crime in the future.
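To make that idea concrete, here is a minimal, purely illustrative sketch of how such a pattern-mining model might be fit in Python with scikit-learn. The features, numbers, and labels are hypothetical and do not come from any system discussed at the event.

# Purely illustrative sketch: fitting a simple risk-prediction model on
# hypothetical criminal-history features. Not a real system or dataset.
from sklearn.linear_model import LogisticRegression

# Each row: [age at first arrest, number of prior arrests, prior violent offense (0/1)]
X = [
    [19, 4, 1],
    [35, 0, 0],
    [24, 2, 0],
    [42, 1, 0],
    [22, 6, 1],
    [30, 1, 0],
]
# Hypothetical labels: 1 = re-arrested within two years, 0 = not
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Predicted probability of re-arrest for a new, hypothetical individual
print(model.predict_proba([[27, 3, 1]])[0][1])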

Garrett added that AI applications offer a potential solution to human error – we can be biased, too lenient, too harsh, or “just inconsistent” – and these flaws can be exacerbated by time constraints and other factors. When it comes to AI in the criminal justice system, an important question to consider is whether AI has the potential to provide “better information to inform better outcomes” and better approaches to the criminal system, especially considering the presence of racial disparities.

However, applying AI tools to the criminal justice system should not be taken lightly. “There are a lot of issues that we need to take into account as we are designing AI tools for criminal justice,” said Farahany, “including issues like fairness and privacy, particularly with biometric data since you can’t change your biometrics, or transparency, which is related to due process.”

What does it mean for an algorithm to be fair? Rudin estimated that about “half the theoretical computer scientists in the world are working to define algorithmic fairness.” So, researchers like her are looking at different fairness definitions and trying to determine whether the risk prediction models being used in the justice system satisfy those definitions of fairness.

When it comes to facial recognition systems, there is “generally a tradeoff between privacy, fairness and accuracy,” Rudin stated. When software searches the general public’s pictures, it invades individual privacy; however, because the model collects pictures of everyone, it is extremely accurate and unbiased. Similarly, Garrett noted that the federal government is a heavy user of facial recognition technologies and that no law regulates their use, pointing to the federal FACE database. “One would hope that the federal government would be a leader in thinking carefully about those issues and that hasn’t always been true,” he said; however, he also praised the National Institute of Standards and Technology (NIST) and the Army Research Lab for their work in the space.

Throughout the conversation, the speakers emphasized the importance of transparency and interpretability, as opposed to “black box AI” models.

“A black box predictive model,” said Rudin, “is a formula that is too complicated for any human to understand or it’s proprietary, which means nobody is allowed to understand its inner workings.” Likening the concept to a “secret sauce” formula, Rudin explained that many people believe that, because of its secretive nature, black box AI must be extremely accurate. However, she pointed out that these models have limitations and are sometimes inaccurate, while interpretable models, ones that are “understandable to humans,” can perform just as well.
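For contrast, an interpretable model can be as simple as a point-based scorecard whose every rule is visible to the judge, prosecution and defense. The sketch below, with made-up features and point values, illustrates the idea; it is not any published model.

# Illustrative point-based risk scorecard: every rule and point value is
# visible, so the score can be audited and explained. The features and
# point values here are made up for illustration only.
def risk_score(age, prior_arrests, prior_violent_offense):
    score = 0
    if age < 25:
        score += 2          # younger age adds 2 points
    if prior_arrests >= 3:
        score += 3          # three or more prior arrests adds 3 points
    if prior_violent_offense:
        score += 2          # any prior violent offense adds 2 points
    return score            # higher score means higher predicted risk

print(risk_score(age=22, prior_arrests=4, prior_violent_offense=True))  # prints 7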

“Interpretation also matters, because we want people like judges to know what they are doing,” explained Garrett, “and if they don’t know what something means, then they may be a lot less likely to rely on it.”

In the discussion, Garrett also gave his thoughts about legislation currently being considered in Congress. He mentioned the recently introduced Justice in Forensic Algorithms Act, which seeks to allocate additional resources to NIST. Regarding the legal landscape of AI and criminal justice, he recommended that the federal government provide “resources for NIST to be doing vetting and auditing of these technologies, and they should not be black box, they should be interpretable and all of that information should be accessible to all of the sides – the judge, prosecution and defense – so that they can understand the results that these technologies are spitting out and so they can be explained to jurors and other fact finders.”




The material in this press release comes from the originating research organization. Content may be edited for style and length.