
New Technologies

Exploring the true added value of artificial intelligence in the legal system

Ryan Calo

Nationality American

Year of selection 2017

Institution University of Washington

Country United States

Risk New Technologies

AXA Awards

1 year

€250,000

How would you feel about a justice system where court rulings were made by algorithms? On paper, the prospects of artificial intelligence in legal procedure seem promising: it holds the potential for improved legal decision-making and greater efficiency. In practice, however, "efforts to introduce algorithmically generated insights into the legal system to date have appeared to fall short of promises," says Dr. Ryan Calo, a professor at the University of Washington School of Law. "There are significant challenges inherent to reproducing legal values in the context of machines." An expert in law and emerging technology, Dr. Calo aims to tackle these issues by bringing legal experts in criminal, administrative, and civil procedure into deeper dialogue with technical experts in AI. The overall objective of the project is to deliver frameworks for translating legal and humanistic values into parameters for AI-aided decision-making in legal contexts.

"We are at a moment in contemporary western democracies where judges, lawyers, and other participants in the justice system are relying upon the judgments of machines," Dr. Calo observes. In the United States, for example, courts and corrections departments use algorithms to estimate a defendant's likelihood of committing another crime. When machine learning systems are entrusted with critical decisions about human lives in an area as sensitive as the law, those systems must reflect appropriate procedural guarantees. Among the many challenges this requirement raises is "the fact that AI designers are unlikely fully to appreciate the goals and assumptions of the legal context for which they are designing their system," the professor points out. Hence the project's main course of action: putting the heads of legal and technical experts together to ensure that values already embedded in legal procedure are translated into specific parameters for AI-aided decision-making.

A cautionary analysis, with an optimistic twist

The first step of the project will be to identify the values embedded in legal procedure, Dr. Calo explains. The second is to assess which of these values should be reproduced in the context of AI. Indeed, another aspect Dr. Calo and his collaborators, Danielle Keats Citron (University of Maryland) and Andrea Simoncini (University of Florence), are examining is the actual added value of these AI tools. "With Danielle and Andrea, we are asking very simple questions: What is it that courts are trying to do? What are these machines good at, what is it that they do better than us?" They argue that machines perhaps should not exercise certain types of judgment, but should instead be confined to tasks where we know they will not deny people benefits and can genuinely improve the efficiency of the courts and access to justice. "Our approach is cautionary, of course, but also helpful, because we're trying to show that these technologies offer great opportunities," Dr. Calo insists. "We ought to be looking at this powerful set of tools as an invitation to accomplish our goals better, and not as a replacement for everything humans used to do. Simple things like real-time translation for people who don't speak the language used in the court would be very useful, for example. More elaborate systems, such as risk-assessment algorithms, come from a good place, but they come with a host of problems, not the least of which is transparency." Indeed, such algorithms are often developed by private businesses, which means they are "black boxed": only their owners and developers really know how the software makes its decisions.

This much is certain: asking whether judges or lawyers can be replaced by machines isn't asking the right question. In putting together this project, Dr. Calo and his collaborators are taking a proactive yet realistic approach. In addition to one or more frameworks for the design of procedurally sufficient AI-aided decision-making, the project aims to generate proofs of concept, i.e., one or more models of actual systems co-designed by legal and technical experts. The ultimate objective is to disseminate this output among policymakers and stakeholders, including academics, judges and other government officials, and industry.
