How does the European Commission’s regulatory approach apply to predictive opinion software?

Technologies that leverage artificial intelligence (AI) and machine learning (ML) have become omnipresent in today’s society, impacting almost every sector of the economy. Indeed, the perceived economic and social benefits of AI, coupled with a narrative that frames the technology as an inevitable and “sublime force,” mean that even the justice sector is beginning to explore ways in which AI can be incorporated into decision-making processes. For example, various companies are leveraging machine learning to analyze massive data sets of legal precedents in the belief that properly trained algorithms will be able to accurately predict the legal outcome of any given case based on the statistical patterns detected in the data. Within the legal services industry, this technology is becoming commonplace. Based in Menlo Park, California, Lex Machina, a pioneer of this technology and a leader in the field, sees legal analytics and its ability to predict future decisions as necessary for the practice of law. In their own words:

Lex Machina combines data and next-generation technology to provide the winning edge in the highly competitive business and practice of law. Our unique Lexpressions engine mines and processes litigation data, revealing insights never before available about judges, lawyers, parties, and the subjects of the cases themselves, culled from millions of pages of litigation information.

Apart from the competitive advantages offered by this “predictive opinion” software, proponents have also argued that it will improve access to justice by increasing efficiencies and, in turn, reducing the cost of law for both governments and individual users. Indeed, as a legal research tool, such software can analyze far more cases, far more quickly, than any lawyer or judge possibly could, so the argument has intuitive appeal; fewer hours spent on legal research means fewer hours billed to the client.

Yet even if this argument is sound – one can query whether any cost savings would actually be passed on to those with legal needs – it does not address the concerns and risks that this type of technology poses to democratic governance. AI systems have been heavily criticized for, among other things, lacking transparency and reducing accountability: two key characteristics of a healthy democratic system. Moreover, there is a huge potential for these systems to do harm by reinforcing discriminatory biases and inequalities that exist within their training data.

These concerns are particularly relevant within the justice sector, where basic principles of procedural fairness demand that individuals understand how a decision is made and be given an opportunity to have it reviewed. There is also a concern that delegating decision-making authority to AI systems could hinder the development of the law by blocking the creation of new precedents and limiting the ability to overturn bad law. In response to these kinds of issues, numerous ethical frameworks have been developed to guide the design of AI systems, but these frameworks – often industry-led – are open to the critique that they are simply performative, existing merely to reassure the public and deflect criticism. If such criticisms are valid, then the only real way to ensure that AI systems, such as predictive opinion software, are safely deployed is through some form of positive regulation.

Harmonising the rules on AI

Some guidance on how to approach predictive opinion software from a regulatory perspective may be found in the European Commission’s (EC) recently proposed draft regulation on the harmonization of rules pertaining to AI. This document aims to fill a legislative gap at the European Union level in the regulation of this emerging technology, in part by establishing a “future-proof” definition of AI. The EC’s objective is to promote the uptake of AI while addressing the risks of certain uses of the technology through a contextual and proportionate framework.

In other words, instead of directly regulating specific use cases (e.g. autonomous cars or facial recognition software), this framework categorizes AI systems as posing an unacceptable risk, a high risk, or a minimal risk. For example, AI systems that exploit the vulnerabilities of children are categorized as unacceptable and are therefore prohibited. Conversely, AI systems that negatively impact fundamental rights are permitted but considered high risk, and will thus face more regulatory oversight and transparency obligations than minimal-risk systems. The drafters see this approach as striking the right balance between two competing narratives: one that says AI systems must be encouraged if a country is to be competitive in the global market, and another that urges caution, holding that the technology carries serious risks.

Attempting to walk the fine line between these two narratives may prove problematic, as some systems are not so easy to categorize. The example of predictive opinion software illustrates this point nicely. Under the proposed framework, any AI system intended to assist judicial authorities in researching and interpreting facts and law is considered high risk because of its potential impact on such things as the rule of law, individual freedoms, or the fairness of trials. However, if the same software – such as that being developed by Lex Machina – is used by lawyers to assist with the provision of legal advice, it would likely be categorized as a minimal-risk system.

This is curious, since legal needs research has shown that most legal problems never enter the formal system, and even among those cases that do, most settle long before they reach final adjudication. In this context, the legal advice provided by a lawyer can have a far greater impact on an individual’s ability to achieve an effective remedy or vindicate a right than a court ever does. If predictive opinion software is built on a faulty data set that reinforces existing systemic biases, a lawyer relying on the system for their research may directly harm the interests of their clients. In theory, a lawyer (like a judge) would be using this software as a tool and nothing more, and their professional obligations would require them to review the algorithmic output and ensure that it is legally sound.

Indeed, Lex Machina sees their product as a legal research tool that assists lawyers in crafting winning legal strategies and saves them time. However, research into how we interact with AI suggests otherwise. As we come to rely on AI systems, we treat their outputs as increasingly authoritative and sometimes as infallible. Moreover, the realities of legal work and the pressure of managing many clients mean that lawyers are unlikely to put in the time necessary to properly review algorithmic legal opinions. This is not entirely the fault of lawyers, as clients are reluctant to pay for many hours of legal research; but this dynamic reinforces the danger of a lawyer delegating their obligations to an algorithm. Finally, there is the practical limit on a lawyer’s ability to understand how an algorithm came to its decision, since AI systems are often characterized as inscrutable black boxes. Given this context, it is hard to characterize predictive opinion software as anything other than high risk.

Companies such as Lex Machina are developing powerful tools that could be used to improve access to justice; however, they also pose a serious risk to the principles of democratic governance. As AI systems become more common within the justice sector, serious consideration should be given to how they can be regulated so that these risks are minimized. The EC’s proposed regulation on AI systems is one way in which a regulatory framework could operate. The difficulty with this approach, however, is that in trying to straddle two competing narratives, the regulation creates space for potentially harmful systems to slip through the cracks. Some organizations will, no doubt, heavily litigate how their software is categorized under this regime, finessing the intended use of the system or misrepresenting its purpose. This difficulty is not insurmountable and by no means should be interpreted in favour of less regulation or a more laissez-faire approach. Rather, it simply illustrates the challenge inherent in regulating potentially dangerous systems in an environment that sees them as inevitable and necessary.


Matthew Dylag is an assistant professor in the Schulich School of Law at Dalhousie University. He obtained his PhD from Osgoode Hall Law School at York University in Toronto. Prior to joining Dalhousie, he was a Max Weber Fellow at the European University Institute, researching the growing use of artificial intelligence within the legal services market and the effect of this phenomenon on equality and fairness. As an access to justice scholar, Matthew has a particular interest in how emerging technologies are being integrated into the justice sector.

This is the third contribution to PERC’s series on the Silicon Valley ideology. Each week for the next two months, experts from the fields of political economy, political science, economic history, cultural studies and law will share their research perspectives on the recent trends that have animated the Silicon Valley bubble. If you wish to get involved or would like to pitch an idea for a contribution, get in touch with our editor Carla Ibled (c.ibled[at]gold.ac.uk).