Simultaneous increases in accessible data volume (open data), available computing power (big data) and artificial intelligence (AI) technologies have given rise to powerful predictive analytics* in the legal field.
This convergence already makes it possible to develop the first technologies capable of accurately processing thousands of legal texts, precedents and judgments. This "LegalTech" field is highly active, and significant progress has been made in the past two years on the technical challenges of reaching an unprecedented level of performance and simplicity.
The emergence of predictive legal analytics and LegalTech
In France, the opening of legal data has been fostered by recent laws related to the "Digital Republic" bill, which requires every judicial decision to be public and freely accessible (ref1). In 2017, this ongoing process is helping to build and supply a tremendous database of case law. New companies are blossoming with tools that exploit this data through data mining. These companies include Case Law Analytics and Predictice in France, or the better-known Lex Machina in the US.
The first and most direct use of such a database is statistical analysis of the decisions. This use seems obvious to lawyers and legal experts, who are used to reading and analysing cases and court decisions (ref4). LegalTech now allows them to analyse legal decisions related to a given case, location, court, judge, etc. For instance, the American platform Premonition mines data to find which lawyer usually wins before which judge.
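As a minimal illustration of this kind of statistic, the sketch below computes each lawyer's win rate before each judge from a toy list of decisions; the records and field names are invented for the example, standing in for a real case-law database.

```python
from collections import defaultdict

# Toy decision records: (lawyer, judge, outcome) -- invented data for illustration.
decisions = [
    ("Smith", "Judge A", "won"),
    ("Smith", "Judge A", "lost"),
    ("Smith", "Judge A", "won"),
    ("Jones", "Judge A", "lost"),
    ("Jones", "Judge B", "won"),
    ("Smith", "Judge B", "won"),
]

def win_rates(records):
    """Win rate of each lawyer before each judge."""
    wins = defaultdict(int)
    totals = defaultdict(int)
    for lawyer, judge, outcome in records:
        totals[(lawyer, judge)] += 1
        if outcome == "won":
            wins[(lawyer, judge)] += 1
    return {pair: wins[pair] / n for pair, n in totals.items()}

rates = win_rates(decisions)
print(rates[("Smith", "Judge A")])  # 2 wins out of 3 cases
```

Real platforms work at a vastly larger scale, but the underlying aggregation is of this kind.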
A more elaborate use requires semantic and textual analysis. New tools are emerging, fostered by the explosion of computing power and of semantic analysis models. Combined with the latest machine learning methods, they can assess the likelihood of legal outcomes and quantify their consequences, modulated by local and global contextual elements (ref3). For instance, Neota Logic has developed a platform hosting applications that reduce risk, improve performance and guarantee compliance by combining rules, complex reasoning, documents and processes.
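The "rules" side of such platforms can be pictured as explicit predicates applied to a case description. The sketch below is purely hypothetical (it is not Neota Logic's actual engine); the rule names and thresholds are invented for illustration.

```python
# Hypothetical compliance rules: (name, predicate over a case dict).
# All names and thresholds are invented for this example.
rules = [
    ("contract_signed", lambda case: case.get("signed", False)),
    ("within_limitation_period", lambda case: case.get("years_elapsed", 0) <= 5),
    ("damages_documented", lambda case: case.get("damages", 0) > 0),
]

def failed_rules(case):
    """Return the names of the rules the case does not satisfy."""
    return [name for name, predicate in rules if not predicate(case)]

case = {"signed": True, "years_elapsed": 7, "damages": 10_000}
print(failed_rules(case))  # ['within_limitation_period']
```

Commercial systems layer document generation and richer reasoning on top, but an auditable rule list like this is the typical starting point.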
Formalising and securing the decision-making process: a technical challenge
Some law firms are even beginning to create "pre-artificial intelligence" systems: Dentons, for instance, signed an agreement with IBM to use its Watson technology to develop "NextLawLabs". By precisely assessing the pros and cons of a given procedure, such a system is expected to favour negotiation over costly trials in cases where justice consists of a strict application of legal texts. Such tools would streamline common courts by fast-tracking unnecessary procedures.
Nevertheless, to go any further, such systems will have to handle cases extending beyond the strict application of legal texts, if we consider that legal decisions are one part formal and one part case-specific. This conviction was recently supported by researchers from University College London, who developed an AI system to study 584 decisions of the European Court of Human Rights (ECHR). Their model matched the actual decisions in 80% of the cases. More interestingly, they showed that judges gave priority to case-by-case examination over strict application of the texts (ref2). This is a crucial point for dealing with the 74,150 cases pending at the ECHR but, more importantly, for better understanding what is expected of justice itself.
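The UCL study predicted outcomes from textual features of the case documents. The sketch below conveys the general idea with a toy bag-of-words Naive Bayes classifier in pure Python; the training sentences and labels are invented, and the actual model was far richer than this.

```python
import math
from collections import Counter, defaultdict

# Toy training set: (case summary, outcome) -- invented examples standing in
# for the ECHR case texts used in the actual study.
train = [
    ("detention without trial prolonged", "violation"),
    ("torture allegations in custody", "violation"),
    ("fair hearing held promptly", "no_violation"),
    ("appeal heard within reasonable time", "no_violation"),
]

def fit(samples):
    """Multinomial Naive Bayes: count words per label, with a shared vocabulary."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in samples:
        words = text.split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def predict(model, text):
    """Pick the label maximising log P(label) + sum of smoothed log P(word|label)."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_score = None, -math.inf
    for label, count in label_counts.items():
        score = math.log(count / total)
        denom = sum(word_counts[label].values()) + len(vocab)  # add-one smoothing
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

model = fit(train)
print(predict(model, "prolonged detention in custody"))  # 'violation'
```

With thousands of real decisions instead of four toy sentences, a linear classifier over such word features is enough to reach the kind of accuracy the study reports.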
* Predictive analytics: a variety of statistical techniques from predictive modeling, machine learning and data mining that analyse current and historical facts to make predictions about future or otherwise unknown events.
Yoann, Consultant, Leyton France