For years, the use of artificial intelligence (AI) in everyday law practice was long on promise and short on real-world impact for lawyers, but that has begun to change. Over the last couple of years, the application of AI to the practice of law has come into focus.
Overview of the Use of AI in Law Practice
Current applications of AI in the legal industry fall into six categories:
- Due diligence: Lawyers use AI software to uncover background information. Examples include contract review, legal research, and electronic discovery.
- Analytics and prediction: AI-powered data tools use past case results, such as win/loss ratios and judicial history, to forecast litigation outcomes.
- Document automation: Law firms use document templates to create new documents based on user input (a minimal sketch of this approach follows the list).
- Expert and knowledge systems: Lawyers use AI software to answer legal questions using robotic process automation and natural language processing.
- Intellectual property: AI tools guide attorneys in analyzing IP portfolios and drawing insights.
- Billing: Lawyers use AI software to automatically track and bill time.
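To make the document automation category concrete, here is a minimal sketch of template-driven drafting. It assumes only Python's standard library and uses an invented engagement-letter template and field names; commercial tools layer conditional clauses, approval workflows, and far richer logic on top of the same idea.

```python
from string import Template

# Hypothetical engagement-letter template; the field names are invented.
ENGAGEMENT_LETTER = Template(
    "Dear $client_name,\n\n"
    "This letter confirms that $firm_name will represent you in the "
    "matter of $matter_description at an hourly rate of $$$hourly_rate.\n"
)

def generate_letter(answers: dict) -> str:
    """Fill the template with a client's intake answers."""
    return ENGAGEMENT_LETTER.substitute(answers)

print(generate_letter({
    "client_name": "Jane Doe",
    "firm_name": "Example & Partners LLP",
    "matter_description": "Doe v. Acme Corp.",
    "hourly_rate": "350",
}))
```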
If you follow product development in the legal AI space, it seems like a new startup or platform launches every month. Examples of the use of AI in law practice are legion, but let’s look at a couple of standouts.
Successful Examples of Legal AI Technology
Luminance is an AI document review platform that claims to have more than 120 customers in more than 40 countries, including 14 of the top 100 global law firms. Integrating a rules-based approach with machine learning, Luminance allows lawyers to analyze documents for standard clauses, locate patterns using pre-configured templates, and identify patterns based on clustering. Luminance claims that by using its technology, lawyers can reduce document review time by up to 85%.
At Artificial Lawyer’s inaugural Legal Innovators conference in London this October, Luminance CEO Emily Foges framed the company’s value proposition simply: “For 25% of the cost of an associate, you will realize an 800% increase in speed of review.”
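Luminance’s actual pipeline is proprietary, but the clustering technique described above can be illustrated generically. The sketch below assumes scikit-learn is installed and uses invented clause text; it demonstrates the general approach of grouping similar clauses, not Luminance’s implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented contract clauses: two governing-law and two termination clauses.
clauses = [
    "This Agreement shall be governed by the laws of England and Wales.",
    "Either party may terminate this Agreement on 30 days written notice.",
    "This Agreement is governed by the laws of the State of New York.",
    "Termination requires 60 days prior written notice by either party.",
]

# Represent each clause as a TF-IDF vector, then group similar clauses.
vectors = TfidfVectorizer(stop_words="english").fit_transform(clauses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, clause in sorted(zip(labels, clauses)):
    print(label, clause)
```

On a toy vocabulary like this the two clusters separate cleanly; on real contract corpora, the same approach helps surface outlier clauses that deviate from standard language.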
Ravel is an AI-powered legal analytics platform, acquired by LexisNexis in 2017 and renamed Context. Originally spun out of Stanford University’s Law School and Computer Science Department with the support of CodeX (Stanford’s Center for Legal Informatics), the platform lets lawyers compare forums and law firm outcomes, predict results, and craft winning arguments. It also permits lawyers to view profiles of judges by keyword and motion type to gain a better understanding of how judges think, write and rule, and to “pinpoint language that persuades.”
A law firm customer, quoted in the Wall Street Journal, noted: “While using Ravel, I learned a judge made it clear she does not like sports analogies. I definitely would never use ‘moving the goal post’ in front of her.”
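The basic mechanics behind judge analytics are easy to sketch. The toy example below uses pandas and fabricated rulings data to compute grant rates by judge and motion type; platforms like Ravel/Context do this over large corpora of real decisions with far richer signals.

```python
import pandas as pd

# Fabricated rulings data, purely for illustration.
rulings = pd.DataFrame({
    "judge":   ["Smith", "Smith", "Smith", "Lee", "Lee", "Lee"],
    "motion":  ["dismiss", "dismiss", "summary judgment",
                "dismiss", "summary judgment", "summary judgment"],
    "granted": [1, 0, 1, 0, 1, 1],
})

# Grant rate per judge and motion type: the core of a judge "profile."
grant_rates = (
    rulings.groupby(["judge", "motion"])["granted"]
           .mean()
           .rename("grant_rate")
)
print(grant_rates)
```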
If you weren’t following these developments closely, you can be forgiven for being caught off-guard by AI’s recent success in legal practice. In many ways, AI has opened the proprietary gate that lawyers and law firms have historically kept closed around legal expertise. By externalizing this institutional knowledge as data, AI makes an understanding of how the legal system works more transparent, democratic and accessible.
Regulatory Backlash Against Legal AI
The recent success of AI in legal tech has created a backlash of sorts through laws and regulations intended to blunt its impact.
In France, Article 33 of the Justice Reform Act, which took effect in March 2019, prohibits anyone from linking a judge’s identity to his or her decision-making history. The law reads: “The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices.”
The penalty for violating France’s new law? Up to five years in prison.
Article 33 appears to be squarely aimed at legal tech companies, like Ravel/Context and many others, that specialize in court and judicial analytics and prediction. And, with potential incarceration as a deterrent, it is sure to have a chilling effect on legal tech innovation.
In Germany, a local bar association won a court battle in October 2019 that effectively prevented a consumer-facing contract platform from offering its services to clients without lawyers involved in the process. The platform asked questions to tailor the contract to each customer’s needs. The court remarked: “The system cannot question the value and veracity of the user’s answers and cannot judge whether questions offered in the interest of the user are not asked.”
Similarly, Article 22 of the EU’s General Data Protection Regulation (GDPR) restricts decisions “based solely on automated processing” of personal data that produce legal or similarly significant effects. The idea is to prevent computers from making important decisions that impact people’s lives without any human intervention.
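One common compliance pattern, sketched below with a hypothetical scoring threshold and review queue, is a human-in-the-loop gate: the model only recommends, and any adverse outcome is routed to a person who makes the final decision.

```python
# Human-in-the-loop gate; the 0.8 threshold and IDs are hypothetical.
REVIEW_QUEUE: list[str] = []

def decide(applicant_id: str, model_score: float) -> str:
    """Route adverse model recommendations to a human reviewer."""
    if model_score >= 0.8:
        return "approved"              # favorable outcome
    REVIEW_QUEUE.append(applicant_id)  # adverse outcome: a human must decide
    return "pending human review"

print(decide("A-1001", 0.91))  # approved
print(decide("A-1002", 0.42))  # pending human review
```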
The backlash against the use of AI in legal tech is not limited to Europe. Over the past decade, bar associations across the United States have clashed with companies like LegalZoom and Avvo, vigorously litigating whether their software engages in the unauthorized practice of law.
New state laws also directly address the use of AI.
In California, the Bolstering Online Transparency (BOT) Act (SB 1001) took effect on July 1, 2019. The law states: “It shall be unlawful for any person to use a bot to communicate or interact with another person… with the intent to mislead the other person about its artificial identity… to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.” Stated simply, individuals must now be informed when they’re interacting with a bot.
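For a chatbot, compliance can be as simple as a clear disclosure at the start of the conversation. The sketch below is a minimal, hypothetical illustration of that idea, not a statement of what the statute requires.

```python
# Invented disclosure wording and bot logic, for illustration only.
DISCLOSURE = "Hi! I'm an automated assistant (a bot), not a human."

def reply(user_message: str) -> str:
    """Prepend a bot disclosure so users know they aren't chatting with a person."""
    answer = "Thanks for reaching out. How can I help with your matter?"
    return f"{DISCLOSURE} {answer}"

print(reply("Do you offer estate planning packages?"))
```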
It is perhaps unsurprising that California’s BOT law took effect approximately one year after Google demonstrated its “Duplex” AI voice assistant that tricked merchants into thinking they were interacting with real customers, stoking fears of potential fraud and identity theft.
California Gov. Gavin Newsom also signed landmark legislation making California the largest state in the country to block law enforcement agencies from using facial recognition on officer body cameras. The law takes effect on January 1, 2020.
Conclusion
As with all new technology, the use of AI in the practice of law has opened doors to opportunity and potential misuse. Laws can and should work to prevent bad actors from using AI to harm our personal and legal interests. However, in regulating the use of legal AI, we must be careful not to slam the doors shut with draconian measures.
About the Author
Tom Martin is a legal tech advocate, lawyer, author, and speaker. He is CEO and founder of LawDroid Ltd., an AI and automation company for the legal industry. Contact Tom on Twitter @lawdroid.