BY JANNA TAY
Artificial intelligence (AI) is changing the very nature of how we live and work. As law firms start taking advantage of the latest technology, recent developments in New Zealand surrounding financial “robo-advice” raise questions as to how automation might affect the legal industry.
Financial robo-advice refers to the use of algorithms that generate financial advice without human input. Robo-advisers gather information about clients, recommend investments, and manage portfolios. As the law stands, only a natural person can give personalised financial advice to individuals. Although the government has announced an amending bill that proposes to remove the robo-advice restriction, these changes will not take effect until 2019. In the meantime, the Financial Markets Authority has been consulting on a proposed exemption that would allow the development of personalised robo-advice models.
Personalised robo-advice is attractive because it costs far less than a human adviser. Robo-advisers can thus expand the market to include those who have been unable to approach financial advisers because their investments are too small to meet threshold requirements. AI, then, has the potential to open up access to traditionally elite services. Because finance, like law, involves sophisticated reasoning, it may offer insight into what will happen to the legal industry. Daniel Martin Katz notes that although finance has not gone away, the role of prediction has changed radically and the machines have outperformed the humans.
Legal Technology: 1.0 and 2.0
Oliver Goodenough divides legal tech innovation into three stages. In stage 1.0, technology empowers current lawyers to do their jobs more efficiently, such as optimising the retrieval of legal information through search and e-discovery technologies. In stage 2.0, technology becomes increasingly disruptive in taking over routine jobs, like reviewing documents. Goodenough thinks we are quickly approaching stage 3.0, where technological advancements will begin challenging the human lawyer’s role as the centre of the system.
Law firms in New Zealand use technology to retrieve legal information for due diligence, contract review, and discovery in litigation. LawFlow is a New Zealand e-discovery service which integrates the High Court and District Court discovery rules to make it easier to manage large numbers of documents. While New Zealand has online legal companies, such as Online Lawyers NZ and NZ Law Online, these still involve interactions with human lawyers. We have yet to establish any online legal technology like that of the American websites LegalZoom and Rocket Lawyer, which allow users to build almost any legal document online and print and sign it immediately. These charge by monthly subscription, and allow users to live chat with attorneys. In New Zealand, a similar but far more limited application is Buddle Findlay’s “Back of a Napkin” website, which allows team members working on a project to quickly and easily record an agreement in writing.
Within the New Zealand industry, the use of AI seems to sit primarily in stages 1.0 and 2.0. Law firms are mainly using AI in “search-and-find type tasks”. Even overseas, Steve Lohr contends that AI will not replace lawyers just yet. Rather, law firms are seeking to automate more routine work as corporate clients become less willing to pay for hours of manual discovery or due diligence. Junior lawyers, instead of being made obsolete, can spend their time on more complex tasks. Dana Remus and Frank Levy surveyed the potential for automation in legal work. They estimate that currently available technology would reduce lawyers’ hours by about 2.5 percent annually over the next five years. Ultimately, they think the argument that automation will replace lawyers is overstated.
Legal Technology: 3.0 and Beyond
But what about technologies of greater disruption? Joshua Browder’s “DoNotPay” chatbot has helped users overturn over 160,000 parking tickets in London and New York. In appealing over $4 million worth of parking tickets, the bot has a success rate of 64 percent. The chatbot asks a series of questions to figure out if it is possible to appeal the ticket. Because it is free, the chatbot demonstrates the potential for greater access to legal aid. Browder has been adding further capabilities to DoNotPay, including helping people to apply for emergency housing and helping refugees apply for immigration and asylum support. The chatbot asks questions to determine which forms need to be filled out and whether the person is eligible for asylum protection.
Another AI application which has received great attention overseas is “ROSS Intelligence”. Where it took 10 hours for a human lawyer to search online databases and find a factually similar case, ROSS found the same case almost instantly. ROSS can also answer legal questions, replying the next day with a summarised answer and a longer explanatory memo. But Jimoh Ovbiagele, ROSS’s chief technology officer, says ROSS is no good at writing—ROSS’s draft still requires humans to turn it into the final memo.
In New Zealand, Community Law Wellington has developed “Wagbot”: a chatbot that uses Facebook Messenger to answer students’ questions about their rights at school, such as when preparing for meetings about student suspensions. The chatbot uses “machine learning” algorithms and call logs from a student rights phone line to improve its understanding and ability to give advice. In February 2017, MinterEllisonRuddWatts and Goat Ventures founded McCarthy Finch, a start-up with the aim of training an AI legal adviser to provide affordable legal advice on its own. To gather data, they have set up a website featuring a live chat with a human lawyer who will give free legal advice. Like ROSS and Wagbot, this involves machine learning. Machine learning refers to computer programmes that learn from inputs and past experience to refine their behaviour and performance. It allows computers to use non-intelligent algorithms to perform tasks that we typically think would require higher-order human cognition. Computers sift through data and detect patterns to produce useful results. This is particularly relevant for the legal field, where it is often claimed that lawyering requires advanced cognitive abilities that machines cannot replicate.
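The idea of detecting patterns in past examples, rather than reasoning about the law itself, can be illustrated with a deliberately simple sketch. The questions, labels, and word-overlap method below are invented for illustration; real systems like Wagbot use far larger data sets and far more sophisticated models.

```python
# Toy illustration of "learning from past examples": label a new question
# with the label of the most word-similar past question. All training
# examples and labels here are hypothetical.

TRAINING_DATA = [
    ("can the school suspend me without a meeting", "suspension"),
    ("what happens at a suspension meeting", "suspension"),
    ("can a teacher search my bag", "search"),
    ("is the school allowed to search my locker", "search"),
]

def tokenize(text):
    """Split a question into a set of lower-case words."""
    return set(text.lower().split())

def predict(question):
    """Return the label of the past example sharing the most words."""
    words = tokenize(question)
    best_label, best_overlap = None, -1
    for example, label in TRAINING_DATA:
        overlap = len(words & tokenize(example))
        if overlap > best_overlap:
            best_label, best_overlap = label, overlap
    return best_label

print(predict("my school wants a suspension meeting"))  # → suspension
print(predict("can they search my backpack"))           # → search
```

Nothing in this sketch “understands” school discipline; it simply exploits statistical regularities in past data, which is the sense in which machine learning performs tasks we associate with human cognition.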
Perhaps the most crucial legal skill is the prediction of judicial outcomes: a sophisticated, intuitive ability that comes from reading many cases and learning to reason by analogy. Though AI has yet to replicate human cognition, it may be able to achieve better outcomes without reproducing the thought process. Machine learning algorithms look at recorded data—such as key facts, past outcomes, amounts of settlement, judges involved—and detect patterns. Nikolaos Aletras and his collaborators built models trained on European Court of Human Rights (ECtHR) cases, which looked for similarities between past judgments and newly lodged applications. Their models predicted ECtHR decisions with 79 percent accuracy. Theodore Ruger, Pauline Kim, Andrew Martin, and Kevin Quinn similarly set out to predict the judgments of the United States Supreme Court for a Term. They used six variables to predict whether individual justices would affirm or reverse the lower courts’ decisions. The model had a 75 percent accuracy rate, as compared with legal experts who correctly predicted 59.1 percent of the outcomes.
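Stripped to its essentials, this style of prediction asks: among past cases with the same observable variables, which outcome occurred most often? The sketch below is hypothetical; the variables, cases, and outcomes are invented, and the real Supreme Court study used six variables and classification trees rather than this simple majority rule.

```python
from collections import Counter

# Hypothetical past cases: (circuit, ideological direction of the lower
# court's ruling) paired with the observed Supreme Court outcome.
PAST_CASES = [
    (("9th", "liberal"), "reverse"),
    (("9th", "liberal"), "reverse"),
    (("9th", "conservative"), "affirm"),
    (("4th", "conservative"), "affirm"),
    (("4th", "liberal"), "reverse"),
]

def predict_outcome(features):
    """Predict the majority outcome among past cases with matching features."""
    outcomes = [o for f, o in PAST_CASES if f == features]
    if not outcomes:
        # No comparable past case, so the model has nothing to go on.
        return None
    return Counter(outcomes).most_common(1)[0][0]

print(predict_outcome(("9th", "liberal")))  # → reverse
```

Note that the model never asks *why* a case was reversed; it only counts how often reversal followed a given combination of variables.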
However, as these models operate by analysing past data, they have limits. The models will only be helpful insofar as future cases resemble past data sets. And yet the whole point of prediction is that lawyers can tackle new, unique fact situations. The models may also exhibit bias towards past idiosyncrasies which may not arise again, meaning that the patterns detected are no longer useful. It is likely, though, that the technology will improve and these limitations will narrow. When lawyers engage in legal prediction, they look for reference points in cases, and they do this by looking for similarity and dissimilarity. Consequently, there is room for machines to help with that search for similarity, and for developers to pinpoint what lawyers look for when they sift through cases.
Yet there seems to be something about lawyers’ intuitive ability to predict legal outcomes that will make it harder to build models of greater accuracy in the same way that similar models can diagnose illnesses or give financial advice. As was the case for the United States Supreme Court model, prediction does not look at the internal content of the judgments and the reasons for the decision. Humans take a causal approach. We ask: how have certain variables in the case affected the outcome, and how would the outcome change if the facts were different? By contrast, predictive models look at the recurrence of those variables and ask: if this fact pattern reappeared, what is the likelihood that the same outcome would follow? Whereas lawyers look at the outcome and assign causal weight to the variables, predictive models use the observables to arrive at the outcome. Though machines can approximate similar or better outcomes, their process is essentially the inverse of human reasoning. In automating this human art, we must consider what value we are gaining and losing.
As in the finance sector, the number of lawyers will likely continue to decline as automation increasingly streamlines legal work. The industry must embrace the technology and figure out exactly how it should be developed and integrated. Law schools must start adapting and training future lawyers for this new landscape. AI now seems inevitable, and machines certainly enhance human performance. Though machines can replicate our biases, they may also be able to expose bias by picking up subtle patterns in the way cases have been decided.
However, it is less clear that AI will or should take over entirely. Even where machines may be able to outperform humans in the outcomes—as in some legal predictions—something important is lost in bypassing human cognition to reach those outcomes. At the heart of law and judicial reasoning is the answer to the “why” question: why, in a certain case, should a judge reach a particular outcome? This kind of analysis can then affect the laws we develop in the first place. At present, machines cannot answer these questions. But law is more than likelihood, and any AI developer seeking to bypass that understanding risks undermining law itself.
The views expressed in the posts and comments of this blog do not necessarily reflect those of the Equal Justice Project. They should be understood as the personal opinions of the author. No information on this blog will be understood as official. The Equal Justice Project makes no representations as to the accuracy or completeness of any information on this site or found by following any link on this site. The Equal Justice Project will not be liable for any errors or omissions in this information nor for the availability of this information.
 Financial Markets Authority Consultation Paper: Proposed Exemption to Facilitate Personalised Robo-Advice (June 2017) at 3.
 Financial Advisers Act 2008, s 18.
 Beehive.govt.nz “Group appointed to develop new code of conduct for financial advice” (press release, 21 June 2017).
 Jill E Fisch and John A Turner “Robo Advisers vs Humans: Which Make the Better Financial Advisers?” (March 2017) at 13.
 Daniel Martin Katz “Quantitative Legal Prediction—or—How I Learned to Stop Worrying and Start Preparing for the Data-Driven Future of the Legal Services Industry” (2013) 62 Emory LJ 909 at 948.
 Oliver R Goodenough “Legal Technology 3.0” (2 April 2015) HuffPost <www.huffingtonpost.com>.
 “Overview” LawFlow <www.lawflow.co.nz>.
 Back of a Napkin <www.backofanapkin.co.nz>.
 Steve Lohr “A.I. is Doing Legal Work. But It Won’t Replace Lawyers, Yet.” The New York Times (online ed, United States of America, 19 March 2017).
 Dana Remus and Frank Levy “Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law” (27 November 2016) <www.ssrn.com> at 46–47.
 Samuel Gibbs “Chatbot lawyer overturns 160,000 parking tickets in London and New York” The Guardian (online ed, United Kingdom, 28 June 2016).
 Elena Cresci “Chatbot that overturned 160,000 parking fines now helping refugees claim asylum” The Guardian (online ed, United Kingdom, 6 March 2017).
 Lohr, above n 9.
 John Gerritsen “‘Wagbot’ gives school kids instant legal info” Radio New Zealand (online ed, New Zealand, 18 May 2017).
 Get Free Legal Advice “FAQs” <getfreelegaladvice.co.nz>.
 Harry Surden “Machine Learning and Law” (2014) 89 Wash L Rev 87 at 89.
 At 95.
 At 87.
 At 104.
 Nikolaos Aletras and others Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective (PeerJ Computer Science, October 2016) at 7.
 At 2.
 Theodore W Ruger and others “The Supreme Court Forecasting Project: Legal and Political Science Approaches to Predicting Supreme Court Decisionmaking” (2004) 104 Colum L Rev 1150 at 1162–1163.
 At 1155.
 Surden, above n 16, at 106.
 Katz, above n 5, at 956.
 Ruger, above n 22, at 1162.
 Katz, above n 5, at 952.
 Surden, above n 16, at 109.