In recent years, artificial intelligence (“AI”) has surged in popularity, quickly becoming an integral part of many industries. As AI algorithms continue to advance, legal professionals should consider how they can integrate AI into their practice. This post discusses two key ways AI seems poised to transform legal research—and one important warning.
Faster and More Efficient Searches
AI-powered tools can comb through expansive databases in seconds, rapidly identifying relevant legal authorities and presenting key insights in a concise, easily digestible format. When used strategically, this technology not only saves valuable time but also frees legal researchers to redirect their expertise toward the more strategic and nuanced aspects of their work. For example, many AI tools can quickly identify the elements of different offenses, the legal standards governing various motions, and "key" cases on a range of legal topics.
To see how this technology works in practice, let’s ask ChatGPT,1 an AI-based platform, a question about pleading standards in federal court.
While the facts these tools return are relatively basic, they permit legal professionals (after verifying the AI's accuracy, as discussed below) to focus their time on higher-level work, including strategic analysis, brief writing, counseling clients, and the like. One study conducted by the National Legal Research Group in 2018 suggests that AI tools can decrease research time by over 20%,2 a figure likely to grow as AI techniques improve.
Predictive Analytics
AI’s potential in legal research transcends speedy searches. By leveraging machine learning techniques, AI algorithms can analyze patterns and trends within legal data to make predictions about case outcomes, judicial decisions, and potential arguments. This predictive analysis empowers legal professionals to develop stronger litigation strategies, enabling them to anticipate possible hurdles, gauge risks, and make well-informed decisions based on data-driven insights.
For example, one promising AI-based program, LexisNexis’ Context platform, allows its users to pinpoint the specific language and cases judges rely on most often. With Context, legal professionals can enter the judge presiding over their case and instantly access helpful information about the judge, how they often rule on different motions, which cases they find persuasive, and even which fellow judges they often cite.
Using Context, let’s see what we can learn about a random U.S. district judge. Upon entering her name, we retrieve a graphical representation of her rulings on various motions, including motions for summary judgment, motions to dismiss, and discovery motions. Reviewing this graph, we learn that this judge has ruled on 963 motions to dismiss and has granted roughly 48% of those motions. Taking this a step further and filtering by practice area, we learn this judge has granted less than 25% of motions to dismiss in trade secret cases.
Moving on to this judge’s citation patterns, we learn that she cites Strickland v. Washington, 466 U.S. 668 (1984), more often than any other case, and that she cites a former district (and current Eleventh Circuit) judge more than any other judge. Using filters, we can obtain more specific information; for example, in her rulings on motions to dismiss in health care cases, this judge most frequently cites United States ex rel. Clausen v. Lab. Corp. of Am., 290 F.3d 1301 (11th Cir. 2002).
While legal professionals could spend hours tracking down this valuable information, AI-based platforms can surface it in seconds.
A Warning & Reminder
While AI’s potential to transform traditional legal research is immense, the technology still has significant limitations, which give rise to ethical and other concerns.3 Most notably, the accuracy and impartiality of an AI algorithm hinge on the quality and fairness of the data it was trained on. If an AI model is built on flawed or biased information, those inaccuracies and biases will inevitably seep into its results. For example, when asked for leading cases on civil conspiracy in Florida, ChatGPT produced two helpful cases, but also a third case that did not address civil conspiracy at all.
Lawyers must acknowledge and address the flaws that can be embedded in AI models. Without careful attention, a lawyer may inadvertently rely on a skewed or erroneous result derived from an AI model. Thus, while AI holds immense promise for improving legal research, lawyers must diligently oversee AI’s output and independently verify its accuracy.
1 ChatGPT is available at https://chat.openai.com/.
2 The National Legal Research Group, The Real Impact of Using Artificial Intelligence in Legal Research (2018), available at https://www.lawnext.com/wp-content/uploads/2018/09/The-Real-Impact-of-Using-Artificial-Intelligence-in-Legal-Research-FINAL2.pdf.
3 See, e.g., Jonathan Grabb, Lawyers and AI: How Lawyers’ Use of Artificial Intelligence Could Implicate the Rules of Professional Conduct, Fla. Bar News (Mar. 13, 2023); David Lat, The Ethical Implications of Artificial Intelligence, Above the Law (June 15, 2018).
DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.
© Zuckerman Spaeder LLP | Attorney Advertising
Copyright © JD Supra, LLC