AI, Law, and Legal Education

December 15, 2023

In 2023, a man named Roberto Mata sued Avianca, a Latin American airline based out of El Salvador, claiming he was injured when a metal serving cart struck his knee during a flight to Kennedy International Airport in 2019.

The airline asked a Manhattan federal judge to throw out the lawsuit on the grounds that the statute of limitations had expired. Mata’s attorney objected, filing a 10-page brief that cited over half a dozen precedents supporting the argument that the case could proceed, including Martinez v. Delta Air Lines, Varghese v. China Southern Airlines, and Zicherman v. Korean Air Lines.

The problem? None of those cases existed.

The legal risks of AI

It turned out the cases were all made up by ChatGPT, which Mata’s attorney had used while preparing the brief.

The story made a huge stir in the legal community (The New York Times headline was “Here’s What Happens When Your Lawyer Uses ChatGPT”). The legal profession is currently reckoning with whether, and if so how, generative AI tools can be used effectively and ethically. And, to be fair, generative AI raises a lot of provocative questions not just about false information and plagiarism, but also about client confidentiality and the loss of proprietary information.

In June a judge ordered the lawyers who filed the brief against Avianca and the firm they work for to pay a $5,000 fine.

“There is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” wrote Judge P. Kevin Castel, Senior Judge of the United States District Court for the Southern District of New York. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

He wrote that the law firm and its attorneys “abandoned their responsibilities when they submitted nonexistent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”

Professor Catherine Cameron at Stetson Law notes that, at the end of the day, no matter how much computer assistance you have, human beings are still responsible for making the complicated interpretive decisions required by any law case.

“I was a law clerk for a judge,” Professor Cameron adds, “and we ran into the human equivalent [of AI errors] all the time. I would get a brief, and I would read it, and most of the time the cases were real, but a lot of times people would get the wrong thing out of the case: they’d read it wrong… and they would cite it for a rule that was nowhere in the case. So one of my jobs as a law clerk was to check all those cases and make sure they actually stood for what the attorneys said they stood for before I passed them along to my judge.”

“I’ve always taught my students you’ve got to check everything,” says Cameron. “You can’t just trust the attorneys on the other side, or your law clerk if you become a judge, or an intern if you work at an office. You’ve got to check everything, because it’s your bar license on the line.”

AI and the future of academia

When ChatGPT was launched at the end of 2022, it sent ripples through pretty much every field, including law and academia. Stories instantly popped up about students using ChatGPT to write their papers, professors banning the tool on their syllabi for the Spring semester, and grand, soul-searching questions about whether or not generative AI is ushering in the end of high school and college English classes as we know them.

Others have argued that professors should encourage their students to use ChatGPT selectively in the classroom and in some of their written work, both to acquaint students with important new technological developments and to teach them about its limitations.

Is AI a threat to the legal profession?

In April, a report from Goldman Sachs estimated that 44% of legal jobs could be subject to automation in coming decades, more than in any other sector except administrative work.

However, this isn’t the first time people have predicted that the legal profession would be rocked by new technological developments. A decade ago, people were making similar predictions about things like the internet, personal computers, and smartphones. But employment in the legal profession has only grown faster than the American workforce as a whole.

As one New York Times technology reporter puts it, “The impact of the new technology is more likely to be a steadily rising tide than a sudden tidal wave. New A.I. technology will change the practice of law, and some jobs will be eliminated, but it also promises to make lawyers and paralegals more productive, and to create new roles. That is what happened after the introduction of other work-altering technologies...”

Meanwhile, according to one report, the AI legal software market is due for rapid expansion. Currently estimated at $1.3 billion globally, it’s expected to grow to $8.7 billion by 2030. Of that, $377.6 million is in the United States, where the market is expected to reach $1.3 billion by the end of the decade.

How AI is being used in law

Some judges are banning the use of AI entirely in their courtrooms, while others are demanding that lawyers commit to transparency in their use of the technology in their work. 

In Florida specifically, the state bar association is currently considering a proposal that lawyers must have clients’ approval before using any generative AI over the course of their case. 

“Lawyers should be cognizant that generative AI is still in its infancy,” the proposed advisory opinion says. “A lawyer may ethically utilize generative AI technologies but only to the extent that the lawyer can reasonably guarantee compliance with the lawyer’s ethical obligations,” such as “the duties of confidentiality, avoidance of frivolous claims and contentions, candor to the tribunal, [and] truthfulness in statements to others.”

While the use of ChatGPT to produce a legal brief that gets turned in to a court unedited obviously poses some serious problems, AI tools are already being used to enhance and accelerate kinds of legal work that used to take much longer.

A report from the Brookings Institution notes that AI makes discovery more efficient and accelerates the production of first drafts. It could also broaden access to legal services, and it could even be used to analyze court transcripts in real time, pointing out avenues of inquiry for attorneys that might not have been readily apparent.

AI in law school admissions

Law schools are just as divided as the courts: some have banned prospective students from using AI tools in the admissions process, while others explicitly allow them.

In July 2023, the University of Michigan Law School banned the use of ChatGPT in writing personal statements. Within a week, Arizona State University officially announced that it would allow the tool. ASU administrators argued that as generative AI becomes increasingly commonplace in legal work environments, it makes sense to permit incoming students to use it in the admissions process.

Other admissions professionals have pushed back, however, saying that openly encouraging the use of AI in writing personal statements is a step too far. And so far, most incoming law students agree with them. A survey by Kaplan released in October found that 66% of pre-law students thought generative AI shouldn’t be allowed in writing personal statements. Only 14% supported its use; the rest were undecided.

The primary objection was that the tool provides too much support for weaker writers, especially in a portion of the admissions process that is specifically intended to evaluate students’ writing abilities.

On the flip side, some commenters have argued that law school admissions departments should begin using AI themselves in the interest of expanding services and improving efficiency. AI bots could make certain department services available around the clock, and they could speed up reviews of applications. Whether a student would feel comfortable knowing that their application was rejected before a human being ever saw it, though, is just one of the many conundrums introduced by this new technology.

Apply to Stetson Law

At Stetson University College of Law, a new generation of lawyers and legal scholars are wrestling with these issues and advancements every day. Stetson students go on to play critical roles in shaping legal debates all over the country. Find out more here about Stetson and apply to the J.D. program today!

Stetson professors weigh in on AI in Real Cases

In this month’s episode of Real Cases, we sit down with professors Catherine Cameron and Kelly Feeley to discuss the limits of AI tools and online databases, the enduring importance of legal interpretation and analysis, and the unexpected ways new technologies can replicate structural biases. Both professors regard AI not as transformative, but as an extension of tools for search and automation that have been shaping the discipline for decades.

 

Topics: Real Cases Podcast