Real Cases Podcast: AI Is Changing Law School: What Future Lawyers Need to Know

May 7, 2026

 

Transcript: 

speaker-0 (00:02.958)
And what I'm trying to encourage my students to think about is developing a set of skills that helps you become more mature in your legal judgment out the door. Right? In other words, thinking about how do I become more sophisticated as a lawyer, more mature as a lawyer in my thinking before I go out the door, because you want to come in at a level where you can supervise a machine doing that kind of work,

or understand where a machine might be useful when you're doing that kind of work as opposed to sort of coming out the door like I did. Okay, you know, what are we doing here? Give me a task. And I think now I have to be able as a young lawyer to anticipate the tasks, to understand where my judgment is going to matter, to understand the things that are important about that judgment, like how do I develop it? How do I use it?

speaker-1 (00:55.598)
This is Real Cases, a legal podcast presented by the Stetson University College of Law. We'll sit down with Stetson Law faculty and students to examine today's critical cases and debates in environmental, international, elder, and business law, plus the role of social justice in these fields. Join us as we open the case file. Episode 41, How AI is Changing Law School, What Future Lawyers Need to Know.

I'm Daniel O'Keefe, Master of English Literature from Indiana University. Today we're speaking with Professor Kirsten Davis. In addition to her JD, Dr. Davis holds a PhD in human communication from the Hugh Downs School at Arizona State University. For the past few years, she's been studying the impact of generative artificial intelligence on legal communication, legal ethics, and higher education. And her course, Legal Writing with Generative AI, was one of the first of its kind to be offered to law students.

speaker-1 (01:53.932)
I asked her how her background in communications and rhetoric gives her a unique perspective into legal writing and pedagogical issues surrounding it.

speaker-0 (02:03.534)
So I went to law school way back in the 90s, but I had been a rhetoric undergrad, and I had a fabulous advisor, a professor who had really encouraged me to go get a PhD in human communication. But I told her no, I was going to law school. So off I went to law school. And then later on, after clerking and practicing for a bit, I found myself at Arizona State University. I actually found myself out in Phoenix and decided I would apply to their

graduate program in human communication, based on that strong interest in rhetoric. Around the same time, I started a position at the law school there and deferred going into the PhD program for a bit, but eventually started going at night and just really fell in love with continuing the study of rhetoric. And having gotten the law degree in between those two rhetoric events made me realize that everything I was learning about rhetoric was

clearly applicable to the law, right? Law is a place where we use language to create what we are doing almost exclusively, right? The law is almost primarily a language-based profession and domain. And so I was teaching legal writing and of course the training of writing is a rhetorical field as well. And so I spent...

a lot of time in my PhD program developing expertise with the theories of rhetoric, all the way from classical to postmodern, not an expert in all of those, but studying all of those. And then when AI came out in the fall of 2022, I went on sabbatical in the spring of '23. And when I touched the tools for the first time, I thought, this really is going to have a direct impact on how we as humans

create meaning from language, that there is a co-creation activity even with, and I'm going to be careful here, I want to say a non-rhetorical actor, but that's not what I actually think. I think the machine does in fact at least generate the artifacts of rhetoric, even if it does not have rhetorical intent. And so now I see the tools as primarily rhetorical tools; that's the view I take of them. So

speaker-0 (04:22.944)
I'm not as interested in the underlying technology, although I think you have to know what it is in order to understand what's going on rhetorically, but I'm very interested in how we interact with the tool to create and give meaning to text, right? And maybe create and give are the same thing, but to create meaning in text. And then that leads me to the practical questions as a legal writing teacher:

you know, what is the practical impact of that interaction between lawyer and client and machine and context, right? How is that changing what we're doing? How is that changing how we need to approach it? So I think it is really paradigm shifting. We've never seen anything like this with respect to the production of text that we can use that is persuasive on its own, that feels human in many ways, but is not human.

does not include the motivations of humans except as those motivations are embedded in the text itself, right? And so those all lead to some really interesting questions about the, you know, the ethical and practical use of that language that's coming out of the machine.

speaker-1 (05:37.144)
That's fascinating what you were saying.

speaker-0 (05:40.16)
It's totally fascinating. You know, there are plenty of people writing in this space who have talked for a long time about human-machine communication. For someone who has not been in that world as a scholar for a long time, for the rest of us, this is a huge change, right? And so I'm now sort of catching up with more of the literature in sort of digital rhetoric, although I think you have to create a new

avenue, a new name, for this sort of AI-rhetoric interaction. But yeah, I've been interested for a long time in sort of digital rhetoric and, you know, the ways in which technology impacts the way we create meaning and language. But this is just new. This is to me so new and so terrain-shifting that I got very excited. Probably the most exciting thing to happen in the 26 years I've been teaching legal writing in law schools.

speaker-1 (06:42.14)
One of the things that it makes me think of is that

People are starting to bend over backwards and twist themselves into pretzels in order to prove that they're not using AI in certain circumstances, right? And the example that always comes up for me is the em dash, right, which is this type of punctuation that I've long loved, that I've always found to be extremely useful in certain sorts of cases for making your writing more conversational, right?

And along comes ChatGPT, and the algorithm gets tweaked in just the right way, and next thing you know, it's showing up everywhere. And now there are people out there, including people who had never even heard of an em dash before in their life, who the moment they see it, they think a robot wrote this, not a human being. And so now you have people, including people who would want to use em dashes and who know better, who are saying, don't use em dashes, because it's going to make this thing,

regardless of whether it was written by a person or a robot, seem like it was written by AI. It's just a little thing like that, but it makes me think of all of the potential ways in which AI is going to change the occasions upon which people choose to use certain sorts of language to signal its authenticity, or, you know, not necessarily to prove that they're not using AI, but it seems like it does so many things to change

what seems like appropriate language.

speaker-0 (08:15.99)
I think that's absolutely right. In fact, it's funny that you say that, because I had this conversation with a student yesterday, and we were talking specifically about the em dash. So the first thing it makes me think about is that what I need you to do as a young lawyer, or a lawyer at all, is use judgment about your writing, right? So, like you, I still think the em dash is a fabulous way, particularly in persuasive writing, to bring attention to something you think is important.

It works when it's used sparingly and with purpose. And so what I want you to be able to do is exercise judgment when the AI spits out an em dash or not, right? That you're able to say, this furthers my goals as a writer, or it doesn't further my goals as a writer. I would like the judgment to be broader than whatever the current rule of the moment is, don't use, do use, because everything

can have a place, right, if you can exercise judgment about what you're doing. So again, even if you never produce the em dash yourself, you are going to have to be able to understand what it does, why it works, how it can be used, in order to make decisions about text generated by a tool, or by your colleague who has written something for you to review and submit, right? So judgment, not this rule-bound, this-is-the-way-it-is approach to writing, lawyering, persuasion. It's all about what the Greeks would have called kairos, right? Right timing and due measure, right? That in every circumstance, we're asking those questions. But the second thing that you bring up is the question of authenticity, which is the one that keeps me up at night, right? So right now we are thinking all the time about

whether or not the text can be trusted. And can it be trusted not just with respect to whether it is fictional, right, but has a human been responsible for it? Is it the human that I think is responsible for it? And of course, that's always been a question, right? That would be a ghostwriting question: someone has written something on my behalf.

speaker-0 (10:34.07)
And so I spent some time thinking about this and trying to write about this as well, this question of authenticity, because I think it's going to be the real question for the future. In other words, how do we as audiences evaluate the authenticity? And I'll put that in the air quotes, right? The authenticity of a document that we're reading and how does an author signal that? And how do they, for lack of a better term, sleep at night about that?

So the thing, I think, is not, did I use AI or didn't I use AI, and one of those is inauthentic and one of those is authentic. It's, in combination: can I take ownership of this text, and can you perceive that I've taken ownership of this text? Now, maybe in a different context, right, I'm not going to speak to, for example, creative writers. Maybe that is a different domain where different considerations are happening.

But lawyers are instrumental writers. We are writing to get a job done. What should make that text authentic is that no matter who produced it or how the text was produced, I have taken ownership of it. I have read it. I have ensured that it reflects my thinking, that it reflects accurate information, whether that be facts or cases or statutes, that the reasoning is consistent with legal logic. I have yet to solve the problem of how we're going to do that.

But the first thing is to kind of have an ethics built around it. What is the acceptable, noble, community-appropriate way to approach this process, right? And then readers have to be able to see enough in that document to feel that it is trustworthy. In many ways, that's one of our big concerns about authenticity in this context: that it's trustworthy, that someone believes in it enough

to submit it, that it hasn't been submitted without thought or ownership. And you see that coming out a little bit in the cases from the courts, where they receive a document that has either a quotation that doesn't exist or a case that doesn't exist. They are telling lawyers over and over again, your ownership of this document matters. They're not saying it with the word ownership, but that's how I interpret it, is that,

speaker-0 (12:58.986)
you know, no matter who produces this, you're accountable for it. You have to read what you cite. You have to understand what your document says. I mean, they're offering that. But these questions of authenticity have come up in the past, right? So, like I said, ghostwriting has been an ethical concern in the past for lawyers. Many ethics rules have addressed this question in ethics opinions and cases. You know, can I, as a lawyer,

write a complaint for you as a pro se party, a non-represented party going into court and not sign my name to it, right? We've had different ways of addressing that over time. For a long time, it was considered to be completely unethical to do that. And now in some jurisdictions, it is ethical to, you know, ghostwrite a pleading in court for someone who's representing themselves.

So that's just one example of ways that we've been struggling with those same kind of ethics questions about authenticity for a long time. And we'll just continue to struggle with that. But that is the big question. The audience question of can I receive this as authentic work sponsored by a human, owned by a human? We still care about that. We really still care about that and we should.

speaker-1 (14:21.742)
Could you talk a little bit about your course, Legal Writing with Generative AI? What prompted you to develop that and what was your thinking around how AI needs to be worked into the legal education curriculum when you did so?

speaker-0 (14:35.758)
Yeah, so I was on sabbatical in the spring of 2023. You know, fall of 2022 was the first iteration of ChatGPT by OpenAI, and I got around to sort of taking a look at this in January, February. And as soon as I touched the tool, I thought, there is nothing about writing that is going to remain the same. And I probably should qualify that: thinking about the legal writing that I teach for practice, right? I was certain

that this was going to change the way we would use language, the way we would prepare and develop language, the way we would edit it, think about it, right? Think about our role in it. And then, of course, the way we would need to teach it. And so I wanted to be able to reach those students who were already past the first-year curriculum and begin to think about, okay, you know something about how legal writing works; how are we going to integrate that knowledge

with the arrival of this tool. And so that's what I was thinking about, trying to figure out how to operationalize a skill set that frankly no one knows what to do with, right? We're trying to figure out how an AI-human hybrid writing process would work. So I've done it for now as a one-credit-hour course; we meet every other week and we delve into these topics and we try our hand at different things.

So we start out by thinking about the ethical constraints. And of course, between now and when this started, things have developed quite a bit. Then we begin to think about what it means to prompt. How would we do that? So we're trying to figure out what inputs we need to put into the machine, what kinds of documents I should attach. And then we begin to do what I call vibe, right? V-I-B-E, vibe, right? What kind of interactions can I have to get the best text

I can get out of the machine, and then how do I evaluate that text and work with it? I call it analog, right? How am I doing that on my own? And so trying to figure out that process and what those thinking steps are in doing that. So we spent some time experimenting with that and then thinking more deeply about things that we wanna know. Can we develop a simple little chatbot that will help us do something particular?

speaker-0 (16:56.204)
And now that those have rolled out and we've had ways to access those, we've been able to do that a little bit as well. So what motivated me was to try to say, you know something about legal writing, let's add on to that. And I'll teach that in the fourth iteration this fall. So I'm really excited about that.

speaker-1 (17:13.23)
Wow, I didn't realize it was the fourth iteration.

speaker-0 (17:15.576)
Can you believe that? Right. You know, that's one of the wonderful things about Stetson. We tend to be sort of nimble in that regard. We have a lot of support for, can you create something new? Can you innovate in the space? And we had an advanced legal writing course that often took different forms. And I spoke with my deans and said, here's what I want to try, and they're like, let's go.

Let's give it a go. And so I'm really pleased with that. And I think the students who have taken it have found it to be of real value because we're learning together and they're anticipating a future of having to use the tool.

speaker-1 (17:59.254)
So much of the concern that I hear around AI, especially when it comes to students, is that it is going to slowly degrade their ability to think critically or to write well on their own. And I think that that issue really seems to come up when it comes to instruction and education and how do we incorporate AI into education in a way that's practical.

that is going to train people for the actual job market, that's future-oriented, that is geared toward what the world is going to look like in 10 or 20 years, but that at the same time also develops the foundational skills that students need in order to succeed and in order to perform their job without this technology if they find themselves in that situation. And so I guess I'm curious to ask how you see that folding into the legal writing program. What is the proper way to make this a part of the legal writing program while at the same time keeping all of its foundational strengths?

speaker-0 (19:07.256)
So, you have asked the million-dollar question, which requires me to take a deep breath and calm my nerves, because this is really, really the question that we're all asking. So how do you prepare students for this future with this technology while allowing its presence in law school not to erode their ability to learn? Because that is what you're worried about. And I tend to call it avoiding pressing the big button,

you know, the big easy button, I guess, from an old commercial. You're pressing the button, pushing out the information, and you haven't taken ownership of it authentically. That's going to be a problem in practice, but it's really a problem in law school. One of the things I've said from the beginning when I talked to folks is, you need lawyer intelligence to use artificial intelligence, because you're going to have to be able to evaluate the content it puts out

and also add the human value on top of the text generated. I mean, we do have to keep in mind that as sophisticated as these tools are at text generation, that is what it is, right? It is a text generation tool. It is not a tool that thinks for you, but it can assist in the process of the lawyer's thinking and communication. So, in addition to the upper-level class, I've been thinking about this all the time. I just had a conversation with students today

about this question of how do we fold in AI to foundational skills training, right? So some things that I have emphasized in anticipation of the nature of writing changing altogether is I have really focused the start of my class on critical legal reading. So we start with about four weeks of focusing on how do I read legal texts? And of course, that is common in law school, but sometimes we don't.

call out the method or actually have an operationalized method. So I think that feeds into the skill set one needs when they're going to use AI to produce communication. And that is, can you critically read the text from a lawyer's perspective? For me, going and reading medical documents generated by AI, I'm not going to do a very good job

speaker-0 (21:25.484)
of being critical of those texts, right? I would need to have some experience in the area, to understand what I'm doing. So I think those foundational reading skills are really important. And I've made a shift to trying to do more writing-to-learn activities as opposed to learning-to-write activities. So, if I anticipate that much of the generation of text may be done with AI in the mix,

Can I get students to use writing to solidify their understanding of the concepts that go into using that tool? So if I need them to think about, you know, let's try to draft a rule, AI might be able to draft that rule for you, but you try it and then reflect on it. Respond to me about it. Or let's get AI to generate three rules, and you reflect on that and respond to me and explain your thinking

behind what has been generated. So we know that writing does help us learn and it helps us consolidate knowledge. And I'm really concerned about the loss of that process. So I'm doing more writing to learn as opposed to learning to write the documents themselves. Now, of course, we're breaking those down. We're practicing those documents. But I'm trying to anticipate both. You may have occasions where you're generating and occasions where it's being generated and given to you.

And then the other thing... so those are just a couple of small changes. And what I've found is that in that first semester, I am doing more of that foundational work on building the mental skill set than I am introducing AI. So I'm introducing those foundational ideas, and then I'm putting in a little bit of AI, more so at the end, right? We're working it in along the way in small ways, but at the end, we're going to do a little bit more. So now that you know something,

Right? We can start to work with this tool to assist you. And sometimes that's a hard sell, because there is this idea that AI will do this for me. Just as 20 years ago the question was, won't my assistant do this for me? And it's the same thing, right? You're ultimately responsible for what's coming from whomever or whatever is assisting you. And so you have to have enough knowledge to be able to assess what is going on in front of you. And we're seeing that in practice, where

speaker-0 (23:49.454)
Lawyers are filing briefs and motions that contain information that is from AI or likely from AI that is just entirely fictional. So we've got to have that work ethic and that skill set and that knowledge. But the really big problem, if you want me to answer a little bit more, is assessment. So AI use can mask what you know and what you don't know.

Right. So in many cases, AI can produce the kind of response a strong law student can produce. And when we have a novice, it can do novice work, because the patterns in the language are pretty well rehearsed, right? And so it can draw upon those. So if you're using AI outside of class, with or without my permission, and you're not engaged in generating that content,

I can't know whether you've learned something. So I'm trying other things. Can I get reflection? Can I have you go step by step through something? Are there other ways to help with that? But really, I think it's driving us to do more assessment in proctored settings, in the classroom, in exam-taking. So this is the first time in 26 years of teaching that I have a time on the final exam schedule.

So mostly we've produced, you know, out-of-class written documents, just like a lawyer would, and we're still doing some of that. So that's not gone at all from my curriculum. But it's now being augmented by more testing of whether you have retained in your brain the kinds of things that I think you'll need to have foundationally to use AI. So that is a big challenge across legal writing programs across the country: how do we assess so that we're comfortable the student has actually learned

the mental skill set and the practical skill set that I want them to have on their way out the door.

speaker-1 (25:49.474)
What are some of the ways the availability of AI changes your approach to teaching and assessment?

speaker-0 (25:56.014)
So what I have to build in is more process, right? Which also calls upon the professor a lot more. So I have to really make it process-centered. Maybe I need to meet with you more often to talk about how it's going. Maybe I ask you to have AI generate 10 ideas for a paper topic, you identify one or two that you think are workable, and you come talk to me about it or you write to me about it.

I collect more drafts. I ask you to present more information. And all of this is actually good pedagogy in general, right? We're not reinventing the wheel here. These are not new ideas. The question is, how do I use those to ensure that AI is being used responsibly instead of misused for the purposes of learning? And that may be different on the side of practicing lawyers. You know, perhaps

we aren't as concerned with how the information gets generated as much as we are with the quality of the end result, right? So, as lawyers have done forever, they've used a form to start with. They've borrowed from someone else's brief. They've looked at those examples, and that tends to be acceptable practice, right? How it becomes useful and good, we don't care about as much. We worry that the end result

is good and that demonstrates your competency. I think here in law school, we care very deeply about the process, right? Because the process is the learning process. So I want to just backtrack for just a second and say, I think to the extent that we're developing junior lawyers in practice, we may also be very much concerned about the process as well. And that may put more pressure on law firms to understand and teach and educate better about how to do process.

Because we want young lawyers to continue to learn. We want all lawyers to continue to learn. But I think the concerns run deeper in law school on that front.

speaker-1 (27:56.398)
This is kind of jumping ahead to thinking a little bit about the job market. I know this is something that has been brought up in a general way now that AI is being more and more widely adopted, but it also seems like it's particularly the sort of thing that early-career lawyers are worrying about, too: the fear that the positions that are being affected the most are the ones that you would get right out of college or right out of law school,

where those are some of the abilities and skills that people are able to kind of either do or at least fudge their way through more effectively than ever using some of these tools. yeah, that's just like a little bit of thought about it. Because I think that's what you were saying about the idea that institutions that want to keep a pipeline of good attorneys.

are going to have to be thinking a little bit more about how they play a role in the development of people's abilities as they start to move through the early stages of career.

speaker-0 (29:06.19)
I think that's such a good point and I will just say at the beginning that we are getting some benchmarking studies that suggest AI tools can be better than humans with respect to something that can be demonstrated objectively. So there is at least one study that looks at some law specific AI tools and compares outcomes on objective questions.

with lawyers. So we're taking out sort of the subjective analysis, and that is a big part of this, so I don't want to overstate this. But in those studies, right, in things like summarizing documents or answering questions about document content, the AI tools will outperform humans. AI tools don't fatigue. They don't have trouble, you know, looking across a set of documents, or many, many pages, right?

So there may be places where the AI tool will be supportive of the things I would have cut my teeth on as a young associate, or very possibly, under my supervision, may do a lot of that work rather than me doing it directly. So I think we need to be mindful that, no matter what happens, the advancing of technology, just like e-discovery, requires that we think about

how lawyers are gaining the skills. So someone my age might think, I've got to sit and go through all the documents in the warehouse in order to learn how to identify attorney-client privilege issues. I've got to go through one by one so I learn how to develop that skill. And I will say there are other ways of thinking about that: the AI tools now, the e-discovery tools that have been around a while,

do a lot of that review for you, under supervision and in a specific context, defining how the tool is going to do it, running experiments on some subset of that content. And we have to recognize that that's how young lawyers may develop their skills at understanding what attorney-client privilege is, being able to direct others later on, being able to think about what that information means.

speaker-0 (31:24.526)
So maybe those rote ways of learning things, and I call them rote, are just not going to be available. So are you setting up more training to get at those ideas in other ways? Are you setting up training that meets the young lawyer where they are, with the kind of skill set that they're trying to develop, and then aiding that critical thinking? But I think there is a reasonable concern that a lot of entry-level-type work, just as e-discovery sort of replaced

some of that entry-level work or changed how it was gonna be allocated. I think that's a real concern to have, too. So what I'm trying to encourage my students to think about is developing a set of skills that helps you become more mature in your legal judgment out the door, right? In other words, thinking about how do I become more sophisticated as a lawyer,

more mature as a lawyer in my thinking before I go out the door, because you want to come in at a level where you can supervise a machine doing that kind of work or understand where a machine might be useful when you're doing that kind of work as opposed to sort of coming out the door like I did. Okay, you know, what are we doing here? Give me a task. And I think now I have to be able as a young lawyer to anticipate the tasks, to understand where my judgment is going to matter.

to understand the things that are important about that judgment, like how do I develop it? How do I use it? Something as simple as citation, you know. AI tools in the legal domain will give us a roughly correct citation, sometimes better than others, right? But do you

speaker-1 (33:06.158)
Are you familiar with the phenomenon, not specifically in the legal context, but in other...

speaker-0 (33:10.07)
And I'm not even talking about, you know, the general frontier models, because that would be a bad idea. But, you know, a reputable legal database can now give you a pretty decent citation that, if you want it to be Bluebook or ALWD compliant, you need to check. But, you know, it'll start you. It'll give you something to start with. And so I'm realizing, you know, citation form, I have to teach it. Students have to understand it because they have to read it.

They have to know what those citations mean and what underlies that. But I'm spending a lot more time talking about things like, OK, here's a case that's been affirmed in part and reversed in part. You see that in the citation. We can reproduce that citation. But let's talk more about the kinds of things you need to think about when you cite this. Do you add some explanation in a parenthetical? Do you cite this case at all? Or do you go find another case?

you know, do you know how to find the information in both the first case and the affirming and reversing case? What are the kinds of things you need to think about beyond just reproducing the citation? Now, we've done that before, but, you know, I'm accelerating that, right? Moving that up earlier in the education.

Right, that kind of conversation as opposed to, you know, thinking about maybe that's a little bit later down the road or that's a little more advanced into second semester. I just need to get you to reproduce a citation. Now I'm trying to say, okay, what are all of the things that go with this, assuming that the tool will give you the citation to start with, right? It's going to give you a starting point for something that I would have had to have done by hand, right? And that is, you know, write out the citation form.

So I mean, when you've been asking questions about this, you're recognizing that there's a lot of conflict and tension, because there are some really good things and some things that aren't so good, and then things we don't want to leave out because they're important to understand, but we realize they're really novice skills that maybe an AI can do at least some of. And so I've got to push you a little bit further. The complexity is significant with this tool now in terms of how to approach the learning.

speaker-1 (35:21.518)
What do you think are some of the most promising uses of AI in the legal field that people don't yet widely know about?

speaker-0 (35:28.556)
You know, from my perspective as someone who's really invested in writing and communication, I think it can actually improve our ability to communicate well if we use it well. So, you know, it might give us more ideas about how to approach a problem. It might give us alternatives for how to express something. It might give us a first draft or help us edit a draft. I mean, I'm seeing, you know,

that there are ways that we can improve our communication across the board. And that's from people who may consider themselves to be weaker writers and communicators to people who are stronger. And I think you'll see some well-known lawyers talking about this on blogs, talking about the ways in which AI can improve or tackle content.

So I mean, I think the danger is outsourcing our judgment to the tool and not recognizing that it is a machine that produces language for our use. Yeah. And I'm sure there are other improvements to e-discovery tools as a result. We're seeing on legal research some really interesting things. So deep research modules that can, you know, really accelerate your research.

I've seen it built into other tools where I can hit a button and summaries inside a case will pop up, allowing me to skim those summaries for content if I want, different ways to search the law, not just by asking the AI a question, but by tools that allow you to imagine what a judge might say, and it's designed to find that language in some way.

summarizing what cases have done in terms of validating your case law. So that's, you know, I've seen that feature built in, and we're just seeing this rapid increase in features like that coming up to, you know, help you navigate and move through content. So those are things that I think are useful and helpful. There may be more in terms of, you know, I hear lawyers talk about the ability to search large volumes of content for information.

speaker-0 (37:48.64)
the video-to-text interaction. So, for example, body cameras, right? We may be able to take body camera footage and ask questions about that body camera footage, right? So, you know, do you see any Fourth Amendment violations, right? So to be able to talk

to the tool as it analyzes the footage that might give us insight into claims or defenses that we might not have had otherwise, right? And sort of prompting thought as opposed to taking over the thought. So those are some things I think are kind of interesting and they really do revolve around sort of helping promote, I hope, deeper thinking as opposed to outsourcing the thinking.

speaker-1 (38:40.526)
Could you say a little bit about how you see Stetson leading in the field of legal writing and in the incorporation of AI into that study?

speaker-0 (38:49.506)
Yeah, I sure can. First, I will speak on behalf of, well, not on behalf of him, but about him, our Dean, Ben Barros, who is leading in this space himself, right? So he is involved in national leadership on this matter, and he is very interested in innovation. He is reaching out and working with alums in this space. And so he is bringing a lot of energy institutionally to this. So I want to commend him for that.

We've started to introduce into orientation some background and training for everybody, including on how to avoid AI sort of interfering with your learning as we've talked about already and really focused on that and really getting entry level students thinking about sort of good uses of AI and this idea that an ethical framework in the law already exists around AI use, right? And those community standards are really helpful to us.

And then in the upper level, we're starting to spin out more courses. So as we talked about, I'm teaching an advanced class in the writing context. We have a wonderful adjunct who is experienced in this space, who is teaching an AI in the law course. We offer e-discovery courses that also involve AI. We have librarians who are teaching AI research skills.

We're starting to develop more and more of a sort of institutional approach in the upper level. And that sort of leaves the first year, right, which we're still trying to figure out. So for me, a lot of the responsibility for this, and people see this differently, so I'll just share my view, a lot of the responsibility for this does fall inside the legal research and writing classroom. And I have determined, for me, that it requires a significant revision of what we're doing.

So there are a handful of us who are really taking this on and thinking about this. And we're starting to think about, is there kind of a cross-discipline, a cross-course approach to how we approach it in the first year? And as I kind of suggested here, this is a really challenging topic because you're introducing novices to all of these foundational skills and then asking, how does AI impact the learning of those?

speaker-0 (41:07.232)
And what AI skills do we need in the first year, right, before we move on to the upper level? So that, I think, you know, is one way we're leading. I also think, you know, we have some really interesting leadership going on in our advocacy program. So our Educating Advocates Conference, the folks there, led by Liz Bowles, they're thinking about how they integrate AI tools into the training of trial advocates, for example,

leading their peers in terms of how AI impacts competitions and coaching, right, in that advocacy space. And I will say, on behalf of myself, one final way that I think we're leading: I co-founded and co-lead, with another professor from another institution, what we call the Legal Writing and Generative AI Convo Group. We're now up to 550 faculty members across the country.

We meet once a month to talk about these issues. Our upcoming topic, which actually happens tomorrow (this is April now, we're in April), is what a rigorous legal writing experience means now, because the ABA standard for accreditation requires rigor, right? You need to have two rigorous legal writing experiences.

What does that mean now in the context of generative AI? So we'll get a bunch of lawyers or a bunch of law professors together tomorrow, and we'll talk about these issues and explore them together. And we've been doing that now under Stetson's leadership, I guess, through me, for almost three years. So that's, I think, something, a way that Stetson is helping lead the conversation about this.

speaker-1 (42:50.178)
Thank you so much.

speaker-0 (42:51.224)
Yeah, thank you.

speaker-1 (42:53.612)
This has been Real Cases. Thank you for listening. Check back for more episodes about an array of legal topics presented by the Stetson University College of Law. Learn more at stetson.edu.
