191. How generative AI is changing the legal industry

AI and big data | business strategy for lawyers | tech trends | Feb 21, 2024

Large language models and generative AI are going to have the biggest impact on word-heavy industries.

This means that the legal industry is on the brink of major changes. This will affect how lawyers work, what clients expect and the future profits of the industry.

In this episode, you will learn from Damien Riehl: lawyer, musician, and software developer, who counts Elon Musk as a fan.

His TED talk was viewed by 2 million people, he has litigated for JP Morgan and he has also copyrighted every melody in the world.

Listen to this episode to learn:

  • How using generative AI can help lawyers increase profit margins while charging clients less
  • How what's happening in the legal industry will be repeated in other professional services
  • Why ChatGPT and other generative AI tools make things up
  • Practical tips for how to get started with generative AI

FREE training

Get Tech Clients! Introduction to Tech for Lawyers

 

Listen here on Apple

Listen here on Spotify

Watch on YouTube

--- 

We love hearing from our readers and listeners. So if you have questions about the content or working with us, just get in touch at [email protected]

 

Say hi to Sophia on Twitter and follow her on LinkedIn.

Following us on YouTube, Facebook, Instagram and TikTok will make you smarter.

 

Interview Transcript 

Sophia Matveeva 
Damien Riehl, welcome to the Tech For Non-Techies podcast.

Damien Riehl 
Thank you so much for having me.

Sophia Matveeva 
You have an incredibly diverse background. I've never actually seen anybody like you. So you're a lawyer and you're also a musician, which, you know, I can see that combination, but then you're also a software developer altogether. So how do you combine these three things in your actual life?

Damien Riehl 
So there's a book that I just finished reading called Range, which argues that there are two types of people. There are the people that hyper-focus on one particular thing: the Tiger Woods who has been golfing since age two. And then there are people that do lots of things, say 10 or 20 different sports, and then at age 18 or 20 they land on one thing that they really do well. And the argument of Range, the book, is that maybe the second kind of people are actually better than the first.

If you hyper-focus, you are actually outperformed by people that work across many different areas. And I think that's true in my career. I did a bachelor's degree in music, and I was going to teach choir. But while I was conducting a Brahms piece, two of my tenors started punching each other in the face, and I thought, maybe I'll go to law school. So I went to law school, worked for a very large law firm, worked for some judges, and in much of that firm's commercial litigation represented victims of Bernie Madoff.

I also sued JP Morgan over the mortgage-backed security crisis. So I did high-stakes, bet-the-company litigation. But I've also been a coder since 1985, since age 10. So that law-plus-tech background led me to pitch Thomson Reuters, one of the biggest legal tech companies in the world. I said, here's legal tech that I think can change the world; you should build it and hire me. And they were dumb enough to do that. So I did that for a while, then left that cool job to do cybersecurity, where the biggest thing I did was that Facebook hired me and my company to investigate Cambridge Analytica.

So I spent a year of my life on Facebook's campus with Facebook's data scientists and the former FBI, CIA, and NSA people that worked with me, figuring out how bad guys use Facebook data. I did that 57 weeks in a row, Monday through Friday. And the stuff I would do on Monday would often make the New York Times and the Times of London by Friday. So I did that heavy, amazing work for a while. But then I left that cool job for my current cool job, where I have a billion legal documents. That is a billion statutes, regulations, judicial opinions,

from the United States and worldwide. So I have that for the UK, the European Union, Latin America, Africa, Asia, and I'm running large language models like ChatGPT across those billion legal documents to be able to answer multi-jurisdictional queries. So I would say the three things that I've done are serving me well in my current role in a way that a Tiger Woods focused on any one of those things could not. Multi-talented folks that are jacks of many trades, I think, will continue ruling the world in our large language model future. A large language model can go very deep on specifics, but we need people with generalized capabilities to be able to harness that.

Sophia Matveeva 
Well, thank you so much, Damien and Tech for Non-Techies listeners. I bring you really interesting guests. So if this does not warrant a five-star review, I don't know what does. So please get on with it with all the love in the world. Now, Damien, I'm curious. I have all sorts of questions to ask you, but...

I just had this thought. You know how music is actually quite mathematical. I'm not as fantastic a musician as you are, but I have studied music to an extent, and statistics, and music is very mathematical just because of the way it's written, the time signatures and so on. And as a developer, because that's also a very mathematical discipline, do you think that these two skills kind of coincide?

Damien Riehl 
They do, as a matter of fact. Even to the point that while I was finishing a 14-hour day at Facebook, I said to my colleague Noah, who was the best developer I've ever known: Noah, because music is just mathematics, you know how we could brute-force a password by going A-A-A-A, A-A-A-B, A-A-A-C? And he said, yeah. I said, what if we could do that with music? What if we could brute-force every single melody: do-do-do-do, do-do-do-re, do-do-do-mi, do-do-do-fa, until you mathematically exhaust every melody that's ever been

and every melody that ever can be. And he said, F yeah. Well, he didn't say F yeah, he said the other thing. That night we made 3,800 different melodies in a prototype. And to date we've now made 471 billion, with a B, melodies, exhausting every melody that's ever been and every melody that ever can be. And once they're written to disk, as we've done, they are copyrighted. So we've copyrighted 471 billion melodies. And then we placed everything in the public domain

to be able to protect the defendants in "you stole my melody" lawsuits. I've given a TED talk that's been seen 2 million times at this point. Before my TED talk, every defendant in those cases had lost. After my talk, every defendant has used my arguments and has won. So Katy Perry, Led Zeppelin, Ed Sheeran twice, once in the UK, once in the United States: all of them have argued what I argued in my TED talk, which is that perhaps the melodies at issue are unoriginal, therefore uncopyrightable.

So these cases should go away. And correlation is not causation, but I would say there's a pretty strong correlation: after my talk, everyone has won.
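The brute-force enumeration Damien describes maps directly onto a Cartesian product. Here is a toy Python sketch; the 8-pitch scale and 4-note melody length are illustrative assumptions, not the real project's parameters:

```python
from itertools import product

# A toy "melody" is a fixed-length sequence of scale degrees.
SCALE = ["do", "re", "mi", "fa", "sol", "la", "ti", "do'"]  # 8 pitches

def all_melodies(length):
    """Exhaustively enumerate every melody of `length` notes,
    exactly like brute-forcing a password: AAAA, AAAB, AAAC, ..."""
    return product(SCALE, repeat=length)

# 8 pitches raised to 4 notes = 4,096 possible 4-note melodies
melodies = list(all_melodies(4))
print(len(melodies))          # 4096
print("-".join(melodies[0]))  # do-do-do-do
print("-".join(melodies[1]))  # do-do-do-re
```

Lengthening the melody or widening the pitch set makes the count explode exponentially, which is how the real project reached hundreds of billions of melodies.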

Sophia Matveeva 
How wonderful. And so actually this leads me to one of the things I wanted to talk to you about. Oh, and by the way, everybody, I did see that talk and I forgot that I saw it. So I thought I was coming up with a really intelligent and original question, but apparently it had been put into my head by your talk that I saw a couple of months ago. So congratulations. Um, but one of the issues I wanted to cover with you today was this issue of copyright law and generative AI.

because there are some people who are saying this is going to be a disaster. There are other people saying, well, it's in the public domain, or it's going to help creatives more. And I'm seeing the merits of both arguments, but of course I'm not a lawyer. So what do you think?

Damien Riehl 
So I happen to also be a copyright lawyer; I've taught copyright law, so that's part of my expertise. Within copyright law there's an idea called the idea-expression distinction. That is, ideas are uncopyrightable: you can't copyright an idea. You can only copyright the expressions of those ideas, and only if a human makes the expression and that expression is original. If it is unoriginal, it's uncopyrightable; but if it's original, that expression is copyrightable.

And so the way that large language models work, and I know this is for non-techies, so I'll make it very, very simplistic: it's extracting ideas from the text, from the corpus of text. It's extracting the idea of Bob Dylan-ness, or extracting the idea of Ernest Hemingway-ness, or extracting the idea of Picasso-ness. And those ideas are uncopyrightable. And the reason for that is: if you and I were to write something that sounds in the style of Bob Dylan,

could Bob Dylan sue us for copyright infringement? And the answer is no, because if he could, every single songwriter since the 1970s would have been sued by Bob Dylan, because they all sound like Bob Dylan, right? So the style of any particular artist is not copyrightable. It is merely an idea, and ideas are uncopyrightable. And so argument number one in these types of cases is that the model is extracting mere ideas, and the output of the large language model is a machine-created expression from those ideas that is new and distinct from the original Bob Dylan song.

And that's exactly what artists do. I read Bob Dylan lyrics and I make other lyrics that are kind of similar to Bob Dylan. He can't sue me, right? Because that's just the way art works: if I really like Picasso and I make a Picasso-like thing, he can't sue me for that. So really what the large language models and the generative AI tools are doing is extracting the essence, the ideas of things, just like humans extract the essence and ideas from things. So that's argument number one.

Sophia Matveeva 
So it's a bit like inspiration for an artist or a fashion designer.

Damien Riehl 
Argument number two: you might have heard about the New York Times suing OpenAI and Microsoft over the tool. And you see these very dramatic exhibits saying, here is the New York Times article, and here is the output from the large language model. But what you don't necessarily see is how the New York Times got there. What they've done is take the first eight paragraphs of the article and then say, finish the article after the eight paragraphs. And what the large language model is doing is predicting what's statistically likely to happen next.

And of course, what's statistically likely after you input eight paragraphs is the rest of the article. What they were doing there is what's called, in cybersecurity, red teaming. That is, they're trying to get a system to do a bad thing that the system was not designed to do. And under copyright law, the person hacking the system, which is what the New York Times did here, is arguably the bad actor. At most, the tool that's being used, in this case ChatGPT,

is a vicarious infringer. Much like the VCR in the 1980s: people could do bad things with the VCR, you could violate copyright. But in the United States, the US Supreme Court asked: is the VCR capable of substantial non-infringing uses? That is, could you also record your kid's baseball game or your kid's soccer game, or your kid's football game? And the US Supreme Court said: because the VCR is capable of substantial non-infringing uses, yeah, bad actors might do bad things, but...

for the most part, VCRs are good. So in the same way, are large language models capable of substantial non-infringing uses? Of course they are. Think of all the ways they're being used. So the fact that a hacker, red-teaming the system by inputting the eight paragraphs, can get it to reproduce the article doesn't necessarily mean the tool is infringing.
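The "predicting what's statistically likely to happen next" behaviour can be illustrated with a toy bigram model. This is a deliberate simplification: real models predict over subword tokens with a neural network, and the corpus here is invented for illustration:

```python
from collections import Counter, defaultdict

# Invented mini-corpus for illustration.
corpus = ("the court held that the contract was void and "
          "the court held that the claim was barred").split()

# Count which word follows each word: a bigram table.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("court"))  # "held": it follows "court" twice
print(predict_next("the"))    # "court": the most frequent continuation
```

Scale the same idea up to trillions of training tokens, and the most statistically likely continuation of eight verbatim New York Times paragraphs is, unsurprisingly, the ninth.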

Sophia Matveeva 
So I know that there are a few investors who listen to this podcast. And when they are thinking about investing in generative AI companies, is this legal licensing risk a risk that they should really seriously consider?

Damien Riehl 
So I think you're right to ask about risk, because really all of law is assessing risk, much like an insurance company assesses risk. It's not "I should not be worried" or "I should be totally worried"; there's a spectrum of worry. On that spectrum: are these litigants going to make the arguments that I just made? Or will they essentially forget those arguments and not make them?

If they do make the arguments, will those arguments land in the judges' minds enough to come out in a decision? There's a risk that they will not, and a chance that they will. And that's maybe the United States; but how does that apply to UK law? How does it apply to Latin American law, African law, Asian law, right?

So there are myriad risks. I would not say that you should not be worried about such things, but I will say that under current United States copyright law, and I think under current UK law, which are somewhat aligned, the idea-expression dichotomy is a very, very good argument, and the red-teaming point is a very, very good argument. So if those are argued in the courts, I would say they should win; but whether they will win or not is up to the litigants and up to the court.

Sophia Matveeva
Thank you. And early on, you mentioned the tool that you are working on. Could you tell us a little bit about that and how it works and who it's for?

Damien Riehl 
Sure. So we have a billion, with a B, legal documents worldwide: not just in the United States, but also in the United Kingdom, in the European Union, both as an umbrella organization as well as the component states, and in Latin America, Africa, Asia. We have a billion legal documents from a hundred countries worldwide. My friends at vLex, my colleagues at vLex, would be very happy that you asked that question. V as in Victor, L-E-X: vLex.

Sophia Matveeva
And who's we?

Damien Riehl 
And what vLex does is we have a product called Vincent. What Vincent is: you as a user ask a legal question, much like you would ask a lawyer. And we search through the whole corpus of legal documents that we have and create a memorandum based on non-hallucinated cases, non-hallucinated statutes, non-hallucinated regulations. And we'll give you a legal memorandum, often in two to three minutes, that might have taken a lawyer 10 hours to research in a comparably comprehensive way.

Sophia Matveeva 
And the reason why you're bringing up this non-hallucinating situation... could you tell us? Because I know what you're talking about, but maybe not all of the listeners do. It's a sorry tale.

Damien Riehl 
Sure. Poor Steve Schwartz. Steve Schwartz out of New York is a lawyer who asked ChatGPT a question, and it gave him very good-looking cases that were right on point for his analysis. But sadly, ChatGPT made up those cases. The term in large language models is that it "hallucinated" the cases: it made them up. So the court was not very happy with Mr. Schwartz, and there have been dozens of others that have run into the same problem.

So the problem that he had, and that many people are worried about, is that if you ask ChatGPT out of the box a legal question, it will hallucinate. But if you instead ask my tool, which has actual cases and actual statutes and actual regulations, what my tool does is first find the relevant cases, statutes, and regulations: say maybe there are five of those cases. And then we say to the large language model: summarize those five cases and tell me how they answer the question.

The rate of hallucination is next to zero. So between asking, go ahead.

Sophia Matveeva
So why does ChatGPT hallucinate in the first place? And what is it that you have in your model that means yours doesn't?

Damien Riehl 
Sure. So ChatGPT's hallucination: think of it like if I were to ask you a question about something you remember reading in college, in university, maybe 15, 20 years ago. And then I say, without looking at any books, tell me what that says. You will give a kind of vague recollection that might be right, or it might be off, because you don't actually have the source material at your fingertips. That is really what large language models like ChatGPT are doing.

And they want to please you. Much like you might say, oh yes, Smith v. Jones is a great case on that, when actually it wasn't. They're sycophants: they want to please the user in a sycophantic way. So because it wants to please you, it will say Smith v. Jones, where actually the case was Smith v. Johnson. And it will do that much like a human would if you're trying to remember something from 20 years ago.

Sophia Matveeva 
So they're people pleasers. My God, they need to go to a life coach and, you know, work on their confidence.

Damien Riehl 
So the difference with my tool is that I don't ask it something from 20 years ago. I put the material in front of it and say: here are cases and here are statutes; read through these things and then summarize how they answer the question. Much like a human who actually does that research and can say, oh yeah, this is it: the human is not going to get it wrong, and neither does the large language model. So ask someone what happened 20 years ago and it will hallucinate; ask someone to read these documents and summarize them, and it will not.
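The retrieve-then-summarize approach Damien describes is the pattern engineers call retrieval-augmented generation (RAG). Here is a minimal sketch, with naive keyword-overlap retrieval and invented case names; Vincent's actual retrieval and model calls are certainly more sophisticated:

```python
def retrieve(question, documents, k=2):
    """Rank documents by naive keyword overlap with the question
    (a stand-in for real search or embedding-based retrieval)."""
    q_words = set(question.lower().split())
    score = lambda d: len(q_words & set(d["text"].lower().split()))
    return sorted(documents, key=score, reverse=True)[:k]

def build_prompt(question, sources):
    """Ground the model in retrieved sources so it summarizes
    real text instead of recalling (and inventing) case names."""
    cited = "\n".join(f"[{d['cite']}] {d['text']}" for d in sources)
    return ("Using ONLY the sources below, answer the question and "
            "cite the sources you rely on.\n\n"
            f"{cited}\n\nQuestion: {question}")

# Invented mini-corpus for illustration.
docs = [
    {"cite": "Smith v. Johnson", "text": "breach of contract requires damages"},
    {"cite": "Doe v. Roe", "text": "negligence requires a duty of care"},
]

question = "what does breach of contract require?"
prompt = build_prompt(question, retrieve(question, docs, k=1))
# `prompt` now contains the real citation [Smith v. Johnson], so a model
# answering from it has no need to invent a case name.
```

In production the prompt would then be sent to the model; the hallucination rate drops because the answer is constrained to the retrieved text.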

Sophia Matveeva 
So essentially, this is a closed large language model. And from what I understand, big law firms are working on their own versions, or they're saying that they're working on their own versions. So presumably they would take the cases and all of the information that they have, because if it's a huge international law firm, they are going to have lots and lots of information, and then they will essentially create their own tool, which is only going to be available to their lawyers. Is that about right?

Damien Riehl 
Yes. So what you're touching on is that a law firm will have lots of data within their walls. That is, they will have contracts that are confidential, which I don't have in my dataset. They also have settlements: how much did these cases settle for? That is not public, and so those settlement numbers are private too. If you think data is the new oil, there's a lot of private oil that my dataset does not have. But I have a lot of public oil that those law firms do not have. That is, I have cases that were just handed down yesterday.

I have regulations that were just passed yesterday. I have statutes that were just passed yesterday. So those law firms are not going to be collecting those cases, statutes, and regulations the way that I will. So really, what most law firms are doing is collecting their private oil and getting it in good shape, and then connecting it with my public oil to ask, for a new contract that comes down the pike: does this violate any regulatory requirements? Does it violate any privacy obligations in the United States or in the United Kingdom,

or in the European Union, et cetera. So this is a way that private oil, a contract, can connect with public oil, regulatory, case law, et cetera, to be able to do better than either one of them separately.

Sophia Matveeva 
I wonder how this kind of model of connecting your own closed LLM with another one, with a public one, how that can translate to other industries, because I would assume, for example, an investment bank or a large investment manager is also going to have lots and lots of their own private data, which they could use to create some sort of tool.

but then they would also need some sort of public tools. I think this model is repeatable across other industries. What are your thoughts?

Damien Riehl 
Yeah, I think that's right. So really the question is: what is your corpus of private data? There's been a lot of bandying about of the phrase "I'll train an LLM on my private data." And I think if you ask the biggest law firms in the world, and I've worked with the biggest law firms in the world, they will say in hushed tones: we're not actually training an LLM on our private data. And there are two reasons for that. Reason number one is that their corpus is much too small.

They don't have a billion legal documents like I have, and my billion legal documents make a pretty good corpus of legal text. They might have documents in the millions, and a million is just not enough to give a robust model. So that's reason number one why they're not doing it. Reason number two is that it's very expensive to train your own model, and that expense is not something people want to take on lightly. And reason number three is that there is evidence that a model trained on a specific vertical can still underperform.

For example, Med-PaLM is a medical model that Google created: they ingested all sorts of medical information and tried to make a medical large language model. And what a bunch of researchers did a few months ago is compare Med-PaLM's performance with GPT-4 out of the box, plus smart prompting. And it turns out that GPT-4 with smart prompting beat Med-PaLM, even though Med-PaLM was trained specifically on medical questions.

GPT-4 did better at the medical questions than Med-PaLM. So even if you have a big corpus, which they don't, and even if you want to spend the millions of dollars training on that corpus, maybe it'll be like Med-PaLM and actually be not as good as GPT-4, and certainly maybe not as good as GPT-5 or GPT-6 or Mistral's models or Google's Gemini or all the other things coming down the pike. So there are lots of reasons why any particular organization might not want to go through the expense of jumping on this train right now.

And that's especially true because, in the business context, Bloomberg created a financial dataset and trained their own financial LLM. And there have been reports that because it was trained in 2023, it is already largely outdated: they now have to add 2024 data to make it actually useful. So does any particular company have Bloomberg's resources to do that training, and do it every year going forward?

Sophia Matveeva 
Interesting. So it's that old conundrum of build or buy, which, I mean, we definitely studied at business school, and it's something that investors are always thinking about, and company leaders are always thinking about, no matter what the technology is. And the reason I'm pointing this out is that there are so, so many headlines and so much justified interest in generative AI that I think people sometimes just forget the fundamentals of business, which is fundamentally about making money: making stuff that people want to buy and making a profit so you can reinvest it. So tell me about this company that you're working for. How did it come about, how is it funded, and how did it manage to create Vincent?

Damien Riehl 

Yes, so the particular company, vLex, was actually a combination of three companies. And each of those three companies has, for the last 25 to 35 years, collected statutes, collected regulations, and collected judicial opinions. Fastcase, a predecessor company, did that in the United States. vLex has done that in Spain

and all the Latin American countries and all the EU countries. And then Justis is a company based in the UK that is now part of vLex too; they've done that for the United Kingdom and all the Commonwealth countries, Australia, Canada, et cetera. So between taking Fastcase from the US, vLex from Spain and the European Union, and Justis from the UK, together we now have arguably the biggest legal data oil field in the world. That is the largest corpus: a billion legal documents.

Before generative AI and large language models, people didn't care as much about that oil. But now everyone realizes: oh wait, that oil is actually very, very valuable, because you need that data to run the large language models across, to be able to answer, does this contract violate any privacy regulations, statutes, et cetera? So we have been collecting data for 35 years that people didn't think was very valuable, and now people realize it's very, very valuable.

Sophia Matveeva 
And how do you think your product is going to change the legal industry?

Damien Riehl 
There is a real question as to what value an hourly-billing lawyer is providing. I billed by the hour; I was a litigator for about 15 years. The dirty little secret amongst lawyers is that if you bill by the hour, the slowest lawyer wins the race. Because if I'm able to bill 10 hours for something you do in five hours, I make more money, if I can convince the client that it was actually worth 10 hours. The problem is the information differential between you as a client

and me as a lawyer: I know more than you, so I can try to convince you that the 10 hours were actually necessary versus the five. That paradigm is changing with large language models, because a 10-hour task like a legal research memo now literally takes two minutes. And we are selling not only to law firms, but also to in-house counsel. So in-house counsel can now do what used to be a 10-hour task in two minutes.

They can say: why exactly did it take 10 hours? I will not pay you for 10 hours for this task; I will pay you for the two minutes. And another example: if you upload a complaint in litigation, what did I do as a litigator? I would ask: what are the claims in the complaint? What are my potential defenses? What facts should I ask my client for to respond to those defenses? What legal research questions should I ask to defend against this?

All the things I just listed are about 40 hours' worth of work. My tool that we rolled out a few weeks ago will take a complaint and do all of those things, and it will do it in literally less than five minutes. So we've talked to in-house counsel at corporations, and they've said: we're going to do two things with that. Thing number one, we're going to take every complaint that has come against us in the past that we've given to law firms and run it through your tool, and see what the law firm did atop

what your machine can do. If the answer is nothing, that says something. That's thing number one. Thing number two: every complaint going forward, before we give it to a law firm, we're going to run it through your tool. Then we will give the output of the tool to the firm and say, what can you do atop this? Because that's all we're going to pay you for. So this paradigm of the slowest lawyer wins the race breaks down when a 10-hour task can be shrunk to two minutes and in-house counsel will only pay for what you add atop what the machine does. I think that will move us away from the hourly billing model and more toward the flat-fee model. Because now I as a lawyer will say: okay, I'm not going to bill you by the hour; I'm going to be aligned with you, and I'm going to be very efficient in the way that I do things. And if you think about business school, you think about revenue minus cost equals profit. If you press down your cost, which is what large language models do, you increase your profit.

And so really that flat fee model is the only way forward, I think, with law firms to be able to say, use large language models to shrink your cost to increase your profit margins. And that way both people win. The law firm wins because they make more money and the clients win because maybe they pay less than they would have otherwise.

Sophia Matveeva 
And also, I guess, as a client you're not going to get a surprise legal bill which is much higher than what you were originally expecting. And actually, speaking of the client side, can people use your tool? You know, if I desperately need to see somebody today.

Damien Riehl 
That's exactly right.


Yeah, so there are unauthorized-practice-of-law statutes, which are certainly an issue in the United States but less of one in the United Kingdom and other countries. So we're very careful to say that if you are a lawyer, you can use our tools. If you're in-house counsel in a corporation, of course you can use our tools for this. But there's a real question as to whether, in the United Kingdom, you need to be a lawyer at all, or whether you as someone off the street could ask that same legal question, much like you could ask a lawyer,

and get that memorandum in two minutes. Do you need to be a lawyer or not? In the UK, there are really good arguments that you do not need to be a lawyer. In the United States, there are also very good arguments that what we're providing is legal information, not legal advice, so we don't run afoul of the unauthorized-practice-of-law statutes. Because we're doing the same things that OpenAI is doing with ChatGPT; we're doing the same thing that Microsoft is doing with all of their chat tools. You could ask Microsoft:

what are the elements of a breach of contract claim in the Southern District of New York? You could ask that of Microsoft, of Google, of Meta, of OpenAI. Are they all committing the unauthorized practice of law? And if so, maybe the Bar Association should sue Microsoft and see how that goes, or sue Google and see how that goes. So all that's to say: we currently are focused on the lawyer market, because that is very clearly fine, but you can imagine that maybe won't be the case forever.

Sophia Matveeva
Well, interesting. I have been called up for jury duty here in London; it's taking place in a couple of weeks. So maybe I'll be using your tool to find out what I think of the case, whether what the barristers are saying is true. So what I...

Damien Riehl 
Good luck. First, good luck on jury duty. I worked for judges in the United States, and working with the judges, we as the court would often say to the juries: do not do your own legal research. Only look at what has been brought before you. I would imagine that the UK is probably the same. So I would say that maybe you don't want to do the thing you just described.

Sophia Matveeva 
Oh, okay. Well, great. Thank you for my preparation. I'm just hoping it's not going to be one of those really big trials that last four months, because I actually do have things to do. Just some sort of, I don't know, pickpocketing. Okay. Well, I am curious what you think:

how generative AI, how these kinds of models, are going to affect other companies in the professional services industry. Because with the law, maybe because we're talking about it and you're so good at explaining it, and maybe because I've been working with law firms and got to know a little bit about how lawyers think, it seems like, okay, I can wrap my head around this. But what about other professional services companies, industries like consulting,

like investing, like banking? Do you see the same kind of pattern playing out, the same kind of dynamics, or do you think this is very specific to law?

Damien Riehl 
I think that there are many modes of generative AI that the large language models and others are good at. One mode is text. Another is visual information. And a third mode is just working out in the world. And so when you think about the companies and professions that will be disrupted most, it will follow that step. That is, is it text only, or is it text plus images, or is it robots working out in the world? And that's the progression. And so if you think about the text only,

All the law is, is words. For every single lawyer, the only thing that we as lawyers do is ingest words, analyze the words, and then output words. And those three things are things that large language models are able to do very, very well. GPT-4 beat 90% of humans on the United States bar exam. That's GPT-4; OpenAI is making GPT-5, and will be making GPT-6.

So does that 90% number go to 95%, to 99%, to 100%? These tools are the worst that they will ever be. So as to your question, which professions are going to be most affected: I would say anything that has to do with words. So lawyers, and also to some extent medical folks, to the extent that "here are my symptoms, what are potential maladies that I have" is maybe just an advisory textual question that maybe could be done.

Is the large language model going to put hands on the person to feel a lump? No, that goes into "in the world," right? And is it going to be able to look at the x-ray to see if there's a break there? That goes into the visual. So there are certain textual aspects of medicine that will go away. Also consultancies, to the extent that these are just textual advisory work. If the tool has enough data in its corpus, that is, private law and public law,

then using retrieval-augmented generation to work through that corpus is a way that textual problems will be solved. So that's a long way of saying: if text, soon. If images, soonish. And if out in the world, we're not going to be replacing plumbers anytime soon. We're not going to be replacing electricians anytime soon. These are things in the world that are much harder than images and much harder than text.
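The retrieval-augmented generation idea Damien mentions can be sketched in a few lines. This is a toy illustration, not his company's implementation: the three-document "legal corpus," the question, and all function names here are made up for the example. Retrieval is a simple bag-of-words cosine similarity, where a real system would use dense embeddings and a vector database, and the finished prompt would then be sent to a large language model.

```python
# Toy retrieval-augmented generation (RAG) pipeline:
# 1. retrieve the document most relevant to the question,
# 2. build a prompt that grounds the model in that retrieved text.
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    # Lowercased word counts; real systems use learned embeddings instead.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, corpus: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the question; keep the top k.
    q = tokenize(question)
    return sorted(corpus, key=lambda doc: cosine(q, tokenize(doc)), reverse=True)[:k]

def build_prompt(question: str, corpus: list[str]) -> str:
    # Ground the model: answer only from the retrieved context.
    context = "\n".join(retrieve(question, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Hypothetical mini-corpus standing in for a firm's knowledge base.
corpus = [
    "Breach of contract requires a valid contract, performance, breach, and damages.",
    "Negligence requires duty, breach of duty, causation, and damages.",
    "A trademark identifies the source of goods or services.",
]
prompt = build_prompt("What are the elements of a breach of contract claim?", corpus)
print(prompt)
```

The point of the pattern is that the model answers from the firm's own retrieved documents rather than from whatever it memorized in training, which is what makes it usable over a private corpus.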

Sophia Matveeva
And so what would be your advice to smart business leaders who want to learn about generative AI? Is it literally just, you know, get a free account at ChatGPT and try to get it to do something, or is there anything else?

Damien Riehl 
Yes, certainly experiment. And I would say that the free account of ChatGPT beat about 10% of humans on the bar exam, whereas the $20-a-month version beat 90% of humans on the bar exam. So the change is significant. I would say that if you get the free account and you're like, it's not that impressive, that's because you're not paying for the real one that actually beat 90%. So

that's a problem I've seen: lots of people try the free one and they're not impressed. They're like, oh, ChatGPT isn't quite there yet. A lot of lawyers say that. But then I show them my tool and they're like, that is exactly what I do every day. And I say, yes, that's because I'm using GPT-4, which beat 90% of humans on the bar exam. I'm not using GPT-3.5, which beat only 10% of humans on the bar exam, like you did. So yes, pay for one, whether you pay for the GPT version or you pay for the Gemini version, which just came out,

Sophia Matveeva 
Mm-hmm.

Damien Riehl 
or whether you pay for Claude, which is Anthropic's, or whether you pay for another. Pay for something, because the proof is in the, you get what you pay for, right? The proof is in the paid-for pudding. That's thing number one. And thing number two: think about everything you do that has to do with text. That is, the information you receive, whether it be email or otherwise, and the information you output. And think about how that text, input and output, can be helped with large language models.

And I would bet that you'll find many ways that large language models can help. Blockchain is a solution in search of a problem; there are not many problems that blockchain actually solves. But large language models are exactly the opposite: they are the solution to so many problems that we've had throughout history that we're still discovering other problems that large language models can actually remedy. So the way to discover the problems that need to be remedied is to play with it,

and to say, every time that I'm either ingesting words or outputting words, maybe the large language model can help with that.

Sophia Matveeva 
Thank you so much, Damien. This has been a wonderful conversation. And when people want to learn more from you, which I'm sure they will, where could they find you?

Damien Riehl 
I'm very active on LinkedIn. After Twitter, slash X, has seen its downfall, Twitter is a place I go less. LinkedIn is the place I go more, because the smartest people doing large language model things, either in the legal space or in other spaces, are doing their work on LinkedIn. So find me on LinkedIn.

Sophia Matveeva 
Yeah, and your posts are really good. I read some of them today. All right. Well, have a wonderful day and thank you very much.

Damien Riehl 
Thank you.
Thank you, Sophia.

 
