Friday, May 09, 2025

Chat GPT - Don't Trust, But Verify

Chat GPT is a large language model AI that is supposed to make your research easier.

Just for fun, I decided to give it a try and asked it a legal question regarding a point of law on a case I'm working on. 

The answer is well known to lawyers, as it is a pretty basic property principle: a possessing joint owner of a property does not owe rent to a non-possessing joint owner except under certain circumstances.

Finding Michigan state case law on this principle is somewhat difficult - because everyone knows the rule, there really haven't been cases litigating it (there are some good cases on this situation when the parties are tenants in common, but that's not the same as a joint tenancy).

So, I asked Chat GPT the question and it came back with the rule as expected, that rent isn't owed except under certain circumstances, and very quickly, too. 

It also came back with two Michigan cases with quotes supporting this rule - complete with very nice authoritative and proper legal cites to the two cases it helpfully cited.

Which do not exist.

Nor do the quotes themselves exist.

Yes, I checked.

The citations, while appearing completely correct, do not lead to the case names Chat GPT identified; they lead to other, totally unrelated cases. And the case names themselves either do not exist or belong to cases with completely different citations on completely unrelated issues.

Chat GPT completely made up two case quotes and legal citations to them, probably trying to be helpful.

It's a good thing I did not blindly accept such good-sounding quotes as factual but tried to verify them on a proper legal research tool.  I'm rather suspicious like that, and as an attorney you can never use made-up cases to support your position.

Chat GPT is certainly not ready for prime time when it comes to legal research, and I would be rather skeptical of it as an authoritative source for anything else - use it, at most, as a springboard to find verifiable and authoritative sources. 

Caveat usor. 

3 comments:

Rick T said...

From what I've learned, the ONLY way an LLM could be useful for legal research is for someone to spend millions of dollars ingesting a legal reference database like Lexis/Nexis for use as the Retrieval Augmented Generation baseline.

LLMs are trained to generate grammatical text outputs. There is no concept of correctness, nor any requirement that references to external sources actually exist. Without RAG, ChatGPT and all the other LLMs cannot be trusted.

Midwest Chick said...

Some lawyers in New York got sanctioned for bogus citations in a court case (https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/). I believe Michael Cohen used Gemini with the same fake results but for some reason didn't get sanctioned.

Aaron said...

Rick T: Yep, there are some LLMs out there that claim to be suitable for attorneys, but I remain rather skeptical.

Midwest Chick: Yes, I knew about that one and some other bad incidents too, which is why I didn't trust it from the outset and did it more as a lark than anything else.

I'm impressed, and indeed appalled, that Chat GPT literally invented two cases, complete with full legal citations, that simply don't exist in reality. That it can simply make stuff up to answer a question makes it of very questionable worth.