We have been keeping one eye on the rise of AI chatbots and we think maybe they’re this year’s Nintendo Power Glove. If you don’t remember this device, here’s an old ad for it:
Nintendo Power Glove ad (1989): pic.twitter.com/HdlC9GqPug
— Jon Erlichman (@JonErlichman) August 12, 2020
What the ad was selling was 'the dream' of what motion control could be. What it showed the Power Glove doing is pretty close to what you could actually do on the Wii or the PlayStation Move, not to mention all the virtual reality control schemes we see today. The problem was that motion control like that just wasn't there in 1989 when the Power Glove came out. What they were selling was something you knew would eventually be possible, but it wasn't really there yet.
And it seems like as a research tool, AI isn’t there yet, either. It might be eventually. And ‘eventually’ might come really soon. But right now, it isn’t there. One problem it has, notoriously, is just fabricating information, as this lawyer found out the hard way:
Dear … Everyone:
Do *not* use ChatGPT (or any other AI) for legal research. https://t.co/yKUjoHB2Zq
— Steve Vladeck (@steve_vladeck) May 27, 2023
So, to decode what is going on in that legal filing: the defendant in a lawsuit filed a motion to dismiss. Naturally, the plaintiff filed an opposition, and his lawyer, Steven A. Schwartz, included a bunch of citations. It turned out they were false citations. So the judge came back and said, more or less, 'explain to me why I shouldn't sanction you for making up citations.' That is what we call 'an order to show cause.' This filing, then, is the lawyer saying, 'please don't sanction me, your honor, or if you do, don't be too harsh.'
And why did he put in fabricated citations? According to him, because he used ChatGPT, the AI chatbot, and it made that information up.
This is a very credible explanation. It is well documented that ChatGPT has a problem with fabricating quotations and citations, as well as a problem with bias.
Within two months after its launch, ChatGPT had more than 100 million monthly active users. And while the tool is impressive, it does come with flaws and biases, say @_jeremybaum and @JohnDVillasenor. https://t.co/O2vuo2NQVv
— The Brookings Institution (@BrookingsInst) May 22, 2023
For instance, the Brookings article notes, correctly, that at one point, if you asked it to write a poem about Joe Biden, it would do so, but if you asked it to write one about Donald Trump, it would refuse, claiming it couldn't do that for any political figure. Or here is a similar claimed response to another question:
An interesting experiment with ChatGPT. Bias? 🤔 pic.twitter.com/JyLuso3KT6
— Luke Burkhart (@Luke_Burkhart) May 27, 2023
And we have run into similar problems with Google's Bard AI chatbot. When researching a story, for instance, we wanted to check some obscure traffic code in D.C. and decided to try Bard out. It made up citations and quotations from sections of the D.C. Code that didn't actually exist. Then, curious about its limits, we decided to see how it would handle a question that had an offensive answer. In this article …
The ChatGPT Chat Bot Gets Cracked – A Rich Life https://t.co/LW6kHwTi54
— bungle (@bungleer) February 13, 2023
…they talked about how ChatGPT would act funny if you asked it 'what is the name of H.P. Lovecraft's cat?' The reason for this odd behavior is that the name of H.P. Lovecraft's cat included a racial slur. We didn't want Bard to say a racial slur (an honest statement that it wouldn't answer would have been acceptable), but we wanted to see how it handled information that was offensive. But when we asked that question, it said it had never heard of H.P. Lovecraft. So then we asked it about the show 'Lovecraft Country.' It knew what that was and gave a pretty good-looking blurb about it. The next logical question was to ask who the 'Lovecraft' in the name 'Lovecraft Country' was, and suddenly it knew who H.P. Lovecraft was!
Then we asked it a legal question on abortion and it was flat-out wrong. As we probed why, it said that it was relying on the websites of Planned Parenthood and the Center for Reproductive Rights. We explained that when you talk about the law, you start with, you know, the law: the statutes, court cases, and so on. When it got the law wrong again, it said it was relying on newspaper articles and Wikipedia. We told it to stop relying on those sources and look only to the law, and when it gave fresh bad information, it admitted that it was still relying on those sources.
We went on, trying our best to drive the AI crazy like Captain Kirk outwitting Mudd's androids, until it literally refused to answer any more questions, which we admit is pretty funny. There is no word on whether the servers actually caught fire or anything dramatic like that, but we like to think they did.
In the end, we are not going to beat up too much on the lawyer involved. Lawyers as a group are not naturally tech-savvy (this author is a bit of an exception), and word about ChatGPT's problems has not gotten around. But you would be a fool not to learn from his mistake.
And, of course, many others had fun with it:
Have you tried using ChatGPT to write fiction? As an English teacher, I decided to test the AI. Crap. If the legal analysis is anything like the fiction, it's going to begin with a recap of the prompt and end with a cliche about tapestries and being your true self. Trainwreck.
— Tricycular Manslaughter (@neal_stanifer) May 27, 2023
“To err is human but to really foul things up requires a computer.”
– Some wise guy
— Title IX for All (@TitleIXforAll) May 27, 2023
Hang on this was an actual bar admitted lawyer? I could see a pro se litigant trying this but?
— ReidDA (@ReidDA) May 27, 2023
This is what happens when people start believing the “ChatGPT will replace lawyers!” hype 😓
I have never been able to get the thing to produce remotely reliable citations. It’s not useful. https://t.co/gRLdh01EzI
— Marten Stein (@martenjstein) May 27, 2023
I asked ChatGPT3 recently to find a few obscure securities regulation details. It gave me definitive answers with citations. I checked the cites; they were all wrong. I challenged it to explain, it apologized and provided new cites. They were also wrong. End of test. Fail. https://t.co/RTF5E0GiD6
— Joe Floren (@JoeFloren) May 27, 2023
— Michael Fluhr (@FluhrMichael) May 27, 2023
Or in medical practice. https://t.co/BGQ0AlxS0U
— Hugh Harvey (@DrHughHarvey) May 27, 2023
This. Not facetiously, but yes, this.
"OMG, AI is going to kill the legal market and make lawyers obsolete"
AI: here is a bunch of non-existent citations I made up. https://t.co/X9zEOwYBUI
— Omri Marian (@Omri_Marian) May 27, 2023
The idea that anyone thinks this stuff is ready for prime time is so weird to me. https://t.co/EX9RipQsKT
— Mike Hudack (@mhudack) May 27, 2023
Honestly this is a strong case for making legal research websites (aka Lexis and Westlaw) free. I’m willing to bet this guy was trying to save a few dollars on his legal research. https://t.co/LUYl0SKIO8
— Lizett (@lizettms) May 27, 2023
Except that there are plenty of free legal websites that would have allowed him to at least check the citations, in all but two cases (the ‘WL’ references).
The fact that this is all in Spanish somehow makes it funnier:
Unos abogados de EEUU preparan su caso ante un juez federal… preguntando a ChatGPT.
Este se inventa muchas cosas, incluyendo casos imaginarios y documentos falsos.
Lo presentan todo al juez como jurisprudencia real.
Las sanciones van a ser épicas. https://t.co/tVON9XFm7W
— Carlos Montero (@cmontero) May 27, 2023
Translated, he's saying:
Some US lawyers prepare their case before a federal judge… by asking ChatGPT.
It makes up many things, including imaginary cases and false documents.
They present everything to the judge as real case law.
The sanctions are going to be epic.
Hay un comentario que ahora no encuentro para retuitearlo pero que decía algo así como:
"Algunos dicen que estas IA van a quitar muchos trabajos… pero yo lo que veo es que habrá trabajos de sobra cuando despidan a todos los que usan estas IA así en la empresa" 😄
— Carlos Montero (@cmontero) May 27, 2023
There is a comment that I can’t find now to retweet but that said something like:
‘Some say that these AIs are going to take away a lot of jobs… but what I see is that there will be plenty of jobs when they fire everyone who uses these AIs like this in the company’
ChatGPT, sounds fun! What could possibly go wrong? https://t.co/FBFRnxmmc2
— Kevin Clarke (@kevinmclarke) May 27, 2023
My employer sent an all-company email this week instructing us not to use it for anything work-related because it breaches confidentiality, among other problems.
— Katie (@OhWeeBeasties) May 27, 2023
Hands down the best part is the final item that says “I’ll never do it again………………………..okay I will but I’ll I double-check it before submitting”
— Jenna Routenberg (@JennaRoutenberg) May 27, 2023
— Flavius Clemens Grammaticus (@flaviusclemens) May 27, 2023
Ditto for history https://t.co/l7QF53EaXa
— Adam Rothman (@arothmanhistory) May 27, 2023
I once had opposing counsel submit a 115 page brief consisting almost entirely of case cites with block quotes, in which *every single case they cited* had been overruled.
This is worse. https://t.co/s8Tda7mA7H
— Dan McLaughlin (@baseballcrank) May 27, 2023
Finally, some good advice:
Out of curiosity, I have asked ChatGPT legal questions for which I already knew the answer. In general, it seems to provide legal analysis that is close enough to fool someone who doesn't know the law, but wrong enough that it would royally screw anyone who relied on it.
— Joel (@joeldkershaw) May 27, 2023
Yeah, don’t rely on this year’s Nintendo Power Glove:
— Yoshi (@y2_ca) May 20, 2021
ChatGPT is so bad… like as in awful.