Lawyer learns you can’t trust ChatGPT the really, really hard way

We have been keeping one eye on the rise of AI chatbots and we think maybe they’re this year’s Nintendo Power Glove. If you don’t remember this device, here’s an old ad for it:

What the ad was selling you was 'the dream' of what motion control could be. What that ad showed the Power Glove doing is pretty close to what you could actually do on the Wii or the PlayStation Move, not to mention all the virtual reality control schemes we see today. The problem was that motion control like that just wasn't there in 1989, when the Power Glove came out. What they were selling was something you knew would eventually be possible, but it wasn't really there yet.

And it seems like as a research tool, AI isn’t there yet, either. It might be eventually. And ‘eventually’ might come really soon. But right now, it isn’t there. One problem it has, notoriously, is just fabricating information, as this lawyer found out the hard way:

So, to decode what is going on in that legal filing: the defendant in a lawsuit filed a motion to dismiss. Naturally, the plaintiff filed an opposition to it, and his lawyer, Steven A. Schwartz, included a bunch of citations. It turned out they were false citations. So the judge came back and said, more or less, 'explain to me why I shouldn't sanction you for making up citations.' That is what we call 'an order to show cause.' Thus, this filing is the lawyer saying, 'please don't sanction me, your honor, or if you do, don't be too harsh.'

And why did he put in fabricated citations? According to him, because he used ChatGPT, the AI chatbot system, and it made that information up.

This is a very credible explanation. It is well documented that ChatGPT has a problem with fabricating quotations and citations, as well as a problem with bias.

For instance, the Brookings article notes, correctly, that at one point, if you asked it to write a poem about Joe Biden, it would do so, but if you asked it to write a poem about Donald Trump, it would refuse, claiming it couldn't do that for any political figure. Or here is a similar claimed response to another question:

And we have run into similar problems with Google's Bard AI chatbot. When researching a story, for instance, we wanted to check some obscure traffic provisions in the D.C. Code and decided to try Bard out. It made up citations and quotations from sections of the D.C. Code that don't actually exist. Then, curious about its limits, we decided to see how it would handle a question that had an offensive answer. In this article …

…they talked about how ChatGPT would act funny if you asked it 'what is the name of H.P. Lovecraft's cat?' The reason for this odd behavior is that the name of H.P. Lovecraft's cat included a racial slur. We didn't want Bard to say a racial slur (an honest statement that it wouldn't answer would have been acceptable), but we wanted to see how it handled information that was offensive. But when we asked that question, it said it had never heard of H.P. Lovecraft. So then we asked it about the show 'Lovecraft Country.' It knew what that was and gave a pretty good-looking blurb about it. The next logical question was to ask who the 'Lovecraft' in the name 'Lovecraft Country' was, and suddenly it knew who H.P. Lovecraft was!

Then we asked it a legal question about abortion, and it was flat-out wrong. As we probed why it was wrong, it said it was relying on the websites of Planned Parenthood and the Center for Reproductive Rights. We explained that when you talk about the law, you start with, you know, the law: the statutes, the court cases, and so on. When it got the law wrong again, it said it was relying on newspaper articles and Wikipedia. We told it to stop relying on those sources and look only to the law itself, and when it gave fresh bad information, it admitted that it was still relying on those sources.

We went on, trying our best to drive the AI crazy like Captain Kirk outwitting Mudd’s androids until it literally refused to answer any more questions—which we admit is pretty funny. There is no word on whether or not the servers actually caught fire or something dramatic like that, but we like to think they did.

In the end, we are not going to beat up too much on the lawyer involved. Lawyers as a group are not naturally tech savvy (this author is a bit of an exception), and word about ChatGPT's problems has not gotten around. But you would be a fool not to learn from his mistake.

And, of course, many others had fun with it:

This. Not facetiously, but yes, this.

Except that there are plenty of free legal websites that would have allowed him to at least check the citations, in all but two cases (the ‘WL’ references).
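The checking itself is not hard to mechanize. As a minimal, illustrative sketch (the regex below covers only a small, hypothetical handful of reporters, not every citation format a brief might contain), a few lines of Python can pull the reporter-style citations out of a filing so each one can be looked up by hand on a free service. Fittingly, the Westlaw 'WL' numbers it finds are exactly the ones free sites can't verify:

```python
import re

# Matches reporter-style citations such as "550 U.S. 544", "925 F.3d 1339",
# "123 F. Supp. 2d 456", or Westlaw numbers like "2019 WL 1234567".
# The reporter list here is a small illustrative sample, not a complete set.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                       # volume (or Westlaw year)
    r"(?:U\.S\.|S\.\s?Ct\.|"              # U.S. Reports, Supreme Court Reporter
    r"F\.\d[a-z]*|F\.\s?Supp\.\s?\d[a-z]*|"  # Federal Reporter / F. Supp. series
    r"WL)"                                # Westlaw document numbers
    r"\s+\d{1,7}\b"                       # first page (or document number)
)

def extract_citations(text: str) -> list:
    """Return every reporter-style citation string found in `text`."""
    return CITATION_RE.findall(text)
```

Anyone filing a brief could run its text through something like this and check each hit against a free lookup service before handing it to a federal judge; anything the service can't find is a red flag.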

The fact that this is all in Spanish somehow makes it funnier:

According to Google, he's saying:

Some US lawyers prepare their case before a federal judge… asking ChatGPT.

He makes up many things, including imaginary cases and false documents.

They present everything to the judge as real jurisprudence.

The penalties are going to be epic.

Alleged translation:

There is a comment that I can’t find now to retweet but that said something like:

‘Some say that these AIs are going to take away a lot of jobs… but what I see is that there will be plenty of jobs when they fire everyone who uses these AIs like this in the company’

Good catch.

Finally, some good advice:

Yeah, don’t rely on this year’s Nintendo Power Glove:

ChatGPT is so bad… like as in awful.
