Michael Cohen, the former lawyer for Donald Trump, admitted to citing fake, AI-generated court cases in a legal document that wound up in front of a federal judge, as reported earlier by The New York Times. A filing unsealed on Friday says Cohen used Google’s Bard to perform research after mistaking it for “a super-charged search engine” rather than an AI chatbot.

I… don’t even. I lack the words.

18 points

The problem is that breathless AI news stories have led people to misunderstand LLMs. The capabilities get a lot of attention; the limitations, not so much.

And one important limitation of LLMs: they’re really bad at being exactly right, while being really good at looking right. Ask one to do an arithmetic problem you can’t do in your head and it’ll give you an answer that looks right. But check it with a calculator and you’ll find the only thing right about the answer is how it sounds.

So if you use it to find cases, it’s gonna be really good at finding cases that look exactly like what you need. The only problem is, they’re not exactly what you need, because they’re not real cases.
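That “looks right but isn’t” failure mode is easy to demonstrate with arithmetic. A minimal Python sketch (the “plausible” wrong answer below is made up purely to illustrate the pattern, not taken from any actual model output):

```python
# Illustrative only: the kind of slip an LLM makes on arithmetic it can't
# actually compute. The "plausible" answer is hypothetical, but shows the
# failure mode: same length, same leading digits, wrong in the middle.
exact = 48_273 * 91_467        # what a calculator gives
plausible = 4_415_388_291      # a confident-sounding near-miss

print(f"exact:       {exact}")
print(f"plausible:   {plausible}")
print(f"same length: {len(str(exact)) == len(str(plausible))}")  # True
print(f"equal:       {exact == plausible}")                      # False
```

A surface check (right number of digits, right ballpark) passes; only the exact comparison catches it, which is precisely why fluent-sounding case citations need to be looked up, not eyeballed.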

13 points

While the individuals have a responsibility to double check things, I think Google is a big part of this. They’re rolling “AI” into their search engine, so people are being fed made up, inaccurate bullshit by a search engine that they’ve trusted for decades.

11 points

That’s not what they’re talking about here. Unless this is different in the US, only Microsoft so far shows an LLM “answer” next to search results.

8 points

Google may not be showing an “AI” tagged answer, but they’re using AI to automatically generate web pages with information collated from outside sources to keep you on Google instead of citing and directing you to the actual sources of the information they’re using.

Here’s an example. I’m on a laptop with a 1080p screen. I went to Google (which I basically never use, so it shouldn’t be biased for or against me) and did a search for “best game of 2023”. I got no actual results in the entire first screen. Instead, their AI or other machine learning algorithms collated information from other people and built a little chart for me right there on the search page and stuck some YouTube (also Google) links below that, so if you want to read an article you have to scroll down past all the Google generated fluff.

I performed the exact same search with DuckDuckGo, and here’s what I got.

And that’s not to mention all the “news” sites that have straight up fired their human writers and replaced them with AI whose sole job is to just generate word salads on the fly to keep people engaged and scrolling past ads, accuracy be damned.

5 points

I mean, I kind of see your point, but calling those results AI is not accurate unless you’re calling any kind of data collation/wrangling, or even basic programming logic, “AI”. What Google is doing is counting how many times a game is mentioned in pages in the gaming category and trying to spoon-feed you what it thinks you want. But that isn’t AI. The point of the person you were replying to is that it wasn’t as if Cohen intended to perform a Google search and was misled; you have to go to Google Bard or ChatGPT or whatever and prompt it, meaning it’s on you if you’re a professional who cites unverified word salad. The YouTube stuff is pretty obvious; it’s part of their platform. What was done here has nothing to do with web searches.

4 points

It was kind of funny to me when everyone freaked out about misinformation and the “death of search”, when I already see a lot of people who never leave Google and treat Instant Answers as the truth, like they do with ChatGPT, even though those answers are often inaccurate and out of context.

8 points

Well, to be fair, Michael Cohen is not a lawyer, so how could he have known?

71 points

That’s the second time a lawyer has made this mistake, though the previous case wasn’t at such a high level.

43 points

I work for a law firm, and yeah, this happens a lot. The stupidity and laziness of our clients’ in-house attorneys is making us a lot of money.

6 points

Why is there not an automated check for any cases referenced in a filing, or required links? It would be trivial to require a clear citation format or uniform cross-reference, and this looks like an easy niche where automation could improve the judicial system. I understand that software couldn’t interpret those cases or their relevance, but an existence check, and links or it doesn’t count.

I assume that right now it doesn’t happen unless the other side pays a paralegal for a few hours of research.
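The existence check being proposed could be sketched in a few lines. Everything here is hypothetical for illustration: the reporter-citation regex is simplified, and the `known_citations` index stands in for a real lookup against an official database (a real system would query something like PACER or a citation service, not a hard-coded set):

```python
# Hypothetical sketch of an automated "existence check" for case citations.
# Extract strings in a common reporter format (e.g. "410 U.S. 113",
# "999 F.3d 1234") and flag any not present in a known index.
import re

CITATION_RE = re.compile(r"\b\d+\s+(?:U\.S\.|F\.\d?d|F\. Supp\.(?: \d?d)?)\s+\d+\b")

def check_citations(filing_text: str, known_citations: set[str]) -> list[str]:
    """Return citations that match the format but aren't in the index."""
    found = CITATION_RE.findall(filing_text)
    return [c for c in found if c not in known_citations]

filing = "As held in 410 U.S. 113 and 999 F.3d 1234, ..."
index = {"410 U.S. 113"}
print(check_citations(filing, index))  # → ['999 F.3d 1234']
```

The hard part isn’t the string matching; it’s that the check is only as good as the index behind it, which is exactly why it would need to run against an authoritative source rather than anything a model generated.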

1 point

I think the issue is we’re still in pretty uncharted territory here. It’ll take time for stuff like that to become the norm. That said, the lawyers should be doing those kinds of checks anyway. They’re idiots if they don’t.

19 points

So, AI is… checks notes… making you a lot of money, by association?

7 points

I do get profit sharing. :)

57 points

Not even close to the second time. It’s happening constantly but mostly getting missed.

Too many people think LLMs are accurate.

6 points

Problem is that LLM answers like this will find their way into search engines like Google. Then it will be even more difficult to find real answers to questions.

1 point

Have found, not will find.

There are so many spam sites with LLM content.

4 points

Some LLMs are already generating answers based on other LLM-generated content. We’ve gone full circle.

I was using Phind to get some information about e-drum sensors (not the intended use case, but I was just messing around), and one of the sources was a very obvious AI-generated article from a content mill.

Skynet is going to be so inbred.

32 points

This is what you get when the political system favours lies over truth.

The more these people lie and get away with it, the more it becomes the culture. China levels of Big Brother oppression are only a decade or so away if this keeps going.

