Michael Cohen, the former lawyer for Donald Trump, admitted to citing fake, AI-generated court cases in a legal document that wound up in front of a federal judge, as reported earlier by The New York Times. A filing unsealed on Friday says Cohen used Google’s Bard to perform research after mistaking it for “a super-charged search engine” rather than an AI chatbot.
I… don’t even. I lack the words.
The problem is that breathless AI news stories have led people to misunderstand LLMs. The capabilities get a lot of attention; the limitations, not so much.
And one important limitation of LLMs: they’re really bad at being exactly right, while being really good at looking right. If you ask one to do an arithmetic problem you can’t do in your head, it’ll give you an answer that looks right. But if you check it with a calculator, you find the only thing right about the answer is how it sounds.
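A toy illustration of that point, with a made-up "model output" (the specific wrong number here is my invention, not from any real model): the answer has the right magnitude, plausible leading digits, and the right final digit, yet a calculator disagrees.

```python
# Hypothetical LLM output for "what is 7,234 x 9,871?" -- looks right
# (correct magnitude, plausible digits, correct last digit) but is wrong.
llm_answer = 71_408_114

# What a calculator actually says.
calculator = 7_234 * 9_871

print(calculator)                # 71406814
print(llm_answer == calculator)  # False: plausible-looking, still wrong
```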
So if you use it to find cases, it’s gonna be really good at finding cases that look exactly like what you need. The only problem is, they’re not exactly what you need, because they’re not real cases.
While individuals have a responsibility to double-check things, I think Google is a big part of this. They’re rolling “AI” into their search engine, so people are being fed made-up, inaccurate bullshit by a search engine they’ve trusted for decades.
That’s not what they’re talking about here. Unless this is somehow different in the US, only Microsoft so far shows an LLM “answer” next to search results.
Google may not be showing an “AI” tagged answer, but they’re using AI to automatically generate web pages with information collated from outside sources to keep you on Google instead of citing and directing you to the actual sources of the information they’re using.
Here’s an example. I’m on a laptop with a 1080p screen. I went to Google (which I basically never use, so it shouldn’t be biased for or against me) and did a search for “best game of 2023”. I got no actual results in the entire first screen. Instead, their AI or other machine learning algorithms collated information from other people and built a little chart for me right there on the search page and stuck some YouTube (also Google) links below that, so if you want to read an article you have to scroll down past all the Google generated fluff.
I performed the exact same search with DuckDuckGo, and here’s what I got.
And that’s not to mention all the “news” sites that have straight up fired their human writers and replaced them with AI whose sole job is to just generate word salads on the fly to keep people engaged and scrolling past ads, accuracy be damned.
I mean, I kind of see your point, but calling those results AI isn’t accurate unless you’re calling any kind of data collation/wrangling, or even basic programming logic, “AI”. What Google is doing is counting how many times a game is mentioned in pages in the gaming category and trying to spoon-feed you what it thinks you want. But that isn’t AI. The point of the person you were replying to is that it wasn’t as if Cohen intended to perform a Google search and was misled; you have to go to Google Bard or ChatGPT or whatever and prompt it, meaning it’s on you if you’re a professional who’s going to cite unverified word salad. The YouTube stuff is pretty obvious; it’s part of their platform. What was done has nothing to do with web searches.
It was kinda funny to me when everyone freaked out about misinformation and the “death of search,” when I see a lot of people who already never leave Google and treat Instant Answers as the truth, like they do with ChatGPT, despite it being very inaccurate and out of context a lot of the time.
Well to be fair Michael Cohen is not a lawyer, so how could he have known?
That’s the second time a lawyer has made this mistake, though the previous case wasn’t at such a high level.
I work for a law firm, and yeah, this happens a lot. The stupidity and laziness of our clients’ in-house attorneys is making us a lot of money.
Why is there not an automated check for any cases referenced in a filing, or required links? It would be trivial to require a clear citation format or uniform cross-reference, and this looks like an easy niche for automation to improve the judicial system. I understand that you couldn’t automatically interpret those cases or their relevance, but an existence check and links, or it doesn’t count.
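The existence check described above could be sketched roughly like this. Everything here is illustrative: the regex covers only one reporter format, and the tiny `KNOWN_CITATIONS` set stands in for a real citation database that a court system would actually query.

```python
import re

# Match citations like "925 F.3d 1291" (volume, reporter, page).
# This pattern is a simplification; real legal citations come in
# many more formats.
CITATION_RE = re.compile(r"\b\d+\s+F\.3d\s+\d+\b")

# Stand-in for a real database of published cases.
KNOWN_CITATIONS = {"925 F.3d 1291"}

def unverified_citations(filing_text: str) -> list[str]:
    """Return citations found in the filing that fail the existence check."""
    found = CITATION_RE.findall(filing_text)
    return [c for c in found if c not in KNOWN_CITATIONS]

filing = (
    "See United States v. Doe, 925 F.3d 1291; "
    "accord Smith v. Jones, 123 F.3d 456."
)
print(unverified_citations(filing))  # ['123 F.3d 456']
```

The point isn’t interpreting the cases, just flagging citations that don’t resolve to anything real before a judge ever sees them.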
I assume that right now it doesn’t happen unless the other side pays a paralegal for a few hours of research.
Not even close to the second time. It’s happening constantly but is getting missed.
Too many people think LLMs are accurate.
Problem is that LLM answers like this will find their way into search engines like Google. Then it will be even more difficult to find real answers to questions.
Some LLMs are already generating answers based on other LLM-generated content. We’ve gone full circle.
I was using Phind to get some information about e-drum sensors (not the intended use case, but I was just messing around), and one of the sources was a very obvious AI-generated article from a content mill.
Skynet is going to be so inbred
This is what you get when the political system favors lies over truth.
The more these people lie and get away with it, the more it will become the culture. China levels of big brother oppression are only a decade or so away if this keeps on going.