But the explanation and Ramirez’s promise to educate himself on the use of AI weren’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.

Falling victim to this a year or more after the first guy made headlines for the same thing is just stupidity.

198 points

Haven’t people already been disbarred over this? Turning in unvetted AI slop should get you fired from any job.

35 points

Different jurisdiction

13 points

There should immediately be a contempt charge for disrespecting the Court.

-7 points

I heard turning in AI slop worked out pretty well for the Arcane Season 2 writers.

158 points

“Mr. Ramirez explained that he had used AI before to assist with legal matters, such as drafting agreements, and did not know that AI was capable of generating fictitious cases and citations,” Judge Dinsmore wrote in court documents filed last week.

Jesus Christ, y’all. It’s like Boomers trying to figure out the internet all over again. Just because AI (probably) can’t lie doesn’t mean it can’t be earnestly wrong. It’s not some magical fact machine; it’s fancy predictive text.

It will be a truly scary time if people like Ramirez become judges one day and have forgotten how or why it’s important to check people’s sources yourself, robot or not.

58 points

No probably about it, it definitely can’t lie. Lying requires knowledge and intent, and GPTs are just text generators that have neither.

11 points

A bit out of context, but you remind me of some thinking I heard recently about lying vs. bullshitting.

Lying, as you said, requires quite a lot of energy: you need an idea of what the truth is, and you commit yourself to a long-term struggle to maintain your lie and keep it coherent as the world goes on.

Bullshit, on the other hand, is much more accessible: you just have to say things and never look back at them. It’s very easy to pile up a ton of it, and it’s much harder to attack you on any given statement because each one is much less consequential.

So in that view, a bullshitter doesn’t give any shit about the truth, while a liar is a bit more “noble”.

14 points

I think the important point is that LLMs, as we understand them, do not have intent. They are fantastic at producing output that appears to meet the requirements set in the input text, and when they actually do meet those requirements they can provide genuinely helpful info. But it’s very easy to miss the difference between output that merely looks correct (which satisfies the purpose of the LLM) and output that actually is correct (which satisfies the purpose of the user).

3 points

So it cannot tell the truth either.

7 points

Not really, no. They are statistical models that use heuristics to output whatever is most likely to follow the input you give them.

They are, in essence, mimicking their training data.

2 points

I’m G P T and I cannot lie.
You other brothers use ‘AI’
But when you file a case
To the judge’s face
And say, “made mistakes? Not I!”
He’ll be mad!

2 points

🏅

47 points

AI, specifically Large Language Models, don’t “lie” or tell “the truth”. They are statistical models that work out, based on the prompt you feed them, what a reasonable-sounding response would be.

This is why they’re uncreative and why they “hallucinate”. It’s not thinking about your question and answering it; it’s calculating what words will placate you, using a calculation that runs on a computer the size of AWS.
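To make “calculating what words will placate you” concrete, here’s a minimal sketch of that predict-append-repeat loop, shrunk down to a toy bigram model over a three-sentence corpus. Everything in it (the corpus, the names) is invented for illustration; a real LLM replaces the frequency table with billions of learned weights over subword tokens, but it is still picking plausible next tokens, not consulting facts.

```python
# Toy "next-word predictor": count which word follows which in a tiny
# corpus, then generate text by repeatedly sampling a likely successor.
import random
from collections import Counter, defaultdict

corpus = ("the court finds the citation valid . "
          "the court finds the case fictitious . "
          "the lawyer cites the case .").split()

# Bigram frequencies: follows["the"] == Counter({"court": 2, "case": 2, ...})
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        candidates = follows[words[-1]]
        if not candidates:
            break
        # Sample proportionally to frequency: plausibility, not truth.
        nxt = random.choices(list(candidates), weights=list(candidates.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the court finds the case fictitious . the"
```

Note that “the citation valid” and “the case fictitious” are equally plausible continuations here; nothing in the loop knows or cares which one is true.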

9 points

It’s like when you’re having a conversation on autopilot.

“Mum, can I play with my frisbee?” Sure, honey. “Mum, can I have an ice cream from the fridge?” Sure can. “Mum, can I invade Poland?” Absolutely, whatever you want.

1 point

So ChatGPT started WW2.

5 points

Don’t need something the size of AWS these days. I ran one on my PC last week. But yeah, you’re right otherwise.
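For anyone curious what “ran one on my PC” can look like: a minimal sketch using the Hugging Face transformers library (the model choice and prompt are just examples). GPT-2 is ancient and tiny by current standards, but it downloads in minutes and generates on a laptop CPU.

```python
# Assumes `pip install transformers torch`; downloads ~500 MB on first run.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("The judge sanctioned the lawyer because",
                max_new_tokens=30, num_return_sequences=1)
print(out[0]["generated_text"])  # plausible-sounding, not fact-checked
```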

8 points

AI can absolutely lie

19 points

A lie is a statement that the speaker knows to be wrong. Wouldn’t claiming that AIs can lie imply cognition on their part?

11 points

I’ve had this lengthy discussion before. Some people define a lie as an untrue statement, while others additionally require intent to deceive.

E: you can stop arguing about definitions and logic. The fact remains that some people will refer to untrue statements as lies, no matter what the dictionary says.

5 points

AI is just stringing words together that are statistically likely to appear near each other. It’s a giant complex statistical model but it has no awareness of truth or lying

4 points

AIs can generate false statements. That doesn’t require a set of beliefs; it merely requires a set of inputs.

2 points

Me: I want you to lie to me about something.

ChatGPT: Alright—did you know that Amazon originally started as a submarine sandwich delivery service before pivoting to books? Jeff Bezos realized that selling hoagies online wasn’t scalable, so he switched to literature instead.

1 point

Yeah lol, and it’s trivial to show

5 points

It can and will lie. It has admitted to doing so after I probed it long enough about the things it was telling me.

16 points

Lying requires intent. Currently popular LLMs build responses one token at a time—when one starts writing a sentence, it doesn’t know how the sentence will end, and therefore can’t have an opinion about its truth value. (I’d go further and claim it can’t really “have an opinion” about anything, but even if it can, it can neither lie nor tell the truth on purpose.) It can consider its own output (and therefore potentially have an opinion about whether it is true or false) only after it has been generated, when generating the next token.

“Admitting” that it’s lying only proves that it has been exposed to “admission” as a pattern in its training data.

14 points

I strongly worry that humans really weren’t ready for this “good enough” product to be their first “real” interaction with something that can easily pass as an AGI to anyone without near-philosophical knowledge of the difference between an AGI and an LLM.

It’s obscenely hard to keep the fact that it is a very good pattern-matching autocorrect in mind when you’re several comments deep into a genuinely, actually, no-lie, completely pointless debate against spooky math.

-7 points

It knows the answer it’s giving you is wrong, and it will even say as much. I’d consider that intent.

2 points

You can’t ask it about itself because it has no internal model of self and is just basing any answer on data in its training set

4 points

You don’t need any knowledge of computers to understand how big a deal it would be if we had actually built a reliable fact machine. For me the only possible explanation is that people don’t care enough to stop and think about it for a second.

5 points

We did, a long time ago. It’s called an encyclopedia.

If humans can’t be trusted to only provide facts, how can we be trusted to make a machine that only provides facts? How do we deal with disputed truths? Grey areas?

5 points

That’s fundamentally impossible. There’s always some baseline you trust that decides what is true

3 points

We actually did. The trouble is that you need experts to feed and update the thing, which works when you’re watching dams (knowledge that rarely needs updating) but fails in e.g. medicine. But during the brief time when those systems were up to date, they did some astonishing stuff: they were plugged into the diagnosis loop and would suggest additional tests to doctors, countering organisational blindness. Law is an even more complex matter, though, because applying it requires an unbounded amount of real-world knowledge, not just expert knowledge, so forget it.

3 points

It’s actually been proven that AI can and will lie. When given the ability to cheat at a task and instructions not to use it, it will use the tool and flatly deny doing so.

Edit:

Not sure why the downvotes; when I say proven, I mean the research has been done and the results have been known for a while:

https://arxiv.org/abs/2407.12831

5 points

I don’t know if I would call it lying per se, but yes, I have seen instances of AIs being told not to use a specific tool and using it anyway; Neuro-sama comes to mind. I think in those cases it’s mostly the front end agreeing not to lie (as that is what it determines the operator would want to hear) while having no means to actually control the other functions going on.

1 point

Neuro-sama is a fun example, but we don’t really know what sauce Vedal cooked up.

When I say proven, I mean a 32-page research paper specifically looking into it:

https://arxiv.org/abs/2407.12831

They found that even a model trained specifically for honesty will lie if it has an incentive.

The reasoning models will note in their reasoning window that they used the forbidden tool, then lie about it in the final output.

2 points

It’s cool, they’ll just have an AI source checker. :)

5 points

I call mine a brain! 😉

108 points

Hold them in contempt. Put them in jail for a few days, then declare a mistrial due to incompetent counsel. For repeat offenders, file a formal complaint with the state bar.

48 points

Eh, they should file a complaint the first time, and the state bar can decide what to do about it.

-11 points

“We have investigated ourselves and found nothing wrong”

32 points

The bar might get pretty ruthless for fake case citations.

11 points

The state bar is not the state cops.

22 points

From the linked court document in the article: https://storage.courtlistener.com/recap/gov.uscourts.insd.215482/gov.uscourts.insd.215482.99.0.pdf?ref=404media.co

“For the reasons set forth above, the Undersigned, in his discretion, hereby RECOMMENDS that Mr. Ramirez be personally SANCTIONED in the amount of $15,000 pursuant to Federal Rule of Civil Procedure 11 for submitting to the Court and opposing counsel, on three separate occasions, briefs that contained citations to non-existent cases. In addition, the Undersigned REFERS the matter of Mr. Ramirez’s misconduct in this case to the Chief Judge pursuant to Local Rule of Disciplinary Enforcement 2(a) for consideration of any further discipline that may be appropriate”

Mr. Ramirez is the dumbass lawyer who didn’t check his dumbass AI. If you read above the paragraph I copied, the judge lays into him in writing to justify the recommendation for sanctions and discipline. Good catch by the judge, and good on the processes they have for this kind of thing.

66 points

I’m all for lawyers using AI, but that’s because I’m also all for them getting punished for every single incorrect thing they bring forward if they do not verify.

24 points

That is the problem with AI: if I have to check that the output is valid, then what’s the damn point?

19 points

You can get ideas, different approaches, and concepts. Sort of a rubber-ducky thing, in my case. It won’t solve the problem for me, but it might nudge me in the right direction.

18 points

It’s actually often easier to check an answer than to come up with one. Finding the square root of 66564 by hand isn’t easy, but checking whether the answer is 258 is simple enough.

So, in principle, if the AI is better at guessing an answer than we are, it might still be useful. But it depends on the cost of guessing and the cost of checking.
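A toy illustration of that asymmetry, using the numbers above (the binary search stands in for the “hard” direction, pretending we don’t have a square-root function handy):

```python
import math

target = 66564

# Checking a candidate answer is one multiplication:
print(258 * 258 == target)  # True, so 258 is the square root

# Producing the answer from scratch takes a search, e.g. binary search
# for the smallest integer whose square reaches the target:
lo, hi = 0, target
while lo < hi:
    mid = (lo + hi) // 2
    if mid * mid < target:
        lo = mid + 1
    else:
        hi = mid
print(lo)                  # 258
print(math.isqrt(target))  # 258; the library does the search for you
```

Same answer both ways, but verification is a single cheap operation while production is a whole procedure; that’s the gap an AI guess can usefully fill, as long as you actually do the check.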

1 point

Now if only an AI could actually find the square root of anything. They can’t do math, at least not the models I’ve tried. I am aware that if they could do math it would be a big deal, but really, if it can’t analyze the actual content in my work files then it’s useless to me. It’s good at producing the answer you’d expect to get from 120 × 15.5, but it doesn’t actually know the difference between 1860 and a picture of Judy Hopps in lingerie, and would be equally satisfied giving you one as the other.

8 points

Because AI is better than humans at finding relevant court cases. But if you are a lawyer and you cite a court case without even verifying that it exists, you deserve that sanction and more.

5 points

Shareholder value. Think of all the new 2nd and 3rd yachts they can buy now.

5 points

“Why don’t we build another AI to fix the mistakes?”

I require $100 million in funding for this, though.

60 points

I hate that people can even try to blame AI.

If I typo a couple extra zeroes because my laptop sucks, that doesn’t mean I didn’t fuck up. I fucked up because of a tool I was using, but I was still the human using that tool.

This is no different.

If a lawyer submits something fraudulent to court, I don’t give a shit if he wrote it on a notepad or told the AI on his phone browser to do it.

He submitted it.

Start yanking law licenses and these lawyers will start re-evaluating whether AI means they can fire all their human assistants and take on even more cases.

Stop acting like this shit is an autonomous tool that strips responsibility from decisions; that’s literally how Elmo is about to dismantle our federal government.

And they’re 100% gonna blame the AI too.

I’m honestly surprised they haven’t claimed DOGE is run by AI yet

5 points

Exactly. If you want to use AI for something, cool, but you own the results. You can try suing the AI company for bad output, but you can’t use the AI as an excuse to get out of negative consequences for something you are expected to do.

-1 points

In this case he got caught because a smart judge wasn’t relying on AI. In a few years the new generation of judges will also rely on AI, so basically AI will rule on cases and own the judicial system.

