1 point

From Re-evaluating GPT-4’s bar exam performance (linked in the article):

First, although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population.

Ohhh, that is sneaky!

0 points

What I find delightful about this is that I already wasn’t impressed! Because, as the paper goes on to say:

Moreover, although the UBE is a closed-book exam for humans, GPT-4’s huge training corpus largely distilled in its parameters means that it can effectively take the UBE “open-book”

And here I was thinking it not getting a perfect score on multiple-choice questions was already damning. But apparently it doesn’t even get a particularly good score!

0 points

[…W]hen examining only those who passed the exam (i.e. licensed or license-pending attorneys), GPT-4’s performance is estimated to drop to 48th percentile overall, and 15th percentile on essays.

officially Not The Worst™, so clearly AI is going to take over law and governments any day now

also. what the hell is going on in that other reply thread. just a parade of people incorrecting each other going “LLMs don’t work like [bad analogy], they work like [even worse analogy]”. did we hit too many buzzwords?

0 points

Not the worst? 48th percentile is basically “average lawyer”. I don’t need a Supreme Court lawyer to argue my parking ticket. And if you train the LLM on specific case law and use RAG, it can get much better.

In a worst-case scenario, if my local lawyer can use AI to generate a letter and just quickly go through it to make sure it didn’t hallucinate, they can process more clients, offer faster service, and charge lower prices. Maybe not a revolution, but still a win.
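For the curious, here is a toy sketch of the RAG idea mentioned above. Everything in it is hypothetical: the tiny case-law corpus, the bag-of-words retrieval, and the prompt format all stand in for the embedding model and vector store a real system would use. The point is only the shape of the loop: retrieve relevant passages, then put them in the prompt so the model answers from the supplied text instead of from memory alone.

```python
# Minimal, hypothetical RAG sketch: retrieve the most relevant case-law
# snippets for a query, then prepend them to the LLM prompt.
# (Toy bag-of-words retrieval; a real system would use embeddings + a vector DB.)
import math
from collections import Counter

# Stand-in corpus; a real deployment would index actual case law.
CASE_LAW = [
    "Smith v. City (2019): parking fines must state the statute violated.",
    "Doe v. State (2021): signage must be visible from the parking spot.",
    "Roe v. County (2018): appeals must be filed within 30 days.",
]

def vectorize(text: str) -> Counter:
    # Crude term-frequency vector over lowercased whitespace tokens.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank documents by similarity to the query; keep the top k.
    qv = vectorize(query)
    ranked = sorted(CASE_LAW, key=lambda doc: cosine(qv, vectorize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Retrieved passages go into the prompt, so the model is asked to
    # ground its answer in the supplied text rather than its training data.
    context = "\n".join(retrieve(query))
    return f"Answer using only the cases below.\n\n{context}\n\nQuestion: {query}"

print(build_prompt("How do I appeal a parking ticket?"))
```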

0 points

Why is that a criticism? This is how it works for humans too: we study, we learn the stuff, and then try to recall it during tests. We’ve been trained on the data too; neither a human nor an AI would be able to do well on the test without learning it first.

This is part of what makes AI so “scary”: that it can basically know so much.

0 points

Because a machine that “forgets” stuff it reads seems rather useless… considering it was a multiple-choice exam and, as a machine, ChatGPT had the book entirely memorized, it should have scored perfectly almost all the time.

0 points

Don’t anthropomorphise. There is quite the difference between a human and an advanced lookup table.

0 points

LLMs know nothing. literally. they cannot.

0 points

Though making an unreliable intern is amazing and was impossible 5 years ago…

0 points

thank fuck sama invented the concept of doing a shit job

0 points

I mean, it’s not shit at everything; it can be quite useful in the right context (GitHub Copilot is a prime example). Still, it doesn’t surprise me that these first-party LLM benchmarks are full of smoke and mirrors.

1 point

citation needed

0 points

the perils of hitting /all

0 points

416 updoots, what on earth

1 point

dj khaled suffering from success dot jpeg
