archive https://archive.ph/is57b
From Re-evaluating GPT-4’s bar exam performance (linked in the article):
First, although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population.
Ohhh, that is sneaky!
What I find delightful about this is that I already wasn’t impressed! Because, as the paper goes on to say:
Moreover, although the UBE is a closed-book exam for humans, GPT-4’s huge training corpus largely distilled in its parameters means that it can effectively take the UBE “open-book”
And here I was thinking it not getting a perfect score on multiple-choice questions was already damning. But apparently it doesn’t even get a particularly good score!
[…W]hen examining only those who passed the exam (i.e. licensed or license-pending attorneys), GPT-4’s performance is estimated to drop to 48th percentile overall, and 15th percentile on essays.
officially Not The Worst™, so clearly AI is going to take over law and governments any day now
also. what the hell is going on in that other reply thread. just a parade of people incorrecting each other going “LLMs don’t work like [bad analogy], they work like [even worse analogy]”. did we hit too many buzzwords?
Not the worst? 48th percentile is basically “average lawyer”. I don’t need a Supreme Court lawyer to argue my parking ticket. And if you train the LLM on specific case law and use RAG, results can get much better.
In a worst-case scenario, if my local lawyer can use AI to generate a letter and just quickly go through it to make sure it didn’t hallucinate, they can process more clients, offer faster service, and charge cheaper prices. Maybe not a revolution, but still a win.
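For what it’s worth, the “train it on case law and use RAG” idea above can be sketched in a few lines. This is a hedged toy illustration, not a real system: the corpus is made up, and bag-of-words cosine similarity stands in for an actual embedding model and vector store. The “generation” step is just assembling the prompt you would hand to an LLM.

```python
# Minimal sketch of retrieval-augmented generation (RAG) over a tiny,
# hypothetical "case law" corpus. A real retriever would use embeddings;
# here plain bag-of-words cosine similarity stands in for it.
import math
import re
from collections import Counter

CASES = [  # hypothetical snippets, for illustration only
    "Parking violations may be contested within 30 days of the citation.",
    "A contract requires offer, acceptance, and consideration.",
    "Negligence requires duty, breach, causation, and damages.",
]

def vectorize(text: str) -> Counter:
    # crude term-count vector; strips punctuation and lowercases
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    # rank the corpus by similarity to the query and keep the top k
    qv = vectorize(query)
    ranked = sorted(CASES, key=lambda c: cosine(qv, vectorize(c)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # the "augmented" prompt an LLM would receive
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

print(build_prompt("How long do I have to contest a parking citation?"))
```

The point of the retrieval step is exactly the lawyer-checking-the-letter workflow: the model answers from documents you chose, which makes hallucinations easier to spot against the quoted context.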
Why is that a criticism? This is how it works for humans too: we study, we learn the material, and then try to recall it during tests. We’ve been trained on the data too; neither a human nor an AI would be able to do well on the test without learning it first.
This is part of what makes AI so “scary”: that it can basically know so much.
Because a machine that “forgets” stuff it reads seems rather useless… considering it was a multiple-choice exam and, as a machine, ChatGPT had the book entirely memorized, it should have scored perfectly almost every time.
Don’t anthropomorphise. There is quite a difference between a human and an advanced lookup table.
Though making an unreliable intern is amazing and was impossible 5 years ago…
I mean, it’s not shit at everything; it can be quite useful in the right context (GitHub Copilot is a prime example). Still, it doesn’t surprise me that these first-party LLM benchmarks are full of smoke and mirrors.
the perils of hitting /all