Text from them:

Calling all model makers, or would-be model creators! Chai asked me to tell you all about their open source LLM leaderboard:

Chai is running a totally open LLM competition. Anyone is free to submit a LLaMA-based LLM via our Python package 🐍 It gets deployed to users on our app. We collect the metrics and rank the models! If you place high enough on our leaderboard, you’ll win money 🥇

We’ve paid out over $10,000 in prizes so far. 💰

Come to our discord and check it out!

https://discord.gg/chai-llm

Link to latest board for the people who don’t feel like joining a random discord just to see results:

https://cdn.discordapp.com/attachments/1134163974296961195/1138833170838589471/image1.png

11 points

Me at first: wow, that’s cool, I wonder how models are ranked

Come to our discord and check it out!

OK, bye

3 points

lmao a reasonable request, I’m pretty disappointed they don’t have it hosted anywhere…

here’s a link to their latest image of the leaderboard for what it’s worth:

https://cdn.discordapp.com/attachments/1134163974296961195/1138833170838589471/image1.png

2 points

TYVM, OP :)

Wizard is at the top of every leaderboard I’ve seen so far; I should really check it out.

2 points

How is it ranked? I’m not familiar with any of those except Wizard.

2 points

There’s apparently a pip command to display the leaderboard. If this ends up being of interest to people, I could make a post and just update it every so often with the latest leaderboard.


At least (as far as I can tell) they appear to be ranking the models by human evaluation rather than “benchmarks”, which is closer to measuring real-world performance.

It would be interesting to consider the types of questions that users are posing. For example, there is a difference between asking:

  • A surface-level fact-based question such as “what is …”

  • A creative question like “write a story/article about …” or “give me a list of possible talking points for a presentation on …”

  • A question about reasoning/understanding like “why do you think the word … is more popular than … when referring to …” or “explain why … is considered socially acceptable while … is not”

  • Anything coding-related

Also, some models seem to do well at things that can be answered after one or two replies, but struggle to follow an argument if you try to go more in-depth or continue a conversation about a topic.

1 point

Yeah, it’s a step in the right direction at least, though now that you mention it, doesn’t LMSYS or someone do the same with human eval and side-by-side comparisons?

It’s such a tricky line to walk between deterministic questions (repeatable but cheatable) and user questions (real world but potentially unfair)
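For what it’s worth, the usual way these side-by-side human votes get turned into a ranking is an Elo-style rating, where each “user preferred A over B” vote nudges the winner up and the loser down. A minimal sketch of that idea (illustrative only — not Chai’s or LMSYS’s actual code, and the model names are made up):

```python
# Elo-style ranking from pairwise human preference votes.
# Each vote moves the winner up and the loser down, weighted by
# how surprising the outcome was given the current ratings.

def update_elo(ratings, winner, loser, k=32):
    """Apply one preference vote to the ratings dict in place."""
    ra, rb = ratings[winner], ratings[loser]
    # Expected probability that `winner` beats `loser` under current ratings
    expected_win = 1 / (1 + 10 ** ((rb - ra) / 400))
    ratings[winner] = ra + k * (1 - expected_win)
    ratings[loser] = rb - k * (1 - expected_win)

# Hypothetical votes: (model the user preferred, model it beat)
votes = [("wizard", "model_b"), ("wizard", "model_c"), ("model_b", "model_c")]

ratings = {"wizard": 1000.0, "model_b": 1000.0, "model_c": 1000.0}
for winner, loser in votes:
    update_elo(ratings, winner, loser)

# Sort highest rating first to get the leaderboard order
leaderboard = sorted(ratings.items(), key=lambda kv: -kv[1])
```

The nice property is that votes don’t have to be on deterministic questions: any real user conversation where one model’s reply beats another’s counts, which sidesteps the “repeatable but cheatable” problem at the cost of noisier, potentially unfair comparisons.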


LocalLLaMA

!localllama@sh.itjust.works

Community for discussing LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.
