Office Space meme:

“If y’all could stop calling an LLM ‘open source’ just because they published the weights… that would be great.”

1 point

Not enough to make the results diverge. Randomness is added to avoid getting stuck in local maxima during optimization; you should still end up at the same global maximum. Models usually train until the optimization converges.

As stated, if the randomness is large enough that multiple reruns end up with different weights, i.e. converged to different maxima, the randomization is trash. Any setup worth its salt won’t inject that much randomness.

So, going back to my initial point, we need the training data to validate the weights. There are ways to check a model’s performance (quite literally, the same objective used to evaluate the weights during training is then used to evaluate them after training), and if you rerun training with the same data and parameters, the performance should be identical up to a very small rounding error, as sketched below.
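A minimal sketch of that reproducibility check, using a toy least-squares model and numpy (the model, seed, and hyperparameters here are illustrative assumptions, not anything from the thread):

```python
# Sketch: retrain twice with the same data, seed, and hyperparameters,
# then check that weights and post-training loss agree up to rounding.
import numpy as np

def train(seed: int, X: np.ndarray, y: np.ndarray,
          lr: float = 0.01, steps: int = 2000) -> np.ndarray:
    """SGD on least squares; the seed controls every source of randomness
    (the init and the small noise injected to escape shallow optima)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])                   # random init
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)             # least-squares gradient
        noise = rng.normal(scale=1e-3, size=w.shape)  # injected randomness
        w -= lr * (grad + noise)
    return w

def loss(w: np.ndarray, X: np.ndarray, y: np.ndarray) -> float:
    # The same objective used during training, reused to evaluate the result.
    return float(np.mean((X @ w - y) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=256)

w1 = train(seed=42, X=X, y=y)
w2 = train(seed=42, X=X, y=y)    # identical rerun: same data, seed, parameters
assert np.allclose(w1, w2)       # weights from both runs match
print(loss(w1, X, y), loss(w2, X, y))  # post-training evaluations match too
```

The point of the sketch is that once every source of randomness is seeded, a rerun is deterministic, so published training data plus parameters would let anyone verify published weights this way.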


