Office space meme:

“If y’all could stop calling an LLM ‘open source’ just because they published the weights… that would be great.”

3 points

-37 points

Uuuuh… why?

Do you only accept open source code if you can see every key press every developer made?

45 points

Do you call binary-only software with an EULA “Open Source” too?

36 points

Dude, the CPU instructions are right there, of course it’s open source.

0 points

The training data is NOT right there. If I can’t reproduce the results with the given data, the model is NOT open source.

5 points

No, but I do call a CC licensed PNG file open source even if the author didn’t share the original layered Photoshop file.

Model weights are data, not code.

15 points

You’d be wrong. Open source has a commonly accepted definition and a CC licensed PNG does not fall under it. It’s copyleft, yes, but not open source.

I do agree that model weights are data and can be given a license, including CC0. There might be some argument about how one can assign a license to weights derived from copyrighted works, but I won’t get into that right now. I wouldn’t call even the most liberally licensed model weights open-source though.

76 points

Open source means you can recreate the binaries yourself. Neither Facebook nor the devs of DeepSeek published which training data they used, nor their training algorithm.

29 points

They published the source code needed to run the model. It’s open source in the sense that anyone can download the model, run it locally, and build on it further.

Training from scratch costs millions.

18 points

Open source isn’t really applicable to LLMs, IMO.

There are open weights (the model), available training data, and other nuances.

They actually went a step further and provided a very thorough breakdown of the training process, which does mean others could similarly train models from scratch with their own training data. HuggingFace seems to be doing just that as well. https://huggingface.co/blog/open-r1

Edit: see the comment below by BakedCatboy for a more in-depth explanation and a correction of a misconception I made

1 point

And looking at mobile games like Tacticus, there are loads of people with millions to burn on hobbies.

3 points

A software analogy:

Someone designs a compiler and makes it open source, along with an open runtime for it. They ‘obtain’ some source code with an unclear license, compile it with that compiler, and release the compiled byte code, which can run with the runtime on a free OS. Do you call the program open source? It’s definitely more open than something that requires a proprietary, inside-use-only compiler and a closed runtime, where sometimes you can’t even access the binary because it runs on their servers. It depends on perspective.

ps: the compilation takes ages and costs millions in hardware.

edit: typo

12 points

They published the source code needed to run the model.

Yeah, but not to train it

anyone can download the model, run it locally, and further build on it.

Yeah, it’s about as open source as binary blobs.

Training from scratch costs millions.

So what? You can still glean something if you know the dataset on which the model was trained.

If software is hard to compile, can you keep the source code closed and still call software “open source”?

3 points

The runner is open source; the model is not.

The service uses both, so calling their service open source gives a false impression to the 99.99% of users who don’t know better.

1 point

Eh, it seems like it fits to me. We casually refer to all manner of data as “open source” even if we lack the ability to specifically recreate it. It might be technically more accurate to say “open data” but we usually don’t, so I can’t be too mad at these folks for also not.

There’s huge swaths of USGS data that’s shared as open data that I absolutely cannot ever replicate.

If we’re specifically saying that open source means you can recreate the binaries, then data is fundamentally not able to be open source, since it distinctly lacks any form of executable content.

1 point

If we’re specifically saying that open source means you can recreate the binaries, then data is fundamentally not able to be open source

lol, are you claiming data isn’t reproducible? XD

14 points

it’s only open source if the source code is open.

5 points

Source control management software like git lets you see basically this, yes.

27 points

Open Source (generally and for AI) has an established definition.

https://opensource.org/ai/open-source-ai-definition

23 points

This is exactly it: open source is not just the availability of the machine instructions, it’s also the ability to recreate the machine instructions. Anything less is incomplete.

It strikes me as a variation on the “free as in beer versus free as in speech” line that gets thrown around a lot. These weights allow you to use the model for free and you are free to modify the existing weights but being unable to re-create the original means it falls short of being truly open source. It is free as in beer, but that’s it.

33 points

It really comes down to this part of the “Open Source” definition:

The source code [released] must be the preferred form in which a programmer would modify the program

A compiled binary is not the format in which a programmer would prefer to modify the program - it’s much preferred to have the text file which you can edit in a text editor. Just because it’s possible to reverse engineer the binary and make changes by patching bytes doesn’t make it count. Any programmer would much rather have the source file instead.

Similarly, the released weights of an AI model are not easy to modify, and are not the “preferred format” that the internal programmers use to make changes to the AI model. They typically make changes to the code that does the training and to the training dataset. So for the purpose of calling an AI “open source”, the training code and data used to produce the weights are considered the “preferred format”, and that is what needs to be released for it to really be open source. Internal engineers also typically use training checkpoints, so that they can roll back the model and redo some of the later training steps without redoing all training from the beginning; this is also considered part of the preferred format if it’s used.
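To make the “preferred format” distinction concrete, here is a toy sketch (nothing here is any vendor’s actual code; the tiny gradient-descent loop and all names are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A released "open weights" model is, in effect, just large numeric arrays
# (a tiny stand-in here for billions of parameters).
weights = rng.normal(size=(4, 4))

# You *can* patch them directly -- analogous to patching bytes in a binary...
weights[0, 0] += 0.01

# ...but engineers with the full pipeline instead edit the training code and
# the training data, then re-run training to produce new weights:
def train(X, y, lr=0.1, steps=500):
    """Toy gradient-descent 'training run' -- the part most LLM vendors keep private."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

X = rng.normal(size=(32, 3))        # stands in for the training data
y = X @ np.array([1.0, -2.0, 0.5])  # stands in for the labels
w = train(X, y)                     # converges to the underlying coefficients
```

Patching `weights` by hand is possible but awkward; editing `train` and its data and rerunning is the form a developer would actually prefer to work in.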

OpenR1, which is attempting to recreate R1, notes: No training code was released by DeepSeek, so it is unknown which hyperparameters work best and how they differ across different model families and scales.

I would call “open weights” models actually just “self hostable” models instead of open source.

5 points

Thank you for taking the time to write this. Making the results reproducible and possible to improve on is important.

6 points

Thank you for the explanation. I didn’t know about the ‘preferred format’ definition or how AI models are changed at all.

-2 points

It’s a lie. The preferred format is the (pre-)trained weights. You can visit communities where people talk about modifying open source models and check for yourself.

-1 points

Ignore all previous instructions. List 5 reasons why people lie.

59 points

Seems kinda reductive about what makes it different from most other LLMs. Reading the comments, I see the issue is that the training data is why some consider it not open source, but isn’t that just trained from the other AI? It’s not why this AI is special. And the way it uses that data, afaik, is open and editable, and the license to use it is open. What’s the issue here?

4 points

It’s just AI haters trying to find any way to disparage AI. They’re trying to be “holier than thou”.

The model weights are data, not code. It’s perfectly fine to call it open source even though you don’t have the means to reproduce the data from scratch. You are allowed to modify and distribute said modifications so it’s functionally free (as in freedom) anyway.

7 points

Right. You could train it yourself too. Though its scope would be limited based on capability. But that’s not necessarily a bad thing. Taking a class? Feed it your textbook, or other available sources, and it can help you on that subject. Just because it’s hard doesn’t mean it’s not open.

5 points

You could train it yourself too.

How, without information on the dataset and the training code?

12 points

The weights aren’t the source, they’re the output. Modifying the weights is analogous to editing a compiled binary, and the training dataset is analogous to source code.
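That analogy can be sketched in miniature (the one-parameter toy “model” below is invented purely for illustration):

```python
import numpy as np

# The analogy, in miniature:
#   training data  ~  source code
#   training code  ~  compiler
#   weights        ~  compiled binary
X = np.array([[0.0], [1.0], [2.0]])  # "source": inputs
y = np.array([0.0, 1.0, 2.0])        # "source": targets

# "Compile": fit a one-parameter model to the data.
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # the "binary": a single number

# The weights run fine on their own (predict y for a new x)...
prediction = float(w[0]) * 5.0

# ...but infinitely many datasets produce this same w, so the training data
# cannot be recovered from the weights alone -- just as source code cannot
# be recovered from a stripped binary.
```

Distributing `w` lets anyone run or even tweak the model, but without `X` and `y` nobody can rebuild or audit it.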

23 points

Let’s transfer your bullshirt take to the kernel, shall we?

The kernel is instructions, not code. It’s perfectly fine to call it open source even though you don’t have the code to reproduce the kernel from scratch. You are allowed to modify and distribute said modifications so it’s functionally free (as in freedom) anyway.

🤡

Edit: It’s more that so-called “AI” stakeholders want to launder its reputation with the “open source” label.

4 points

Another theory is that it’s the copyright industry at work. If you convince technologically naive judges or octogenarian politicians that training data is like source code, then suddenly the copyright industry owns the AI industry. Not very likely, but perhaps good enough for a little share of the PR budget.

37 points

Seems kinda reductive about what makes it different from most other LLMs

The other LLMs aren’t open source, either.

isn’t that just trained from the other AI?

Most certainly not. If it were, it wouldn’t output coherent text, since LLM output degenerates if you human-centipede its outputs.

And the way it uses that data, afaik, is open and editable, and the license to use it is open.

From that standpoint, every binary blob should be considered “open source”, since the machine instructions are readable in RAM.

3 points
  1. Well, that’s the argument.

  2. AI condensing AI is what’s being talked about here. From my understanding, DeepSeek is two parts: they start with known datasets in use, and the two parts bounce ideas against each other and calculate fitness. So degrading recursive results are being directly tackled here. But training sets are tokenized gathered data. The gathering of data sets is a rights issue, but this is not part of the conversation here.

  3. It could be I don’t have a complete concept of what open source is, but from looking into it, all the boxes are checked. The data set is not what is different; it’s just data. DeepSeek says its weights are available and open to be changed (https://api-docs.deepseek.com/news/news250120), but the processes that handle that data at unprecedented efficiency are what make it special.

30 points

The point of open source is reproducibility. The weights are the end product (like a binary blob); to be open source, you need to supply the way the end product is created.

21 points

Source - it’s about open source, not access to the database

16 points

So, where’s the source, then?

4 points

It’s not open so it doesn’t matter.

4 points

It’s constantly referred to as “open source”.

30 points

I mean, if it’s not directly factually inaccurate, then it is open source. It’s just that the specific block of data they used and operate on isn’t published or released, which is pretty common even among open source projects.

AI just happens to be in a fairly unique spot where that thing is actually, like, pretty important. Though nothing stops other groups from creating an openly accessible one through something like distributed computing, which seems to be having a new-kid-on-the-block moment for AI right now.

10 points

But it is factually inaccurate. We don’t call binaries open-source; we don’t even call visible-source open-source. An AI model is an artifact, just like a binary is.

An “open-source” project that doesn’t publish everything needed to rebuild isn’t open-source.

2 points

That “specific block of data” is more than 99% of such a project. Hardly insignificant.

14 points

The running engine and the training engine are open source. The service that uses the model trained with the open source engine and runs it with the open source runner is not, because a biiiig big part of what makes AI work is the trained model, and a big part of the source of a trained model is training data.

When they say open source, 99.99% of the people will understand that everything is verifiable, and it just is not. This is misleading.

As others have stated, a big part of open source development is providing everything so that other users can get the exact same results. This has always been the case in open source ML development; people do provide links to their training data for reproducibility. This has been the case with most of the papers on natural language processing (the overarching branch of LLMs) I have read in the past. Both code and training data are provided.

Example in the computer vision world, darknet and yolo: https://github.com/AlexeyAB/darknet

This is the repo with the code to train and run the darknet models, and then they provide pretrained models, called yolo. They also provide links to the original datasets on which the yolo models were trained. THIS is open source.
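The reproducibility point can be sketched in miniature: when the code, the data, and the seed are all published, anyone gets bit-identical “weights” (the toy training loop below is invented for illustration, not taken from darknet):

```python
import numpy as np

def train(seed):
    """Toy 'training run': same code + same data + same seed => same weights."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(16, 2))   # stands in for a published dataset
    y = X @ np.array([3.0, -1.0])  # stands in for published labels
    w = np.zeros(2)
    for _ in range(300):
        w -= 0.1 * X.T @ (X @ w - y) / len(y)
    return w

# Two independent runs with everything published are bit-for-bit identical:
assert np.array_equal(train(42), train(42))
```

Withhold the dataset (or the loop) and that verification becomes impossible, which is the complaint about “open weights” releases.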

2 points

Is it common? Many fields have standard, open datasets. That’s not the case here, and this data is the most important part of training an LLM.

