Office space meme:
“If y’all could stop calling an LLM “open source” just because they published the weights… that would be great.”
Even worse is calling a proprietary, absolutely closed-source, closed-data, closed-weights company “OpenAI”.
Especially after it was founded as a nonprofit with the mission to push open-source AI as far and wide as possible, to ensure a multipolar AI ecosystem in which AI keeps other AI in check, so that AI stays respectful and prosocial.
It’s even crazier that Sam Altman and other ML devs said years ago that they had reached the peak of what current machine learning models were capable of.
But that doesn’t mean shit to the marketing departments
“Look at this shiny.”
Investment goes up.
“Same shiny, but look at it and we need to warn you that we’re developing a shinier one that could harm everyone. But think of how shiny.”
Investment goes up.
“Look at this shiny.”
Investment goes up.
“Same shiny, but look at it and we need to warn you that we’re developing a shinier one that could harm everyone. But think of how shiny.”
The training data would be incredibly big. And it would contain copyrighted material (which is completely okay in my opinion, but might invite criticism). Hell, it might even be illegal to publish the training data with the copyrighted material in it.
They published the weights AND their training methods, which is about as open as it gets.
They could disclose how they sourced the training data, what the training data actually is, and how you could source it yourself. Also, did they publish their hyperparameters?
They could just not call it Open Source if they can’t open source it.
For neural nets the method matters more. Data would be useful, but at the amounts these things get trained on, the specific data matters little.
They can be trained on anything, and a diverse enough data set would make the model function more or less the same as a different but equally diverse set. Assuming publicly available data is in the set, there would also be overlap.
The training data is also, by necessity, going to be orders of magnitude larger than the model itself. Sharing becomes impractical at a certain point, before you even factor in other issues.
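A rough back-of-the-envelope sketch of that size gap (the parameter count, token count, and bytes-per-token below are illustrative assumptions in the ballpark of recently published large models, not figures from any specific release):

```python
# Back-of-the-envelope: how much bigger is the training data than the
# weights? All numbers are assumed, illustrative values.
params = 70e9          # assumed parameter count (70B-class model)
bytes_per_param = 2    # fp16/bf16 weights
weights_gb = params * bytes_per_param / 1e9

tokens = 15e12         # assumed training corpus (~15 trillion tokens)
bytes_per_token = 4    # rough average bytes of raw text per token
data_tb = tokens * bytes_per_token / 1e12

print(f"weights: ~{weights_gb:.0f} GB, training text: ~{data_tb:.0f} TB")
print(f"data is roughly {data_tb * 1e3 / weights_gb:.0f}x the size of the weights")
```

With these assumptions the corpus comes out a few hundred times larger than the weights, which is where the “orders of magnitude” claim comes from.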
That… doesn’t align with years of research. Data is king. As someone who specifically studied long-tail distributions and few-shot learning (before succumbing to long COVID, so sorry if my response is a bit scattered): throwing more data at a problem almost always improves it more than changing the method does, and the method can only be simplified with more data. Outside of some neat tricks that modern deep learning has decided are hogwash and “classical”, at least, but most of those don’t scale to the sizes being looked at here.
Also, datasets inherently impose bias on networks, and it’s easier to create adversarial examples that fool two networks trained on the same data than to fool two fresh trainings of the same network on different data.
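For anyone curious what that transfer claim looks like in practice, here’s a minimal sketch using the Fast Gradient Sign Method (FGSM) in PyTorch. The tiny untrained networks are placeholders purely to keep the sketch self-contained; an actual experiment would use two models trained on the same dataset:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    """FGSM: nudge the input one small step in the direction
    that most increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def make_net():
    # Tiny MNIST-shaped classifier; untrained here, just to make
    # the sketch runnable end to end.
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64),
                         nn.ReLU(), nn.Linear(64, 10))

net_a, net_b = make_net(), make_net()
x = torch.rand(8, 1, 28, 28)       # batch of fake "images"
y = torch.randint(0, 10, (8,))     # fake labels

# The attack is crafted against net_a only...
x_adv = fgsm(net_a, x, y)

# ...then we check how often it also flips net_b's predictions.
# The claim above: this transfer rate tends to be higher when both
# nets were trained on the same data than when the same architecture
# is trained on two different datasets.
flipped = (net_b(x_adv).argmax(1) != net_b(x).argmax(1)).float().mean()
print(f"fraction of net_b predictions changed by net_a's attack: {flipped:.2f}")
```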
Sharing metadata and acquisition methods is important and should be the gold standard. Sharing network methods is also important, but that’s more of a silver standard, just because most modern state-of-the-art models differ so minutely from each other in performance nowadays.
Open source as a term should require both. That was the standard in the academic community before the tech bros started running their mouths, and it should be the standard again once they leave us alone.
I like how when America does it we call it AI, and when China does it it’s just an LLM!
Yeah, this shit drives me crazy. Putting aside the fact that it all runs off stolen data from regular people who are being exploited, most of this “AI” stuff is basically just freeware if anything; it’s about as “open source” as Winamp was back in the day.
Judging by OP’s salt in the comments, I’m guessing they might be an Nvidia investor. My condolences.