The headline is misleading. DeepSeek outperforms on some benchmarks by <1%, and underperforms on others by 1-5%.
Not that there’s a need to lie - China is still winning. DeepSeek is definitely cheaper, enormously, massively so, and on par with the other latest AI models. I’m not sure it’s totally positive news, because cheaper AI tends to mean more overall spending on AI, but China winning in this space is a definite lesser evil.
Cheaper, and less powerful.
So AI could be more sustainable this whole time? Interesting…
Who wants to bet their AI will be used for more useful things too?
AI, as in machine learning, has always had uses. Like, real useful additions to human capability. It’s very useful in pattern recognition and statistical analysis of various things.
Our more modern shit connotation comes from capitalists trying to replace labor with generative AI, which is a small subset of potential machine learning uses, but which by its nature sends massive amounts of shlock at us.
Having dealt with ML engineers in depth before: American tech companies tend to throw blank checks their way, which, combined with those engineers rarely having backgrounds in optimization or infrastructure, means they spin up 8 billion GPU instances in the cloud and only ever use 10% of them, because engineers are lazy.
They could, without any exaggeration, reduce their energy consumption by a factor of ten with about two weeks of honest engineering work. Yes this bothers the fuck out of me.
That’s my biggest gripe with mainstream closed-source AI: they could optimize some of their most powerful models to run on a potato, but… they don’t. And they’ll never open source because it’d be forked by people who are genuinely passionate about improvement.
AKA they’d be run outta business in no time.
> Who wants to bet their AI will be used for more useful things too?
Unlike in the West where if Nvidia tanks then the entire US economy goes down with it.
Idk about sustainability. It still requires massive computing infrastructure, which for now depends on unsustainable ways of sourcing metals.
> So AI could be more sustainable this whole time? Interesting…
It’s not that unsustainable unless you believe the predictions that your microwave will be building its own LLM every month by 2030.
The electricity and water usage isn’t very high in general or compared to data centre usage more broadly (and is basically nothing compared to crypto)
Source: an ML guy I know, so this could be entirely unsubstantiated, but apparently the main environmental burden of LLM infrastructure comes from training new models, not serving inference from already-deployed models.
Man, at this rate, China is going to somehow create a cryptocurrency or NFT that actually works and isn’t a scam.
An incredible outcome would be if the US stock market bubble pops because Chinese developed open-source AI that can run locally on your phone end up being about as good as Silicon Valley’s stuff.
I think the bubble might not pop so easily. Even if Microsoft is set back dramatically by this, investors have nowhere else to go. The whole industry is in turmoil, and since there’s nothing else to invest in, stocks stay high.
At least that’s how I explain the ludicrously high stock prices we’ve been seeing in recent years.
LLMs that run locally are already a thing, and I wager that one of those smaller models can do 99% of anything anyone would want.
What does it mean for an LLM to run locally? Where’s all the data with the ‘answers’ stored?
Imagine if an idea were a point on a graph: ideas that are similar would have points closer to each other, and ideas that are very different would be far apart. An LLM is a predictive model for this graph, just like a line of best fit is a predictive model for a simple linear graph. So in a way, the model is predicting the information; it’s not stored directly or searched for.
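To make the “closer points = similar ideas” part concrete, here’s a toy sketch (the numbers are invented for illustration, not taken from any real model) of how closeness between points is commonly measured:

```python
# Toy illustration of "similar ideas are nearby points".
# The 3-dimensional vectors are made up for this example; real models
# use thousands of dimensions, learned during training.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Close to 1.0 means the points 'point the same way' (similar ideas)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

cat = np.array([0.90, 0.80, 0.10])
kitten = np.array([0.85, 0.90, 0.15])
carburetor = np.array([0.10, 0.05, 0.95])

print(cosine_similarity(cat, kitten))      # high: nearby points, related ideas
print(cosine_similarity(cat, carburetor))  # low: far apart, unrelated ideas
```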
A locally running LLM is just one of these models shrunk down and executing on your computer.
Edit: removed a point about embeddings that wasn’t fully accurate
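If you want to see what “running locally” looks like in practice, here’s a minimal sketch using the llama-cpp-python library; the model filename is a placeholder for whichever quantized GGUF file you’ve downloaded ahead of time:

```python
# Minimal sketch of fully-offline inference with llama-cpp-python
# (pip install llama-cpp-python). The model path below is a placeholder;
# any GGUF-format model you've already downloaded will do.
from llama_cpp import Llama

# Loads the whole model (a few GB of weights) from disk into local memory.
# After this point, no network access is needed.
llm = Llama(model_path="./some-small-model-q4.gguf", n_ctx=2048)

# Generation is just a forward pass through the weights on your own hardware:
# the "answers" come out of the model's parameters, not a database lookup.
out = llm("Explain what an integrated circuit is.", max_tokens=128)
print(out["choices"][0]["text"])
```

The weights file is the entire thing: there’s no server-side store of answers, which is also why these models can confidently make things up.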
I’ve been messing around with deepseek, and I can already tell it’s much smoother and more coherent than chatgpt or gemini.
It also doesn’t have the limiters of the American LLMs, where they accidentally generate a true statement about history or politics that doesn’t show the US in a good light, and then have to stop and argue themselves back into the neoliberal / US state department position.
It has ridiculous limiters on for me; it refuses to answer when I ask who Xi Jinping is
Plus seeseepee
Deepseek made a mistake with the first query I asked it, so from that sample of 1 I’m treating it with the same caution as any of the current LLMs.
I asked it about testing an electronics part (an integrated circuit, or IC) and it confidently told me how to test an imaginary 16-pin version of the chip.
The IC in question has 8 pins.
When I followed up by asking “why pin 16” it confidently responded with a little lecture about what pin 16 does and just how important pin 16 is.
Once I’d proved to it that the IC has 8 pins, I got this:
“You’re absolutely correct that the MN3101 is an 8-pin DIP (Dual Inline Package) chip. My earlier reference to pin 16 was incorrect, and I appreciate your clarification. Let me provide accurate information for the MN3101 (8-pin DIP).”
The thing is that these chips have a unique ID (the part name) and publicly available datasheets that explain, amongst other things, how many pins they have.