10 points

But it’s not just that “they effectively trained their model using OpenAI’s model”. The point Ed goes on to make is: why hasn’t OpenAI done the same thing? The marvel of DeepSeek is how much more efficient it is, whereas Big Tech keeps insisting that it needs ever-bigger data centers.

1 point

They HAVE done that. It’s one of the techniques they use to produce things like the o1-mini model and the other mini models that run on-device.

But distillation isn’t a valid technique for creating new foundation models, only for creating refined versions of existing ones. You would never have been able to create, for instance, an o1 model from ChatGPT 3.5 using distillation.
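
For anyone unfamiliar with the technique being discussed: distillation trains a small “student” model to imitate a large “teacher” model’s output distribution instead of (or alongside) the hard labels. Here is a minimal PyTorch sketch of the classic soft-target loss from Hinton et al. (2015); this is the textbook formulation, not OpenAI’s or DeepSeek’s actual pipeline, and the function name and temperature value are illustrative:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Soft-target distillation loss (Hinton et al., 2015)."""
    # Soften both output distributions with a temperature > 1 so the
    # student also learns the teacher's relative probabilities for
    # wrong answers, not just its top prediction.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between the two distributions; the T^2 factor keeps
    # gradient magnitudes comparable across temperature settings.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

# Toy usage: a batch of 4 examples over a 10-class output space.
teacher_logits = torch.randn(4, 10)
student_logits = torch.randn(4, 10, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow into the student only
```

Note why this supports the comment’s point: the student’s loss is bounded by how good the teacher’s distribution already is, which is why distillation refines existing capability rather than creating a fundamentally more capable foundation model.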
