-1 points

What? I’m not doubting what he said. Just surprised. Look at this. I really hope Sam IPOs his company so I can short it.

1 point

oh, so you’re that kind of fygm asshole

good to know

-1 points

Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.

@sc_griffith@awful.systems asked how request frequency might impact cost per request. Batch inference is one such reason (ask anyone in the self-hosted LLM community): batching requests amortizes the roughly fixed cost of a model pass across everyone in the batch. I noted that this only matters at very small scale, probably much smaller than the scale OpenAI is operating at, because at high request rates the batches fill anyway.
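To make the batching point concrete, here is a minimal numeric sketch. Everything in it (the batch size of 64, the unit cost per batch, the `cost_per_request` helper) is an illustrative assumption for the argument, not anything about OpenAI's actual serving stack:

```python
def cost_per_request(requests_in_window: int, batch_size: int = 64,
                     cost_per_batch: float = 1.0) -> float:
    """Cost borne by each request when a forward pass costs roughly the
    same whether the batch is full or not (weight loading dominates).
    A partially filled batch wastes capacity; a full one amortizes it."""
    if requests_in_window <= 0:
        raise ValueError("need at least one request")
    filled = min(requests_in_window, batch_size)
    return cost_per_batch / filled

# Low traffic: only 4 requests arrive in the batching window,
# so each bears 1/4 of the batch cost.
low_traffic = cost_per_request(4)    # 0.25

# High traffic: the window fills the batch, so each request
# bears 1/64 of the batch cost. Beyond this point, more traffic
# just means more full batches, not cheaper requests.
high_traffic = cost_per_request(500)  # 0.015625
```

This is why the "request frequency affects cost per request" effect flattens out: once arrivals per window exceed the batch size, cost per request stops falling.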

@dgerard@awful.systems why did you say I am demanding someone disprove the assertion? Are you misunderstanding “I would be very very surprised if they couldn’t fill [the optimal batch size] for any few-seconds window” to mean “I would be very very surprised if they are not profitable”?

The tweet I linked shows that good LLMs can be much cheaper. I am saying that OpenAI is very inefficient and thus economically “cooked”, as the post title will have it. How does this make me FYGM? @froztbyte@awful.systems

3 points

Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.

my god! let me fix that


TechTakes

!techtakes@awful.systems
