“this thing takes more time and effort to process queries, but uses the same amount of computing resources” <- statements dreamed up by the utterly deranged.
Kay mate, rational thought 101:
When the setup is "we run each query multiple times", the default position is that it costs more resources. If you claim they use roughly the same amount, you need to substantiate that claim.
Like, that sounds like a pretty impressive CS paper: "we figured out how to run inference N times but pay roughly the cost of one" is a hell of an abstract.
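To put rough numbers on the default position: here's a back-of-envelope sketch (every figure below is made up for illustration) of why N inference passes cost roughly N times one pass. Even with the one real discount available, reusing the prompt's KV cache across completions, you still pay for N full generations:

```python
# Back-of-envelope: cost of answering one query with N inference passes.
# All numbers are hypothetical; real per-token costs vary by model/provider.

def inference_cost(prompt_tokens, output_tokens, cost_per_token=1.0):
    """Crude cost model: every token processed or generated costs the same."""
    return (prompt_tokens + output_tokens) * cost_per_token

single = inference_cost(prompt_tokens=500, output_tokens=500)

# "Run each query multiple times" (best-of-N / self-consistency style):
# the prompt *can* be shared via KV-cache reuse, but each of the N
# completions still has to be generated in full.
N = 8
naive = N * single
with_shared_prompt = inference_cost(500, 0) + N * inference_cost(0, 500)

print(single)                        # -> 1000.0 (cost units for one pass)
print(naive / single)                # -> 8.0   (naive N passes: exactly N x)
print(with_shared_prompt / single)   # -> 4.5   (still several x, nowhere near ~1x)
```

Which is the whole point: prompt sharing shaves the constant, not the multiplier. Getting the ratio down to ~1 would indeed be a hell of an abstract.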
and hot young singles in your area have a bridge in Brooklyn to sell
on the blockchain
I’m sure it being so much better is why they charge 100x more for the use of this than they did for 4ahegao, and that it’s got nothing to do with the well-reported gigantic hole in their cashflow, the extreme costs of training, the likely case that this is yet more stacked GPT3s (implying more compute in aggregate per use), the need to become profitable, or anything else like that. Nah, gotta be how much better the new model is.
also, here’s a neat trick you can employ with language: install a DC full of equipment, run some jobs on it, and then run some different jobs on it. same amount of computing resources! amazing! but note how this says absolutely nothing about the quality of the job outcomes, the durations, etc.
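to make the trick concrete with made-up numbers: the building's budget is fixed no matter what you schedule on it, so "same computing resources" is a statement about the building, not about what each query costs (all figures below are hypothetical):

```python
# A fixed datacenter burns the same GPU-hours per day whatever runs on it.
# "Same computing resources" describes the facility, not the per-query cost.
# All numbers are hypothetical.

DATACENTER_GPU_HOURS_PER_DAY = 24_000  # fixed, regardless of workload

old_model_queries_per_gpu_hour = 100   # hypothetical: one pass per query
new_model_queries_per_gpu_hour = 10    # hypothetical: many passes per query

# Per-query cost in GPU-seconds (3600 GPU-seconds per GPU-hour):
old_cost = 3600 / old_model_queries_per_gpu_hour  # -> 36.0
new_cost = 3600 / new_model_queries_per_gpu_hour  # -> 360.0

# Both workloads "use" the whole datacenter...
print(DATACENTER_GPU_HOURS_PER_DAY)   # same either way
# ...but one serves 10x fewer queries for that same fixed budget.
print(new_cost / old_cost)            # -> 10.0
```

same resources, wildly different cost per outcome. which is exactly what the sentence construction is hiding.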