blakestacey
The big claim is that R1 was trained on far less computing power than OpenAI’s models, at a fraction of the cost.
And people believe this … why? I mean, shouldn’t the default assumption about anything anyone in AI says be that it’s a lie?
This seems like an apt point to share Maxwell Neely-Cohen’s “Century-Scale Storage”.
I asked ChatGPT, the modern apotheosis of unjustified self-confidence, to prove that .999… is less than 1. Its reply began “Here is a proof that .999… is less than 1.” It then proceeded to show (using familiar arguments) that .999… is equal to 1, before majestically concluding “But our goal was to show that .999… is less than 1. Hence the proof is complete.” This reply, as an example of brazen mathematical non sequitur, can scarcely be improved upon.
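For anyone who wants the familiar argument spelled out (this is the standard algebraic version; I'm assuming it's the sort of manipulation ChatGPT reproduced, not quoting its exact output):

    x = 0.999…
    10x = 9.999…
    10x − x = 9.999… − 0.999… = 9
    9x = 9, so x = 1

Every step establishes equality, which is what makes tacking "hence .999… is less than 1" onto the end such a perfect non sequitur.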
brb, saving copies of physics and math books before they go offline