Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking about redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this, and happy new year in advance.)
a reply from a mastodon thread about an instance of AI crankery:
Claude has a response for ya. “You’re oversimplifying. While language models do use probabilistic token selection, reducing them to “fancy RNGs” is like calling a brain “just electrical signals.” The learned probability distributions capture complex semantic relationships and patterns from human knowledge. That said, your skepticism about AI hype is fair - there are plenty of overinflated claims worth challenging.” Not bad for a bucket of bolts ‘rando number generator’, eh?
maybe I’m late to this realization because it’s a very stupid thing to do, but a lot of the promptfondlers who come here regurgitating this exact marketing fluff, swearing they know exactly how LLMs work when they obviously don’t, really are just asking the fucking LLMs, aren’t they?
Not bad for a bucket of bolts ‘rando number generator’, eh?
Because… because it generated a plausible-looking sentence? Do… do you think the “just electrical signals” bit is clever or creative?
Here’s an LLM performance test that I call the Elon Test: does the sentence look like it could plausibly have been said by Elon Musk? Yes? Then your thing is stupid and a failure.
That first post. They are using LLMs to create quantum-resistant cryptosystems? Eyelid twitch.
E: also, as I think cryptography is the only part of CS which really attracts cranks, this made me realize how much worse science crankery is going to get due to LLMs.
I think cryptography is the only part of CS which really attracts cranks
every once in a while we get a “here is a compression scheme that works on all data, fuck you and your pigeons” but yeah i think this is right
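The quip above alludes to the pigeonhole-principle argument for why a compression scheme that shrinks *all* data is impossible. A minimal sketch of the counting involved (the function names here are my own illustration, not anything from the thread):

```python
# Pigeonhole argument: there are 2**n bitstrings of length n, but only
# 2**n - 1 bitstrings of length strictly less than n. An injective
# (lossless) compressor therefore cannot map every length-n input to a
# shorter output; at least one input must not shrink.

def strings_of_length(n: int) -> int:
    """Number of distinct bitstrings of exactly length n."""
    return 2 ** n

def strings_shorter_than(n: int) -> int:
    """Number of distinct bitstrings of length 0..n-1 (sums to 2**n - 1)."""
    return sum(2 ** k for k in range(n))

n = 8
# 256 possible inputs, only 255 possible shorter outputs: no room for all.
assert strings_of_length(n) > strings_shorter_than(n)
```

Hence every “works on all data” scheme is either lossy, or quietly grows some inputs.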
there’s unfortunately a lot of cranks around lambda calculus and computability (specifically check out the Wikipedia article on hypercomputation and start chasing links; you’re guaranteed to find at least one aggressive crank editing their favorite grift into the less watched corners of the wiki), and a lot of them have TESCREAL roots or some ties to that belief cluster or to technofascism, because it’s much easier to form a computer death cult when your idea of computation is utterly fucked
As self and khalid_salad said, there are certainly other branches of CS that attract cranks. I’m not much of a computer scientist myself but even I have seen some 🤔-ass claims about compilers, computational complexity, syntactic validity of the entire C programming language (?), and divine approval or lack thereof of particular operating systems and even the sorting algorithms used in their schedulers!
I thought those non-crypto cranks were relatively rare, which is why I added the “really” part. There has been only one TempleOS, after all. And cryptography (crypto too, but that attracts more financial cranks) has that ‘this will be revolutionary’ feeling which cranks seem to love, while also feeling accessible (compared to complexity theory, which you usually only know about if you already know some CS). I didn’t mean there are no cranks/weird-ass claims about the whole field, but I’d think that cryptography attracts the lion’s share. The lambda calculus bit downthread might prove me wrong, however.
I still need to finish that FPGA Krivine machine because it’s still living rent-free in my head and will do so until it’s finally evaluating expressions, but boy howdy fuck am I not looking forward to the cranks finding it
a non-zero amount of the time, yeah
also, that poster’s profile, holy fuck. even just the About is a trip
Wow, how is every post somehow weird and off-putting? And lol at ‘im seeing evidence the voting public was HACKED! (emph mine)’ followed a few moments later by ‘anybody know some big 5 webscrape API coders? I need them for evidence gathering’. The delightful pattern of crankery where there is a big sweeping new idea that nobody else has seen, plus no actual ability in a technical field.
Wow, how is every post somehow weird and off-putting?
just an ordinary mastodon poster, doing the utterly ordinary thing of fedposting in every thread started by a popular leftist account, calling “their wing” a bunch of cowards for not talking in public about doing acts of stochastic violence, and pondering why they don’t have more followers
Right, well God says:
meditated exude faithful estimate nature message glittering indiana intelligences dedicate deception ruinous asleep sensitive plentiful thinks justification subjoinedst rapture wealthy frenzied release trusting apostles judge access disguising billows deliver range
Not bad for the almighty creator ‘rando number generator’, eh?
as an amuse-bouche for the horrors that will follow this year, please enjoy this lobste.rs user reaching end-stage meltdown after going full Karen at someone who agrees with a submitted post saying LLMs are a dead end when it comes to AI.
https://lobste.rs/s/lgqwje/does_current_ai_represent_dead_end#c_tefto4
Thankfully, accusing someone of being a crapto promoter is seen as an attack that is beyond the pale.
Highlights from the rest of the thread include bemoaning the lack of a downvote button for registering disapproval:
https://lobste.rs/s/lgqwje/does_current_ai_represent_dead_end#c_ft9mpj
unilaterally deciding to reply multiple times to one comment, which necessitated adding a meta comment with hyperlinks
https://lobste.rs/s/lgqwje/does_current_ai_represent_dead_end#c_jjk5ei
And of course is a MoreWronger (moroner?)
Lol, of course they think they are civil and other people are pushing nasty rhetoric. Quite the sealion vibe.
Wonder if they even notice how much communication weirdness they themselves used, with the emphasis on emotionally laden language. (They didn’t use bold, so I can’t call it crank capitalization, more like crank cursive. A big deal for me! ;) )
Anyway, the questioning of “how do you know this is why there is no downvoting” shows the type of person they are. (And it’s quite the annoying Rationalist behavior: suddenly they demand excessive sourcing for small remarks from people they disagree with.)
I just got a hit of esprit d’escalier, and wished I’d replied to this
But the road to Hacker News is paved with good intentions.
with
So too is the road to Roko’s Basilisk.
I’m increasingly convinced that this person is in a dark place mentally, and am fighting an internal battle between poking them for the lulz and just ignoring them.
https://lobste.rs/s/lgqwje/does_current_ai_represent_dead_end#c_cyrxm4
(I’ve seen this behavior on lobste.rs before and I think sometimes people literally get banned for their own good)
Edit: bored on a train, so I did the math. In the comment thread, this user has made 30% of the comments by count and 20% by “volume” (basically number of bytes of plaintext).
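A quick sketch of the tally described above, with made-up comments standing in for the real thread (the 30%/20% figures are the commenter’s measurements; the data and names below are purely illustrative):

```python
# Compute one user's share of a thread by comment count and by plaintext bytes.
def comment_share(comments: dict[str, list[str]], user: str) -> tuple[float, float]:
    """Return (share by count, share by UTF-8 plaintext bytes) for one user."""
    total_count = sum(len(c) for c in comments.values())
    total_bytes = sum(len(t.encode()) for c in comments.values() for t in c)
    user_count = len(comments[user])
    user_bytes = sum(len(t.encode()) for t in comments[user])
    return user_count / total_count, user_bytes / total_bytes

# Hypothetical thread: 3 of 10 comments from one user.
thread = {
    "meltdown_guy": ["short reply"] * 3,
    "everyone_else": ["a considerably longer comment body"] * 7,
}
by_count, by_bytes = comment_share(thread, "meltdown_guy")  # 0.3 by count
```

Counting bytes rather than characters matters once people start posting emoji-laden rants.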
LLMs continue to be so good and wagmi that they’ve progressed to the serving ads part of the extractivist SaaS lifecycle
Nobody outside the company has been able to confirm whether the impressive benchmark performance of OpenAI’s o3 model represents a significant leap in actual utility or just a significant gap in the value of those benchmarks. However, they have released information showing that the ostensibly most powerful model costs orders of magnitude more to run. The lede is in that first graph, which shows that, for whatever performance gain, o3 costs over ~$10 per request, with the headline-grabbing version costing ~$1,500 per request.
I hope they’ve been able to identify a market willing to pay out the ass for performance that, even if it somehow isn’t overhyped, is roughly equivalent to an average college graduate.
if all of that $1500 cost is electricity, then at an arbitrarily chosen (but probably high) electricity price of $0.20/kWh, that’s 7.5 MWh per request, and it could easily be twice that. this is approximately how much electricity four 4-person households in Poland consume in a year, or about half what an American one uses. six tons of TNT equivalent, or almost 2/3 of a ton of oil equivalent if you prefer
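The arithmetic above checks out; here it is spelled out (the $1500/request and $0.20/kWh inputs are the commenter’s assumptions, not measured values):

```python
# Back-of-the-envelope: energy implied by a $1500 request if the whole cost
# were electricity at $0.20/kWh, plus rough equivalents in TNT and oil.
cost_per_request_usd = 1500.0
electricity_price_usd_per_kwh = 0.20

energy_kwh = cost_per_request_usd / electricity_price_usd_per_kwh  # 7500 kWh
energy_mwh = energy_kwh / 1000                                     # 7.5 MWh

# Standard conversions: 1 MWh = 3.6 GJ; 1 ton TNT = 4.184 GJ;
# 1 tonne of oil equivalent (toe) ≈ 11.63 MWh.
tnt_tons = energy_mwh * 3.6 / 4.184   # ≈ 6.5 tons of TNT
toe = energy_mwh / 11.63              # ≈ 0.64 toe, i.e. almost 2/3 of a ton
```

So the “six tons of TNT” and “2/3 ton of oil” figures both follow directly from the $1500-is-all-electricity assumption.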
I’m wondering about the benchmark too. It’s way above my level to figure out how it can be gamed. But, buried in the article:
Moreover, ARC-AGI-1 is now saturating – besides o3’s new score, the fact is that a large ensemble of low-compute Kaggle solutions can now score 81% on the private eval.
The most expensive o3 version achieved 87.5%.