blakestacey
blakestacey@awful.systems
27 posts • 388 comments

Gah. I’ve been nerd-sniped into wanting to explain what LessWrong gets wrong.

There’s a “critique of functional decision theory”… which turns out to be a blog post on LessWrong… by “wdmacaskill”? That MacAskill?!

If you want to read Yudkowsky’s explanation for why he doesn’t spend more effort on academia, it’s here.

spoiler alert: the grapes were totally sour

We have a few Wikipedians who hang out here, right? Is a preprint by Yud and co. a sufficient source to base an entire article on “Functional Decision Theory” upon?

If you go over to LessWrong, you can get some idea of what is possible.

You might think that this review of Yud’s glowfic is an occasion for a “read a second book” response:

Yudkowsky is good at writing intelligent characters in a specific way that I haven’t seen anyone else do as well.

But actually, the word “intelligent” is being used here in a specialized sense to mean “insufferable”.

Take a stereotypical fantasy novel, a textbook on mathematical logic, and Fifty Shades of Grey.

Ah, the book that isn’t actually about kink, but rather an abusive relationship disguised as kink — which would be a great premise for an erotic thriller, except that the author wasn’t sufficiently self-aware to know that’s what she was writing.

Among the leadership of the biggest AI capability companies (OpenAI, Anthropic, Meta, Deepmind, xAI), at least 4/5 have clearly been heavily influenced by ideas from LessWrong.

I’m trying, but I can’t not donate any harder!

The most popular LessWrong posts, SSC posts or books like HPMoR are usually people’s first exposure to core rationality ideas and concerns about AI existential risk.

Unironically the better choice: https://archiveofourown.org/donate

The post:

I think Eliezer Yudkowsky & many posts on LessWrong are failing at keeping things concise and to the point.

The replies: “Kolmogorov complexity”, “Pareto frontier”, “reference class”.

The lead-in to that is even “better”:

This seems particularly important to consider given the upcoming conservative administration, as I think we are in a much better position to help with this conservative administration than the vast majority of groups associated with AI alignment stuff. We’ve never associated ourselves very much with either party, have consistently been against various woke-ish forms of mob justice for many years, and have clearly been read a non-trivial amount by Elon Musk (and probably also some by JD Vance).

“The reason for optimism is that we can cozy up to fascists!”

The collapse of FTX also caused a reduction in traffic and activity of practically everything Effective Altruism-adjacent

Uh-huh.
