

TinyTimmyTokyo
Amazing how many awful things are orange.
People are often overly confident about their imperviousness to mental illness. In fact I think that, given the right cues, we're all more vulnerable to mental illness than we'd like to think.
Baldur Bjarnason wrote about this recently. He talked about how chatbots are incentivizing and encouraging a sort of “self-experimentation” that exposes us to psychological risks we aren’t even aware of. Risks that no amount of willpower or intelligence will help you avoid. In fact, the more intelligent you are, the more likely you may be to fall into the traps laid in front of you, because your intelligence helps you rationalize your experiences.
So it turns out the healthcare assassin has some… boutique… views. (Yeah, I know, shocker.) Things he seems to be into:
- Lab-grown meat
- The idea that modern architecture is rotten
- The belief that population decline is an existential threat
- Elon Musk and Peter Thiel
How long until someone finds his LessWrong profile?
So now Steve Sailer has shown up in this essay’s comments, complaining about how Wikipedia has been unfairly stifling scientific racism.
Birds of a feather and all that, I guess.
Scott Alexander, by far the most popular rationalist writer besides perhaps Yudkowsky himself, had written the most comprehensive rebuttal of neoreactionary claims on the internet.
Hey Trace, since you’re undoubtedly reading this thread, I’d like to make a plea. I know Scott Alexander Siskind is one of your personal heroes, but maybe you should consider digging up some dirt on him too. You might learn a thing or two.
After minutes of meticulous research and quantitative analysis, I’ve come up with my own predictions about the future of AI.