Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
Oh no. Kurzgesagt just published a full-on TREACLES piece.
https://www.youtube.com/watch?v=fa8k8IQ1_X0
These are the sources they cited: https://sites.google.com/view/sources-superintelligence/
Open Philanthropy is a sponsor of kurzgesagt. The foundation is supporting academic work across the field of Artificial Intelligence, and some of the sources used to create this script (from OpenAI, Future of Humanity Institute, Machine Intelligence Research Institute, Future of Life Institute and Epoch AI) also receive financial support from Open Philanthropy.
Open Philanthropy had no influence on the content and messages of this video.
I’m sure!
They’ve given me a strange vibe for a while. I suspected they might be in the TREACLES sphere, so I guess at least I finally have confirmation.
Also, the amount of “ChatGPT is basically AGI already” people in the comments is alarming.
They prob should do a video on this effect as shown by the early ELIZA experiments: even the smartest people could get fooled by a dumb program. Doubt they will, though, if they’re into EA stuff.
Some ‘scientists’ cough Yud/Rob/Kokotajlo cough believe in FOOM. Anyways, let’s not question this fundamental assumption so we can engage in fear baiting and mental masturbation for the remainder of our show. It’s bonkers that people keep citing Kokotajlo as an AI researcher; like, I have serious doubts this man knows what a computer is. Pretty good at grifting though. Also, why is Rob Miles still listed as a PhD student? Like, can’t he hurry up and fuckin graduate already? Christ.
As an aside, I remember watching a PBS Space Time and seeing “sponsored by Open Philanthropy” (or some other EA organization) and I was like, no, not my beloved PBS!! I know how it feels :(
Fwiw, this is also why I -do- think it’s important to talk more frankly about where science is moving, à la things like FEP or scale-free dynamics. An alternative story about what things like energy, computation, and participation really mean is useful, not for prescribing the future, but the opposite: putting ambiguity and the importance of participation back into it.
The current world view, that somehow things are cleanly separated into nice little ontological boxes of capability and shape and form, leads to closed-system delusions. It’s fragile and we know it, I hope. Von Neumann’s “last invention” is wrong because, unfortunately, most “smart people’s” view of intelligence has become reductive in lieu of a bigger picture.
In addition to our sneers, we should want to tell a more robust story about all of these things.
@hrrrngh @froztbyte I was peeved about this, but was already starting to get sus because they’ve previously touched on Effective Altruism positively (without using the term) and seem uncomfortably longtermist, which really does concern me given the figures and philosophies involved.
This is extra crap because I’d already introduced the channel to younger family members as a learning resource that I wouldn’t have to fact-check to death, but apparently noooo.
Idea: a Pivot to AI video series hosted by an avatar that’s, like, a talking polyhedron in the style of Mind’s Eye/Body Wars era CGI.
This would require effort and thus is a terrible idea, but I find the mental image amusing.
using Hatsune Miku but with my voice would save on having to maintain a video setup