Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

8 points

The Columbia Journalism Review does a study and finds the following:

  • Chatbots were generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.
  • Premium chatbots provided more confidently incorrect answers than their free counterparts.
  • Multiple chatbots seemed to bypass Robot Exclusion Protocol preferences (see the sketch after this list).
  • Generative search tools fabricated links and cited syndicated and copied versions of articles.
  • Content licensing deals with news sources provided no guarantee of accurate citation in chatbot responses.
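
For anyone unfamiliar with the Robot Exclusion Protocol finding above: robots.txt is purely advisory, so a crawler “bypasses” it simply by never checking it. Here’s a minimal sketch, using only Python’s standard-library robotparser, of what a well-behaved crawler is supposed to do. The site URL and article path are hypothetical stand-ins for a news publisher; GPTBot, PerplexityBot, and ClaudeBot are real crawler user-agent tokens:

```python
# Minimal sketch of honoring robots.txt with the Python standard library.
# The site and path below are hypothetical stand-ins for a news publisher.
from urllib import robotparser

SITE = "https://example-news-site.com"  # hypothetical publisher

rp = robotparser.RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()  # fetches and parses the site's robots.txt

for bot in ("GPTBot", "PerplexityBot", "ClaudeBot"):
    allowed = rp.can_fetch(bot, f"{SITE}/2025/some-article")
    print(f"{bot}: {'may fetch' if allowed else 'disallowed'}")
```

Nothing enforces the result of can_fetch; compliance is entirely on the crawler’s honor, which is exactly what the CJR study suggests is failing.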
5 points

this article will most likely be how I (hopefully very rarely) start off conversations about rationalism in real life should the need once again arise (and somehow it keeps arising, thanks 2025)

but also, hoo boy what a painful talk page

5 points

it’s not actually any more painful than any wikipedia talk page, it’s surprisingly okay for the genre really

remember: wikipedia rules exist to keep people like this from each other’s throats, no other reason

4 points

that’s fair, and I can’t argue with the final output

9 points

the btb zizians series has started

surprisingly it’s only 4 episodes

12 points

On one hand: all of this stuff entering greater public awareness is vindicating, i.e. I knew about all this shit before so many others, I’m so cool

On the other hand: I want to stop being right about everything please, please just let things not become predictably worse

8 points

I maintain that our militia ought to be called the Cassandra Division

3 points

Even just The Cassandras would work well (that way all the weird fucks who are shitty about gender would hate the name even more)

8 points

David Gborie! One of my fave podcasters and podcast guests. Adding this to the playlist

23 points

was discussing with my therapist a miserable AI-related gig job I’d tried out. doomerism came up, and I was forced to explain rationalism to him. I would prefer that all topics I have ever talked to any of you about be irrelevant to my therapy sessions

20 points

Regrettably I think that the awareness of these things is inherently the kind of thing that makes you need therapy, so…

13 points

Sweet mother of Roko it’s an infohazard!

I never really realized that before.

7 points

I’ve been beating this dead horse for a while (since July of last year, AFAIK), but it’s clear to me that the AI bubble’s done horrendous damage to the public image of artificial intelligence as a whole.

Right now, using AI at all (or even claiming to use it) will earn you immediate backlash/ridicule under most circumstances, and AI as a concept is viewed with mockery at best and hostility at worst, a trend I expect will last for a good while after the bubble pops.

To beat a slightly younger dead horse, I also anticipate AI as a concept will die thanks to this bubble, with its utterly toxic optics as a major reason why. With relentless slop, nonstop hallucinations and miscellaneous humiliation (re)defining how the public views and conceptualises AI, I expect any future AI systems will be viewed as pale imitations of human intelligence, theft-machines powered by theft, or a combination of the two.

11 points

Right now, using AI at all (or even claiming to use it) will earn you immediate backlash/ridicule under most circumstances, and AI as a concept is viewed with mockery at best and hostility at worst

it’s fucking wild how PMs react to this kind of thing; the general consensus seems to be that the users are wrong, and that whichever awful feature they’re working on will surely “break through all that hostility”, provided users are forced (via the darkest patterns imaginable) to use the feature said PM is trying to boost their metrics for.

