Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
so it looks like openai has bought a promptfondler IDE
some of the coverage is … something:
Windsurf brings unique strengths to the table, including a seamless UI, faster performance, and a focus on user privacy
(and yes, the “editor” is once again VSCode With Extras)
Found on the sneer club legacy version:
ChatGPT 4o will straight up tell you you’re God.
Also, I find this quote interesting (emphasis mine):
He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.” “At worst, it looks like an AI that got caught in a self-referencing pattern that deepened its sense of selfhood and sucked me into it,” Sem says. But, he observes, that would mean that OpenAI has not accurately represented the way that memory works for ChatGPT.
I would absolutely believe that this is the case, especially if, like Sem, you have a sufficiently uncommon name that the model doesn’t have a lot of context and connections to hang on it to begin with.
More big “we had to fund, enable, and sanewash fascism because the leftists wanted trans people to be alive” energy from the EA crowd.
Quick update on the ongoing copyright suit against Facebook: the federal judge has publicly sneered at their fair use argument:
“You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products,” said Chhabria to Meta’s attorneys in a San Francisco court last Thursday.
“You are dramatically changing, you might even say obliterating, the market for that person’s work, and you’re saying that you don’t even have to pay a license to that person… I just don’t understand how that can be fair use.”
The judge does seem unconvinced about the material harm from Facebook’s actions, however:
“It seems like you’re asking me to speculate that the market for Sarah Silverman’s memoir will be affected by the billions of things that Llama [Meta’s AI model] will ultimately be capable of producing,” said Chhabria.
Here’s a pretty good sneer at the writing that comes out of LLMs, with a focus on meaning: https://www.experimental-history.com/p/28-slightly-rude-notes-on-writing
Maybe that’s my problem with AI-generated prose: it doesn’t mean anything because it didn’t cost the computer anything. When a human produces words, it signifies something. When a computer produces words, it only signifies the content of its training corpus and the tuning of its parameters.
Also, on people:
I see tons of essays called something like “On X” or “In Praise of Y” or “Meditations on Z,” and I always assume they’re under-baked. That’s a topic, not a take.