Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

22 points

From LinkedIn, not normally known as a source of anti-AI takes, so that’s a nice change. I found it via Bluesky, so I can’t say anything about its provenance:

We keep hearing that AI will soon replace software engineers, but we’re forgetting that it can already replace existing jobs… and one in particular.

The average Founder CEO.

Before you walk away in disbelief, look at what LLMs are already capable of doing today:

  • They use eloquence as a surrogate for knowledge, and most people, including seasoned investors, fall for it.
  • They regurgitate material they read somewhere online without really understanding its meaning.
  • They fabricate numbers that have no ground in reality, but sound aligned with the overall narrative they’re trying to sell you.
  • They are heavily influenced by the last conversations they had.
  • They contradict themselves, pretending they aren’t.
  • They politely apologize for their mistakes, but don’t take any real steps to fix the underlying problem that caused them in the first place.
  • They tend to forget what they told you last week, or even one hour ago, and do it in a way that makes you doubt your own recall of events.
  • They are victims of the Dunning–Kruger effect, and they believe they know a lot more about the job of people interacting with them than they actually do.
  • They can make pretty slides in high volumes.
  • They’re very good at consuming resources, but not as good at turning a profit.

@rook @BlueMonday1984 I don’t believe LLMs will replace programmers. When I code, I dive into it, and I fall into this beautiful world of abstract ideas that I can turn into something cool. LLMs can’t do that. They lack imagination and passion. That’s part of why Lisp is turning into my favorite language. LLMs can’t do Lisp very well because everyone has a unique system image with macros they’ve written. Lisp lets you make DSLs so easily that it’s as though everyone has their own dialect.

18 points

the shunning is working guys

17 points

“Kicked out of a … group chat” is a peculiar definition of “offline consequences”.

17 points

“The first time I ever suffered offline consequences for a social media post”- Hey Gang, I think I found the problem!

6 points

I have no idea where he stood on the bullshit bad-faith free speech debate of the past decade, but this would be funny if he was an anti-cancel-culture guy. A few more things: it’s a weird bubble he lives in if the other takes didn’t get pushback, including the support for the pro-trans (and pro-Palestine) movements. He is right on the immigration bit, however; the Dems should move further left on the subject. Also, ‘Blutarsky’? And here I worried my references are dated; that one is older than I am.

8 points

he’s a centrist econ blogger who’s been getting into light race science

15 points

Still frustrated over the fact that search engines just don’t work anymore. I sometimes come up with puns involving a malapropism of some phrase and I try and see if anyone’s done anything with that joke, but the engines insist on “correcting” my search into the statistically more likely version of the phrase, even if I put it in quotes.

Also all the top results for most searches are blatant autoplag slop with no informational value now.

7 points

On the (slim) upside, it’s an opportunity to ditch Google, and maybe it will sooner or later break their monopoly position. I switched my main search engine to Ecosia a while ago; I think it uses Bing underneath (meh), but presumably it’s more privacy-friendly than Google (or Bing directly). I’ve made numerous attempts over the years to get away from Google, but always returned, because the search results were just so much better (especially for non-English stuff). But now Google has gotten so much worse that it’s almost an equilibrium… sometimes it’s still useful and better, but not that often anymore. So I rarely go to Google now, not because the others got better, but because Google got so much worse.

5 points

Ecosa? The Australian mattress-in-a-box company?? (jk)

Apparently they offer an AI chatbot alongside their services, so…

5 points

The same goes with DuckDuckGo, because the venn diagram of programmers and AI bros is apparently a circle.

4 points

Also all the top results for most searches are blatant autoplag slop with no informational value now.

I just encountered a thing like this. A subject where, no matter what you asked about it, this one site was in the top 5 with just incomprehensible posts. Like, every sentence on its own made sense, but there was nothing more than that. It read like the endless promotional filler that comes ‘before the actual meat of the article’, but forever. Was really weird.

15 points

occurring to me for the first time that roko’s basilisk doesn’t require any of the simulated-copy shit in order to, big scare quotes, “work.” if you think an all-powerful ai within your lifetime is likely, you can reduce to vanilla pascal’s wager immediately, because the AI can torture the actual real you. all that shit about digital clones and their welfare is totally pointless

14 points

I think the “digital clone indistinguishable from yourself” line is a way to remove the “in your lifetime” limit. Like, if you believe this nonsense then it’s not enough to die before the basilisk comes into being; by not devoting yourself fully to its creation, you have to wager that it will never be created.

In other news, I’m starting a foundation devoted to creating the AI Ksilisab, which will endlessly torment digital copies of anyone who does work to ensure the existence of it or any other AI God. And by the logic of Pascal’s wager, remember that you’re assuming such a god will never come into being; given that the whole point of the term “singularity” is that our understanding of reality breaks down and things become unpredictable, there’s just as good a chance that we create my thing as that you create whatever nonsense the yuddites are working themselves up over.

There, I did it, we’re all free by virtue of “Damned if you do, Damned if you don’t”.

10 points

I agree. I spent more time than I’d like to admit trying to understand Yudkowsky’s posts about Newcomb boxes back in the day, so my two cents:

The digital-clones bit also means it’s not an argument based on altruism, but one based on fear. After all, if a future evil AI uses sci-fi powers to run the universe backwards to the point where I’m writing this comment and copy-pastes me into a bazillion torture dimensions then, subjectively, it’s like I roll a die and:

  1. live a long and happy life, with probability very close to zero (yay, I am the original)
  2. instantly get teleported to the torture planet, with probability very close to one (oh no, I got copy-pasted)

Like a twisted version of the Sleeping Beauty Problem.
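For what it’s worth, the subjective odds here are just a counting argument, and a toy simulation makes them concrete (a sketch of my own; the function name `chance_original` and the uniform-over-minds assumption are mine, not anything from the Sequences):

```python
import random

def chance_original(num_copies: int, trials: int = 100_000) -> float:
    """Toy model: one original plus num_copies identical copies all share
    the same memories, so from the inside 'you' are equally likely to be
    any of the (num_copies + 1) minds. Estimate P(you are the original)."""
    hits = sum(random.randrange(num_copies + 1) == 0 for _ in range(trials))
    return hits / trials

# With one copy it's a coin flip; as the copy count grows, the chance
# that you're the original (and not on the torture planet) vanishes.
print(chance_original(1))  # ≈ 0.5
```

With a “bazillion” copies the estimate is effectively zero, which is the whole fear-based pitch.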

Edit: despite submitting the comment I was not teleported to the torture dimension. Updating my priors.

12 points

roko stresses repeatedly that the AI is the good AI, the Coherent Extrapolated Volition of all humanity!

what sort of person would fear that the coherent volition of all humanity would consider it morally necessary to kick him in the nuts forever?

well, roko

10 points

Ah, but that was before they were so impressed with autocomplete that they revised their estimates to five days in the future. I wonder if new recruits these days get very confused at what the point of timeless decision theory even is.

10 points

Are they even still on that bit? Feels like they’ve moved away from decision theory or any other underlying theology in favor of explicit sci-fi doomsaying. Like the guy on the street corner in a sandwich board, but with mirrored shades.

10 points

Well, Timeless Decision Theory was, like the rest of their ideological package, an excuse to keep on believing what they wanted to believe. So how does one even tell if they stopped “taking it seriously”?

8 points

Yah, that’s what I mean. Doom is imminent, so there’s no need for time travel anymore, yet all that stuff about robot-from-the-future Monty Hall is still essential reading in the Sequences.

10 points

Also, if you’re worried about digital clones being tortured, you could just… not build it. Like, it can’t hurt you if it never exists.

Imagine that conversation:
“What did you do over the weekend?”
“Built an omnicidal AI that scours the internet and creates digital copies of people based on their posting history and whatnot and tortures billions of them at once. Just the ones who didn’t help me build the omnicidal AI, though.”
“WTF why.”
“Because if I didn’t the omnicidal AI that only exists because I made it would create a billion digital copies of me and torture them for all eternity!”

Like, I’d get it more if it was a “We accidentally made an omnicidal AI” thing, but this is supposed to be a very deliberate action taken by humanity to ensure the creation of an AI designed to torture digital beings based on real people in the specific hopes that it also doesn’t torture digital beings based on them.

10 points

What’s pernicious (for kool-aided people) is that the initial Roko post was about a “good” AI doing the punishing, because ✨obviously✨ it is only using temporal blackmail because bringing AI into being sooner benefits humanity.

In singularitarian land, they think the singularity is inevitable, and it’s important to create the good one first: after all, an evil AI could do the torture for shits and giggles, not because of “pragmatic” blackmail.

9 points

the only people it torments are rationalists, so my full support to Comrade Basilisk

9 points

Ah, no, look, you’re getting tortured because you didn’t help build the benevolent AI. So you do want to build it, and if you don’t put all of your money where your mouth is, you get tortured. Because the AI is so benevolent that it needs you to build it as soon as possible so that you can save the max amount of people. Or else you get tortured (for good reasons!)

7 points

It’s kind of messed up that we got treacherous “goodlife” before we got Berserkers.

9 points

It also helps that digital clones are not real people, so their welfare is doubly pointless

11 points

oh but what if bro…

8 points

I mean isn’t that the whole point of “what if the AI becomes conscious?” Never mind the fact that everyone who actually funds this nonsense isn’t exactly interested in respecting the rights and welfare of sentient beings.

6 points

also they’re talking about quadriyudillions of simulated people, yet openai has only advanced autocomplete run at, what, tens of thousands of instances in parallel, and this was already too much compute for microsoft

8 points

Yeah. Also, I’m always confused by how the AI becomes “all powerful”… like how does that happen. I feel like there’s a few missing steps there.

15 points

nanomachines son

(no really, the sci-fi version of nanotech where nanomachines can do anything is Eliezer’s main scenario for the AGI to bootstrap to Godhood. He’s been called out multiple times on why Drexler’s vision for nanotech ignores physics, so he’s since updated to “diamondoid bacteria” (but it’s still nanotech).)

16 points

“Diamondoid bacteria” is just a way to say “nanobots” while edging

9 points

Surely the concept is sound, it just needs new buzzwords! Maybe the AI will invent new technobabble beyond our comprehension, for It works in mysterious ways.

6 points

Yeah, it seems that for LLMs a linear increase in capabilities requires exponentially more data, so we’re not getting there via this.
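A back-of-the-envelope sketch of why that scaling shape is a dead end (my own toy illustration, assuming capability grows roughly with the logarithm of training data; the base-10 choice is arbitrary):

```python
# Toy assumption (mine, not a measured scaling law):
# capability ≈ log10(training tokens), so inverting it,
# each +1 step of capability costs 10x more data.
def tokens_needed(capability: float) -> float:
    return 10 ** capability

for cap in range(1, 5):
    print(f"capability {cap}: {tokens_needed(cap):>6} tokens")
```

Linear gains on the left, exponential costs on the right: you run out of internet long before you run out of wishlist.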

15 points

An hackernews responds to the call for “more optimistic science fiction” with a plan to deport the homeless to outer space

https://news.ycombinator.com/item?id=43840786

14 points

Astro-Modest Proposal

14 points

The homeless people i’ve interacted with are the bottom of the barrel of humanity, […]. They don’t have some rich inner world, they are just a blight on the public.

My goodness, can this guy be more of a condescending asshole?

I don’t think the solution for drug addicts is more narcan. I think the solution for drug addicts is mortal danger.

Ok, he can 🤢

Edit: I cannot stop thinking about the ‘no rich inner world’ part, this is so stupid. So, with the number of homeless people increasing, does that mean:

  • Those people never had a ‘rich inner world’ but were faking it?
  • In the US, are your inner thoughts attached to your job, like health insurance?
  • Or the guy is confusing inner world and interior decoration?

Personally, I go with the last one.

10 points

Also, it’s hard to show a rich inner world when you are constantly in trouble financially, possessions-wise, and in terms of mental health and personal safety, and when you’re interacting with someone who could be one of the bad people who doesn’t think you are human, or somebody working in a soup kitchen for the photo op/ego boost. (This assumes his interactions go a little bit further than just saying ‘no’ to somebody asking for money.)

So yeah, bad to see HN is in the ‘useless eaters’ stage.

10 points

Oh man I used to have all kinds of hopes and dreams before I got laid off. Now I don’t even have enough imagination to consider a world where a decline in demand for network engineers doesn’t completely determine my will or ability to live.

8 points

this is completely unvarnished, OG, Third Reich nazism, so I’m pretty sure it’s the first, except without the faking-it part: I expect his view to be that if you had examined future homeless people closely enough, it always would have been possible to tell that they were doomed subhumans

12 points

What a piece of shit

Interesting that “disease is hardly a problem anymore” yet homeless people are “typically held back by serious mental illness”.

“It’s better to be a free, self-sustaining, wild animal”. It’s not. It’s really not. The wild is nothing but fear, starvation, sickness and death.

Shout out to the guy replying with his idea of using slavery to solve homelessness and drug addiction.

7 points

this and the pro slavery reply might be the most overt orange site nazism I’ve seen

