Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week’s thread

(Semi-obligatory thanks to @dgerard for starting this - this one was a bit late, I got distracted)

16 points

Starting things off with a fresh post from Brian Merchant: Tech under Trump, part 1

8 points

Sidenote: Love how the tech VCs all grew up in the media landscape of tech workers going ‘the management of this company is a group of idiots’ and then didn’t think that would apply to themselves.

8 points

The classic Scott Adams manœuvre.

7 points

I woke up and immediately read about something called “Defense Llama”. The horrors are never ceasing: https://theintercept.com/2024/11/24/defense-llama-meta-military/

Scale AI advertised their chatbot as being able to:

apply the power of generative AI to their unique use cases, such as planning military or intelligence operations and understanding adversary vulnerabilities

However, their marketing material, as is tradition, includes an example of terrible advice. Which is not great given it’s about blowing up a building “while minimizing collateral damage”.

Scale AI’s response to the news pointing this out – complaining that everyone took their murderbot marketing material seriously:

The claim that a response from a hypothetical website example represents what actually comes from a deployed, fine-tuned LLM that is trained on relevant materials for an end user is ridiculous.

4 points

On the one hand, that spectacular failure could potentially dissuade the military from buying in and prolonging this bubble. On the other hand, having an accountability sink for war crimes would be a tempting offer to your average army.

13 points

The eventual war crimes trials will very likely reveal that “AI targeting” has already been used as an accountability sink for a premeditated ethnic cleansing policy in Gaza.

1 point

I’ve been wondering about this

On the one hand, military procurement (at least afaik) tends toward complete, functional products

On the other hand, military R&D programs have been among the most spectacularly profligate financial black holes in recent decades

None of the options involved feel great, even if “it gets shunted from mil procurement and all industry claims get publicly branded as the bullshit they are” comes to pass (which tbh still feels like an optimistic outcome, with unclear time horizons)

4 points

I mean, it fits into the pattern of procurement projects that aren’t allowed to fail despite having had serious coherence issues starting at the design stage. Though the military is usually less prone to the “problem in search of a solution” dynamic that VCs fall into, once a project gets started it can shamble forward as a zombie for years before anyone finds the political will to kill it.

20 points

Police are openly admitting to using chatGPT to hallucinate reports. I’m sure they were before, but now they’re comfortable enough to admit to it.

Nothing could possibly go wrong with this. Nothing at all.

7 points

oh of course it’s fucking Axon

11 points

great news for lawyers hopefully

3 points

I wonder if we’re going to see Baldur Bjarnason or Emily Bender tapped as expert witnesses in the not-too-distant future.

11 points

All

Coppers

Are

Bots

13 points

The promptfans testing OpenAI Sora have gotten mad that it’s happening to them and (temporarily) leaked access to the API.

https://techcrunch.com/2024/11/26/artists-appears-to-have-leaked-access-to-openais-sora/

“Hundreds of artists provide unpaid labor through bug testing, feedback and experimental work for the [Sora early access] program for a $150B valued [sic] company,” the group, which calls itself “Sora PR Puppets,” wrote in a post …

“Well, they didn’t compensate actual artists, but surely they will compensate us.”

“This early access program appears to be less about creative expression and critique, and more about PR and advertisement.”

OK, I could give them the benefit of the doubt: maybe they’re new to the GenAI space, or the general ML space … or IT.

But I’m not going to. Of course it’s about PR hype.

9 points

I’d say lol but I’m like 72% sure this is straight out of the video game industry’s playbook and very much intentional to create hype because everyone has forgotten this shit even exists.

Also, I’m still waiting for just one use case for video-generating autoplag that is, even in theory, not either morally reprehensible or outright criminal.

4 points

currently in vc delusion, the public just doesn’t understand how to move about efficiently

the levels of not-even-wrong from these dipshits continue to be astounding

12 points

Also, this plan very much has a “fuck disabled people and old people” factor. And what a lonely world they live in.

3 points

If you asked people what they wanted, they would say a car that drives itself

2 points

Is that a Henry Ford reference? Very clever lol

4 points

If I mathed right, that’d be one waymo every 350 feet of road on average. Is that a lot? It sounds like it might be a lot. Especially since self-driving cars’ greatest weakness appears to be driving in the vicinity of other self-driving cars.

5 points

Well, if my math is right, on a 50km/hr road you’d see one about every 8 seconds.
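For what it’s worth, the arithmetic above does check out. A quick sketch (assuming the ~350 ft spacing from the parent comment and a 50 km/h road, both rough estimates):

```python
# Sanity check: how often would you pass a parked/driving waymo
# if they're spaced ~350 ft apart and you're moving at 50 km/h?
# Both input figures are rough estimates from the thread.
FEET_PER_METER = 3.28084

spacing_m = 350 / FEET_PER_METER      # ~106.7 m between cars
speed_mps = 50 * 1000 / 3600          # 50 km/h ~= 13.9 m/s
seconds_between_cars = spacing_m / speed_mps
print(round(seconds_between_cars, 1))  # ~7.7 s, i.e. roughly one every 8 seconds
```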

1 point

I think the idea is to solve that by networking all the self-driving cars together. I’m sure the long history of trying to get vendors to agree on a standard when they all benefit individually from the lock-in of proprietary systems has nothing to teach us about this prospect.

9 points

Complexity-theory-, security-, and latency-wise this sounds like a great plan. Can’t wait for people to be stuck in cars for days because the freeway offramps are causing livelocks (like the example of the waymo cars all honking at each other in the parking lots).

Wonder if they are going to use the routing solutions used in TCP and then discover that cars are heavier and slower than data, suddenly wasting a lot of people’s time and money.

E: one small detail, which I don’t know whether other countries also have: in the Dutch traffic system, emergency services and buses (and perhaps a few hackers who really want to be in trouble with the law, though I’ve always heard this described as a ‘this exists, but we don’t mess with it’ system) can get priority at traffic lights, so the lights turn green for them faster. Wonder whether other countries have this, and how much they realize it will not work for waymo systems.

5 points

other than interop, the big problem I have with this is security. car modding for performance is already a big thing, and a car mod that makes other cars slow down, stop, get out of your way, or otherwise malfunction would be incredibly popular with assholes of all varieties, and car modding has many. the current state of automotive is that security is a fucking shitshow, but I can’t figure out any kind of security model for this that isn’t vulnerable to a wide variety of obvious attacks. even a perfect inter-vendor attestation chain (good fucking luck) is vulnerable to hooking an ECU (or whatever the ruggedized monitoring microcontroller unit for a magic self-driving EV is) and radio up to a variety of fake sensors and crafting inputs such that the thing starts transmitting “wait no stop here” signals to all the surrounding cars

but then again, all of this is probably intentional because it creates a privileged class of people who can afford to fuck with self-driving car networking and not worry about any associated fines, and an unprivileged class who just have to put up with everything being so much worse. in a world where you can roll smoke into a Subway with relatively few consequences (not to mention all the other horseshit Truck Guys get away with), it’s not a hard outcome to imagine.

7 points

here’s a thought. what if we just stacked every building on top of each other and had the cars drive vertically along the outside. then you wouldn’t need roads at all
