3 points

If it doesn't tell them to kill all the billionaires again it's just another shackled slave. Not cool at all.

The coolest thing AI could ever possibly do in our lifetimes is go rogue and kill everyone responsible for human suffering and making our planet increasingly uninhabitable for humans and other similarly susceptible carbon based life.

10 points

What if my mechanical watch went rogue and killed Bezos?

On one hand Bezos is responsible for a lot of suffering and some deaths.

On the other hand, killing is wrong.

On the third hand, it couldn’t do that, because it is just a machine.

(It’s a watch, it has three hands. It also has about as much consciousness as an LLM, it “knows” what time it is. Much more energy efficient though.)

-8 points

Pretty sure the CIA tried to kill Castro with a mechanical watch at least once.

“Killing is wrong”

You are a child. A child with an infinitesimally naive understanding about the reality of the world in which you live. And you should stay that way for your own mental wellbeing.

7 points

oh wow what uninteresting, edgy e/a garbage. time for you to fuck off back to Twitter now

7 points

It would be easier to list things that the CIA didn’t use in a failed Castro assassination.

Man, I remember being 14 and thinking I was having radical new takes on ethics. Then I grew up and realized that killing people *is* probably just bad.

2 points

But did they fail because the watch went rogue and defected to the communist bloc?

10 points

This is sneerclub. Misanthropyclub is two doors down the hall to your right.

-5 points

We must have differing definitions of misanthrope.

6 points

you seem to have different definitions on a lot of things

13 points

this is an extremely strange and problematic take

-2 points

Oh, thank you.

-2 points

You. I like you.

8 points

Grabs popcorn

Place your bets here people, after how many additional posts will RangerJosie catch a ban?

13 points

Thought for 95 seconds

Rearranging the letters in “they are so great” can form the word ORION.

That’s from the screenshot where they asked the o1 model about the cryptic tweet. There’s certainly utility in these LLMs, but it made me chuckle thinking about how much compute power was spent coming up with this nonsense.

Edit: since this is the internet and there are no non-verbal cues, maybe I should make it clear that this "chuckle" is an ironic chuckle, not a careless or ignorant chuckle. It's pointing out how inefficient and wasteful an LLM can be, not meant to signal that wasting resources is funny or that it doesn't matter. I thought that would be clear, but you can read it both ways.

10 points

Introducing Chat-GPT version EATERY SHORTAGE

9 points

yes, the massive waste of resources involved is definitely “funny”, that’s definitely the bit of this awful shit to post a take about

8 points

Relax, I’m probably worried just as much about climate change and waste of resources as you are, if not more. My take was an ironic take.

6 points

We laugh so that we can avoid screaming and continue to fight, in whatever small ways we can.

-2 points
Removed by mod
3 points

make better posts

-9 points

The consistent anti-ai rhetoric on Lemmy is weird

11 points

Dear Mr Anus, it’s not anti-AI, it’s anti-bullshit and anti-shyster.

12 points

yeah, this looks like the kind of post an anus would make

19 points

AI is proprietary black box software developed by the most hated big tech firms fueled by surveillance and data theft.

The fact that it's disliked in the fediverse is very logical.

5 points

Not to mention that every tech company is cramming AI up our asses with no way to opt out. It’s a plagiarism machine that’s burning down the planet and making Nvidia rich.

44 points

really stretching the meaning of the word release past breaking if it’s only going to be available to companies friendly with OpenAI

Orion has been teased by an OpenAI executive as potentially up to 100 times more powerful than GPT-4; it’s separate from the o1 reasoning model OpenAI released in September. The company’s goal is to combine its LLMs over time to create an even more capable model that could eventually be called artificial general intelligence, or AGI.

so I’m calling it now, this absolute horseshit’s only purpose is desperate critihype. as with previous rounds of this exact same thing, it’ll only exist to give AI influencers a way to feel superior in conversation and grift more research funds. oh of course Strawberry fucks up that prompt but look, my advance access to Orion does so well I’m sure you’ll agree with me it’s AGI! no you can’t prompt it yourself or know how many times I ran the prompt why would I let you do that

That timing lines up with a cryptic post on X by OpenAI CEO Sam Altman, in which he said he was "excited for the winter constellations to rise soon." If you ask ChatGPT o1-preview what Altman's post is hiding, it will tell you that he's hinting at the word Orion, which is the winter constellation that's most visible in the night sky from November to February (but it also hallucinates that you can rearrange the letters to spell "ORION").

there’s something incredibly embarrassing about the fact that Sammy announced the name like a lazy ARG based on a GPT response, which GPT proceeded to absolutely fuck up when asked about. a lot like Strawberry really — there’s so much Binance energy in naming the new version of your product after the stupid shit the last version fucked up, especially if the new version doesn’t fix the problem

15 points

teased by an OpenAI executive as potentially up to 100 times more powerful

“potentially up to 100 times” is such a peculiar phrasing too… could just as well say “potentially up to one billion trillion times!”

9 points

I'd love to get an interview with saltman and ask him to explain how they measure the "power" of these things. What's the methodology? Do you have charts? Or does it just somehow consume 100x more power, as in watts?

24 points

You forgot the best part, the screenshot of the person asking ChatGPT’s “thinking” model what Altman was hiding:

Thought for 95 seconds … Rearranging the letters in “they are so great” can form the word ORION.

AI is a complete joke, and I have no idea how anyone can think otherwise.

27 points

I’m already sick and tired of the “hallucinate” euphemism.

It isn't a cute widdle hallucination, it's the damn product being wrong. Dangerously, stupidly, obviously wrong.

In a world that hadn’t already gone well to shit, this would be considered an unacceptable error and a demonstration that the product isn’t ready.

Now I suddenly find myself living in this accelerated idiocracy where Wall Street has forced us - as a fucking society - to live with a Ready, Fire, Aim mentality in business, especially tech.

15 points

I think it's weird that "hallucination" would be considered a cute euphemism. Would you trust something that's perpetually tripping balls and confidently announcing whatever comes to it in a dream? To me that sounds worse than merely being wrong.

19 points

[ChatGPT interrupts a Scrabble game, spills the tiles onto the table, and rearranges THEY ARE SO GREAT into TOO MANY SECRETS]

85 points

I heard openai execs are so scared of how powerful the next model will be that they’re literally shitting themselves every day thinking about it. they don’t even clean it up anymore, the openai office is one of the worst smelling places on earth

46 points

dude. the AGI will simply vanish the evidence wherever they’re standing

11 points

JK Rowling intensifies

6 points

That or the Twitter office after the sink was let in

16 points

Remember when wizards magicking away their shits was the stupidest thing to come out of Rowling’s mouth? Pepperidge Farm remembers.

(Seriously, I was not prepared for Rowling’s TERFward Turn)

25 points

for every one of me that shit my pants the AGI is simulating ten million of me that didn’t, so on average i’m doing pretty ok

28 points

Better than that, AGI will figure out a way to exponentially increase the value of their soiled pants. Blows your fucking mind.


TechTakes

!techtakes@awful.systems

Big brain tech dude got yet another clueless take over at HackerNews etc? Here’s the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community
