7 points

@dgerard
Shouldn’t this be linking directly to
https://pivot-to-ai.com/2024/10/26/whoops-google-copilot-perplexity-push-scientific-racism-in-search-results/
?

By the way, re the subheading “It can’t be that stupid, you must be prompting it wrong” -
I saw that comment made here on Mastodon in all seriousness just the other day, with a sample of the “correct” prompt that included a misprint rendering much of it meaningless.
According to the poster, he’s been using it regularly with satisfying results (or “satisfactory,” but that seems less plausible).

7 points

yeah, someone on mastodon suggested that tagline but it’s also a sentiment ai bros say unironically

it also comes from “Bitcoin: It can’t be that stupid, you must be explaining it wrong”

6 points

this is a Lemmy thread, and the behavior you’re seeing is a mismatch between how Lemmy and Mastodon handle links. on our end, it’s a direct link to the pivot-to-ai url. the version of this post federated to Mastodon links to our Lemmy instance’s thread instead, presumably to drive engagement (note that that’s not my decision as an instance admin, and I’d probably have it federate differently given the option)

6 points

probably some whacky shit with how it emits the AP Post object

it’s been a while since I looked into that code but I remember it being a bit weirdly trampolined for how it pulls fields together for the final emitted object/blob

6 points

activitypub allows thousands of computers not to quite talk to each other

7 points

this seems more like a Mastodon thing, considering that if I curl the post with an Accept: application/activity+json header, the object doesn’t have a link to the Lemmy thread; it has the original link actually embedded in the post

I think Mastodon does this for other AP object types (like articles, e.g. if you put a link to a WriteFreely article into Mastodon it’ll just show the title and a link to the original post)
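
a rough Python sketch of the same check, in case anyone wants to poke at it themselves. the post URL is made up, the requests library is assumed, and the fields I print are just where Lemmy usually seems to put the link, not a guaranteed schema:

```python
import json

import requests

# hypothetical post URL, swap in a real one
POST_URL = "https://awful.systems/post/12345"

# ask the instance for the ActivityPub JSON instead of the HTML page
resp = requests.get(
    POST_URL,
    headers={"Accept": "application/activity+json"},
    timeout=10,
)
resp.raise_for_status()
obj = resp.json()

print(obj.get("type"))  # typically "Page" for a Lemmy post
print(obj.get("url"))   # often the external link the post points at
print(json.dumps(obj.get("attachment"), indent=2))  # another spot Lemmy tends to put the link
```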

@self

19 points

not included: a few hundred words of other ranting about the pervasive race science in AI research and especially from anyone talking about “AGI”. Maybe later.

28 points

It’s fascinating to watch, in real time, the catastrophe that sank Atlantis. Everyone’s just fighting over whose biases to use, rather than re-examining the systems that have been created, which demand a bias in the first place.

14 points

Between this high-profile disaster and character.ai’s suicide lawsuit (which I’ve talked about here), it feels more and more like the current system’s gonna end up getting torn to shreds once this bubble bursts.

11 points

I thought “character.ai’s suicide lawsuit” was your way of describing a stupid lawsuit that is suicidal to the company, but this is so much fucking darker, god.

6 points

Yeah.

Looking back at my quick-and-dirty thoughts about the suit, I feel like I handled it in a pretty detached way, focusing very little on the severe human cost that kicked off the suit and more on what it could entail for AI at large.

9 points

I’m relatively confident that AI represents the formalization of the perspective adhered to by those who run the economy. So, yes, once AI finally fails spectacularly that will serve as the death knell for their entire system. Many probably already know it, which is why things are falling apart left and right, but that bubble bursting will be the end of their last ditch effort.

10 points

I’m relatively confident that AI represents the formalization of the perspective adhered to by those who run the economy.

It does provide context for why so many are throwing so much money at it, when experts know they’re not going to get a monetary return.

It could be that they’re just genuinely huge suckers. But I’m inclined to wonder if there are more sinister motives in play.

3 points

I wish I could agree, but we’re all AI fodder. AI companies will spend us and anyone who disagrees can get fucked because money. The ownership class is going to milk this for every damn cent until they get their returns, and if that means more murder-suicides in that pursuit, well then buckle up.

8 points

The “money” the AI companies have is basically just promises from backers. If they cannot deliver on those promises (which basically boil down to knowledge industries replacing around 20% of their workforce with LLMs), then that imaginary money dries up. Remember, there are real bills in the form of power and cooling and hardware that have to be paid all the time just to keep running in place.

A lawsuit that convinces the public and investors that LLMs are a dead end will kill most LLM companies.
