6 points

Looks like it is not any smarter than the other junk on the market. That people mistake AI for “intelligence” may be rooted in their own deficits in that area.

And now people exchange one American Junk-spitting Spyware for a Chinese junk-spitting spyware. Hurray! Progress!

3 points

artificial intelligence

AI has been used in game development for a while, and I haven’t seen anyone complain about the name before it became synonymous with image/text generation.

3 points

Well, that is where the problems started.

4 points

It was a misnomer there too, but at least people didn’t think a bot playing C&C would be able to save the world by evolving into a real, greater-than-human intelligence.

1 point

I’m tired of this uninformed take.

LLMs are not a magical box you can ask anything of and get answers. If you’re lucky, blindly asking questions can get you some accurate general information, but just as with a human brain, you aren’t going to reliably recall random trivia verbatim from a neural net.

What LLMs are useful for, and how they should be used, is as a non-deterministic context-parsing tool. When people talk about feeding them more data, they usually mean how these things are trained. But you also need to give the model grounding context beyond the prompt itself: give it a PDF manual, a website link, documentation, whatever, and it will use that as context for what you ask it. You can even have it link back to the references it used.
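To make that concrete, here’s a rough sketch of the kind of grounding I mean, assuming you have something like Ollama serving a model on localhost and the requests package installed; the model tag, manual filename, and question are just placeholders, not anything specific:

```python
# Rough sketch: ground a question in a local document instead of relying on
# whatever trivia the model memorized during training.
# Assumes an Ollama server listening on localhost:11434 and the `requests`
# package; the model tag and manual path are placeholders.
import requests

manual = open("printer_manual.txt", encoding="utf-8").read()

prompt = (
    "Answer using only the manual below. If the answer isn't in it, say so.\n\n"
    f"--- MANUAL ---\n{manual}\n--- END MANUAL ---\n\n"
    "Question: How do I clear a paper jam?"
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": prompt, "stream": False},
    timeout=300,
)
print(resp.json()["response"])
```

The point is that the model is parsing the context you hand it, not acting as an oracle.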

You still have to know enough to be able to validate the information it is giving you, but that’s the case with any tool. You need to know how to use it.

As for the spyware part, that only matters if you are using the hosted instances they provide. Even for OpenAI stuff, you can run models locally with open-source software and maintain control over all the data you feed them. As far as I have found, none of the models you run with Ollama or other open-source local AI software have been caught pushing data to a remote server.
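For example, with Ollama the whole exchange stays on your own machine; something like this (the model tag is just an example of whatever you’ve already pulled locally) never talks to anything but localhost:

```python
# Rough sketch of fully local inference: the only network traffic here is to
# the Ollama server running on your own machine (localhost:11434).
# Assumes the model has already been pulled locally; the tag is illustrative.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-r1:7b",  # illustrative tag; any locally pulled model works
        "messages": [{"role": "user", "content": "Summarize what a B-tree is."}],
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["message"]["content"])
```

If you’re paranoid you can watch the traffic yourself; nothing in that flow needs to leave your network.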

4 points

And now people exchange one American Junk-spitting Spyware for a Chinese junk-spitting spyware.

LLMs aren’t spyware, they’re graphs that organize large bodies of data for quick and user-friendly retrieval. The Wikipedia schema accomplishes a similar, albeit more primitive, role. There’s nothing wrong with the fundamentals of the technology, just the applications that Westoids doggedly insist it be used for.

If you no longer need to boil down half a Great Lake to create the next iteration of Shrimp Jesus, that’s good whether or not you think Meta should be dedicating millions of hours of compute to this mind-eroding activity.

0 points

There’s nothing wrong with the fundamentals of the technology, just the applications that Westoids doggedly insist it be used for.

Westoids? Are you the type of guy I feel like I need to take a shower after talking to?

3 points

I think maybe it’s naive to think that if the cost goes down, shrimp jesus won’t just be in higher demand. Shrimp jesus has no market cap, bullshit has no market cap. If you make it more efficient to flood cyberspace with bullshit, cyberspace will just be flooded with more bullshit. Those great lakes will still boil, don’t worry.

1 point

I think maybe it’s naive to think that if the cost goes down, shrimp jesus won’t just be in higher demand.

Not that demand will go down, but that the economic cost of generating this nonsense will go down. The number of people shipping this back and forth to each other isn’t going to meaningfully change, because Facebook has saturated the social media market.

If you make it more efficient to flood cyberspace with bullshit, cyberspace will just be flooded with more bullshit.

The efficiency is in the real cost of running the model, not in how it is applied. The real bottleneck for AI right now is human adoption. Guys like Altman keep insisting that a new iteration (one that needs a few hundred miles of nuclear power plants to run) will finally get us a model people want to use. And speculators in the financial sector seem willing to cut him a check to go through with it.

Knocking down the real physical cost of this boondoggle is going to de-monopolize this awful idea, which means Altman won’t have a trillion dollar line of credit to fuck around with exclusively. We’ll still do it, but Wall Street won’t have Sam leading them around by the nose when they can get the same thing for 1/100th of the price.

8 points

After coming to understand LLMs, I started to understand some people and their “reasoning” better. That’s how they work.

1 point

That’s a silver lining, at least.

10 points

It is progress in a sense. The west really put the spotlight on their shiny new expensive toy and banned the export of toy-maker parts to rival countries.

One of those countries made a cheap toy out of janky, unwanted parts for much less money, and it’s on par with or better than the west’s.

As for why we’re having an arms race based on AI, I genuinely don’t know. It feels like a race to the bottom, with the fallout being the death of the internet (for better or worse).

6 points

Looks like it is not any smarter than the other junk on the market. That people mistake AI for “intelligence” may be rooted in their own deficits in that area.

Yep, because they believed that OpenAI’s (two lies in a name) models would magically digivolve into something that goes well beyond what they were designed to be. Trust us, you just have to feed it more data!

And now people exchange one American Junk-spitting Spyware for a Chinese junk-spitting spyware. Hurray! Progress!

That’s the neat bit, really. With that model being free to download and run locally, it’s actually potentially disruptive to OpenAI’s business model. They don’t need to do anything malicious to hurt the US economy.


It is open source, so it should be audited, and if there are back doors they can be plugged in a fork.

8 points

The difference is that you can actually download this model and run it on your own hardware (if you have sufficient hardware). In that case it won’t be sending any data to China. These models are still useful tools. As long as you’re not interested in particular parts of Chinese history of course ;p

27 points

As a European, gotta say I trust China’s intentions more than the US’ right now.

5 points

Two times zero is still zero

-13 points

With that attitude I am not sure if you belong in a Chinese prison camp or an American one. Also, I am not sure which one would be worse.

-5 points

They should conquer a country like Switzerland and split it in two.

At the border, they should build a prison so they could put them in both an American and a Chinese prison at once.

8 points

Not really a question of national intentions. This is just a piece of technology open-sourced by a private tech company working overseas. If a Chinese company releases a better mousetrap, there’s no reason to evaluate it based on the politics of the host nation.

Throwing a wrench in the American proposal to build out $500B in tech centers is just collateral damage created by a bad American software schema. If the Americans had invested more time in software engineers and less in raw data-center horsepower, they might have come up with this on their own years earlier.

3 points

You’re absolutely right.


All of this DeepSeek hype is overblown. The DeepSeek model was still trained on older American Nvidia GPUs.

22 points

AI is overblown, tech is overblown. Capitalism itself is a senseless death cult based on the nonsensical idea that infinite growth is possible within a fragile, finite system.

13 points

Your confidence in this statement is hilarious given that it doesn’t help your argument at all. If anything, the fact that they refined their model so well on older hardware is even more remarkable, and quite damning when OpenAI claims it needs literally cities’ worth of power and resources to train its models.

17 points

I’d argue this is even worse than Sputnik for the US because Sputnik spurred technological development that boosted the economy. Meanwhile, this is popping the economic bubble in the US built around the AI subscription model.

51 points

Good. LLM AIs are overhyped, overused garbage. If China putting one out is what it takes to hack the legs out from under its proliferation, then I’ll take it.


Overhyped? Sure, absolutely.

Overused garbage? That’s incredibly hyperbolic. That’s like saying the calculator is garbage. The small company where I work as a software developer has already saved countless man-hours by utilising LLMs as tools, which is all they are if you take away the hype: tools to help skilled individuals work more efficiently. Not to replace skilled individuals entirely, as Sam “dead eyes” Altman would have you believe.

-4 points

LLMs as tools,

Yes, in the same way that buying a CD from the store, ripping to your hard drive, and returning the CD is a tool.

27 points

Cutting the cost by 97% will do the opposite of hampering proliferation.

6 points

What DeepSeek has done is to eliminate the threat of “exclusive” AI tools - ones that only a handful of mega-corps can dictate terms of use for.

Now you can have a Wikipedia-style AI (or a Wookieepedia AI, for that matter) that’s divorced from the C-levels looking to monopolize sectors of the service economy.

8 points

It’s not about hampering proliferation, it’s about breaking the hype bubble. Some of the western AI companies have been pitching to have hundreds of billions in federal dollars devoted to investing in new giant AI models and the gigawatts of power needed to run them. They’ve been pitching a Manhattan Project scale infrastructure build out to facilitate AI, all in the name of national security.

You can only justify that kind of federal intervention if it’s clear there’s no other way. And this story here shows that the existing AI models aren’t operating anywhere near where they could be in terms of efficiency. Before we pour hundreds of billions into giant data centers and energy generation, it would behoove us to first extract all the gains we can from increased model efficiency. The big players like OpenAI haven’t even been pushing efficiency hard. They’ve just been vacuuming up ever greater amounts of money to solve the problem the big and stupid way - just build really huge data centers running big, inefficient models.

9 points

No, but it would be nice if it turned back into the tool it used to be, back when it was called machine learning, as it was for the decade before the bubble started.

26 points

Possibly, but in my view, this will simply accelerate our progress towards the “bust” part of the existing boom-bust cycle that we’ve come to expect with new technologies.

They show up, get overhyped, and attract loads of investment; eventually the cost craters and availability becomes widespread; suddenly it doesn’t look new and shiny to investors, since everyone can use it for extremely cheap, so the overvalued companies lose that valuation; the companies that adopted it solely to please investors drop it, since it’s no longer useful; and mostly just the implementations that actually improved the products stick around, thanks to user pressure rather than investor pressure.

Obviously this isn’t a perfect description of how everything in the world will always play out, in every circumstance, every time, but I hope it gets the general point across.
