I saw people complaining that companies have yet to find the next big thing with AI, but I am already seeing countless products offering good solutions for almost every field imaginable. What is this thing the tech industry is waiting for, and what are all these current products if not what they had in mind?

I am not great at understanding the business side of this situation, and I have been out of the news loop for a long time, so I would really appreciate it if someone could ELI5.

13 points

They’re looking for something like the internet or smartphones and are disappointed that it’s not doing something on that level. Doesn’t matter that there’s tons of applications in science and art (even if we’d like to ignore the latter).

Or maybe they thought we’d have human level AI by now.

1 point

I’m pretty chuffed with what we have now. It really hasn’t been that long that this sort of stuff has even been around, yet the average person can use an “AI” in their everyday life without even knowing how to use a computer.

Sure, it’s not 100% perfect, but I’ll take “stupidly convenient and right 90% of the time” over “takes hours of sifting through blogspam to find useful information that may or may not be correct”. Especially when it comes to mundane stuff like writing a resume or things where you have the knowledge, but just not the time.

50 points

Here’s a secret. It’s not true AI. All the hype is marketing shit.

Large language models like GPT, Llama, and Gemini don’t create anything new. They just regurgitate existing data.

You can see this when chatbots keep giving the same two pieces of incorrect information. They have no concept that they are wrong.

Until an LLM can understand why it is wrong, we won’t have true AI.

20 points

It’s just a stupid probability bucket. The term AI shits me.

9 points

Statistical methods have been a longstanding mainstay in the field of AI since its inception. I think the trouble is that the term AI has been co-opted for marketing.
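To make the “probability bucket” concrete: at its core, a language model really does sample the next token from a learned probability distribution. A toy sketch, with a tiny bigram table whose counts are entirely made up for illustration (real models condition on far more context than one word):

```python
import random

# Made-up bigram counts standing in for learned corpus statistics.
bigram_counts = {
    "the": {"cat": 3, "dog": 2, "end": 1},
    "cat": {"sat": 4, "ran": 1},
}

def next_token(prev, rng=random):
    """Sample the next token in proportion to the observed bigram counts."""
    counts = bigram_counts[prev]
    tokens = list(counts)
    weights = [counts[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# After "the", we get "cat" half the time, "dog" a third, "end" a sixth.
print(next_token("the"))
```

The “co-opted for marketing” point stands either way; sampling from an estimated distribution has been a standard statistical technique in AI for decades.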

11 points

That’s not a secret. The industry constantly talks about the difference between LLMs and AGI.

14 points

Until a product goes through marketing and they slap “Using AI” into the blurb when it doesn’t use any.

8 points

LLMs are AI. They are not AGI. AGI is a particular subset of AI; that does not preclude non-general AI from being AI.

People keep talking about how it just regurgitates information, and says incorrect things sometimes, and hallucinates or misinterprets things, as if humans do not also do those things. Most people just regurgitate information they found online, true or false. People frequently hallucinate things they think are true and stubbornly refuse to change when called out. Many people cannot understand when and why they’re wrong.

5 points

I have different weights for my two dumbbells, and I asked ChatGPT 4.0 how to divide the weights evenly across all 4 sides of the 2 dumbbells. It kept telling me to use 4 half-pound weights instead of my 2-pound weights, and finally, after like 15 minutes, it admitted that, with my set of weights, it’s impossible to divide them evenly…
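For the curious: this kind of feasibility question is easy to settle deterministically, which is exactly what an LLM doesn’t do. A brute-force sketch in Python (the plate weights below are made up for illustration, since the original set isn’t specified):

```python
from itertools import combinations

def can_split_evenly(plates, sides=4):
    """Check whether the plate weights can be split into `sides` groups of equal total."""
    total = sum(plates)
    if total % sides != 0:
        return False  # not even divisible, so no arrangement can work
    target = total // sides

    def fill(remaining, groups_left):
        if groups_left == 1:
            return sum(remaining) == target
        # try every subset of the remaining plates as the next group
        for r in range(1, len(remaining) + 1):
            for combo in combinations(range(len(remaining)), r):
                if sum(remaining[i] for i in combo) == target:
                    rest = [p for i, p in enumerate(remaining) if i not in combo]
                    if fill(rest, groups_left - 1):
                        return True
        return False

    return fill(list(plates), sides)

print(can_split_evenly([1, 1, 2, 2, 3, 3]))  # each side gets a total of 3
print(can_split_evenly([1, 2, 3]))           # total of 6 can't split 4 ways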

8 points

You used an LLM for one of the things it is specifically not good at. Dismissing its overall value on that basis is like complaining that your snowmobile is bad at making its way up and down your basement stairs, and so it is therefore useless.

4 points

You are totally right! Sadly, people think that LLMs are able to do all of these things…

8 points

Large language models like GPT, Llama, and Gemini don’t create anything new

That’s because it is a stupid use case. Why should we expect AI models to be creative, when that is explicitly not what they are for?

-5 points

They are creative, though:

They put things that are “near” each other into juxtaposition, and sometimes the insights are astonishing.

The AIs don’t understand anything, though: they’re like bacteria-instinct: total autopilot.

The real problem is that we humans aren’t able to default to understanding such non-understanding apparent-someones.

We’ve created a “hack” of our entire mental-system, and it is the money-profit-rules-the-world group which controls its evolution.

This is called “Darwin Award territory”, at the species-scale.

No matter:

The Great Filter, which is what happens when a world-species hasn’t grown-up, but gains adult-level technology ( nukes, entire-country-destroying-militaries, biotech, neurotoxins, immense industrial toxic wastelands like the former USSR, accountability-denial-mechanisms in all corporate “persons”, etc… )

you have a toddler with a loaded gun, & killing can happen.

“there’s no such thing as a dangerous gun: only a dangerous man”, as the book “Starship Troopers” pushed…

Toddlers with guns KILL people in the US.

AI’s our “gun”, & narcissistic-sociopathy’s our “toddler commanding the ship” nature.

Maybe we should rename Earth to “The Titanic”, for honesty’s sake…

_ /\ _

17 points

It is true AI, it’s just not AGI. Artificial General Intelligence is the sort of thing you see on Star Trek. AI is a much broader term and it encompasses large language models, as well as even simpler things like pathfinding algorithms or OCR. The term “AI” has been in use for this kind of thing since 1956, it’s not some sudden new marketing buzzword that’s being misapplied. Indeed, it’s the people who are insisting that LLMs are not AI that are attempting to redefine a word that’s already been in use for a very long time.

You can see this when chatbots keep giving the same two pieces of incorrect information. They have no concept that they are wrong.

Reminds me of the classic quote from Charles Babbage:

“On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”

How is the chatbot supposed to know that the information it’s been given is wrong?

If you were talking with a human and they thought something was true that wasn’t actually true, do you not count them as an intelligence any more?

5 points

If you were talking with a human and they thought something was true that wasn’t actually true, do you not count them as an intelligence any more?

If they refuse to learn and change their belief? Absolutely.

3 points

AI is being used to replace a lot of jobs, but companies usually do not want to advertise that.

There are possibilities for consumer products (e.g. a smarter Alexa or Siri), but those are not monetized, so companies cannot generate $100B in revenue from them.

There is the possibility of more innovative products, e.g. a smart Christmas toy, but AI needs a few more years to get there.

6 points

AI is being used to replace a lot of jobs, but companies usually do not want to advertise that.

I would be careful with that statement.

I’ve been involved in some projects about “leveraging data” to reduce maintenance costs. A big pitfall is that you still need someone to do the job. Great, now you know that the “primary pump” is about to break. You still need to send a tech to replace it, you often have to deal with a user who can’t afford to turn the system off until the repair is done, and you can’t let someone work alone in the area. So you end up having to send two people ASAP to repair the “primary pump”.

It’s a bit better in terms of planning and resources than “send two people to diagnose what’s going wrong, get the part, and do the repair”, and it lets you replace engineers who can make a diagnosis with technicians who can execute a procedure (which is itself an issue as soon as you have to think outside the box). It allows a more dynamic preventive-maintenance schedule. So it did help cut maintenance costs and improve system reliability. But in the end, you still need staff to do the repair. And that’s leaving aside all the manpower needed to collect and process the data: hardware engineers working out how to integrate sensors into the machines, data engineers building a database that can handle the data, data scientists building efficient algorithms, product maintenance experts trying to make sense of it all, and so on.

I feel like a big chunk of AI will be similar, with some jobs being cut (or deskilled) while tons of new jobs take over.
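To give a flavour of what “knowing the primary pump is about to break” often amounts to in practice, here is a minimal rolling-statistics alarm. The sensor values and thresholds are invented for illustration; real systems use far richer models, but the shape is similar:

```python
from statistics import mean, stdev

def flag_anomaly(readings, window=5, z_threshold=3.0):
    """Flag the latest reading if it deviates sharply from the recent rolling window."""
    if len(readings) <= window:
        return False  # not enough history yet
    history = readings[-window - 1:-1]
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return readings[-1] != mu
    return abs(readings[-1] - mu) / sigma > z_threshold

# Pretend vibration data from the pump; the last reading jumps well outside the norm.
vibration = [1.0, 1.1, 0.9, 1.0, 1.1, 1.0, 4.2]
print(flag_anomaly(vibration))  # True: time to schedule that two-person visit
```

And as the comment says, the alert is the easy part; everything after it still takes people.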

3 points

I’m not sure it’s going to be that. That was the model for the last wave of tech-advancement layoffs and job replacements. This one is going to be so much dumber.

It’s no secret that most companies are stagnant or losing money right now across the board, for many reasons: disposable income is way down, COVID changed people’s mentality (people decided they wanted to live instead of just consume), and products have just been getting worse. So CEOs are using AI to replace jobs that AI cannot yet replace. It immediately makes their bottom line look better for investors while doing nothing useful. This will bite them in the ass soon, but they’ll say AI was oversold and it’s not their fault. Meanwhile, it looks like whatever they’re doing to improve their company is working, and they survive another day.

23 points

The most successful applications (e.g. translation, medical image processing) aren’t marketed as “AI”. That term seems to be mostly used for more controversial applications, when companies want to distance themselves from the potential output by pretending that their software tools have independent agency.

11 points

Recently I saw AI transcribe a YT video. It was genuinely helpful.

https://lazysoci.al/comment/9866410


No Stupid Questions

!nostupidquestions@lemmy.world
