Over half of all tech industry workers view AI as overrated

9 points

People who use ChatGPT to program for them deserve their programs to fail

1 point
Deleted by creator
-3 points

You sound like someone who doesn’t know how to program and is letting themselves become hopelessly dependent on corporations to be able to do anything.

-4 points
Deleted by creator
36 points

Yeah, they should copy-paste answers from Stack Overflow like real developers.

14 points

Real developers just hit Tab on whatever Copilot tells them to.

5 points

Guilty

-16 points

The other half have actually used it

30 points

As someone who works in the tech industry and has used AI tools (or, more accurately, machine learning models), I do think it is overrated.
That doesn’t mean I don’t think it can be useful, just that it’s not going to live up to the immense hype surrounding it right now.

9 points

I work in tech and have used the tools. I am mostly neutral on its prospects. I think it’s somewhat overrated right now for many purposes, but just seeing how rapidly things are progressing makes me hesitant to outright dismiss its potential for immense utility.

We have to consider that few saw ChatGPT coming so soon, and even fewer predicted ahead of time that it would work as well as it does. Now that Microsoft is fully bankrolling its development, providing its newly acquired former-OpenAI team virtually unlimited resources and bleeding-edge hardware custom built for its models, I really have no idea how far and how quickly they’ll progress their AGI tech. For all we know right now, in 5+ years LLMs and their ilk could be heralding another tech revolution.

6 points

They probably won’t advance much, because the field currently has two opposite but equally difficult problems. On the one hand, AI still hasn’t achieved sensor integration: creating an ontologically sound world model that includes more than one sensor data stream at a time. Right now a system can model based on one sensor, or on one multidimensional array of sensors, but it can’t reconcile models with each other. So you can’t have, let’s say, one single model that can hear, see visible light, and see radar at the same time, the way animal intelligence can self-correct its world model when one sensor says A but another sensor disagrees and says B. Current models just hallucinate and go off the deep end catastrophically.

On the opposite end, if we want them to be products, as seems to be Microsoft’s and Altman’s fixation, then they cannot be black boxes, at least not for the implementers. Only in this past year have there been actual efforts to really see WTF is going on inside the models after they’ve been trained, and to interpret and manipulate that inner world toward effective and intentional results. Even then, progress is difficult, because it’s all abstract mathematics and we haven’t found a translation layer to parse the model’s internal world into something humans can easily interpret.

5 points

I’ve had an AI bot trained on our company’s knowledge base literally make up links to nonexistent articles out of whole cloth. It’s so useless I just stopped bothering to ask it anything; it’s faster to look things up myself.

14 points

Useful and overrated aren’t mutually exclusive.

0 points

It’s not overrated.

Using the “Mistral instruct” model to select your food in the canteen works like a charm.

Just provide it with the daily options, tell it to select one main dish, one side dish and a dessert, and to explain the selection. It has never let me down; it consistently selects the healthier option that still tastes good.
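
Roughly like this, assuming a local Ollama server with the Mistral instruct model pulled; the endpoint, model name, and menu text below are just illustrative, not a real canteen’s options:

```python
import json
import urllib.request

# Illustrative menu; in practice, paste the canteen's actual daily options.
menu = (
    "Mains: schnitzel with fries / grilled salmon / lentil curry\n"
    "Sides: side salad / buttered rice\n"
    "Desserts: chocolate mousse / fruit salad"
)

prompt = (
    "From the menu below, select one main dish, one side dish and one dessert. "
    "Prefer the healthier options that still taste good, and explain the selection.\n\n"
    + menu
)

# Assumes an Ollama server running locally with the Mistral instruct model pulled.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({"model": "mistral", "prompt": prompt, "stream": False}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```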

7 points

Meh. Roughly 90% of what I know about baking is from ChatGPT. There just wasn’t a comparable resource: “Oh God, the dough is too dry”, “Can I sub in this fat for that fat, and if so, how?”, “If I change the bath, do I have to change the scoring method?”.

It is like I have a professional baker I can just talk to whenever. I am sure as I get better at baking I will exceed its ability to help, but I can’t deny that what I have accomplished now I could not have in the same timeframe without it.

3 points

So? Are you saying you disagree with the premise of the article because ChatGPT taught you how to bake? Professional tech work isn’t really comparable to baking at home.

4 points
Deleted by creator
2 points

ChatGPT has never worked well for me. Sure, it can tell you how to center a div, but for anything complex it just fails. ChatGPT is really only useful for elaborating on something: you can give it a well-commented code snippet, ask it to add some simple feature, and it will sometimes give a correct answer. For coding, it has the same level of experience as a horde of high school CS students.

7 points

I believe the central premise was

Over half of all tech industry workers view AI as overrated

Not professional tech work. Really not sure what you want from me. I found it a useful tool and I am sorry it didn’t work out for you or your application.

-2 points

You’re splitting hairs here; I think it’s fair to say that tech industry workers perform professional tech work. I mean, it’s cool that you learned to bake, but what makes you think this means you know what the skill requirements are for tech workers, or how well ChatGPT can cover for gaps in those skills? Your dismissive ‘meh’ says to me ‘yeah, but I learned how to bake with ChatGPT, so I disagree with this statement’.

-1 points

What’s professional tech work when it’s at home?

I don’t know what that term is actually supposed to mean. Do they mean programming? System architecture, systems management, cyber security, what?

The term is so broad as to be meaningless, so I don’t think you can necessarily say that it’s any harder than baking, because we don’t know what on earth we’re talking about.

0 points

Professional tech work at home is still professional tech work. I think anyone who actually has a career in technology wouldn’t see a distinction here. Programming is not the same as systems architecture, systems management, etc.; programming is simply one of the tools you use as a software engineer. I do not think the term is too broad to be meaningless, and I think comparing learning to bake to software engineering is reductive and shows a lack of understanding of the requirements of the field.

-2 points

Buy a fucking book about baking. Not a fancy colour-print recipe book, but a textbook, you know, the kind with a chapter or two about dough chemistry.

If nothing else, as a beginner you have no idea which questions to ask, and ChatGPT is never going to give you a dissertation about fundamental knowledge and principles. And you have absolutely no way to tell whether ChatGPT is spouting random nonsense.

9 points

I bought a fucking book, more than one in fact, and it wasn’t as good. A fucking book can’t examine a picture and tell me what went wrong, the fucking book I bought didn’t have substitution charts, and the fucking book I bought didn’t respond to Ctrl+F.

Did you get this comment response or should I have faxed it to you?

-3 points

If you need ChatGPT to analyse a picture for you, you lack very basic knowledge about baking. It can’t smell, it can’t touch, it can’t hear, and it has never fucking ever baked. It has never taken a wet and sticky dough, tensioned it, and, voila, suddenly it’s a pleasure to handle.

And substitution charts? For what? If you understood the underlying dough chemistry you wouldn’t be asking in the first place. As I said: you lack the basic knowledge to know what questions to ask, and once you have that knowledge, your questions will only be answered by experiment.

151 points

Largely because we understand that what they’re calling “AI” isn’t AI.

-45 points

It absolutely is AI. A lot of stuff is AI.

It’s just not that useful.

3 points

You really should listen rather than talk. This is not AI; it’s just a word prediction model. The media calls it AI because it sells, and the companies call it AI because it brings the stock value up.

5 points

Yes, what you’re describing is also AI.

36 points

The decision tree my company uses to deny customer claims is not AI despite the business constantly referring to it as such.

There’s definitely a ton of “AI” that is nothing more than an If/Else statement.
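
To illustrate, the whole “AI” often boils down to something like this (a hypothetical sketch; the field names and thresholds are invented):

```python
# A toy claims "decision tree": nested if/else dressed up as "AI".
def approve_claim(amount: float, policy_active: bool, prior_claims: int) -> bool:
    if not policy_active:
        return False      # lapsed policy: automatic denial
    if amount > 10_000:
        return False      # large claims: automatic denial (or "review")
    if prior_claims >= 3:
        return False      # frequent claimants: automatic denial
    return True

print(approve_claim(2_500.0, policy_active=True, prior_claims=1))  # True
```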

9 points

That’s basically what video game AI is, and we’re happy enough to call it that

1 point

That’s called an expert system, and has been commonly called a form of AI for decades.

That is indeed what most of it is; my company was doing “sentiment analysis” and it was literally just checking text against good and bad word lists.

When someone corporate says “AI”, you should hear “extremely rudimentary machine learning” until given more details.
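
That kind of “sentiment analysis” is essentially the following (a toy sketch; the word lists here are made up):

```python
# Word-list "sentiment analysis": count hits against good/bad lists.
GOOD = {"great", "love", "excellent", "happy"}
BAD = {"terrible", "hate", "awful", "angry"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in GOOD for w in words) - sum(w in BAD for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this product, it is excellent"))  # positive
```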

11 points

For many years, AI referred to that type of technology. It is not in fact AGI, but historically in the technical field AI refers more to decision trees and classification/linear regression models.

7 points

There are significant differences between statistical models and AI.

I work for an analytics department at a Fortune 100 company. We have a very clear delineation between what constitutes a model and what constitutes an AI.

1 point

That’s true. Statistical models are very carefully engineered and tested, while current machine learning models are created by throwing a lot of training data at the software and hoping for the best that the things the model learns are not complete bullshit.

-2 points

Yeah, an AI is a model you can’t explain.

23 points

It’s useful at sucking down all the compute we complained crypto used

2 points

Yeah, it’s funny how that little tidbit just went quietly into the bin, not to be talked about again.

2 points

The main difference is that crypto was/is burning huge amounts of energy to run a distributed Ponzi scheme. LLMs are at least using energy to create a useful tool (even if there is discussion over how useful they are).

4 points

Optimizing compilers came directly out of AI research. The entirety of modern computing is built on things the field produced.

2 points
Deleted by creator
13 points

AI doesn’t necessarily mean human-level intelligence, if that’s what you mean. The AI field has wrestled with this for decades. There can be “strong AI”, which aims for that human-level intelligence, but that’s probably a far-off goal. “Weak AI” is about pushing the boundaries of what computers can do, and that stuff has been massively useful even before we talk about the more modern stuff.

1 point

Sounds like people here are expecting to see GPAI and singularity stuff, but all they see is a pitiful LLM or other, even narrower, AI applications. Remember, even optical character recognition (OCR) used to be called AI, until it became so common that it wasn’t exciting any more. What AI developers call AI today will be considered just basic automation a few decades from now.

77 points

This is a growing pet peeve of mine. If and when actual AI becomes a thing, it’ll be a major turning point for humanity comparable to things like harnessing fire or electricity.

…and most people will be confused as fuck. “We’ve had this for years, what’s the big deal?” -_-

11 points

As in AGI?

3 points

I’ve seen it referred to as AGI, but I think that’s wrong. ChatGPT isn’t intelligent in the slightest; it only guesses which word is statistically more likely to come up next. There is no thinking or problem solving involved.

A while ago I saw an article with a title along the lines of “spark of AGI in ChatGPT 4”, because it chose to use a calculator tool when facing a problem that required one. That would be AI (though not AGI): it has a problem, and it learns and uses available tools to solve it.

AGI would be on a whole other level.
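
The “statistically likely next word” idea in miniature, as a toy bigram model (nothing remotely at LLM scale, but the same basic objective):

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word: str) -> str:
    # Sample the next word proportionally to how often it followed `word`.
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

word = "the"
for _ in range(6):
    print(word, end=" ")
    if not following[word]:  # dead end: no observed continuation
        break
    word = next_word(word)
print()
```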

Edit: Grammar

19 points

I also believe that will happen! We will not be prepared, since many don’t understand the differences between what current models do and what an actual general AI could potentially do.

It also saddens me that many don’t know, or ignore, how fundamental abstract reasoning is to our understanding of how human intelligence works, and that LLMs simply aren’t intelligent in that sense (or at all, if you take a tight definition of intelligence).

-2 points

I don’t get how recognizing a pattern is not AI. It recognizes patterns in data, and patterns inside of patterns, and does so at a massive scale. Humans are no different: we find patterns and make predictions on what to do next.

4 points

Given that AI isn’t purported to be AGI, how do you define AI such that multimodal transformers aren’t “artificial intelligence”? They are capable of developing abstract world models as linear representations, are trained on unthinkable amounts of human content mirroring a wide array of capabilities, and can do things thought to be impossible as recently as three years ago, such as explaining jokes or solving riddles that aren’t in the training set.

4 points

Yup. LLM RAG is just search 2.0 with a GPU.

For certain use cases it’s incredible, but those use cases shouldn’t be your first idea for a pipeline
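
A toy sketch of that retrieve-then-generate pipeline, with word overlap standing in for real embedding search (the document snippets are invented):

```python
import re

# Toy RAG: retrieve the best-matching snippets, then stuff them into a prompt.
docs = [
    "To reset your password, open Settings and choose Security.",
    "Invoices are emailed on the first business day of each month.",
    "The API rate limit is 100 requests per minute per key.",
]

def words(s: str) -> set:
    return set(re.findall(r"[a-z']+", s.lower()))

def overlap(query: str, doc: str) -> int:
    # Stand-in for embedding similarity: count shared words.
    return len(words(query) & words(doc))

def retrieve(query: str, k: int = 2) -> list:
    return sorted(docs, key=lambda d: overlap(query, d), reverse=True)[:k]

query = "how do I reset my password"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to the LLM
```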

1 point

THANK YOU! I’ve been saying this a long time, but have just kind of accepted that the definition of AI is no longer what it was.

