Over half of all tech industry workers view AI as overrated

204 points

Best assessment I’ve heard: Current AI is an aggressive autocomplete.

68 points

I’ve found that relying on it is a mistake anyhow; the amount of incorrect information I’ve seen from ChatGPT has been crazy. It’s not a bad thing to get started with, but it’s like reading a grade school kid’s homework: you need to proofread the heck out of it.

40 points

What always strikes me as weird is how trusting people are of inherently unreliable sources. Like, why the fuck does a robot get trust automatically? It’s a fuckin’ miracle it works in the first place. You double-check that robot’s work for years and it’s right every time? Yeah, okay, maybe then start to trust it. Until then, what reason is there not to be skeptical of everything it says?

People who Google something and then accept whatever Google pulls out of webpages and puts at the top as fact… confuse me. Machines fail, all of them. Why would we trust these to be the exception?

25 points

At least a Google search gets you a reference you can point at. It might be wrong, it might not. Maybe it points to other references that you can verify.

ChatGPT outright makes shit up and there’s no way to see how it came to a given conclusion.

10 points

Because the average person hears “AI” and thinks Cortana/Terminator, not a bunch of if statements.

People are dumb when it comes to things they don’t understand. I’m dumb when it comes to mechanical engineering of any kind, but I’m competent with software. It’s all about where people’s strengths lie, but some people aren’t self-aware enough to know what they don’t know.

8 points

My guess, wholly lacking any scientific rigor, is that humans naturally trust each other. We don’t assume the info someone shares with us is wrong unless there’s “a reason” to doubt it. Chatting with any of these LLM bots feels like talking to a person (most of the time), so there’s usually “no reason” to doubt what it spews.

If human trust wasn’t so easy to get and abuse, many scams would be much harder to pull.

2 points

People trust a squid predicting football matches.

20 points

I feel like the AI in self-driving cars is the same way. They’re like driving with a 15-year-old who just got their learner’s permit.

Turns out that getting a computer to do 80% of a good job isn’t so great. It’s that extra 20% that makes all the difference.

5 points

That 80% also doesn’t take that much effort. Automation can still be helpful depending on how much effort the task takes to do repeatedly, but that 20% is really where we need to see progress for a massive innovation to happen.

12 points

I just reviewed a PR today and the code was… bad, like unusually bad for my coworkers, and I left some comments.

Then my coworker said he had used ChatGPT without really thinking about what he was copy-pasting.

5 points

I have found that it’s like having a junior programmer assistant. It’s great for “write me Python code for opening an input file from a command line argument, reading the contents into a key/value dict, then closing the file.” It’s terrible for “write me Python code for pulling data into a Redis database.”

I find it’s wrong 50% of the time for certain command line switches, Linux file structure, and the AWS CLI.

I find it’s terrible for advanced stuff like, “using the AWS CLI and jq, take all volumes in a VPC, and display the volume ID, volume size in GB, instance ID it’s attached to, private IP address of the instance, whether it’s gp3 or gp2, and the VPC ID in a comma-separated format, sorted by volume size.”

Even worse at, “take all my gp2 volumes and make them gp3.”
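To make the contrast concrete, here’s a minimal sketch of the kind of answer that first prompt should get back (assuming a simple key=value line format, since the prompt doesn’t specify one):

```python
import sys

def load_kv_file(path):
    """Read key=value lines from a file into a dict."""
    data = {}
    with open(path) as f:  # the with-block closes the file on exit
        for line in f:
            line = line.strip()
            if not line or "=" not in line:
                continue  # skip blanks and malformed lines
            key, value = line.split("=", 1)
            data[key.strip()] = value.strip()
    return data

if __name__ == "__main__":
    print(load_kv_file(sys.argv[1]))  # file path from the command line
```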

3 points

I recently used it to update my resume, with great success. But I also didn’t just blindly trust it.

I gave it my resume and asked it to edit the document to more closely align with a guide I found on Harvard’s website. I gave it the guide as well, and it spit out a version of mine that much more closely resembled the provided guide.

I spent roughly 5 minutes editing the new version to correct any problems it had, and boom: half an hour of work pared down to under 10 minutes.

I then had it use my new resume (I gave it a copy of the edited version) and asked it to write me a cover letter for a job (I provided the job description).

Boom. Cover letter. I spent about 10 minutes editing that piece. And then that new resume and cover letter led to an interview and a subsequent job offer.

AI is a tool, not an all-in-one solution.

14 points

Nice one! I have heard it called a fuzzy JPG of the internet.

10 points

And that’s entirely correct

-30 points

No, it’s not, and it hasn’t been for at least a year. Maybe the AI you’re dealing with is, but it’s shown understanding of concepts in ways that make no sense for how it was created. Gotta go.

32 points

it’s shown understanding of concepts

No it hasn’t.

8 points

Depends on how you define understanding and how you test for it.

I assume we are talking LLM here?

7 points

Maybe if you interpret its output as such.

3 points

It’s a tool. And like any tool it’s only as good as the person using it. I don’t think these people are very good at using it.

1 point

Too bad it’s bullshit.

If you are actually interested in the topic, here are a few good reads:

As you can see, the past year has shed a lot of light on the topic.

One of my favorite facts is that it takes on average 17 years before discoveries in research find their way to the average practitioner in the medical field. While tech as a discipline may be quicker to update itself, it’s still not sub-12 months, and as a result a lot of people continue to confidently parrot things that have recently been shown in research circles to be BS.

176 points

Over half of tech industry workers have seen the “great demo -> overhyped bullshit” cycle before.

89 points

You just have to leverage the agile AI blockchain cloud.

40 points

Once we’re able to synergize the increased throughput of our knowledge capacity we’re likely to exceed shareholder expectation and increase returns company wide so employee defecation won’t be throttled by our ability to process sanity.

20 points

Sounds like we need to align on triple underscoring the double-bottom line for all stakeholders. Let’s hammer a stake in the ground here and craft a narrative that drives contingency through the process space for F24 while synthesising synergy from a cloudshaping standpoint in a parallel tranche. This journey is really all about the art of the possible after all, so lift and shift a fit-for-purpose best practice and hit the ground running on our BHAG.

9 points

Don’t forget to make it connected to every device, ever

1 point

AIoT?

2 points

Every billboard in SF is just these words shuffled

28 points

NoSQL, blockchain, crypto, the metaverse, just to name a few recent examples.

AI is overhyped, but so far it’s more useful than any of those other examples.

4 points

These are useful technologies if used when called for. They just aren’t all-in-one solutions like the smartphone, which killed off cameras, PDAs, media players… I think if people looked at them as tools that fix specific problems, we’d all be happier.

21 points

Every year sometimes.

151 points

Largely because we understand that what they’re calling “AI” isn’t AI.

77 points

This is a growing pet peeve of mine. If and when actual AI becomes a thing, it’ll be a major turning point for humanity comparable to things like harnessing fire or electricity.

…and most people will be confused as fuck. “We’ve had this for years, what’s the big deal?” -_-

19 points

I also believe that will happen! We will not be prepared since many don’t understand the differences between what current models do and what an actual general AI could potentially do.

It also saddens me that many don’t know or ignore how fundamental abstract reasoning is to our understanding of how human intelligence works. And that LLMs simply aren’t intelligent in that sense (or at all, if you take a tight definition of intelligence).

-2 points

I don’t get how recognizing a pattern is not AI. It recognizes patterns in data, and patterns inside of patterns, and does so at a massive scale. Humans are no different; we find patterns and make predictions on what to do next.

11 points

As in AGI?

3 points

I’ve seen it referred to as AGI, but I think that’s wrong. ChatGPT isn’t intelligent in the slightest; it only makes guesses about which word is statistically more likely to come up next. There is no thinking or problem solving involved.

A while ago I saw an article with a title along the lines of “spark of AGI in ChatGPT 4” because it chose to use a calculator tool when facing a problem that required one. That would be AI (and not AGI). It has a problem, and it learns to use the available tools to solve it.

AGI would be on a whole other level.

Edit: Grammar
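To illustrate the “statistically more likely” point, generation is conceptually just this loop. A toy sketch, where next_token_probs stands in for the actual model (which computes the distribution with billions of learned parameters):

```python
import random

def next_token_probs(context):
    # Stand-in for the real model: a probability distribution over
    # possible next tokens, given the tokens so far.
    table = {
        ("the",): {"cat": 0.6, "dog": 0.4},
        ("the", "cat"): {"sat": 0.7, "ran": 0.3},
        ("cat", "sat"): {"down": 1.0},
    }
    return table.get(tuple(context[-2:]), {"<end>": 1.0})

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)
        # Pick the next token in proportion to its probability.
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        if token == "<end>":
            break
        tokens.append(token)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat down"
```

The loop itself has no goals or plans; it just repeats “given these tokens, which is likely next?” until it stops.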

13 points

AI doesn’t necessarily mean human-level intelligence, if that’s what you mean. The AI field has wrestled with this for decades. There can be “strong AI”, which is aiming for that human-level intelligence, but that’s probably a far-off goal. The “weak AI” is about pushing the boundaries of what computers can do, and that stuff has been massively useful even before we talk about the more modern stuff.

1 point

Sounds like people here are expecting to see general-purpose AI and singularity stuff, but all they see is a pitiful LLM or other, even more narrow, AI applications. Remember, even optical character recognition (OCR) used to be called AI until it became so common that it wasn’t exciting any more. What AI developers call AI today will be considered just basic automation a few decades later.

4 points

Yup. LLM RAG is just search 2.0 with a GPU.

For certain use cases it’s incredible, but those use cases shouldn’t be your first idea for a pipeline.
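For anyone unfamiliar, RAG (retrieval-augmented generation) really is search with a prompt stapled on. A rough sketch of the pipeline, where embed stands in for a real embedding model and the final prompt would be handed to an LLM:

```python
import numpy as np

def embed(text):
    # Stand-in for a real embedding model: text -> vector.
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.standard_normal(16)

documents = ["first doc ...", "second doc ...", "third doc ..."]
index = [(doc, embed(doc)) for doc in documents]  # the "search" part

def retrieve(query, k=2):
    # Rank documents by cosine similarity to the query vector.
    qv = embed(query)
    sims = [(np.dot(v, qv) / (np.linalg.norm(v) * np.linalg.norm(qv)), doc)
            for doc, v in index]
    return [doc for _, doc in sorted(sims, reverse=True)[:k]]

def build_prompt(query):
    # The "2.0" part: stuff the top hits into the prompt and let the
    # LLM synthesize an answer from them instead of a results page.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("what does the second doc say?"))
```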

4 points

Given that AI isn’t purported to be AGI, how do you define AI? Multimodal transformers develop abstract world models as linear representations, are trained on unthinkable amounts of human content mirroring a wide array of capabilities, and can do things thought to be impossible as recently as three years ago, such as explaining jokes or solving riddles that aren’t in the training set. How is that not “artificial intelligence”?

1 point

THANK YOU! I’ve been saying this a long time, but have just kind of accepted that the definition of AI is no longer what it was.

-45 points

It absolutely is AI. A lot of stuff is AI.

It’s just not that useful.

36 points

The decision tree my company uses to deny customer claims is not AI despite the business constantly referring to it as such.

There’s definitely a ton of “AI” that is nothing more than an If/Else statement.
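For anyone who hasn’t seen one up close, a hypothetical sketch of such a “decision tree” (made-up fields and thresholds, not any company’s actual rules):

```python
def review_claim(claim):
    # A "decision tree" is literally nested if/else: each branch
    # tests one field and leads to an outcome or another test.
    if claim["amount"] > 10_000:
        if claim["prior_claims"] > 2:
            return "deny"
        return "manual_review"
    if not claim["policy_active"]:
        return "deny"
    return "approve"

print(review_claim({"amount": 500, "prior_claims": 0, "policy_active": True}))
# -> approve
```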

11 points

For many years, AI referred to that type of technology. It is not in fact AGI; historically, AI in the technical field refers more to decision trees and classification/linear regression models.

9 points

That’s basically what video game AI is, and we’re happy enough to call it that

1 point

That’s called an expert system, and has been commonly called a form of AI for decades.

That is indeed what most of it is. My company was doing “sentiment analysis,” and it was literally just checking text against a good-word and bad-word list.

When someone corporate says “AI” you should hear “extremely rudimentary machine learning” until given more details
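The word-list version really is about this sophisticated. A toy sketch, with made-up word lists:

```python
GOOD = {"great", "love", "excellent", "happy", "fast"}
BAD = {"terrible", "hate", "broken", "slow", "refund"}

def sentiment(text):
    # "Sentiment analysis": count hits against each word list and
    # report whichever side wins.
    words = text.lower().split()
    score = sum(w in GOOD for w in words) - sum(w in BAD for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love it"))         # positive
print(sentiment("slow and broken"))   # negative
```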

23 points

It’s useful at sucking down all the compute we complained crypto used

2 points

The main difference is that crypto was/is burning huge amounts of energy to run a distributed Ponzi scheme. LLMs are at least using energy to create a useful tool (even if there is discussion over how useful they are).

2 points

Yeah, it’s funny how that little tidbit just went quietly into the bin, not to be talked about again.

7 points

There are significant differences between statistical models and AI.

I work for an analytics department at a fortune 100 company. We have a very clear delineation between what constitutes a model and what constitutes an AI.

1 point

That’s true. Statistical models are very carefully engineered and tested, while current machine learning models are created by throwing a lot of training data at the software and hoping for the best, trusting that the things the model learns are not complete bullshit.

-2 points

Yeah, an AI is a model you can’t explain.

4 points

Optimizing compilers came directly out of AI research. The entirety of modern computing is built on things the field produced.

3 points

You really should listen rather than talk. This is not AI; it’s just a word prediction model. The media calls it AI because it sells, and the companies call it AI because it brings the stock value up.

5 points

Yes, what you’re describing is also AI.

73 points

I think it will be the next big thing in tech (or “disruptor” if you must buzzword). But I agree it’s being way over-hyped for where it is right now.

Clueless executives barely know what it is; they just know they want to get ahead of it in order to remain competitive. Marketing types reporting to those executives oversell it (because that’s their job).

One of my friends is an overpaid consultant for a huge corporation, and he says they are trying to force-retro-fit AI to things that barely make any sense…just so that they can say that it’s “powered by AI”.

On the other hand, AI is much better at some tasks than humans. That AI skill set is going to grow over time. And the accumulation of those skills will accelerate. I think we’ve all been distracted, entertained, and a little bit frightened by chat-focused and image-focused AIs. However, AI as a concept is broader and deeper than just chat and images. It’s going to do remarkable stuff in medicine, engineering, and design.

25 points

Personally, I think medicine will be the most impacted by AI. Medicine has already been increasingly implementing AI in many areas, and as the tech continues to mature, I am optimistic it will have tremendous effect. Already there are many studies confirming AI’s ability to outperform leading experts in early cancer and disease diagnoses. Just think what kind of impact that could have in developing countries once the tech is affordably scalable. Then you factor in how it can greatly speed up treatment research and it’s pretty exciting.

That being said, it’s always wise to remain cautiously skeptical.

20 points

Common US healthcare L

23 points

AI’s ability to outperform leading experts in early cancer and disease diagnoses

It does, but it also has a black box problem.

A machine learning algorithm tells you that your patient has a 95% chance of developing skin cancer on his back within the next 2 years. Ok, cool, now what? What, specifically, is telling the algorithm that? What is actionable today? Do we start oncological treatment? According to what, attacking what? Do we just ask the patient to aggressively avoid the sun and use liberal amounts of sun screen? Do we start a monthly screening, bi-monthly, yearly, for how long do we keep it up? Should we only focus on the part that shows high risk or everywhere? Should we use the ML every single time? What is the most efficient and effective use of the tech? We know it’s accurate, but is it reliable?

There are a lot of moving parts to a general medical practice. AI has to find a proper role, which requires not just an abstract statistic from an ad-hoc study but a systematic approach to healthcare. Right now it doesn’t have that, because the AI model can’t tell its handlers what it is seeing, what it means, and how it fits in the holistic view of human health. We can’t just blindly trust it when there are human lives on the line.

As you can see, this seems to relegate AI to a research role for the time being, not a diagnosing capacity yet.

5 points

You are correct, and this is a big reason why “explainable AI” is becoming a bigger thing now.

3 points

There is a very complex algorithm for determining your risk of skin cancer: Take your age … then add a percent symbol after it. That is the probability that you have skin cancer.

5 points

Like you say, “AI” isn’t just LLMs and making images. We have previously seen, for example, expert systems, speech recognition, natural language processing, computer vision, and machine learning, and now LLMs and generative art.

The earlier technologies have gone through their own hype cycles and come out the other end to be used in certain useful ways. AI has no doubt already done remarkable things in various industries. I can only imagine that will be true for LLMs some day.

I don’t think we are very close to AGI yet. Current AI like LLMs and machine vision requires a lot of manual training and tuning. As far as I know, few AI technologies can learn entirely on their own, and those that do are limited in scope. I’m not even sure AGI is really necessary to solve most problems. We may do AI “à la carte” for many years, and one day someone will stitch a bunch of things together, et voilà.

5 points

Thanks.

I’m glad you mentioned speech. Tortoise-TTS is an excellent text-to-speech AI tool that anyone can run on a GPU at home. I’ve been looking for a TTS tool that can generate a more natural-sounding voice for several years. Tortoise is somewhat labor-intensive to use for now, but to my ear it sounds much better than the more expensive cloud-based solutions. It can clone voices convincingly, too (which is potentially problematic).
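If anyone wants to try it, the basic flow looks roughly like this; I’m going from memory of the project’s README, so check the repo for the current API (“tom” is one of the bundled sample voices):

```python
import torchaudio
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_voice

tts = TextToSpeech()  # downloads model weights on first run
voice_samples, conditioning_latents = load_voice("tom")
gen = tts.tts_with_preset(
    "Text to speak goes here.",
    voice_samples=voice_samples,
    conditioning_latents=conditioning_latents,
    preset="fast",  # trades quality for speed; "standard" sounds better
)
torchaudio.save("generated.wav", gen.squeeze(0).cpu(), 24000)
```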

2 points

Ooh thanks for the heads up. Last time I played with TTS was years ago using Festival, which was good for the time. Looking forward to trying Tortoise TTS.

2 points

Honestly, I believe AGI is currently more a compute resource problem than a software problem. A paper came out a while ago showing that individual neurons in the human brain display behavior comparable to decently sized deep learning models. If this is true, the number of nodes required for artificial neural nets to even come close to human-like intelligence may be astronomically higher than predicted.

3 points

That’s my understanding as well: our brain is just an insane composition of incredibly simple mechanisms, compositions of compositions of compositions, ad nauseam. We are manually simulating billions of years of evolution, using ourselves as a blueprint. We can get there… it’s hard to say when we’ll get there, but it’ll be interesting to watch.

63 points

It is overrated, at least when people look at AI as some sort of brain crutch that excuses them from learning stuff.

My boss now believes he can “program too” because he lets ChatGPT write scripts for him that more often than not are poor BS.

He also enters chunks of our code into ChatGPT when we report bugs or aren’t finished with everything in 5 minutes, as some kind of “gotcha moment,” ignoring that the solutions he then provides don’t work.

Too many people see LLMs as authorities they just aren’t…

8 points

It bugs me how easily people (a) trust the accuracy of the output of ChatGPT, (b) feel like it’s somehow safe to use output in commercial applications or to place output under their own license, as if the open issues of copyright aren’t a ten-ton liability hanging over their head, and (c) feed sensitive data into ChatGPT, as if OpenAI isn’t going to log that interaction and train future models on it.

I have played around a bit, but I simply am not carefree/careless or am too uptight (pick your interpretation) to use it for anything serious.

5 points

Too many people see LLMs as authorities they just aren’t…

This is more a ‘human’ problem than an ‘AI’ problem.

In general, it’s weird as heck that the industry is going full force into chatbots as a search replacement.

Like, that was a neat demo for a low-hanging-fruit use case, but it’s pretty damn far from the ideal production application, given that the tech isn’t actually memorizing facts; when it does get things right, the reaction is “wow, this is impressive, because it really shouldn’t be doing a good job at this.”

Meanwhile nearly no one is publicly discussing their use as classifiers, which is where the current state of the tech is a slam dunk.
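For what it’s worth, the classifier use looks roughly like this. A sketch: complete() stands in for whatever chat-completion call your provider exposes, since the point is the constrained label set, not the specific API:

```python
LABELS = ["billing", "bug_report", "feature_request", "other"]

def complete(prompt):
    # Stand-in for a real chat-completion API call.
    return "bug_report"

def classify(ticket_text):
    # Constrain the model to a fixed label set instead of letting it
    # free-associate "facts"; classification plays to the tech's strengths.
    prompt = (
        "Classify the support ticket into exactly one of "
        f"{LABELS}. Reply with the label only.\n\nTicket: {ticket_text}"
    )
    label = complete(prompt).strip()
    return label if label in LABELS else "other"

print(classify("The app crashes when I upload a photo."))  # bug_report
```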

Overall, the past few years have opened my eyes to just how broken human thinking is, not as much the limitations of neural networks.
