How stupid do you have to be to believe that only 8% of companies have seen failed AI projects? We can’t manage this consistently with CRUD apps and people think that this number isn’t laughable? Some companies have seen benefits during the LLM craze, but not 92% of them. 34% of companies report that generative AI specifically has been assisting with strategic decision making? What the actual fuck are you talking about?
…
I don’t believe you. No one with a brain believes you, and if your board believes what you just wrote on the survey then they should fire you.
I had my fun with Copilot before I decided that it was making me stupider - it’s impressive, but not actually suitable for anything more than churning out boilerplate.
This. Many of these tools are good at incredibly basic boilerplate that’s just a hint beyond what, say, a wizard would generate. But to hear some of these AI grifters talk, this stuff is going to render programmers obsolete.
There’s a reality to these tools. That reality is they’re helpful at times, but they are hardly transformative at the levels the grifters go on about.
I use them like wikipedia: it’s a good starting point and that’s it (and this comparison is a disservice to wikipedia).
Yep! It’s a good way to get over the fear of a blank page, but I don’t trust it for more than outlines or summaries
Man, I need to build some new shit.
I can’t remember the last time I looked at a blank page.
I interviewed a candidate for a senior role, and they asked if they could use AI tools. I told them to use whatever they normally would, I only care that they get a working answer and that they can explain the code to me.
The problem was fairly basic, something like randomly generate two points and find the distance between them, and we had given them the details (e.g. that the distance is a straight line). They used AI, which went well until it generated the Manhattan distance instead of applying the Pythagorean theorem. They didn’t correct it, so we pointed it out and gave them the equation (totally fine, most people forget it under pressure). Anyway, they refactored the code and used AI again, which made the same mistake; they didn’t catch it, and we ended up pointing it out again.
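For anyone wondering how big the difference is, it’s a one-liner either way; a rough Python sketch (variable names mine, not the candidate’s):

```python
import math

def manhattan(p, q):
    # what the AI kept generating: grid ("taxicab") distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    # what the problem asked for: straight-line distance
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

print(manhattan((0, 0), (3, 4)))  # 7
print(euclidean((0, 0), (3, 4)))  # 5.0
```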
Anyway, at the end of the challenge, we asked them how confident they felt about the code and what they’d need to do to feel more confident (nudge toward unit testing). They said their code was 100% correct and they’d be ready to ship it.
They didn’t pass the interview.
And that’s my opinion about AI in general: it’s probably making you stupider.
I’ve seen people defend using AI this way by comparing it to using a calculator in a math class, i.e. if the technology knows it, I don’t need to.
And I feel like, for the kind of people whose grasp of technology, knowledge, and education are so juvenile that they would believe such a thing, AI isn’t making them dumber. They were already dumb. What the AI does is make code they don’t understand more accessible, which is to say, it’s just enabling dumb people to be more dangerous while instilling them with an unearned confidence that only compounds the danger.
Yup. And I’m unwilling to be the QC in a coding assembly line, I want competent peers who catch things before I do.
But my point isn’t that AI actively makes individuals dumber, it’s that it’s making people in general dumber. I believe that to be true about a lot of technology. In the 80s, people were familiar with command-line interfaces, and jumping into some coding wasn’t a huge leap, but today people can’t figure out how to do a thing unless there’s an app for it. AI is just the next step along that path. Soon, even traditionally competent industries will be little more than QC, and nobody will remember how the sausage is made.
If they can demonstrate that they know how the sausage is made and how to inspect a package of sausages, I’m fine with it. But if they struggle to even open the sausage package, we’re going to have problems.
Yeah, I honestly don’t have any real issue with using it to accelerate your workflow. It’s hit or miss how much it actually helps, but it’s probably a slight step up from code completion without “AI”.
But if you don’t understand every line of code “you” write completely, you’re being grossly negligent and begging for a shitshow.
Similar story: I had a junior dev put in a PR for SQL that takes lat and long and gives back distance. The PR used the Haversine formula, but with the km coefficient rather than the one for miles.
I asked where they got it and they indicated AI. I sighed, pointed out why it was wrong, and noted that we had PostGIS, that there are literally scalar functions available that will do the calculation way faster, and that they should use those.
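For anyone who hasn’t been bitten by this: the Haversine formula itself is unit-agnostic, and the Earth-radius constant you multiply by decides whether you get kilometres or miles. A rough Python sketch of the hand-rolled version (radius values are the usual approximations):

```python
import math

EARTH_RADIUS_KM = 6371.0   # multiply by this and you get kilometres
EARTH_RADIUS_MI = 3958.8   # multiply by this and you get miles

def haversine(lat1, lon1, lat2, lon2, radius=EARTH_RADIUS_MI):
    # great-circle distance between two lat/long points, in whatever
    # unit the radius constant is expressed in
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * radius * math.asin(math.sqrt(a))
```

In PostGIS you skip all of this; something like ST_DistanceSphere on two point geometries returns the distance in metres directly.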
There’s a clear over-reliance on code generation. That said, it’s pretty good for things that I can eye-scan and verify are what I would have typed anyway. But I’ve found it suggesting everything from things I wouldn’t remotely permit to things that are “sort of” correct. I’ll let it pop in the latter case and go back and clean it up. But yeah, anyone blindly trusting AI shouldn’t be allowed to make final commits.
I just don’t bother, under the assumption that I’ll spend more time correcting the mistakes than actually writing the code myself. Maybe that’s faulty, as I haven’t tried it myself (mostly because it’s hard to turn on in my editor, vim).
it’s pretty good for things that I can eye-scan and verify are what I would have typed anyway. But I’ve found it suggesting everything from things I wouldn’t remotely permit to things that are “sort of” correct.
Yeah. I haven’t bothered with it much but the best use I can see of it is just rubber ducking.
Last time I used it was to ask how to change the contrast of a numpy image. It said to multiply each channel by the contrast factor. I don’t even think that’s right: it should be ((original value - 128) * contrast) + 128, not original value * contrast as it suggested. But it did remind me that I can just run operations on colour channels.
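In numpy terms, the version I’d expect looks something like this (a quick sketch, assuming an 8-bit image; the clip matters or values wrap around):

```python
import numpy as np

def adjust_contrast(img, contrast):
    # scale pixel values around the midpoint (128), not around zero,
    # and do the maths in float so intermediates can leave 0..255
    out = (img.astype(np.float32) - 128.0) * contrast + 128.0
    return np.clip(out, 0, 255).astype(np.uint8)
```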
Wait what’s my point again? Oh yeah, don’t trust anyone that can’t tell you what the output is supposed to do.
Wait wait wait, so… this person forgot the Pythagorean theorem?
Like, that is the most basic task. It’s d = sqrt((x1 - x2)^2 + (y1 - y2)^2), right?
That was off the top of my head. This person didn’t understand that? Do I get a job now?
I have seen a lot of programmers talk about how much time it saves them. It’s entirely possible it makes them very fast at making garbage code. One thing I’ve known for a long time is that understanding code is much harder than writing it, and so asking an LLM to generate your code sounds like it’s just creating harder work for you, unless you don’t care about getting it right.
Yup, you’re hired as whatever position you want. :)
Our instructions were basically:
- randomly place N coordinates on a 2D grid, and a random target point
- report the closest of those N coordinates to the target point
It was technically different (we phrased it as a top-down game, but same gist). The AI generated Manhattan distance (abs(x2 - x1) + abs(y2 - y1)), probably due to other clues in the text, but the instructions were clear. The candidate didn’t notice what it was doing, we pointed it out, and then they asked for the algorithm, which we provided.
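For reference, roughly what we were hoping to see, as a quick Python sketch (the real exercise was phrased as a game, but this is the gist):

```python
import math
import random

def closest_point(n=10, grid=100):
    points = [(random.randint(0, grid), random.randint(0, grid)) for _ in range(n)]
    target = (random.randint(0, grid), random.randint(0, grid))

    def dist(p):
        # straight-line (Euclidean) distance, per the instructions
        return math.hypot(p[0] - target[0], p[1] - target[1])

    return target, min(points, key=dist)

print(closest_point())
```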
Our better candidates remember the equation like you did. But we don’t require it, since not all applicants finished college (this one did). We’re more concerned about code structure, asking proper questions, and software design process, but math knowledge is cool too (we do a bit of that).
I don’t want to believe that coders like these exist and are this confident in an AI’s ability to code.
A co-worker told me another story.
His friend was in a programming class and made it nearly to the end when he asked my co-worker for help. Basically, he had already written the solution, but it wasn’t working, and he needed help debugging it. My co-worker looked at the code, and it looked AI-generated because there were obvious mistakes throughout, so he asked his friend to walk him through the code, and that’s when the friend admitted to having AI generate the whole thing. My co-worker refused to help.
They do exist, but this candidate wasn’t that. I think they were just under pressure and didn’t know the issue. The red flag for me wasn’t AI or not catching the AI issues, it was that when I asked how confident they were about the code (after us catching the same bug twice), they said 100% and they didn’t need any extra assurance (I would’ve wanted to write tests).
Copilot / LLM code completion feels like having a somewhat intelligent helper who can think faster than I can; however, they have no understanding of how to actually code, they’re just good at mimicry.
So it’s helpful for saving time typing some stuff, and sometimes the absolutely weird suggestions make me think of other scenarios I should consider, but it’s not going to do the job itself.
So it’s helpful for saving time typing some stuff
Legitimately, this is the only use I found for it. If I need something extremely simple, and feeling too lazy to type it all out, it’ll do the bulk of it, and then I just go through and edit out all little mistakes.
And what gets me is that anytime I read all of the AI wank about how people are using these things, it kind of just feels like they’re leaving out the part where they have to edit the output too.
At the end of the day, we’ve had this technology for a while, it’s just been in the form of predictive suggestions on a keyboard app or code editor. You still had to steer in the right direction. Now it’s just smart enough to make it from start to finish without going off a cliff, but you still have to go back and fix it, the same way you had to steer it before.
I think we all had that first moment where Copilot generated a good snippet and we were blown away. But having used it for a while now, I find most of what it suggests feels like a joke.
Like it does save some typing / time spent checking docs, but you have to be very careful to check its work.
I’ve definitely seen a lot more impressively voluminous, yet flawed pull requests, since my employer started pushing for everyone to use it.
I foresee a real reckoning of unmaintainable codebases in a couple years.
Yes, and then you take the time to dig a little deeper and use something agent-based like aider, crewai, or autogen. It’s amazing how many people are stuck in the mindset of “if the simplest tools from over a year ago aren’t very good, then there’s no way there are any good tools now.”
It’s like seeing the original Planet of the Apes and then arguing against how realistic the Apes are in the new movies without ever seeing them. Sure, you can convince people who really want unrealistic Apes to be the reality, and people who only saw the original, but you’ll do nothing for anyone who actually saw the new movies.
I’ve used crewai and autogen in production… And I still agree with the person you’re replying to.
The two main problems with agentic approaches I’ve found thus far:
- One mistake or hallucination will propagate through the rest of the agentic task. I’ve even tried adding a QA agent for this purpose, but what ends up happening is that those agents aren’t reliable either, which also leads to the main issue:
- It’s very expensive to run and rerun agents at scale. Because each agent can call other agents, you can end up with an exponentially growing number of calls (rough sketch below); my colleague at one point ran a job that cost $15 for what could have been a simple task.
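To put rough numbers on that scaling, a back-of-envelope sketch (the branching factor and depth are made up for illustration):

```python
def total_calls(branching, depth):
    # each agent can delegate to `branching` sub-agents, recursively,
    # down to `depth` levels: 1 + b + b^2 + ... + b^depth calls worst case
    return sum(branching ** level for level in range(depth + 1))

print(total_calls(3, 4))  # 121 LLM calls to serve one top-level task
```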
One last consideration: the current LLM providers are very aware of these issues or they wouldn’t be as concerned with finding “clean” data to scrape from the web vs using agents to train agents.
If you’re using crewai btw, be aware there is some builtin telemetry with the library. I have a wrapper to remove that telemetry if you’re interested in the code.
Personally, I’m kinda done with LLMs for now and have moved back to my original machine learning pursuits in bioinformatics.
Also, a lot of people who are using AI have become quiet about it of late exactly because of reactions like this article’s. Okay, you’ll “piledrive” me if I mention AI? So I won’t mention AI. I’ll just carry on using it to make whatever I’m making without telling you.
There’s some great stuff out there, but of course people aren’t going to hear about it broadly if every time it gets mentioned it gets “piledriven.”
Pretty much me. I am using it everywhere but usually not interested in mentioning it to some internet trolls.
You can check my profile if you want, or not. 7 months ago I baked my first loaf of bread; I got the recipe from ChatGPT. Over those 7 months I’ve been going back and forth with it on recipes and techniques, and as of this month I have a part-time gig making artisan breads for a restaurant.
There is no way I could have progressed this fast without that tool. Keep in mind I have a family and a career in engineering, not exactly an abundance of time to take classes.
I mentioned this once on Lemmy and some boomer shit started screaming about how learning to bake with the help of an AI didn’t count and that I needed to buy baking books.
Edit: spelling
I don’t fear Artificial Intelligence, I fear Administrative Idiocy. The managers are the problem.
Fortunately, it’s my job as your boss to convince my boss and my boss’s boss that AI can’t replace you.
We had a candidate spectacularly fail an interview when they used AI and didn’t catch the incredibly obvious errors it made. I keep a few examples of that handy to defend my peeps in case my boss or boss’s boss decide AI is the way to go.
I hope your actual boss would do that for you.
They’ll replace you first, so they can replace your employees… even though you are clearly right.
Another friend of mine was reviewing software intended for emergency services, and the salespeople were not expecting someone handling purchasing in emergency services to be a hardcore programmer. It was this false sense of security that led them to accidentally reveal that the service was ultimately just some dude in India. Listen, I would just be some random dude in India if I swapped places with some of my cousins, so I’m going to choose to take that personally and point out that using the word AI as some roundabout way to sell the labor of people that look like me to foreign governments is fucked up, you’re an unethical monster, and that if you continue to try { thisBullshit(); } you are going to catch (theseHands)
This aspect of it isn’t getting talked about enough. These companies are presenting these things as fully-formed AI, while completely neglecting the people behind the scenes constantly cleaning it up so it doesn’t devolve into chaos. All of the shortcomings and failures of this technology are being masked by the fact that there’s actual people working round the clock pruning and curating it.
You know, humans, with actual human intelligence, without which these miraculous “artificial intelligence” tools would not work as they seem to.
If the "AI’ needs a human support team to keep it “intelligent”, it’s less AI and more a really fancy kind of puppet.
I don’t think the author was referring to people pruning AI data, but rather to mechanical turk situations like what recently happened with Amazon.
Hacker News was silencing this article outright. That’s typically a sign that it’s factual enough to strike a nerve with the potential CxO libertarian [slur removed] crowd.
If this is satire, I don’t see it, because I’ve seen enough of the GenAI crowd openly undermine society/the environment/the culture and be brazen about it; violence is a perfectly normal response.
What happened to HN? I’ve now heard of HN silencing certain posts multiple times. Is this enshittification?
HN is run by a VC firm, Y Combinator. One of its largest supporters is OpenAI CEO Sam Altman. Do the math.
I’m libertarian, I’m against this. I’m also against blockchain scams.
My ideas on digital currencies and on something like artificial intelligence are simply an extension of the usual ancap\panarchy ideas. It’s actually a very good test for any libertarian you meet: they’ll usually agree that a “meta-society” consisting of voluntary exterritorial jurisdictions (which can be anything from crack-smoking ancap tribes to solarpunk communes), with some overarching security system to protect those jurisdictions from being ignored by somebody well-armed, is good. Then you just have to ask why the systems they like for currencies and for this are clearly manifestations of a different ideology.
Sorry, I don’t understand what you mean by that. The person I replied to had “[slur removed]” in their comment; they’re on lemmy.ml
I will learn enough judo to throw you into the sun
best line