208 points

https://nmn.gl/blog/ai-illiterate-programmers

Relevant quote

Every time we let AI solve a problem we could’ve solved ourselves, we’re trading long-term understanding for short-term productivity. We’re optimizing for today’s commit at the cost of tomorrow’s ability.

33 points

I like the sentiment of the article; however, this quote really rubs me the wrong way:

I’m not suggesting we abandon AI tools—that ship has sailed.

Why would that ship have sailed? No one is forcing you to use an LLM. If, as the article supposes, using an LLM is detrimental, and it’s possible to start having days where you don’t use an LLM, then what’s stopping you from increasing the frequency of those days until you’re not using an LLM at all?

I personally don’t interact with any LLMs, neither at work nor at home, and I don’t have any issue getting work done. Yeah, there was a decently long ramp-up period — maybe about 6 months — when I started on my current project at work, where it was more learning than doing; but now I feel like I know the codebase well enough to approach any problem I come up against. I’ve even debugged USB driver stuff, and, while it took a lot of research and reading USB specs, I was able to figure it out without any input from an LLM.

Maybe it’s just because I’ve never bought into the hype; I just don’t see how people have such high respect for LLMs. I’m of the opinion that using an LLM has potential only as a true last resort — and even then it will likely not be useful.

6 points

Why would that ship have sailed?

Because the tools are here and not going away

then what’s stopping you from increasing the frequency of those days until you’re not using an LLM at all?

The actually useful shit LLMs can do. Their point is that relying mostly on an LLM hurts you; that doesn’t make it an invalid tool in moderation.

You seem to think of an LLM only as something you can ask questions to; that’s one of their weakest capabilities, and far from the only thing they do.

6 points

Because the tools are here and not going away

Swiss army knives have had awls for ages. I’ve never used one. The fact that the tool exists doesn’t mean that anybody has to use it.

The actually useful shit LLMs can do

Which is?

2 points

Because the tools are here and not going away

I agree with this on a global scale; I was thinking about on a personal scale. In the context of the entire world, I do think the tools will be around for a long time before they ever fall out of use.

The actually useful shit LLMs can do.

I’ll be the first to admit I don’t know many use cases of LLMs. I don’t use them, so I haven’t explored what they can do. As my experience is simply my own, I’m certain there are uses of LLMs that I hadn’t considered. I’m personally of the opinion that I won’t gain anything out of LLMs that I can’t get elsewhere; however, if a tool helps you more than any other method, then that tool could absolutely be useful.

41 points

Hey that sounds exactly like what the last company I worked at did for every single project 🙃

10 points

Not even. Every time someone lets AI run wild on a problem, they’re trading all the trust I ever had in them for complete garbage that they’re not even personally invested in enough to defend when I criticize their absolute shit code. Don’t submit it for review if you haven’t reviewed it yourself, Darren.

3 points

My company doesn’t even allow AI use, and the number of times I’ve tried to help a junior diagnose an issue with a simple script they made, only to be told that they don’t actually know what their code does well enough to even begin troubleshooting…

“Why do you have this line here? Isn’t that redundant?”

“Well it was in the example I found.”

“Ok, what does the example do? What is this line for?”

Crickets.

I’m not trying to call them out, I’m just hoping that I won’t need to familiarize myself with their whole project and every fucking line in their script to help them, because at that point it’d be easier to just write it myself than try to guide them.

11 points

This guy’s solution to becoming crappier over time is “I’ll drink every day, but abstain one day a week”.

I’m not convinced that “that ship has sailed” as he puts it.

7 points

Capitalism is inherently short-sighted.

5 points

“Every time we use a lever to lift a stone, we’re trading long term strength for short term productivity. We’re optimizing for today’s pyramid at the cost of tomorrow’s ability.”

12 points

If you don’t understand how a lever works, then that’s a problem. Should we let any person with an AI design and operate a nuclear power plant?

10 points

Precisely. If you train by lifting stones you can still use the lever later, but you’ll be able to lift even heavier things by using both your new strength AND the lever’s mechanical advantage.

By analogy, if you’re using LLMs to do the easy bits in order to spend more time on harder problems, fuckin’ A. But the idea that you can just replace actual coding work with copy-paste is a shitty one. Again, by analogy with rock lifting: now you have noodle arms and can’t lift shit if your lever breaks or doesn’t fit under a particular rock or whatever.

3 points

Also: assuming you know what the easy bits are before you actually have experience doing them is a recipe for training yourself incorrectly.

I use plenty of tools to assist my programming work. But I learn what I’m doing and why first. Then once I have that experience if there’s a piece of code I find myself having to use frequently or having to look up frequently, I make myself a template (vscode’s snippet features are fucking amazing when you build your own snips well, btw).
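For anyone who hasn’t built their own snips: a VS Code user snippet is just a JSON entry with a trigger prefix, a body with tab stops, and a description. A minimal sketch (the name, prefix, and body here are made up for illustration):

```json
{
  "Guarded error log": {
    "prefix": "logret",
    "body": [
      "if err != nil {",
      "\tlog.Printf(\"${1:context}: %v\", err)",
      "\treturn ${2:err}",
      "}$0"
    ],
    "description": "Log an error with context, then return early"
  }
}
```

Typing the prefix and hitting Tab expands the block, with `$1`, `$2`, etc. as cursor stops and `$0` as the final cursor position, so the boilerplate you’d otherwise look up becomes a few keystrokes.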

6 points

“If my grandma had wheels she would be a bicycle. We are optimizing today’s grandmas at the sacrifice of tomorrow’s eco friendly transportation.”

1 point

🤣

3 points

Actually… Yes? People’s health did deteriorate due to over-reliance on technology over the generations. At least, the health of those who have access to that technology.

1 point

LLMs are absolutely not able to create wonders on par with the pyramids. They’re at best as capable as a junior engineer who has read all of Stack Overflow but doesn’t really understand any of it.

2 points

And also possibly checking in code with subtle logic flaws that won’t be discovered until it’s too late.

2 points

Nahhh, I never would have solved that problem myself; I’d have just googled the shit out of it ’til I found someone else who had solved it themselves.

