https://nmn.gl/blog/ai-illiterate-programmers
Relevant quote
Every time we let AI solve a problem we could've solved ourselves, we're trading long-term understanding for short-term productivity. We're optimizing for today's commit at the cost of tomorrow's ability.
I like the sentiment of the article; however, this quote really rubs me the wrong way:
I'm not suggesting we abandon AI tools - that ship has sailed.
Why would that ship have sailed? No one is forcing you to use an LLM. If, as the article supposes, using an LLM is detrimental, and it's possible to start having days where you don't use an LLM, then what's stopping you from increasing the frequency of those days until you're not using an LLM at all?
I personally don't interact with any LLMs, neither at work nor at home, and I don't have any issue getting work done. Yeah, there was a decently long ramp-up period - maybe about 6 months - when I started on my current project at work, where it was more learning than doing; but now I feel like I know the codebase well enough to approach any problem I come up against. I've even debugged USB driver stuff, and, while it took a lot of research and reading USB specs, I was able to figure it out without any input from an LLM.
Maybe it's just because I've never bought into the hype; I just don't see how people have such high respect for LLMs. I'm of the opinion that using an LLM has potential only as a truly last resort - and even then it will likely not be useful.
Why would that ship have sailed?
Because the tools are here and not going away
then what's stopping you from increasing the frequency of those days until you're not using an LLM at all?
The actually useful shit LLMs can do. Their point is that relying mostly on an LLM hurts you; that doesn't make it an invalid tool in moderation.
You seem to think of an LLM only as something you can ask questions to; that's one of their worst capabilities and far from the only thing they do.
Because the tools are here and not going away
Swiss army knives have had awls for ages. I've never used one. The fact that the tool exists doesn't mean that anybody has to use it.
The actually useful shit LLMs can do
Which is?
Because the tools are here and not going away
I agree with this on a global scale; I was thinking on a personal scale. In the context of the entire world, I do think the tools will be around for a long time before they ever fall out of use.
The actually useful shit LLMs can do.
I'll be the first to admit I don't know many use cases of LLMs. I don't use them, so I haven't explored what they can do. As my experience is simply my own, I'm certain there are uses of LLMs that I haven't considered. I'm personally of the opinion that I won't gain anything out of LLMs that I can't get elsewhere; however, if a tool helps you more than any other method, then that tool could absolutely be useful.
Not even. Every time someone lets AI run wild on a problem, they're trading all the trust I ever had in them for complete garbage that they're not even personally invested in enough to defend when I criticize their absolute shit code. Don't submit it for review if you haven't reviewed it yourself, Darren.
My company doesn't even allow AI use, and the number of times I've tried to help a junior diagnose an issue with a simple script they made, only to be told that they don't actually know what their code does well enough to even begin troubleshooting…
"Why do you have this line here? Isn't that redundant?"
"Well, it was in the example I found."
"Ok, what does the example do? What is this line for?"
Crickets.
I'm not trying to call them out; I'm just hoping that I won't need to familiarize myself with their whole project and every fucking line in their script to help them, because at that point it'd be easier to just write it myself than to try to guide them.
"Every time we use a lever to lift a stone, we're trading long-term strength for short-term productivity. We're optimizing for today's pyramid at the cost of tomorrow's ability."
Precisely. If you train by lifting stones, you can still use the lever later, but you'll be able to lift even heavier things by using both your new strength AND the lever's mechanical advantage.
By analogy, if you're using LLMs to do the easy bits in order to spend more time on the harder problems, fuckin' A. But the idea that you can just replace actual coding work with copy-paste is a shitty one. Again, by analogy with rock lifting: now you have noodle arms and can't lift shit if your lever breaks or doesn't fit under a particular rock or whatever.
Also: assuming you know what the easy bits are before you actually have experience doing them is a recipe for training incorrectly.
I use plenty of tools to assist my programming work. But I learn what I'm doing and why first. Then, once I have that experience, if there's a piece of code I find myself using or looking up frequently, I make myself a template (VS Code's snippet feature is fucking amazing when you build your own snips well, btw).
"If my grandma had wheels she would be a bicycle. We are optimizing today's grandmas at the sacrifice of tomorrow's eco-friendly transportation."
Yeah, fake. No way you can get 90%+ using ChatGPT without understanding code. LLMs barf out so much nonsense when it comes to code. You have to correct it frequently to make it spit out working code.
If we're talking about freshman CS 101, where every assignment is the same year-over-year and it's all machine graded, yes, 90% is definitely possible because an LLM can essentially act as a database of all problems and all solutions. A grad student TA can probably see through his "explanations", but they're probably tired from their endless stack of work, so why bother?
If we're talking about a 400-level CS class, this kid's screwed, and even someone who's mastered the fundamentals will struggle through advanced algorithms and reconciling math ideas with hands-on-keyboard software.
Are you guys just generating insanely difficult code? I feel like 90% of all my code generation with o1 works first time? And if it doesn't, I just let GPT know and it fixes it right then and there?
The problem is more complex than initially thought, for a few reasons.
One, the user is not very good at prompting, and will often fight with the prompt to get what they want.
Two, oftentimes the user has a very specific vision in mind, which the AI obviously doesn't know, so the user ends up fighting that.
Three, the AI is not omniscient and just fucks shit up, makes goofy mistakes sometimes. Version assumptions, code compat errors, just weird implementations of shit - the kind of stuff you would expect AI to do that's going to make it harder to manage code after the fact.
Unless you're using AI strictly to write isolated scripts in one particular language, AI is going to fight you at least some of the time.
I asked an LLM to generate tests for a 10-line function with two arguments, no if branches, and only one library function call. It's just a for loop and some math. Somehow it invented arguments, and the ones that actually ran didn't even pass. It made like 5 test functions, spat out paragraphs explaining nonsense, and it still didn't work.
This was one of the smaller DeepSeek models, so perhaps a fancier model would do better.
I'm still messing with it, so maybe I'll find some tasks it's good at.
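The function itself isn't shown, but for a sense of scale, here's a hypothetical one matching that description (two arguments, a for loop, some math, one library call) alongside the couple of hand-written test lines it actually needs - a sketch with made-up names, not the code from that comment:

```python
import math

def windowed_rms(values, window):
    # Hypothetical stand-in matching the description above: two arguments,
    # one for loop, some math, and a single library call (math.sqrt).
    result = []
    for i in range(len(values) - window + 1):
        chunk = values[i:i + window]
        result.append(math.sqrt(sum(x * x for x in chunk) / window))
    return result

def test_windowed_rms():
    # A hand-written test only needs a known input and a known expected output.
    assert windowed_rms([5.0], 1) == [5.0]
    assert windowed_rms([3.0, 4.0, 3.0, 4.0], 2) == [math.sqrt(12.5)] * 3
```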
I just generated an entire Angular component (table with filters, data services, using in-house software patterns and components, based off of existing work) using Copilot for work yesterday. It didn't work at first, but I'm a good enough software engineer that I iterated on the issues, discarding bad edits and referencing specific examples from the extant codebase, and got Copilot to fix it. 3-4 days of work (if you were already familiar with the existing way of doing things) done in about 3-4 hours.

But if you didn't know what was going on and how to fix it, you'd end up with an unmaintainable, non-functional mess, full of bugs we have specific fixes in place to avoid but Copilot doesn't care about, because it doesn't have an idea of how software actually works, just what it should look like. So for anything novel or complex you have to feed it an example, then verify it didn't skip steps, forget to include something it didn't understand/predict, or make up a library/function call. You have to know enough about the software you're making to point that stuff out, because just feeding whatever error pops out of your compiler back into the AI may get you to working code, but it won't ensure quality code, maintainability, or intelligibility.
My first attempt at coding with ChatGPT was asking about saving information to a file with Python. I wanted to know what libraries were available and the syntax to use them.
It gave me a three-page write-up about how to write a library myself, in Python. Only it had an error on damn near every line, so I still had to go Google the actual libraries and their syntax and slog through documentation.
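For what it's worth, the answer being asked for is short: Python's standard library already covers the common cases, so there's no library to hand-roll. A minimal sketch (file names are just placeholders):

```python
import json
import pickle

data = {"name": "example", "values": [1, 2, 3]}

# Plain text: the built-in open() needs no extra library at all.
with open("notes.txt", "w") as f:
    f.write("anything you like\n")

# Structured, human-readable data: json from the standard library.
with open("data.json", "w") as f:
    json.dump(data, f, indent=2)

# Arbitrary Python objects: pickle, also standard library (binary format).
with open("data.pkl", "wb") as f:
    pickle.dump(data, f)
```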
You mean o3-mini? Wasn't it on the level of o1, just much faster and cheaper? I noticed no increase in code quality, perhaps even a decrease. For example, it forgets things far more often, like variables that have a different name. It also easily ignores a bunch of my very specific and enumerated requests.
o3 something… i think the bigger version….
but, i saw a video where it wrote a working game of snake, and then wrote an ai training algorithm to make an ai that could play snake… all of the code ran on the first try….
could be a lie though, i dunno….
deserved to fail
The bullshit part is that anon wouldn't be fucked at all.
If anon actually used ChatGPT to generate some code, then memorized it and understood it well enough to explain it to a professor and get a 90%, congratulations - that's called "studying".
Yeah, if you memorized the code and its functionality well enough to explain it in a way that successfully bullshits someone who can sight-read it… you know how that code works. You might need a linter, but you know how that code works and can probably at least fumble your way through a shitty v0.5 of it.
I don't think that's true. That's like saying that watching hours of guitar YouTube is enough to learn to play. You need to practice too, and learn from mistakes.
I don't think that's quite accurate.
The "understand it well enough to explain it to a professor" clause is carrying a lot of weight here - if that part is fulfilled, then yeah, you're actually learning something.
Unless, of course, all of the professors are awful at their jobs too. Most of mine were pretty good at asking very pointed questions to figure out what you actually knew, and could easily unmask a bullshit artist with a short conversation.
You don't need physical skills to program; there is nothing that needs to be honed into muscle memory by repetition. If you know how to type and what to type, you're ready to type. If you know which strings to pluck, you still need to train your fingers to do it - that's a different skill.
It's more like if you played a song on Guitar Hero enough to be able to pick up a guitar and convince a guitarist that you know the song.
Code from ChatGPT (and other LLMs) doesn't usually work on the first try. You need to go fix and add code just to get it to compile. If you actually want it to do whatever your professor is asking you for, you need to understand the code well enough to edit it.
It's easy to try for yourself. You can go find some simple programming challenges online and see if you can get ChatGPT to solve a bunch of them for you without having to dive in and learn the code.
I mean, I feel like, depending on what kind of problems they started off with, ChatGPT probably could just solve simple first-year programming problems. But yeah, as you get to higher-level classes, it will definitely not fully solve the stuff for you, and you'd have to actually go in and fix it.
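For a sense of what "simple first-year programming problems" means here, think of something on the order of the classic exercise below - the sort of thing an LLM has seen countless solutions to (the function name is just for illustration):

```python
def fizzbuzz(n):
    # Classic CS-101 exercise: build the FizzBuzz sequence for 1..n.
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
```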
No, he's right. Before ChatGPT there was Stack Overflow. A lot of learning to code is learning to search up solutions on the Internet. The crucial thing is to learn why that solution works, though. Memorizing code like a language is impossible. You'll obviously memorize some common stuff, but things change really fast in the programming world.
If it's the first course where they use Java, then one could easily learn it in 21 hours, with time for a full night's sleep. Unless there's no code completion and you have to write imports by hand. Then, you're fucked.
If there's no code completion, I can tell you even people who've been coding as a job for years aren't going to write it correctly from memory. Because we're not being paid to memorize this shit; we're being paid to solve problems optimally.
My undergrad program had us write Java code by hand for some beginning assignments and exams. The TAs would then type whatever we wrote into Eclipse and see if it ran. They usually graded pretty leniently, though.
My first programming course (in Java) had a pen and paper exam. Minus points if you missed a bracket. :/