15 points

And of course the AI put rail signals in the middle.

Chain in, rail out. Always.

!Factorio/Create mod reference if anyone is interested !<

1 point
Spoiler

Your spoiler didn’t work.

69 points

And then 12 hours spent debugging and pulling it apart.

29 points

And if you need anything else, you have to use a new prompt, which will generate a brand-new application. It’s fun!

2 points

That’s not really how agentic AI programming works anymore. Tools like Cursor automatically pick files as “context”, and you can manually add files or even the whole codebase. That obviously uses way more tokens, though.

3 points

We’re in trouble when it learns to debug.

5 points

But then, as now, it won’t understand what it’s supposed to do, and will merely attempt to apply stolen code - ahem - training data in random permutations until it roughly matches what it interprets the end goal to be.

We’ve moved beyond a thousand monkeys with typewriters and a thousand years to write Shakespeare, into several million monkeys with copy and paste and only a few milliseconds to write “Hello, SEGFAULT”.

9 points

And it still doesn’t work. Just “mostly works”.

3 points

A bunch of superfluous code that you find does nothing.

130 points

You can instantly get whatever you want, only it’s made from 100% technical debt.

28 points

That estimate seems a little low to me. It’s at least 115%.

21 points

Even more. The first 100% of the tech debt is just understanding “your own” code.

2 points

Just start again from scratch for every feature 🤣

40 points

I’m looking forward to the next 2 years, when AI apps are in the wild and I get to fix them, lol.

As a senior dev, the wheel just keeps turning.

19 points

I’m being pretty resistant about AI code gen. I assume we’re not too far away from “Our software product is a handcrafted, bespoke solution to your B2B needs that will enable synergies without exposing your entire database to the open web”.

8 points

without exposing your entire database to the open web until well after your payment to us has cleared, so it’s fine.

Lol.

19 points

It has its uses. For templating and/or getting a small project off the ground, it’s useful. It can get you 90% of the way there.

But the meme is SOOO correct. AI does not understand what it is doing, even with context. The things junior devs are giving me really make me laugh. I legit asked why they were throwing a very old version of React on the front end of a new project, and they stated they “just did what ChatGPT told them” and that it “works”. That’s just last month or so.

The AI that is out there is all based on old posts and isn’t keeping up with new stuff. So you get a lot of same-ish looking projects that have some very strange/old decisions to get around limitations that no longer exist.

7 points

Yeah, personally I think LLMs are fine for writing a single function, or to rubber-duck with for debugging or thinking through some details of your implementation, but I’d never use one to write a whole file or project. They have their uses, and I do occasionally use something like Ollama to talk through a problem and get some code snippets as a starting point. Trying to do much more than that is asking for problems, though. It makes debugging way harder, because it becomes reading code you haven’t written; it can make the code style inconsistent; and a not-insignificant amount of the time, even in short code segments, it will hallucinate a nonexistent function or implement something incorrectly, so using it to write massive amounts of code makes that way more likely.

3 points

The AI also enables some very bad practices.

It does not refactor, and it makes writing repetitive code so easy that you miss opportunities to abstract. In a week, when you go to refactor, you’re going to spend twice as long on that task.

As long as you know what you’re doing and guide it accordingly, it’s a good tool.
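A toy sketch of the kind of repetition being described (all function and file names are invented for illustration):

```python
# Repetitive style that code generation happily multiplies:
def load_users(path):
    with open(path) as f:
        return [line.strip().split(",") for line in f if line.strip()]

def load_orders(path):
    with open(path) as f:
        return [line.strip().split(",") for line in f if line.strip()]

# The abstraction you skip when each copy costs nothing to generate:
def load_rows(path):
    with open(path) as f:
        return [line.strip().split(",") for line in f if line.strip()]
```

Three near-identical loaders are trivial to generate, but they triple the surface area you have to change later; one `load_rows` would have done.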

14 points

Holdup! You’ve got actual, employed, working, graduated juniors who are handing in code that they don’t even understand?

3 points

Our gluten-free code is handcrafted with all-natural intelligence.

3 points

You can get decent results from AI coding models, though…

…as long as somebody who actually knows how to program is directing it. Like if you tell it what inputs/outputs you want, it can write a decent function - even going so far as to comment it along the way. I’ve gotten o1 to write some basic web apps with Node and HTML/CSS without having to hold its hand much. But we simply don’t have the training, resources, or data to get it to work on units larger than that. Ultimately it’d have to learn from large-scale projects, and it would need the context size to hold, if not the entire project, then significant chunks of it - and that would require some very beefy hardware.

12 points

Generally only for small problems, like things under 300 lines of code. And the problem generally can’t be a novel one.

But that’s still pretty damn impressive for a machine.

9 points

But that’s still pretty damn impressive for a machine.

Yeah. I’m so dang cranky about all the overselling that how cool I think this stuff is often gets lost.

300 lines of boring code from thin air is genuinely cool, and gives me more time to tear my hair out over deployment problems.

2 points

And only if you’re doing something that has been previously done and publicly released.

1 point

Well, not exactly. For example, for a game I was working on, I asked an LLM for a mathematical formula to align 3D normals. Then I couldn’t decipher what it wrote, so I just asked it to write the code for me instead. I can understand it in its code form, and it slid into my game’s code just fine.

Yeah, it wasn’t seamless, but that’s the frustrating hype part of LLMs: they very much won’t replace an actual programmer. But for me, working as the sole developer who actually knows how to code but doesn’t know how to do much of the math a game requires? It’s a godsend. And I guess somewhere deep in some forum somebody’s written this exact formula as a code snippet, but I think it actually just converted the formula into code, and that’s something quite useful.

I mean, I don’t think you and I disagree on the limits of LLMs here. Obviously the formula it pulled out was something published before, and of course I had to direct it. But it’s these emergent solutions you can draw out of it where I find the most use. Of course, you need to actually know what you’re doing, both on the code side and when it comes to “talking” to the LLM, which is why it’s nowhere near useful enough to empower users to code anything with some level of complexity without a developer there to guide it.
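Not the commenter’s actual code, but a minimal sketch of what “align one 3D normal to another” can look like, using the Rodrigues rotation formula (assumes both normals are unit length and not exactly opposite):

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def align_normal(a, b):
    """3x3 rotation matrix taking unit normal a onto unit normal b.

    Rodrigues: R = I + [v]x + [v]x^2 / (1 + cos(theta)), with v = a x b.
    Undefined when a == -b (cos(theta) = -1); a real implementation
    would special-case that.
    """
    v = cross(a, b)  # rotation axis, with length sin(theta)
    c = dot(a, b)    # cos(theta)
    k = 1.0 / (1.0 + c)
    # R written out element-wise: c on the diagonal base, plus v v^T * k
    # and the skew-symmetric cross-product terms off the diagonal.
    return [
        [c + v[0] * v[0] * k,    v[0] * v[1] * k - v[2], v[0] * v[2] * k + v[1]],
        [v[1] * v[0] * k + v[2], c + v[1] * v[1] * k,    v[1] * v[2] * k - v[0]],
        [v[2] * v[0] * k - v[1], v[2] * v[1] * k + v[0], c + v[2] * v[2] * k],
    ]
```

Exactly the sort of thing that exists in a hundred forum posts, yet is still faster to get from an LLM than to re-derive by hand.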


Programmer Humor

!programmer_humor@programming.dev
