cross-posted from: https://programming.dev/post/8121843

~n (@nblr@chaos.social) writes:

This is fine…

“We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group.”

[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)

59 points

I think this is extremely important:

Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities.

Bad programmers + AI = bad code

Good programmers + AI = good code
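
For anyone unfamiliar with the terms in the quote: “adjusting temperature” just means turning down the sampling randomness, and “re-phrasing” means making the prompt more specific about what you actually want. A rough sketch of both, assuming the openai Python client (the model name and prompt are made-up placeholders, not from the study):

```python
# Rough sketch only - assumes the openai>=1.0 Python client and an OPENAI_API_KEY
# in the environment; the model name and prompt are placeholders, not from the study.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",      # placeholder model name
    temperature=0.2,          # lower than the default: less random, more repeatable output
    messages=[
        {"role": "system", "content": "You are a security-conscious code reviewer."},
        # a re-phrased, more specific prompt instead of just "hash a password"
        {"role": "user", "content": "Write a Python function that hashes a password for storage. "
                                    "Use a salted, slow hash (e.g. bcrypt), not MD5 or SHA-1."},
    ],
)

print(response.choices[0].message.content)
```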

12 points

This. As an experienced developer I’ve released enough bugs to mistrust my own work, so I spend as much time as the budget allows on my own personal QA process. It’s no burden at all to do the same with AI code. And of course, a well-structured company has further QA on top of that.

If anything, I find it easier to do that with code I didn’t write myself. Just yesterday I merged a commit with a ridiculous mistake that I should have seen. A colleague noticed it instantly when I was stuck and frustrated enough to reach out for a second opinion. I probably would’ve noticed if an AI had written it.

Also - in hindsight - an AI code audit would have also picked it up.

2 points

That’s the quote above at work: “yet were also more likely to rate their insecure answers as secure compared to those in our control group” :-)

-4 points

I find that the people who complain the most about AI code aren’t professional programmers. Everyone at my company, and all my friends in the industry, are very positive about it.

6 points

Good programmers + AI = extra, unnecessary work just to end up with equal quality code

0 points

Not even close to true but ok

10 points

I’m still of the opinion that…

Good programmers = best code

5 points

Eh, I’ve known lots of good programmers who are super stuck in their ways. Teaching them to use an LLM effectively can help break them out of the mindset that there’s only one way to do things.

3 points

I think that’s one of the best use cases for AI in programming: exploring other approaches.

It’s very time-consuming to play out what your codebase would look like if you had decided differently at the start of the project, so actually comparing different implementations is expensive. That incentivizes people to stick with what they know works well, maybe even more so as they gain experience: they really know that their approach works, and they know what can go wrong otherwise.

Being able to generate code instantly helps a lot in this regard, although it still has to be checked for errors.

7 points

I find it’s useful when writing new code because it can give you a quick first draft of each function, but most of the time I’m modifying existing applications and it’s less useful for that. And you still need to be able to judge for yourself whether the code it offers is any good.

28 points

LLMs amplify biases by design, so this tracks.

5 points

What do you mean? It sounds to me like any other tool: it takes skill to use well, the same as Stack Overflow, built-in code suggestions, or IDE-generated code.

Not to detract from its usefulness; I just mean that it takes knowledge to use well.

1 point

As someone currently studying machine learning theory and how these models are built: at their core, these models contain functions that amplify the bias of the training data, because they identify mathematical associations within that data and use them to generate output. Because of that design, a naive approach to using the tool amplifies not only the bias of the training data but also the bias of the person using it.
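
To make that concrete with a toy example (my own numbers, nothing to do with the paper): even a model that faithfully learns its training distribution will, when it just picks the most likely continuation, turn a 70/30 split in the data into a 100/0 split in its output. A quick sketch:

```python
# Toy illustration of bias amplification - the corpus and patterns are made up for the example.
import random
from collections import Counter

# Hypothetical biased corpus: 70% of snippets hash passwords insecurely, 30% securely.
training_data = ["md5(password)"] * 70 + ["bcrypt(password)"] * 30

freq = Counter(training_data)

# Greedy decoding (temperature -> 0): always emit the single most likely pattern.
greedy_output = freq.most_common(1)[0][0]

# Faithful sampling (temperature = 1): reproduce the training distribution.
sampled = Counter(random.choices(list(freq), weights=list(freq.values()), k=1000))

print("training mix:", {k: v / len(training_data) for k, v in freq.items()})
print("greedy output, 100% of the time:", greedy_output)
print("temperature-1 sample mix:", {k: v / 1000 for k, v in sampled.items()})
```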
