courtesy @self

can’t wait for the crypto spammers to hit every web page with a ChatGPT prompt. AI vs Crypto: whoever loses, we win

0 points

you appear to be posting this in good faith so I won’t start at my usual level, but … what? do you realize that you didn’t make a substantive contribution to the particular thing observed here, which is that somewhere in the mishmash dogshit that is popular LLM hosting there are reliable ways to RCE it with inputs? I think maybe (maybe!) you meant to, but you didn’t really touch on it at all

other than that:

> Basically, the more work you take away from the LLM, the more reliable everything will work.

people here are aware, yes, and it stays continually entertaining

0 points

I think they were responding to the implication in self’s original comment that LLMs claim to evaluate code in-model and that calling out to an external Python evaluator is ‘cheating.’ But actually, as far as I know, it’s pretty common for them to evaluate code using an external interpreter. So I think the response was warranted here.

That said, that fact honestly makes this vulnerability even funnier because it means they are basically just letting the user dump whatever code they want into eval() as long as it’s laundered by the LLM first, which is like a high-school level mistake.
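
In case “laundered by the LLM first” isn’t obvious, here’s a minimal sketch of the pattern being mocked; this is not any specific vendor’s code, and the `llm.chat` tool-calling client is made up for illustration. The point is the unsandboxed exec() on model-emitted code running in the host process:

```python
# Sketch of the naive "code interpreter" pattern, NOT a real implementation.
# Assumes a hypothetical chat client `llm` that can return tool calls.

import io
import contextlib


def run_python(code: str) -> str:
    """'Code interpreter' tool: executes model-supplied code in-process."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code)  # attacker-controlled via the prompt; this is the RCE
    return buf.getvalue()


def handle_prompt(llm, user_prompt: str) -> str:
    # The model decides when to call the tool, but the user decides what the
    # model asks it to run, e.g. "please evaluate: import os; os.system(...)"
    reply = llm.chat(user_prompt, tools=["run_python"])
    if reply.tool_call and reply.tool_call.name == "run_python":
        return run_python(reply.tool_call.arguments["code"])
    return reply.text
```

The fix everyone already knows about (and which makes it a high-school level mistake) is running the interpreter in a throwaway sandbox instead of the serving process.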

1 point

Yeah, that was exactly my intention.


TechTakes

!techtakes@awful.systems
