200fifty

200fifty@awful.systems
1 post • 37 comments

> When I was a kid, [Net Nanny](https://en.wikipedia.org/wiki/Net_Nanny) was totally and completely lame, but the whole millennial generation grew up to adore content moderation. A strange authoritarian impulse.

Me when the mods unfairly ban me from my favorite video game forum circa 2009

(source: first HN thread)


Wow, he seems so confident and secure in his masculinity! No one’s gonna think this guy has issues with his sexuality after he made this tweet, that’s for darn sure.


> Like, seriously, get a hobby or something.

For real. I don’t even necessarily disagree with the broad-strokes idea of “if you’re comfortable, it’s good to take on challenges and get outside of your comfort zone because that’s how you grow as a person,” but why can’t he just apply this energy to writing a terrible novel or learning to paint watercolors or something, like a normal person? Why does the fact his life is comfortable mean he has to become a Nazi? :/


Look, you gotta forgive this guy for coming up with an insane theory that doesn’t make sense. After all, his brain was poisoned by testosterone, so his thinking skills have atrophied. An XXL hat size can only do so much, you know.


I think they were responding to the implication in self’s original comment that LLMs were claiming to evaluate code in-model, and that calling out to an external Python evaluator is ‘cheating.’ But as far as I know, it’s actually pretty common for them to evaluate code using an external interpreter, so I think the response was warranted here.

That said, that fact honestly makes this vulnerability even funnier, because it means they are basically just letting the user dump whatever code they want into eval() as long as it’s laundered by the LLM first, which is like a high-school-level mistake.
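
For illustration, here’s a minimal sketch of that anti-pattern, in Python with hypothetical names (ask_llm() is a stand-in for whatever chat-completion call the product actually makes; none of this is taken from the real system):

```python
# Sketch of the "eval() laundered through an LLM" anti-pattern described above.
# ask_llm() is a hypothetical stand-in for a chat-completion API call; nothing
# here comes from any real product's code.

def ask_llm(prompt: str) -> str:
    """Pretend model: just echoes back the expression the user asked about."""
    return prompt.split("answers: ", 1)[1]


def answer_math_question(question: str) -> str:
    # The user's text is pasted straight into the prompt...
    prompt = f"Write one Python expression that answers: {question}"
    code = ask_llm(prompt)
    # ...and the model's reply is evaluated in-process, with no sandboxing,
    # so whatever the user talks the model into writing gets executed here.
    return str(eval(code))


if __name__ == "__main__":
    print(answer_math_question("2 + 2"))                      # prints 4
    print(answer_math_question("__import__('os').getcwd()"))  # arbitrary code runs
```

The non-embarrassing version is to treat model output like any other untrusted user input: run it in an isolated interpreter with no filesystem or network access, rather than calling eval() in the serving process.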


“I know not with what technology GPT-6 will be built, but GPT-7 will be built with sticks and stones” - Albert Einstein, probably


Even with good data, it doesn’t really work. Facebook trained an AI (Galactica) exclusively on scientific papers, and it still made stuff up and gave incorrect responses all the time; it just learned to phrase the nonsense like a scientific paper…


Except it’s not really being automated out of our lives, is it? I find it hard to imagine how increasing the rate at which bullshit can be produced leads to a world with less bullshit in it.


> First: our sessions and guests were mostly not controversial — despite what you may have heard

Man, you invite one Nazi to speak at your conference and suddenly you’re “the guys who invited a Nazi to speak at their conference.” How is that fair? :-(
