It’s a short comment thread so far, but it’s got a few posts that are just condensed orange site.

The constant quest for “safety” might actually be making our future much less safe. I’ve seen many instances of users needing to yell at, abuse, or manipulate ChatGPT to get the desired answers. This trains users to be hateful to / frustrated with AI, and if the data is used, it teaches AI that rewards come from such patterns. Wrote an article about this – https://hackernoon.com/ai-restrictions-reinforce-abusive-user-behavior

But you think humans (by and large) do know what “facts” are?


TechTakes

!techtakes@awful.systems


Big brain tech dude got yet another clueless take over at HackerNews etc? Here’s the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community
