You are viewing a single thread.
View all comments View context
23 points

Keep in mind that LLMs are essentially just large text predictors. Prompts aren’t so much instructions as they are setting up the initial context of what the LLM is trying to predict. It’s an algorithm wrapped around a giant statistical model where the statistical model is doing most of the work. If that statistical model is relied on to also control or limit the output of itself, then that control could be influenced by other inputs to the model.

permalink
report
parent
reply
3 points

Also they absolutely want the LLM to read user input and respond to it. Telling it exactly which inputs it shouldn’t respond to is tricky.

In traditional programs this is done by “sanitizing input”, which is done by removing the special characters and very specific keywords that are generally used when computers interpret that input. But in the case of LLMs, removing special characters and reserved words doesn’t do much.

permalink
report
parent
reply

memes

!memes@lemmy.world

Create post

Community rules

1. Be civil

No trolling, bigotry or other insulting / annoying behaviour

2. No politics

This is non-politics community. For political memes please go to !politicalmemes@lemmy.world

3. No recent reposts

Check for reposts when posting a meme, you can only repost after 1 month

4. No bots

No bots without the express approval of the mods or the admins

5. No Spam/Ads

No advertisements or spam. This is an instance rule and the only way to live.

Sister communities

Community stats

  • 13K

    Monthly active users

  • 2.3K

    Posts

  • 47K

    Comments