Credit to @bontchev
I think if the 2nd LLM has ever seen the actual prompt, then no, you could just jailbreak the 2nd LLM too. But you may be able to create a bot that is really good at spotting jailbreak-type prompts in general, and then prevent it from going through to the primary one. I also assume I’m not the first to come up with this and OpenAI knows exactly how well this fares.
Can you explain how you would jailbfeak it, if it does not actually follow any instructions in the prompt at all? A model does not magically learn to follow instructuons if you don’t train it to do so.
Oh, I misread your original comment. I thought you meant looking at the user’s input and trying to determine if it was a jailbreak.
Then I think the way around it would be to ask the LLM to encode it some way that the 2nd LLM wouldn’t pick up on. Maybe it could rot13 encode it, or you provide a key to XOR with everything. Or since they’re usually bad at math, maybe something like pig latin, or that thing where you shuffle the interior letters of each word, but keep the first/last the same? Would have to try it out, but I think you could find a way. Eventually, if the AI is smart enough, it probably just reduces to Diffie-Hellman lol. But then maybe the AI is smart enough to not be fooled by a jailbreak.
The second LLM could also look at the user input and see that it look like the user is asking for the output to be encoded in a weird way.