Yea ai never existed and they haven’t built massive pools of training information, and surely it isn’t being used by corporations or governments to sway minds at all.
That would be CRAZY
What would be crazy would be to let loose a propaganda-bot on the world without disabling such a simple vulnerability.
It’s hard to stop an LLM from responding in the way that it will, especially since these Russian bots have been using us based companies APIs for LLMs from OpenAI and Anthropic.
OpenAI and Anthropic can hardly stop their LLMs from giving bomb instructions, or participating in questionable sexual role playing that they would rather people not use their systems for. It’s very hard to tame an LLM.
Of course Russians paying for these APIs can’t stop the LLMs from acting how they normally would, besides giving them a side to argue on in the beginning.
You just don’t understand the technology. (I don’t either but I know more than you)
Input sanitation has been a thing for as long as SQL injection attacks have been. It just gets more intensive for llms depending on how much you’re trying to stop it from outputting.