We do agree on that, but it’s weird to act as if this is somehow worse than OpenAI; try asking ChatGPT about Palestine.
Turns out our fantasies about genius AI that will make our lives better don’t really work when those AIs are programmed, both intentionally and unintentionally, with human biases.
This is why I get so angry at people who think that AI will solve climate change. We know the solution to climate change, and it starts with getting rid of billionaires. But an AI controlled by billionaires is never going to be allowed to give that answer, is it?
Honestly, ChatGPT will take a pro-Palestinian stance if you tell it you are pro-Palestinian.
DeepSeek doesn't do that.
As with all things LLM, triggering or evading the censorship depends on the questions asked and how they’re phrased, but the censorship most definitely is there.
That could just come down to the nature of the debate: the freedom of Israelis isn't really at issue in it. People who see a difference between Palestinians and Hamas also see a difference between Israel's administration and military on the one hand and the general Israeli population on the other.
My guess is that it's set up to treat contexts associated with conflicting positions as controversial, and to simply favor responses that don't have any controversy attached to them.
A bias in the training data will produce a bias in the results, and the model has no morals to help it choose between conflicting data in its training. It's possible this bias was deliberately introduced, though it's also possible it crept in through negligence as the model just sucked up data from the internet.
I'm curious, though, how it would respond if the second response were used to challenge the first one, with a clarification that Palestinians are indeed people.
Edit: not saying that there isn’t any censorship going on with LLMs outside of China (I believe there absolutely is, depending on the model), just that that example doesn’t look like the other cases of censorship I’ve seen.