China has released a set of guidelines on labeling internet content that is generated or composed by artificial intelligence (AI) technology, which are set to take effect on Sept. 1.
This is a bad idea. It creates a stigma and bias against innocent Artificial Beings™. This is the equivalent of forcing a human to wear a collar or watermark.
Would it be more effective to have cameras digitally sign the photos? That would also make photos more attributable, which sounds like China's thing.
This is the one area where blockchain could have been useful, instead of greater-fool money schemes: a system where people can verify the provenance of images or videos pertaining to matters of importance, such as news stories. All reputable journalism already attributes its photos anyway; cryptographic signing just takes that to its logical conclusion. But of course the scary word 'China' is involved here, therefore we must only post contrarian takes.
No, I don’t want my photos digitally signed and tracked, and I’m sure no whistleblower wants that either.
Apart from the privacy issues, I guess the challenge would be preserving the signature through ordinary editing. You could embed the unedited, signed photo inside the edited one, but you'd need new formats and it would make the files huge. You could deposit the original in some public, unalterable store using something like a blockchain, but that would bring large storage and processing requirements. Or you could have the editing software apply a digital signature to track the provenance of an edit, but then anyone could make a signed edit and it wouldn't prove anything about the veracity of the photo's content.
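For concreteness, here is a minimal sketch of the sign-at-capture idea, assuming an Ed25519 key pair and a made-up detached manifest (real systems such as C2PA define their own formats). It also shows why editing is the hard part: changing a single byte invalidates the signature.

```python
# Toy sign-at-capture / verify-later sketch using the Python
# "cryptography" package. The key handling and manifest layout are
# invented for illustration.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real camera this key would sit in a secure element, with the
# public half certified by the manufacturer.
camera_key = Ed25519PrivateKey.generate()
camera_pub = camera_key.public_key()

def sign_capture(image_bytes: bytes) -> dict:
    """Sign a hash of the raw capture; return a detached manifest."""
    digest = hashlib.sha256(image_bytes).digest()
    return {"sha256": digest, "sig": camera_key.sign(digest)}

def verify_capture(image_bytes: bytes, manifest: dict) -> bool:
    """True only if these exact bytes are what the camera signed."""
    digest = hashlib.sha256(image_bytes).digest()
    if digest != manifest["sha256"]:
        return False  # any edit, even one pixel, changes the hash
    try:
        camera_pub.verify(manifest["sig"], digest)
        return True
    except InvalidSignature:
        return False

photo = b"...raw sensor bytes..."
manifest = sign_capture(photo)
assert verify_capture(photo, manifest)
assert not verify_capture(photo + b"crop", manifest)  # edits break it
```

That brittleness is exactly why all three options above amount to signing something other than the edited file itself: the original, or each edit step.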
That’s actually already a thing: https://www.theregister.com/2022/08/15/sony_launches_forgeryproof_incamera_digital/
That's a different thing. C2PA proves a photo came from a real camera, with the full editing trail, all cryptographically. The scheme in this topic is trying to prove that what's not real is not real, by self-declaration. You can add the watermark, remove it, add another AI's watermark, or whatever you want. You can forge it outright, because as far as I can see no cryptographic proof like a digital signature is required.
Btw, the C2PA data can be stripped if you know how, just like any watermark or digital signature.
Stripping C2PA simply removes the reliability part, which is fine if you don't need it. It's the kind of thing that is meaningful when present and proves nothing when absent.
Sort of. A camera with internet connectivity could automatically “notarize” photos. The signing authority would vouch that the photo (or other file) hasn’t been altered since the moment of signing. It wouldn’t be evidence that the photo was not manipulated before that moment.
That could make, e.g., photos of a traffic accident good evidence in court. If there wasn't time for manipulation, then the photos must be real. It wouldn't work for photos that could have been taken at any time.
You could upload a hash to the blockchain of a cryptocurrency for the same purpose. The integrity of the cryptocurrency would then vouch that the photo was unaltered since the upload. But that’s not cost-effective. You could even upload the hash to Reddit, since it’s not believable that they would manipulate timestamps to help some random guy somewhere in the world commit fraud.
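The mechanics of that are trivial; the only hard part is trusting the timestamp of wherever the digest gets published. A sketch, with a hypothetical filename:

```python
# Publish-a-hash-now, prove-existence-later. Where the digest is
# published (a blockchain, Reddit, a newspaper ad) only determines how
# much you trust the timestamp; the hashing itself is the easy part.
import hashlib

def file_digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# At capture time: publish this hex string somewhere timestamped.
print(file_digest("accident.jpg"))  # hypothetical file

# Later: recompute the digest of the file being presented. A match
# proves it is byte-identical to a file that existed at publication
# time -- and says nothing about what happened before that moment.
```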
China, oh you. I remember something about going green and blah blah, yet they keep building coal plants.
The Chinese government has been caught using AI for propaganda while claiming it was real, so I don't see this being applied within the Chinese government itself.
About as enforceable as banning bitcoin.
This is a smart and ethical way to integrate AI into everyday use, though I hope the watermarks are not easily removed.
Think a layer deeper about how this can be misused to control narratives.
You read some wild allegation with no AI marks (they're required to be visible), so it must have been written by a person, right? But what if someone, even the government, jumps out and says the author used an illegal AI to generate the text? The question suddenly shifts from verifying whether the alleged events happened to whether the post itself is real. Public sentiment will likely be consumed by "Is this fake news?" instead of "Is the allegation true?" Compound that with trusted entities, and discrediting anything becomes easier.
Let me give you a real example. Before Covid spread globally, there was a Chinese whistleblower who worked in a hospital and got infected. He posted a video online about how bad it was, and it was quickly taken down by the government. What if that happened today with this regulation in full force? The government could claim it's AI-generated: the whistleblower doesn't exist, and the content isn't real. Three days later they arrest a guy, claiming he spread fake news using AI. They already have a very efficient way to control narratives, and this piece of garbage just gives them an expressway.
You thought this was only a China thing? No, every entity, governments included, is watching, especially the self-proclaimed friend of Putin and Xi and lover of absolute free speech. Don't assume it's too far away to reach you.
It’s still a good thing. The alternative is people posting AI content as though it is real content, which is a worldwide problem destroying entire industries. All AI content should by law have to be clearly labeled.
Then what is unlabeled AI-generated slop to the naked eye? That label just encourages mental laziness as an "easy filter." Slop without a label gets elevated to looking somewhat real, because the label's existence exploits that laziness.
Before you say some AI slop is clearly identifiable: you can't assume everyone can identify it, or that every piece is that identifiable. For images that look a little unrealistic, just decrease the resolution until it's grainy and the details are hidden. That works 9 times out of 10. And you can't rule out that the 0.1% of content that passes the sanity check does 99.9% of the damage.
After all, humans are emotional creatures, and sensationalism is real. The urge to share something emotional is why misinformation and disinformation are so common these days. People overlook details when the urge hits.
Sometimes labeling does more harm than good. It just gives a false sense of security.
It will be relatively easy to strip that stuff off. It might help a little bit with internet searches or whatever, but anyone spreading deepfakes will probably not be stopped by that. Still better than nothing, I guess.
You can use things like steganography to embed data into the AI output.
Imagine text with certain letters in certain places that can give you a probability rating that it's AI-generated, or images with errant pixels of certain colors.
Printers already do something like this, printing imperceptible dots on pages.
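The crudest image version of this is least-significant-bit steganography. A toy sketch, not how any production watermark works (a single JPEG re-encode would destroy it):

```python
# Toy LSB steganography: hide a byte string in the low bit of each
# pixel channel value of a flattened image.

def embed(pixels: list[int], payload: bytes) -> list[int]:
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(pixels), "image too small for payload"
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the least significant bit
    return out

def extract(pixels: list[int], n_bytes: int) -> bytes:
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(n_bytes)
    )

flat = [200, 201, 199, 50, 51, 52] * 100  # stand-in for flattened RGB data
marked = embed(flat, b"AI")
assert extract(marked, 2) == b"AI"
```

Robust watermarks spread the signal across frequency-domain coefficients rather than single bits so it survives compression, but the embed/extract structure is the same.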
> it will be relatively easy to strip off
How so? If it's anything like LLM text-based "watermarks," the watermark is an integral part of the output. For an LLM it's about downrating certain words in the output; I'm guessing for photos you could do the same with certain colors, so if this variation of teal shows up more often than that variation, it's made by AI.
I guess the difference with images is that, since you're not doing the "guess the next word" step of feeding the previous output into the next one, you can't generate the red/green list from the previous output.
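For reference, the text scheme described above is roughly the "green list" watermark of Kirchenbauer et al. (2023). A toy sketch with a stand-in model, just to show the mechanics; the real scheme biases logits inside the LLM:

```python
# Toy "green list" watermark: hash the previous token to split the
# vocabulary into green/red halves, nudge sampling toward green, and
# detect by counting how many tokens landed in their green list.
import hashlib
import random

VOCAB = [f"w{i}" for i in range(1000)]  # stand-in vocabulary

def green_list(prev_token: str) -> set[str]:
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def generate(n: int, bias: float = 0.9) -> list[str]:
    """Stand-in 'model': samples uniformly but prefers green tokens."""
    out = ["<s>"]
    for _ in range(n):
        greens = green_list(out[-1])
        pool = sorted(greens) if random.random() < bias else VOCAB
        out.append(random.choice(pool))
    return out[1:]

def green_fraction(tokens: list[str]) -> float:
    prevs = ["<s>"] + tokens[:-1]
    return sum(t in green_list(p) for p, t in zip(prevs, tokens)) / len(tokens)

print(green_fraction(generate(500)))                               # ~0.95
print(green_fraction([random.choice(VOCAB) for _ in range(500)]))  # ~0.5
```

Detection needs only the hash scheme, not the model, which is also why the image analogy breaks down: with no autoregressive "previous token" there is nothing to seed the list from.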
I’m going to develop a new AI designed to remove watermarks from AI generated content. I’m still looking for investors if you’re interested! You could get in on the ground floor!