drhead [he/him]
The executive branch could absolutely unilaterally cut off support to Israel. We already have laws that prohibit arms transfers to countries interfering with USAID operations, and we’re signatories to treaties that prohibit arms transfers to countries if we reasonably believe the arms will be used in the commission of war crimes. The easier one for the president to prove would be the former, since we literally have reports from USAID saying this is happening. It’s also worth noting that we have treaties obligating us to provide certain amounts of aid to Israel, but enforcing all of these laws and treaties is the sole responsibility of the executive branch. Biden could choose to cut off arms transfers at any time, and if someone wants to argue that our obligations to provide aid to Israel supersede international treaties, they can let the courts sort it out.
The only hoop you have to jump through is using a Nitter instance. And the most dangerous abusers are most likely going to be determined enough that doing this, or creating a new account, is not a deterrent.
False security is worse than no security. If people trust that the block function reliably stops someone from seeing their posts, and on that assumption post things publicly that they wouldn’t otherwise share, that leaves more people vulnerable than having no way to stop people from seeing their posts at all.
I have literally been spending the majority of my spare time working with AI-generated images for almost two years now. I have a very thorough understanding of exactly what you’d need to pull off a stunt like this.
The background is part of the image, and the obviously issued clothing is part of the image. Both of those things are fairly consistent across all of the images and look like what would be used for facial recognition, which is something we know most countries do when they have the technological means, China included. If you want that consistent background and clothing in a generated image, it needs to be part of the training images. Otherwise, your next best option is a lot of tedious manual editing, which would be more effort than it is worth if the images are to look plausible.
I have also looked at the images myself, and I vividly remember GenZedong trying to point out skin lesions as proof that an image was AI-generated (definitely not their proudest moment, though they may have thought otherwise). If you’d like to dig yourself into that hole, then show some examples. Most of the “artifacts” I’ve seen pointed out are more easily explained as skin lesions, markings on the background wall, or something moving when the picture was taken. I know what real NN artifacts look like; I never saw anything like them in those images. What I see far more of is consistency in details that neural nets struggle a lot with.
Had to review my notes on Discord from when I was initially investigating this.
You’d need to train a model specifically to output images that look like these photos. If they had enough real images of prisoners to even try to fine-tune an existing model trained on a broad range of faces, they would already have enough real images to make whatever point they’re trying to make. That’s a mark against these photos being synthetic on practical grounds: there is no point in using synthetic image generation to inflate the count.
That database has around 2,800 images in it. If we’re proposing that a substantial portion are synthetic, that leaves only a couple hundred real images that could be used to actually train, which isn’t enough: you would severely overfit any model large enough to generate sufficiently high-quality images. And the images shown are clearly beyond something like the photos on thispersondoesnotexist. Everything in the background of all the images shown, for example, is coherent, including other people in the background.

There are consistent objects across different pictures: many subjects were having pictures taken against the same background, and many have similar clothing. The alleged reason for these pictures is facial recognition (which is entirely believable, since yeah, China does that, as does everyone else, and it isn’t notable), and having dark clothing on hand to ensure contrast makes sense, as does taking pictures in the same spot. This is all another mark against the photos being synthetic, on the grounds that even current image generation technology can’t fully do what is shown in these pictures to the same degree.

“But they have special technology that we don’t–” no, we have no reason to believe they do; this is unsubstantiated bullshit. Higher-quality models are generally larger and require even more data, which would just get you an overfitted model faster with your few hundred photos.
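To make the overfitting point concrete, here’s a toy sketch (entirely my own illustration, with made-up dataset sizes and a tiny regression net standing in for an image generator): when a model has far more parameters than training examples, it memorizes the training set instead of learning anything that generalizes.

```python
# Toy illustration of the capacity-vs-data point above. This is NOT a real
# image model -- a small regression net and synthetic data stand in for
# "large generator, few hundred photos". All sizes here are made up.
import torch
import torch.nn as nn

torch.manual_seed(0)

def run(n_train: int) -> None:
    # Fake "data": 64-dim inputs with a weak linear signal plus noise.
    w_true = torch.randn(64)

    def make(n: int):
        x = torch.randn(n, 64)
        y = x @ w_true + 2.0 * torch.randn(n)  # irreducible noise, variance ~4
        return x, y

    x_tr, y_tr = make(n_train)
    x_va, y_va = make(2000)

    # ~17k parameters: huge relative to a couple hundred samples.
    model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for _ in range(2000):  # full-batch training
        opt.zero_grad()
        loss = loss_fn(model(x_tr).squeeze(-1), y_tr)
        loss.backward()
        opt.step()

    with torch.no_grad():
        val = loss_fn(model(x_va).squeeze(-1), y_va)
    print(f"n_train={n_train:5d}  train_mse={loss.item():.2f}  val_mse={val.item():.2f}")

run(200)   # few samples: training loss collapses, validation loss stays high
run(5000)  # plenty of samples: the two losses stay close together
```

The exact numbers will vary, but the pattern is the point: with 200 samples the model drives training loss below the noise floor by memorizing, while validation loss stays high; with 5,000 samples the two track each other. An image generator with millions of parameters and a few hundred photos sits much deeper in the first regime.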
The only thing these photos are directly claimed to be is photos used for facial recognition. They show that at some point, Chinese police took photos of about 2,800 people in Xinjiang, which isn’t surprising at all and doesn’t really prove much. That won’t stop them from trying to portray it as proof of an ongoing genocide, though, especially when they know that like 90% of people won’t question it at all. The base unit of propaganda is not lies; it’s emphasis. The most plausible explanation is that the photos are real, but are being misrepresented as something unusual.
If we’re talking about FTL, might as well mention Multiverse: https://subsetgames.com/forum/viewtopic.php?t=35332
I’m pretty sure this mod outright has more content than the base game did.
I remember looking over those at the time. The images seemed a bit beyond then-current image generation technology, and there never really seemed to be a compelling explanation for why “some RFA source went through great effort to fabricate images for this story” is more likely than “some RFA source is misrepresenting pictures of what is actually mostly just boring, normal prison stuff”.