OpenAI collapses media reality with Sora AI video generator | If trusting video from anonymous sources on social media was a bad idea before, it's an even worse idea now

Hello, cultural singularity: soon, every video you see online could be completely fake.
I looked at these videos with very mixed emotions. On the one hand, I marveled at how far we've come. In a few years we went from generating sort-of-okay images in a very confined domain, with essentially no control over the output, to generating high-resolution video that at first glance looks real.
But then the sadness struck me. I think we're entering the post-truth era, where the truth is harder and harder to find because all the fake stuff looks so real. We can generate text, images, sound, and now also video of whatever we want in the blink of an eye. Combine this with the tendency of people to accept any "information" that fits their worldview, and the filter bubbles that already exist, and we can see that humanity will start living in separate bubbles. Every bubble will have its own truth, and even if someone proves that a video or image is fake, that information will probably never reach them, because the truth doesn't generate enough clicks.
I want to stay optimistic; we've overcome so much as a species, and maybe we'll right the ship at some point. But with everything already going on in the world, the last thing we need is the ability to fake videos like this in no time at all. At some point the separate filter bubbles will tear apart the stable Western world as we knew it, and we'll see something like WWII again. The situation is already heating up.
It's funny that in human history there will be a gap of around 100 years where photos and video were considered solid proof, evidence that could determine the outcome of somebody's future.
we’re back at square one I guess
Naah, that was never a thing. E.g., in 1917, two young girls created some photographs of fairies, the Cottingley Fairies, and Arthur Conan Doyle, the creator of Sherlock Holmes, endorsed them as real. "When you have eliminated the impossible, whatever remains, however improbable, must be the truth." That quote is terrible advice.
The last decade or two was really the golden age of credible evidence: everyone has a video camera and can upload footage almost immediately, showing that it was not edited later. Yet, at the same time, misinformation has become this huge topic.
We're not back to square one, either. You can still immediately upload a video (or a hash of it, or get it certified in some way). Say you do this with dashcam footage right after a collision. That makes it almost unassailable as evidence, because you can't have had time to forge it; certainly not in a way that is congruent with independent evidence and testimony.
If several people upload videos of the same event at about the same time, then either they are all in it together and carefully prepared the videos beforehand, or the footage is genuine.
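The hash-commitment idea above can be sketched in a few lines of Python. This is a minimal illustration, not any specific certification service: the function names and the workflow are my own assumptions. The point is that publishing a digest promptly commits you to the file's exact contents before you could have had time to forge it.

```python
import hashlib

def commit_hash(path: str) -> str:
    """Compute a SHA-256 digest of a file. Publishing this digest
    immediately (e.g. posting it publicly with a timestamp) commits
    you to the file's exact contents at that moment."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large video files don't fill memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, published_digest: str) -> bool:
    """Anyone can later check that the footage matches the digest
    that was published right after the event."""
    return commit_hash(path) == published_digest
```

Even a single flipped byte in the video changes the digest completely, so later edits are detectable; what the scheme can't prove is that the footage was genuine *before* the digest was published, which is why the timing matters.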
I'm actually glad that AI is making people realize that what they see is likely not real. For most of media history, the default has been to take the written word, images, or video as 100% truth, when in reality it has always been easy to deceive and manipulate. Now that we will suspect everything, maybe there will finally be critical thinking.