OpenAI collapses media reality with Sora AI video generator | If trusting video from anonymous sources on social media was a bad idea before, it’s an even worse idea now::Hello, cultural singularity—soon, every video you see online could be completely fake.
It’s like we’re going back to the pre-internet era but it’s obviously a little different. Before the internet, there were just a few major media providers on TV plus lots of local newspapers. I would say that, for the most part in the USA, the public trusted TV news sources even though their material interests weren’t aligned (regular people vs big media corporations). It felt like there wasn’t a reason not to trust them, since they always told an acceptable version of the truth and there wasn’t an easy way to find a different narrative (no internet or crazy cable news). Local newspapers were usually very trusted, since they were often locally owned and part of the community.
The internet broke all of those business models. Local newspapers died, because why do you need a paper when there are news websites? Major media companies were big enough to weather the storm and could buy up struggling competitors. They consolidated, and one in particular started aggressively spinning the news to fit a narrative, for ratings and for the political gain of its ownership. Other companies followed suit.
This, paired with the thousands of available narratives online, weakened the credibility of the major media companies. Anyone could find the other side of the story or fact check whatever was on TV.
Now what is happening? The internet is being polluted with garbage and lies. It hasn’t been good for some time now. Obviously anyone could type up bullshit, but for a minute photos were considered reliable proof (usually). Then photoshopping something became easier and easier, which made videos the new standard of reliable proof (in most cases).
But if anything can be fake now and difficult to identify as fake, then how can you fact check anything? Only those with the means will be able to produce undeniably real news with great difficulty, which I think will return power to major news companies or something equivalent.
I’m probably wrong about what the future holds, so what do you think is going to happen?
> Now what is happening? The internet is being polluted with garbage and lies. It hasn’t been good for some time now.
Social media as content aggregation is generally garbage, but it’s quite a stretch to apply that to the Internet, or even the Web, as a whole. Don’t forget Wikipedia is still a thing, and almost every creator of primary source data publishes online.
> But if anything can be fake now and difficult to identify as fake, then how can you fact check anything? Only those with the means will be able to produce undeniably real news with great difficulty, which I think will return power to major news companies or something equivalent.
That’s kind of always been true. And I agree, we need to find a way to maintain information-sourcing organizations (e.g. news outlets) that we can trust as the arbiters of this information. If the Washington Post can actually put credible reporters on the ground to confirm something, and I know I can trust WaPo, I can say with some confidence that it’s good information.
I think we all (or some of us at least) just need to be willing to pay for this service.
I don’t think you’re wrong, I have been thinking the same thing.
Everyone has been worried about “AI misinformation” - but if misinformation becomes so commoditized online that someone convinced the moon landing is fake finds two dozen different AI-generated sources agreeing with them but disagreeing with each other (e.g. a video of Orson Welles filming it but also a video of Stanley Kubrick filming it), we may well end up in a world where people just stop paying attention to the online bullshit that has been destroying people’s minds for years now.
Couple this with advances in AI correctly identifying misinformation and live fact-checking it with citations to reputable and/or certified sources. Then add things like Elon Musk’s “uncensored” Grok turning around and calling his conservative Twitter fans racist, small-minded morons while pointing out why they are wrong, or Gab’s literal Adolf Hitler AI telling a user they were disgusting for asking if Jews were vermin - and we may just end up on a narrow path out of the mess we found ourselves in well before AI was suddenly a thing.
I had been really worried about the AI misinformation angle, but given some recent developments in the past few months I’m actually hopeful about the future of a better informed public for the first time in years.
People just want confirmation of what they already believe, and with the amount of fake news out there, people are already getting dumber because their beliefs never face criticism.
If I believed the moon landing was fake, I would previously have had hundreds of sources telling me I was wrong and only a few scammy documentaries agreeing with my belief. But now there is a fake to confirm any belief I have. Aliens are real? Check this video proving it. Zuckerberg is a lizard? There are dozens of photos and videos on Twitter. And so on.
I’m really not optimistic about that at all.
Agreed, people are up in arms that misinformation will become easier to produce. But I think the naive idea that the internet is inherently a reliable source of truth, even when it is mixed with subtler forms of misinformation, is much more insidious. Journalism used to be a highly respected field, before we all forgot why it was so important.
I don’t really see the big problem yet. There’s still a hint of uncanny valley in that video.
Show it to your parents and ask what they think. Guaranteed they can’t tell it’s fake.
This is only the beginning. It’s only going to get harder and harder to know what is and isn’t real online.
Sure, you and I are aware of this and have an idea of what to look out for. But do my older parents or grandparents know about this stuff and what to look for? I seriously doubt it.
This kind of AI stuff bums me out. You get people legitimately sharing AI images (and potentially videos in the future) and saying “look what I made!”. It’s totally inauthentic.
My boss loves this shit, on the other hand. Looking forward to the day she can automate our jobs away, I assume.
Why are they working so hard on making humanity worse?
Because we’re all born selfish assholes*, and some people never learn to not be so.
*We’re all born as selfish idiots; how could we be otherwise? We’re helpless at birth, thrust from perfect comfort and safety into discomfort, utterly ignorant and wholly dependent, with no knowledge that there are others who are just as dependent and helpless when they’re born. Learning about others, and how to get along with them, is part of maturing.
https://aftermath.site/openai-sora-scam-sillicon-valley
It was a scam, people.
It was aimed at gullible investors, who weren’t supposed to share it.
Well, I get that AI companies overpromise and all, but that opinion piece really just confirms what we’re already able to see in said clips. Sure, many animals look eerie as hell and that Monobloc excavation video is one hell of an acid trip, but there’s already a lot there. More than I’m comfortable with.
The link does not say in what way people were not supposed to share.
The link is the same kind of self-delusion people show around all of these generative tools: “look, the faces are weird, the bird has the wrong feathers, the cat has only 2 legs, nothing to worry about,” while forgetting that almost everything else in a clip works well, and that this is the first of the first releases, which will gradually get better.
This shit is sold as the new Pixar
Yeah, in a few months it will be a tool to help animators with shadows and lighting. In some years it could produce a decent GIF of Pepe having sex with Sonic.
At least it’s less of a scam than an NFT - but still a scam.
Still a waste of money and energy resources.
Invest in it if you can; I’m sure you can get a nice tax refund in 3-5 years.