In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained on large amounts of data scraped from the web, much of it copyright protected. When companies disclose these data sources, it leaves them open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train its AI image generator.
Aaaaaand there it is. They don’t want to admit how much copyrighted materials they’ve been using.
Note that what the EU is requesting is for OpenAI to disclose information. Nobody is saying (yet?) that they can't use copyrighted material; what they are asking is for OpenAI to be transparent about its training methods and what material is being used.
The problem seems to be that OpenAI doesn’t want to be “Open” anymore.
In March, OpenAI co-founder Ilya Sutskever told The Verge that the company had been wrong to disclose so much in the past, and that keeping information like training methods and data sources secret was necessary to stop its work being copied by rivals.
Of course, openly disclosing what materials are being used for training might leave them open to lawsuits, but whether or not it's legal to use copyrighted material for training is still up in the air. So it's a risk either way, whether they disclose it or not.
Your first comment and it is to support OpenAI.
edit:
Haaaa, OpenAI, this famous hippie-led, non-profit firm.
2015–2018: Non-profit beginnings
2019: Transition from non-profit
Funded by Musk and Amazon. The friends of humanity.
Also:
In March, OpenAI co-founder Ilya Sutskever told The Verge that the company had been wrong to disclose so much in the past, and that keeping information like training methods and data sources secret was necessary to stop its work being copied by rivals.
Yeah, he closed the source code because he was afraid he would get copied by other people.
keeping information like training methods and data sources secret was necessary to stop its work being copied by rivals.
I feel like the AI model is going to become self-aware before people like Sutskever do
With replies like this, it’s no wonder he was hesitant to post in the first place.
There’s no need for the hostility and finger pointing.
he was hesitant to post in the first place.
Was he hesitant? How do you know that?
You wouldn’t be saying that if it was your content that was being ripped off
Exactly this. I hate copyright as much as the next person and find it funny when corporate meddling leads to them fighting each other, but both sides of this lead to shitty precedent. While copyright enforcement is already a shitty precedent, it's something we can fight. AI companies laundering massive amounts of data without having to respect copyright could possibly lead to them also not having to abide by privacy laws in the future, with similar arguments. Correct me if I'm wrong.
That’s, uh, exactly how they work? They need large amounts of training data, and that data isn’t being generated in house.
It’s being stolen, scraped from the internet.
If you read copyrighted material without paying and then forget most of it a month later, keeping only a vague recollection of what you've read, the fact remains that you accessed and used the copyrighted material without paying.
Now let's go a step further: you write something inspired by that copyrighted material, and what you wrote becomes successful to some degree, with eyes on it, but you refuse to admit that's where you got the idea from because you only have a vague recollection. The fact is you got the idea from the copyrighted material.