In its submission to the Australian government’s review of the regulatory framework around AI, Google said that copyright law should be altered to allow for generative AI systems to scrape the internet.
I don’t think it should be allowed to train on any of this stuff for entertainment/art/etc. at all. Like, the dream future of AI was all the shitty boring stuff handled for us so we could sit back, chill, and focus on art, real scientific research, general individual betterment, etc.
Instead we have these companies trying to get AI doing all the art and interesting things, while the rest of us end up with no job, no money, no decent standard of living, or stuck with the dangerous, shitty jobs.
So to avoid being “under the boot of the ruling classes” you want the government to be in charge of deciding what is and is not the correct way to produce our entertainment and art?
I use Stable Diffusion to generate illustrations for tabletop roleplaying game adventures that I run for my friends. I use ChatGPT to brainstorm ideas for those adventures and come up with dialogue or descriptive text. How big a fine would I be facing under these laws?
I mean, there has to be a price to pay here; we can’t have our cake and eat it, unfortunately. Caveats like “individual use” could allow this type of use while preventing companies from taking the piss.
You seem to be implying that the government is the ruling class too, which (I grant you) may at least in part be the case, but at least they’re voted into place. Would you rather have companies that we realistically have no control over use it without limit?
Honest question, what would you see as a fair way to handle the situation?
I mean there has to be a price to pay here,
Why, because you say so?
Would you rather have companies that we have no control over realistically use it without limit?
Yes, because that means I can also use it without limit. And I see no reason to apply special restrictions to AI specifically; companies are already bound by lots of laws governing their behaviour, and ultimately it’s that behaviour that’s important to control.
Honest question, what would you see as a fair way to handle the situation?
Handle it the way we already handle it. People are allowed to analyze publicly available data however they want. Training an AI is just a special case of analyzing that data: you’re using a program to find patterns in it that the AI can later draw on when generating new material.
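To make that “finding patterns” point concrete, here’s a deliberately toy sketch: a bigram model that counts which word follows which in some public text, then walks those learned statistics to produce new output. It’s nothing like Stable Diffusion or ChatGPT in scale or mechanism, but the principle it illustrates is the one being argued: what’s retained is statistical patterns, not stored copies of the source.

```python
import random
from collections import defaultdict

# Toy "training": count which word follows which in publicly visible text.
corpus = "the cat sat on the mat the dog sat on the rug".split()

patterns = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    patterns[current].append(nxt)

# Toy "generation": walk the learned patterns to produce new text.
def generate(start, length, seed=0):
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length):
        if word not in patterns:  # dead end: word never seen mid-text
            break
        word = random.choice(patterns[word])
        out.append(word)
    return " ".join(out)

print(generate("the", 5))
```

The output is recombined from patterns, not retrieved from a stored copy of the corpus, which is the distinction the analysis argument turns on.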