7 points

The learning model is artificial, whereas a human is sentient. If a human learns from a piece of work, it's fine for them to emulate its style in their own work. Sample that work, however, and the original artist is due compensation. This was a huge deal in the late 80s, when electronic music sampled earlier musical works, and there are several copyright cases that back original owners' claims to royalties.

The lawsuits allege that the models used copyrighted work to learn. If that is so, writers are due compensation for their copyrighted work.

This isn’t litigation against the technology. It’s litigation around what a machine can freely use in its learning model. Had ChatGPT, Meta, etc., used works in the public domain this wouldn’t be an issue. Yet it looks as if they did not.

EDIT

And before someone mentions that the books may have been bought and then used in the model, it may not matter. The Birthday Song is a perfect example: its copyright caused several restaurant chains to use other tunes up until the copyright was overturned in 2016. Every time the AI uses the copied work in its output, it may be subject to copyright.

5 points

I can read a copyrighted work and create a work from the experience and knowledge gained. At what point is what I'm doing any different from the A.I.?

4 points

For one thing: when you do it, you're the only one who can express that experience and knowledge. When the AI does it, everyone can express that experience and knowledge. It's kind of like the difference between artisanal and industrial. There's a big difference of scale that has a great impact on the livelihood of the creators.

3 points

Yes, it’s wonderful. Knowledge might finally become free in the advent of AI tools and we might finally see the death of the copyright system. Oh how we can dream.

2 points

For one thing, you can do the task completely unprompted; the LLM has to be told what to do. Beyond that, you have an idea in your head of the task you want to achieve and how you want to go about doing it, and the output is unique because it's determined by your perceptions. The LLM doesn't really have perceptions; it has probabilities. It has broken down the outputs of human creativity into numbers and is attempting to replicate them.

-1 points

The AI does have perceptions, fed into it by us as inputs. I give the AI my perceptions, the AI creates a facsimile, and I adjust the perceptions I feed into it until I receive an output that meets my requirements. That's no different from doing it myself, except I didn't need to read all the books and learn all the lessons myself. I still tailor the end product, just not on the same micro scale that we traditionally needed to.

2 points

There is a practical difference in the time required and sheer scale of output in the AI context that makes a very material difference on the actual societal impact, so it’s not unreasonable to consider treating it differently.

Set up a lemonade stand on a random street corner and you’ll probably be left alone unless you have a particularly Karen-dominated municipal government. Try to set up a thousand lemonade stands in every American city, and you’re probably going to start to attract some negative attention. The scale of an activity is a relevant factor in how society views it.

5 points

The creator of ChatGPT is sentient. Why couldn’t it be said that this is their expression of the learned works?

3 points

I’ve glanced at these a few times now and there are a lot of ifs, ands, and buts in there.

I’m not understanding how an AI itself infringes on the copyright, as it has to be directed in its creation at this point (GPT specifically). How is that any different from me using a program that finds a specific piece of text and copies it for use in my own document? In that case the document would be presented by me, and thus I would be the one infringing, not the software. AIs (for the time being) are simply software and incapable of infringement. And suing a company that makes the AI simply because it used data to train its software is not infringement, as the works are not copied verbatim from their original source unless specifically requested by the user. That would put the infringement on the user.

3 points

It’s litigation around what a machine can freely use in its learning model.

No, it’s not that, either. It’s litigation around what resources a person can exploit to develop a product without paying for that right.

The machine is doing nothing wrong. It’s not feeding itself.
