OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling’s Harry Potter series: A new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.

144 points

It’s a bit pedantic, but I’m not really sure I support this kind of extremist view of copyright and the scale of what’s being interpreted as ‘possessed’ under the idea of copyright. Once an idea is communicated, it becomes a part of the collective consciousness. Different people interpret and build upon that idea in various ways, making it a dynamic entity that evolves beyond the original creator’s intention. It’s like the issues with sampling beats or records in the early days of hip-hop. The very principle of an idea goes against this vision; more than that, once you put something out into the commons, it’s irretrievable. It’s not really yours any more once it’s been communicated. I think if you want to keep an idea truly yours, then you should keep it to yourself. Otherwise you are participating in a shared vision of the idea. You don’t control how the idea is interpreted, so it’s not really yours any more.

Whether that’s ChatGPT or Public Enemy is neither here nor there to me. The idea that a work like Peter Pan is still ‘possessed’ is a very real but very silly malady of this weirdly accepted yet extreme view of the ability to own an idea.

36 points

AI isn’t interpreting anything. This isn’t the sci-fi style of AI that people think of; that’s general AI. This is narrow AI, which is really just an advanced algorithm. It can’t create new things with intent and design; it can only regurgitate a mix of pre-existing material, steered by narrow guidelines programmed into it to keep the result coherent, with no actual thought or interpretation involved. The issue isn’t that it’s derivative; the issue is that it can only ever be inherently derivative, without any intentional interpretation or creativity, and nothing else.

Even collage art has to qualify as fair use to avoid copyright infringement if it’s being done for profit, and fair use requires it to provide commentary, criticism, or parody of the original work used (which requires intent). Even if it’s transformative enough to make the original unrecognizable, if the majority of the work is not your own art, then you need to get permission to use it; you aren’t automatically safe from getting in trouble over copyright. Even using images in Photoshop involves Creative Commons and commercial-use licenses.

Fanart and fanfic are also considered a grey area, and the only reason more of a stink isn’t kicked up over them regarding copyright is that they’re generally beneficial to the original creators, and credit is naturally provided by the nature of fan works, so long as someone doesn’t try to claim the characters or IP as their own. So most creators turn a blind eye to the copyright aspect of the genre, but any of them could kick up a stink if they wanted to, and some have in the past, like Anne Rice. As a result, most fanfiction sites do not allow writers to profit off of fanfics or advertise fanfic commissions. And those are cases where actual humans produce the works, based on something that inspired them or that they are interpreting. So even human-made derivative works have rules and laws applied to them.

AI isn’t a creative force with thoughts, ideas, and intent; it’s just a pattern-recognition and replication tool, and it doesn’t benefit creators when it’s used to replace them entirely, as Hollywood (among other corporate entities) is attempting to do. Viewing AI at least as critically as we view actual human beings is the very least we can do, along with establishing protections for human creators so that they can’t be taken advantage of because of AI.

I’m not inherently against AI as a concept or as a tool for creators to use, but I am against AI works with no human input being used to replace creators entirely, and I am against using works to train it without the permission of the original creators. Even in artist and writer communities, it’s considered common courtesy to credit the people and works you based something on or took inspiration from, even if what you made would be safe under copyright law regardless. Sure, humans get some leeway here because we are imperfect meat creatures with imperfect memories and may not be aware of all our influences, but a coded algorithm doesn’t have that excuse.

If the current AIs in circulation can’t function without being fed stolen works, without credit or permission, then they’re simply not ready for commercial use yet as far as I’m concerned. If that’s never going to be possible, which I simply don’t believe, then they should never be used commercially, period. And AI should be used by creators to assist in their work, not to replace them entirely. If it takes longer to develop, fine. If it takes more effort and manpower, fine. That’s the price I’m willing to pay for it to be ethical. If it can’t be done ethically, then IMO it shouldn’t be done at all.

4 points

Your broader point would be stronger if it weren’t framed around what seems like a misunderstanding of modern AI. To be clear, you don’t need to believe that AI is “just” a “coded algorithm” to believe it’s wrong for humans to exploit other humans with it. But to say that modern AI is “just an advanced algorithm” is technically correct in exactly the same way that a blender is “just a deterministic shuffling algorithm.” We understand that the blender chops up food by spinning a blade, and we understand that it turns solid food into liquid. The precise way in which it rearranges the matter of the food is both incomprehensible and irrelevant. In the same way, we understand the basic algorithms of model training and evaluation, and we understand the basic domain task that a model performs. The “rules” governing this behavior at a fine level are incomprehensible and irrelevant, and certainly not dictated by humans. They are an emergent property of a simple algorithm applied to billions or trillions of numerical parameters, in which all the interesting behavior is encoded in some incomprehensible way.
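To make that point concrete, here is a minimal sketch (a toy linear model, nothing like a real LLM; all names here are hypothetical) of how a few comprehensible lines of training code produce behavior that lives only in learned numbers no human wrote:

```python
# The *training rule* is a few lines and easy to state, but the
# learned parameters are just opaque numbers that emerge from it.
# (Toy example; real models have billions of such parameters.)
def train_linear(data, lr=0.01, steps=1000):
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in data:
            pred = w * x + b
            err = pred - y
            # The comprehensible part: nudge each parameter downhill.
            w -= lr * err * x
            b -= lr * err
    return w, b

# The behavior (here, "double the input") ends up encoded in w and b,
# not dictated anywhere by a human.
w, b = train_linear([(1, 2), (2, 4), (3, 6)])
```

After training, `w` is close to 2 and `b` close to 0, even though no line of the algorithm mentions doubling; that mapping from simple rule to emergent behavior is the point of the blender comparison.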

-16 points

Bro I don’t think you have any idea what you’re talking about. These AIs aren’t blenders, they are designed to recognize and replicate specific aspects of art and writing and whatever else, in a way that is coherent and recognizable. Unless there’s a blender that can sculpt Michelangelo’s David out of apple peels, AI isn’t like a blender in any way.

But even if they were comparable, a blender is meant to produce chaos. It is meant to, you know, blend the food we put into it. So yes, the outcome is dictated by humans. We want the individual pieces to be indistinguishable, and deliberate design decisions get made by the humans making them to try and produce a blender that blends things sufficiently, and makes the right amount of chaos with as many ingredients as possible.

And here’s the thing, if we wanted to determine what foods were put into a blender, even assuming we had blindfolds on while tossing random shit in, we could test the resulting mixture to determine what the ingredients were before they got mashed together. We also use blenders for our own personal use the majority of the time, not for profit, and we use our own fruits and vegetables rather than stuff we stole from a neighbor’s yard, which would be, you know, trespassing and theft. And even people who use blenders to make something that they sell or offer publicly almost always list the ingredients, like restaurants.

So even if AI was like a blender, that wouldn’t be an excuse, nor would it contradict anything I’ve said.

1 point

I disagree with your interpretation of how an AI works, but I think the way that AI works is pretty much irrelevant to the discussion in the first place. I think your argument stands completely the same regardless. Even if AI worked much like a human mind and was very intelligent and creative, I would still say that usage of an idea by AI without the consent of the original artist is fundamentally exploitative.

You can easily train an AI (with next to no human labor) to launder an artist’s works, by using the artist’s own works as reference. There’s no human input or hard work involved, which is a factor in what dictates whether a work is transformative. I’d argue that if you can put a work into a machine, type in a prompt, and get a new work out, then you still haven’t really transformed it. No matter how creative or novel the work is, the reality is that no human really put any effort into it, and it was built off the backs of unpaid and uncredited artists.

You could probably make an argument for being able to sell works made by an AI trained only on the public domain, but it still should not be copyrightable IMO, cause it’s not a human creation.

TL;DR - No matter how creative an AI is, its works should not be considered transformative in a copyright sense, as no human did the transformation.

0 points

I thought this way too, but after playing with ChatGPT and Midjourney near daily, I have seen many moments of creativity way beyond the source material they were trained on. A good example I saw was in a YouTube video (sorry, I can’t recall which to link) where the prompt was animals made of sushi, and wow, was it ever good and creative in how it made them, and photorealistic too. This is just not something you can find anywhere on the Internet. I just did a search and found some hand-drawn Japanese-style sushi with eyes and such, but nothing like what I saw in that video.

I have also experienced it suggesting ways to handle coding in my VR theme park app that are very unconventional and not something anyone has posted about, as near as I can tell. It seems to be able to put 2 and 2 together and get 8. Likely, since it sees so much of everything at once, it can connect the dots in ways we would struggle to. It is more than regurgitated data, and it surprises me near daily.

1 point

Just because it seems creative to you, due to a lack of exposure to human creativity, doesn’t mean it is uniquely creative. It’s not; it can’t be, by its very nature. It can only regurgitate an amalgamation of the stuff fed into it. What you think you see is the equivalent of pareidolia.

-3 points

> if it’s being done for profit, and fair use requires it to provide commentary, criticism, or parody of the original work used. Even if it’s transformative enough to make the original unrecognizable

I’m going to need a source for that. Fair use is flexible and context-specific. It depends on the situation and four factors: the purpose of the use, the nature of the work, how much was taken, and the effect on the market for the original. No one factor is more important than the others, and it is possible to have a fair use defense even if you do not meet all the criteria of fair use.

11 points

I’m a bit confused about what point you’re trying to make. There isn’t a single paragraph or example in the link you provided that doesn’t support what I’ve said, and none of the examples in that link qualified as fair use despite meeting none of the criteria. In fact, one was the opposite: something that met all the criteria but still didn’t qualify as fair use.

The key aspect of how they define transformative is here:

> Has the material you have taken from the original work been transformed by adding new expression or meaning?

These (narrow) AIs cannot add new expression or meaning, because they do not have intent. They are just replicating and rearranging learned patterns mindlessly.

> Was value added to the original by creating new information, new aesthetics, new insights, and understandings?

These AIs can’t provide new information because they can’t create something new, they can only reconfigure previously provided info. They can’t provide new aesthetics for the same reason, they can only recreate pre-existing aesthetics from the works fed to them, and they definitely can’t provide new insights or understandings because again, there is no intent or interpretation going on, just regurgitation.

The fact that it’s so strict that even stuff that meets all the criteria might still not qualify as fair use only supports what I said about how even derivative works made by humans are subject to a lot of laws and regulations, and if human works are under that much scrutiny then there’s no reason why AI works shouldn’t also be under at least as much scrutiny or more. The fact that so much of fair use defense is dependent on having intent, and providing new meaning, insights, and information, is just another reason why AI can’t hide behind fair use or be given a pass automatically because “humans make derivative works too”. Even derivative human works are subject to scrutiny, criticism, and regulation, and so should AI works.

-5 points

Neural networks are based on the same principles as the human brain; they are literally learning in the exact same way humans are. Copyrighting the training of neural nets is essentially the same thing as copyrighting interpretation and learning by humans.

3 points

These AIs are not neural networks based on the human brain. They’re literally just algorithms designed to perform a single task.

15 points

Well, I’d consider agreeing if the LLMs were treated as a generic knowledge database. However, I had the impression that the whole response from OpenAI & co. to this copyright issue is “they build original content”, both for LLMs and stable diffusion models. Now that they’ve started down this line of defence, I think they’re stuck with proving that their “original content” is not derived from copyrighted content 🤷

1 point

> Well, I’d consider agreeing if the LLMs were considered as a generic knowledge database. However I had the impression that the whole response from OpenAI & cie. to this copyright issue is “they build original content”, both for LLMs and stable diffusion models. Now that they started this line of defence I think that they are stuck with proving that their “original content” is not derivated from copyrighted content 🤷

Yeah I suppose that’s on them.

6 points

Copyright definitely needs to be stripped back severely. Artists need time to use their own work, but after a certain time everything needs to enter the public space for the sake of creativity.

1 point

If you sample someone else’s music and turn around and try to sell it, without first asking permission from the original artist, that’s copyright infringement.

So, if the same rules apply, as your post suggests, OpenAI is also infringing on copyright.

31 points

> If you sample someone else’s music and turn around and try to sell it, without first asking permission from the original artist, that’s copyright infringement.

I think you completely and thoroughly do not understand what I’m saying or why I’m saying it. Nowhere did I suggest that I don’t understand modern copyright. I’m saying I’m questioning my belief in this extreme interpretation of copyright, which is represented by exactly what you just parroted: an interpretation that is both functionally and materially unworkable, and also antithetical to a reasonable understanding of how ideas and communication work.

3 points

That’s life under capitalism.

I agree with you in essence (I’ve put a lot of time into a free software game).

However, people are entitled to the fruits of their labor, and until we learn to leave capitalism behind artists have to protect their work to survive. To eat. To feed their kids. And pay their rent.

Unless OpenAI is planning to pay out royalties to everyone they stole from, what they’re doing is illegal and immoral under our current, capitalist paradigm.

2 points

Because in practical terms, writers’ and artists’ livelihoods are being threatened by AIs that were trained on their work without their consent or compensation. Ultimately, the only valid justification for copyright is to enable the careers of professional creators who contribute to our culture. We knew how ideas and communication worked when copyright was first created. That is why it’s a limited-time protection, a compromise.

All the philosophical arguments about the nature of ideas and learning, and about how much a machine may be like a person, don’t change the fact that dedicating years of effort to developing your skills only to be undercut by an AI trained on your own works is an incredibly shitty position to be in.

3 points

A sample is a fundamental part of a song’s output, not just its input. If LLMs change the input work to a high enough degree, is the result not protected as a transformative work?

-3 points

It’s more like a collage of everyone’s words. It doesn’t make anything creative because it doesn’t have a body or a life or real social inputs, you could say. Basically, it’s just rearranging other people’s words.

A song that’s nothing but samples, but so many samples that it hides the fact. That’s my view, anyway.

And only a handful of people are getting rich off the outputs.

If we were in some kind of post-capitalism economy, or if we had UBI, it wouldn’t bother me, really. It’s not the artists’ egos I’m sticking up for, but their livelihoods.

-1 points

To add to that, Harry Potter is the worst example to use here. There is no extra billion that JK Rowling needs to allow her to spend time writing more books.

Copyright was meant to encourage authors to invest in their work in the same way that patents do. If you were going to argue about the issue of lifting content from books, you should be using books that need the protection of copyright, not ones that don’t.

6 points

> Copyright was meant

I just don’t know that I agree that this line of reasoning is useful. Who cares what it was meant for? What is it now, currently and functionally, doing?

92 points

If I memorize the text of Harry Potter, my brain does not thereby become a copyright infringement.

A copyright infringement only occurs if I then reproduce that text, e.g. by writing it down or reciting it in a public performance.

Training an LLM from a corpus that includes a piece of copyrighted material does not necessarily produce a work that is legally a derivative work of that copyrighted material. The copyright status of that LLM’s “brain” has not yet been adjudicated by any court anywhere.

If the developers have taken steps to ensure that the LLM cannot recite copyrighted material, that should count in their favor, not against them. Calling it “hiding” is backwards.
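For illustration, one way such a safeguard could work (a hypothetical sketch; the n-gram approach and all names here are my assumptions, not OpenAI’s actual method) is an output filter that flags long verbatim overlaps with protected text:

```python
# Hypothetical output filter: refuse any generation that shares a
# long word n-gram with a protected text. This illustrates blocking
# recitation at output time, not erasing anything from the model.
def ngrams(text, n):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def recites_verbatim(output, protected_text, n=8):
    # True if the output repeats any run of n consecutive words.
    return bool(ngrams(output, n) & ngrams(protected_text, n))

book = ("mr and mrs dursley of number four privet drive were proud "
        "to say that they were perfectly normal")
quote = "mrs dursley of number four privet drive were proud to say that they"
summary = "the dursleys live at number four privet drive and insist they are normal"

recites_verbatim(quote, book)    # long verbatim run: flagged
recites_verbatim(summary, book)  # paraphrase shares no 8-gram: allowed
```

A paraphrase passes while a verbatim excerpt is caught, which is exactly the line the article describes: the training data is unchanged, only recitation is prevented.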

24 points

You are a human, you are allowed to create derivative works under the law. Copyright law as it relates to machines regurgitating what humans have created is fundamentally different. Future legislation will have to address a lot of the nuance of this issue.

2 points

And allowed to get sued anyway

7 points

Another sensationalist title. The article makes it clear that the problem is users reconstructing large portions of a copyrighted work word for word. OpenAI is trying to implement a solution that prevents ChatGPT from regurgitating entire copyrighted works using “maliciously designed” prompts. OpenAI doesn’t hide the fact that these tools were trained using copyrighted works and legally it probably isn’t an issue.

4 points

If Google took samples from millions of different songs that were under copyright and created a website that allowed users to mix them together into new songs, they would be sued into oblivion before you could say “unauthorized reproduction.”

You simply cannot compare one single person memorizing a book to corporations feeding literally millions of pieces of copyrighted material into a blender and acting like the resulting sausage is fine because “only a few rats fell into the vat, what’s the big deal”

2 points

Terrible analogy.

2 points

Which one? And why exactly?

-2 points

Google crawls every link available on all websites to index them and serve them to people. That’s a better example. It’s legal, and it’s up to the websites to protect their stuff.

1 point

It’s not a problem that it reads something. The problem would be if the thing it produces breaks copyright. Google Search is not producing something; it reads everything in order to link you to the original copyrighted work. If it read everything and then just spat out what it read on its own, instead of sending you to the original creators, that wouldn’t be OK.

3 points

You bought the book you memorized from, anyway.

6 points

No, I shoplifted it from an Aldi

2 points

Let’s not pretend that LLMs are like people where you’d read a bunch of books and draw inspiration from them. An LLM does not think nor does it have an actual creative process like we do. It should still be a breach of copyright.

17 points

… you’re getting into philosophical territory here. The plain fact is that LLMs generate cohesive text that is original and doesn’t occur in their training sets, and it’s very hard if not impossible to get them to quote back copyrighted source material to you verbatim. Whether you want to call that “creativity” or not is up to you, but it certainly seems to disqualify the notion that LLMs commit copyright infringement.

5 points

This topic is fascinating.

I really do think I understand both sides here and want to find the hard line that separates man from machine.

But it feels, to me, that some philosophical discussion may be required. Art is not something that is just manufactured. “Created” is the word to use, without quotation marks. Or maybe not, I don’t know…

4 points

I wasn’t referring to whether the LLM commits copyright infringement when creating a text (though that’s an interesting topic as well), but rather to the act of feeding it the texts. My point was that it is not like us, in the sense that we read and draw inspiration from a work. It’s just taking texts and digesting them. And from a privacy standpoint, I feel kind of disgusted at the thought of LLMs having been trained on comments such as these ones (not exactly these, but you get it), without any sort of permission on our part.

That’s mainly my issue, the fact that they have done so the usual capitalistic way: it’s easier to ask for forgiveness than to ask for permission.

1 point

*could

-3 points
Deleted by creator
2 points

Hate to break it to you, but that’s all you are too.

0 points
Deleted by creator
2 points

What if you are just a vector solver but don’t realize it? We wouldn’t know we have neurons in our heads if scientists didn’t tell us. What even is consciousness?

1 point
Deleted by creator
87 points

We have to distinguish between LLMs

  • Trained on copyrighted material and
  • Outputting copyrighted material

They are not one and the same

26 points

Yeah, this headline is trying to make it seem like training on copyrighted material is or should be wrong.

25 points

Legally, the output of the training could be considered a derivative work. We treat brains differently here, that’s all.

I think the current intellectual property system makes no sense and AI is revealing that fact.

5 points

I think this brings up broader questions about the currently quite extreme interpretation of copyright. Personally, I don’t think it’s wrong to sample from or create derivative works from something that is accessible. If it’s not behind lock and key, it’s free to use. If you have a problem with that, then put it behind lock and key. No one is forcing you to share your art with the world.

7 points

Most books are actually locked behind paywalls and not free to use? Or maybe I don’t understand what you meant?

4 points

Following that: if a sailor of the high seas were to put a copy of a protected book on the internet and ChatGPT was trained on it, how would that argument go? The copyright owner didn’t place it there, so it’s not “their decision”. And savvy people can make sure it’s accessible if they want to.

My belief is, if they can use all non-locked data for free, then the model should be shared for free too, and its outputs shouldn’t be subject to copyright. Just for context.

1 point

Output from an AI has just been recently considered as not copyrightable.

I think it stemmed from the actors strikes recently.

It was stated that only work originating from a human can be copyrighted.

4 points

Output from an AI has just been recently considered as not copyrightable.

Where can I read more about this? I’ve seen it mentioned a few times, but never with any links.

3 points

They clearly only read the headline. If they’re talking about the ruling that came out this week, that whole thing was about trying to give an AI authorship of a work generated solely by a machine, and having the copyright go to the owner of the machine through the work-for-hire doctrine. So an AI itself can’t be an author or hold a copyright, but humans using AI can still be copyright holders of any qualifying works.

0 points

Should we distinguish them, though? Why shouldn’t (and didn’t) artists have a say in whether their art is used to train LLMs? Just as publicly displayed art doesn’t grant permission to copy it and use it for other, unspecified purposes, it would be reasonable for the same to apply to AI training.

1 point

Ah, but that’s the thing. Training isn’t copying. It’s pattern recognition. If you train a model on “The dog says woof” and then ask it “What does the dog say?”, it’s not guaranteed to say “woof”.

Similarly, just because a model was trained on Harry Potter, all that means is it has a good corpus of how the sentences in that book go.

Thus the distinction. Can I train on a comment section discussing the book?
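The distinction can be made concrete with a toy model (a hypothetical sketch, nothing like a real LLM; all names are made up): training stores statistics about the text, and generation samples from those statistics, so the output isn’t a guaranteed copy of any input.

```python
import random
from collections import Counter, defaultdict

# Toy bigram "language model": training stores word-pair counts,
# not the training sentences themselves.
def train(corpus):
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def sample_next(model, word):
    options = model[word.lower()]
    words = list(options)
    weights = [options[w] for w in words]
    # Sampling is probabilistic: with mixed training data, the same
    # prompt can yield different continuations.
    return random.choices(words, weights=weights)[0]

model = train(["the dog says woof", "the dog says hello"])
sample_next(model, "says")  # "woof" or "hello", not guaranteed either way
```

The model knows “how the sentences go” (after “says” comes “woof” or “hello”) without retaining either training sentence verbatim, which is the sense in which training differs from copying.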

67 points

Vanilla Ice had it right all along. Nobody gives a shit about copyright until big money is involved.

4 points

Yep. Legally every word is copyrighted. Yes, law is THAT stupid.

2 points

People think it’s a broken system, but it actually works exactly how the rich want it to work.

45 points
Deleted by creator
14 points
Deleted by creator

> AI and your brain are very different things

How do you know that guy isn’t an AI?

5 points
Deleted by creator
3 points

If I read Harry Potter and wrote a novel of my own, no doubt ideas from it could consciously or subconsciously influence it and be incorporated into it. How is that any different from what an LLM does?

1 point
Deleted by creator
9 points

> Your brain isn’t an AI model

OR IS IT?

13 points

You joke, but AI advocates seem to forget that people have fundamentally different rights than tools and objects. A photocopier doesn’t get the right to “memorize” and “learn” from a text the way a human being does. As much as people may argue that AIs work differently, AIs are still not people.

And if they ever become people, the situation will be much more complicated than whether they can imitate some writer. But we aren’t there yet; even their advocates just use them as tools.

3 points

You should read this article by Kit Walsh, who’s a senior staff attorney at the EFF too. The EFF is a digital rights group who most recently won a historic case: border guards now need a warrant to search your phone.

3 points

How do you see that as a difference? Tools are extensions of ourselves.

Restricting the use of LLMs is only restricting people.

6 points

Exactly. If I write some Looney Tunes fan fiction, Warner doesn’t own that. This ridiculous view of copyright (which isn’t being challenged in the public discourse) needs to be confronted.

8 points
Deleted by creator
4 points

They can own it, actually. If you use the characters of Bugs Bunny, etc., or the setting (do they have a canonical setting?) then Warner does own the rights to the material you’re using.

For example, see how the original Winnie the Pooh material just entered public domain, but the subsequent Disney versions have not. You can use the original stuff (see the recent horror movie for an example of legal use) but not the later material like Tigger or Pooh in a red shirt.

Now if your work is satire or parody, then you can argue that it’s fair use. But generally, most companies don’t care about fan fiction because it doesn’t compete with their sales. If you publish your Harry Potter fan fiction on Livejournal, it wouldn’t be worth the money to pay the lawyers to take it down. But if you publish your Larry Cotter and the Wizard’s Rock story on Amazon, they’ll take it down because now it’s a competing product.

2 points
Deleted by creator
0 points

I think it’s more like writing a Looney Tunes fanfic based only on pirated material.

permalink
report
parent
reply
1 point
Deleted by creator
6 points

It’s honestly a good question. It’s perfectly legal for you to memorize a copyrighted work. In some contexts, you can recite it, too (particularly the perilous fair use). And even if you don’t recite a copyrighted work directly, you are most certainly allowed to learn to write from reading copyrighted books, then try to come up with your own writing based off what you’ve read. You’ll probably try your best to avoid copying anyone, but you might still make mistakes, simply by forgetting that some idea isn’t your own.

But can AI? If we want to view AI as basically an artificial brain, then shouldn’t it be able to do what humans can do? Though at the same time, it’s not actually a brain nor is it a human. Humans are pretty limited in what they can remember, whereas an AI could be virtually boundless.

If we’re looking at intent, the AI companies certainly aren’t trying to recreate copyrighted works. They’ve actively tried to stop it as we can see. And LLMs don’t directly store the copyrighted works, either. They’re basically just storing super hard to understand sets of weights, which are a challenge even for experienced researchers to explain. They’re not denying that they read copyrighted works (like all of us do), but arguably they aren’t trying to write copyrighted works.

1 point

No, because you paid for a single viewing of that content with your cinema ticket. And frankly, I think that the price of a cinema ticket (= a single viewing, which it was) should be what OpenAI should be made to pay.

3 points
Deleted by creator

Technology

!technology@lemmy.world
