Apparently, stealing other people's work to create a product for money is now "fair use" according to OpenAI, because they are "innovating" (stealing). Yeah. Move fast and break things, huh?
"Because copyright today covers virtually every sort of human expression (including blogposts, photographs, forum posts, scraps of software code, and government documents), it would be impossible to train today's leading AI models without using copyrighted materials," wrote OpenAI in the House of Lords submission.
OpenAI claimed that the authors in that lawsuit "misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence."
Either you're vastly overestimating the degree of understanding and insight AIs possess, or you're vastly underestimating your own capabilities. :)
This whole AI craze has just shown me that people are losing faith in their own abilities and their capacity to learn things. I've heard so many people who use AI to generate "artwork" argue that they tried to do art "for years" without improving, and have hence come to the conclusion that creativity is a talent only some have, instead of a skill you can learn and hone, just because they didn't see results as fast as they'd have liked.
Very well said! Creativity is definitely a skill that requires work, and for which there are no shortcuts. It seems to me that the vast majority of people using AI for artwork are just looking for a shortcut, so they can get the results without having to work hard and practice. The one valid exception is when it's used by disabled people who have physical limitations on what they can do, a point that's brought up occasionally. If that were the one and only use case for these models, I think a lot of artists would actually be fine with it.
I started drawing seriously when I was 14. Looking at my old artwork, I didn't start improving quickly until I was around 19 or 20. Not to say I didn't improve at all during those five or six years, but the pace did get faster once I had "learned to learn", so to speak. In other words, it can take a lot of patience to get to the point where you actually start seeing improvement fast enough to stay motivated. But it is 100% worth it, because at the end you have a lot of things you have created with your own two hands.
And regarding the point on physical limitations, I can't blame anyone in a situation like that for using AI if they have no other way of realising what they imagine. For everyone else, it is completely possible, and not reserved for people with some mythical innate talent. Just grab a pen or a brush and enjoy the process of honing a fine skill, regardless of the end result. ❤️
Alternatively, you might be vastly overestimating human "understanding and insight", or how much of it is really needed to create stuff.
Average humans, sure, don't have a lot of understanding and insight, and little is needed to be able to draw a doodle on some paper. But trained artists have a lot of it, because part of the process is learning to interpret artworks and work out why the artist used a particular composition or colour or object. To create really great art, you do actually need a lot of understanding and insight, because everything in your work will have been put there deliberately, not just to fill up space.
An AI doesn't know why it's put an apple on the table rather than an orange; it just does it because human artists have done it. It doesn't know what apples mean on a semiotic level to the human artist or to the humans who look at the painting. But humans do understand what apples represent: they may not pick up on it consciously, but somewhere in the backs of their minds, they'll see an apple in a painting and it'll make the painting mean something different than if the fruit had been an orange.
it doesn't know what apples mean on a semiotic level
Interestingly, LLMs seem to show emergent semiotic organization. Analyses of the neural network's activation space suggest that related concepts get trained into similar activation patterns, which is what allows LLMs to zero-shot relationships when run at a "temperature" (randomness level) in the right range.
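For anyone unfamiliar with that "temperature" knob: it just rescales the model's output scores before sampling. Here's a minimal, self-contained sketch, with toy logits standing in for a real LLM's next-token scores (the numbers are purely illustrative):

```python
import numpy as np

tokens = ["apple", "orange", "pear", "table"]
logits = np.array([4.0, 2.0, 1.0, 0.5])  # toy next-token scores

def softmax_with_temperature(logits, temperature):
    # Divide the logits by T before the softmax: low T sharpens the
    # distribution towards the top choice, high T flattens it towards
    # uniform randomness.
    z = logits / temperature
    p = np.exp(z - z.max())  # subtract the max for numerical stability
    return p / p.sum()

for t in (0.2, 1.0, 5.0):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}: " + ", ".join(f"{w}={p:.2f}" for w, p in zip(tokens, probs)))
```

At T=0.2 the model almost always picks "apple"; at T=5.0 all four tokens become nearly equally likely. The "right range" is wherever related-but-not-identical continuations still get meaningful probability.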
Pairing an LLM with a Stable Diffusion model allows the resulting AI to… well, judge for yourself: https://llm-grounded-diffusion.github.io/
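The gist of that project, as I understand it, is a two-stage pipeline: the LLM first turns the prompt into an explicit scene layout (labelled bounding boxes), and a layout-conditioned diffusion model then renders it. A heavily stubbed sketch of just the data flow (all the names here are made up; see the link for the actual implementation):

```python
# Stage 1: an LLM turns a text prompt into an explicit scene layout.
# Stubbed with a plausible hand-written answer; the real project gets
# this by prompting an actual LLM.
def llm_propose_layout(prompt):
    return [
        {"object": "wooden table", "bbox": (0.10, 0.60, 0.80, 0.35)},
        {"object": "apple", "bbox": (0.42, 0.48, 0.14, 0.14)},
    ]

# Stage 2: a layout-conditioned diffusion model paints each boxed object
# in place. Stubbed: just report what would be rendered.
def diffusion_render(prompt, layout):
    placed = "; ".join(f"{item['object']} at {item['bbox']}" for item in layout)
    return f"<image for '{prompt}' with: {placed}>"

prompt = "an apple on a wooden table"
layout = llm_propose_layout(prompt)  # the LLM does the spatial reasoning
print(diffusion_render(prompt, layout))
```

The interesting part is that the spatial reasoning ("the apple sits on top of the table") happens in the LLM stage, where the diffusion model alone tends to struggle.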