Apparently, stealing other people's work to create a product for money is now "fair use" according to OpenAI, because they are "innovating" (stealing). Yeah. Move fast and break things, huh?
"Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today's leading AI models without using copyrighted materials," wrote OpenAI in the House of Lords submission.
OpenAI claimed that the authors in that lawsuit "misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence."
Average humans, sure, don't have a lot of understanding and insight, and little is needed to be able to draw a doodle on some paper. But trained artists have a lot of it, because part of the process is learning to interpret artworks and work out why the artist used a particular composition or colour or object. To create really great art, you do actually need a lot of understanding and insight, because everything in your work will have been put there deliberately, not just to fill up space.
An AI doesn't know why it's put an apple on the table rather than an orange; it just does it because human artists have done it. It doesn't know what apples mean on a semiotic level to the human artist or the humans that look at the painting. But humans do understand what apples represent. They may not pick up on it consciously, but somewhere in the backs of their minds, they'll see an apple in a painting and it'll make the painting mean something different than if the fruit had been an orange.
it doesn't know what apples mean on a semiotic level
Interestingly, LLMs seem to show emerging semiotic organization. Analyses of the network's activation space suggest that related concepts get trained into similar activation patterns, which is what allows LLMs to infer relationships zero-shot when sampled at a "temperature" (randomness level) in the right range.
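To make the "temperature" aside concrete: before sampling the next token, the model's raw scores (logits) are divided by the temperature and pushed through a softmax. A minimal sketch with made-up toy logits (not a real model's outputs) shows how a low temperature collapses onto the single most likely token, while a higher one spreads probability onto related alternatives:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Rescale logits by temperature, then normalize to probabilities.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for three hypothetical next-token candidates:
# "apple", "orange", "pear" (illustrative numbers only)
logits = [2.0, 1.0, 0.1]

cold = softmax_with_temperature(logits, 0.2)  # near-greedy: top token dominates
warm = softmax_with_temperature(logits, 1.5)  # flatter: related tokens get real mass

print(cold)
print(warm)
```

At the "cold" setting the model almost always emits the top candidate; at the "warm" setting nearby concepts in the distribution become reachable, which is the regime the comment above is gesturing at.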
Pairing an LLM with a Stable Diffusion model allows the resulting AI to… well, judge for yourself: https://llm-grounded-diffusion.github.io/
I'm unconvinced that the fact they're getting better at following instructions (putting objects where the prompter specifies, changing the colour, producing the right number of them, etc.) means the model actually understands what the objects mean beyond their appearance. It doesn't understand the cultural meanings attached to each object, and thus is unable to truly make a decision about why it should place an apple rather than an orange, or how the message within the picture changes when it's a red sports car rather than a beige people-carrier.
how the message within the picture changes when it's a red sports car rather than a beige people-carrier.
Well, that's part of the LLM step, so let's put it to the test:
Image prompt:
Create an ad for a car. The target audience are young adults, with high income, and thrill seeking. Come up with a brand name fitting such a car, and include it in the image. For the car, come up with a shape, and color, best fitting of the target audience. Come up with an image layout, art style, and camera angle, best fitting of the target audience. Include between zero and two additional items that will make the overall picture more attractive to the target audience.
ChatGPT:
[("an advertisement for a car", [45, 58, 422, 396]), ("a high-performance car", [123, 193, 266, 128]), ("an edgy and dynamic brand name logo", [188, 10, 136, 30]), ("a sleek, red sports car", [168, 254, 176, 102])]
Background prompt: An adrenaline-pumping car advertisement targeting young adults with high income and a thrill-seeking spirit. The layout includes a bold brand name logo, a sleek red sports car, and a dynamic composition to captivate the audience.
Negative prompt: additional items
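For readers unfamiliar with the format: the LLM's output above is a layout spec, a list of (phrase, [x, y, width, height]) boxes that the diffusion stage then uses to place each object. A minimal sketch of parsing and sanity-checking that output (the 512×512 canvas size is an assumption based on the LLM-grounded diffusion demo, not stated in the response):

```python
import ast

# The LLM's layout output, copied from the response above.
layout_str = """[("an advertisement for a car", [45, 58, 422, 396]),
 ("a high-performance car", [123, 193, 266, 128]),
 ("an edgy and dynamic brand name logo", [188, 10, 136, 30]),
 ("a sleek, red sports car", [168, 254, 176, 102])]"""

CANVAS = 512  # assumed canvas size; adjust for the actual model

def parse_layout(text):
    """Parse the (phrase, [x, y, width, height]) list the LLM emits
    and check every box fits inside the rendering canvas."""
    boxes = ast.literal_eval(text)
    for phrase, (x, y, w, h) in boxes:
        assert 0 <= x and 0 <= y and x + w <= CANVAS and y + h <= CANVAS, phrase
    return boxes

boxes = parse_layout(layout_str)
for phrase, box in boxes:
    print(f"{phrase}: {box}")
```

The point of the two-stage design is visible here: the LLM decides *what* goes *where* (a red sports car, centred, large), and the diffusion model only has to render each box's contents.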
How did it know to pick a "sleek red sports car"? Or the rest of the elements.