In its submission to the Australian government’s review of the regulatory framework around AI, Google said that copyright law should be altered to allow for generative AI systems to scrape the internet.
You’re missing the training even a child has received before they can do that. If you raised a child to age five completely alone in an empty room, they wouldn’t be able to draw anything at all, let alone something based on pictures. The act of drawing a variation on a bunny from a picture requires them to learn and practise fine motor skills, and it requires them to have an understanding of animals.
Humans get literally 150,000+ hours of training time (roughly 18 years, counting every hour) before we even treat them as adults.
Sure, but that training isn’t an algorithm deciding probabilities. Children don’t express themselves 100% based on their environment: on one side you have nature, on the other you have nurture.
An example:
The FBI’s studies of serial killers found that these people, even though they have been influenced by their environment to become what they are, respond to external stimuli in an abnormal way, and that abnormal response is what leads them down that path in the first place.
A child learns how language and creativity are expressed before attempting to express themselves. These bots aren’t built to deal with this kind of expression because, at their core, they are statistical models. A bot treats a sentence as a series of variables and works out what is most likely to come next. The sentence itself could be nonsensical, but the bot doesn’t know that; it’s simply using the probabilities it was trained on to construct the sentence.
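To make that concrete, here’s a minimal sketch of the idea in Python: a toy bigram model (the corpus and names are made up for illustration; real language models are vastly larger and use neural networks, but the principle of "pick the next word by its trained probability" is the same).

```python
import random
from collections import defaultdict

# Toy training corpus (hypothetical).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: for each word, how often each word follows it.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Pick the next word weighted by how often it followed `prev` in training."""
    followers = counts[prev]
    words = list(followers)
    weights = list(followers.values())
    return random.choices(words, weights=weights)[0]

# "Generate" a sentence: each step is just a weighted dice roll over the
# counts; the model has no idea whether the result makes sense.
word = "the"
sentence = [word]
for _ in range(5):
    if not counts[word]:  # dead end: this word never appeared mid-corpus
        break
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

The point of the sketch is that nothing in it understands cats or mats; the output is whatever the probabilities happen to produce.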
You might say bots have their own way of expressing themselves, but I’d say that’s something we project onto the bot rather than something it demonstrates itself. I’m sure it’s very cute when it apologises for making a mistake, but that apology isn’t sincere: it’s been programmed to respond that way when it thinks you’re pointing out its mistakes. It’s merely imitating remorse rather than displaying actual remorse.