We’re not talking about perception as in making an AI literally perceive anything. I can feed you prompts and ideas of my own and get an output no different than if I were using AI tools. The difference is that AI tools have already gathered the collective knowledge you’d get from, say, doing a course in Photoshop, taking an art class, reading an encyclopaedia or a novel, going to school for music theory, etc.
I get that part, but I think what gets taken more seriously is how "human" the responses seem, which is a testament to how good the LLM is. But that’s set dressing when GPT has been known to give incorrect, outdated, or contradictory answers. Not always, but unless you know what kind of answer to expect, you have to verify what it’s telling you, which means you’ll spend half your time fact-checking the LLM.
Exactly. How is the end result not the user’s if they need to craft, modify, adjust, and manipulate the AI’s prompts, inputs, and outputs to produce something new or coherent?
It’s just a tool. A tool that will improve access to human knowledge and improve each individual’s ability to create and produce more complex works with less effort. Each of those works will feed back into the algorithm, expanding the knowledge and capacity of both AI and human ingenuity.