No, because LLMs are just a mathematical blender with ONE goal in mind: construct a good sentence. They have no thoughts, no corrective feedback of their own; they just spit out sentences.
You MIGHT get to passing a Turing test with enough feedback tied in, but then the “consciousness” is coming specifically from the complexity of the overall system, and still very much not from the LLM itself.
In my opinion, you are giving way too much credit to human beings. We are mainly just machines that spit out sentences.
So you’re saying it’s not good enough for a sentient personality, but it might be good enough for an average politician?