I highly doubt it. They may be able to simulate the appearance of reasoning, but I won’t believe that they’ve accomplished this goal until their robots start killing humans over ideological differences.
To be fair, it might be too late by then, but it also might be true that it’s not just the fairy tales with happy endings that are unrealistic. There’s no sense worrying about T-1000s coming for you in real life when that whole movie was mostly special effects; if the world is about to die, I don’t see it coming from machines. We don’t know where free will comes from, or even whether it’s just a math equation or something truly beyond explanation, but computers don’t seem to have it.
Scarily enough, the Quran (with all the things that implies; I am not saying this is actually reality, only that parallels should not fall into place that way under random chance) suggests that this conclusion was engineered in some sense, that electronics were never going to give us godhood due to the limitations of reality. It’s kind of blunt in saying it, so I get why the skepticism needs to stay involved, but the idea is that our “household gods” of Siri and Alexa and such are really just basic circuitry compared to a housefly or mosquito, let alone to anything larger or capable of emotional attachment.
Sorry if this is preachy, I’m a writer who hasn’t done enough writing lately and I’m just at a stage where I feel like it’s too late for my writing to matter.
Yeah, no worries, I get it.
I’m a perennial optimist, so I look more at the Star Trek future than any of the dystopias, though dystopia is my favorite type of book (setting? genre?). Every dystopia shares the same general theme of the human spirit pushing against evil; the difference from other stories is the lack of success.
I think people take these warnings to heart and avoid the worst of it. I don’t think we’ll get to the Star Trek utopia, but I think we’ll get closer to it than to any of the various dystopias people concoct. Humans are late at responding to issues, but we generally do respond.
I think the same is true for AI. It’ll start as a helpful piece of tech, transform into a monster, and then we’ll correct and control it. We’ve done that in the past with slavery, nuclear weapons, and fascism, and I think we’ll continue to overcome climate change, AI, and other challenges, albeit much later than we should.
“Hey! That’s just a machine programmed to kill me, it’s not making the decision to kill me itself!”