Elon Musk has launched a new AI company called xAI with the goal of understanding the true nature of the universe. The team at xAI includes AI researchers who have worked at companies like OpenAI, Google, Microsoft and DeepMind. Little is known about xAI currently except that Musk seeks funding from SpaceX and Tesla to start it. The xAI team plans to host a Twitter Spaces discussion on July 14th to introduce themselves. xAI is separate from Musk's X Corp but will work closely with his other companies like Tesla and Twitter.
Given that all of the existing "AI" models are in fact not intelligent at all, and are basically glorified predictive text… any insights an AI could come up with about the true nature of the universe would likely be like one of those sayings that initially sounds deep and meaningful, but is in fact completely inane and meaningless. Calling it now: it'll come out with "if you immediately know the candlelight is fire, then the meal was cooked a long time ago".
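To make the "glorified predictive text" point concrete, here is a toy sketch of next-word prediction by frequency counting, the basic idea that large language models scale up enormously. The corpus, function names, and seed word are all invented for illustration; real models use learned probabilities over huge contexts, not raw bigram counts.

```python
import random
from collections import defaultdict

# Toy bigram "language model": it predicts the next word purely from
# counts of which word followed which in its (tiny, made-up) training text.
corpus = "the light of the candle is the light of the fire".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def predict(word):
    # Pick the most frequent continuation. No understanding involved,
    # just counting what usually comes next.
    options = follows.get(word)
    if not options:
        return None
    return max(set(options), key=options.count)

print(predict("the"))  # "light": the most frequent follower of "the"
```

Everything such a model "says" is a remix of what usually follows what in its training data, which is the commenter's point about seemingly deep output.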
To me, what is surprising is that people refuse to see the similarity between how our brains work and how neural networks work. I mean, it's in the name. We are fundamentally the same, just on different scales. I believe we work exactly like that, but with far more inputs and outputs and a much deeper network; the fundamental principles, I think, are the same.
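The loose analogy being claimed here can be shown with a single artificial neuron: a weighted sum of inputs squashed through a nonlinearity, which is a crude mathematical cartoon of a biological neuron integrating signals and firing. The weights and inputs below are arbitrary illustrative numbers, not from any trained model.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias term...
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed by a sigmoid into the range (0, 1), loosely analogous
    # to a neuron's firing rate.
    return 1 / (1 + math.exp(-activation))

out = neuron([1.0, 0.5], [0.8, -0.4], 0.1)
print(round(out, 3))  # 0.668
```

Whether stacking millions of these units really captures the "fundamental principles" of brains is exactly what the next commenter disputes.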
The difference is that somehow the nets in our brains are creating emergent behaviour while the nets in code, even with a lot more power, aren't. I feel we are probably missing something pivotal in constructing them.
Given the absolutely vast amounts of data that go into these models, especially the most recent ones, I'm sceptical that there was absolutely nothing in the training data from a WikiHow about stacking objects or tutorials about how to create code that can draw animals. I read an article a few months ago about someone asking an "AI" to create a crochet pattern for a narwhal, and the resulting pattern did indeed look something like a narwhal, in that it had all the right parts in roughly the right place, even if it was still a ghastly abomination. There's no evidence that the "AI" actually understood what it was creating: there are plenty of narwhal crochet patterns online which were included in its datasets, and it simply predicted a pattern based on those.
I'm inclined to believe the unicorn code is the same. It doesn't need to understand the concept of a head or even a unicorn to be able to predict a code for a unicorn without a horn. In the vastness of the internet, there is undoubtedly a tutorial out there that has some version of "you can turn your unicorn into a horse by removing this bit of code". There probably are tutorials out there for "if you want your unicorn facing the other way, do it like this", too. Its training data will always include the lines of code for the horn as part of the code for the head. It's not like there's code out there for "how to draw a unicorn with a horn on its butt" (although I'm open to being proved wrong on this, I'm sure somebody on the internet has a thing for unicorns with horns on their butts instead of their heads, but it's unlikely to be the most predictable structure for the code). So predictive text ability alone would predict it's unlikely for the horn code to be anywhere near the butt code.
The training data likely also includes all the many, many texts out there describing how to test for a theory of mind, so the ability to predict what someone writing about theory of mind would say (including descriptions of how a child/animal passing a theory of mind test will predict where objects are) doesn't prove that an "AI" has a theory of mind.
So I remain very, very sceptical that there is any general intelligence in the latest versions. They just have larger datasets and more refined predictive abilities, so the results are more accurate and less prone to hallucination. That's not the same as evidence of actual consciousness. I'd be more convinced if it correctly completed a brand new puzzle, which has never been done before and has not been posted about on the internet or written about in scientific journals or text books. But so far all the evidence of general intelligence is predicting the response to a question or puzzle for which there is ample data about the correct response.
AI coming up with sayings of that type is something already being done (https://inspirobot.me/). YouTube reaction videos exist referring to that site (like "Ai Generates Hilarious Motivational Posters" by jacksepticeye).
any insights an AI could come up with about the true nature of the universe would likely be like one of those sayings that initially sounds deep and meaningful, but is in fact completely inane and meaningless.
There's a term for that: a "deepity", Daniel Dennett's coinage for a statement that sounds profound but is actually trivial or meaningless.