-37 points

What is your brain doing if not statistical text prediction?

The show Westworld portrayed it pretty well. The idea of jumping from text prediction to conscience doesn’t seem that unlikely. It’s basically text prediction on a loop with some external inputs to interact with.
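
Here is a minimal sketch of the loop I mean; everything in it is hypothetical (predict_next_text stands in for whatever statistical text predictor you like), it just shows the shape of the idea:

```python
# A purely illustrative sketch of "text prediction on a loop with
# external inputs". predict_next_text() is a hypothetical stand-in for
# any statistical text predictor (an LLM, a Markov chain, ...).

def predict_next_text(context: str) -> str:
    # A real predictor would return the statistically most likely
    # continuation of `context`; this placeholder just echoes a marker.
    return " [predicted continuation]"

context = ""
for _ in range(3):                         # a few turns of the loop
    context += input("outside input: ")    # external input enters the loop
    thought = predict_next_text(context)   # pure text prediction
    context += thought                     # the prediction feeds back in
    print(context)
```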

14 points

How to tell me you’re stuck in your head terminally online without telling me you’re stuck in your head terminally online.

But have something more to read.

-4 points

Why are you being so rude?

Did you actually read the article, or did you just google until you found something that reinforced your pre-established opinion to use as a weapon against a person you don’t even know?

I will actually read it. I’m probably the only one of the two of us who will.

If it’s convincing I may change my mind. I’m not a radical, like many other people are, and my opinions are subject to change.

12 points

Funny to me how defensive you got so quickly, accusing the other person of not reading the linked paper before even reading it yourself.

The reason OP was so rude is that your very premise of “what is the brain doing if not statistical text prediction” is completely wrong and you don’t even consider it could be. You cite a TV show as a source of how it might be. Your concept of what artificial intelligence is comes from media and not science, and is not founded in reality.

The brain uses words to describe thoughts; the words are not actually the thoughts themselves.

https://advances.massgeneral.org/neuro/journal.aspx?id=1096

Think about small children who haven’t learned language yet. Do those brains still do “statistical text prediction” despite not having words to predict?

What about dogs and cats and other “less intelligent” creatures? They don’t use any words, but we can still teach them to understand ideas. You don’t need to utter a single word, not even a sound, to train a dog to sit. Are they doing “statistical text prediction”?

10 points

It’s a basic argument of generative complexity. I found the article some years ago while trying to find an earlier one (I don’t think by the same author) that argued along the same complexity lines, essentially saying that if we worked like AI folks think we do, we’d need so-and-so many trillion parameters and our brains would be the size of planets. That article talked about the need for context switching in generation (we don’t have access to our cooking skills while playing sportsball); this article talks about the necessity of being able to learn how to learn. Not just at the “adjust learning rate” level, but mechanisms that change the resulting coding, thereby creating different such contexts, or at least that’s where I see the connection between the two. In essence: to get to AGI we need AIs which can develop their own topology.

As to “rudeness”: make sure to never visit the Netherlands. Usually how this goes is that I link the article and the AI faithful I pointed it out to go on a denial spree… because if they a) are actually into the topic, not just bystanders, and b) did not have some psychological need to believe (including “my retirement savings are in AI stock”), they c) would’ve come across the general argument themselves during their technological research. Or come up with it themselves; I’ve also seen examples of that: if you have a good intuition about complexity (and many programmers do), it’s not an unlikely shower thought to have. Not as fleshed out as in the article, of course.

1 point

They have a conclusion that they’ve come to the conversation with, and anything that challenges it gets downvoted without consideration.

The assumptions you aren’t allowed to challenge, in order: AI is bad; Computer intelligence will never match or compete with human intelligence; computer intelligence isn’t really intelligence at all, it’s this other thing [insert ‘something’ here like statistical inference or whatever].

“AI is bad” is more of a dictum extending from cultural hegemony than anything else. It’s an implicit recognition that in many ways, Silicon Valley culture is an effective looting of the commons, and therefore one should reject all things that extend from that culture. It’s not a logical or rational argument against AI necessarily, but more of an emotional reaction to the culture which developed it. As a self-preservation mechanism this makes some sense, but obviously it’s not slowing down the AI takeover of all things (which is really just putting a highlighter on the broader point that Silicon Valley tech companies were already in control of major aspects of our lives).

“Computer intelligence will never match human intelligence” is usually some combination of goalpost moving or redefining intelligence on the fly (this one I’ve specifically presented as the third critique, because it warrants addressing). This is an old trope that goes back almost to the beginning of computer intelligence (it’s not clear to me our definitions of machine intelligence are very relevant). It quite literally started with multiplying large numbers. Then, for literally decades, things like chess and strategy, forward-facing notions in time, were held up as something only “intelligent systems” could do. Then, post-Deep Blue, that got relegated to very clever programmers, and we changed intelligence to be something about learning. Then systems like AlphaGo came about, which basically learned the rules of the game by playing, and we relegated those systems to “domain-specific” intelligences. So in this critique you are expected to accept and confirm the moving of goalposts around machine intelligence.

Finally, there’s the “what computers do isn’t intelligence, it’s some_other_thing.exe™” critique. In the history of machine intelligence, that some other thing has been counting very quickly, having large-ish memory banks, statistical inference, memorization, etc. The biggest issue with this critique is that when you scratch and sniff it, you very quickly catch an aroma of Chomsky’s leather chair (more so if we’re talking about LLMs), and maybe even a censer from a Catholic church. The idea that humans are fundamentally different and in some way special is, frankly, fundamental to most Western ideologies in a way we don’t really discuss in the context of this conversation. But the concept of spirit, and the idea that there is something “entirely unique” about humans versus “all of the rest of everything”, is at the root of the Abrahamic traditions and therefore also at the root of a significant portion of global culture. In many places in the world it’s still heretical to imply that human beings are no more special or unique than the oak or the capybara or the flatworm or the dinoflagellate. This assumption, I think, is on great display in Chomsky’s academic work on the concept of the LAD, or language acquisition device.

Chomsky gets a huge amount of credit for shaking up linguistics, but what we don’t often talk about is how, effectively, his entire academic career got relinquished to the dustbin, or at least is now in that pile of papers where we’re not sure whether to “save or throw away”. Specifically, much of Chomsky’s work was predicated on the identification of something in humans which would be called a language acquisition device, or LAD, and the idea that this LAD would be found as a region in the human brain and would explain how humans gain language. Just notice the overall shape of this argument: it’s as old as the Egyptians, at least in trying to find the “seat of the soul”, and it follows through Abrahamism as well. What LLMs did that basically shattered this notion was show at least one case where no special device was necessary to acquire language; where, in fact, no human components at all were necessary other than a large corpus of training data; that maybe language, and the very idea of language acquisition, is not special or unique to humans. LLMs don’t specifically address the issue of a LAD, but they go a step farther in not needing to. Chomsky spent the last of his verbal days effectively defending this wrong notion of his (which had already been addressed in the neuroscience and linguistics literature) specifically against LLMs, which is an interesting and bitter irony for a linguist.

To make the point more directly: we lack a good, coherent, testable definition of human intelligence, which makes any comparison to machine intelligence somewhat arbitrary and contrived, often to support the interlocutor’s assumptions. Machine intelligence may get dismissed as statistical inference, sure, but then why can you remember things sometimes but not others? Why do you perform better when you are well rested and well fed versus tired and hungry, if not because there is an underlying distribution of neurons, some of which are ready to go and some of which are a bit spent and maybe need a nap?

And so I would advocate caution about investing heavily in a conversation where these assumptions are being made. It’s probably not going to be a satisfying conversation, because almost assuredly the person making those assumptions hasn’t dug very deeply into these matters. And look at the downvote ratio. It’s rampant on Lemmy. Lemmy is very much a victim of its pack mentality and dog-piling nature.

14 points

Human brains also do processing of audio, video, self-learning, feelings, and many more things that are definitely not statistical text prediction. There are even people without an “inner monologue” who function just fine.

Some research does use an LLM in combination with other AI to get better results overall, but a purely LLM approach isn’t going to work.

-11 points

Yep, of course. We do more things.

But language is a big part of human intelligence and consciousness.

I don’t know, and I would assume that nobody really knows. But I have a feeling that people without an internal monologue do have one and are just not aware of it. Or maybe they talk so much that all the monologue is external.

13 points

Interesting that you focus on language, because that’s exactly what LLMs cannot understand. There’s no LLM that actually has a concept of the meaning of words. Here’s an excellent essay illustrating my point.

The fundamental problem is that deep learning ignores a core finding of cognitive science: sophisticated use of language relies upon world models and abstract representations. Systems like LLMs, which train on text-only data and use statistical learning to predict words, cannot understand language for two key reasons: first, even with vast scale, their training and data do not have the required information; and second, LLMs lack the world-modeling and symbolic reasoning systems that underpin the most important aspects of human language.

The data that LLMs rely upon has a fundamental problem: it is entirely linguistic. All LMs receive are streams of symbols detached from their referents, and all they can do is find predictive patterns in those streams. But critically, understanding language requires having a grasp of the situation in the external world, representing other agents with their emotions and motivations, and connecting all of these factors to syntactic structures and semantic terms. Since LLMs rely solely on text data that is not grounded in any external or extra-linguistic representation, the models are stuck within the system of language, and thus cannot understand it. This is the symbol grounding problem: with access to just a formal symbol system, one cannot figure out what these symbols are connected to outside the system (Harnad, 1990). Syntax alone is not enough to infer semantics. Training on just the form of language can allow LLMs to leverage artifacts in the data, but “cannot in principle lead to the learning of meaning” (Bender & Koller, 2020). Without any extralinguistic grounding, LLMs will inevitably misuse words, fail to pick up communicative intents, and misunderstand language.
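
To make “statistical learning to predict words” concrete, here is a toy sketch of the idea. Real LLMs use neural networks over tokens rather than word counts, so this is only an analogy of the objective, not how any actual model is implemented:

```python
# Toy illustration of "statistical learning to predict words": count
# which word follows which in a corpus, then predict the most frequent
# follower. Not how a real LLM is implemented, only the same objective.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1        # statistics over the text alone

def predict(word: str) -> str:
    # Most frequent continuation seen in training; no meaning involved.
    return followers[word].most_common(1)[0][0]

print(predict("the"))   # -> "cat" (seen most often after "the")
```

The model never leaves the stream of symbols; it only ever sees which symbol tends to follow which.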

5 points

language is a big part of human intelligence and consciousness

But an LLM isn’t actually language. It’s numbers that represent tokens that build words. It doesn’t have the concept of a table, just the numerical weighting of other tokens related to “tab” & “le”.
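
You can see this for yourself with something like the sketch below, using the tiktoken library; the exact splits and IDs depend on the tokenizer, so whether “table” ends up as one token or several is tokenizer-specific:

```python
# Sketch: what the model actually receives, i.e. integer token IDs rather
# than words. Needs `pip install tiktoken`; exact splits and IDs are
# tokenizer-specific, so treat the printed output as illustrative only.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("Put it on the table.")
print(ids)                               # a list of integers
print([enc.decode([i]) for i in ids])    # the text fragments those IDs map to
```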

8 points

What is your brain doing if not statistical text prediction?

Um, is something wrong with your brain, buddy? Because that’s definitely not at all how mine works.

-3 points

Then why did you just express yourself in such a statistically predictable manner?

You saw other people using that kind of language while being derogatory to someone they don’t like on the internet. You saw yourself in the same context, and your brain statistically chose to use the same set of words that has been seen the most in this particular context. ChatGPT could literally have given me your exact same answer if it had been trained in the same echo chamber as you.

Have you ever debated someone from the polar opposite end of the political spectrum and complained that “they just repeat the same propaganda”? Doesn’t that sound like statistical prediction to you? Those are very simple cases, and there can be more complex ones, but our simplest behaviours are the ones that reveal the basics of what we are made of.

If you had at least given me a more complex response, you might have had an argument (as humans, our process can be far more complex and hide a little of what we actually seem to be doing). But instances like this one, where one person (you) responded with such an obvious statistical prediction of what needs to be said in a particular context, just make my case. Thanks.

2 points

But people who agree with my political ideology are considerate and intelligent. People who disagree with me are stupider than ChatGPT 3.5, just say the same shit, and can’t be reasoned with.

4 points

conscience

ok buddy

0 points

It’s “free will”. They chose to say what they wanted.

At least this is what the old religions teach. I don’t know what AI preachers you’re learning this nonsense from.

0 points

Church?

Free will vs. determinism doesn’t have anything to do with religion.

I do think that the universe is deterministic and that humans (or any other beings) do not have free will per se, in the sense that given the same state of the universe at some point, the next states are determined, and if it were repeated, the evolution of the state of the universe would be the same.

Nothing to do with religion. Just with things not happening out of nothing: every action is a consequence of another action, and that includes all our brain impulses. I don’t think there are “souls” outside the state of matter that could make decisions by themselves without being determined.

But this is mostly a philosophical question of what “free will” means. Is it free will as long as you don’t know that the decision was already made from the very beginning?
