38 points

Eh. Even heat is a statistical phenomenon, in some reference frame or another. I’ve developed model-dependent apathy.

29 points

The meme would work just the same with the “machine learning” label replaced with “human cognition.”

72 points

Have to say that I love how this idea congealed into “popular fact” as soon as people’s paychecks started relying on massive investor buy-in to LLMs.

I have a hard time believing that anyone truly convinced that humans operate as stochastic parrots or statistical analysis engines has any significant experience interacting with other human beings.

Less dismissively, are there any studies that actually support this concept?

47 points

Speaking as someone whose professional life depends on an understanding of human thoughts, feelings and sensations, I can’t help but have an opinion on this.

To offer an illustrative example:

When I’m writing feedback for my students, which is a repetitive task with individual elements, the result is original and different every time.

And yet, anyone reading it would soon learn to recognise my style, the same as they could learn to recognise someone else’s, or the way many people have already learned to spot text written by AI.

I think it’s fair to say that this is because we do have a similar system for creating text, especially in response to a given prompt, just like these things called AI. This is why people who read a lot develop their writing skills and style.

But, and this is really significant, that’s not all I have. There’s so much more than that going on in a person.

So you’re both right in a way, I’d say. This is how humans develop their individual style of expression: through data collection and stochastic methods, happening outside of awareness. But, as you suggest, just because humans do this too doesn’t mean the two structures are the same.

15 points

Idk. There’s something going on in how humans learn which is probably fundamentally different from how current ML models learn.

Sure, humans learn from observing their environments, but they generally don’t need millions of examples to figure something out. They’ve got some kind of heuristics or other ways of learning that let them understand many things after seeing them just a few times, or even once.

Most of the progress in ML models in recent years has come from the discovery that you can get massive improvements from current models by just feeding them more and more data. Essentially brute force. But there’s a limit to that, either because there might be a theoretical point where the gains stop, or because of the more practical issue of only having so much data and compute to throw at it.

There’s almost certainly going to need to be some kind of breakthrough before we’re able to get meaningfully further than we are now, let alone match human cognition.

At least, that’s how I understand it from the classes I took in grad school. I’m not an expert by any means.

5 points

The big difference between people and LLMs is that an LLM is static. It goes through a learning (training) phase as a singular event. Then going forward it’s locked into that state with no additional learning.

A person is constantly learning. Every moment of every second we have a ton of input feeding into our brains as well as a feedback loop within the mind itself. This creates an incredibly unique system that has never yet been replicated by computers. It makes our brains a dynamic engine as opposed to the static and locked state of an LLM.

15 points

I’d love to hear about any studies explaining the mechanism of human cognition.

Right now it’s looking pretty neural-net-like to me. That’s kind of where we got the idea for neural nets from in the first place.

12 points

It’s not specifically related, but biological neurons and artificial neurons are quite different in how they function. Neural nets are a crude approximation of the biological version. That doesn’t mean they can’t solve similar problems or achieve similar levels of cognition, just that about the only similarity they have is “network of input/output things”.

9 points

At every step of modern computing people have thought that the human brain looks like the latest new thing. This is no different.

3 points

I’d love to hear about any studies explaining the mechanism of human cognition.

You’re just asking for any intro cognitive psychology textbook.

1 point

Ehhh… It depends on what you mean by human cognition. Usually when tech people talk about cognition, they’re just talking about a specific cognitive process in neurology.

Tech enthusiasts tend to present human cognition in a reductive manner that for the most part focuses only on the central nervous system, when in reality human cognition includes any way we interact with the physical world or with metaphysical concepts.

There’s something called the mind-body problem that has been mostly a philosophical concept for a long time, but is currently influencing work in medicine and, to a lesser degree, in tech.

Basically, it questions whether it’s appropriate to delineate the mind from the body when it comes to consciousness. There’s a lot of evidence to suggest that mental phenomena are a subset of physical phenomena, meaning that cognition relies on actual physical interactions with our surroundings to develop.

12 points

If by “human cognition” you mean “tens of millions of impoverished people manually checking and labeling images and text so that the AI can pretend to exist,” then yes.

If you mean “it’s a living, thinking being,” then no.

5 points

My dude, it’s math all the way down. Brains are not magic.

13 points

There’s a lot we understand about the brain, but there is so much more we don’t understand about it and about “awareness” in general. It may not be magic, but it certainly isn’t 100% understood.

4 points

(Working with the assumption we mean stuff like ChatGPT.) Mkay… though math and logic are A LOT more than just statistics. At no point did we prove that statistics alone is enough to reach the point of cognition. I’d argue no statistical model can ever reach cognition, simply because it averages too much. The input we train it on is also fundamentally flawed: feeding it only text skips the entire thinking and processing step of creating an answer. It literally just takes text and predicts, from previous answers, what the most likely next text is. It’s literally incapable of generating or reasoning in any way other than what was already spelled out somewhere in the dataset. At BEST it’s a chat simulator (or dare I say… a language model?); it’s nowhere near an intelligence emulator in any capacity.

-1 points

I think saying machine learning is just statistics is a bit misleading. There’s not much statistics going on in deep learning. It’s mostly just “eh, this seems to work I dunno let’s keep doing it and see what happens”.

16 points

It’s mostly just “eh, this seems to work I dunno let’s keep doing it and see what happens”.

Yeah, no.

-2 points

Well, eventually the thing you’re working on falls out of fashion in favor of the next trendy thing.

4 points

While I don’t disagree with that statement at all, I honestly have no idea how it’s related to my comment (probably because I’m an idjit)

4 points

But… you have to create criteria for what qualifies as success vs. failure, and it’s a scale, not a boolean true/false. That’s where the statistics come in, especially if you have multiple criteria with different weights, etc.

3 points

The criterion is a loss function, which can be whatever works best for the situation. Some loss functions might have statistical interpretations, but it’s not really a necessity. For boolean true/false there are many to choose from: hinge loss and logistic loss are two common ones, and the former is the basis of support vector machines.

But the choice of loss is just one small part of the design of a deep learning model. Choice of activation functions, layer connectivity, regularization and optimizer must also be considered. Not all of these have statistical interpretations. Like, what is the statistical interpretation of the choice between ReLU and Leaky ReLU? People seem to prefer one over the other simply because that’s what worked best for them.
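
To make that concrete, here’s a minimal sketch in Python/NumPy of the things named above; the function names and the alpha default are just illustrative conventions, nothing prescribed:

```python
import numpy as np

# Margin-based losses for a true/false label y in {-1, +1} and a raw score s.
def hinge_loss(s, y):
    # Basis of support vector machines: zero once the margin y*s exceeds 1.
    return np.maximum(0.0, 1.0 - y * s)

def logistic_loss(s, y):
    # Smooth everywhere; this one does have a probabilistic reading.
    return np.log1p(np.exp(-y * s))

# The activation choice mentioned above: an empirical knob, not a statistic.
def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # alpha = 0.01 is a common convention, not a statistically derived value.
    return np.where(x > 0.0, x, alpha * x)
```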

11 points

iT’s JuSt StAtIsTiCs

23 points

But it is, and it always has been. Absurdly complexly layered statistics, calculated faster than a human could.

This whole “we can’t explain how it works” is bullshit from software engineers too lazy to unwind the emergent behavior caused by their code.

17 points

But it is, and it always has been. Absurdly complexly layered statistics, calculated faster than a human could.

Well sure, but as someone else said even heat is statistics. Saying “ML is just statistics” is so reductionist as to be meaningless. Heat is just statistics. Biology is just physics. Forests are just trees.

5 points

It’s like saying a jet engine is essentially just a wheel and axle rotating really fast. I mean, it is, but it’s shaped in such a way that it’s far more useful than just a wheel.

0 points

Yeah, but the critical question is: is human intelligence statistics?

Seems like no, to me: a human lawyer wouldn’t, for instance, make up case law that doesn’t exist. AI has done that one already. If it had even the most basic understanding of what the law is and does, it would have known not to do that.

This shit is just MegaHAL on a GPU.

36 points

I agree with your first paragraph, but unwinding that emergent behavior really can be impossible. It’s not just a matter of taking spaghetti code and deciphering it; ML usually works by generating weights in something like a decision tree, neural network, or statistical model.

Assigning any sort of human logic to why particular weights ended up where they are is educated guesswork at best.

-2 points

You know what we do in engineering when we need to understand a system a lot of the time? We instrument it.

Please explain why this can’t be instrumented. Please explain why the trace data could not be analyzed offline at different timescales as a way to start understanding what is happening in the models.

I’m fucking embarrassed for CS lately.

5 points

This whole “we can’t explain how it works” is bullshit

Mostly it’s just millions of combinations of y = k*x + m with y = max(0, x) between. You don’t need more than high school algebra to understand the building blocks.

What we can’t explain is why it works so well. It’s difficult to understand how the information is propagated through all the different pathways. There are some ideas, but it’s not fully understood.
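
To illustrate, here’s that building block in a few lines of Python/NumPy; the numbers are made up, where a trained network would have millions of fitted ones:

```python
import numpy as np

# One building block: a linear map (the k's and m's), then y = max(0, x).
W = np.array([[0.5, -1.2],
              [0.8,  0.3]])   # the k's
b = np.array([0.1, -0.4])     # the m's

def block(x):
    return np.maximum(0.0, W @ x + b)  # elementwise max(0, k*x + m)

x = np.array([1.0, 2.0])
print(block(block(x)))  # stacking blocks is still just high school algebra
```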

2 points

??? It works well because we expect the problem space we’re searching to be continuous and differentiable, and the targeted variable to be dependent on the given features. Why wouldn’t it work?

15 points

It’s totally statistics, but that second paragraph really isn’t how it works at all. You don’t “code” neural networks the way you code up a website or a game. There’s no “if (userAskedForThis) {DoThis()}”. All the coding you do in neural networks is to define a model and a training process, but that’s it; before training, the behavior is completely random.

The neural network engineer isn’t directly coding up behavior. They’re architecting the model (random weights by default), setting up an environment (training and evaluation datasets, tweaking some training parameters), and letting the model’s weights be trained, or “fit”, to the data. Its behavior isn’t designed; the virtual environment it evolved in was. Bigger, cleaner datasets, model architectures suited to the data, and an appropriate number of training iterations (epochs) can improve results, but they’ll never be perfect, just an approximation.
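
A rough sketch of that division of labor, in PyTorch (my choice of framework here; the sizes and data are toy stand-ins):

```python
import torch
from torch import nn

# The only things the engineer writes: an architecture and a training process.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Stand-in data; a real setup would use curated training/evaluation datasets.
X, y = torch.randn(256, 10), torch.randn(256, 1)

for _ in range(100):             # training iterations (epochs)
    opt.zero_grad()
    loss = loss_fn(model(X), y)  # how wrong the current weights are
    loss.backward()              # which direction to nudge each weight
    opt.step()                   # nudge them; no behavior is coded by hand
```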

2 points

But the actions taken by the model in the virtual environments can always be described as discrete steps. Each modification to the weights done by each agent in each generation can be described as discrete steps. Even if I’m fucking up some of the terminology, basic computer architecture enforces that there are discrete steps.

We could literally trace each instruction that runs on the hardware that runs these things if we wanted full auditability, to eat all the storage space ever made, and to drive someone insane. Have none of you AI devs ever taken an embedded programming/machine language course? Never looked into reverse engineering compiled executables?

I understand that these things work by doing these steps millions upon millions of times, but there has to be a better middle ground for tracing them than “lol i dunno, computer brute forced it”. It’s a mixture of laziness and an unwillingness to let responsibility negatively impact profits that results in so many in the field summarizing it as literally impossible.
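
For what it’s worth, layer-level instrumentation along these lines already exists in the mainstream frameworks. A rough sketch using PyTorch’s forward hooks (toy model, naive in-memory log; a serious trace would stream to disk):

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
trace = []  # (layer name, activation) pairs for offline analysis

def make_hook(name):
    def hook(module, inputs, output):
        trace.append((name, output.detach().clone()))
    return hook

# Register a hook on every layer so each forward pass gets recorded.
for name, module in model.named_modules():
    if name:  # skip the outer container itself
        module.register_forward_hook(make_hook(name))

model(torch.randn(1, 10))
for name, activation in trace:
    print(name, tuple(activation.shape))
```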

2 points

TensorFlow has some libraries that help visualize the “explanation” for why its models classify something the way they do (in the tutorial example, a fireboat), in the form of highlights over the most salient parts of the data.

Neural networks are not intractable; we just haven’t built out the libraries for understanding and explaining them yet.
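
The simplest form of the idea is plain gradient saliency: the gradient of a class score with respect to the input pixels shows which pixels moved the prediction most. A rough sketch with TensorFlow (the model choice here is an assumption for illustration; the fireboat example sounds like TensorFlow’s integrated-gradients tutorial, which uses a fancier version of this):

```python
import tensorflow as tf

# Any trained Keras image classifier would do; this one is just convenient.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

def saliency(image, class_index):
    # image: a preprocessed (224, 224, 3) array for this particular model
    x = tf.convert_to_tensor(image[tf.newaxis, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x)[0, class_index]
    grads = tape.gradient(score, x)                   # d(score) / d(pixel)
    return tf.reduce_max(tf.abs(grads), axis=-1)[0]   # per-pixel importance
```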

1 point

But it’s called a neural network, and it learns just like a human, and some models have human names, so it’s a mini digital human! Look at this program that detects things in an image, it’s smart, it knows context!

12 points

It’s curve fitting.

8 points

But it’s fitting millions of data points in hundreds of dimensions.
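
Shrunk down a few orders of magnitude so it fits in a comment, the contrast looks something like this (NumPy, made-up data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Curve fitting as most people picture it: one input, a handful of points.
x = np.linspace(-1.0, 1.0, 50)
y = 3.0 * x**2 + rng.normal(0.0, 0.1, size=x.shape)
coeffs = np.polyfit(x, y, deg=2)       # classic least-squares fit

# The ML version of the same idea: a least-squares fit over hundreds of
# input dimensions and (in reality) millions of samples.
X = rng.normal(size=(10_000, 300))
w_true = rng.normal(size=300)
t = X @ w_true + rng.normal(0.0, 0.1, size=10_000)
w_fit, *_ = np.linalg.lstsq(X, t, rcond=None)
```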

