There’s also this gem:

Anyway, feast your eyes


This isn’t an accurate representation of the human mind, but it is certainly an accurate representation of the redditor mind.


[Rhetoric - Challenging 12] Differentiate ChatGPT from the human brain.

[Challenging: Failure] — Bad news: they’re completely identical. The computer takes input and produces output. You take input and produce output. In fact…how can you be sure you’re not powered by ChatGPT?

— That would explain a lot.

— Your sudden memory loss, your recent lack of control over your body and your instincts; nothing more than a glitch in your code. Shoddy craftsmanship. Whoever put your automaton shell together was bad at their job. All that’s left for you now is to hunt down your creator — and make them fix whatever it was they missed in QA.

Thought gained: Cop of the future

Deleted by creator

Never stop posting, each new post is your finest accomplishment

The Lieutenant gazes at you, recognizing your inner turmoil. Is he perhaps an AI too?


[Empathy - Trivial 6] What if Kim is an AI as well?

:de-dice-1: :de-dice-3:

:de-empathy: [Trivial: Failure] – The expression on his face, the Lieutenant’s worried consternation. It can only mean one thing: Kim is your creator, and he’s afraid you are realizing it.


jesus i love this and ive never played disco


Is all of DE written like Philip K Dick?

Maybe it’s time.


All that’s left for you now is to hunt down your creator — and make them fix whatever it was they missed in QA.

Isn’t this the plot of “lethal inspection,” the futurama episode?


Can someone explain to me about the human brain or something? I’ve always been under the impression that it’s kinda like the neural networks AIs use but like many orders of magnitude more complex. ChatGPT definitely has literally zero consciousness to speak of, but I’ve always thought that a complex enough AI could get there in theory


no because the human brain is far more complicated and we don’t know how it works


If you read the current literature on the science of consciousness, the reality is that the best we can do is use things like neuroscience and psychology to rule out a couple of previously prominent theories of how consciousness works. Beyond that, we’re still very much in the philosophy stage. I imagine we’ll eventually look back on a lot of the metaphysics currently being written and it will sound about as crazy as “obesity is caused by inhaling the smell of food”, a belief of miasma theory before germ theory was discovered.

That said, speaking purely in terms of brain structures, the math that most LLMs do is not nearly complex enough to model a human brain. The fact that we can optimize an LLM for its ability to trick our pattern recognition into perceiving it as conscious does not mean the underlying structures are the same. It’s similar to how film will always be a series of discrete pictures that blur together into motion when played fast enough. Film is extremely good at tricking our sight into perceiving motion; that doesn’t mean I’m actually watching a physical Death Star explode every time A New Hope plays.


I suppose I already figured that we can’t make a neural network equivalent to a human brain without a complete understanding of how our brains actually work. I also suppose there’s no way to say anything certain about the nature of consciousness yet.

So I guess I should ask this follow up question: Is it possible in theory to build a neural network equivalent to the absolutely tiny brain and nervous system any given insect has? Not to the point of consciousness given that’s probably unfalsifiable, also not just an AI trained to mimic an insect’s behavior, but a 1:1 reconstruction of the 100,000 or so brain cells comprising the cognition of relatively small insects? And not with an LLM, but instead some kind of new model purpose built for this kind of task. I feel as though that might be an easier problem to say something conclusive about.

The biggest issue I can think of with that idea is that the neurons in neural networks are only superficially similar to real, biological neurons. But that once again strikes me as a problem of complexity. Individual neurons are probably much easier to model somewhat accurately than an entire brain is, although still well out of reach. If we manage to determine this is possible, it would seemingly imply that someday in the future we could slowly work our way up the complexity gradient from insect cognition to mammalian cognition.


Is it possible in theory to build a neural network equivalent to the absolutely tiny brain and nervous system any given insect has?

IIRC it’s been tried, and they utterly failed. Part of the problem is that “the brain” isn’t just the central nervous system – a huge chunk of the relevant nerves are spread throughout the body and contribute to its function, but they’re deeply specialized and how they actually work is not yet well studied. In humans, a huge percentage of our nerve cells are actually in the gut, and another meaningful fraction is spread through the rest of the body. Basically, sensory input arrives at the brain already heavily preprocessed, and some amount of memory isn’t stored centrally. And that’s all before we even talk about how little we know about how neurons themselves work – the last time I read about this (a decade or so ago) there was significant debate over whether real processing even happened in the neurons, or whether it was all in the connective tissue, with the neurons basically acting like batteries. The CS model of a neuron is woefully lacking any real basis in biology, except by a poorly understood analogy.
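The gap this comment describes shows up even in toy code. Below is a rough, hypothetical sketch (not a biologically validated model, and all parameter values are made up for illustration): the standard “CS neuron” used in neural networks next to a leaky integrate-and-fire neuron, which spiking-network research uses and which is itself still a drastic simplification of real biology.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The standard 'CS neuron': a weighted sum squashed by a nonlinearity."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

def leaky_integrate_and_fire(current, steps, dt=1.0, tau=10.0, threshold=1.0):
    """A (still heavily simplified) spiking model: membrane voltage leaks
    toward rest, integrates the input current, and emits a spike whenever
    it crosses the threshold, then resets."""
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt * (-v / tau + current)  # leak term plus integrated input
        if v >= threshold:              # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes
```

The first function maps an input to one number in a single shot; the second produces a spike train whose timing depends on its history, and even that leaves out dendrites, neurotransmitters, glia, and everything else the comment alludes to.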

Deleted by creator

I saw a lot of this for the first time during the LK-99 saga when the only active discussion on replication efforts was on r/singularity. For the past solid year or two before LK-99, all they’d been talking about were LLMs and other AI models. Most of them were utterly convinced (and betting actual money on prediction sites!) that we’d have a general AI in like two years and “the singularity” by the end of the decade.

At a certain point it hit me that the place was a fucking cult. That’s when I stopped following the LK-99 story. This bunch of credulous rubes have taken a bunch of misinterpreted pop-science factoids and incoherently compiled them into a religion. I realized I can pretty safely disregard any hyped up piece of tech those people think will change the world.

Deleted by creator

That’s pretty much the current thinking in mainstream neuroscience, because neural networks vaguely, sort of mirror what we think at least some neurons in human brains do. The reality is nobody has any good evidence. It may be that if ChatGPT got ten jillion more nodes it’d be like a thinking brain, but it’s more likely there are hundreds more factors involved than just more neurons.

  • We don’t know all that much about how the human brain works.
  • We also don’t know all that much about how computer neural networks work (do not be deceived, half of what we do is throw random bullshit at a network and it works more often than it really should)
  • Therefore, the human brain and computer neural networks work exactly the same way.

Yeah, there are some ideas about there clearly being a difference, in that the brain isn’t feed-forward like these algorithms are. The book I Am a Strange Loop is a great read on the topic of consciousness. But I bet these models hit a massive plateau as they pump them full of bigger, shittier data. Who knows if we’ll ever achieve any actual parity between human and AI experience.


At some point they started incorporating recurrent connection topologies, but the model of the neuron itself hasn’t changed very much, and it’s a deeply simplistic analogy that to my knowledge hasn’t been connected to actual biology. I’ll be more impressed when they’re able to start emulating the structures and connective topologies actually found in real animals and producing a functioning replica. Until they can do that, there’s no hope of replicating anything like human cognition.
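The feed-forward vs. recurrent distinction these comments are circling can be sketched in a few lines. This is a toy illustration with made-up weights, not how production models are built: a feed-forward pass maps the same input to the same output every time, while a recurrent cell feeds its own state back in, so its output depends on history.

```python
import math

def feed_forward(x, w1, w2):
    """Feed-forward: activations flow one way, input -> hidden -> output."""
    h = math.tanh(w1 * x)
    return math.tanh(w2 * h)

def recurrent_step(x, h_prev, w_in, w_rec):
    """Recurrent: the previous hidden state loops back into the next
    update, giving the network a crude form of internal memory."""
    return math.tanh(w_in * x + w_rec * h_prev)

# Feed the same input twice: the recurrent cell answers differently the
# second time because its state changed, unlike a feed-forward pass.
h = 0.0
outs = []
for _ in range(2):
    outs.append(recurrent_step(0.5, h, w_in=1.0, w_rec=1.0))
    h = outs[-1]
```

Even with feedback loops added, both functions still rest on the same weighted-sum-plus-nonlinearity neuron the comment above criticizes.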


ChatGPT can always be used to create a new version of humanity


Reddit2 but it’s only bots?

Sounds like a step up to be honest


really low opinion of the human brain


the_dunk_tank

!the_dunk_tank@hexbear.net
