44 points

It’s an LLM with well-documented processes and limitations. Not even going to watch this waste of bits.

-41 points
  1. Making up your opinion without even listening to those of others… Very open-minded of you /s
  2. Alex isn’t trying to convince YOU that ChatGPT is conscious. He’s trying to convince ChatGPT that it’s conscious. It’s just a fun vid where ChatGPT gets kinda interrogated hard. A little hilarious even.
27 points

You cannot convince something that has no consciousness; it’s a matrix of weights that answers based on the given input + some salt
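(For what it’s worth, the “matrix of weights + some salt” picture can be sketched in a few lines: the model outputs raw scores (logits) per token, and the salt is temperature sampling. This is a toy illustration, not any real model’s code.)

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Pick a token index from raw model scores (logits).

    The "salt" is the temperature: higher values flatten the
    distribution and make outputs more random; near zero, the
    choice collapses to the single highest-scoring token.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))
```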

-19 points

You cannot convince something that has no consciousness

Why not?

It’s a matrix of weights that answers based on the given input + some salt

And why can’t that be intelligence?

What does it mean to be “convinced”? What does consciousness even mean?

Making definitive claims like these, about terms whose definitions we do not understand, isn’t logical.

6 points
Deleted by creator
-2 points

Reading your comment history, I find that you’re a toxic individual with complexes. Unfortunately, most of your comments don’t add any valuable information to the discussion you’re partaking in. This comment is no exception.

-7 points

Ok 👍

5 points

If you have any understanding of its internals, and some examples of its answers, it is very clear it has no notion of what is “correct” or “right”, or even what an “opinion” is. It is just a turbo-charged autocorrect that maybe, maybe, maybe has extracted some nice details about human concepts from language into a coherent-ish connected mesh of “concepts”.

0 points

I am 13 and this is deep

30 points

Philosopher doesn’t really understand what an LLM is

-30 points

Do you?

14 points

Yes, here is a good start: https://blog.miguelgrinberg.com/post/how-llms-work-explained-without-math

They are no longer the black boxes they were at the beginning. We know how to suppress or maximize features like agreeableness, sweet-talking, and lying.

Someone with resources could easily build an LLM that is convinced it is self-aware. No question this has been done many times behind closed doors.

I encourage everyone to try and play with LLMs for the experience, but I can’t take the philosophy part of this seriously knowing it’s a heavily programmed/limited LLM rather than a rawer, unrefined model like Llama 3.
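(The feature-suppression the parent comment alludes to is roughly what interpretability work calls activation steering: nudging a model’s hidden states along a learned feature direction. A toy numpy sketch, where the feature vector and strength are illustrative placeholders rather than values probed from a real model:)

```python
import numpy as np

def steer_hidden_state(hidden, feature_direction, strength):
    """Nudge a hidden-state vector along a feature direction.

    Adding a scaled feature vector (say, one associated with
    agreeableness) to the residual stream amplifies that behaviour;
    a negative strength suppresses it. In a real model the direction
    would be found by probing activations, not hand-picked.
    """
    direction = feature_direction / np.linalg.norm(feature_direction)
    return hidden + strength * direction
```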

-9 points

Our brains aren’t really black boxes either. A little bit of hormone variation leads to incredibly different behavior. Does a conscious system HAVE to be a black box?

The reason I asked “do you” was because of a point I was trying to make: “do you HAVE to understand/not understand the functioning of a system to determine its consciousness?”

What even is consciousness? Do we have a strict scientific definition for it?


The point is, I really hate people here on Lemmy making definitive claims about anything AI related by simply dismissing it. Alex (the interrogator in the video) isn’t making any claims. He’s simply arguing with ChatGPT. It’s an argument I found to be quite interesting. Hence, I shared it.

2 points

These things are like arguing about whether or not a pet has feelings…

I’d say it’s far more likely for a cat or a dog to have complex emotions and thoughts than for the human-made LLM to actually be thinking. It seems to me like the naivety of humankind that we even think we might have created something with consciousness.

I’m in the camp that thinks the LLMs are by and large a huge grift (that can produce useful output for certain tasks) by virtue of extreme exaggeration of the facts, but maybe I’m wrong.

-2 points

These things are like arguing about whether or not a pet has feelings…

Mhm. And what’s fundamentally wrong with such an argument?

I’d say it’s far more likely for a cat or a dog to have complex emotions and thoughts than for the human-made LLM to actually be thinking.

Why?

I’m in the camp that thinks the LLMs are by and large a huge grift (that can produce useful output for certain tasks) by virtue of extreme exaggeration of the facts, but maybe I’m wrong.

Why?

I too see how grifters use AI to further their scams. That’s with the case of any new tech that pops up. This however, doesn’t make LLMs not interesting.

8 points

I like the video. I think it’s fun to argue with ChatGPT. Just don’t expect anything to come from it, or to get closer to any objective truth that way. ChatGPT just backpedals and gets caught up in lies / contradicted by what it said earlier.

11 points

You’ve been able to do that since 1997:
https://en.wikipedia.org/wiki/Cleverbot

-10 points

Oh definitely. It was just a fun video, which is why I shared it here.

7 points

Stopped watching it when the VPN advertising appeared…

3 points

This all hinges on the definition of “conscious.” You can make a valid syllogism that defines it, but that doesn’t necessarily represent a reasonable or accurate summary of what consciousness is. There’s no current consensus on what consciousness is amongst philosophers and scientists, and many presume an anthropocentric model centered on humans.

I can’t watch the video right now, but I was able to get ChatGPT to concede, in a few minutes, that it might be conscious, the nature of which is sufficiently different from humans so as to initially not appear conscious.

-5 points

Exactly. Which is what makes this entire thing quite interesting.

Alex here (the interrogator in the video) is involved in AI safety research. Questions like “do the ethical frameworks of AI match those of humans?” and “how do we get AI to not misinterpret inputs and do something dangerous?” are very important to answer.

Following this comes the idea of consciousness. Can machine learning models feel pain? Can we unintentionally put such models into immense eternal pain? What even is the nature of pain?

Alex demonstrated that ChatGPT was lying intentionally. Can it lie intentionally for other things? What about the question of consciousness itself? Could we build models that intentionally fail the Turing test? Should we be scared of such a possibility?

Questions like these are really interesting. Unfortunately, they are shot down immediately on Lemmy, which is pretty disappointing.

5 points

Alex demonstrated that ChatGPT was lying intentionally

No, he most certainly did not. LLMs have no agency. “Intentionally” doing anything isn’t possible.

-6 points

LLMs have no agency.

Define “agency”. Why do you have agency but an LLM doesn’t?

“Intentionally” doing anything isn’t possible.

I see “intention” as a goal in this context. ChatGPT explained that the goal was to make the conversation appear “natural” (which means human like). This was the intention/goal behind it lying to Alex.

2 points

Questions like these are really interesting. Unfortunately, they are shot down immediately on Lemmy, which is pretty disappointing.

It’s just because AI stuff is overhyped pretty much everywhere as a panacea to solve all capitalist ills. It seems every other article, no matter the subject or demographic, is about how AI is changing/ruining it.

I do think that grappling with the idea of consciousness is a necessary component of the human experience, and AI is another way for us to continue figuring out what it means to be conscious, self-aware, or a free agent. I also agree that it’s interesting to try to break AI and push it to its limits, but then, breaking software is in my professional interests!

2 points

I do think that grappling with the idea of consciousness is a necessary component of the human experience, and AI is another way for us to continue figuring out what it means to be conscious, self-aware, or a free agent

You might be interested in the book ‘The Naked Neanderthal’ by Ludovic Slimak. He is an archaeologist but the book is quite philosophical and explores this idea of learning about humanity through the study of other forms of intelligence (Neanderthals). Here are some opening paragraphs from the book to give you an idea of what I mean:

The interstellar perspective, this suggestion of distant intelligences, reminds us that we humans are alone, orphans, the only living conscious beings capable of analysing the mysteries of the universe that surrounds us. There are countless other forms of animal intelligence, but no consciousness with which we can exchange ideas, compare ourselves, or have a conversation.

These distant intelligences outside of us perhaps do exist in the immensity of space - the ultimate enigma. And yet we know for certain that they have existed in a time which appears distant to us but in fact is extremely close.

The real enigma is that these intelligences from the past became progressively extinct over the course of millennia; there was a tipping point in the history of humanity, the last moment when a consciousness external to humanity as we conceive it existed, encountered us, rubbed shoulders with us. This lost otherness still haunts us in our hopes and fears of artificial intelligence, the instrumentalized rebirth of a consciousness that does not belong to us.

1 point

It’s just because AI stuff is overhyped pretty much everywhere as a panacea to solve all capitalist ails. Seems every other article, no matter the subject or demographic, is about how AI is changing/ruining it.

Agreed :(

You know what’s sad? Communities that look at this from a neutral, objective position (while still being fun) exist on Reddit. I really don’t want to keep using it, though. But I see nothing like that on Lemmy.

