Philosopher doesn’t really understand what an LLM is
Yes, here is a good start: https://blog.miguelgrinberg.com/post/how-llms-work-explained-without-math
They are no longer the black boxes they were at the beginning. We know how to suppress or maximize features like agreeability, sweet-talking, and lying.
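For context on what “suppressing or maximizing features” means in practice: interpretability work often represents a trait as a direction in a model’s activation space, then shifts activations along that direction at inference time. Here’s a minimal toy sketch in NumPy — the “agreeableness” direction, the dimensions, and the scale are all made up for illustration, not any real model’s internals:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16

# Pretend this unit vector was extracted (hypothetically) as the
# direction in activation space that correlates with "agreeableness".
feature_dir = rng.standard_normal(d_model)
feature_dir /= np.linalg.norm(feature_dir)

def steer(activations: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Shift activations along a feature direction.

    alpha > 0 amplifies the feature, alpha < 0 suppresses it.
    """
    return activations + alpha * direction

# A toy batch of residual-stream activations from some layer.
acts = rng.standard_normal((4, d_model))

amplified = steer(acts, feature_dir, alpha=5.0)
suppressed = steer(acts, feature_dir, alpha=-5.0)

# Because feature_dir is a unit vector, the projection of each
# activation onto it moves by exactly alpha.
print(np.allclose((amplified - acts) @ feature_dir, 5.0))   # True
print(np.allclose((suppressed - acts) @ feature_dir, -5.0))  # True
```

Real steering experiments hook this kind of addition into a specific transformer layer during generation; the toy version just shows why the trait’s “strength” moves linearly with the scale you apply.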
Someone with resources could easily build an LLM that is convinced it is self-aware. No question this has been done many times behind closed doors.
I encourage everyone to try playing with LLMs to get experience for the future, but I can’t take the philosophy part of this seriously, knowing it’s a heavily programmed/limited LLM rather than a rawer, more unrefined model like Llama 3.
Our brains aren’t really black boxes either. A little bit of hormone variation leads to incredibly different behavior. Does a conscious system HAVE to be a black box?
The reason why I asked “do you” was because of a point I was trying to make: “do you HAVE to understand/not understand the functioning of a system to determine its consciousness?”
What even is consciousness? Do we have a strict scientific definition for it?
The point is, I really hate people here on Lemmy making definitive claims about anything AI related by simply dismissing it. Alex (the interrogator in the video) isn’t making any claims. He’s simply arguing with ChatGPT. It’s an argument I found to be quite interesting. Hence, I shared it.
These things are like arguing about whether or not a pet has feelings…
I’d say it’s far more likely for a cat or a dog to have complex emotions and thoughts than for the human-made LLM to actually be thinking. It seems to me like the naivety of humankind that we even think we might have created something with consciousness.
I’m in the camp that thinks the LLMs are by and large a huge grift (that can produce useful output for certain tasks) by virtue of extreme exaggeration of the facts, but maybe I’m wrong.
These things are like arguing about whether or not a pet has feelings…
Mhm. And what’s fundamentally wrong with such an argument?
I’d say it’s far more likely for a cat or a dog to have complex emotions and thoughts than for the human-made LLM to actually be thinking.
Why?
I’m in the camp that thinks the LLMs are by and large a huge grift (that can produce useful output for certain tasks) by virtue of extreme exaggeration of the facts, but maybe I’m wrong.
Why?
I too see how grifters use AI to further their scams. That’s the case with any new tech that pops up. This, however, doesn’t make LLMs not interesting.