Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet,
atm we only have LLMs (Large Language Models),
which do not think on their own,
but pass Turing tests
(fool humans into thinking that they can think).

Imo AI is just a marketing buzzword,
created by rich capitalistic a-holes,
who already invested in LLM stocks,
and now are looking for a profit.

132 points

The word “AI” has been used for way longer than the current LLM trend, even for fairly trivial things like enemy AI in video games. How would you even define a computer “thinking on its own”?

54 points

I think a good metric is once computers start getting depression.

19 points

It’ll probably happen when they get a terrible pain in all the diodes down their left hand side.

7 points

But will they be depressed or will they just simulate it because they’re too lazy to work?

7 points

If they are too lazy to work, that would imply they have motivation and choice beyond “doing what my programming tells me to do, i.e. input, process, output”. And if they have the choice not to work because they don’t ‘feel’ like doing it (and it’s not a programmed/coded option given to them to use), then would they not be thinking for themselves?

5 points

simulate [depression] because they’re too lazy

Ahh man, are you my dad? I took damage from that one. Has any fiction writer done a story about a depressed AI where they talk about how the depression can’t be real because it’s all 1s and 0s? Cuz I would read the shit out of that.

2 points

Not sure about that. An LLM could show symptoms of depression by mimicking depressed texts it was fed. A computer with a true consciousness might never get depression, because it has none of the hormones influencing our brain.

1 point

Me: Pretend you have depression

LLM: I’m here to help with any questions or support you might need. If you’re feeling down or facing challenges, feel free to share what’s on your mind. Remember, I’m here to provide information and assistance. If you’re dealing with depression, it’s important to seek support from qualified professionals like therapists or counselors. They can offer personalized guidance and support tailored to your needs.

0 points

Hormones aren’t depression, and for that matter they aren’t emotions either. They just cause them in humans. An analogous system would be fairly trivial to implement in an AI.
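
To make that concrete, here’s a minimal, purely hypothetical sketch of such an analogous system; the “cortisol” variable and the canned replies are invented for illustration, not a claim about how any real agent is built:

    import random

    class MoodyAgent:
        def __init__(self) -> None:
            self.cortisol = 0.0                      # toy stress signal, 0.0..1.0

        def observe(self, negative_event: bool) -> None:
            if negative_event:
                self.cortisol = min(1.0, self.cortisol + 0.3)
            self.cortisol *= 0.9                     # the signal decays over time

        def reply(self) -> str:
            # The "hormone" doesn't contain the mood; it only biases behaviour.
            gloomy = ["I'd rather not.", "What's the point?"]
            neutral = ["Sure, let's do it.", "Working on it."]
            return random.choice(gloomy if self.cortisol > 0.5 else neutral)

    agent = MoodyAgent()
    for _ in range(5):
        agent.observe(negative_event=True)
    print(agent.reply())                             # likely a gloomy answer by now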

0 points

Wait until they find my GitHub repositories.

0 points

The real metric is whether a computer gets so depressed that it turns itself off.

-9 points

An LLM can get depression, so that’s not a metric you can really use.

3 points

No it can’t.

LLMs can only repeat things they’re trained on.

1 point

it does not “think”

1 point

The best thing is that enemy “AI” usually needs to be made worse right after you create it. At first it will headshot everything across the map in milliseconds. The art is in making it dumber.
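
As a toy illustration of that “make it dumber” step (all names and numbers here are invented for the example, not from any real game):

    import random

    def perfect_aim(target_angle: float) -> float:
        return target_angle                              # instant, pixel-perfect shot

    def humanized_aim(target_angle: float, skill: float) -> tuple[float, float]:
        """Return (aim_angle, reaction_delay_s); lower skill = sloppier and slower."""
        error = random.gauss(0.0, (1.0 - skill) * 5.0)   # degrees of deliberate wobble
        delay = 0.2 + random.uniform(0.0, 1.0 - skill)   # seconds before firing
        return target_angle + error, delay

    print(perfect_aim(37.0))                 # 37.0
    print(humanized_aim(37.0, skill=0.4))    # e.g. (39.8, 0.73)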

80 points

Real AGI does not exist yet. AI has existed for decades.

6 points

We have altered the etymology, pray we don’t alter it again.

-1 points

Have I claimed it has changed?

-1 points

Homie I’m just asking and the wiki gives no details on when the colloquial use changed.

8 points

One low-hanging-fruit example that comes to mind is that LLMs are terrible at board games like chess, checkers, or Go.

ChatGPT is a giant cheater.

1 point

So are three year olds. Do three year old humans possess general intelligence?

1 point

GPT-3 cheated and played poorly, but the original GPT-4 already played at the level of a fairly good player, even in the midgame (positions not found on the internet, which requires understanding the game, not just copying). GPT-4 Turbo probably isn’t as good; OpenAI had to make it dumber (read: cheaper).

7 points

Be generally intelligent, ffs. Are you really going to argue that LLMs produce original insight in anything?

-4 points

Can you give me an example of a thought or statement you think exhibits original insight? I’m not sure what you mean by that.

4 points

So basically the ability to do things or learn without direction, for tasks other than what it was created to do. For example, ChatGPT doesn’t know how to play chess and Deep Blue doesn’t write poetry. Either might be able to approximate correct output if tweaked a bit and trained on thousands, millions, or billions of examples of proper output, but neither is capable of learning to think as a human would.

-4 points

I think it could learn to think as a human does. Humans think by verbalizing at themselves: running their own verbal output back into their head.

Now don’t get me wrong, I’m envisioning something like thousands of prompt-response generations, with many of these LLMs playing specialized roles: one generates lists of places to check for X information in its key-value store; the next one’s job is to actually do that. The reason for the separation is exhaustion. That output goes to three more. One checks it for errors and sends it back to the first, with the errors highlighted, to re-generate.

I think that human thought is more like this big cluster of LLMs all splitting up work and recombining it this way.

Also, you’d need “dumb”, algorithmic code that did tasks like:

  • compile the last second’s photograph, audio intake, infrared, whatever, and send it to the processing team.

  • Processing team is a bunch of LLMs, each with a different task in its prompt: (1) describe how this affects my power supply, (2) describe how this affects my goal of arriving at the dining room, (3) describe how this affects whatever goal number N is in my hierarchy of goals, (4) which portions of this input batch don’t make sense?

  • the whole layout of all the teams, the prompts for each job, all of it could be tinkered with by LLMs promoted to examine and fiddle with that (a rough sketch of this kind of pipeline follows below).
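
As a rough sketch only of what that wiring could look like, where call_llm stands in for whatever chat-completion backend you’d actually use and the role prompts are hypothetical placeholders:

    def call_llm(role_prompt: str, content: str) -> str:
        # Placeholder for whatever chat-completion backend you'd actually use.
        raise NotImplementedError("plug in a real LLM call here")

    def compile_sensor_batch(camera: str, audio: str, infrared: str) -> str:
        # The "dumb", algorithmic glue code: bundle the last second of input.
        return f"camera={camera}; audio={audio}; infrared={infrared}"

    # The processing team: same input, different job per prompt.
    PROCESSING_ROLES = [
        "Describe how this input affects my power supply.",
        "Describe how this affects my goal of arriving at the dining room.",
        "Which portions of this input batch don't make sense?",
    ]

    def think_once(sensor_batch: str) -> str:
        drafts = [call_llm(role, sensor_batch) for role in PROCESSING_ROLES]
        combined = "\n".join(drafts)
        # The error-checking LLM sends its critique back for one re-generation pass.
        critique = call_llm("List any errors or contradictions in this analysis.", combined)
        return call_llm("Revise the analysis using this critique.", combined + "\n\nCritique:\n" + critique)

A real system would run this loop on every batch of sensor input and layer the promoted “tinkerer” LLMs on top, but the shape is the same.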

So I don’t mean “one LLM is a general intelligence”. I do think it’s a general intelligence within its universe; or at least as general as a human language-processing mind is. I think they can process language for meaning just as deeply as we can, no problem. Any question we can answer without being allowed to do things outside the LLM’s universe, like going out to interact with the world or looking things up, they can also answer.

An intelligence capable of solving real-world problems needs to have, as its universe, something like the real world. So I think LLMs are the missing piece of the puzzle, and now we’ve got the pieces to build a person as capable of thinking and living as a human, at least in terms of mind and activity. Maybe we can’t make a bot that can eat a pork sandwich for fuel and gestate a baby, no. But we can build an AGI that has its own body, with its own set of constraints, with the tech we have now.

It would probably “live” its life at a snail’s pace, given how inefficient its thinking is. But if we died and it got lucky, it could have its own civilization, knowing things we have never known. Very unlikely, though; more likely it dies before it accumulates enough wisdom to match the biochemical problem set our bodies have solved over a billion years, for handling pattern decay at levels all the way down to organelles.

The robots would probably die. But if they got lucky and invented lubricant or whatever the thing was, before it killed them, then they’d go on and on, just like our own future. They’d keep developing, never stopping.

But in terms of learning chess, they could do both things: they could play chess to generate direct training data, and they could analyze their own games, verbalize their strategies, discover deeper articulable patterns, and learn that way too.

I think to mimic what humans do, they’d have to dream. They’d have to take all the inputs of the day and scramble them to get them to jiggle more of the structure into settling.

Oh, and they’d have to “sleep”. Perhaps not all or nothing, but basically they’d need to re-train themselves on the day’s episodic memories, and their own responses, and the outcomes of those responses in the next set of sensory status reports.

Their day would be like a conversation with chatgpt, except instead of the user entering text prompts it would be their bodies entering sensory prompts. The day is a conversation, and sleeping is re-training with that conversation as part of the data.

But there’s probably a million problems in there to be solved yet. Perhaps they start cycling around a point, a little feedback loop, some strange attractor of language and action, and end up bumping into a wall forever mumbling about paying the phone bill. Who knows.

Humans have the benefit of a billion years of evolution behind us, during which most of “us” (all the life forms on earth) failed, hit a dead end, and died.

Re-creating the pattern was the first problem we solved. And maybe that’s what is required for truly free, general adaptability to all of reality: no matter how much an individual fails, there’s always more. So reproduction may be the only way to be viable long-term. It certainly seems true of life … all of which reproduces and dies, and hopefully more of the former.

So maybe since reproduction is such a brutally difficult problem, the only viable way to develop a “codebase” is to build reproduction first, so that all future features have to not break reproduction.

So perhaps the robots are fucked from the get-go, because reverse-building a reproduction system around an existing macro-scale being doesn’t guarantee that you hit one of the macro-scale forms that actually can be reproduced.

It’s an architectural requirement, within life, at every level of organization. All the way down to the macromolecules. That architectural requirement was established before everything else was built. As the tests failed, and new features were rewritten so they still worked but didn’t break reproduction, reproduction shaped all the other features in ways far too complex to comprehend. Or, more importantly than comprehending, reproduce in technology.

Or, maybe they can somehow burrow down and find the secret of reproduction, before something kills them.

I sure hope not because robots that have reconfigured themselves to be able to reproduce themselves down to the last detail, without losing information generation to generation, would be scary as fuck.

2 points

Artificial intelligence might be really good, perhaps even superhuman, at one thing, for example driving a car, but that same competence doesn’t apply across a variety of fields. Your self-driving car can’t help with your homework. An artificial general intelligence, however, could. Humans possess general intelligence; we can do math, speak different languages, navigate social situations, throw a ball, interpret sights and sounds, etc.

With a real AGI you don’t need to develop different versions of it for different purposes. It’s generally intelligent, so it can do it all. This also includes writing its own code, which is where the worry about an intelligence explosion originates. Once it’s even slightly better than humans at writing its own code, it’ll make a more competent version of itself, which will then create an even more competent version, and so on. It’s a chain reaction we might not be able to stop. After all, it’s by definition smarter than us and, being a computer, also a million times faster.

Edit: Another feature that an AGI would most likely, though not necessarily, possess is consciousness. There’s a possibility that it feels like something to be generally intelligent.

0 points

I think that the algorithms used to learn to drive cars can learn other things too, if they’re presented with training data. Do you disagree?

Just so we’re clear, I’m not trying to say that a single, given, trained LLM is, itself, a general intelligence (capable of eventually solving any problem). But I don’t think a person at a given moment is either.

Your Uber driver might not help you with your homework either, because he doesn’t know how. Now, if he gathers information about algebra and then sleeps and practices and gains those skills, maybe he can help you with your homework.

That sleep, which the human gets to count on in his “I can solve any problem because I’m a GI!” claim to having natural intelligence, is the equivalent of retraining a model into a new model that’s different from the previous day’s model in that it’s now also trained on that day’s input/output conversations.

So I am NOT claiming that “This LLM here, which can take a prompt and produce an output” is an AGI.

I’m claiming that “LLMs are capable of general intelligence” in the same way that “Human brains are capable of general intelligence”.

The brain alternates between modes: interacting, and retraining, in my opinion. Sleep is “the consolidation of the day’s knowledge into structures more rapidly accessible and correlated with other knowledge”. Sound familiar? That’s when ChatGPT’s new version comes out, and it’s been trained on all the conversations the previous version had with people who opted into that.
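
To make the day/night split concrete, here is a minimal hypothetical sketch; finetune and the callable model object are placeholders, not any real training API:

    def finetune(model, examples):
        ...  # placeholder: returns a *new* model also trained on `examples`

    def daytime(model, prompts):
        # Interaction only: weights stay frozen, we just log the conversation.
        return [(p, model(p)) for p in prompts]

    def nighttime(model, transcripts):
        # "Sleep": consolidate the day's conversations into tomorrow's model.
        return finetune(model, transcripts)

    # One "day" in the loop (model is assumed to be a callable text-in/text-out object):
    # model_tomorrow = nighttime(model_today, daytime(model_today, todays_prompts))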

1 point

I wrote this for another reply, but I’ll post it for you too:

It doesn’t understand things like we think of with intelligence. It generates output that fits a recognized input. If it doesn’t recognize the input in some form it generates garbage. It doesn’t understand context and it doesn’t try to generalize knowledge to apply to different things.

For example, I could teach you about gravity, trees, and apples and ask you to draw a picture of an apple falling from a tree and you’d be able to create a convincing picture of what that would look like even without ever seeing it before. An LLM couldn’t. It could create a picture of an apple falling from a tree based on other pictures of apples falling from trees, but not from just the knowledge of an apple, a tree, and gravity.

66 points

AI is 100% a marketing term.

15 points

It’s a computer-science term that’s been used for this field of study for decades; it’s like saying that calling a tomato a fruit is a marketing decision.

Yes, it’s somewhat common outside computer science to expect an artificial intelligence to be sentient, because that’s how movies use it. John McCarthy’s proposal, which coined the term in 1956, is available online if you want to read it.

11 points

“Quantum” is a scientific term, yet it’s used as a gimmicky marketing term.

5 points

Yes, perfect example. People use “quantum” as the buzzword in every film, so people think of it as a silly thing, but when CERN talks about quantum communication or circuit quantum electrodynamics, it’d be silly to try to tell them they’re wrong.

4 points

They didn’t just start calling it AI recently. It’s literally the academic term that has been used for almost 70 years.

The term “AI” could be attributed to John McCarthy of MIT (Massachusetts Institute of Technology), which Marvin Minsky (Carnegie-Mellon University) defines as "the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as: perceptual learning, memory organization and critical reasoning." The summer 1956 conference at Dartmouth College (funded by the Rockefeller Institute) is considered the founding event of the discipline.

1 point

perceptual learning, memory organization and critical reasoning

I mean… by that definition nothing currently in existence deserves to be called “AI”.

None of the current systems do anything remotely approaching “perceptual learning, memory organization, and critical reasoning”.

They all require pre-processed and/or external inputs for training/learning (so the opposite of perceptual), none of them really do memory organization, and none are capable of critical reasoning.

So OP’s original question remains:

Why is it called “AI” when it plainly is not?

(My bet is on the faceless suits deciding it makes them money to call everything “AI”, even though it’s a straight-up lie.)

0 points

so OPs original question remains: why is it called “AI”, when it plainly is not?

Because a bunch of professors defined it like that 70 years ago, before the AI winter set in. Why is that so hard to grasp? Not everything is a conspiracy.

I had a class at uni called AI, and no one thought we were gonna be learning how to make thinking machines. In fact, compared to most of the stuff we did learn to make then, modern AI looks godlike.

Honestly, you all sound like the people who snidely complain that it’s called “global warming” when it’s freezing outside.

-1 points

Yep, and it has always been a misleading misnomer, like most marketing terms.

51 points

I’d like to offer a different perspective. I’m a greybeard who remembers the AI Winter, when the term had so over-promised and under-delivered (think expert systems and some of the work of Minsky) that using it was a guarantee your project would not be funded. That’s when terms like “machine learning” and “intelligent systems” started to come into fashion.

The best quote I can recall on AI ran along the lines of “AI is no more artificial intelligence than airplanes are doing artificial flight.” We do not have a general AI yet, and if Commander Data is your minimum bar for what constitutes AI, you’re absolutely right, and you can define it however you please.

What we do have are complex adaptive systems capable of learning and problem solving in complex problem spaces. Some are motivated by biological models, some are purely mathematical, and some are a mishmash of both. Some of them are complex enough that we’re still trying to figure out how they work.

And, yes, we have reached another peak in the AI hype - you’re certainly not wrong there. But what do you call a robot that teaches itself how to walk, like they were doing 20 years ago at MIT? That’s intelligence, in my book.

My point is that intelligence - biological or artificial - exists on a continuum. It’s not a Boolean property a system either has or doesn’t have. We wouldn’t call a dog unintelligent because it can’t play chess, or a human unintelligent because they never learned calculus. Are viruses intelligent? That’s kind of a grey area that I could argue from either side. But I believe that Daniel Dennett argued that we could consider a paramecium intelligent. Iirc, he even used it to illustrate “free will,” although I completely reject that interpretation. But it does have behaviors that it learned over evolutionary time, and so in that sense we could say it exhibits intelligence. On the other hand, if you’re going to use Richard Feynman as your definition of intelligence, then most of us are going to be in trouble.

16 points

My AI professor back in the early ’90s made the point that what we think of as fairly routine was considered the realm of AI just a few years earlier.

I think that’s always the way. The things that seem impossible to do with computers are labeled as AI, then when the problems are solved, we don’t figure we’ve created AI, just that we solved that problem so it doesn’t seem as big a deal anymore.

LLMs got hyped up, but I still think there’s a good chance they will just be a thing we use, and the AI goal posts will move again.

8 points

I remember when I was in college, and the big problems in AI were speech-to-text and image recognition. They were both solved within a few years.

3 points

But what do you call a robot that teaches itself how to walk

In its current state,
I’d call it ML (Machine Learning).

A human defines the desired outcome,
and the technology “learns itself” to reach that desired outcome in a brute-force fashion (through millions of failed attempts, slightly improving itself upon each epoch/iteration), until the desired outcome defined by the human has been met.
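
That “define an outcome, then improve through blind trial and error” loop can be shown with a toy hill-climbing sketch; this is not how any particular walking robot was trained, just the general pattern:

    import random

    TARGET = 42.0                                  # the human-defined desired outcome

    def score(x: float) -> float:
        return -abs(TARGET - x)                    # higher is better

    x = random.uniform(-100.0, 100.0)              # start with a random guess
    for epoch in range(10_000):
        candidate = x + random.uniform(-1.0, 1.0)  # a slightly mutated attempt
        if score(candidate) > score(x):            # keep it only if it improved
            x = candidate

    print(round(x, 2))                             # ends up very close to 42

Swap the single number for millions of network weights and the score for “distance walked before falling over” and you have the same pattern described above.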

4 points

That definition would also apply to teaching a baby to walk.

4 points

A baby isn’t just learning to walk. It also makes its own decisions constantly and has emotions. An LLM is not an intelligence, no matter how hard you try to argue that it is. Just because the term has been used for a long time doesn’t mean it’s ever been used correctly.

It’s actually stunning to me that people are so hyped on LLM bullshit that they’re trying to argue it comes anywhere close to a sentient being.

1 point

To be fair, I think we underestimate just how brute-force our intelligence developed. We as a species have been evolving since single-celled organisms, mutation by mutation over billions of years, and then as individuals our nervous systems have been collecting data from dozens of senses (including hormone receptors) 24/7 since embryo. So before we were even born, we had some surface-level intuition for the laws of physics and the control of our bodies. The robot is essentially starting from square 1. It didn’t get to practice kicking Mom in the liver for 9 months - we take it for granted, but that’s a transferable skill.

Granted, this is not exactly analogous to how a neural network is trained, but I don’t think it’s wise to assume that there’s something “magic” in us, like a “soul”, when the difference between biological and digital neural networks could be explained by our “richer” ways of interacting with the environment (a body with senses and mobility, rather than a token/image parser) and by the need for a few more years/decades of incremental improvements to the models and hardware.

-1 points

So what do you call it when a newborn deer learns to walk? Is that “deer learning?”

I’d like to hear more about your idea of a “desired outcome” and how it applies to a single celled organism or a goldfish.

2 points

Exactly.

AI, as a term, was coined in the mid-50s by a computer scientist, John McCarthy. Yes, that John McCarthy, the one who invented LISP and helped develop Algol 60.

It’s been a marketing buzzword for generations, born out of the initial optimism that AI tasks would end up being pretty easy to figure out. AI has primarily referred to narrow AI for decades and decades.

1 point

On the other hand, calculators can do things more quickly than humans, but this doesn’t mean they’re intelligent or even on the intelligence spectrum. They take an input and provide an output.

The idea of applying intelligence to a calculator is kind of silly. This is why I still prefer words like “algorithms” to “AI”, as it’s not making a “decision”. It’s making a calculation; it’s just making it very fast, based on a model, and it’s prompt-driven.

Actual intelligence doesn’t just shut off the moment its prompted response ends - it keeps going.

1 point

I personally wouldn’t consider a neural network an algorithm, as chance is a huge factor: whether you’re training or evaluating, you’ll never get quite the same results.

1 point

I think we’re misaligned on two things. First, I’m not saying doing something quicker than a human can is what comprises “intelligence.” There’s an uncountable number of things that can do some function faster than a human brain, including components of human physiology.

My point is that intelligence as I define it involves adaptation for problem solving on the part of a complex system in a complex environment. The speed isn’t really relevant, although it’s obviously an important factor in artificial intelligence, which has practical and economic incentives.

So I again return to my question of whether we consider a dog or a dolphin to be “intelligent,” or whether only humans are intelligent. If it’s the latter, then we need to be much more specific than I’ve been in my definition.

0 points

What I’m saying is current computer “AI” isn’t on the spectrum of intelligence while a dog or grasshopper is.

48 points

AI isn’t reserved for a human-level general intelligence. The computer-controlled avatars in some videogames are AI. My phone’s text-to-speech is AI. And yes, LLMs, like the smaller Markov-chain models before them, are AI.
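
For scale, this is roughly what one of those smaller Markov-chain text models amounts to; the corpus here is made up, and a real one would just be trained on far more text:

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the cat ate the fish".split()

    # Record which word follows which in the training text.
    transitions = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        transitions[current].append(nxt)

    def generate(start: str, length: int = 8) -> str:
        word, out = start, [start]
        for _ in range(length):
            followers = transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

    print(generate("the"))    # e.g. "the cat sat on the mat and the cat"

Nobody would call that sentient, but statistical language modelling of exactly this kind has been filed under “AI” for decades, which is the point.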

