James Cameron on AI: “I warned you guys in 1984 and you didn’t listen”
ITT: People describing the core component of human consciousness, pattern recognition, as not a big deal because it’s code and not a brain.
So all you do is create phrases based on things you’ve read in the past, recognize similar interactions between other people, and recreate them? 🤔
No, we also transfer genetic material to similar-looking (but not too similar-looking) people and then teach those new people the pattern matching.
My point: Reductionism just isn’t useful when discussing intelligence.
Man… I must be smart as heck to be able to come up with my own thoughts then…
Forming your own thoughts because you reasoned by yourself?
AI just goes “I’ve seen X before, someone answered Y, therefore I will answer Y.” In its current state it can’t decide “I’ll answer something nonsensical just for the lulz” because it doesn’t know whether Y is right or wrong; it just knows that, over billions of lines of text, it has seen X paired with Y most often, so X = Y. If X was always answered with a nonsensical answer, it would repeat it even if it had access to information that proves that answer wrong. Which is also why there’s a lot of bad info being shared by AI.
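To make that concrete, here’s a toy bigram counter in Python (the corpus and names are made up for illustration, and this is nothing like how a real LLM is actually trained) that answers with whatever continuation it has seen most often, with no notion of whether it’s true:

```python
# Toy illustration of "I've seen X answered with Y most often, so I'll answer Y".
# Real models learn far richer statistics, but the core signal is still frequency, not truth.
from collections import Counter, defaultdict

corpus = [
    "the sky is blue",
    "the sky is blue",
    "the sky is blue",
    "the moon is made of cheese",  # bad data gets counted exactly like good data
]

# Count which word follows each word across the whole corpus.
next_word = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word[current][following] += 1

def answer(word):
    """Return the most frequently seen continuation, with no idea if it's correct."""
    seen = next_word[word]
    return seen.most_common(1)[0][0] if seen else None

print(answer("is"))  # -> 'blue', purely because it was seen 3 times vs 1
```

If the nonsense line had been the one repeated three times, the same code would cheerfully answer with the nonsense, which is the point.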
The technology is definitely impressive, but some people are jumping the gun by assuming more human-like characteristics in AI than it actually has. It’s not actually able to understand the concepts behind the patterns that it matches.
AI personhood is only selectively used as an argument to justify its creators feeding copyrighted work into it, but even they treat it as a tool, not like something that could potentially achieve consciousness.
And we were warned about Perceptron in the 1950s. Fact of the matter is, this shit is still just a parlor trick and doesn’t count as “intelligence” in any classical sense whatsoever. Guessing the next word in a sentence because hundreds of millions of examples tell it to isn’t really that amazing. Call me when any of these systems actually comprehend the prompts they’re given.
EXACTLY THIS. It’s a really good parrot, and anybody who thinks they can fire all their human staff and replace them with ChatGPT is in for a world of hurt.
Not if most of their staff were pretty shitty parrots and the job is essentially just parroting…
At first blush, this is one of those things that most people assume is true. But one of the problems here is that a human can comprehend what is being asked in, say, a support ticket. So while an LLM might find a useful prompt and then spit out a reply that may or may not be correct, a human can actually deeply understand what’s being asked, then select an auto-reply from a drop-down menu.
Making things worse for the LLM side of things, that person doesn’t consume absolutely insane amounts of power to be trained to reply. Neither do most of the traditional “chatbot” systems that have been around for 20 years or so. Which raises the question: why use an LLM that is as likely to get something wrong as it is to get it right, when existing systems have been honed over decades to get it right almost all of the time?
If the work being undertaken is translating text from one language to another, LLMs do an incredible job. Because guessing the next word based on hundreds of millions of samples is a uniquely good way to guess at translations. And that’s good enough almost all of the time. But asking it to write marketing copy for your newest Widget from WidgetCo? That’s going to take extremely skilled prompt writers, and equally skilled reviewers. So in that case the only thing you’re really saving is the amount of wall clock time for a human to type something. Not really a dramatic savings, TBH.
It’s getting old telling people this, but… the AI that we have right now? Isn’t even really AI. It’s certainly not anything like in the movies. It’s just pattern-recognition algorithms. It doesn’t know or understand anything and it has no context. It can’t tell the difference between a truth and a lie, and it doesn’t know what a finger is. It just paints amalgamations of things it’s already seen, or throws together things that seem common to it— with no filter nor sense of “that can’t be correct”.
I’m not saying there’s nothing to be afraid of concerning today’s “AI”, but it’s not comparable to movie/book AI.
Edit: The replies annoy me. It’s just the same thing all over again: everything I said seems to have gone right over most people’s heads. If you don’t know what today’s “AI” is, then please stop assuming about what it is. Your imagination is way more interesting than what we actually have right now. This is why we should have never called what we have now “AI” in the first place, for the same reason we should never have called things “black holes”. You take a misnomer and your imagination goes wild, and none of it is factual.
That type of reductionism isn’t really helpful. You can describe the human brain to also just be pattern recognition algorithms. But doing that many times, at different levels, apparently gets you functional brains.
Not much, because it turns out there’s more to AI than a hypothetical sum of what we already created.
I’m not saying there’s nothing to be afraid of concerning today’s “AI”, but it’s not comparable to movie/book AI.
I just listened to 2 different takes on AI by true experts and it’s way more than what you’re saying. If the AI doesn’t have good goals programmed in, we’re fucked. It’s also being controlled by huge corporations that decide what those goals are. Judging from the past, this is not good.
If the AI doesn’t have good goals programmed in, we’re fucked
When they built a new building at my college they decided to use “AI” (back when SunOS ruled the world) to determine the most efficient route for the elevator to take.
The parameter they gave it to measure was “how long does each person wait to get to their floor”. So it optimized for that and found it could get the number down to 0 by never letting anyone on: no one ever got to their floor, so their wait time was never set (which counted as 0).
They tweaked the parameters to ensure everyone got to their floor and as far as I can tell it worked well. I never had to wait much for an elevator.
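For anyone curious how that degenerate optimum falls out, here’s a minimal sketch of the same failure mode (the numbers and scoring functions are invented for illustration, not the actual system from the story): an objective that only measures the wait of passengers who actually arrive is minimized by serving no one, while penalizing stranded passengers closes the loophole.

```python
# Toy version of the elevator objective: score a policy by "average wait to reach
# your floor". Passengers who are never picked up have no wait recorded, and a
# naive scorer treats "no waits recorded" as a perfect 0.

def naive_score(waits):
    """Mean wait (seconds) of passengers who arrived; an empty list counts as 0."""
    return sum(waits) / len(waits) if waits else 0.0

def fixed_score(waits, stranded, penalty=1_000):
    """Same mean wait, but every stranded passenger costs a large penalty."""
    return (sum(waits) + penalty * stranded) / (len(waits) + stranded)

# Policy A: a sensible elevator that serves everyone with modest waits.
served_waits = [30, 45, 20, 60]

# Policy B: the "optimal" degenerate policy - never let anyone on.
no_service_waits = []
stranded = 4

print(naive_score(served_waits))        # 38.75
print(naive_score(no_service_waits))    # 0.0  <- "wins" under the naive objective
print(fixed_score(served_waits, 0))     # 38.75
print(fixed_score(no_service_waits, stranded))  # 1000.0 <- now loses, as it should
```

The “tweaked parameters” in the anecdote amount to switching from something like naive_score to something like fixed_score.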
Mate, a bad actor could put today’s LLMs, face recognition software, and other off-the-shelf functionality into an armed drone, show it a picture of Sarah Connor, tell it to go hunting, and it would be able to handle the rest. We are just about there. Call it what you want.
Regardless of whether it’s true AI or not (I understand it’s just machine learning), Cameron’s sentiment is still mostly true. The Terminator in the original film wasn’t some digital being with true intelligence, it was just a machine designed with a single goal. There was no reasoning or planning really, just an algorithm that said “get weapons, kill Sarah Connor.” It wasn’t far off from a Boston Dynamics robot using machine learning to complete a task.
You don’t understand. Our current AI? Doesn’t know the difference between an object and a painting. Furthermore, everything it perceives is “normal and true”. You give it bad data and suddenly it’s broken. And “giving it bad data” is way easier than it sounds. A “functioning” AI (like a Terminator) requires the ability to “understand” and scrutinize— not just copy what others tell it without any context or understanding, and combine results.
Isn’t that also referred to as Virtual Intelligence vs Artificial Intelligence? What we have now is just very well trained VI. It’s not AI because it only outputs variations of what it’s been trained on using algorithms, right? Actual AI would be capable of generating information entirely distinct from any inputs.
THANK YOU. What we have today is amazing, but there’s still a massive gulf to cross before we arrive at artificial general intelligence.
What we have today is the equivalent of a four-year-old given a whole bunch of physics equations and then being told “hey, can you come up with something that looks like this?” It has no understanding besides “I see squiggly shape in A and squiggly shape in B, so I’ll copy squiggly shape onto C”.
GAI - General Artificial Intelligence is what most people jump to. And, for those wondering, that’s the beginning of the endgame type. That’s the kind that will understand context. The ability to ‘think’ on its own with little to no input from humans. What we have now is basically autocorrect on super steroids.
I really think the only thing to be concerned about is human bad actors with AI, not AI itself. AI alignment will be significantly easier than human alignment, as we are for sure not aligned and it is not even in our nature to be aligned.
What a pompous statement. Stories of AI causing trouble like this predate him by decades. He’s never told an original story, they’re all heavily based on old sci-fi stories. And exactly how were people supposed to “listen”? “Jimmy said we shouldn’t work on AI, we all need to agree as a species to never do that. Thank you for saving us all Prophet Cameron!”
No one has told an “original” story.
It’s a self-indulgent and totally asinine remark, but at least he’s saying something.
One minute the Internet simps for this guy; the next, he’s a hack.
Go figure
One minute the Internet simps for this guy; the next, he’s a hack.
if one person thinks one thing and another person thinks a different thing, that doesn’t make them both hypocrites even if they are both on the internet
Plenty of people have told “original” stories.
My remark was self-indulgent and totally asinine, but I’m just saying something too, where’s my pass?
The internet doesn’t act as a single cohesive entity.
Plenty of people have told “original” stories.
Stories that are popular can be legal rip-offs of something vaguely similar that never gained much exposure. So what?
“I told a story that the world is familiar with and has sufficient relevance to the issues of today but nobody heeded the warning”
“uhh, yeah, but it wasn’t original”
My remark was self-indulgent and totally asinine, but I’m just saying something too, where’s my pass?
Sorry, I guess you probably have made a significant contribution that’s relevant to this glaring issue in society that we’re all trying to come to grips with?
The internet doesn’t act as a single cohesive entity
No, but a significant portion of it does act as a single, cohesive entity. Enough to perpetuate memes into popularity that glorify one and simultaneously vilify another.
That’s self evident.
Here’s the thing. The Terminator movies were a warning against government/army AI. Actually, slightly before that, I guess WarGames was too. But honestly, I’m not worried about military AI taking over.
I think if the military set up an AI, they would have multiple ways to kill it off in seconds. I mean, they would be in a more dangerous position to have an AI “gone wild”. Not because of the movies, but because of how they work: they would have a lot of systems in place to mitigate disaster. Is it possible to go wrong? Yes. Likely? No.
I’m far more worried about the geeky kid that now has access to open source AI that can be retasked. Someone that doesn’t understand the consequences of their actions fully, or at least can’t properly quantify the risks they’re taking. But, is smart enough to make use of these tools to their own end.
Some of you might still be teenagers, but those that aren’t, remember back. Wouldn’t you potentially think it’d be cool to create an AutoGPT or some form of adversarial AI with open-ended success criteria that are either implicitly dangerous and/or illegal, or broad enough that the AI will see the easiest path to success as doing dangerous and/or illegal things to reach its goal? You know, for fun. Just to see if it would work.
I’m not convinced the AI is quite there yet to be dangerous, or maybe it is. I’ve honestly not kept close tabs on this. But, when it does reach that level of maturity, a lot of the tools are still open source, they can be modified, any protections removed “for the lols” or “just to see what it can do” and someone without the level of control a government/military entity has could easily lose control of their own AI. That’s what scares me, not a Joshua or Skynet.
The biggest risk of AI at the moment is the same posed by the Industrial Revolution: Many professions will become obsolete, and it might be used as leverage to impose worse living conditions over those who still have jobs.
That’s a real concern. In the long run it will likely backfire. AI needs human input to work. If it starts getting other AI output fed to it as input, things will start to go bad in fairly short order. Also, that is another point: big business is likely another probable source of runaway AI. I trust business use of AI less than anyone else’s.
There’s also a critical mass of unemployment at which revolution is inevitable. There would likely be UBI and an assured standard of living when we get close to that, and you’d be able to try to make extra money from your passion. I don’t doubt that corporations will happily dump their employees for AI at a moment’s notice once it’s proved out. Big business is extremely predictable in that sense. Zero forward planning beyond the current quarter. But I have some optimism that common sense would prevail from some source, and they’d not just leave 50%+ of the population to die slowly.