None of it is even AI. Predicting desired text output isn’t intelligence.
Language is a method for encoding human thought. Mastery of language is mastery of human thought. The problem is, predictive text heuristics don’t have mastery of language, and they cannot predict desired output.
I thought this was an insightful comment. Language is a kind of ‘view’ (in the model-view-controller sense) of intelligence. It signifies a thought or meme. But language is imprecise and flawed. It’s a poor representation, since it can be misinterpreted or distorted. I wonder if language-based AIs are inherently flawed, too.
Edit: grammar, ironically
Many languages lack words for certain concepts. For example, English lacks a word for the joy you feel at another’s pain; you have to go to Germany to name Schadenfreude. However, English is perfectly capable of describing what schadenfreude is. I sometimes become nonverbal due to my autism. In the moment, there is no way I could possibly describe what I am feeling. But that is a limitation of my temporarily panicked mind, not a limitation of language itself. Sufficiently gifted writers and poets have described things once thought indescribable. I believe language can describe anything, given a book long enough and a writer skilled enough.
“Mastery of language is mastery of human thought” is easy to prove false.
The current batch of AIs is an excellent data point. These things are very good at language, and they still can’t even count.
The average celebrity provides evidence that it is false. People who excel at science often suck at talking, and vice versa.
We didn’t talk our way to the moon.
Even if these LLMs master language, that’s not evidence that they’re doing any actual thinking yet.
I do agree, but on the other hand…
What does your brain do while reading and writing, if not predict patterns in text that seem correct and relevant based on the data you have seen in the past?
I’ve seen this argument so many times, and it makes zero sense to me. I don’t think by predicting the next word; I think by imagining things both physical and metaphysical, basically running a world simulation in my head. I don’t think “I just said ‘predicting’, what’s the next likely word to come after it?” That’s not even remotely similar to how I think at all.
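For concreteness, here’s what bare next-word prediction looks like with everything else stripped away: a toy bigram model (a deliberately dumb sketch, nothing like a real LLM, just the core idea of “pick a likely next word”).

```python
import random
from collections import defaultdict

# Toy bigram "language model": record which word followed which,
# then generate text by sampling a recorded follower at each step.
corpus = "the cat sat on the mat and the cat slept".split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def predict_next(word: str) -> str:
    # Sample among the words that followed `word` in the training text.
    return random.choice(next_words[word]) if word in next_words else "<end>"

word = "the"
for _ in range(5):
    print(word, end=" ")
    word = predict_next(word)
```

Whatever your brain is doing while you write, it’s clearly doing a lot more than this loop.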
Always remember that it will only get better, never worse.
They said “computers will never do x” and now x is assumed.
It usually also gets worse while it gets better.
But I take your point. This stuff will continue to advance.
But the important argument today isn’t over what it can be, it’s an attempt to clarify for confused people.
While the current LLMs are an important and exciting step, they’re also largely just a math trick, and they are not a sign that thinking machines are almost here.
Some people are being fooled into thinking general artificial intelligence has already arrived.
If we give these unthinking LLMs human rights today, we expand corporate control over us all.
These LLMs can’t yet take a useful ethical stand, so we shouldn’t rely on them that way if we don’t want things to go really badly.
There’s a difference between “this is AI that could be better!” and “this could one day turn into AI.”
Everyone is calling their algorithms AI because it’s a buzzword that trends well.
AI is whatever machines can’t do yet.
Playing chess was the sign of AI, until a computer beat Kasparov; then it suddenly wasn’t AI anymore. Then it was Go, then classifying images, then having a conversation, but whenever each of these was achieved, it stopped being AI and became “machine learning” or “a model”.
We never called if statements AI until the last year or so. It’s all marketing buzzwords. It has to be more than just “it makes a decision” to be AI, or else rivers would be AI because they “make a decision” on which path to take to the ocean based on which dirt is in the way.
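To put that joke in code, here’s a trivial “decision maker” in the river’s spirit (purely illustrative; the function is made up):

```python
# A program that "makes a decision" in exactly the sense a river
# "decides" its path to the ocean. Nobody called this AI until
# the marketing push started.
def river(dirt_in_the_way: bool) -> str:
    if dirt_in_the_way:
        return "flow around it"
    return "flow straight to the ocean"

print(river(dirt_in_the_way=True))
```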
Yeah, and highlighting that difference is what is important right now.
This is the first AI to masquerade as general artificial intelligence and people are getting confused.
This current thing doesn’t have or need rights or ethics. It can’t produce new intellectual property. It’s not going to save Timmy when he falls into the well. We’re going to need a new Timmy before all this is over.
At this point I just interpret AI to mean "we have lots of SELECT statements and INNER JOINs".
Pick a number from 1 to 2^63 - 1 ~= 9.2 x 10^18, randomly. See, AI is easy. /s
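That joke in runnable form, for whatever it’s worth (a deliberately silly sketch; the name is made up):

```python
import random

# The entirety of this "AI": pick a random number from 1 to 2^63 - 1. /s
def definitely_an_ai() -> int:
    return random.randint(1, 2**63 - 1)

print(definitely_an_ai())
```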
There’s even rumours that the next version of Windows is going to inject a bunch of AI buzzword stuff into the operating system. Like, how is that going to make the user experience any more intuitive? Sounds like you’re just going to have to fight an overconfident ChatGPT wannabe that thinks it knows what you want to do better than you do, every time you try opening a program or saving a document.
“There’s even rumours”
Like, I know we all love to hate Microsoft here but can we stop with the random nonsense? That’s not what’s happening, at all.
Windows Copilot just popped up on my Windows 11 machine. Its disclaimer said it could provide surprising results. When I asked what kind of surprising results I could expect, it responded that it wasn’t comfortable talking about that subject and ended the conversation.
They brought Cortana back in Halo Infinite, and they’re gonna bring Cortana back for Windows Infinite.
I’m actually pleasantly surprised by what ChatGPT can generate for me. It doesn’t usually take care of the detailed parts, but I was able to have it spin up an Android application skeleton that I could throw a couple of actions onto when I needed to test something.
I’ve seen it generate very useful YAML and such. I still have to do a fair amount of work to make it behave how I need, but I really enjoy the ability to skip the filler bullshit in my work.
Unlike the previous bullshit they threw everywhere (3D screens, NFTs, metaverse), AI bullshit seems very likely to stay, as it is actually proving useful, if with questionable results… Or rather, questionable everything.
If only it were AI and not just LLMs, machine learning, or plain algorithms. But yeah, let’s call everything AI from here on. NFTs could be useful if used as proof of ownership instead of expensive pictures, etc.
The NFT as ownership should really become the standard. Instead of having people “authorizing” yada yada, it’s done completely by machine and is traceable.
No middlemen needed. Just: I own x, this says I own x. I can sell you x, and you get ownership of x immediately. No “waiting 45 days to close,” no “2-day transaction close,” not even “title search verification.” Too many middlemen benefit from the current system to let NFTs replace them, though. That’s the actual challenge.
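As a toy sketch of that idea (all names hypothetical; a real NFT lives on a blockchain with cryptographic signatures, not in an in-memory dict):

```python
# Toy ownership ledger: the ledger itself enforces "only the owner
# can sell", with no human middleman step. Purely illustrative.
class ToyLedger:
    def __init__(self) -> None:
        self.owners: dict[str, str] = {}  # token_id -> current owner

    def mint(self, token_id: str, owner: str) -> None:
        if token_id in self.owners:
            raise ValueError("token already exists")
        self.owners[token_id] = owner

    def transfer(self, token_id: str, seller: str, buyer: str) -> None:
        if self.owners.get(token_id) != seller:
            raise PermissionError("seller does not own this token")
        self.owners[token_id] = buyer  # immediate, machine-checked, traceable

ledger = ToyLedger()
ledger.mint("house-123", "alice")
ledger.transfer("house-123", "alice", "bob")  # no 45-day close
print(ledger.owners["house-123"])  # bob
```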
NFTs will creep in slowly as efficiency gains are realized. They are already being used for airline tickets.
Okay, someone gains access to your device and sends themselves the NFT that proves ownership of your house.
What do you do? Do you just accept that since they own the NFT, that means they own the house? Probably not. You’ll go through the legal system, because that’s still what ultimately decides ownership. I bet you’ll be happy about middlemen and “waiting 45 days to close” then.
As a programmer and 3D artist, getting almost instant art for reference and using ChatGPT to help me solve complex coding problems has sped up production significantly. There are even plugins that generate and texture 3D models for you now, which means I can do way more by myself.
This makes me think I should stay in IT infrastructure and not move to a developer position.