Did nobody really question the usability of language models in designing war strategies?
Correct, people heard “AI” and went completely mad imagining things it might be able to do. And the current models act like happy dogs that are eager to give an answer to anything even if they have to make one up on the spot.
LLMs are just plagiarizing bullshitting machines. It's how they're built: plagiarize when they have the specific training data, modify the answer when they must, and make it up from whole cloth as their base behavior. And they're accidentally good enough to convince many people.
How is that structurally different from how a human answers a question? We repeat an answer we "know" if possible, assemble something from fragments of knowledge if not, and just make something up from basically nothing if needed. The main difference I see is a small degree of self-reflection, the ability to estimate how good or bad the answer likely is, and frankly plenty of humans are terrible at that too.
I dare say that if you ask a human “Why should I not stick my hand in a fire?” their process for answering the question is going to be very different from an LLM.
ETA: Also, working in software development, I’ll tell ya… Most of the time, when people ask me a question, it’s the wrong question and they just didn’t know to ask a different question instead. LLMs don’t handle that scenario.
I’ve tried asking ChatGPT “How do I get the relative path from a string that might be either an absolute URI or a relative path?” It spat out 15 lines of code for doing it manually. I ain’t gonna throw that maintenance burden into my codebase. So I clarified: “I want a library that does this in a single line.” And it found one.
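For what it's worth, that task really can be a one-liner with nothing but the standard library. This is a sketch in Python (the thread doesn't say what language or library was actually involved), using `urllib.parse`:

```python
from urllib.parse import urlparse

def relative_path(s: str) -> str:
    # urlparse handles both cases: for an absolute URI it strips the
    # scheme and host and keeps the path; for a bare relative path it
    # returns the string's path component unchanged.
    return urlparse(s).path

print(relative_path("https://example.com/docs/index.html"))  # /docs/index.html
print(relative_path("docs/index.html"))                      # docs/index.html
```

The point stands either way: the model defaulted to generating 15 lines of manual string handling until it was explicitly told a one-line library call was wanted.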
An LLM can be a handy tool, but you have to remember that it’s also a plagiarizing, shameless bullshitter of a monkey paw.
I would argue that a decent portion of humans are usually okay with admitting they don't know something.
Unless they are in a situation where they will be punished for not knowing.
My favorite doctor once admitted he didn't know something. At first I thought, "Man, that's weird," but then I remembered all the times I've personally experienced, or heard stories of, doctors bullshitting their way into conclusions, like how I supposedly couldn't possibly be diagnosed with ADHD at 18.
A human brain can do that on about 20 watts of power. ChatGPT uses up to 20 megawatts.
To be fair, they're not accidentally good enough: they're intentionally good enough.
That’s where all the salary money went: to find people who could make them intentionally.
GPT-2 was just a bullshit generator. It was like a politician trying to explain something they know nothing about.
GPT-3 was just a bigger version of GPT-2: the same architecture, but with more nodes and more data, as far as I followed the research. Yet that one could suddenly do a lot more than the previous version, so yes, by accident. And then the AI scene exploded.
It kind of irks me how many people want to downplay this technology in this exact manner. Yes, you're sort of right, but that in no way changes how it will be used and abused.
“But people think it’s real AI tho!”
Okay and? Most people don’t understand how most tech works and that doesn’t stop it from doing a lot of good and bad things.
If that's really how they worked, it wouldn't explain these:
https://notes.aimodels.fyi/researchers-discover-emergent-linear-strucutres-llm-truth/
https://adamkarvonen.github.io/machine_learning/2024/01/03/chess-world-models.html
I will read those, but I bet “accidentally good enough to convince many people.” still applies.
A lot of LLM output looks good to non-experts but is full of crap.