It just occurred to me that LLMs are good for two opposite kinds of people:
- The obvious ones: psychopaths, or people behaving like them, who want to distort the concept of truth and think that possessing such technology will make their approach to society easier.
- People like me, who know that no random written message or drawn picture can be trusted anyway, so it’s better to flood humanity with fakes until it learns this simple truth.
I think both are right to some extent. Still, it won’t work exactly the way they want.
It’s like how the Bolsheviks, while fighting illiteracy, basically conditioned the first generation of literate people to think that everything officially printed was true, even that being officially printed was identical to being true, and that doubting it was religious darkness and ignorance. As if blind belief were science and knowledge, and skepticism were darkness and ignorance. What could go wrong.
And then, in Stalin’s years, there were shortened evening education courses for workers, where they’d learn how to calculate things in some specialty, but without depth or context.
So you’d get a lot of engineers capable of really building and operating things, and believing they could build and operate even more complex things (like spaceships eventually, or a planet-wide railway system, or whatever), but not understanding the context, or even the philosophy of science. What’s worse, they’d think they understood it well, because their education included “scientific communism” covering materialism and dialectics.
So, back to the subject: they got a lot of people to believe everything officially printed on paper, for a generation or even two. And those who didn’t would still absorb a lot of it indirectly from their parents or peers.
But in the end, even though the damage was already done, not believing everything, even from a “respectable” source, is now a good trait of many ex-Soviet people. It’s easier to notice among them than among Americans.
EDIT:
About that woman - this works too. She will see that a chatbot can’t provide depth when she wants it. I just hope she won’t feel too bad in that moment.
If AI were sapient/sentient, I’d be 100% for this. Sapiosexuals assemble!
Given that LLMs are far, far from sapient/sentient at this point, however, this just makes me sad thinking about the sorry state of human interactions nowadays. I don’t and can’t blame her, though…
Unless you own the AI model and can run it on your own hardware, it’s profoundly stupid. People will become slaves to the corporation that holds their AI relationship hostage. It can kill your “loved one” at any time, for any reason.
I fail to see how that is significantly different from what we have nowadays with humans.
We are already dependent on large corporations, some of us only materially, others ideologically as well.
We deny healthcare, food, water, and shelter to people who can’t afford ridiculous prices, hold the wrong social status, or have the “wrong” beliefs, skin colour, sexual orientation, gender identity (etc., etc.), which is essentially killing them. That’s if we don’t just outright decide to “liberate” some other nation for whatever arbitrary reason and start carpet-bombing civilians in hospitals because a handful of terrorists are supposedly active within said nation.
Catfishing has been a thing since the inception of third-party dating, and scams have been a thing since before recorded history. Lying is as old as sentience itself.
A tech bro’s wet dream comes true.
Him