We’ve already got LLMs that can simulate conversing with those dead people to some degree, so I wouldn’t say they’re beyond the reach of any technology. In a few years they might be good enough simulations that you can’t tell the difference.
LLMs are no better than speaking to a clever parrot. It might say the correct words, but it has no understanding, so there is no value in it beyond a petty parlour trick.
“Parrots only repeat” is a pretty archaic view in animal psychology, though.
LLMs are glorified chatbots, but parrots can actually understand to some extent. How far that extends is very much debatable.
Here’s a good set of videos from Nativlang on the subject https://www.youtube.com/watch?v=YmkQLDJdhJI&list=PLc4s09N3L2h2lYeVD6pmax3f7qiHxyq3k
I was absolutely being unfair to parrots to get my point across, I apologise to all our feathered friends.
Socrates character AI is no fun. He isn’t clever or insightful or skeptical.
good enough simulations that you can’t tell the difference.
This requires us having actual conversations with those dead people to compare against, which we obviously can’t do.
There is simply not enough information about any dead person to train a comprehensive model of how they would respond in arbitrary conversations. You might be able to train with some depth in their field of expertise, but the whole point is to talk about things they had no experience with, or at least things which weren’t known then.
So sure, maybe we get a model that makes you think you’re talking to them, but that’s no different than just having a dream or an acid trip where you’re chatting with Einstein.
Well, I’d think you’d test that model on living authors with similar inputs, make comparisons, and refine the process until nobody can tell the difference. We may never get all the way there, but I bet we’ll get close enough that we won’t be able to tell the difference.
As for when… Who knows?
There is simply not enough information about any dead person to train a comprehensive model of how they would respond in arbitrary conversations.
True. And even if we did, most of them would be super racist, anyway. Just like chatbots from a few years ago!
Wait, maybe we do have the necessary technology… Hooray? Lol.
daily reminder i will die alone.
I’ve already resigned myself to the fact that I will probably die alone in an apartment, not to be found for weeks, because no one checks in on me other than my parents. I won’t kill myself; I’m just not good at socializing.
Just give him a tablet with youtube kids on 24/7.
One of my favorites
Oh well done. Now I’m feeling warm and fuzzy.