I’m rather curious to see how the EU’s privacy laws are going to handle this.
(Original article is from Fortune, but Yahoo Finance doesn’t have a paywall)
There’s nothing that says AI has to exist in a form created by harvesting massive amounts of user data in a way that can’t be reversed or retracted. It’s not technically impossible to avoid that; we just haven’t done it because it’s inconvenient and more work.
What if you want to create a model that predicts, say, diseases or medical conditions? You have to train that on medical data or you can’t train it at all. There’s simply no way that such a model could be created without using private data. Are you suggesting that we simply not build models like that? What if they can save lives and massively reduce medical costs? Should we scrap a massively expensive and successful medical AI model just because one person whose data was used in training wants their data removed?
This is an entirely different context. Most of the talk here is about LLMs, and health data is entirely different: the regulations and legalities are different, people don’t publicly post their health data to begin with, and health data isn’t obtained without consent and already has tons of red tape around it. It would be much easier to obtain “well sourced” medical data than the broad swaths of stuff LLMs are sifting through.
But the point still stands - if you want to train a model on private data, there are different ways to do it.
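The comment doesn’t name specific techniques, but differential privacy is one such approach: statistics or model updates are released with calibrated noise so that no single person’s record can be recovered or confirmed from the output. A minimal sketch, assuming a simple Laplace-mechanism mean (the function name and parameters here are my own illustration, not from any particular library):

```python
import random

def dp_mean(values, lower, upper, epsilon, rng=random):
    """Differentially private mean via the Laplace mechanism.

    Each value is clipped to [lower, upper]; the sensitivity of the
    mean over n values is then (upper - lower) / n, and Laplace noise
    with scale sensitivity / epsilon masks any single contribution.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / n
    sensitivity = (upper - lower) / n
    scale = sensitivity / epsilon
    # A Laplace(scale) sample is the difference of two exponentials.
    noise = rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
    return true_mean + noise

random.seed(0)
ages = [34, 29, 51, 42, 38, 45, 60, 27, 33, 49]
print(dp_mean(ages, lower=0, upper=100, epsilon=1.0))
```

Smaller epsilon means more noise and stronger privacy; the point is that the released number is useful in aggregate while individual records stay protected, which is the kind of design choice the comment is gesturing at.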
I guarantee the person you’re arguing with would rather see people die than let an AI help them and be proven wrong.