Deputy prime minister to urge UN general assembly to create international regulatory system
s/AI/Technology/
Regulators still don’t understand basic things about the internet, why are we surprised?
Computers, the Internet, and the whole of IT have been moving too fast for regulators to keep up since the 90s. They are slower than a tortoise walking through molasses with a blindfold on.
But what can you expect when those who make regulations over IT still don’t know how to change the time on their VCR?
Have you tried being less old and understanding tech better?
That’s really not the differentiating factor.
Easily 80% of the young people loudly commenting on the topic online have no idea about questions as central to where the tech is going as "Do large language models learn world models, or just surface statistics?"
I see a ton of young people patting themselves on the back while regurgitating what is, at this point, clear misinformation about stochastic parrots and content remixing, picked up from similarly poorly informed tech writers with skin in the game, all oblivious to the picture emerging in ongoing research.
It is moving too quickly.
Just today I was reading a paper on using CoT prompting (research from 2022) to efficiently transfer domain knowledge from a larger model to a much smaller model, which then outperforms the original.
What that's going to mean for Meta's open-source models, for the market for synthetic data, and for the practical limits on the impact of IP cases is wild.
And that’s just this week’s news.
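The distillation idea mentioned above can be sketched in a few lines. This is a minimal, illustrative Python sketch of how a teacher model's chain-of-thought rationales become training targets for a smaller student model; the function names and the canned teacher response are assumptions for the example, not any paper's actual API (a real pipeline would call a large model's API in place of the stub).

```python
# Minimal sketch of CoT-based distillation data prep, in the spirit of
# the 2022-era "distill step-by-step" line of research. All names here
# are illustrative assumptions, not a specific paper's implementation.

def teacher_rationale(question: str) -> str:
    """Stand-in for a large teacher model prompted with a
    chain-of-thought instruction ("Let's think step by step").
    A real pipeline would query the teacher model here."""
    canned = {
        "2 + 3?": "2 plus 3 equals 5. Answer: 5",
    }
    return canned[question]

def build_distillation_example(question: str) -> dict:
    """Turn one teacher CoT trace into a supervised training example
    for a small student: input = question, target = rationale + answer."""
    return {"input": question, "target": teacher_rationale(question)}

# The student model is then fine-tuned on pairs like this one,
# learning the reasoning steps rather than just the final label.
dataset = [build_distillation_example("2 + 3?")]
print(dataset[0]["target"])
```

The key design point is that the student is trained to reproduce the *rationale*, not just the answer, which is what lets a much smaller model absorb the larger model's domain knowledge.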
It’s way too much too quickly.
Keep in mind that the average practicing doctor is 17 years behind the most recent research.
To expect a politician of any age to have a solid grasp on this stuff isn’t practical.
There are a number of trends in the research that can be reasonably predicted, but I’ve never seen a field moving this fast.
The very idea of trying to predict the situation even five years out is ludicrous. By the time legislation proposed today is being passed, it’s going to be obsolete.
Regulators are screwed.
Honestly, an easy way to regulate generative AI is to just pretend the output was made by a person. If your "AI" is used to create a deepfake political ad, you should be fined or sued as if you had an intern make it. If you aren't sure the LLM won't hallucinate falsehoods, don't use it for news articles unless you're ok with libel laws being applied.
Regulators hardly know what the internet is yet.