Here in the USA, you have to be afraid for your job these days. Layoffs are rampant everywhere due to outsourcing, and now we have AI on the horizon promising to make things more efficient, but we all know what it's actually going to be used for. They want to automate everything away. People packaging up goods for shipping, white collar jobs like analytics, business intelligence, customer service, chat support. Any sort of job that takes a low or moderate amount of effort or intellectual ability is threatened by AI. But once AI takes all these jobs away and shrinks the amount of labor required, what are all these people going to do for work? It's not like you can easily retrain someone who's a business intelligence engineer to go do something else like HVAC, or be a nurse. So you have the entire tech industry basically folding in on itself, trying to win the rat race and grab the few remaining jobs left over…
But it should be pretty obvious that you can't run an entire society with no jobs. Because then people can't buy groceries, groceries don't sell, so grocery stores start hurting and then they can't afford to employ cashiers and stockers, and the whole thing starts crumbling. This is the future of AI, basically. The more we automate, the less there is for people to do, so they have no jobs and no income and can't survive…
Like, how long until we realize how detrimental AI is to society? 10 years? 15?
At some point society will need to realize that traditional work that is handled by automation (whether AI or not) isn’t necessary and economic systems will have to change.
I’m not an expert by any means, and I just don’t see this happening in the near-term. My opinion is that for now (the short-term at least) it’ll just widen the gap between rich and poor.
Society can exist without jobs; not everything has to be capital. In fact, reaching a post-scarcity world is a prerequisite for communism.
AI hype is also overblown as fuck. I remember watching the CGP Grey video Humans Need Not Apply, like what, 8 years ago? We haven't really achieved some epic breakthrough since then, have we?
From my perspective as a software engineer, "AI" is nothing but a productivity tool. It reduces the amount of mundane work I have to do, but then so does the IDE I use.
As humans, we have been automating tasks for a long time. Just think about your washing machine: do you have any idea how hard it would be to have clean clothes without one? Do you think we would be better off if we needed cleaning services that wash our clothes for us using human labour, just so people have jobs? Or is it better to use that effort elsewhere?
This is the part of the AI conversation that always bugs me. People have just concluded that the hype is real and that we've reached the point people fear in movies. They don't understand that it's mostly bullshit. Sure, the fancy autocomplete can toss up some boilerplate code and it's mostly OK. Sure, it saves me time scrolling through StackOverflow search results.
But it’s simply not this all-knowing miracle replacement for everything. I think everyone has been conditioned by entertainment to fear the worst. When that bubble bursts, IT will be the part which wreaks havoc on the economy.
Also, this recent classic: I will fucking piledrive you if you mention AI again was really illuminating.
That’s already been going to the wrong people for decades now.
The least drastic solution would be something like UBI, where a lot of people would be miserable, but at least they'd be able to put food on the table. (In case you've seen The Expanse series, I imagine something like the scene where Bobbie asks for directions on Earth.)
A more drastic solution would be to not tie the worth of people to the amount of work they do or the amount of wealth they have.
I don't disagree with most of this. But I think the celebration of not having a job muddles the point a bit. I don't see a viable future if everyone does the same.
What do you think happened to buildings full of engineers drawing up plans and making stress load calculations? What do you think happened to switchboard operators?
I can answer that. We won’t.
We'll keep iterating and redesigning until we have actual working general intelligence AI. Once we've created a general intelligence, it will be a matter of months or years before it's a superintelligence that far outshines human capabilities. Then you have a whole new set of dilemmas. We'll struggle with those ethical and social dilemmas for some amount of time, until the situation flips and the real ethical dilemmas are shouldered by the AIs: How long do we keep these humans around? Do we let them continue to go to war with each other? Do they own this planet? Etc.