30 Nov 2022 release: https://openai.com/index/chatgpt/
Other than endless posts from the general public telling us how amazing it is, peppered with decision makers using it to replace staff and the subsequent news reports about how it told us we should eat rocks, or some variation thereof, it’s had no impact whatsoever on my personal life.
In my professional life as an ICT person with over 40 years’ experience, it’s helped me identify which people understand what it is and, more specifically, what it isn’t (intelligent), and respond accordingly.
The sooner the AI bubble bursts, the better.
I fully support AI taking over stupid, meaningless jobs if it also means the people who used to do those jobs have financial security and can go do a job they love.
The software developer AFAS has decided to give certain employees one paid day off per week and let AI do their job for that day. If that is the future AI can bring, I’d be fine with that.
The caveat is that that money has to come from somewhere, so their customers will probably foot the bill, meaning that other employees elsewhere will get paid less.
But maybe AI can be used to optimise business models and make better predictions. Less waste means less money spent on processes, which can mean more money for people. I also hope AI can help companies distribute money better.
This, of course, is exactly what stakeholders and decision makers do not want, for obvious reasons.
What’s stopping anything like that is that the AI we have today is not intelligent in any sense of the word, despite the marketing and “journalism” hype to the contrary.
ChatGPT is predictive text on steroids.
Type a word on your mobile phone, then keep tapping the next predicted word and you’ll have some sense of what is happening behind the scenes.
The difference between your phone keyboard and ChatGPT? Many billions of dollars and unimaginable amounts of computing power.
It looks real, but there is nothing intelligent about the selection of the next word. It just has much more context to guess the next word and has many more texts to sample from than you or I.
There is no understanding of the text at all, no true or false, right or wrong, none of that.
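To make that “keep tapping the next predicted word” idea concrete, here’s a toy sketch in Python: a bigram lookup table that always picks the most frequent next word. To be clear, this is an illustration of the phone-keyboard analogy, not how ChatGPT is actually built (that’s a neural network trained on vast amounts of text), but the word-by-word generation loop has the same shape.

```python
# Toy "predictive text": a bigram model that always picks the most
# frequent next word. Real LLMs use neural networks over subword tokens,
# but generation is still one "guess the next word" step at a time.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each other word.
next_counts = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    next_counts[prev][cur] += 1

def generate(word, length=8):
    """Keep tapping the 'predicted word' button, like a phone keyboard."""
    out = [word]
    for _ in range(length):
        if word not in next_counts:
            break
        # No understanding, no true or false: just the most frequent follower.
        word = next_counts[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the cat sat on the"
```

Note how it happily loops forever, with no notion of meaning, truth, or when to stop; only frequency. Scale that up by billions of dollars and you get something that looks far more fluent, but the selection of the next word is still just statistics.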
AI today is Assumed Intelligence.
Arthur C. Clarke said it best:
“Any sufficiently advanced technology is indistinguishable from magic.”
I don’t expect this to be solved in my lifetime, and I believe that the current methods of “intelligence” are too energy-intensive to be scalable.
That’s not to say that machine learning algorithms are useless; there are significant positive and productive tools around, ChatGPT and its Large Language Model siblings notwithstanding.
Source: I have 40+ years’ experience in ICT and an understanding of how this works behind the scenes.
I think you’re right. AGI and certainly ASI are behind one large hurdle: we need to figure out what consciousness is and how we can synthesize it.
As Qui-Gon Jinn said to Jar Jar Binks: the ability to speak does not make you intelligent.
As a software developer, the one use case where it has been really useful for me is analyzing long, complex error logs and finding possible causes of the error. Getting it to write code sometimes works okay-ish, but more often than not it’s pretty crap. I don’t see any use for it in my personal life.
I think its influence is negative overall. Right now it might be useful for programming questions, but that’s only because it’s fed with human-generated content from sites like Stack Overflow. Now those sites are slowly dying out because people use ChatGPT instead, and this will have the inverse effect: in the future, AI will have less useful training data, which means it’ll be less useful for future problems, while having effectively killed those useful sites in the process.
Looking outside of my work bubble, its effect on academia and learning seems pretty devastating. People can now cheat themselves towards a diploma with ease. We might face a significant erosion of knowledge and talent with the next generation of scientists.
AI has completely killed my desire to teach writing at the community college level.
Agreed. I started the steps needed to get certified as an educator in my state but decided against it. ChatGPT isn’t the only reason, but it is a contributing factor. I don’t envy all the teachers out there right now who have to throw out the entire playbook of what worked in the past.
And I feel bad for students like me who really struggled with in-class writing by hand in a limited amount of time, because that is what everyone is resorting to right now.
It cost me my job (partially). My old boss swallowed the AI pill hard and wanted everything we did to go through GPT. It was ridiculous, and it made things that would normally take me 30 seconds take 5-10 minutes of “prompt engineering”. I went along with it for a while, but after a few weeks I gave up and stopped using it. When my boss asked why, I told her it was a waste of time and disingenuous to our customers to have GPT sanitize everything.

I continued to refuse to use it (it was optional) and my work never suffered. In fact, some of our customers specifically started going through me because they couldn’t stand dealing with the obvious AI slop my manager was shoveling down their throats. This pissed off my manager hardcore, but she couldn’t really say anything without admitting she might be wrong about GPT, so she just ostracized me and then fired me a few months later for “attitude problems”.
Curious - what type of job was this? Like, how was AI used to interact with your customers?
It was just a small e-commerce store. Online sales and shipping. The boss wanted me to run emails I would send to vendors through GPT, and any responses to customer complaints were put through GPT too. We also had a chat function on our site for asking questions and whatnot, and the boss wanted us to copy the customer’s chat into GPT, get a response, rewrite it if necessary, and then paste GPT’s response into our chat. It was so ass-backwards I just refused to do it. Not to mention it made the response times super high, so customers were just leaving rather than waiting (which of course was always the employees’ fault).
For work, I teach philosophy.
The impact there has been overwhelmingly negative. Plagiarism is more common, student writing is worse, and I need to continually explain to people that an AI essay just isn’t their work.
Then there’s the way admin seem to be in love with it, since many of them are convinced that every student needs to use the LLMs in order to find a career after graduation. I also think some of the administrators I know have essentially automated their own jobs. Everything they write sounds like GPT.
As for my personal life, I don’t use AI for anything. It feels gross to give anything I’d use it for over to someone else’s computer.
My son is in a PhD program and is a TA for a geophysics class that’s mostly online, so he does a lot of grading of assignments and tests. The number of submissions he gets that are obviously straight out of an LLM is really disgusting. Like sometimes they leave the prompt in. Sometimes they submit it even when the LLM responds that it doesn’t have enough data to give an answer and suggests ways the person could find out. It’s honestly pretty sad.
“convinced that every student needs to use the LLMs in order to find a career after graduation.”
Yes, of course, why are bakers learning to use ovens when they should just be training on app-enabled breadmakers and toasters using ready-made mixes?
After all, the bosses will find the automated machine product “good enough.” It’s “just a tool, you guys.”
Sheesh. I hope these students aren’t paying tuition, and even then, they’re still getting ripped off by admin-brain.
I’m sorry you have to put up with that. Especially when philosophy is all about doing the mental weightlifting and exploration for oneself!