OpenAI was working on advanced model so powerful it alarmed staff
Reports say new model Q* fuelled safety fears, with workers airing their concerns to the board before CEO Sam Altman’s sacking
So staff requested the board take action, then those same staff threatened to quit because the board took action?
That doesn’t add up.
The whole thing sounds like some cockamamie plot derived from ChatGPT itself. Corporate America is completely detached from the real world.
That’s an appealing ‘conspiracy’ angle, and I understand why it might seem juicy and tantalising to onlookers, but that idea doesn’t hold up to any real scrutiny whatsoever.
Why would the Board willingly trash their reputation? Why would they drag the former Twitch CEO through the mud and make him look weak and powerless? Why would they not warn Microsoft and risk damaging that relationship? Why would they let MS strike a tentative agreement with the OpenAI employees that upsets their own staff, only to then undo it?
None of that makes any sense whatsoever from a strategic, corporate “planned” perspective. They are all actions of people who are reacting to things in the heat of the moment and are panicking because they don’t know how it will end.
OpenAI loves to “leak” stories about how they’ve developed an AI so good that it is scaring engineers because it makes people believe they’ve made a massive new technological breakthrough.
More like:
- They get a breakthrough called Q* (Q star) which is just combining 2 things we already knew about.
- Chief scientist dude tells the board Sam has plans for it already.
- Board says Sam is going too fast with his “breakthroughs” and fires him.
- Original scientist who raised the flag realized his mistake and started supporting Sam, but the damage was done.
- Microsoft
My bet is the board freaked out at how “powerful” they heard it was (which is still unfounded; from what various articles explain, Q* is not very groundbreaking) and jumped the gun. So now everyone wants them to resign, because they’ve shown they’ll take drastic action, without asking, on things they don’t understand.
There’s clearly a good amount of fog around this. But one thing that is clearly true is that at least some OpenAI people have behaved poorly: Altman, the board, some employees, the bulk of the employees, or maybe all of them in some way or another.
What we know about the employees is the petition, which ~90% of them signed. Many were quick to point out the weird peer pressure that likely surrounded it. Amid all that, it’s perfectly plausible that some employees raised alarms about the new AI to the board or other higher-ups. Either those employees were also unhappy with the poorly managed Altman sacking, or they never signed the petition, or they signed it while not really wanting Altman back that much.
I’m so burnt out on OpenAI ‘news’. Can we get something substantial at some point?
There’s a huge discrepancy between the scary warnings about Q*, which call it the lead-up to artificial superintelligence, and the actual discussion of Q*’s capabilities (it is good enough at logic to solve some math problems).
My theory: the actual capabilities of Q* are perfectly nice and useful and unfrightening… but somebody pointed out the obvious: Q* can write code.
Either:
- “Q* is gonna take my job!”
- “As we enhance Q*, it’s going to get better at writing code… and we’ll use Q* to write our AI code. This thing might not be our hypothetical digital God, but it might make it.”
It’s possible it’s related to the Q* function from Q-learning, a strategy used in deep reinforcement learning!
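For anyone curious, here is a minimal sketch of what the Q function in classic tabular Q-learning looks like, with the standard update Q(s,a) += alpha * (r + gamma * max Q(s',·) - Q(s,a)). The toy corridor environment is something I made up for illustration; it has nothing to do with whatever OpenAI’s Q* actually is.

```python
import random

# Minimal tabular Q-learning sketch (toy corridor, nothing OpenAI-specific).
N_STATES = 6                     # cells 0..5; start at 0, reward at 5
ACTIONS = [-1, +1]               # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def pick_action(state):
    """Epsilon-greedy over the Q table, breaking ties randomly."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(300):                     # episodes
    state = 0
    for _ in range(100):                 # cap steps per episode
        action = pick_action(state)
        nxt, reward, done = step(state, action)
        # Core Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
        if done:
            break

# After training, the greedy action for states 0-4 should be +1 (move right).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```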
… or this is the origin of the Q and we’re all fucked. I find my hypothesis much more plausible.
Nah. Programming is… really hard to automate, and machine learning more so. The actual programming for it is pretty straightforward, but to make anything useful you need to get training data, clean it, and design a structure, which is much too general for an LLM.
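To put that in concrete terms, here’s a made-up sketch of a small supervised-learning script: the actual model-fitting code is a couple of lines, while most of it is the data sourcing/cleaning/structuring work. The file name, columns, and toy feature are all hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# The bulk of the work: sourcing, cleaning, and structuring the data
# (hypothetical file and columns).
df = pd.read_csv("support_tickets.csv")
df = df.dropna(subset=["text", "label"])
df["text"] = df["text"].str.lower().str.strip()
df = df[df["text"].str.len() > 10]            # drop junk rows
X = df["text"].str.len().to_frame("length")   # toy feature; real feature design is the hard part

# The "straightforward" part: the actual model code.
X_train, X_test, y_train, y_test = train_test_split(X, df["label"], test_size=0.2)
model = LogisticRegression().fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
```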
Programming is like 10% writing code and 90% managing client expectations in my small experience.
But a lot of the crap you have to do only exists because projects are large enough to require multiple separate teams, so you get all the overhead of communication between the teams, etc.
If the task gets simple enough that a single person can manage it, a lot of the coordination overhead will disappear too.
In the end though, people may find out that the entire product they are trying to develop using automation is no longer relevant anyway.
Programming is 10% writing code, 80% being up at 3 in the morning wondering whY THE FUCKING CODE WON’T RUN CORRECTLY (it was a typo that you missed despite looking at it over 10 times), and 10% managing expectations
The sensationalized headline aside, I wish people would stop being so dismissive about reports of advancement here. Nobody but those at the fringes is freaking out about sentience, and there are plenty of domains where small improvements in the models, if they end up being successful, can fuck up big parts of our defense/privacy/infrastructure. It really doesn’t matter whether a computer has subjective experience if it is able to decrypt AES-192 or identify keystrokes from an audio recording.
We need to be talking about what happens after AI becomes competent at even a handful of tasks, and it really doesn’t inspire confidence if every bit of news is received with a “LOL computers aren’t conscious GTFO”.
That’s why I hate when people retort “GPT isn’t even that smart, it’s just an LLM.” Like yeah, the machines being malevolent is not what I’m worried about, it’s the incompetent and malicious humans behind them. Everything from scam mail to propaganda to law enforcement is testing the water with these “not so smart” models and getting incredible (for them) results. Misinformation is going to be an even bigger problem when it’s so hard to know what to believe.
Also, “yeah, what are people’s minds really?” The fact that we cannot really categorize our own minds doesn’t mean that we’re forever superior to any AI model we can categorize. The mere fact that the current bleeding edge is called an LLM doesn’t mean it cannot fuck with us, especially if a future one is even more powerful.
Allegedly. And no proof was presented. The letter cited was nowhere to be found.