I hate everything about this: the lack of transparency, the lack of communication, the chaotic back and forth. We don’t know if the company is now in a better position or a worse one.
I know it leaves me feeling pretty sick and untrusting, considering the importance and potentially extreme disruptiveness of AI in the coming years.
Given the rumors he was fired based on undisclosed usage of some foreign data scraping company’s data, it ain’t looking good.
Now that there’s big money involved, screw ethics. We don’t care how the training data was acquired.
Now that there’s big money involved, screw ethics. We don’t care how the training data was acquired.
I wouldn’t care about ethics here if the money were taken out of it as well.
If they actually lived up to the goals they set for themselves, it would be fine.
But it’s similar to Google, back in the day, with “don’t be evil”.
I’ve tried, but I can’t seem to find it. There was a thread on Lemmy somewhere that linked to a thread on Blind, where someone claiming to work at OpenAI said they had heard that from the board.
But ultimately it’s just rumors; we don’t know for sure. It was at least pretty plausible, though, and it’s the kind of thing I would expect the board of a very successful AI company to fire the CEO for, since the company is obviously doing really well right now.
I actually like the chaos, because I don’t like having one small group of people as the self-appointed and de facto gatekeepers of AI for everyone else. This makes it clear to everyone why it’s important to control your own AI resources.
Accelerationism is human sacrifice. It only works if it does damage… and most of the time, it only does damage.
Not wanting a small group of self-appointed gatekeepers is not the same as accelerationism.
I’m with you there, I just hope the general public comes to that realization.
On the one hand, the board was an insane cult of effective altruism / longtermism / LessWrong, so fuck them. But on the other hand, this was a worker revolt for the capitalists, which I guess shouldn’t be surprising since tech workers famously lack class consciousness.
an insane cult of effective altruism / longtermism / LessWrong
I’m out of the loop. What’s the problem with those things?
People are asking what is wrong with these cults. It’s a lot to cover so I won’t try. People who follow the podcasts Tech Won’t Save Us or This Machine Kills will already be familiar with them. Here’s an article relevant to the moment that talks about them a little: Pivot to AI: Replacing Sam Altman with a very small shell script
Genuinely confused by your first statement (in particular effective altruism). What does that have to do with the board?
Not an attack, just actually clueless.
Several of the [former] board members are affiliated with the movement. EA is concerned with existential risk, and AI is perceived as a big one. OpenAI’s nonprofit was founded with the intent to perform AI research safely, and those members of the board still reflected that interest.
That’s what happens when the wealth is shared with those who make it. Everyone becomes a capitalist.
Actually, that’s just self-interest. Both capitalism and socialism claim to benefit workers, but only socialism has been shown to do that to any real extent. Capitalist hoarding and speculation are the primary drivers of inflation and things like the unaffordability of housing.
If you labor for a living, you aren’t a capitalist. You’re labor.
famously lack class consciousness
How much money do you suppose the average OpenAI employee makes? What class do you imagine they’re part of?
I’m sure the developers make the lower half of six figures, but they still have to sell their labor to survive, so they’re still working class.
I’ve been an SF Bay Area software developer for almost thirty years, so I know them well. I consider us members of the professional–managerial class (PMC). We generally think we’re “above” the working class (we’re not), and so we seldom have any sense of solidarity with the rest of the working class (or even each other), and we think unionization is for those other people and not us.
When Hillary Clinton talked about the “basket of deplorables,” she was talking to her PMC donors & voters about the rest of the working class, and we eat that shit up. Most of my peers have still learned no lessons from her election defeat, preferring to blame debunked RussiaGate conspiracy theories.
I guess the entire workforce calling the board incompetent twats and threatening to quit was actually effective.
Sounds like they got together and forced their hand. Wonder if there’s a term for that?
I guess this will have to do as entertainment until GRRM finishes his damn book.
Any day now! I have a friend who got hyped up every time George published another chapter from WoW, but I just refuse to read any of them. I want a complete book. I’m not sure he has any idea of how to finish his own story.
I know you’re joking, but it stands for Winds of Winter if anyone is confused.
Man what a clusterfuck. Things still don’t really add up based on public info. I’m sure this will be the end of any real attempts at safeguards, but with the board acting the way it did, I don’t know that there would’ve been even without him returning. You know the board fucked up hard when some SV tech bro looks like the good guy.
I mean, the non-profit board appears, at first glance, to have fired the CEO over their paranoid-delusional belief that this LLM is somehow a real AGI, and that we are already at the point of a thinking, learning AI.
Either that was delusions of grandeur on the part of the board, or they didn’t and don’t understand what is really going on, which might be why they fired the CEO: for not truly informing the board what level OpenAI’s AI is actually at. So the board was trying to rein in a beast that is merely a puppy, acting on information that was wrong.
As I used the word “appears”, I am postulating based on how the company is controlled (through the non-profit entity), as well as on statements certain board members have made in the past, such as Ilya Sutskever (now ex-board??), whose thinking has likely been influenced by his mentor Geoffrey Hinton, who was quoted on 60 Minutes saying that AI is about to be “more intelligent than us”. Beyond his scientific endeavors into AI and his position as Chief Scientist of OpenAI, Ilya is known for some odd behavior around his commitment to AI safety, though I’m sure his beliefs come from the right place.
There’s a lot more to this, for each board member and for Sam, but it makes me believe that a large wall was erected around information flow, leading to a paranoid board.