I hate everything about this: the lack of transparency, the lack of communication, the chaotic back and forth. We don’t know now whether the company is in a better position or a worse one.
I know it leaves me feeling pretty sick and distrustful, considering how important and potentially disruptive (perhaps extremely so) AI will be in the coming years.
Given the rumors he was fired based on undisclosed usage of some foreign data scraping company’s data, it ain’t looking good.
Now that there’s big money involved, screw ethics. We don’t care how the training data was acquired.
I wouldn’t care about ethics here if the money were taken out of the equation as well.
If they lived up to the goals they set for themselves, it would be fine.
But it’s similar to Google back in the day, with “don’t be evil”.
I’ve tried, but I can’t seem to find it. There was a thread on Lemmy somewhere that linked to a thread on Blind where someone claiming to work at OpenAI said they’d heard that from the board.
Ultimately it’s just rumors, so we don’t know for sure. But it’s at least plausible, and it’s the kind of thing I’d expect the board of a very successful AI company to fire the CEO over, since the company is obviously doing really well right now.
I actually like the chaos, because I don’t like having one small group of people as the self-appointed, de facto gatekeepers of AI for everyone else. This makes it clear to everyone why it’s important to control your own AI resources.
Accelerationism is human sacrifice. It only works if it does damage… and most of the time, it only does damage.
Not wanting a small group of self-appointed gatekeepers is not the same as accelerationism.
I’m with you there; I just hope the general public comes to that realization.