After internal chaos earlier this month, OpenAI replaced the women on its board with men. As it plans to add more seats, Timnit Gebru, Sasha Luccioni, and other AI luminaries tell WIRED why they wouldn’t join.

51 points

how is the board ever supposed to get women if it needs women to ever get women on the board?

3 points

Let other members of the board go so there’s more seats.

22 points

Not having Larry Summers on it could be a start ;-)

3 points

Can I get a summary?

1 point

Yeah.

27 points

Seems like the article is trying to combine two issues into one: the lack of representation of women on OpenAI’s board, and the concerns of some prominent AI researchers (who happen to be women) about OpenAI putting ambition and profitability above safety.

On the representation side, this seems like a chicken and egg problem where there won’t be any change in diversity if no one wants to make a move because the board isn’t already diverse enough.

And on the AI safety side, there won’t be any change unless someone sits on the board and pushes for safety proactively, instead of reactively through legislation.

29 points

there won’t be any change unless someone sits on the board and pushes for safety proactively, instead of reactively through legislation.

There won’t be any change because the board that pushed back just got replaced with people who won’t.

2 points

And they’re getting an opportunity to apply and bring back some balance, but decided not to.

6 points

It also conflates “AI safety” (Toner’s thing) and “AI ethics” (Gebru’s thing). They’re two different things, jammed together here because their proponents are both women (FFS).

“AI safety” is the sci-fi, paperclip maximisation, fantasies about the potential future of AI.

“AI ethics” is the real actual harms done in the here and now, by embedding existing biases into decision-making, and consuming enormous amounts of resource.

Meredith Whittaker sums up the difference nicely in this interview:

So in 2020-21 when Timnit Gebru and Margaret Mitchell from Google’s AI ethics unit were ousted after warning about the inequalities perpetuated by AI, did you feel, “Oh, here we go again”?

Timnit and her team were doing work that was showing the environmental and social harm potential of these large language models – which are the fuel of the AI hype at this moment. What you saw there was a very clear case of how much Google would tolerate in terms of people critiquing these systems. It didn’t matter that the issues that she and her co-authors pointed out were extraordinarily valid and real. It was that Google was like: “Hey, we don’t want to metabolise this right now.”

Is it interesting to you how their warnings were received compared with the fears of existential risk expressed by ex-Google “godfather of AI” Geoffrey Hinton recently?

If you were to heed Timnit’s warnings you would have to significantly change the business and the structure of these companies. If you heed Geoff’s warnings, you sit around a table at Davos and feel scared.

Geoff’s warnings are much more convenient, because they project everything into the far future so they leave the status quo untouched. And if the status quo is untouched you’re going to see these companies and their systems further entrench their dominance such that it becomes impossible to regulate. This is not an inconvenient narrative at all.

12 points

Article seems to be mainly about Timnit Gebru. I struggle to see ANY business wanting her on the board. Sasha Luccioni appears to be another AI Doomer, i.e. up there with Helen Toner, who

said that if the company was destroyed as a result of Altman’s firing, that could be consistent with its mission, the New York Times reported.

And additionally reported:

The New York Times reported this week that in the weeks leading up to Altman’s firing, he and Toner had discussed an October paper she had co-authored for CSET.

In the paper, OpenAI is criticised for releasing ChatGPT at the end of last year, sparking “a sense of urgency inside major tech companies”, like Google, to ensure they did not fall behind and prompting competitors to “accelerate or circumvent internal safety and ethics review processes”.

Seriously, look at the people in the article, the organisations they’re associated with, and the opinions they’ve publicly stated. The Doomers at OpenAI tried a coup and failed. The Accels won. The current board surely wouldn’t welcome or be welcoming to the Doomers. We’re clearly well past the point where people can sensibly pretend they can hold back the avalanche of AI from the board of a single company in the space.

57 points

It doesn’t need to be a “prominent” woman (AKA a rich person).

How about a woman who is passionate and knowledgeable, rather than just one who is rich?

7 points

If they aren’t themselves rich, they might vote against the interests of the rich, and then get vocal about why they were ousted, which makes the illusion that capitalism is amazing for everyone a bit harder to buy.

