Google is coming in for sharp criticism after a video went viral of the Google Nest assistant refusing to answer basic questions about the Holocaust while having no problem answering questions about the Nakba.

2 points

So if someone wrote a prompt to make an image of a black woman as a pope, would you expect the model to only return historical popes?

If the model is supposed to be able to produce both historically accurate images and mere possibilities, why would the expectation for a vague prompt be historical instead of possible?

If the model is supposed to default to historical accuracy, how would it handle a request for a red dragon? Just the painting named Red Dragon, or dragons from mythology or popular media?

Yes, there could be something that promotes diversity, or it could just be that the default behavior has no context for which content ‘should’ be historically accurate and which is just a randomized combination of position/race/gender.

2 points

Of course it will draw a black female pope if you request one, but if you don't, it won't. As a gross approximation, an ANN is an interpolator of known data points (with some noise). If you ask simply for a pope, it will interpolate between the images of popes it learned from, and since all of those are white men, it is highly unlikely to produce a black woman (the noise would have to be very high). If you ask for a black female pope, it will interpolate between images of popes and images of black women. So you have to tune the model: when you ask just for a pope, something else has to push it to consider otherwise irrelevant images.
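That "something else" can be as blunt as rewriting the prompt before it ever reaches the model. Purely as an illustrative sketch (the attribute lists, the keyword check, and `augment_prompt` are all invented here, not anything Google has confirmed doing):

```python
import random

# Toy sketch of server-side prompt rewriting: vague prompts get a random
# demographic description appended, while prompts that already specify one
# are passed through untouched. All terms here are illustrative.
ATTRIBUTES = ["Black", "South Asian", "East Asian", "white"]
GENDERS = ["man", "woman"]

def augment_prompt(prompt: str) -> str:
    """Append demographic attributes to prompts that don't specify any."""
    specific = {"white", "black", "asian", "man", "woman", "male", "female"}
    if any(term in prompt.lower().split() for term in specific):
        return prompt  # the user was explicit; leave the prompt alone
    attrs = f"{random.choice(ATTRIBUTES)} {random.choice(GENDERS)}"
    return f"{prompt}, depicted as a {attrs}"

print(augment_prompt("portrait of a pope"))
# e.g. "portrait of a pope, depicted as a South Asian woman"
print(augment_prompt("portrait of a black female pope"))
# unchanged: "portrait of a black female pope"
```

Whether it's done by prompt rewriting like this or by fine-tuning is unknowable from the outside; the point is only that unconditioned outputs track the training distribution unless something overrides them.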

-1 points

I'd expect a lot of models to struggle with making the pope female, making the pope black, or making a black woman a pope unless they build in some kind of technique to make replacements. Thing is, a neural net reproduces what you put into it, and I assume the bias is largely towards old white men, since those images are far more readily found.

Even targeted prompts, like "a zebra with rainbow colored stripes", gave very limited results six months ago: it was hard to get even 50% of the stripes in anything other than black and white. I had to generate multiple times with a lot of negative terms just to get close. Currently, Copilot's first generation matches the idea behind my prompt.
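("Negative terms" meaning negative prompts, for anyone who hasn't used these tools. A rough sketch of what that looks like with the open-source Hugging Face diffusers pipeline; the model choice and parameters are just examples, not what Copilot actually runs:)

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available text-to-image model (illustrative choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The negative prompt steers generation away from the listed concepts,
# which is how you fight the model's bias toward ordinary zebra stripes.
image = pipe(
    prompt="a zebra with rainbow colored stripes",
    negative_prompt="black and white stripes, monochrome, greyscale",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("rainbow_zebra.png")
```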

Clearly the step made was a big one, and I imagine tuning was done to ensure models are capable of returning more diverse results rather than just what is in the data set. It just produces more unexpected results and fewer historically accurate images for these kinds of prompts, some of which might be quite painful. Still, being perpetually underrepresented in data sets is also quite painful. It's hard to get to a perfect product quickly, but there should be a feature somewhere on their backlog to prevent some substitutions by default, as sketched below. Black female popes when requesting a generated pope? To me that is a horizon-broadening feature. Black female Nazis when requesting Nazis? Let that not be a default result.
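(Sketching what I mean by that backlog feature, hypothetically: gate whatever diversity tuning the service applies on a list of historical contexts where demographic substitution misleads. The term list and function are invented for illustration:)

```python
# Hypothetical gate in front of a service's diversity injection: skip it
# when the prompt touches a historical context tied to a specific
# demographic reality. The term list is illustrative, not exhaustive.
HISTORICAL_CONTEXTS = {"nazi", "ss officer", "wehrmacht",
                       "founding fathers", "confederate"}

def should_inject_diversity(prompt: str) -> bool:
    """Return False for prompts tied to specific historical demographics."""
    text = prompt.lower()
    return not any(term in text for term in HISTORICAL_CONTEXTS)

assert should_inject_diversity("portrait of a pope")        # broaden away
assert not should_inject_diversity("a 1940s nazi soldier")  # keep accurate
```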

