160 points

14 out of 15 requests were of black people. Facial recognition is notoriously bad with darker skin tones.

Racial Discrimination in Face Recognition Technology https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/

50 points

Actually, all 15 were of black people. 14 were of black men, one was a black woman.

17 points

Zero arrests as well.

7 points

New Orleans is pretty black, but that’s just impressive.

38 points

Yeah, this exact same story keeps coming up for years now, just with different names. Why anyone would think that the ineffectiveness and racial bias in these systems either wouldn’t exist or would somehow go away eventually is beyond me. Just expensive and ineffective mass surveillance for the sake of it…

19 points

Who remembers the HP computer that was unable to identify black people? One of my favorite “oooph, that’s not a good look” tech fails of all time. At least the people in that video were having a good laugh about it.

https://www.youtube.com/watch?v=t4DT3tQqgRM

Holy hell, that was 13 years ago.

8 points

More recently, there was also Google Photos mistaking a photo of a black couple for “gorillas”, back in 2015.

https://www.bbc.com/news/technology-33347866

On a funnier note, there was also the AI tool turning a pixelated photo of Barack Obama into that of a white man.

https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias

1 point

Haha. He looks like Mike Nelson.

17 points

Minor correction.
15 out of 15 requests were of black people. 14 of those requests were of black men and one was of a black woman.

3 points

Thank you for your service!

10 points
Deleted by creator
17 points

Yeah, but statistics is a b*tch.

We had a test run of a similar technology some years ago at a train station in Berlin, the capital of Germany and the largest city in the EU with 3.8M inhabitants.

The results the government happily touted as a success were devastating. They had a true positive rate of 80% (and this was already cooked since they tested several systems at several locations but only reported the best results), which is really not that good to start with.

But they were also extremely proud of the false ~~negative~~ positive rate, which was below 0.1%. That doesn’t sound too bad, does it?

Well, let’s see…

True positive means you actually identified the people you were looking for. Now, I don’t know the number of people Berlin’s police are actively looking for, but it’s not that many. And the chances of one of them actually passing through that very station are even worse. And out of those, you have 20% undetected. That’s one out of five. Great. If I were a terrorist, I would happily take that chance.

So now let’s have a look at the false ~~negative~~ positive rate, which means you incorrectly identified a totally harmless person as a terrorist/infected/whatever. The population for that condition is: everyone passing through that station.

Let’s assume there are 100k people on any given day (which IIRC is roughly half of what that station in Berlin actually has). 0.1% of 100k is 100 people, every day, who are mistakenly reported as „terrorists“. Yay.
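
To put numbers on that base-rate problem, here’s a quick back-of-the-envelope sketch in Python. The 80% true positive rate, 0.1% false positive rate, and 100k daily passengers are the figures from the comment above; the five wanted persons passing through per day is a made-up illustration:

```python
# Base-rate arithmetic with the figures quoted above.
daily_passengers = 100_000
wanted_per_day = 5            # hypothetical: wanted persons actually passing through
true_positive_rate = 0.80     # a wanted person is correctly flagged (a "hit")
false_positive_rate = 0.001   # an innocent person is flagged (a false alarm)

hits = wanted_per_day * true_positive_rate              # ~4 real hits per day
missed = wanted_per_day * (1 - true_positive_rate)      # ~1 wanted person walks past
false_alarms = daily_passengers * false_positive_rate   # 100 innocents flagged per day

# Precision: of all alerts raised, how many point at an actual wanted person?
precision = hits / (hits + false_alarms)
print(f"{false_alarms:.0f} false alarms/day vs {hits:.0f} real hits "
      f"-> only {precision:.1%} of alerts are genuine")
```

Even with those optimistic rates, roughly 96% of the people flagged would be innocent.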

4 points

I think you’ve gotten “false negative” wrong here: false negatives are terrorists who were not identified as such.

0 points

How about 15/15?

6 points

Yeah. Basically anything with lower contrast, with shadows and backgrounds. And because shadows are dark, they have lower contrast with other dark things.
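
A toy illustration of why that matters, using Michelson contrast on made-up luminance values (nothing here is measured data):

```python
# Michelson contrast: (L_max - L_min) / (L_max + L_min).
# Illustrative 0-255 luminance values for a face region vs. a shadowed background.
def michelson_contrast(l_max: float, l_min: float) -> float:
    return (l_max - l_min) / (l_max + l_min)

print(michelson_contrast(200, 120))  # lighter face vs. shadow -> 0.25
print(michelson_contrast(60, 40))    # darker face vs. shadow  -> 0.20
```

The same face-vs-background edge carries less contrast signal at the dark end of the luminance range, and that edge signal is exactly what detectors key on.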

-3 points

Discrimination is the wrong word. Technology has no morals or sense of justice. It is bias in the data that developers should have accounted for.

11 points

It’s totally accurate though. It’s like the definition of systemic racism, really. Think about housing or financial policies that disproportionately fail minorities. They aren’t some Klan manifesto. Instead they just include banal qualifications and exemptions that end up at the same result.

8 points

This seems shortsighted. You are basically asking people to police their own biases. That’s a tall ask for something no one can claim immunity from.

1 point

I am asking a group of scientists who should be very well-versed in statistics and weights, you know, one of the biggest components in a machine learning model, to account for how biased their data is when engineering their model.

It’s really not a hard ask.
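
For what it’s worth, “accounting for it” doesn’t have to be exotic. Here’s a minimal sketch of one standard approach, reweighting training samples so under-represented groups carry proportionally more weight; the data and the scikit-learn usage are illustrative assumptions, not anything from the article:

```python
from collections import Counter
from sklearn.linear_model import LogisticRegression

def group_balanced_weights(groups):
    """Weight each sample by n / (k * count(group)), so every group
    contributes equally to the training loss regardless of its size."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Hypothetical usage: X = features, y = labels, group_labels = demographics.
# weights = group_balanced_weights(group_labels)
# model = LogisticRegression().fit(X, y, sample_weight=weights)
```

This is the same idea behind scikit-learn’s `class_weight="balanced"`, just applied to demographic groups instead of target classes.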

7 points

Ask the people who create the data sets that machine learning models train on how they feel about racism, and get back to us.

6 points

It can be an imported bias/discrimination. I still think that word’s fair.

Do you have a more accurate word?

2 points

I already said it: bias. It’s a common problem with LLMs and other machine learning models that model engineers need to watch out for.

-2 points

You need to learn some critical race theory. Racist systems turn innocent intentions into racist actions. If a PhD student trains an AI model on only white people because the university only has white students, then that AI model is going to fail black people because black people were already failed by university admissions. Innocent intention plus racist system equals racist action.

1 point

Even CRT would call this “racial bias”, which is exactly what this is.

89 points

Huh. It’s almost like cops are constantly wasting money on bullshit.

21 points

only if it’s ours, of course

56 points

The terrifying part to me is that cops across the nation have a long history of seeing that the tech they want to use is unreliable and based on junk science, but they still push it through anyway. Aren’t police dogs about as reliable as a coin flip when their handlers aren’t nipping at their necks to get them to jump at anything? They don’t care if it’s right as long as they can use it to justify their behavior, so they make it policy.

28 points

Only the drug dogs are ineffective. Bloodhounds and tracking dogs have long been a staple of hunting down people, and German shepherds can take a man down effectively as well.

2 points

When they are trained with incentives for finding something, instead of incentives to be correct, then they will find something. Same is true for man or beast.

11 points

A lot of forensic “science” is utter bunk, yet it continues to be used. Having a fair and equitable system was never the point.

42 points

I’m going to take a wild stab in the dark that all the false positives were black men.

For the same reason that my Echo Dot (aka Spotify Bitch) will ignore my wife but cheerfully respond to my mumbled requests from three rooms away. If you make all this shit in Silicon Valley, it will work best for people of a similar demographic to those who work there.

9 points

The white liberals building this technology say they’re all progressive yet only surround themselves with people like them and only build products for people like them. A lack of diversity in tech like this is a lack of good testing.

17 points

Oh they are progressive. They’ll support Black Lives Matter and sympathise with Iranian women.

But there’s only so much anybody can do when it’s the entire US (and further afield) social structure at fault. It’s the same where I am. I work on a project with 3 other white guys. If I put a job advert up for another programmer, who will apply? 3-4 more white guys.

I agree that it’s a lack of good testing. Especially when you consider that it’ll be mostly used to pick black guys out of a database. And especially so in New Orleans.

5 points
Removed by mod
9 points

They’re more libertarian than liberal. Anti worker rights, anti consumer rights, and anti taxation.

The only government spending they’re in favour of is spending and subsidies on tech, e.g. Tesla, SpaceX, and the entire military-industrial complex.

-12 points

You haven’t read much about Libertarian policy, I see. They are very pro-rights; in fact, that is the core of the party platform. Individual liberty is their chief concern, and I applaud their efforts in fighting for our rights and freedom.

-2 points

Also, AI is taught by its creators. Tech has some of its most well-hidden, bigoted, mid-level white people refusing to critically question their own bias and privilege. There’s a shit ton of that fragile masculinity in the tech industry, hard-coding it right in.

There was a guy fired from Google for writing a manifesto about how women aren’t “wired” for tech. And that’s just the one who waved his crazy flag out in the open so no one in upper management could easily keep on ignoring it.

5 points

While I agree with you 100% that programming can be affected by the programmers’ biases, there’s a much simpler problem that face recognition was having a hard time overcoming. At least when it was a main topic about a decade ago, sensors were having a lot of problems with the low contrast of some black people’s faces. Anyone who’s had a black friend and was a shutterbug will know what kind of problems you can run into when trying to get a proper exposure without making a black person disappear completely from a photograph. It was just an inherent limitation of the technology they were using.

The last statistics I read were something like 20 to 30% positive matches, which we know damn well is too low for it to be a workable technology. The success rate on Caucasian and lighter skin tones wasn’t even that great; there was still something like a 60% false positive match rate.

The software may have gotten better over the past decade, but we all know that whether it did or not, they’re still going to use it.
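
For the exposure/contrast issue specifically, one common preprocessing mitigation is adaptive histogram equalization before detection. A minimal OpenCV sketch, assuming a local file `face.jpg` and the stock Haar cascade (an illustration, not whatever pipeline vendors actually ship):

```python
import cv2

# Load as grayscale and boost local contrast with CLAHE
# (Contrast Limited Adaptive Histogram Equalization).
img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equalized = clahe.apply(img)

# Run a stock face detector on the contrast-enhanced image.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(equalized, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s) after equalization")
```

Preprocessing helps with underexposure, but it can’t recover detail the sensor never captured, which is the commenter’s point.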

1 point

I imagine anyone with more than a mild accent doesn’t even bother using Alexa or whatever except in their native language.

29 points

The current state of policing doesn’t deserve to have access to this kinda shit. Hopefully it never will tbh.
