AI Ethics Weekly – August 10: Racist and Sexist AI is everywhere

August 9, 2020 LH3_Admin

Like a virus, biased algorithms have permeated every part of our society.

Start off your week right with our AI Ethics Weekly newsletter. Sign up to get the full-length version of this content delivered to your inbox every Monday!

Systemic Racism


Silicon Valley Didn’t Inherit Discrimination, But Replicated It Anyway
h/t Timnit Gebru @timnitGebru
“To truly make good on all these years of promises, tech companies must start by puncturing two pervasive Silicon Valley myths: that they’re meritocracies where everyone gets a fair shot, and that diversity is a pipeline problem.”

PS: If you want to support Diversity & Ethics in AI, you can now fund our work directly at this link.

Medical

How a Popular Medical Device Encodes Racial Bias
h/t Ruha Benjamin @ruha9
“In their studies of that low saturation range, the UCSF doctors noticed ‘a bias of up to 8 percent . . . in individuals with darkly pigmented skin,’ errors that ‘may be quite significant under some circumstances.’”

Law Enforcement

The Protests Prove the Need to Regulate Surveillance Tech
h/t Safiya Umoja Noble PhD @safiyanoble
“Browne, along with numerous other scholars, lays bare the origins of digital surveillance and harm that still today has oppressive and disparate effects.”

Police built an AI to predict violent crime. It was seriously flawed
h/t Shannon Vallor @ShannonVallor
“She explains that criminal history factors are often biased themselves, meaning any algorithms that are trained upon them will contain the same issues if a human does not intervene in the development.”

Immigration

Home Office to scrap ‘racist algorithm’ for UK visa applicants
h/t C.J. Colclough @CjColclough
“Campaigners claim the Home Office decision to drop the algorithm ahead of the court case represents the UK’s first successful challenge to an AI decision-making system.”

Media & Pop Culture

Researchers say ‘The Whiteness of AI’ in pop culture erases people of color
h/t VentureBeat @VentureBeat
“Cave and Dihal attribute the white AI phenomena in part to a human tendency to give inanimate objects human qualities, as well as the legacy of colonialism in Europe and the U.S. which uses claims of superiority to justify oppression.”

Language Models

AI Weekly: Can language models learn morality?
h/t Kyle Wiggers @Kyle_L_Wiggers
“Even language models as powerful as GPT-3 have limitations that remain unaddressed. Morality aside, countless studies have documented their tendency to reinforce the gender, ethnic, and religious stereotypes explicit within the data sets on which they’re trained.”

Here are a few ways GPT-3 can go wrong
h/t Liz O’Sullivan @lizjosullivan
“Just as you’d expect from any model trained on a largely unfiltered snapshot of the internet, the findings can be fairly toxic.”

Did you like what you read? Start off your week right with our AI Ethics Weekly newsletter. Sign up to get the full-length version of this content delivered to your inbox every Monday!