As the pandemic claims more lives and surveillance tech is used to undermine civil rights, tech giants are enjoying an unprecedented boom. In the AI/technology space, there is an irrational belief that ethics and diversity are two separate and unrelated topics. As we count down to our Women in AI Ethics annual event on August 27th, we will continue to highlight how a lack of diversity in AI leads to unethical outcomes and is often weaponized against marginalized groups. The only winners here are big tech and the elite institutions they fund.
Start off your week right with our AI Ethics Weekly newsletter. Sign up to get the full-length version of this content delivered to your inbox every Monday!
Biased Datasets Proliferate
An A.I. Training Tool Has Been Passing Its Bias to Algorithms for Almost Two Decades
h/t Theo – 劉䂀曼@psb_dc
“A model trained on CoNNL-2003 wouldn’t just fall short when it comes to identifying the current names included in the dataset — it would fall short in the future, too, and likely perform worse over time. It would have more trouble with women’s names, but it would also likely be worse at recognizing names more common to minorities, immigrants, young people, and any other group…” Read more.
Too many AI researchers think real-world problems are not relevant
h/t Mia Shah-Dand @MiaD
“More than half of the images in ImageNet (pdf) come from the US and Great Britain, for example. That imbalance leads systems to inaccurately classify images in categories that differ by geography (pdf).” Read more.
ICYMI: MIT removes huge dataset that teaches AI systems to use racist, misogynistic slurs
h/t Spiros Margaris @SpirosMargaris
“Thanks to MIT’s cavalier approach when assembling its training set, though, these systems may also label women as whores or bitches, and Black and Asian people with derogatory language. The database also contained close-up pictures of female genitalia labeled with the C-word.” Read more.
Racism and Sexism
Lack of darker skin in textbooks, journals harms patients of color
Timnit Gebru @timnitGebru
“An analysis of textbooks by Jules Lipoff, an assistant professor of clinical dermatology at the University of Pennsylvania, showed the percentage of images of dark skin ranged from 4% to 18%. ‘We are not teaching (and possibly not learning) skin of color.’” Read more.
Did you like what you read? Start off your week right with our AI Ethics Weekly newsletter. Sign up to get the full-length version of this content delivered to your inbox every Monday!