AI Ethics Weekly – June 22: Welcome to your Dystopian AI Hellscape

June 21, 2020 LH3_Admin

As economies around the world slowly limp back to life, the mad rush to automate our way back to normalcy has accelerated questionable uses of Artificial Intelligence. In these challenging times, it’s more important than ever to be mindful of the misuse of AI technologies, the racial bias introduced implicitly and explicitly into training datasets, and the discriminatory algorithms built by non-diverse teams of ML researchers, engineers, and others. We need to acknowledge that the systemic racism embedded in these systems further perpetuates inequalities, and that our journey to fair, unbiased AI will be a long, hard marathon, not a sprint.

In the meantime, stay vigilant, healthy, and safe.

~Mia Shah-Dand @MiaD

If you want to support Diversity & Ethics in AI, you can now fund our work directly at this link.

Colorblindness will not solve racism

https://media.giphy.com/media/WQG8yFzs7r5Dy/giphy.gif

Face Depixelizer

h/t Robert Osazuwa Ness @osazuwa
In yet another example of racial bias in Machine Learning: “Given a low-resolution input image, this (Face Depixelizer) model generates high-resolution images that are perceptually realistic and downscale correctly.” This model is based on PULSE, which has a known bias: “It does appear that PULSE is producing white faces much more frequently than faces of people of color. This bias is likely inherited from the dataset StyleGAN was trained on.” Read more

How Surveillance Has Always Reinforced Racism
h/t Frank Pasquale @FrankPasquale
“Amid recent police shootings of unarmed black men, WIRED spoke with Browne about Big Tech’s unconvincing support for racial justice and how sharing black pain on social media may mobilize action, but only on Silicon Valley’s terms.”

Researchers find racial discrimination in ‘dynamic pricing’ algorithms used by Uber, Lyft, others
h/t Kyle Wiggers @Kyle_L_Wiggers
“A preprint study published by researchers at George Washington University presents evidence of social bias in the algorithms ride-sharing startups like Uber, Lyft, and Via use to price fares.”

Start off your week right with our AI Ethics Weekly newsletter. Sign up to get the full-length version of this content delivered to your inbox every Monday!

Hidden Figures in AI

https://media.giphy.com/media/hpKkJSlZqUCauAImx9/giphy.gif

Black Women in AI Ethics
h/t Lighthouse3 @lh3com
We’ve all heard the excuses: “We don’t know any (Black women),” “We know 2, but they are unavailable,” “It’s a pipeline issue,” and other similar explanations for the lack of diversity at tech events and companies. So we added a new category filter to the Women in AI Ethics directory to make it easier to recruit brilliant Black Women in AI Ethics.

Anti-Blackness in Policy Making: Learning from the Past to Create a Better Future
h/t Maria Luciana Axente @maria_axente
“Given the subjective way in which algorithms are designed, the accuracy of facial recognition systems not only relies on the training data but also on the people who are creating the algorithms because FR systems “see” through the eyes of their creators. This can create problems for tech companies that lack employees who are racial minorities.”

The VC Firms Backing Black Founders (List)
h/t James Norman @MotownIni
“Last Black History Month, a group of Black founders, VCs, and tech professionals put together the most comprehensive list of US-based venture-backed Black founders ever. On Juneteenth 2020, they are back with a list of the VCs who have actually “sent the wire” and backed Black founders.”

Start off your week right with our AI Ethics Weekly newsletter. Sign up to get the full-length version of this content delivered to your inbox every Monday!