AI Ethics Weekly – July 13: Time for a Reality Check

July 12, 2020 LH3_Admin

Growing awareness about bias embedded in datasets and algorithms is driving much-needed change in AI.

Computer scientists will enforce rules for AI ethics — Quartz
h/t Mia Dand @MiaD
“This year, for the first time, major AI conferences—the gatekeepers for publishing research—are forcing computer scientists to think about those consequences.” Read More.

Things are Changing

https://media.giphy.com/media/mBM9YyaoctmYKTF1Bu/giphy.gif

Start off your week right with our AI Ethics Weekly newsletter. Sign up to get the full-length version of this content delivered to your inbox every Monday!

DeepMind researchers propose rebuilding the AI industry on a base of anticolonialism
h/t Rachel Coldicutt @rachelcoldicutt
“Researchers from Google’s DeepMind and the University of Oxford recommend that AI practitioners draw on decolonial theory to reform the industry, put ethical principles into practice, and avoid further algorithmic exploitation or oppression.” Read More.

PS: If you want to support Diversity & Ethics in AI, you can now fund our work directly at this link.

Algorithmic Bias is Real

Meet the Secret Algorithm That’s Keeping Students Out of College
h/t Evan Selinger @EvanSelinger
“Rooting out bias and inaccuracy in such systems is a growing field of activism and academia.” Read More.

Bias and Algorithmic Fairness
h/t Ayodele @DataSciBae
“While data scientists and business leaders could rely a lot on technological advances to solve the first round of data science teething issues it would be wrong to hope for technology to solve these new challenges alone.” Read More.

Training bias in AI “hate speech detector” means that tweets by Black people are far more likely to be censored
h/t Ayodele @DataSciBae
“University of Washington experts have found that Perspective misclassifies inoffensive writing as hate speech far more frequently when the author is Black.” Read More.

Facial Recognition is Everywhere

Detroit facial recognition technology has misidentified suspects
h/t EthicsByDefault @EthicsByDefault
“The high-profile case of a Black man wrongly arrested earlier this year wasn’t the first misidentification linked to controversial facial recognition technology used by Detroit Police, the Free Press has learned.” Read More.

Liked what you read? Start off your week right with our AI Ethics Weekly newsletter. Sign up to get the full-length version of this content delivered to your inbox every Monday!