Here’s your weekly round-up of the latest developments in AI Ethics!
Sign up for our weekly AI ethics newsletter to get the full-length version of this issue in your inbox every Monday!
NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software
h/t Alaric Aloor
“The NIST study evaluated 189 software algorithms from 99 developers — a majority of the industry. It focuses on how well each individual algorithm performs one of two different tasks that are among face recognition’s most common applications.” Read more.
AI systems claiming to ‘read’ emotions pose discrimination risks
h/t Amy Chou
Artificial Intelligence (AI) systems that companies claim can “read” facial expressions are based on outdated science and risk being unreliable and discriminatory. Read more.
The Great Google Revolt
h/t Hessie Jones
“In April, company leaders also flew Whittaker to Google’s headquarters in Mountain View for three back-to-back town-hall discussions about the A.I. project, which it broadcast to Google employees around the world. The discussions did not go well for the company.” Read more.
EU plans new rules for AI but experts seek more detail
h/t Maria Luciana Axente
“Industry Commissioner Thierry Breton suggested the new legislation would be comparable to the General Data Protection Regulation. The far-reaching law governing data privacy came into effect in 2018, with harsh financial penalties.” Read more.
‘The goal is to automate us’: Welcome to the age of surveillance capitalism
h/t Mia Dand
“It was a Google executive – Sheryl Sandberg – who played the role of Typhoid Mary, bringing surveillance capitalism from Google to Facebook, when she signed on as Mark Zuckerberg’s number two in 2008.” Read more.
The measure and mismeasure of fairness: a critical review of fair machine learning
h/t ipfconline
“Beyond the questions of whether any one model of fairness is better or worse than another, I’m coming to the realisation that this doesn’t hold. To show that a machine learning model is fair, you need information from outside of the system.” Read more.
The messy, secretive reality behind OpenAI’s bid to save the world
h/t Antonio Vieira Santos
“But three days at OpenAI’s office—and nearly three dozen interviews with past and current employees, collaborators, friends, and other experts in the field—suggest a different picture. There is a misalignment between what the company publicly espouses and how it operates behind closed doors.” Read more.
ICYMI
Forensic Architecture Founder Says United States Prevented His Visit
h/t Frank Pasquale
“Eyal Weizman, the director of the investigative group, said an embassy official in London told him an algorithm had identified a security threat that was related to him.” Read more.
Bringing Facial Recognition Systems To Light
h/t Kathy Baxter
“Facial recognition. What do you think of when you hear that term? How do these systems know your name? How accurate are they? And what else can they tell you about someone whose image is in the system?” Read more.
AI-Related Lawsuits Are Coming
h/t Cortnie Abercrombie
“As HR managers use artificial intelligence (AI) to make recruiting decisions, evaluate employee performance, and decide on promotions and firings, HR executives should know that several law firms are preparing for what they believe is inevitable: AI-related lawsuits.” Read more.
AI Ethics Resources/Events
Empowering AI Leadership
h/t Kay Firth-Butterfield
“This resource for boards of directors consists of: an introduction; 12 modules intended to align with traditional board committees, working groups and oversight concerns; and a glossary of artificial intelligence (AI) terms.” Read more.
AI Ethics Twitter Chat – Jan 2020: “Moving from Talk to Action”
In our monthly AI Ethics Twitter Chat for January, we invited Dr. Caitlin McDonald, award-winning scholar and Digital Anthropologist at Leading Edge Forum, to discuss how organizations can take AI Ethics from talk to action. Here are key highlights from our very insightful chat. Read more.
AI Ethics Twitter Chat – Feb 2020: “Deconstructing Deepfakes”
Join our monthly AI Ethics Twitter Chat on Friday, Feb 28 at 8a PST/4p GMT for what is guaranteed to be a fantastic discussion on “Deconstructing Deepfakes” with our expert guest, Dr. Brandie Nonnecke, Founding Director of the CITRIS Policy Lab at UC Berkeley. Join us.