Here’s your weekly round-up of the latest developments in AI Ethics!
Sign up for our weekly AI ethics newsletter to get the full-length version of this issue in your inbox every Monday!
“The NIST study evaluated 189 software algorithms from 99 developers — a majority of the industry. It focuses on how well each individual algorithm performs one of two different tasks that are among face recognition’s most common applications.” Read more.
Artificial Intelligence (AI) systems that companies claim can “read” facial expressions are based on outdated science and risk being unreliable and discriminatory. Read more.
“In April, company leaders also flew Whittaker to Google’s headquarters in Mountain View for three back-to-back town-hall discussions about the A.I. project, which it broadcast to Google employees around the world. The discussions did not go well for the company.” Read more.
“Industry Commissioner Thierry Breton suggested the new legislation would be comparable to the General Data Protection Regulation. The far-reaching law governing data privacy came into effect in 2018, with harsh financial penalties.” Read more.
“It was a Google executive – Sheryl Sandberg – who played the role of Typhoid Mary, bringing surveillance capitalism from Google to Facebook, when she signed on as Mark Zuckerberg’s number two in 2008.” Read more.
“Beyond the questions of whether any one model of fairness is better or worse than another, I’m coming to the realisation that this doesn’t hold. To show that a machine learning model is fair, you need information from outside of the system.” Read more.
“But three days at OpenAI’s office—and nearly three dozen interviews with past and current employees, collaborators, friends, and other experts in the field—suggest a different picture. There is a misalignment between what the company publicly espouses and how it operates behind closed doors.” Read more.
“Eyal Weizman, the director of the investigative group, said an embassy official in London told him an algorithm had identified a security threat that was related to him.” Read more.
“Facial recognition. What do you think of when you hear that term? How do these systems know your name? How accurate are they? And what else can they tell you about someone whose image is in the system?” Read more.
“As HR managers use artificial intelligence (AI) to make recruiting decisions, evaluate employee performance, and decide on promotions and firings, HR executives should know that several law firms are preparing for what they believe is inevitable: AI-related lawsuits.” Read more.
AI Ethics Resources/Events
“This resource for boards of directors consists of: an introduction; 12 modules intended to align with traditional board committees, working groups and oversight concerns; and a glossary of artificial intelligence (AI) terms.” Read more.
In our monthly AI Ethics Twitter Chat for January, we invited Dr. Caitlin McDonald, award-winning scholar and Digital Anthropologist at Leading Edge Forum, to discuss how organizations can take AI ethics from talk to action. Here are the key highlights from our insightful chat. Read more.
Join our monthly AI Ethics Twitter Chat on Friday, Feb 28 at 8a PST/4p GMT for what is guaranteed to be a fantastic discussion on “Deconstructing Deepfakes” with our expert guest, Dr. Brandie Nonnecke, Founding Director of the CITRIS Policy Lab at UC Berkeley. Join us.