Activists – 1, Facial Recognition Tech – 0
In a major victory for AI ethics advocates, tech companies are finally heeding the warnings of experts about the dangers of bias in facial recognition and the broader implications of surveillance technologies. Recent announcements by tech companies that they will completely exit or partially divest their facial recognition businesses are proof that even if it takes a while, activism and advocacy play a huge role in bringing about meaningful change in tech. Let’s take a moment to correct a glaring omission in media coverage and give credit where it’s due: to the seminal work of two scholars, Joy Buolamwini and Timnit Gebru, whose Gender Shades study paved the way for this to happen.
Start off your week right with our AI Ethics Weekly newsletter. Sign up to get the full-length version of this content delivered to your inbox every Monday!
Facial Recognition
IBM will no longer offer, develop, or research facial recognition technology
h/t Valentine Goddard @vavacolor
IBM will no longer offer general purpose facial recognition or analysis software, IBM CEO Arvind Krishna said in a letter to Congress today. The company will also no longer develop or research the technology.
Microsoft bans police from using its facial-recognition technology
h/t Kate Crawford @katecrawford
Microsoft has joined the list of tech giants that have decided to limit the use of its facial-recognition systems, announcing that it will not sell the controversial technology to police departments until there is a federal law regulating it.
A Case for Banning Facial Recognition
h/t celestekidd @celestekidd
A Google research scientist explains why she thinks the police shouldn’t use facial recognition software.
Facial recognition has always troubled people of color. Everyone should listen
h/t Mia Shah-Dand @MiaD
When Amazon announced a one-year ban on police use of its facial recognition tools, it should’ve been a well-earned win for Deborah Raji, a Black researcher who helped publish an influential study pointing out the racial and gender bias in Amazon’s Rekognition.
If you want to support Diversity & Ethics in AI, you can now fund our work directly at this link.
Surveillance Tech
The Protests Prove the Need to Regulate Surveillance Tech
h/t Safiya Umoja Noble PhD @safiyanoble
Law enforcement has used surveillance technology to monitor participants in the ongoing Black Lives Matter protests, as it has with many other protests in US history. While none of this is new, the attention domestic surveillance is receiving in this moment is exposing a great fallacy among policymakers.
Fighting Misinformation & Regulating AI
It matters how platforms label manipulated media. Here are 12 principles designers should follow
h/t Peter Lo
“At the Partnership on AI and First Draft, we’ve been collaboratively studying how digital platforms might address manipulated media with empirically tested and responsible design solutions.”
Czech civil society fights back against fake news
h/t Gisele Waters, Ph.D. @EthicalBAU
In the Czech Republic, the media ecosystem is plagued by disinformation. A group of PR professionals have teamed up to cut off dodgy outlets from their main, and often only, source of income — online ads.
A Council of Citizens Should Regulate Algorithms
h/t Archon Security @archonsec
A new report by OpenAI suggests we should create external auditing bodies to evaluate the societal impact of algorithm-based decisions. But the report does not specify what such bodies should look like.