AI Ethics Weekly Bonus Issue – Nov 8: AI bias, surveillance state, killer self-driving cars
November 8, 2019

AI Ethics Newsletter

Friday, November 8, 2019

Sign up for our free newsletter to get this great content delivered to your inbox every Tuesday!

Happy Friday! 

Enjoy this special bonus issue with the latest developments in AI Ethics from this past week. 

Artificial Intelligence Can Be Biased. Here’s What You Should Know
h/t Joy Buolamwini

“Artificial intelligence has already started to shape our lives in ubiquitous and occasionally invisible ways. In its new documentary, In The Age of AI, FRONTLINE examines the promise and peril of this technology. AI systems are being deployed by hiring managers, courts, law enforcement, and hospitals — sometimes without the knowledge of the people being screened. And while these systems were initially lauded for being more objective than humans, it’s fast becoming clear that the algorithms harbor bias, too. It’s an issue Joy Buolamwini, a graduate researcher at the Massachusetts Institute of Technology, knows about firsthand. She founded the Algorithmic Justice League to draw attention to the issue, and earlier this year she testified at a congressional hearing on the impact of facial recognition technology on civil rights.” Read more.

China wildlife park sued for forcing visitors to submit to facial recognition scan
h/t Hessie Jones

“The park introduced facial recognition in July for annual pass holders and told those who did not register their biometric information by 17 October that passes would be invalid, Beijing News reported. About 10,000 visitors hold the annual park passes…Professor Guo Bing is taking action against Hangzhou safari park, after it replaced its existing fingerprinting system with the new technology. Guo broadly backed the use of such technology by authorities but also said that the issue needed to be discussed more widely in China. “I think it is OK and, to some extent, necessary for government agencies, especially police departments, to implement this technology, because it helps to maintain public security,” Guo said… “But it’s still worth discussing when it comes to the legitimacy and legality of using the technology.”” Read more.

Sign up for our free newsletter to get this great content delivered to your inbox every Tuesday! 

EXCLUSIVE: This Is How the U.S. Military’s Massive Facial Recognition System Works
h/t Meredith Whittaker 

“DFBA and its ABIS database have received little scrutiny or press given the central role they play in the U.S. military’s intelligence operations. But a newly obtained presentation and notes written by the DFBA’s director, Krizay, reveal how the organization functions and how biometric identification has been used to identify non-U.S. citizens on the battlefield thousands of times in the first half of 2019 alone. ABIS also allows military branches to flag individuals of interest, putting them on a so-called “Biometrically Enabled Watch List” (BEWL). Once flagged, these individuals can be identified through surveillance systems on battlefields, near borders around the world, and on military bases.” Read more.

Uber’s Self-Driving Car Didn’t Know Pedestrians Could Jaywalk
h/t Mia Dand

“The software inside the Uber self-driving SUV that killed an Arizona woman last year was not designed to detect pedestrians outside of a crosswalk, according to new documents released as part of a federal investigation into the incident. That’s the most damning revelation in a trove of new documents related to the crash, but other details indicate that, in a variety of ways, Uber’s self-driving tech failed to consider how humans actually operate.” Read more.

Check out past issues and sign up for our free newsletter to get this great content delivered to your inbox every Tuesday!