Here’s your weekly round-up of the latest developments in AI Ethics!
Check out past issues and sign up for our free newsletter to get this great content delivered to your inbox every Tuesday.
SPOTLIGHT: The Age of Surveillance
LEAK: Commission considers facial recognition ban in AI ‘white paper’
h/t @AlaricAloor
“The European Commission is considering measures to impose a temporary ban on facial recognition technologies used by both public and private actors, according to a draft white paper on Artificial Intelligence obtained by EURACTIV.”
Fight against facial recognition hits wall across the West
“Face-scanning technology is inspiring a wave of privacy fears as the software creeps into every corner of life in the United States and Europe — at border crossings, on police vehicles and in stadiums, airports and high schools. But efforts to check its spread are hitting a wall of resistance on both sides of the Atlantic.” Read more.
The Secretive Company That Might End Privacy as We Know It
h/t @AlaricAloor
“A little-known start-up helps law enforcement match photos of unknown people to their online images — and ‘might lead to a dystopian future or something,’ a backer says.” Read more.
Your online activity is now effectively a social ‘credit score’
“If the thought of companies stalking you online and denying you services because they think you’re a sinner gives you the Orwell Anti-Sex League chills, you should know that Airbnb just asked Instagram to hold its beer.” Read more.
Sundar Pichai calls for moratorium on facial recognition
h/t @AnnikaPoutiaine
“While Amazon, Microsoft and Facebook have all waded into the business, with Amazon being criticised for widely selling an inaccurate technology, Google has been more cautious about deploying facial recognition across its products and services. The advertising giant has, however, been under intense scrutiny over its handling of users’ data.” Read more.
Are Your Students Bored? This AI Could Tell You
“Putting video cameras in the classroom also creates privacy issues. ‘The disclosure of the analysis of an individual’s emotion in a classroom may have unexpected consequences and can cause harm to students.’” Read more.
ICYMI:
The White House’s new AI principles won’t solve regulatory problems
h/t Hessie Jones
“Unregulated algorithms can automate and thereby govern the human right to life in areas like health care, where flaws in algorithms have dictated that black patients receive inadequate care when compared to their white counterparts…But the true extent of the harm AI does globally is often obscured, due to trade secret designations and a governmental tendency to resort to the Glomar response — the classic “I can neither confirm nor deny” line. Using these protective measures, entities can hide the extent and breadth of the AI-related programs and products they’re using. It’s entirely likely that many algorithms already in use violate existing anti-discrimination laws (among others).” Read more.
Airbnb Claims Its AI Can Predict Whether Guests Are Psychopaths
h/t Hessie Jones
“To protect its hosts, Airbnb is now using an AI-powered tool to scan the internet for clues that a guest might not be a reliable customer… According to patent documents reviewed by the Evening Standard, the tool takes into account everything from a user’s criminal record to their social media posts to rate their likelihood of exhibiting ‘untrustworthy’ traits — including narcissism, Machiavellianism, and even psychopathy.” Read more.