AI Ethics Weekly – Feb 17: The Hidden Casualties of AI Bias

February 16, 2020 LH3_Admin

Here’s your weekly round-up of the latest developments in AI Ethics!

Sign up for our weekly AI ethics newsletter to get the full-length version of this issue in your inbox every Monday!

Image Credit: Jude Beck/Unsplash; TheDigitalArtist/Pexels

Attorneys in unemployment fraud cases join forces, call for state review of AI in government

h/t Meredith Whittaker @mer__edith

“Sandvig says many governments have embraced AI systems as a solution to government bureaucracy. ‘But now, you’re seeing a backlash where people are saying, it may be that some of the AI systems being implemented are actually worsening some of the problems they’re designed to solve,’ he says.” Read more.

AI: Decoded

h/t Maria Luciana Axente @maria_axente

“Remember when Facebook CEO Mark Zuckerberg told U.S. lawmakers more than 30 times that artificial intelligence was the silver bullet to the platform’s numerous problems? His former mentor Roger McNamee doesn’t buy into the argument — and told me instead that ‘Zuck’s’ belief in AI could be part of the problem.” Read more.

Who owns your DNA? You should, according to this biodata bill of rights

h/t Wolfgang Schröder @Faust_III

“Whether it’s our voice or our DNA, privacy matters because biodata reveals an essential, unchangeable part of who we are—and its unintended use or disclosure can expose individuals to discrimination, manipulation, and levels of surveillance that can threaten our democratic way of life.” Read more.

Cost Cutting Algorithms Are Making Your Job Search a Living Hell

h/t Maria Luciana Axente @maria_axente

“Maneuvering around algorithmic gatekeepers to reach an actual person with a say in hiring has become a crucial skill, even if the tasks involved feel duplicitous and absurd. ATS software can also enable a company to discriminate, possibly unwittingly, based on bias-informed data and culling of certain psychological traits.” Read more.

Inside the future of online dating: AI swiping and concierge bots

h/t Lighthouse3 @lh3com

“Using AI and bots to ‘hack’ dating apps sounds like a Silicon Valley wet dream, and perhaps it is. But how bad is it from an ethical perspective? There are several concerns here. One is unconscious (or conscious!) bias; one is disclosure; and one is data security.” Read more.

Public bodies are secretly using AI for decisions on people’s lives, warns standards watchdog

h/t Rob McCargow @RobMcCargow

“In a report presented to Boris Johnson, the committee warned: ‘Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.’” Read more.

Algorithmic bias hurts people with disabilities, too

h/t Rachel Thomas @math_rachel

“As more instances of algorithmic bias hit the headlines, policymakers are starting to respond. But in this important conversation, a critical area is being overlooked: the impact on people with disabilities. A huge portion of the population lives with a disability—including one in four adults in the U.S. But there are many different forms of disability, making bias hard to detect, prove, and design around.” Read more.

Sign up for our weekly AI ethics newsletter to get this and other exclusive content delivered to your inbox every Monday!