AI Ethics Weekly – Nov 19: Meet the Black women fighting algorithmic bias; AI and power

November 19, 2019 LH3_Admin

Here is your weekly round-up of the latest developments in AI Ethics.

Sign up for our free newsletter to get this great content delivered to your inbox every Tuesday!

Picture source: Fay Cobb Payton – NC State Poole College

These Black Women Are Fighting For Justice In A World Of Biased Algorithms 
h/t Sherrell Dorsey 

“Fortunately, four Black women are holding the code and its creators accountable. By rooting out bias in technology, these Black women engineers, professors and government experts are on the front lines of the civil rights movement of our time.” Read more.

The Real AI Threat to Democracy
h/t Hessie Jones

“In the past, the effects of new technologies unfolded relatively slowly, allowing governments time to adjust. However, the pace of the ongoing technological revolution in artificial intelligence (AI) and robotics will not only be much faster but will accelerate over time. …regulators often do not fully realize the consequences of new technologies and adopt the necessary reforms until after the negative repercussions have become evident.” Read more.

When Algorithms Decide Whose Voices Will Be Heard 
h/t Theodora Lau

“Are we giving up our freedom of expression and action in the name of convenience? While we may have the perceived power to express ourselves digitally, our ability to be seen is increasingly governed by algorithms — with lines of codes and logic — programmed by fallible humans. Unfortunately, what dictates and controls the outcomes of such programs is more often than not a black box.” Read more.

Microsoft hires Eric Holder to audit AnyVision for use of facial recognition on Palestinians
h/t Mia Dand

“Microsoft’s venture capital arm, M12, invested in AnyVision as part of a $74 million Series A funding round in June. Under the terms of the deal, Microsoft stipulated that AnyVision should comply with its six ethical principles to guide its facial recognition work: fairness, transparency, accountability, non-discrimination, notice and consent, and lawful surveillance.” Read more.

Opinion: AI For Good Is Often Bad
h/t Boring_AI 

“While AI for good programs often warrant genuine excitement, they should also invite increased scrutiny. Good intentions are not enough when it comes to deploying AI for those in greatest need. In fact, the fanfare around these projects smacks of tech solutionism, which can mask root causes and the risks of experimenting with AI on vulnerable people without appropriate safeguards.” Read more.

Why Is Google Slow-Walking Its Breakthroughs in AI?
h/t Mia Dand

“Early this year, Google said that it had begun limiting some of the code released by its AI researchers to prevent it from being used inappropriately. The continued caution over AI contrasts with how Google has continued to expand into new areas of business—such as health care and banking—even as regulators and lawmakers talk about antitrust action against tech companies.” Read more.

Google Is Slurping Up Health Data—and It Looks Totally Legal
h/t Mia Dand

“Last week, when Google gobbled up Fitbit in a $2.1 billion acquisition, the talk was mostly about what the company would do with all that wrist-jingling and power-walking data. It’s no secret that Google’s parent, Alphabet—along with fellow giants Apple and Facebook—is on an aggressive hunt for health data. But it turns out there’s a cheaper way to get access to it: Teaming up with health care providers.” Read more.

AI ethics is all about power
h/t Sarbjeet Johal

“Arguments about AI ethics can wage without mention of the word “power,” but it’s often there just under the surface. In fact, it’s rarely the direct focus, but it needs to be. Power in AI is like gravity, an invisible force that influences every consideration of ethics in artificial intelligence.” Read more.

Assessing ethical AI principles in defense
h/t Mia Dand

“This framework would introduce AI-based military applications based on “their ethical, safety, and legal risk considerations” with the rapid adoption of mature technologies in low-risk applications and greater precaution in less mature applications that might lead to “more significant adverse consequences.”” Read more.
