AI Ethics Weekly – Nov 5: Killer robots, racist medical algorithms, responsible AI publication
November 4, 2019

AI Ethics Newsletter

Tuesday, November 5, 2019

Sign up for our free newsletter to get this great content delivered to your inbox every Tuesday!

Here’s what you need to know about the state of AI Ethics this week.

Pentagon’s draft AI ethics guidelines fight bias and rogue machines
h/t Mia Dand

“Tech companies might have trouble establishing the groundwork for the ethical use of AI, but the Defense Department appears to be moving forward. The Defense Innovation Board just published draft guidelines for AI ethics at the Defense Department that aim to keep the emerging technology in check. Some of them are more practical (such as demanding reliability) or have roots in years-old policies (demanding human responsibility at every stage), but others are relatively novel for both the public and private spheres.” Read more

New York is investigating UnitedHealth’s use of a medical algorithm that steered black patients away from getting higher-quality care
h/t Fayzan Gowani

“The algorithm in question, Impact Pro, identifies which patients would benefit from complex health procedures, and favored treating white patients over sicker black ones between 2013 and 2015, according to a study published in the prestigious journal Science.

NY lawmakers deemed the use of this discriminatory technology “unlawful,” and asked the company to either demonstrate that the algorithm is not biased or stop using Impact Pro immediately.” Read more

[Note: The research mentioned in this news article was shared by Ian Moura in the last issue – Dissecting racial bias in an algorithm used to manage the health of populations https://science.sciencemag.org/content/sci/366/6464/447.full.pdf]

AI is making literary leaps – now we need the rules to catch up 
h/t Mia Dand

“If the row over GPT-2 has had one useful outcome, it is a growing realisation that the AI research community needs to come up with an agreed set of norms about what constitutes responsible publication (and therefore release). At the moment, as Prof Rebecca Crootof points out in an illuminating analysis on the Lawfare blog, there is no agreement about AI researchers’ publication obligations. And of all the proliferating “ethical” AI guidelines, only a few entities explicitly acknowledge that there may be times when limited release is appropriate. At the moment, the law has little to say about any of this – so we’re currently at the same stage as we were when governments first started thinking about regulating medicinal drugs.” Read more

Artificial Intelligence research needs responsible publication norms 
h/t John Haughton

“Weighing the benefits of openness against responsible disclosure is no easy task. As is often the case when balancing competing social goals, there will rarely be a clear-cut answer. Precisely because political and market incentives may place undue weight on the scale in favor of immediate, concrete or concentrated benefits over long-term, abstract or diffuse risks, we need to create shared ex-ante principles—and, eventually, institutional structures to implement and further develop them. While it will be impossible to fully predict the future, researchers can at least increase the likelihood that these evaluations will be based on considered reasoning rather than (possibly unconscious) self-interested intuitions.” Read more

“If we want to make conflicts more humane, we must turn to humans, not machines.” ~ Marisa Tschopp

Essential AI Ethics research for your reading list:

Check out past issues and sign up for our free newsletter to get this great content delivered to your inbox every Tuesday!