AI Ethics Weekly – Nov 12: Algorithms that discriminate against women, personal health data mining

November 12, 2019 LH3_Admin

AI Ethics Newsletter

Tuesday, November 12, 2019

Here is your weekly round-up of the latest developments in AI Ethics.

Sign up for our free newsletter to get this great content delivered to your inbox every Tuesday!

Google’s ‘Project Nightingale’ Gathers Personal Health Data on Millions of Americans 
h/t Amee Vanderpool

“Google in this case is using the data in part to design new software, underpinned by advanced artificial intelligence and machine learning, that zeroes in on individual patients to suggest changes to their care. Staffers across Alphabet Inc., Google’s parent, have access to the patient information, internal documents show, including some employees of Google Brain, a research science division credited with some of the company’s biggest breakthroughs.” Read more.

Apple Card algorithm sparks gender bias allegations against Goldman Sachs 
h/t Twitter

“What started with a viral Twitter thread metastasized into a regulatory investigation of Goldman Sachs’ credit card practices after a prominent software developer called attention to differences in Apple Card credit lines for male and female customers. David Heinemeier Hansson, a Danish entrepreneur and developer, said in tweets last week that his wife, Jamie Hansson, was denied a credit line increase for the Apple Card, despite having a higher credit score than him.” Read more.

Canada is denying travel visas to AI researchers headed to NeurIPS — again
h/t Timnit Gebru

“Canadian immigration officials are denying travel visas to a large number of AI researchers and research students scheduled to attend NeurIPS and the Black in AI workshop, event organizers said. Those denied entry include Tẹjúmádé Àfọ̀njá, co-organizer of the NeurIPS Machine Learning for the Developing World workshop. Black in AI cofounder and Google AI researcher Timnit Gebru said 15 of 44 attendees planning to join the event’s December 9 workshop were denied entry. Many cited immigration officials’ fears that they would not return home.” Read more.

Computers Evolve a New Path Toward Human Intelligence
h/t Mia Dand

“Evolutionary algorithms have been around for a long time. Traditionally, they’ve been used to solve specific problems. In each generation, the solutions that perform best on some metric — the ability to control a two-legged robot, say — are selected and produce offspring. While these algorithms have seen some successes, they can be more computationally intensive than other approaches such as “deep learning,” which has exploded in popularity in recent years. The steppingstone principle goes beyond traditional evolutionary approaches. Instead of optimizing for a specific goal, it embraces creative exploration of all possible solutions.” Read more.

Getting Specific About Algorithmic Bias – Rachel Thomas
h/t Kanchan Kumar

Through a series of case studies, Rachel Thomas illustrates different types of algorithmic bias, debunks common misconceptions, and shares steps toward addressing the problem.

Latest research on Ethics & Diversity in AI: 

West, S.M., Whittaker, M., and Crawford, K. (2019). Discriminating Systems: Gender, Race and Power in AI. AI Now Institute.

Check out past issues and sign up for our free newsletter to get this great content delivered to your inbox every Tuesday!