AI Ethics Weekly – Sept 7: Back to regularly scheduled dystopian programming
September 6, 2021 LH3_Admin

After a much-needed break for personal and environmental reasons, we are back on our weekly schedule. Much gratitude to our community for sharing the latest developments in this space, since awareness is the very first step in the long journey toward meaningful solutions.

Facebook AI puts ‘primates’ label on video of Black men
h/t Pinna Pierre @pierrepinna
Facebook users who recently watched a video featuring Black men were served an automated prompt that asked if they’d like to “keep seeing videos about Primates,” a mistake the social network called “unacceptable” in statements to news outlets.

The Secret Bias Hidden in Mortgage-Approval Algorithms – The Markup
h/t Julia Angwin @JuliaAngwin
Some of the homeownership gap is due to disparities in mortgage lending: applications from Black and Latino borrowers are denied more often than those from White applicants. And that disparity is powered in part by several algorithms—mainly those created or required by Fannie Mae and Freddie Mac—that are a key part of the process of approving and denying loan applications. Holding 17 factors steady in a complex statistical analysis of 2019 conventional mortgage applications, Emmanuel Martinez found that lenders were 40 to 80 percent more likely to reject an applicant of color than a White applicant.

‘Selling a promise’: what Silicon Valley learned (or hasn’t) from the fall of Theranos
h/t Mairtín Cunneen @EmergTechEthics
The outcome of the case will be huge for startup culture, Carreyrou, the journalist, said. “There has long been a culture of faking it until you make it in Silicon Valley, and Holmes is a product of that culture,” he said. “To reform that – to change Silicon Valley – it is going to take a conviction.”

Tech-industry AI is getting dangerously homogenized, say Stanford experts
h/t Theodora (Theo) Lau – 劉䂀曼 @psb_dc
A multidisciplinary group of Stanford University professors and students wants to start a serious discussion about the increasing use of large, frighteningly smart, “foundation” AI models such as OpenAI’s GPT-3 (Generative Pre-trained Transformer 3) natural language model.

Sign up to get the full-length version of this newsletter delivered to your inbox every Tuesday!