AI Ethics Weekly – October 5: Anything is Possible…if the Tech Gods are Willing
October 3, 2020 LH3_Admin

The availability of most privacy, moderation, accessibility, and other critical features in AI depends on the whims of tech companies. Here’s what we learned during the great pandemic of 2020: working from home long-term is (and always was) possible, accessibility features became magically available because of the pandemic, and yesterday we found out that Twitter is able to moderate and remove tweets that “wish or hope for death, serious bodily harm.” This news was met with incredulity by many women of color, including Black female politicians who have had to deal with racist attacks and death threats on the platform with no recourse.

You must be kidding me

Sign up to get the full-length version of this content delivered to your inbox every Monday!

Twitter Says You Cannot Tweet That You Hope Trump Dies From COVID
h/t Jason Koebler @jason_koebler
“Twitter told Motherboard that users are not allowed to openly hope for Trump’s death on the platform and that tweets that do so “will have to be removed” and that they may have their accounts put into a “read only” mode.”

Twitter wants to tackle its biased image cropping problem by giving you control instead
h/t Theo – 劉䂀曼 @psb_dc
Last night, in an update, the company said it’s planning to give users more control over how the final image will look, but didn’t share any details.

What ‘The Social Dilemma’ Gets Wrong
h/t Matt Navarra @MattNavarra
For a documentary that has so many experts up in arms about its inaccuracies, folks sure can’t stop talking about it. Here’s Facebook’s response to The Social Dilemma.

Coded Bias & The Social Dilemma // Ford Foundation & Omidyar Network
h/t Sasha Costanza-Chock
Here’s a discussion of the two documentaries, The Social Dilemma (again) and Coded Bias, which may be helpful to those working in tech reform and social justice.

Make it Stop

NYPD Used Facial Recognition Technology In Siege Of Black Lives Matter Activist’s Apartment
h/t Ángel S. Díaz @AngelSDiaz_
The NYPD deployed facial recognition technology in its hunt for a prominent Black Lives Matter activist, whose home was besieged by dozens of officers and police dogs last week, a spokesperson confirmed to Gothamist.

Facial recognition contract extension reopens rifts among Detroit police commissioners
h/t Tawana Petty @Combthepoet
Technology components incorporated into DataWorks Plus software were found to falsely identify Black and Asian faces 10 to 100 times more often than Caucasian faces, according to a 2019 federal study.

Want more AI Ethics news? Sign up to get the full-length version of this content delivered to your inbox every Monday!