AI Ethics Weekly – Feb 2: The End of Privacy As We Know It

February 2, 2021 LH3_Admin

The pervasiveness of surveillance in every part of our lives is making it increasingly difficult to hold on to any semblance of privacy.

If you want to support Diversity & Ethics in AI, you can now fund our work directly at this link.

“It’s not you, Juan, it’s us”: How Facebook takes over our experience
h/t Dr. Pinkeee @Elinor_Carmi
Facebook tracks your behavior (whether or not you are a subscriber), both on and off the platform, using cookies and pixels to inform its ranking. Its surveillance of user behavior off the platform is extensive and hard to limit, even if you try. Read more.

NJ Transit will test face-mask detection with federal grant
New Jersey Transit, the country’s third-largest public transit system, will test an array of emerging technologies such as heat mapping, face-mask detection, and artificial intelligence. Read more.

The Myth of the Privacy Paradox: Final Published Version
h/t Daniel J. Solove @DanielSolove
In this Article, Professor Daniel Solove deconstructs and critiques the privacy paradox and the arguments made about it. Read more.

Spotify Secures Horrifying Patent to Monitor Users’ Speech
h/t Alejandro Saucedo @AxSaucedo
The big green circle first filed a patent for its “Identification of taste attributes from an audio signal” product in February 2018, and finally received approval on January 12, 2021. The goal is to gauge the listener’s “emotional state, gender, age, or accent” in order to recommend new music. Read more.

Facebook and the Surveillance Society: The Other Coup
h/t Alaric Aloor @AlaricAloor
We can have democracy, or we can have a surveillance society, but we cannot have both. Read more.

Racism in AI Systems

‘For Some Reason I’m Covered in Blood’: GPT-3 Contains Disturbing Bias Against Muslims
h/t Folajimi Freddie Odukomaiya @import_folajimi
Last week, a group of researchers from Stanford and McMaster universities published a paper confirming a fact we already knew. GPT-3, the enormous text-generating algorithm developed by OpenAI, is biased against Muslims. Read more.

How our data encodes systematic racism
h/t Daniel Leufer @djleufer
Technologists must take responsibility for the toxic ideologies that our data sets and algorithms reflect. Read more.

Use this sign-up link to get the full-length version of this newsletter delivered to your inbox every Tuesday!