In America, the brutal killings of yet another Black man, George Floyd, and a Black woman, EMT Breonna Taylor, by police in two separate incidents unleashed protests across the country and worldwide, inspiring even the reclusive Mennonites to show up in solidarity, and may have cued the return of the hacker collective Anonymous.
Against this volatile backdrop, SpaceX became the first private corporation to launch people into orbit. But the pandemic and protests have forced us to own up to the inequities in our society right here on planet Earth, acknowledge the limitations of technology, and accept that if we are serious about responsible AI, we need concrete action to address these societal and systemic issues rather than trying to force-fit ethics into AI as an afterthought.
It is also time to reexamine what role we, individually and organizationally, want to play during these consequential times. On that note, Dise (@ComoseDise) poses a critical question: "Academics need to ask themselves this more: do we need research or do we need organizing?"
If you are wondering how you can help, here is a helpful resource to bail out the brave protesters (h/t rae @raeddand), a way to donate food to those who have lost their livelihoods, and, last but not least, an Essential Reading Guide For Fighting Racism.
Please stay safe and healthy, wherever you are. We are all in this together! <3
Start off your week right with our AI Ethics Weekly newsletter. Sign up to get the full-length version of this content delivered to your inbox every Monday!
Many Questionable Uses of AI
Facebook Executives Shut Down Efforts to Make the Site Less Divisive
h/t Deepa Seetharaman @dseetharaman and @JeffHorwitz
"Facebook spent years studying Facebook's role in polarization, according to sources and internal documents. One internal slide laid out the issue like so: 'Our algorithms exploit the human brain's attraction to divisiveness.'" Read More.
Coronavirus tests the value of artificial intelligence in medicine
h/t Honoree Tigrett @crushitgirlboss
"'AI is being used for things that are questionable right now,' said Eric Topol, M.D., director of the Scripps Research Translational Institute and author of several books on health IT." Read More.
Health Officials Say ‘No Thanks’ to Contact-Tracing Tech
h/t Frank Pasquale @FrankPasquale
"So far at least, the pandemic response has become a bitter lesson in everything technology can't do and an example of Silicon Valley's legendary myopia." When offered tech firms' help, states and cities have largely said, "No thanks," or "Not now." Read More.
Microsoft sacks journalists to replace them with robots
h/t Luciano Floridi @Floridi
"Dozens of journalists have been sacked after Microsoft decided to replace them with artificial intelligence software. One staff member who worked on the team said: 'I spend all my time reading about how automation and AI is going to take all our jobs, and here I am: AI has taken my job.'" Read More.
ACLU sues facial recognition firm Clearview AI, calling it a ‘nightmare scenario’ for privacy
“The American Civil Liberties Union is suing controversial facial recognition firm Clearview AI for violation of the Illinois Biometric Information Privacy Act (BIPA), alleging the company illegally collected and stored data on Illinois citizens without their knowledge or consent and then sold access to its technology to law enforcement and private companies.” Read More.
If you like what you read, sign up to get the full-length version of this content delivered to your inbox every Monday!