AI Ethics Weekly – Nov 2: ‘History Will Not Judge Us Kindly’

November 1, 2021 LH3_Admin

We’re taking a break! 

The Tuesday issue will be on hold until Jan 2022. We will continue to share resources and opportunities every Friday so you can close out this year well-informed and start the new year ready to make a difference!

November is Native American Heritage Month; here is a great resource to learn more about the history and culture of Native Nations.

Sign up here to get the full-length version of this newsletter with AI Ethics resources and opportunities delivered to your inbox every Friday!


Scientists Built an AI to Give Ethical Advice, But It Turned Out Super Racist
AI Ethics Watch @AIEthicsWatch
after playing around with Delphi for a while, you’ll find it’s easy to game the AI into pretty much whatever ethical judgement you want just by fiddling with the phrasing until it gives you the answer you’re after.

Facebook prioritized ‘angry’ emoji reaction posts in news feeds
Will Oremus @WillOremus
Facebook engineers gave extra weight to emoji reactions, including ‘angry,’ pushing more emotional and provocative content into users’ news feeds.

Leaked Facebook Documents Reveal How Company Failed on Election Promise
Julia Angwin @JuliaAngwin
CEO Mark Zuckerberg had repeatedly promised to stop recommending political groups to users in order to squelch the spread of misinformation.

Inside the controversial US gunshot-detection firm
Tawana Petty @Combsthepoet
ShotSpotter has garnered much negative press over the last year, with allegations ranging from inaccuracies in its technology to claims that it is fuelling discriminatory policing.
