Sign up for our weekly AI ethics newsletter to get the full-length version of this issue in your inbox every Monday!
Special update on the coronavirus (COVID-19)
The coronavirus: A warning from Peter Daszak, the scientist who saw it coming.
“I think we’re going to see things happen that we didn’t expect would happen. I think we’re going to see a personal invasion of our daily lives that we’ve not seen for a long time. And some people disagree, and that will lead to conflicts. How far will public health services go to actually force people to change our behavior?” Read more.
How AI Is Tracking the Coronavirus Outbreak
“We are moving to surveillance efforts in the US,” Brownstein says. It is critical to determine where the virus may surface if the authorities are to allocate resources and block its spread effectively. “We’re trying to understand what’s happening in the population at large.” Read more.
Google’s DeepMind is using AI to help scientists understand coronavirus
DeepMind also points out that normally, it’d wait until the work was peer-reviewed before publishing, but due to the nature of the coronavirus outbreak, the company is publishing the data now under an open license, meaning that anyone can use it. Read more.
Coronavirus Researchers Using AI to Predict Virus Spread
“Meanwhile, understanding a virus doesn’t necessarily mean we’ll be able to stop it from spreading — Ebola was known for decades before an outbreak killed more than 11,000 people in western Africa between 2014 and 2016.” Read more.
Coronavirus Tech Handbook
h/t Sarah Porter @SColesPorter
“Coronavirus Tech Handbook” – an open-source guide for tech companies and startups by the London school of political technologists. It includes resources for debunking misinformation about the COVID-19 outbreak online, as well as WhatsApp groups for updates. Read more.
Creating a more ethical and diverse world
AI Ethics Twitter Chat – Feb 2020: Deconstructing Deepfakes
“For our February AI Ethics Twitter Chat, we invited expert guest Dr. Brandie Nonnecke, Founding Director of the CITRIS Policy Lab at UC Berkeley, to discuss ‘Deconstructing Deepfakes’.” Read more.
Creating a Curious, Ethical, and Diverse AI Workforce
h/t Carol Smith @carologic
“People with similar concepts of the world and a similar education are more likely to miss the same issues due to their shared bias. The data used by AI systems are similarly biased, and people collecting the data may not be aware of how that is conveyed through the data they create.” Read more.
How much is too much?: The ethics of AI and data in the workplace
h/t Lighthouse3 @lh3com
“If staff have consented to giving this information, and they’re aware that their stress levels are being tracked, perhaps all is fine. If it is benefiting their wellbeing then they’re happy, the company is happy, and it ticks a box in the mission to improve CSR. But what if it goes deeper than that?” Read more.
Move over Palantir and Facebook: here are the new players in the surveillance economy
Before Clearview Became a Police Tool, It Was a Secret Plaything of the Rich
h/t Muck Rack @muckrack
“Its backers included the billionaire investor Peter Thiel, the venture capitalist David Scalzo and Hal Lambert, an investor in Texas who runs an exchange-traded fund with the ticker symbol “MAGA,” which tracks companies that align with Republican politics.” Read more.
RESOURCES:
h/t Maria Luciana Axente @maria_axente
What do we teach when we teach tech & AI ethics?
“What are we teaching when we teach ethics — both generally, and specifically for AI? This post covers two forthcoming papers that answer this question — one for SIGCSE (in Portland in March) and one for AIES (in New York in February).” Read more.
Sign up for our weekly AI ethics newsletter to get this and other exclusive content delivered to your inbox every Monday!