AI Ethics Weekly – Feb 4: Coronavirus early AI warning, banality of chatbots, and tools to fight deepfakes

February 4, 2020 LH3_Admin

Here’s your weekly round-up of the latest developments in AI Ethics!

Sign up for our weekly AI ethics newsletter to get this great content delivered to your inbox every Tuesday!

Algorithms on social media need regulation, says UK’s AI adviser
h/t CarolynTH 

“New regulation should be passed to control the algorithms that promote content such as posts, videos and adverts on social networks, the UK government’s advisory body on AI ethics has recommended.” Read more.

Artificial intelligence: Does another huge language model prove anything?
h/t Dorothea Baur

“According to the paper, the training of Meena took 30 days on a TPU v3 Pod, composed of 2,048 TPU cores. Google doesn’t have a price listing for the 2,048-core TPU v3 Pod, but a 32-core configuration costs $32 per hour. Projecting that to 2,048 cores ($2,048/hour), it would cost $49,152 per day and $1,474,560 for 30 days.” Read more.
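The quoted figure is a straightforward linear projection from the 32-core price. A minimal sketch reproducing the arithmetic, using only the numbers quoted above (Google's actual TPU pricing may differ and is not verified here):

```python
# Back-of-the-envelope reproduction of the quoted Meena training-cost estimate.
# All inputs come from the quote above; this is a linear extrapolation,
# not an official Google price.
cores_used = 2048            # TPU v3 Pod size used to train Meena
price_32_cores = 32.0        # quoted price for a 32-core configuration, USD/hour

cost_per_hour = price_32_cores * (cores_used / 32)  # $2,048/hour
cost_per_day = cost_per_hour * 24                   # $49,152/day
cost_30_days = cost_per_day * 30                    # $1,474,560 total

print(f"${cost_per_hour:,.0f}/hour, ${cost_per_day:,.0f}/day, ${cost_30_days:,.0f} for 30 days")
```

The projection assumes pricing scales linearly with core count, which the article itself flags as an estimate in the absence of a published 2,048-core price.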

Google’s ‘Meena’ advances the exquisite banality of chatbots 

“Google has made a major advance in chatbots with a giant version of its “Transformer” language model that can stay sensibly on topic within a conversation. But the results are still dreadfully boring as far as dialogue.” Read more.

Silicon Valley’s cocaine problem shaped our racist tech
h/t Charlotte.EU

“When the nation and the world perpetually identifies, frames and targets black people as “problems”, then one can clearly see how new technologies – inadvertently or purposefully – become trained on, and threaten to destroy, black lives.” Read more.

AI Can Do Great Things—if It Doesn’t Burn the Planet

“It’s not just a worry for academics. As more companies across more industries begin to use AI, there’s growing fear that the technology will only deepen the climate crisis.” Read more.

AI still doesn’t have the common sense to understand human language

“A new paper from the Allen Institute for Artificial Intelligence calls attention to something still missing: machines don’t really understand what they’re writing (or reading).” Read more.

Why Amazon’s Ring and facial recognition technology are a clear and present danger to society
h/t Julie Carpenter 

“There are no US laws preventing Amazon from sharing any footage it obtains with anyone else. And it’s important to remember that even if you don’t own a Ring camera, the ones in your neighborhoods are still recording footage of you without your consent.” Read more.

An AI Epidemiologist Sent the First Warnings of the Wuhan Virus

“BlueDot uses an AI-driven algorithm that scours foreign-language news reports, animal and plant disease networks, and official proclamations to give its clients advance warning to avoid danger zones like Wuhan.” Read more.
