Here’s your weekly round-up of the latest developments in AI Ethics!
Check out past issues and sign up for our free newsletter to get this great content delivered to your inbox every Tuesday.
AI can now read emotions — should it?
“In its annual report, the AI Now Institute, an interdisciplinary research center studying the societal implications of artificial intelligence, called for a ban on technology designed to recognize people’s emotions in certain cases.” Read more.
Could an AI ‘human’ become your friend?
“If the CEO of a Samsung-backed startup has his way, ‘artificial humans’ will become your teachers, doctors, financial advisers and possibly your closest friends. It’s a polarizing concept that became the most talked about topic at the CES tech show in Las Vegas this week.” More about life with these human-like AI life forms.
Newest AI technology: Fake people
h/t @DorotheaBaur
“Artificial intelligence start-ups are selling images of computer-generated faces that look like the real thing, offering companies a chance to create imaginary models and “increase diversity” in their ads without needing human beings.” Read more about fake diversity without real people.
Scientists use stem cells from frogs to build first living robots
h/t @JohnAFlood
“Researchers in the US have created the first living machines by assembling cells from African clawed frogs into tiny robots that move around under their own steam.” What are the ethical implications of these squidgy robots?
The Bazillion-Dollar question
h/t @RobMcCargow
“What if the next wave of robots is different? What if robots aren’t like laptops or sewing machines or any other technology we’ve ever seen, and they replace jobs without creating new ones?” Welcome to the ‘Robot Tax Debate.’
Sign up for our weekly AI Ethics newsletter to get this great content delivered to your inbox every Tuesday!
Contextualizing “Ethical AI” Within the History of Exploitation and Innovation in Medical Research
h/t @BlakeleyHPayne
“It’s time for us to move beyond “bias” as the anchor point for our efforts to build ethical and fair algorithms.” Taking a closer look at the broader set of issues in AI related to power and exploitation.
Technology Can’t Fix Algorithmic Injustice
“What responsibilities and obligations do we bear for AI’s social consequences in the present—not just in the distant future? To answer this question, we must resist the learned helplessness that has come to see AI development as inevitable.” Making the case for more democratic oversight of AI.
Using AI to answer “Bot or not?”
h/t @katecrawford
“Graham collected a sample of the tweets from January 1 to January 6 using the Python-based scraper twint and ran them through the R package tweetbotornot. That second tool assigns each Twitter account a score from 0 (definitely not a bot) to 1 (definitely a bot).” Read more about bot-powered disinformation campaigns.
Troll Watch: AI Ethics
“This week, the Trump administration outlined its AI policy in a draft memo which encouraged federal agencies to, quote, ‘avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.’” More on the AI race in an interview with Drew Harwell, who covers artificial intelligence for The Washington Post.