AI Ethics Weekly – Jan 7, 2020: Learnings from a decade of tech activism, life under the algorithm, how big tech invented “Ethical AI”

January 6, 2020 LH3_Admin

Here’s the first weekly round-up of the latest developments in AI Ethics for this new year!

PSA: Devastating bush fires have ravaged Australia for months, burned millions of acres, destroyed thousands of homes, and killed many. Here’s how you can help.

AI Now 2019 Report
h/t Amy Chou

In-depth report with detailed spotlights on harmful AI and recommendations to prevent the abuse of AI technologies. “What becomes clear is that across diverse domains and contexts, AI is widening inequality, placing information and control in the hands of those who already have power and further disempowering those who don’t. The way in which AI is increasing existing power asymmetries forms the core of our analysis, and from this perspective we examine what researchers, advocates, and policymakers can do to meaningfully address this imbalance.” Read more.

The Utility of Interpretable AI
h/t Ian Moura

“…a significant proportion of research remains focused on creating explanations for opaque models, rather than on developing models which are inherently interpretable. With the resurgence in interest in artificial neural networks over the past decade, and their subsequent application to an ever broader array of scenarios, the necessity of understanding the limitations of models that are uninterpretable to their human designers and users is only becoming more clear.” Read more.

AI Index 2019 Report from Stanford Human-Centered AI
h/t Amy Chou

“Only 19% of large companies surveyed say their organizations are taking steps to mitigate risks associated with explainability of their algorithms, and 13% are mitigating risks to equity and fairness, such as algorithmic bias and discrimination.” Read more.

AI Systems as State Actors
h/t Mia Dand 

“Many legal scholars have explored how courts can apply legal doctrines, such as procedural due process and equal protection, directly to government actors when those actors deploy artificial intelligence (AI) systems. But very little attention has been given to how courts should hold private vendors of these technologies accountable when the government uses their AI tools in ways that violate the law.” Read more.

What we learned from over a decade of tech activism
h/t Nataliya Nedzhvetskaya

“We documented all the collective actions in the tech industry in a publicly accessible online database and analyzed the results. What we learned challenges many mainstream media narratives about the tech workers’ movement. Here are our eight most important insights. 1. Tech worker actions are growing exponentially. There were more than a hundred publicly reported actions in 2019, some involving thousands of people. This is almost triple the number of actions we saw in 2018 and nine times the number in 2017.” Read more.

Life Under the Algorithm
h/t Mia Dand 

“Guendelsberger, a reporter for the alt-weekly Philadelphia City Paper until it was sold off and shut down in 2015, went undercover at three low-wage workplaces: an Amazon warehouse in Indiana, a call center in North Carolina, and a McDonald’s in San Francisco. Whereas Ehrenreich’s main discovery was that there still existed an exploited working class—a controversial point in the late 1990s and early 2000s—Guendelsberger takes inequality and exploitation as given, asking instead what these jobs are doing to the millions who work them.” Read more.

Ethics of Technology Needs More Political Philosophy
h/t Johannes Himmelreich

“A basic mistake in the ethics of self-driving cars is asking only what an individual should do. This is the domain of moral philosophy. Whether you should eat meat or maintain a vegetarian diet is an example of a moral question. But in addition to such questions, we also need to ask what makes for good policies and institutions. Policies and institutions result from collective decisions and form the domain of political philosophy.” Read more.

The Invention of “Ethical AI”: How Big Tech Manipulates Academia to Avoid Regulation
h/t Mia Dand

“How did five corporations, using only a small fraction of their budgets, manage to influence and frame so much academic activity, in so many disciplines, so quickly? It is strange that Ito, with no formal training, became positioned as an “expert” on AI ethics, a field that barely existed before 2017. But it is even stranger that two years later, respected scholars in established disciplines have to demonstrate their relevance to a field conjured by a corporate lobby.” Read more.

ICYMI

Definitions of terms used in the OECD’s Principles of AI
h/t Joanna J Bryson

“Some definitions of some words used variously in my field. My intent is to provide definitions most useful for interpreting the five OECD Principles of AI, since that’s the soft law with the most international support, with 42 governments (initially, more now I think) plus the G20 signed up.” Read more.

Cummings’ Whitehall weirdos will need to understand people, not just numbers
h/t Guardian Opinion

“But (and there is a but) recognising the power of maths to transform the world is, in many ways, the easy bit; far harder is recognising its limits. In the past decade it has become clear that you can neatly split those who apply equations to human behaviour into two groups: those who think numbers and data ultimately hold the answer to everything, and those who have the humility to realise they don’t.” Read more.