AI Ethics Weekly – Nov 26: Robot police dogs are here, second wave of algorithmic accountability, power of female perspective on AI
November 26, 2019

Here is your weekly round-up of the latest developments in AI Ethics, sorted by our six AI Ethics categories.

Sign up for our free newsletter to get this great content delivered to your inbox every Tuesday!

(HUMAN) ROLES + RIGHTS

A.I. Is Not as Advanced as You Might Think
h/t @zoramag
“I am using evidence of A.I. bias to introduce legislation that protects the digital civil rights of African Americans. My work grapples with how to develop new legal frameworks to hold tech companies accountable. Currently, we can only prosecute for intentional acts of discrimination, but how can we legislate against the unintended consequences of racial proxies?” Read more.

When robots are ultra-lifelike will it be murder to switch one off?
h/t @jgcarpenter
“Technology is getting better all the time. What will it mean if we can create a robot that is considered alive? If I find myself annoyed by such a robot, would it be wrong to turn it off? Would that be the same as killing it?” Read more.

NYC creates a high-level position to oversee ethics in AI
h/t @peteharrison
“New York City wants to avoid bias in AI and other algorithms, and it’s creating a role primarily to ensure equal treatment. Mayor Bill de Blasio has issued an executive order creating a position for an Algorithms Management and Policy Officer. Whoever holds the position will work within the Mayor’s Office of Operations and serve as both an architect for algorithm guidelines and a go-to resource for algorithm policy. This person will make sure that city algorithms live up to the principle of “equity, fairness and accountability,” the Mayor’s office said.” Read more.

SOCIETY + SUSTAINABILITY

The Second Wave of Algorithmic Accountability
h/t @EvanSelinger
“At present, the first and second waves of algorithmic accountability are largely complementary. First wavers have identified and corrected clear problems in AI, and have raised public awareness of its biases and limits. Second wavers have helped slow down AI and robotics deployment enough so that first wavers have more time and space to deploy constructive reforms. There may well be clashes in the future among those who want to mend, and those at least open to ending or limiting, the computational evaluation of persons.” Read more.

To secure a safer future for AI, we need the benefit of a female perspective
h/t @maria_axente
“Reading the observations of these three women brought to the surface a thought that’s been lurking at the back of my mind for years. It is that the most trenchant and perceptive critiques of digital technology – and particularly of the ways in which it has been exploited by tech companies – have come from female commentators. The thought originated ages ago as a vague impression, then morphed into an intuitive correlation and eventually surfaced as a conjecture that could be examined.” Read more.

Diversity in AI is not your problem, it’s hers
h/t @WWRob
“There is a bias against “hers” in most major AI systems today, and the source of the bias is the perfect metaphor for bias in AI more broadly. Like you might remember from high school, “hers” is a pronoun. Each word in a sentence belongs to one of a small number of categories: nouns, pronouns, adjectives, verbs, adverbs, etc. One common building block in many AI applications is to identify the right category in raw text. Today, “hers” is not recognized as a pronoun by the most widely used technologies for Natural Language Processing (NLP).” Read more.
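The article’s claim is easy to check yourself: part-of-speech tagging is exactly the “building block” described above, and any NLP library will tell you which category it assigns to “hers”. Below is a minimal sketch using spaCy as a stand-in for the tools the article tested (the article does not prescribe this exact setup, and results will vary by library and model version).

```python
# A minimal part-of-speech check, assuming spaCy's small English model:
#   pip install spacy
#   python -m spacy download en_core_web_sm
# The article reports that, at the time of writing, major NLP tools failed
# to tag "hers" as a pronoun; whether that still holds depends on the model.
import spacy

nlp = spacy.load("en_core_web_sm")

for sentence in ["The car is hers.", "The car is his."]:
    doc = nlp(sentence)
    # token.pos_ is the coarse category (PRON, NOUN, ...); token.tag_ is
    # the fine-grained Penn Treebank tag (PRP, NN, ...).
    print(sentence, [(token.text, token.pos_, token.tag_) for token in doc])
```

If the tagger returns anything other than PRON for “hers” while handling “his” correctly, you are looking at the asymmetry the article describes.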

(ALGORITHMIC) FAIRNESS + ACCOUNTABILITY

Human biases are baked into algorithms. Now what?
h/t @safiyanoble
“Recently, regulators began investigating the new Apple Card and Apple’s partner, Goldman Sachs, after several users reported that in married households, men were given higher credit limits than women — even if the women had higher credit scores. I spoke with Safiya Noble, an associate professor at UCLA who wrote a book about biased algorithms. She said women having little financial independence or freedom over centuries is reflected in the data algorithms use to evaluate credit. The following is an edited transcript of our conversation.” Read more.

In the Outcry over the Apple Card, Bias is a Feature, Not a Bug
h/t @AINowInstitute 
“Tech companies know they have a problem on their hands, and that algorithmic discrimination is deeply embedded in the systems they are unleashing into the world. It’s enough of an issue that Microsoft listed reputational harm due to biased AI systems among the company’s risks in its latest report to shareholders. But the industry thus far seems unwilling to prioritize solving these issues over their bottom line.” Read more.

The Architect of Modern Algorithms
h/t @miad
“Q&A with Barbara Liskov, who pioneered the modern approach to writing code. Liskov, who had studied mathematics as an undergraduate at the University of California, Berkeley, wanted to approach programming not as a technical problem, but as a mathematical problem — something that could be informed and guided by logical principles and aesthetic beauty. She wanted to organize software so that she could exercise control over it, while also making sense of its complexity.” Read more.

The value of a shared understanding of AI models
h/t @mmitchell_ai
“Model cards: a proposed first step. Today we’re excited to share our vision for model cards. It’s an idea we originally explored in a Google research paper earlier this year, and one we hope may soon help organize the essential facts of machine learning models in a structured way.” Read more.
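The underlying paper (Mitchell et al., “Model Cards for Model Reporting”, 2019) proposes a fixed set of sections for every card. As a rough sketch, a card’s skeleton can be written down as a plain data structure; the field descriptions below are illustrative paraphrases of the paper’s section headings, not an official Google schema.

```python
# Skeleton of a model card, following the sections proposed in
# "Model Cards for Model Reporting" (Mitchell et al., 2019).
# The descriptions are illustrative placeholders, not an official schema.
model_card = {
    "model_details": "Developer, version, model type, training details.",
    "intended_use": "Primary intended uses and users; out-of-scope uses.",
    "factors": "Demographic, environmental, and technical factors to report on.",
    "metrics": "Performance measures and decision thresholds used.",
    "evaluation_data": "Datasets the model was evaluated on, and why.",
    "training_data": "Training datasets, to the extent they can be shared.",
    "quantitative_analyses": "Results disaggregated across the listed factors.",
    "ethical_considerations": "Sensitive data, risks, and potential harms.",
    "caveats_and_recommendations": "Known limitations and usage advice.",
}
```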

PRIVACY + DATA RIGHTS

How we fought our landlord’s secretive plan for facial recognition—and won
h/t @miad
“Tenants in the Atlantic Plaza Towers apartment complex in New York’s Brownsville neighborhood were fighting to prevent their landlord, Nelson Management Group, from installing facial recognition technology to open the front door to their buildings, calling it an intrusion of their privacy. This week, they succeeded—the group reversed the decision.” Read more.

PHYSICAL + DIGITAL SAFETY

Mass. State Police Tested Out Boston Dynamics’ Spot The Robot Dog. ACLU Wants To Know More
h/t @elizabeth_joh
“Massachusetts State Police is the first law enforcement agency in the country to use Boston Dynamics’ dog-like robot, called Spot. While the use of robotic technology is not new for state police, the temporary acquisition of Spot — a customizable robot some have called “terrifying” — is raising questions from civil rights advocates about how much oversight there should be over police robotics programs.” Read more.

ICYMI

Playing with our 9 new mini cheetah robots in Killian Court

“The bots are MIT’s Mini Cheetah: a lightweight and modular quadruped that’s been under development for years. Earlier in 2019, Mini Cheetah learned to backflip, and the biomimetics lab now has at least nine of these little bots.”

Sign up for our free newsletter to get this great content delivered to your inbox every Tuesday!