AI Ethics Weekly – August 17: Biased Algorithms by Any Other Name…Are Still Really Problematic?

August 16, 2020 LH3_Admin

Unless you’ve been on a digital detox lately, you are probably aware of the latest algorithmic debacle: hundreds of students marched on 10 Downing Street to protest against the exam-grading algorithm that marked down the grades of huge numbers of students from the most disadvantaged schools. h/t The Guardian @guardian

Start off your week right with our AI Ethics Weekly newsletter. Sign up to get the full-length version of this content delivered to your inbox every Monday!

“It seems very arbitrary.”

‘Classist’ Bias of Algorithms

Do the maths: why England’s A-level grading system is unfair
h/t Armando Iannucci @Aiannucci
“An Ofqual paper reveals that the process, developed using historical data, performed best at predicting marks for history A-level, when it was right slightly more than two-thirds of the time. For the worst exam, Italian, it was right barely a quarter of the time.” Read More.

A-levels: Exam regulator ignored expert help after statisticians wouldn’t sign non-disclosure agreements
h/t Foxglove @Foxglovelegal
“Royal Statistical Society (RSS) offered to help the regulator with the algorithm in April, writing to Ofqual to suggest that it take advice from external experts. Ofqual agreed to consider the fellows, but only if the two academics signed a non-disclosure agreement (NDA) which prevented them from commenting in any way on the final choice of the model for five years after the results were released.” Read More.

Awarding GCSE, AS, A level, advanced extension awards and extended project qualifications in summer 2020: interim report [PDF with Methodology]
“Direct Centre Performance model (DCP) – works by predicting the distribution of grades for each individual school or college. That prediction is based on the historical performance of the school or college in that subject, taking into account any changes in the prior attainment of candidates entering this year compared to previous years.” Read More.
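The interim report describes the DCP approach only at a high level. For readers who want a feel for the mechanics, here is a minimal, purely illustrative Python sketch of the idea (not Ofqual’s actual code or adjustment rule): a centre’s historical grade distribution serves as the baseline prediction and is nudged toward better or worse grades depending on how the current cohort’s prior attainment compares with past cohorts. Every number and the adjustment rule itself are invented for illustration.

```python
# Illustrative sketch of a "direct centre performance" style prediction.
# NOT Ofqual's implementation; the adjustment rule and numbers are made up.

import numpy as np

GRADES = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst


def predicted_distribution(historical_grade_counts, prior_attainment_shift):
    """Predict a grade distribution for the current cohort of one centre/subject.

    historical_grade_counts : counts of each grade awarded in previous years.
    prior_attainment_shift  : current cohort's mean prior attainment minus the
                              historical mean (positive = stronger cohort).
    """
    hist = np.asarray(historical_grade_counts, dtype=float)
    base = hist / hist.sum()  # historical grade distribution as the baseline

    # Hypothetical adjustment: move a fraction of probability mass one grade
    # up (or down) in proportion to the prior-attainment shift.
    shift = np.clip(prior_attainment_shift * 0.1, -0.5, 0.5)
    adjusted = base.copy()
    if shift > 0:        # stronger cohort: shift mass toward better grades
        moved = base[1:] * shift
        adjusted[1:] -= moved
        adjusted[:-1] += moved
    elif shift < 0:      # weaker cohort: shift mass toward worse grades
        moved = base[:-1] * (-shift)
        adjusted[:-1] -= moved
        adjusted[1:] += moved
    return adjusted


if __name__ == "__main__":
    history = [2, 10, 25, 30, 20, 10, 3]  # invented past grade counts for one centre
    for label, delta in [("stronger cohort", +1.0), ("weaker cohort", -1.0)]:
        dist = predicted_distribution(history, delta)
        print(label, dict(zip(GRADES, np.round(dist, 3))))
```

The point the sketch makes concrete is the one critics raised: the prediction is anchored almost entirely in the school’s past results, so an individual student’s grade is constrained by how their centre performed historically, whatever the adjustment for prior attainment does at the margins.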

A-Level results 2020: How have grades been calculated?
[with an actual example]
h/t alex hern @alexhern
“The prior attainment adjustment also does not appear to take account of historic value added at the school. So in schools with historically high value added, the prior attainment adjustment will result in grades being lowered.” Read More.

Pro tip! Cathy O’Neil @mathbabedotorg wrote her powerful book “Weapons of Math Destruction” in 2016, detailing how flawed algorithms shape our lives: where we go to school, whether we get a car loan, how much we pay for health insurance. All of these decisions are made by mathematical models. If you haven’t read it already, pick up a copy today.

Cancel Facial Recognition

NYPD Used Facial Recognition Technology In Siege Of Black Lives Matter Activist’s Apartment
h/t Evan Selinger @EvanSelinger
“The NYPD deployed facial recognition technology in its hunt for a prominent Black Lives Matter activist, whose home was besieged by dozens of officers and police dogs last week, a spokesperson confirmed to Gothamist.” Read More.

ICE just signed a contract with facial recognition company Clearview AI
h/t Theo @psb_dc
“Clearview AI has been in the spotlight since a January investigation from The New York Times showed that its facial recognition technology was in widespread use among law enforcement agencies and private companies.” Read More.

Governments should close the AI trust gap with businesses
h/t Mia Shah-Dand @MiaD
“Experts at EY suggest that the absence of a focused approach to ethical artificial intelligence (AI) poses a huge risk to the business environment. Governments and private organisations need to collaborate to define ethical AI and bridge the gaps.” Read More.

Problematic Uses of AI 

The Quiet Growth of Race-Detection Software Sparks Concerns Over Bias
h/t Evan Selinger @EvanSelinger
“More than a dozen companies offer artificial-intelligence programs that promise to identify a person’s race, but researchers and even some vendors worry it will fuel discrimination.” Read More.

Problematic study on Indiana parolees seeks to predict recidivism with AI
h/t Alaric Aloor
“A 2016 ProPublica analysis, for instance, found that Northpointe’s COMPAS algorithm was twice as likely to misclassify Black defendants as presenting a high risk of violent recidivism than white defendants.” Read More.

Technology Can’t Fix Algorithmic Injustice
h/t Mia Shah-Dand @MiaD
“There is a wealth of empirical evidence showing that the use of AI systems can often replicate historical and contemporary conditions of injustice, rather than alleviate them.” Read More.

Did you like what you read? Start off your week right with our AI Ethics Weekly newsletter. Sign up to get the full-length version of this content delivered to your inbox every Monday!