Here’s your weekly round-up of the latest developments in AI Ethics!
Sign up for our weekly AI ethics newsletter to get this and more exclusive content delivered to your inbox every Tuesday!
“These data flows empty into surveillance capitalists’ computational factories, called ‘artificial intelligence,’ where they are manufactured into behavioral predictions that are about us, but they are not for us. Instead, they are sold to business customers in a new kind of market that trades exclusively in human futures.” More on this.
“‘Anonymised’ data lies at the core of everything from modern medical research to personalised recommendations and modern AI techniques. Unfortunately, according to a paper, successfully anonymising data is practically impossible for any complex dataset.” Learn more.
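A toy illustration of the re-identification problem (hypothetical records, not data from the paper): even after names are stripped, a few quasi-identifiers such as ZIP code, birth year, and gender can single a person out of a dataset.

```python
from collections import Counter

# Hypothetical "anonymised" records: names removed, quasi-identifiers kept.
records = [
    {"zip": "10001", "birth_year": 1984, "gender": "F", "diagnosis": "asthma"},
    {"zip": "10001", "birth_year": 1984, "gender": "M", "diagnosis": "flu"},
    {"zip": "10002", "birth_year": 1990, "gender": "F", "diagnosis": "diabetes"},
    {"zip": "10001", "birth_year": 1984, "gender": "F", "diagnosis": "flu"},
]

def quasi_id(r):
    """Combination of attributes an outside observer could plausibly know."""
    return (r["zip"], r["birth_year"], r["gender"])

# Count how many records share each quasi-identifier combination.
counts = Counter(quasi_id(r) for r in records)

# A record whose combination is unique is trivially re-identifiable by
# anyone who knows those three facts about the person.
unique = [r for r in records if counts[quasi_id(r)] == 1]
print(len(unique))  # prints 2: half of these "anonymised" rows are singled out
```

Real datasets have far more columns, which only makes unique combinations more common, not less.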
“…the IBM Policy Lab calls for what it describes as ‘precision regulation’ of AI, or laws that require companies to develop and operate ‘trustworthy’ systems. ‘Our approach is grounded in the belief that tech can continue to disrupt and improve civil society while protecting individual privacy,’ wrote Hagemann and Leclerc in a joint statement. ‘As technological innovation races ahead, our mission to raise the bar for a trustworthy digital future could not be more urgent.’” Read more.
“Ring isn’t just a product that allows users to surveil their neighbors. The company also uses it to surveil its customers. An investigation by EFF of the Ring doorbell app for Android found it to be packed with third-party trackers sending out a plethora of customers’ personally identifiable information (PII).” Read more.
“There has to be some personal or professional responsibility here,” said Liz O’Sullivan, an artificial intelligence researcher and technology director at the Surveillance Technology Oversight Project. “The consequence of a false positive is that someone goes to jail.” Read more.
“Moore’s law being Moore’s law, AI hardware and software have improved dramatically in recent years. Now there’s a new breed of neural net that can run entirely on cheap, low-power microprocessors. It can do all the AI tricks we need, yet never send a picture or your voice into the cloud.” Read more.
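The appeal of these on-device networks is that inference is just cheap, local arithmetic. A minimal sketch, with made-up weights standing in for a pre-trained model, shows the kind of computation a low-power microprocessor can run without sending anything to the cloud:

```python
# Minimal sketch of on-device inference: a tiny two-layer network with
# hypothetical, already-trained weights, evaluated in plain integer
# arithmetic -- no network connection, no cloud API, no data leaving
# the device.

WEIGHTS_1 = [[2, -1, 3], [0, 4, -2]]  # hypothetical 3-input hidden layer
WEIGHTS_2 = [1, -3]                   # hypothetical output layer

def relu(x):
    """Standard rectified-linear activation."""
    return x if x > 0 else 0

def infer(inputs):
    """Run the tiny network forward on one feature vector."""
    hidden = [relu(sum(w * x for w, x in zip(row, inputs)))
              for row in WEIGHTS_1]
    return sum(w * h for w, h in zip(WEIGHTS_2, hidden))

score = infer([1, 0, 2])  # e.g. features extracted from a local sensor
print(score)              # prints 8
```

Production "TinyML" models are larger and typically quantized to 8-bit weights, but the privacy property is the same: the raw picture or audio never leaves the chip.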
“Since 2012, the city’s law enforcement agencies have compiled over 65,000 face scans and tried to match them against a massive mugshot database. But it’s almost completely unclear how effective the initiative was, with one spokesperson saying they’re unaware of a single arrest or prosecution that stemmed from the program.” Read more.
“Beyond AI farms, online crowdsourcing projects are able to create robust tools because thousands of people come together to curate data. However, people bring biases and subjectivity that can influence AI, intentionally or not.” Read more.
“Corporations & governments are deploying #AI in many ways, but they have failed to ensure technologies work for All. AJL United’s work to expose the resulting harms is featured in Shalini Kantayya’s film Coded Bias at Sundance.” Get screening details.
“Bias and the prospect of societal harm increasingly plague artificial-intelligence research — but it’s not clear who should be on the lookout for these problems.” Read more.