AI Ethics Twitter Chat – April 2020: Spotlight on China

April 26, 2020

For our April AI Ethics Twitter Chat we invited Alice Xiang, Head of Fairness, Transparency, and Accountability (FTA) Research @PartnershipAI and Visiting Scholar, Yau Mathematical Sciences Center at Tsinghua University, Beijing (2019).

Mia Dand: Welcome, Alice! Glad you could join us. Every region has its own specific concerns about the responsible use of AI. What are the top ethical issues in AI for China?

Alice Xiang: AI ethics in China is often described in terms of governance, and much of the conversation focuses on data privacy issues. This is similar to how AI ethics developed in the US – folks didn’t have a strong sense of the shortcomings of algorithmic decision-making processes, but they were worried that their data was being used in ways they didn’t understand or consent to. One more AI-specific area where I’ve seen a lot of concern in China is autonomous vehicles: the safety concerns with deploying self-driving cars in very dense cities have sparked regulatory efforts. Topics like algorithmic bias that have become major AI ethics issues in the US are still not a big part of the conversation in China.

MD: What is the Chinese government’s approach to ethical AI?

AX: On the regulatory front, the Chinese government has primarily passed laws around data privacy and consumer protection. A common misperception is that Chinese people or the Chinese government does not care about data privacy. Although it is true that the Chinese government has far more access to citizens’ data than most governments, many of the data privacy concerns center around private companies misusing user data. For example, a viral app in China came under fire after people noticed that the user agreement gave the company global rights to use any images or videos they uploaded.

MD: Who are the key stakeholders in AI ethics discussions in China?

AX: The Chinese government, tech companies, and universities are key stakeholders in AI ethics conversations in China. For example, last year, the Beijing Academy of AI, which is backed by the Chinese Ministry of Science and Technology, convened a committee of top Chinese universities and tech companies to issue AI governance principles. In general, the line between government and non-government is blurrier in China than in the US, so while efforts around AI ethics involve a variety of stakeholders, they are generally directed or supported by the government.

MD: What are the main hurdles impeding the adoption of ethical/responsible AI in China?

AX: A major hurdle is the US vs. China AI arms race narrative. The race to be seen as having the most advanced technologies makes it harder to slow down and thoughtfully evaluate the potential impacts of new AI technologies. This is a problem for both the US and China, but in China there is a particular sense of being the underdog, and from that position it can be harder to prioritize AI ethics. Also, there have been fewer high-profile examples in China of systematic flaws in AI decision-making that have gotten significant news coverage. In the US, highly publicized examples of algorithmic bias in facial recognition and criminal justice risk assessment sparked public backlash and raised awareness around the limitations of algorithmic decision-making.

MD: AI use in China during this pandemic has received a good deal of media coverage. How effective has AI been in the fight against coronavirus?

AX: AI has been used in China in the fight against COVID-19 as both a diagnostic and surveillance tool. AI to help or replace radiologists in diagnosing COVID-19 based on CT scans seems to be quite promising, but its impact is somewhat limited since it only alleviates shortages of radiologists. Debates on contact tracing have begun in the US, but related technologies are already widely deployed in China. AI for surveillance seems to have been effective in enforcing quarantines but has raised civil liberties concerns. Digital quarantines in China are powerful – apps like WeChat centralize social media, communication, and commerce onto a single platform, making it possible to restrict people from making purchases if they violate quarantine restrictions. Alipay Health Code assigns citizens a color-coded health status which determines whether they can use public transit or go to certain public places. As the NYT recently reported, the app also shares data with the police. There is little transparency, however, on how these color codes are generated.

MD: How do you think the current public health crisis might affect the ethical/responsible use of AI in China?

AX: In the short term, COVID is likely to detract from AI ethics conversations given that the focus now is on shipping new technologies in the battle against this rapidly developing pandemic. That said, a Chinese government-supported working group recently issued a notice to companies reminding them of the importance of respecting data privacy regulations despite the current push to deploy new COVID-related tools quickly. The longer-term picture is murkier. COVID has led to the expansion of AI surveillance systems. These systems were set up in a time of crisis without full consideration of possible ethical ramifications, so they will likely be harder to retrofit. On the other hand, this increased use of AI in high-stakes contexts might spark more concern about AI ethics if there are high-profile examples highlighting the risks of such systems.

Question from Nadia Abouayoub: In the COVID-19 battle, countries are looking at reopening economies and turning to AI for solutions like tracing and testing. What would be your recommendations for implementing these solutions?

AX: My overall recommendation would be to proceed cautiously, especially given the civil liberties concerns around tracing. It’s unclear that such tech will have a significant impact unless a large portion of the population uses it. Mandating the use of such tech, however, would create a large surveillance apparatus, and I worry that ethical concerns will not be adequately addressed given the time-pressure of the current crisis.

Question from Pinna Pierre: Facial recognition is commonly used in China for surveillance (private or public). Does this worry the population as it does in Western countries (and others)?

AX: There is some growing worry in China around the use of facial recognition. For example, late last year, a Chinese law professor sued a wildlife park that required facial recognition registration at its entrance.

MD: Thanks so much for joining us, Alice, and sharing these great insights!