Reading the China Dream
Lao Dongyan, “The Hidden Dangers of Facial Recognition Technology”[1]

Introduction by David Ownby, Translation by Jeffrey Ding
 
Introduction
 
Artificial Intelligence (AI) is a theme often mentioned in discussions of contemporary China.  In recent months, it has probably been linked most often to the Orwellian surveillance state China has constructed in Xinjiang, and in recent days to new efforts to track China’s population because of the coronavirus. More broadly, however, AI figures importantly in “Made in China 2025,” the plan to make China a global post-industrial innovation hub, in efforts to rethink education, and more grandly in China’s goal to become the world’s AI superpower. 

The text translated here, by Tsinghua law professor Lao Dongyan (see her Tsinghua website here), addresses the proposal to install facial recognition technology in the Beijing subway system in order to improve the efficiency of security screening and passenger flow.  Lao posted her reflections on her WeChat account, the rough equivalent of Twitter, and her essay thus offers an example of the sort of public statement establishment intellectuals can still make in China (click here for another example available on our site; Lao’s post is still available as I write this in mid-March 2020, while the other has long since been taken down by China’s censors). 

I have no idea whether Lao is “liberal” in her broader research orientation, but her reaction to the idea of using facial recognition in the Beijing subway system mirrors liberal attitudes in the West.  She does not like the idea, arguing that China is going too far and that people should stand up for their rights.  These arguments reflect both her legal training and her understandable reaction as a law-abiding citizen:  “What do you think you’re doing?”  In a sense, the arguments are surprising only because we can still find them in China.  

The text was translated by Jeffrey Ding, a Rhodes Scholar at Oxford, a Ph.D. candidate in International Relations, and a researcher at GovAI/Future of Humanity Institute. His original translation, lightly edited, is available here, and is part of the ChinAI newsletter, a weekly updated library of translations from Chinese thinkers on AI-related issues, available at https://chinai.substack.com/.  Our thanks to Jeff for allowing us to share his work with our readers.
 
Translation

I learned from the news the day before yesterday that the Beijing Subway will start using facial recognition technology to carry out screening security checks on passengers, claiming that it will improve the efficiency of passenger traffic.
 
On reading this news, my first reaction was: this is crazy. Happily, I saw a critical commentary on the Enlightenment Daily website yesterday, entitled “Don’t Turn Facial Recognition Technology into a Modern ‘Branding Punishment 刺黥’[2],” and I felt a little better.  But public opinion has been very quiet, making it seem like not that many people are paying attention to this matter, which makes me wonder if I am not the one going crazy.
 
You show your ID card when entering or leaving the university campus, you show your ID card when you mail something, you have your face scanned when you check into a hotel, and now the security check you already do to take the subway is not enough, and you have to go one step further and submit to this so-called new technology to continue to improve the level of security. What I want to know is, when will it all end? Will the next thing be to install facial recognition machines on all roads and in all public places, so that pedestrians can be intercepted, questioned, and searched at any moment, with those considered to be dangerous being detained?
 
I'm getting more and more confused as to who is being guarded against and who is being protected by this uncontrolled investment in security. I originally thought that I should be among those being protected, but as successive measures are added, it feels like I am the object of prevention and control. As a law-abiding citizen, I generally respect the law, I have no criminal record, I am dedicated to my work, and I get along with other people. What is it about me that people need to be protected from?
 
I often feel that I am not trusted by the society in which we live. Whether it is the scrutiny of my research expense accounts, or the constantly escalating security measures, it feels like an atmosphere of unlimited precaution. With the expense accounts, I felt like I was being treated like a potential thief; with the security measures, like a potential evil force in society. I doubt that these experiences and feelings are mine alone.
 
The principle of the presumption of innocence is part of the modern code of criminal procedure. According to this principle, everyone is presumed legally innocent until found guilty by the court. However, the current security measures, no matter how you look at them, are based on the presumption of guilt. Everyone is presumed to be a danger to public safety and all are required, without exception, to pass through increasingly stringent security checks. Who, besides someone with schizophrenia, would believe that these security measures are actually used to protect the general public, including you and me?
 
Maybe some people will disagree with me, thinking that I am being too sensitive. Overall, I can imagine four types of disagreement:
 
First, some people may think that I am overthinking it, and ultimately am failing to appreciate and thank the government for its paternalistic protection and kindness.
 
To this I can only say: forgive me, but I cannot accept this type of kindness.
 
Think about it.  A great deal of your personal data, including what websites you visit, what news and videos you watch, what you buy, who you chat with on WeChat, what you talk about, what you like and dislike, etc., has already been collected.  Now they want to add personal biometric information, and put it all under the control of a huge organization. And we all know that in our society, any personal data that is in the hands of enterprises or other institutions is also in the hands of the government.
 
Because these huge organizations are run by concrete people, this is equivalent to saying that all personal data, including biometric data extremely useful for recognition purposes, are in the hands of a few people in that group. We should think carefully about how much personal information these people actually control, why they control our personal information, and what they are doing with it.
 
The people who control our data are obviously not God. They have their own selfish desires and weak points. Therefore, we cannot know how they will use our personal data and how they will manipulate our lives. I hardly need to mention that such data may be leaked or hacked due to improper storage, leading to harmful results that may be exploited by criminals.
 
Second, some people might say that as long as you do nothing bad, you don't have to worry about your personal data being in the hands of the government.
 
All I can say to this is that I don't want to become a “transparent” person; the idea of becoming a “transparent” person makes me very uneasy.
 
In a normal society, individuals should have the right to oppose any organization's arbitrary access to their personal biometric data. The reason that the law protects an individual’s right to privacy and freedom of residence is to give that individual a space to govern himself, a space that cannot be infringed upon by others.
 
By “others” I mean not only other individuals and organizations in general, but also governments and countries. If the biometric data of an individual can be obtained without their consent in the name of security, do the legal protections of privacy and freedom of residence mean anything? Without privacy there is no freedom.

Third, some people might argue that they are not important, and that no one would be interested in learning about their personal information.
 
More than a few people think like this. In the face of the large-scale collection of personal data, even if they are uncomfortable having their own data collected, they still feel that the policy is not problematic.  The reason for this is that they think that they are not important people, and that no one will pay attention to them, so they are safe enough.
 
To this I can only say that when you entrust your personal safety to the neglect of others, you are living like a desperate gambler. And you are not only betting on your luck, but you are also betting that the person who controls the data is an angel. While I might admire the ability of those who wishfully imagine they can win this bet to play the ostrich, I secretly think they probably need to pay some intelligence tax.
 
These optimists should take a serious look at the 20-year-old movie “Enemy of the State.”[3] The ending of the movie itself is not bad, and the bad guys eventually meet a bad fate. However, if you are the main character, without any particular knowledge or luck, all you can do is wait for the tragedy to end. Worst of all, you probably won’t even know how you finally died.  

Fourth, there are people who will argue that the spread of this type of technology is problematic, but that opposing it is useless, or in other words, they are too lazy to waste their energy opposing it.
 
I can only say that, on issues that concern our own important rights and interests, if we do not stand up and express our opposition and do what we can, we naturally cannot expect others to help. How do you know that opposition is ineffective before you have made even a minimal effort? Even if opposition ultimately proves ineffective, it is better than tamely putting on your own shackles. At least we put in the work and resisted.
 
If those of us whose rights are being violated simply endure this in silence and do not even dare to express our opposition, it is the same as doing nothing, helping our opponents, and hurting ourselves. Retreat in this situation does not mean a blissful escape; we are likely to fall further into the abyss, because this is not a problem that can be solved by waiting it out. As we head step by step toward the abyss, one has to admit that this horrible fate is at least partly the result of our own forbearance.
 
I express my firm opposition to the facial recognition technology that the Beijing Subway is about to implement. Below are my specific reasons:

First, facial recognition involves the collection of biometric data that is important to individuals. The organizations and institutions involved must prove the legitimacy of what they are doing.
 
According to existing laws and regulations, ordinary personal information, such as one’s address, phone number, email, online accounts, and location tracking information, allows for the identification of the person in question, and the collection of this information requires the person’s prior approval. Moreover, if the collecting party improperly uses, sells, or leaks the information, this may trigger legal liabilities, including criminal liabilities.
 
The personal nature of biometric data is even more obvious, and for individuals, these data are obviously more important. Why is it not necessary to obtain the consent of the person whose data is being collected? This is all the more troubling because there are no restrictions on the subject, purpose, method, scope, and procedures of the collection, and no legal responsibility for its illegal collection or use.
 
If the government is the principal data collector, then it obviously needs explicit legal authorization.  Without such authorization, it cannot proceed; the government has no right to collect the biometric data of ordinary citizens in the name of security. If this data collection is being done by a company or another institution, its collection of personal biometric data requires at least the explicit consent of the person whose data is being collected; collection without consent constitutes an illegal act of obtaining the personal information of citizens.

Second, the subway’s implementation of facial recognition involves important personal rights of the public at large. Its implementation without a hearing fails to meet minimum standards of legitimacy.
 
A few years ago, the Beijing subway undertook a broad survey of the public’s views on the question of a possible fare adjustment, and also went through a strict hearing process. If fare adjustment requires extensive consultation and a hearing, then how can facial recognition technology be directly introduced without soliciting opinions or holding a hearing, when facial recognition obviously involves more important personal rights? Could it be that the biometric data of an individual means less than a few thousand RMB?
 
Without going through any discussion, facial recognition is about to be launched on a large scale. People have reason to wonder whether this involves illegal transactions or whether it is the result of lobbying by interest groups.

Third, the use of facial recognition technology is said to facilitate security screening, but questions related to the idea itself have not been answered.
 
What authority does a traffic management department have to screen passengers? What law is it based on? In addition, what standards do the relevant departments plan to use to screen passengers?  What is the specific nature of the screening standards being adopted?  Where do they come from and how were they established?  Will the standards be made public?  Shouldn’t these questions be answered before implementing facial recognition? Even the criteria for sorting garbage must be clearly stated, to say nothing of the criteria for sorting people.
 
If the relevant departments intend to decide on screening standards internally, then how do we know whether the standards are legal and reasonable? How can I know whether there is illegal discrimination? How can I find out whether the screening standards have been fixed arbitrarily? If those subject to screening are not satisfied with the screening criteria, or believe that inappropriate screening violates their legal rights, how do they report this, and how do they ensure that their concerns are effectively addressed? Before these questions are answered, how can we make a hasty decision that facial recognition can be used for security screening in a place like the subway?
 
If arbitrary internal standards are used to divide passengers into various grades and ranks and different security screening measures are applied based on this sorting, we have reasons to suspect that this approach violates the constitutional principle of equality and violates citizens' fundamental right to personal liberty. Article 37 of the Constitution clearly stipulates that illegal detention and other methods of illegally depriving or restricting the personal freedom of citizens are prohibited, as are illegal searches of citizens.

Fourth, in the end, there is not enough evidence to show that the use of facial recognition in subways can improve transport efficiency; even if there were such evidence, efficiency itself is not a sufficient basis for implementation.
 
Subway transit officials claim that the implementation of facial recognition technology in the subway is to improve traffic efficiency during periods of heavy passenger use. The problem is that their claims do not constitute objective facts. Before doing a solid empirical investigation, how can we be convinced that the use of this technology in the subway will help improve traffic efficiency? Based on my personal experience at airports and hotels, I can hardly believe this conclusion.
 
Even if there were expert support for these claims, we still have reason to doubt whether the experts' judgments are correct. Because this involves the prediction and evaluation of unknown events, expert judgment can easily be wrong. For example, for many years before the policy allowing Chinese couples to have two children was announced, many demographic experts arrogantly stated that this policy would cause a sharp increase in China’s population. Since the policy was announced, what has happened to the actual fertility rate? It’s perfectly obvious to anyone who has eyes to see.
 
From another angle, even if using facial recognition could really improve traffic efficiency, efficiency alone is not a sufficient basis for implementation. Don't fool the public in the name of efficiency, okay? If what we want is efficiency, doing away with the so-called security check on the subway could best improve traffic efficiency during times of high passenger traffic.
 
I don't know if the relevant departments have bothered to look into this. Existing security checks on people and material items, especially those on people, are useless during both peak and regular periods. Apart from wasting taxpayer money, it is really impossible to see what function or purpose these security checks serve.
 
Based on the above reasons, and especially considering the potentially enormous dangers and negative effects, I am not only opposed to the use of facial recognition technology in subways, but also to its use in airports, hotels, and other sites where people are forced to undergo facial recognition checks.
 
Businesses use seductive arguments like “our profit levels are very low,” or “this will increase security” in an attempt to convince people to use facial recognition "voluntarily." But because of the general failure to provide adequate notification, it is difficult to establish effective user consent, so this use can hardly be said to be legitimate.
 
Not long ago, I attended a lecture on facial recognition technology, where I learned that some Chinese companies have been vigorously developing facial recognition technology in recent years. In order to avoid public attention, these companies deliberately kept a low profile and successfully carried out the large-scale promotion of the technology while avoiding the issue’s becoming a topic of public concern.
 
Such efforts frighten me. While seeking their own interests, have these companies and their technical personnel never stopped to think what kind of disaster the spread of this technology could bring to society? Do they not know that one day this might come back to bite them?
 
Don’t give me your spiel about the neutrality of technology. Facial recognition technology is being used widely to obtain the personal information of ordinary citizens, which is being continuously gathered in the hands of large organizations. Do these companies and technicians involved in the research and promotion of this technology dare to say that they have no responsibility themselves? If the world of electric screens actually comes to pass one day, you will have been its willing servants; my only hope is that there will still be people free to drink the fine wine at the celebration party.
 
A media industry person who attended that lecture disabled the facial recognition features on WeChat and Alipay before the talk was even over. She told me that she was not afraid that her personal information would be used by the police, but instead worried about her data being abused by commercial organizations.
 
In response, I confessed that as a legal practitioner, especially someone who works on public law, I was never too worried that my personal information was being misused by commercial organizations, because the misuse of my data by commercial organizations, at worst, only costs me some money.
 
My real concern and fear is that my information is being abused by public authorities; because when they misuse the data, I have no idea what the price will be for myself and my family, in terms of property, reputation, occupation, freedom, health, or life. Anything is possible.
 
In the name of security, public places like the subway with large traffic flows began with the inspection of material objects, and then added bodily inspection. Now facial recognition is also to be implemented. In a few years, will we move on to genetic or fingerprint recognition? According to current trends, this is entirely possible. In the not too distant future, perhaps public transportation like the subway will become a privilege, available only to certain members of society.
 
If our society has not yet fallen into a state of persecution and paranoia, it is high time to call a halt to further security measures, before we go too far. What the hysterical pursuit of security has brought to society is not security at all, but complete oppression and panic.
 
Finally, I solemnly propose that the Standing Committee of the National People's Congress conduct a review of the basic legitimacy of the Beijing Metro’s proposal to employ facial recognition for security screening. At the same time, it should also consider initiating appropriate legislative procedures to regulate, by law, the arbitrary use of facial recognition technology. 
 
Posted on October 31, 2019.
 
Notes

[1] 劳东燕, “人脸识别技术的隐忧,” posted on her public WeChat account (劳燕东飞) on October 31, 2019, original text available here.

[2] Editor’s note:  Under some dynasties, criminals in China could have the name of their crime branded on their face or forehead. 

[3] Editor’s note:  A 1998 film starring Will Smith and Gene Hackman in which the main character’s life is ruined by the surveillance state.
