[Guest post] China’s path to regulating facial recognition technology

Facial recognition technologies are already widespread in China, though their adoption and use must comply with a number of requirements. This is especially so after the landmark judicial regulation issued by the Supreme People's Court of China came into effect on 1 August. The guidance addresses some sensitive and pressing concerns.

The IPKat is delighted to host the following guest post by Anja Geller (Ph.D. candidate at Ludwig-Maximilians-Universität and Junior Research Fellow at the Max Planck Institute for Innovation and Competition). (Anja had a relevant publication in 2020, in case readers are interested: How Comprehensive Is Chinese Data Protection Law? A Systematisation of Chinese Data Protection Law from a European Perspective)

Here's what Anja writes:



China’s Path to Regulating Facial Recognition Technology 
by Anja Geller


Smart Jonny (Anja’s kitten) knows when to hide his face

Facial recognition is a relatively new and highly controversial technology. There are significant differences globally in its adoption, legal treatment and public acceptance. Lively discussions are ongoing, and complex issues remain to be solved. On the one hand, there are privacy and data protection concerns, as this is a particularly intrusive form of data processing: facial information is directly identifying, unique and unchangeable for data subjects.

On the other hand, there are interests in crime prevention, commercial use and user convenience. Unlike in the EU, facial recognition is already widespread in China. There, it is a very lucrative and growing technology used in schools, railway stations, residential complexes and other public venues or to support “face-scanning payment” (刷脸支付) and pandemic control measures. These real-life implications make clarification of the legal framework all the more urgent and important.

In the EU, the GDPR already provides numerous requirements since facial information falls under biometric and, therefore, sensitive data. In contrast to the technology-neutral GDPR, the Draft AI Act goes even further by explicitly classifying facial recognition as a high-risk use and making it one of the focus areas of the law. The comparatively uniform and strict legislation already in place in the EU could be one of the reasons why facial recognition technology is not as common as in China. The future AI Act will most likely bring further restrictions. 

Attempts to regulate facial recognition do not only exist in Europe. The technology has also caught the attention of Chinese lawmakers. Although some Europeans may still be surprised that China has ambitions regarding data protection, many laws and regulations have been drafted and enacted in this area in recent years. As part of this wider trend, the Supreme People’s Court (SPC) published the Provisions on Several Issues on the Application of Law in Hearing Civil Cases Related to the Use of Facial Recognition Technology in Processing Personal Information (SPC Provisions) on 28 July 2021, which entered into force on 1 August 2021. Such judicial interpretations by the highest court are quasi-legislative and practically binding on the lower courts, unifying and guiding court practice at all levels. The SPC Provisions are limited to civil cases and explicitly exempt uses for public safety reasons, similar to many other existing or drafted data protection norms. Thus, only private actors, and not the state, are addressed, even though regulation of the extensive public security system would be necessary, at least from a European perspective.

Despite these limitations, the SPC Provisions represent a further step towards stronger and more coherent data protection. They are based on current court practice and are intended to guide the application of data protection norms, which are scattered across numerous laws and are mostly rather vague and brief. This could significantly improve judicial practice, as China’s long-awaited first comprehensive “Personal Information Protection Law” (PIPL) has not yet come into force; it is expected to be passed by the end of the year.

The SPC Provisions categorise facial information as “biometric information”. The vice-president of the SPC points out that face information is sensitive personal information with strong social qualities, whose leakage may cause significant harm to the safety of individuals or even the general public. Likewise, the Draft PIPL considers “biometric information” as “sensitive personal information” and provides additional rules such as separate and written consent and further information requirements. Already enacted and practically significant, but legally non-binding is the “Personal Information Security Specification” of 2020, which similarly categorises “facial recognition features” as sensitive personal information and requires explicit, unbundled consent and specific security measures. 

As a next step, the SPC Provisions enumerate certain activities that infringe the personality rights and interests of natural persons. Among them are the failure to disclose the rules, purpose, manner and scope of the processing, the failure to obtain consent where needed, and the failure to implement due security measures, resulting in, for example, the leakage of facial information. Consent is not a valid legal basis when access to products and services is conditional on giving consent even though the processing of facial information is not necessary for that purpose. Furthermore, consent for facial information may not be bundled with authorisations for other processing activities, and the data subject must not be forced to give consent in any way.

In addition to these general rules, the SPC Provisions single out two particularly widespread and debated uses. The first one is the usage of facial recognition for face verification, identification or analysis in public places such as hotels, shopping malls, banks, stations, airports, sports venues, entertainment venues and other business sites. These are only allowed when there is a clear legal basis. The second case concerns property managers who must obtain the consent of the residents before using facial recognition. In case of refusal of consent, they must offer an alternative verification method. 

There are exceptions to these rules, e.g. for news reporting and public health emergencies, the latter reflecting the COVID-19 pandemic. There is also a blanket exemption that allows facial recognition in accordance with other laws, giving legislators leeway to create specific norms for certain uses. 

Finally, the SPC Provisions contain several procedural norms. They set out the circumstances under which natural persons may request the deletion of facial information, claim monetary remedies and seek injunctions. The burden of proof for the legal compliance of the processing lies in principle with the data processor. This allocation may help strengthen the awareness of the responsibility of processors and improve the protection of natural persons. According to SPC officials, it takes the unequal economic power and information asymmetry between the two parties into account. The officials recognise that the high costs of litigation and the difficulties in providing evidence mean that individuals file relatively few lawsuits. For this reason, the SPC Provisions additionally advocate civil public interest litigation initiated by public prosecutors and organisations such as consumer associations when many consumers are affected. All in all, the SPC Provisions will most likely help make courts, companies and the general public more aware of the legal issues surrounding facial recognition. 

The SPC Provisions respond to rising numbers of abuses of the technology and to public concerns, especially regarding mandatory face recognition in community properties as well as blanket and bundled consent in apps. Facial recognition is at the centre of many current debates. The discussions were sparked by a 2019 case in which law professor Bing Guo sued a park in Hangzhou that required annual pass holders to undergo a facial scan to enter. He wanted the park to abolish this rule and give pass holders the option of a different means of entry. The court ruled narrowly, treating the case as an individual breach of contract between Guo and the park due to lack of consent, and ordered the park to delete Guo’s data and pay him a sum of money as compensation. Guo’s appeal to the Hangzhou Intermediate People’s Court did not significantly change this result. Still, the case is considered a breakthrough and the first lawsuit to challenge the commercial use of facial recognition technology. Notably, even the vice-president of the SPC referred to the Guo case in connection with the SPC Provisions. According to Guo, Chinese courts tend to be conservative on contentious matters like facial recognition and avoid dealing with the key issues. In this respect, the new SPC Provisions may encourage courts to tackle facial recognition more directly and critically.

The case has received much media attention and fuelled heated debates about privacy risks and the need for data protection. Surveys from 2019 and 2020 show that the vast majority of respondents feared security breaches leading to leaks of facial information, as well as fake information on the Internet. Issues of data security and fraud are also the most prominent in academic discussions. There is also a fear of becoming a “transparent person” (透明人), who can be observed, recorded and analysed at any given time. In contrast to the European debates, however, the Chinese discussions still centre less on all-encompassing surveillance and more on the manifold experiences of data leaks and abuses by third parties. Likewise, many regulatory ventures focus on security.

Apart from the SPC Provisions, there are several regulatory attempts regarding facial recognition. Noticing the increasing privacy concerns of their users, companies have adopted several self-regulatory measures. Another impact of the Guo case is visible in the non-binding but practically important “Security Requirements of Face Recognition Data” (Draft Requirements), which were released for public commentary on 23 April 2021. In line with Guo’s request and similar to the SPC Provisions, the Draft Requirements state that data subjects must not be denied access to essential business functions because they have not consented. In addition, the Draft Requirements contain some rules that go beyond the SPC Provisions. For example, they provide that in public places, a mechanism has to be put in place to allow data subjects to actively participate in facial recognition. This could be achieved, for example, by asking them to look directly into the camera and give a specific sign, or to pass through a particular channel marked “facial recognition”. Furthermore, the Draft Requirements contain much more detailed definitions and concrete security standards.

In summary, the SPC Provisions and the other new drafts and regulations represent another step towards a more comprehensive regulation of facial recognition and data protection, at least as concerns private actors. To what extent all these new binding and non-binding rules will actually be implemented, and how the state’s use of this technology will develop, remains to be seen. China’s journey to regulate facial recognition technology is certainly not over yet.





Photo courtesy: Anja Geller
Reviewed by Tian Lu on Monday, August 09, 2021
