[Guest post] Deepfake technology and the law: Perspectives from Japan, South Korea, and China (Part 2)
The IPKat has received and is pleased to host this guest contribution from Katfriend Dinusha Mendis (Centre for Intellectual Property Policy & Management (CIPPM), Bournemouth University, UK) who has co-authored this four-part blog series with Rossana Ducato (University of Aberdeen, UK) and Tatsuhiro Ueno (Waseda University, Japan). Part 2 of this four-part series explores the challenges and opportunities presented by synthetic media and presents policy and legislative responses from a comparative perspective, looking at notable models such as Japan, South Korea and China. The findings are drawn from two stakeholder roundtables hosted in Japan and the UK and funded by the Daiwa Anglo-Japanese Foundation. For an overview of the use, impact and adoption of deepfake technology and how it is being tackled in the UK and EU, see Part 1 of this series.
Deepfake Technology and the Law: Perspectives from Japan, South Korea, and China (Part 2)
by Dinusha Mendis, Rossana Ducato, and Tatsuhiro Ueno
Japan
At present, Japanese law does not contain any provisions specifically dealing with deepfake technology. The law already in place can tackle some issues, but only to a certain extent. For example, as discussed by Kunifumi Saito, the criminal offence of defamation can in principle be applied to deepfake pornography. However, the offence requires an impact on the social reputation of the victim, which might not be affected if the video is clearly fake. Similarly, Kaori Ishii pointed out that legislation such as the Act on the Protection of Personal Information, and the Notice regarding the use of GenAI services issued by the Personal Information Protection Commission in 2023, does not directly address deepfakes, and the remit of data protection law might be too limited (for instance, provisions such as the prohibition on using personal information in a way that can lead to an unlawful or unjust act (Art. 19) and the prohibition on obtaining personal information by deception or wrongful means (Art. 20) are addressed to businesses, not the general public).

Masaru Terui, who is representing Japanese billionaire and influencer Yusaku Maezawa in litigation against Facebook Japan and Meta for allowing the unauthorised use of his pictures in investment-scam ads (claiming a symbolic 1 yen in damages), pointed to the lack of clarity in the current platform liability regulation and stressed the challenges faced by victims of deepfakes, who have to bear a high burden of proof, identify all potential and recurring illegal ads, and eventually claim damages, which are usually awarded up to a maximum of 3 million yen (approximately £14,000). Such a low ceiling in practice discourages people from taking legal action, leading to a “loss of trust in the legal system”.
*Participants during the stakeholder roundtables*
Although the case was decided in favour of the defendant, it established the scope of publicity rights in Japan. In particular, it limited the scope of the right whilst clearly defining when infringement is likely to occur in the context of media and commercial content, “emphasizing the need for a distinct and direct commercial exploitation for it to be considered an infringement.”
However, as Tatsuhiro Ueno, Satoshi Narihara and Kunifumi Saito discussed, whilst this recognition by the Supreme Court helps individuals protect their own likeness and can be applied to infringements perpetrated via deepfakes, one of the main loopholes remains post-mortem protection: when the likeness of deceased individuals is used without permission. The difficulty lies in the non-transferability of these rights, which means that, once a person has died, their personality rights cannot be invoked by another person, family or otherwise.
South Korea
In contrast to the UK and Japan, South Korea has enacted statutory provisions protecting an individual's right of publicity. In response to the rise in deepfakes, South Korea has been quick to address legal gaps through amendments to existing legislation as well as the enactment of new legislation. In 2021, the Unfair Competition Prevention Act was amended to protect celebrity names, portraits and likenesses. Additionally, the Public Offices Election Act 2023 prohibits the use of deepfakes of any individual in election campaigns.
More recently, in 2024, the Sexual Violence Punishment Act, and later the Youth Protection Act 2025, were amended to further protect individuals against the harm caused by deepfake pornography by criminalising the production, distribution, possession and viewing of such content, including content involving minors, even without the intent to distribute. Similar to the EU, South Korea has also enacted its own AI legislation, the Basic AI Act. However, as illustrated by Yeyoung Chang, it is far narrower than the EU's AI Act, as South Korea's Act is primarily “built on a framework of promoting trust rather than Europe's detailed product regulation designed to ban risks”.
Most recently, South Korea amended the Personal Information Protection Act in 2025, following amendments in 2023. The 2025 amendment makes it mandatory for overseas businesses with local subsidiaries to designate them as domestic agents, thereby strengthening their compliance with South Korean law.
Moving forward, proposals for a Portrait Property Right in South Korea indicate a move towards a more comprehensive personality right, one that better reflects the modern era, better protects individuals online, and aims to prevent the unauthorised exploitation of individuals' portraits, aligning it with constitutional rights regarding personal image and privacy.
China
In China, an individual's likeness and voice are protected as personality rights under the Civil Code (in force since 2021). This protection is guaranteed to every natural person, celebrity or not, and some post-mortem protection is also afforded to the relatives of the deceased. In case of violation of a personality right, the Chinese system allows claims for both patrimonial and non-patrimonial damages. Despite this broad protection, Wanli Cai warned about the challenges of substantiating infringement cases with evidence, e.g. proving that an individual's voice or likeness has actually been used by the AI system.
*Kat deep(fake?) in thought*
The protection afforded by the PIPL and personality rights in the context of deepfakes has already been put to the test in two recent cases. Cai reported that the Internet Court of Beijing found a violation of the personality right in an AI-generated voice (as the timbre of the voice actor was clearly recognisable). In a second case, concerning face-swapping, the same court excluded a violation of the image right, because the generated portrait was not “recognisable” by the public. Nevertheless, it found an infringement of the PIPL due to the unlawful processing of video footage. Interestingly, Professor Cai concluded by reporting that between 2023 and 2024 the Internet Court of Beijing accepted 113 cases alleging violation of the PIPL. There is no doubt that this volume of litigation will help provide clearer guidance on the matter.
In the next blogpost, we will consider the impact of deepfake technology on the creative and technology sectors, reporting the views of representatives from the film and music industries, on the one hand, and from AI developers and online platforms, on the other.
[Guest post] Deepfake technology and the law: Perspectives from Japan, South Korea, and China (Part 2)
Reviewed by Eleonora Rosati
on
Friday, December 26, 2025
Rating: