China’s first civil public interest lawsuit involving facial recognition technology
Recently, the Guangzhou Internet Court released details of a case in which AI was used to convert photos into videos in order to circumvent face-scanning security software. The so-called “Check Head” and “Pass Face” services allowed users to upload photos that a suspect surnamed Zheng would then turn into moving videos of the person’s face, which could be used to access bank accounts, payment services, and other protected accounts.
With facial recognition securing major payment platforms in China such as WeChat and Alipay, such a service could allow anyone to upload a friend’s, coworker’s, or complete stranger’s photo in order to gain access to that person’s bank account or payment service.
Zheng confessed that he bought personal photos, each paired with a matching ID card number, from undisclosed sources on social networking sites and resold them for about 15 to 20 yuan apiece. Three others, identified as Ren, Dai, and Chen, bought citizens’ personal photos from Zheng’s group at prices ranging from 50 to 100 yuan each. Using artificial intelligence software, they generated dynamic facial recognition videos in which the subject nodded, blinked, and performed similar movements. These videos were then used to unlock accounts and pass real-name authentication on various apps for illegal profit.
The “Pass Face” service combines harvested facial information with synthesis software that generates simulated dynamic videos of a person. Many current facial verification processes ask the user to perform a specific movement, such as looking left or right, opening the mouth, or tilting the head. At the facial verification stage of an app or account check, if the synthesized video’s facial clarity meets the required standard, the system judges it to be a real human operation, thereby bypassing verification and giving the attacker access to the account.
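The challenge-response liveness check described above can be sketched as follows. This is a minimal illustration, not any platform’s actual implementation: the action names and helper functions are hypothetical, and in a real system the detected actions would come from a video-analysis model rather than being passed in directly.

```python
import hmac
import secrets

# Hypothetical action vocabulary for a liveness challenge.
ACTIONS = ["look_left", "look_right", "open_mouth", "tilt_head", "blink"]

def issue_challenge(n: int = 3) -> list[str]:
    """Pick n random actions the user must perform, in order."""
    return [secrets.choice(ACTIONS) for _ in range(n)]

def verify_liveness(challenge: list[str], detected: list[str]) -> bool:
    """Accept the session only if the detected actions exactly match
    the issued challenge, in order and in full."""
    if len(challenge) != len(detected):
        return False
    # Constant-time comparison so the check does not leak how many
    # steps matched before the first mismatch.
    return hmac.compare_digest("|".join(challenge), "|".join(detected))

challenge = issue_challenge()
print(challenge)  # e.g. a random sequence of three required movements
```

The point of randomizing the challenge per session is that a pre-recorded or pre-synthesized video of fixed movements should not match; the case above shows that attackers respond by generating the requested movements on demand, which is why liveness checks alone are not sufficient.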
The suspects confessed that, after bypassing facial recognition, criminals could access other people’s accounts on applications such as WeChat and obtain private information including chat records, payment records, and movement trajectories. Convicted of illegally handling more than 2,000 pieces of personal information for illegal gains of over $15,000, the four received prison sentences ranging from one year to one year and two months, along with fines.
The ever-growing popularity of AI face-swapping apps
The case has generated massive discussion in China about the exposure and possible leakage of personal information through the many face-swapping apps that have become so popular.
AI face-swap software widely uses deepfake technologies. Some of the pre-packaged, popularly marketed AI face-swap apps are Hugging Face, Reface, DeepArt, FaceShow, and DeepFaceLive. AI face-swapping technologies were reportedly criticized in China last year over concerns of infringing privacy.
The AI portrait app “Miaoya Camera”, which went viral last August, flooded social media platforms with millions of AI portraits shared by netizens.
– A user’s photo (bottom right) converted into an AI portrait –
The cost of just $1.39 to create a digital avatar, compared with the hundreds of dollars that professional portrait studios charge for a similar service, seemed more or less inconsequential. As the number of users surged, however, the app slowed, and there were reports of more than 2,000 people queuing to create their portraits on the second night after its launch.
Regarding Miaoya’s success, industry insiders speculated that on some days the app may have earned more than ¥100,000. According to the Miaoya team, the app is powered by an AI model dubbed “Tiziano”, named after the master of portrait art Tiziano Vecellio. Although official details of the underlying model have not been fully revealed, it is quite likely that Miaoya built on open-source large models such as Stable Diffusion (SD) and fine-tuned them for user customization.
To create a digital avatar in Miaoya, a user uploads a clear frontal photo of their face, plus at least 20 more photos showing the face under different lighting, against different backgrounds, and from various angles and expressions. Once the digital avatar is generated, the user can choose from more than 30 portrait templates, including vintage, forest, business, and oil-painting styles.
At first, Miaoya’s terms of service granted the company essentially unrestricted rights to use the AI-generated content for a range of real-world purposes, which sparked a public uproar. The company initially gave no clear response about how the information would be used, saying only that it was overwhelmed by the high volume of requests. It later issued an apology and amended its terms to state that uploaded photos are used only to create digital avatars and are automatically deleted afterwards. The amended terms expressly prohibit the unlawful retention of identifiable information, the profiling of users based on the information they provide, and the passing of user data to third parties.
According to Wang Peng, an associate researcher at the Beijing Academy of Social Sciences: “Although AI portraits are very hot in the AIGC era, they still have a long way to go to form a truly viable business model. Inference demands much more computing power than training in aggregate, and the high cost of computing power is still a bottleneck for AIGC applications.” He added: “It’s too early to say that AI portraits are going to replace offline photo studios.”
Compliance requirements for providers of AI face-swapping services
According to Zhang Tianyi, a senior product manager at RuiLai Intelligent, a startup incubated at Tsinghua University’s AI Research Institute, data protection has become essential in the AIGC era of large-scale model deployment: “Misuse of AIGC models can lead to content compliance issues, such as deceptive content produced by deepfake and diffusion models that misleads the public and creates negative social impacts.”
Recent guidance on AIGC governance has been published in quick succession by China’s central and local governments:
• AI algorithm registration
As discussed earlier, deepfake technology underpins AI face-swapping software, so under China’s requirements such software is considered to provide AIGC services with public opinion or social mobilization capabilities. Developers must register their algorithms through the updated algorithm registration process under the “Regulations on the Management of Internet Information Service Algorithm Recommendation”.
• Personal Information Protection Impact Assessment (PIA)
A PIA should be conducted by all AIGC service providers that process sensitive personal information in the delivery of their services, to assess the legality and compliance of the processing activities. The PIA should evaluate the legality, necessity, and appropriateness of the personal information processing; the potential impact on individuals’ rights; security risks; and the effectiveness of protection measures. PIA reports and processing logs must be retained for at least three years.
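A provider might track PIA records against the three-year retention requirement along the following lines. This is a hypothetical sketch; the field names are illustrative assumptions, not taken from the text of any regulation.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Records must be kept for at least three years (approximated here in days).
RETENTION = timedelta(days=3 * 365)

@dataclass
class PIARecord:
    assessment_date: date
    processing_purpose: str   # legality / necessity of the processing
    impact_on_rights: str     # potential impact on individuals' rights
    security_risks: str       # identified security risks
    protection_measures: str  # effectiveness of safeguards

    def retain_until(self) -> date:
        """Earliest date the record may be deleted."""
        return self.assessment_date + RETENTION

record = PIARecord(
    assessment_date=date(2023, 9, 1),
    processing_purpose="Generate user-requested AI portraits from uploaded photos",
    impact_on_rights="Facial data is sensitive; deleted after avatar creation",
    security_risks="Leakage of uploaded facial images",
    protection_measures="Encryption at rest, automatic deletion",
)
print(record.retain_until())  # → 2026-08-31
```

The structured fields mirror the four evaluation points listed above, so the record itself doubles as evidence that each aspect of the assessment was addressed.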
PIA is an obligation under the Personal Information Protection Law; an AI face-swapping service provider that fails to fulfill it faces remedial orders, warnings, confiscation of illegal proceeds, fines, and other penalties from the regulators.
• Discoverability of AI-generated content
AI face-swapping service providers are held accountable for labeling generated content under the “Internet Information Service Deep Synthesis Management Regulations”. Accordingly, the National Information Security Standardization Technical Committee issued guidelines in late August for identifying AIGC service content, requiring that the implicit watermark in AI-generated image, audio, and video content include at least the name of the service provider.
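An implicit watermark of the kind the guidelines describe can be illustrated with a toy least-significant-bit scheme: the provider’s name is hidden in the lowest bit of each pixel byte, changing each value by at most one and so remaining invisible. This is only a minimal sketch; production systems use far more robust watermarking that survives compression and editing.

```python
def embed_watermark(pixels: bytearray, provider: str) -> bytearray:
    """Hide the provider name in the least significant bits of pixel bytes."""
    payload = provider.encode("utf-8")
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite the lowest bit only
    return out

def extract_watermark(pixels: bytearray, length: int) -> str:
    """Recover a `length`-byte provider name from the pixel LSBs."""
    data = bytearray()
    for byte_idx in range(length):
        value = 0
        for i in range(8):
            value |= (pixels[byte_idx * 8 + i] & 1) << i
        data.append(value)
    return data.decode("utf-8")

image = bytearray(range(256))                # stand-in for raw pixel data
marked = embed_watermark(image, "ExampleAI")  # "ExampleAI" is a made-up name
print(extract_watermark(marked, len("ExampleAI")))  # → ExampleAI
```

Because only the lowest bit of each byte is touched, the marked image is visually indistinguishable from the original, yet the provider name can still be recovered programmatically, which is the “implicit” discoverability the guidelines aim for.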
AI face-swapping products are a significant application of artificial intelligence and have brought a new experience to a large number of users. At the same time, they face major challenges in their development, chiefly in achieving compliance. First, AI-powered face-swapping products have led to a growing number of misuses, including the recent “one-click undressing” incidents, leaks of facial information that infringe portrait rights and copyrights, and fraud risks. Second, beyond the compliance obligations mentioned above, AI face-swapping products, which process facial information, face great difficulty in obtaining authorization for the copyrights, portrait rights, and other rights of all the parties involved.
All of these are very tough tasks: providers of AI face-swapping services must ensure that personal information is processed appropriately under the current legal framework, that personal information protection impact assessments are completed, that registration and other regulatory formalities are carried out, and that the applications they develop are consistent with mainstream values. This is a massive challenge and a big responsibility.