Written by: Lawyer Xiao Za's Team
Recently, the Beijing Internet Court disclosed two typical AI infringement cases on its official WeChat account, addressing legal blind spots in the application of generative artificial intelligence (E-case E-trial | AI generation is no defense! A publisher who fails to fulfill its verification obligation and harms another's reputation must bear liability). The court made clear that AI technology itself is neutral, but users and platforms must observe legal boundaries: AI generation cannot serve as a shield against infringement liability.
Case 1: Clarifying AI User Obligations
After the death of a well-known streamer, the defendant, seeking to attract traffic, used AI generation tools to fabricate false narratives, concocting claims such as that the streamer had "long been live-streaming while ill" and that "the team forced high-intensity work," and produced a short video that spread widely online. In the lawsuit, the defendant repeatedly argued that the content was generated automatically by AI, that they bore no subjective fault, and that they should therefore bear no responsibility.
On examination, the Beijing Internet Court rejected this defense. The court pointed out that AI tools are merely auxiliary means, and that a content publisher has a legal obligation to verify the authenticity of information. The defendant failed to conduct a substantive review of the AI-generated content, did not indicate the source of the information, and published and spread the false information directly over the internet, demonstrating clear subjective fault. The content in question attracted a large number of negative comments and directly lowered the plaintiff's social evaluation, constituting an infringement of reputation rights. The court ultimately ordered the defendant to apologize publicly and compensate for the corresponding economic losses.
This case established an important principle: the user of AI-generated content is the primary bearer of responsibility. Regardless of whether content is generated by AI, the publisher must fulfill an obligation of review and verification. A publisher who spreads false information because this obligation was inadequately fulfilled, thereby harming another's reputation, must bear the corresponding liability for infringement. AI technology does not relieve human actors of their legal duty of care.
Case 2: Strengthening Platform and Merchant Responsibility
A cultural media company, without authorization, used AI deep-synthesis technology to swap the face of a well-known actress onto a character in a 44-episode online short drama and profited from it across various platforms. The producer argued the resemblance was a "technological coincidence," while the platform claimed exemption on the grounds that it was "unaware" and "only provided playback services."
The court ruled that the core of portrait rights protection lies in recognizability rather than complete replication. The AI face-swapped image bore a high resemblance to the plaintiff in features and expressions, making it easy for the average viewer to recognize her as a specific actress. Under Article 1019 of the Civil Code, no organization or individual may infringe another's portrait rights by means such as using information technology to falsify the portrait, nor may they produce, use, or publicly disclose another's portrait without consent. The defendant, for profit, improperly used AI technology to fabricate and disseminate a short drama featuring another's portrait, which constituted infringement.
In response to the platform's defense, the court applied the "red flag principle" to find joint infringement. The short drama used a clearly identifiable star's portrait and was obviously infringing content. As a professional internet service provider, the platform owed a higher duty of care, yet it failed to fulfill its reasonable review responsibilities, which constituted aiding the infringement. The ruling established "recognizability" as the core criterion for finding AI portrait rights infringement: as long as the general public can readily identify the person depicted, infringement is established.
Technological Progress and Rights Protection Dilemmas
(1) Difficulties in Evidence Collection
AI-generated content is easy to delete, easy to modify, and difficult to trace to its source, making evidence collection the first hurdle on the path to rights protection. In AI face-swapping and AI text infringement cases, infringing content can be mass-generated in a short time and spread rapidly, while the original data is easily deleted, overwritten, or altered. In the face-swapping case above, the court required the producer to reproduce the video generation process, but the producer evaded the request, citing "technical failure," "data loss," and "algorithmic auto-generation with no records retained," and refused to disclose the key creation process and material sources.
At the same time, the "black-box" nature of AI-generated content makes it difficult for rights holders to demonstrate a direct link between the infringing content and the act of infringement. Even when the infringing product has been fixed as evidence, tracing the algorithm's logic, training data, and operational traces is rarely feasible. Rights holders are thus left with a heavy burden of proof and difficulty constructing a coherent chain of evidence, significantly raising the cost of rights protection and the risk of failure.
(2) Ambiguity of Platform Responsibility
Network platforms are the main carriers of information dissemination, yet the boundaries of their responsibility remain unclear in AI infringement scenarios. Most platforms invoke the principle of "technical neutrality," citing "merely providing information storage space," "automatic algorithmic recommendation," and "not having received infringement notices" as grounds for exemption, while neglecting their obligations of prior review, ongoing supervision, and post-hoc disposal.
(3) Cross-Border Rights Protection Issues
Currently, a large number of AI generation tools and deep synthesis models are deployed on overseas servers, allowing infringing content to be rapidly disseminated to domestic audiences through foreign platforms, with infringers, server addresses, and operational information all concealed overseas.
Rights holders facing cross-border infringement encounter multiple obstacles, including conflicts of applicable law, difficulties in cross-border evidence collection, and challenges in enforcing judgments. Legislative standards for AI infringement and the protection of personal rights vary significantly across countries, and even when domestic courts issue infringement judgments, those judgments struggle to exert substantial binding force on overseas platforms and infringers. Cross-border AI infringement thus becomes a gap in rights protection, leaving rights holders able to do little more than look on.
In Conclusion
The Beijing Internet Court's judgments in these two cases provide a clear legal framework for the AI industry: technological innovation must not cross the legal bottom line, and AI applications must operate within the law. Individual users, content producers, and internet platforms alike should cultivate AI compliance awareness, earnestly fulfilling their legal obligations while enjoying the benefits of the technology, so as to jointly maintain a healthy and orderly digital ecosystem.