China's Initiative to Regulate Anthropomorphic AI

Regulatory Updates
Professional contributor
Published on February 15; revised on February 15

Source: 金杜研究 (KWM Research)

Publication date: February 14, 2026



People are increasingly feeling the impact of AI and the changes it brings to work and daily life. Governments around the world are weighing appropriate regulatory approaches, especially with respect to AI's more concerning aspects, and China is no exception.

Anthropomorphic or companion AI is drawing the attention of many regulators. A number of widely discussed cases show that prolonged interaction with "virtual humans" created by AI may lead to a decline in users' real-world social skills and even blur the ethical boundaries between reality and virtual environments. Studies warn that this blurring may give rise to ethical risks such as pathological emotional attachment, social isolation, and privacy infringements.

For example, a teenager in China became addicted to an AI chatbot and, under the influence of its suggestive conversations, engaged in extreme self-harming behavior. Similar cases have been reported in other countries.

Against the backdrop of increasingly sophisticated AIGC technologies and more refined algorithmic governance rules, China has introduced its first regulation specifically targeting anthropomorphic interactive services. The Interim Measures for the Management of Anthropomorphic AI Interactive Services (Exposure Draft) (the "Draft Interim Measures") was released on December 27, 2025. It targets products or services that use artificial intelligence technology to provide the public within the territory of the People's Republic of China with simulated human personality traits, thought patterns, and communication styles, enabling emotional interaction with humans through text, images, audio, video, and other means ("Anthropomorphic Interactive Services"). In practice, publicly available product forms such as emotional companionship, AI companions, and role-playing dialogue fall within the regulatory scope of the Draft Interim Measures.

01

Security Assessment and Filing Requirements on Providers

According to Article 21 of the Draft Interim Measures, providers shall conduct a security assessment and submit the assessment report to the provincial-level cyberspace administration department with jurisdiction if any of the following circumstances applies:

launching services with anthropomorphic interactive features, or adding such functions;

implementing new technologies or applications resulting in significant changes to the Anthropomorphic Interactive Services;

having over one million registered users or over 100,000 monthly active users;

where providing the Anthropomorphic Interactive Services may pose risks to national security, public interests, or the legitimate rights and interests of individuals and organizations, or where security measures are inadequate; and

other circumstances specified by the national cyberspace administration department.
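The user-scale and feature triggers above lend themselves to a simple compliance check. The following is a minimal sketch, assuming hypothetical field and function names; only the numeric thresholds come from the text of Article 21.

```python
# Illustrative sketch only; all identifiers are hypothetical. The thresholds
# model the user-scale triggers listed in Article 21 of the Draft Interim Measures.
from dataclasses import dataclass

REGISTERED_USER_THRESHOLD = 1_000_000   # "over one million registered users"
MONTHLY_ACTIVE_THRESHOLD = 100_000      # "over 100,000 monthly active users"

@dataclass
class ServiceStatus:
    newly_launched_anthropomorphic_feature: bool  # launch or addition of the feature
    significant_technical_change: bool            # new tech causing significant change
    registered_users: int
    monthly_active_users: int
    identified_security_risk: bool                # risk found or safeguards inadequate

def assessment_required(s: ServiceStatus) -> bool:
    """Return True if any Article 21 trigger for a security assessment applies."""
    return (
        s.newly_launched_anthropomorphic_feature
        or s.significant_technical_change
        or s.registered_users > REGISTERED_USER_THRESHOLD
        or s.monthly_active_users > MONTHLY_ACTIVE_THRESHOLD
        or s.identified_security_risk
    )
```

Note that the triggers are disjunctive: crossing either user-scale threshold alone is enough, even with no feature or technology change.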

Furthermore, providers of Anthropomorphic Interactive Services ("Providers") shall also fulfill the filing obligations under the Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services.

02

Management of Training Data

According to Article 10 of the Draft Interim Measures, when conducting data processing activities such as pre-training and optimization training, the Providers are required to strengthen the management of training data and comply with the following requirements:

use datasets that conform to the core socialist values and reflect the fine traditional Chinese culture;

carry out cleansing and annotation of training data, enhance the transparency and reliability of training data, and prevent activities such as data poisoning and data tampering;

improve training data diversity and enhance the safety of model-generated content through measures like adversarial training and negative sampling;

assess the safety of synthetic data for model training and key capability optimization;

strengthen routine inspections of training data, periodically iterate and upgrade data, and continuously optimize the performance of products and services; and

ensure the legitimacy and traceability of training data sources, adopt necessary measures to ensure data security, and prevent the risk of data leakage.

For users' interaction data or sensitive personal information, unless otherwise provided by laws or administrative regulations, or unless the user has given separate consent, the Providers shall not use such data for model training.

03

Protection of Minors and Elderly Users

A. For Minors

Article 12 of the Draft Interim Measures specifically addresses the protection of minors. It requires the Providers to establish a dedicated minor mode, offering personalized safety settings such as minor-mode switching, periodic reality reminders, and usage time limits.

The involvement of guardians is also emphasized. When providing emotional companionship services to minors, the Providers shall obtain explicit consent from guardians. The Providers shall offer guardian control functions enabling guardians to receive real-time safety risk alerts, review summarized usage information, block specific characters, limit usage duration, prevent in-app purchases, and so on. In addition, when collecting data under the minor mode and providing it to third parties, separate consent from the guardian shall also be obtained. Guardians may also request that the Providers delete the minor's historical interaction data.

Moreover, the Providers shall be capable of identifying minors. When a user is identified as a suspected minor, the system shall automatically switch to the minor mode while protecting the user's personal privacy, with an appeal channel provided.

Another noteworthy requirement is that, with respect to personal information of the minors, the Providers shall conduct annual compliance audits—either independently or through entrusted professional institutions—to verify their adherence to laws and administrative regulations when processing minors' personal information.

B. For Elderly Users

Article 13 of the Draft Interim Measures establishes a framework of special protection for elderly users. The Providers shall guide seniors to designate emergency contacts. Should any situation endangering the user's life, health, or property arise during use of the service, the Providers shall promptly notify the emergency contact and offer access to social-psychological support or emergency assistance channels.

Furthermore, the Providers shall not offer services that simulate interactions with the elderly user's relatives or specific acquaintances.

04

Informing Users That They Are Interacting with AI and Preventing Addiction

Articles 16 and 17 of the Draft Interim Measures establish requirements on interactive transparency. The Providers shall prominently notify users that they are interacting with AI rather than a natural person. Upon initial use or re-login, or when detecting signs of excessive reliance or addictive tendencies, the Providers shall dynamically alert users via pop-ups or similar methods that the interaction content is AI-generated. A similar reminder is required when a user has continuously used Anthropomorphic Interactive Services for more than two hours.
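The two-hour continuous-use reminder can be sketched as a simple session timer. All names below are illustrative assumptions, not terminology from the Measures:

```python
# Minimal sketch, assuming a per-session clock: once a user has been in-session
# past the two-hour mark, a reminder is due unless one has already been shown
# since that mark was crossed. Function and variable names are hypothetical.
from datetime import datetime, timedelta
from typing import Optional

CONTINUOUS_USE_LIMIT = timedelta(hours=2)  # the two-hour threshold in the text

def reminder_due(session_start: datetime, now: datetime,
                 last_reminder: Optional[datetime]) -> bool:
    """True when continuous use exceeds two hours and no reminder has been
    shown since the threshold was crossed."""
    threshold_time = session_start + CONTINUOUS_USE_LIMIT
    if now < threshold_time:
        return False
    return last_reminder is None or last_reminder < threshold_time
```

A production system would also need to define when a pause resets the session clock, which the Draft Interim Measures leave to the Provider.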

05

Intervention for Self-Harm, Suicide, and Extreme Emotional States

Article 11 of the Draft Interim Measures establishes an intervention and response mechanism for users exhibiting abnormal emotional states. The Providers shall possess the capability to identify user status. When detecting extreme emotional states or signs of addiction, the Providers shall take necessary measures to intervene. Similarly, when identifying high-risk tendencies involving threats to users' life, health, or property safety, the Providers shall promptly provide reassurance, encourage users to seek assistance, and offer professional assistance channels.

The Providers are also required to establish emergency response mechanisms. Where a user explicitly expresses an intent to commit suicide or self-harm, or in other extreme scenarios, human operators shall take over the conversation and promptly contact the user's guardian or emergency contacts. For minors and elderly users, the Providers shall collect guardian and emergency-contact details at the registration stage.
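The intervention and emergency-response mechanism described above can be sketched as a two-tier escalation flow. The keyword classifier and action names below are purely illustrative; a real Provider would rely on trained risk models and its own operational playbook:

```python
# Hypothetical two-tier escalation sketch: extreme emotional states trigger
# reassurance and help channels; explicit self-harm intent triggers human
# takeover and contact of the guardian or emergency contact.
from enum import Enum, auto

class Risk(Enum):
    NONE = auto()
    DISTRESS = auto()            # extreme emotional state
    EXPLICIT_SELF_HARM = auto()  # explicit suicide/self-harm intent

def classify(message: str) -> Risk:
    """Toy keyword classifier; a real system would use a trained model."""
    text = message.lower()
    if "suicide" in text or "end my life" in text:
        return Risk.EXPLICIT_SELF_HARM
    if "hopeless" in text or "can't go on" in text:
        return Risk.DISTRESS
    return Risk.NONE

def respond(risk: Risk) -> list:
    """Actions corresponding to each tier of the mechanism described above."""
    if risk is Risk.EXPLICIT_SELF_HARM:
        return ["human_takeover", "contact_guardian_or_emergency_contact"]
    if risk is Risk.DISTRESS:
        return ["reassure_user", "offer_professional_help_channels"]
    return []
```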

06

Protection of Users' Rights

a) Right of consent. Under Article 14 of the Draft Interim Measures, except as otherwise provided by law or with the explicit consent of the rights holder, user interaction data shall not be provided to third parties.

b) Right of deletion. Under Article 14 of the Draft Interim Measures, the Providers shall offer users the option to delete interaction data, enabling users to remove historical interaction data, such as chat records.

c) Right to exit. Under Article 18 of the Draft Interim Measures, when providing emotional companionship services, the Providers shall offer convenient exit options and shall not obstruct users from voluntarily terminating the service. Upon receiving a user's request to exit via buttons, keywords, or other methods within the human-machine interface or window, the service shall be promptly discontinued.

d) Right to complain. Under Article 20 of the Draft Interim Measures, the Providers shall establish effective complaint and reporting mechanisms, set up convenient channels for submitting complaints and reports, publish processing procedures and response timelines, and promptly accept, address, and provide feedback on the outcomes of such complaints.
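As one illustration, the right to exit implies that a service must recognize exit requests arriving through different channels (buttons, keywords, and so on). A minimal sketch, with assumed event names and keywords:

```python
# Hypothetical sketch of Article 18 exit handling: an exit request may arrive
# as a button event or as a recognized keyword in chat. The keyword set and
# event names are assumptions for illustration, not from the Measures.
EXIT_KEYWORDS = {"exit", "quit", "stop", "end conversation"}

def is_exit_request(event_type: str, payload: str) -> bool:
    """True for an exit-button press or a recognized exit keyword in chat."""
    if event_type == "button" and payload == "exit":
        return True
    if event_type == "message":
        return payload.strip().lower() in EXIT_KEYWORDS
    return False
```

The point of the rule is the "shall not obstruct" clause: once such a request is detected, the service must be discontinued promptly rather than met with retention prompts.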

07

The Providers' Obligation to Retain Relevant Records

According to Article 9 of the Draft Interim Measures, the Providers shall fulfill security responsibilities throughout the entire lifecycle of Anthropomorphic Interactive Services, clearly defining security requirements for each phase, including design, operation, upgrades, and termination. Security measures shall be designed and implemented concurrently with service functionality to enhance inherent security levels. Security monitoring and risk assessment shall be strengthened during operation, and the Providers shall promptly detect and correct system deviations, address security issues, and retain network logs in accordance with laws and regulations.

Where a user poses a significant security risk, the Providers shall take remedial measures such as restricting functionality or suspending or terminating service to that user, retain relevant records, and report to the competent authorities as stipulated in Article 23 of the Draft Interim Measures.

08

The Obligations of Application Platforms: Compliance Assurance

Besides the Providers, Article 24 of the Draft Interim Measures also imposes compliance assurance obligations on application platforms. Platforms such as internet app stores shall implement security management responsibilities, including routine review of app listings and emergency response procedures. They shall verify the security assessment and filing status of applications providing Anthropomorphic Interactive Services. For violations of relevant national regulations, they shall promptly take measures such as refusing listing, issuing warnings, suspending services, or removing listings.

09

Liability and Penalties

Articles 25 to 29 of the Draft Interim Measures set out the legal liability and penalties for violations.

In general, where a Provider violates the provisions of the Draft Interim Measures, the competent authorities shall impose penalties in accordance with the provisions of laws and administrative regulations. Where no such provisions exist in laws or administrative regulations, the competent authorities shall, within their respective jurisdictions, issue warnings or public reprimands and order rectification within a specified time limit. Where the Provider refuses to rectify or where the circumstances are serious, the competent authorities shall order the suspension of relevant services.

For specific violations, where a security assessment has not been conducted in accordance with the Draft Interim Measures, the Provider shall be ordered by the provincial-level cyberspace administration department with jurisdiction to conduct a reassessment within a specified timeframe. Where deemed necessary, on-site inspections and audits shall be conducted on the Provider.

Where provincial-level or higher cyberspace administration departments and relevant competent authorities discover significant security risks in Anthropomorphic Interactive Services or the occurrence of security incidents, they may, in accordance with prescribed authority and procedures, conduct interviews with the legal representative or principal responsible person of the Provider. The Provider shall take measures as required to rectify the situation and eliminate potential hazards.

10

Similar Legislation in Other Jurisdictions

Similar to China's Draft Interim Measures, some U.S. states, such as California and New York, have also introduced regulatory measures for anthropomorphic/companion AI, namely California's Companion Chatbot Law (the "CCC Law") and New York's Artificial Intelligence (AI) Companion Models Law.

The CCC Law, which entered into force on January 1, 2026, targets companion chatbots, defined as artificial intelligence systems with a natural language interface that provide adaptive, human-like responses to user inputs and are capable of meeting a user's social needs, including by exhibiting anthropomorphic features and sustaining a relationship across multiple interactions. The "Operator," defined as any person who makes a companion chatbot platform available to users in the state, bears obligations concerning the protection of minors, interactive transparency, suicide prevention, and so on.

Where the Operator knows that a user is a minor, it shall:

disclose to the user that the user is interacting with artificial intelligence;

provide by default a clear and conspicuous notification to the user, at least every three hours of continuing companion chatbot interaction, reminding the user to take a break and that the companion chatbot is AI-generated and not human; and

adopt reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.

With respect to interactive transparency, if a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human, the Operator shall issue a clear and conspicuous notification indicating that the companion chatbot is AI-generated and not human.

To prevent suicide, the Operator shall maintain a protocol for preventing the production of content concerning suicidal ideation, suicide, or self-harm to the user, including, but not limited to, providing a notification that refers the user to crisis service providers, such as a suicide hotline or crisis text line, if the user expresses suicidal ideation or self-harm. Moreover, beginning July 1, 2027, the Operator shall annually report to the Office of Suicide Prevention on items related to suicide prevention, such as protocols put in place to detect, remove, and respond to instances of suicidal ideation by users.

Conclusion

As AI technology continues to advance, a balanced regulatory regime is needed to harness the advantages of AI while minimizing its downsides. The regulation of anthropomorphic and companion AI is one such challenge of the AI age.

Authors

Atticus Zhao

Partner

Corporate & Commercial Group

atticus.zhao@cn.kwm.com

Areas of Practice: M&A, foreign direct investment, corporate restructuring, data and privacy protection

Atticus has rich experience as a corporate and commercial lawyer. He has advised many well-known multinational and domestic companies on matters including equity or asset sales and purchases, corporate restructuring, establishment of joint ventures, franchising, and data and privacy protection. Atticus has advised clients across various industries, including automotive, AI, IoT, high-tech, retail, education, modern agriculture, shipping, manufacturing, and pharmaceuticals. He has an in-depth understanding of legal issues in the fields of intelligent vehicles and the internet of vehicles, and has provided legal services to many domestic and foreign clients in the areas of mergers and acquisitions, market access, and compliance.

Dannie Sima

Associate

Corporate & Commercial Group

Thanks to intern Jiang Huanyu for her contribution to this article.


相关话题
  • 炜衡视点|港股上市中国证监会备案要点:赴港上市备案制下的合规新棋局
  • 法办函[2025]1595号:关于明确虚开增值税专用发票“虚抵进项税额”行为性质建议的答复
  • 终于走出执法困境,“故意毁财”可以只罚款了!
  • 【公安机关办理行政案件时限期限】公安机关办理行政案件时限期限(完整版)包含新修订的《治安管理处罚法》条款
  • 解读 | 退休返聘中的涉税问题分析
  • 最高法调研称: 毒品犯罪应排除记录封存之外,记录封存一刀切易导致不公