AI Act and Compliance Requirements for AI System Providers
With the adoption of Regulation (EU) 2024/1689, known as the AI Act, the European Union has introduced a unified regulatory framework governing the development and use of artificial intelligence systems within the European market. The measure forms part of a broader regulatory effort to ensure that new technologies remain consistent with the protection of fundamental rights, user safety, and trust in the digital market.
The new framework does not apply only to companies established within the European Union; it also extends to non-EU economic operators that intend to provide artificial intelligence (AI) systems where the output produced by such systems is used within the EU. In this context, understanding how the AI Act operates and the main compliance obligations it introduces becomes an essential step for technology companies seeking to operate or expand in the European market. Approximately one and a half years after its adoption, it is now possible to analyse the Regulation in practice and to consider a concrete case of its application.
Compliance requirements for AI system providers
It is well known that the AI Act introduces a regulatory regime based on a risk-based approach, classifying AI systems according to their potential impact on individuals’ rights and on society. Certain applications are deemed incompatible with European values and are therefore prohibited, while others may be used provided that they comply with specific requirements relating to transparency, safety, and human oversight.
Accordingly, for AI system providers, the first operational step is to correctly identify the risk category into which the developed or marketed system falls. This classification makes it possible to determine which obligations apply and what level of documentation and control must be implemented prior to placing the system on the market.
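The classification step described above can be sketched in code. The four tiers below reflect the AI Act's risk-based approach; the keyword map and the `screen_use_case` helper are assumptions introduced purely for illustration. A real classification requires legal analysis of the Act's annexes, not string matching.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the AI Act's risk-based approach."""
    PROHIBITED = "prohibited"  # practices banned outright (e.g. social scoring)
    HIGH = "high"              # heavily regulated use cases; strict obligations
    LIMITED = "limited"        # transparency obligations only
    MINIMAL = "minimal"        # no specific obligations

# Hypothetical keyword map for a first-pass internal screening only.
_SCREENING_RULES = {
    RiskTier.PROHIBITED: {"social scoring", "subliminal manipulation"},
    RiskTier.HIGH: {"recruitment", "credit scoring", "biometric identification"},
    RiskTier.LIMITED: {"chatbot", "deepfake"},
}

def screen_use_case(description: str) -> RiskTier:
    """Return the most severe tier whose keywords match the description."""
    text = description.lower()
    for tier in (RiskTier.PROHIBITED, RiskTier.HIGH, RiskTier.LIMITED):
        if any(keyword in text for keyword in _SCREENING_RULES[tier]):
            return tier
    return RiskTier.MINIMAL
```

Such a screening routine is useful only as triage: it tells a compliance team which systems need closer legal review before market placement, not what the final classification is.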
Furthermore, close attention is devoted to the quality of the data used to train AI systems. Incomplete or biased datasets may generate discriminatory or inaccurate outcomes, with significant legal and reputational consequences. For this reason, the Regulation requires the adoption of control measures aimed at ensuring the quality and reliability of the system throughout its entire lifecycle.
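One concrete control measure of the kind the paragraph above refers to is a representation audit of the training data. The sketch below is a minimal, assumed example (the function name, record layout, and threshold are not prescribed by the Regulation): it flags groups whose share of the dataset falls below a chosen threshold, one simple signal of a potentially biased dataset.

```python
from collections import Counter

def representation_report(records, attribute, threshold=0.1):
    """Flag under-represented groups in a training dataset.

    records: list of dicts, one per training example.
    attribute: protected attribute to audit (e.g. "region"); an
        assumption for this sketch, not a term from the Regulation.
    threshold: minimum share below which a group is flagged.
    Returns (shares, flagged): each group's share of the data, and
    the groups falling below the threshold.
    """
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    flagged = [group for group, share in shares.items() if share < threshold]
    return shares, flagged
```

Run periodically over the system's lifecycle, as the Regulation's lifecycle-wide quality requirement suggests, such a check can catch drift in the data long before it surfaces as discriminatory output.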
Another key element concerns cooperation between providers and users of AI systems. Providers are required to make available all information necessary to enable users to comply with their own regulatory obligations, including the assessment of potential impacts on fundamental rights. This cooperation is particularly sensitive where the provider operates from outside the EU and must adapt to regulatory standards that differ from those of its domestic legal system.
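The provider-to-user information duty described above can be pictured as a structured documentation bundle. The field names below are assumptions chosen for the sketch, loosely modelled on the kinds of information the AI Act expects providers to make available; they are not the Regulation's own terminology.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ProviderInfoPackage:
    """Illustrative bundle of information a provider hands to a deployer.

    All field names are hypothetical; real instructions for use follow
    the Regulation's requirements, not this sketch.
    """
    system_name: str
    intended_purpose: str
    known_limitations: list = field(default_factory=list)
    human_oversight_measures: list = field(default_factory=list)
    accuracy_metrics: dict = field(default_factory=dict)

    def missing_fields(self):
        """Fields still empty, i.e. gaps a deployer would need filled
        before it can run its own impact assessment."""
        return [name for name, value in asdict(self).items() if not value]
```

A non-EU provider can use a checklist of this kind internally to verify, before shipment, that nothing the European deployer needs for its own obligations has been left out.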
In addition, compliance with the AI Act must be coordinated with other regulatory areas, such as personal data protection, cybersecurity, and intellectual property protection, all of which are highly regulated within the European market. For many companies, this entails reviewing internal processes and contractual models, as well as implementing appropriate systems of technological governance.
For providers established outside the European Union, access to the EU market may also require the appointment of an authorised representative within the EU and engagement with European and national supervisory authorities. Failure to comply with regulatory requirements may result not only in financial penalties, but also in restrictions or bans on the marketing of AI systems within European territory. This occurred with DeepSeek, which was blocked in Italy by the Italian Data Protection Authority on the grounds that the company was transferring European users' personal data without ensuring compliance with EU rules on transparency and privacy.
Conclusions
The AI Act represents a turning point in the regulation of artificial intelligence and requires providers to adopt a more structured approach to managing the technological and legal risks associated with their systems. Compliance cannot be treated as a merely formal exercise; rather, it requires a prior assessment of the system’s characteristics and an adjustment of the company’s organisational and technical processes.
Timely understanding of the Regulation’s scope of application and the implementation of appropriate compliance measures enable companies to reduce the risk of sanctions as well as delays in entering the European market. In an environment increasingly focused on technological accountability and user protection, a sound approach to compliance can become not only a legal obligation, but also a source of trust and a competitive advantage for companies operating in the artificial intelligence sector.