China, EU and U.S. regulators ponder generative AI regulation, raising compliance considerations for businesses.
The rise of generative artificial intelligence (AI) applications has sparked regulatory reforms the world over as regulators consider risk management. Proposed regulations have centered on governance, personal data protection and intellectual property concerns.
The Cyberspace Administration of China (CAC) is at the forefront of regulatory action, having recently outlined a proposed regulatory regime to address technology and operational risks specific to the use of generative AI. The CAC has identified key areas of operational risk and governance, covering a range of legal concerns including data security, possible misrepresentation, disclosure and intellectual property.
Highlights of the draft measures on generative AI regulation
The CAC issued draft measures on April 11, titled Administrative Measures for Generative Artificial Intelligence Services. The draft measures, open for public consultation until May 10, 2023, are intended to regulate all generative AI services available to the public. They would apply to research and development, as well as the use of generative AI, within the territory of China. In their current form, the draft measures leave open the possibility of extraterritorial application to generative AI service providers based outside the Greater China region that engage with consumers in China or process data from China.
Proposed regulatory requirements in the draft measures pertain to operational and risk management aspects of AI-related information technology, including governance, data security, algorithmic transparency and content moderation.
The draft measures seek to impose liability on service providers for content moderation. Service providers would bear responsibility for ensuring that AI-generated content is accurate and does not endanger China’s national security. They would also be required to adopt measures to avoid discrimination, to use legitimate data to train their generative AI, and to ensure that their generative AI services comply with Chinese intellectual property laws.
Liability implications for the use of data to train generative AI are potentially vast. Service providers could be held accountable for any infringement of intellectual property laws and would bear the burden of ensuring that they have obtained valid consent for using personal information. Additionally, service providers are expected to guarantee the “objectivity and diversity” of training data sets, a requirement the draft measures do not define.
Also of note, service providers will be required to complete a security assessment with the CAC prior to offering generative AI services to the public. The assessment requires service providers to demonstrate that they have effective controls in place to verify the real identity of users, protect personal information and maintain mechanisms for content review. The assessment requirements are currently applicable to internet information sharing services such as live streaming. In addition to the security assessment, generative AI service providers would also be required to disclose algorithmic information to the CAC.
The draft measures build on provisions released by the CAC earlier this year to regulate the use of deep synthesis technology. The scope of the guidance, which introduced numerous compliance requirements for technology service providers, extends to the same applications used in generative AI. As the sector continues to evolve, businesses can expect the CAC to keep pace with supervision and policymaking.
Elsewhere, regulations governing the use of generative AI are emerging as well. Regulators in Europe and the United States have begun to shape their respective approaches to some of the issues addressed by the CAC in the draft measures.
On the same day the CAC issued its draft measures, the National Telecommunications and Information Administration, a department of the U.S. Department of Commerce, issued a request for comment on AI accountability policies. The request, open until June 12, seeks input on policies that can support the development of AI audits, certifications and other mechanisms to attest to the trustworthiness of AI-based services.
Across the European Union, copyright infringement has dominated recent discussions of the proposed EU AI Act, an expansive piece of legislation that, if implemented in its current form, would regulate all products and service providers that use AI. Early proposals under the Act, which received backing from Members of the European Parliament at the start of this month, would require companies that use generative AI to disclose any copyrighted material used to develop their systems.
The EU AI Act proposes a risk-based approach to regulating AI, with higher-risk services subject to more stringent supervision and transparency requirements.
Generative AI technology promises to transform the global business landscape; at the same time, regulatory reform will influence the development of this evolving sector and how it can be used by businesses.
The rapid development and widespread adoption of generative AI have placed technology and legal risks at the center of proposed legislative reforms. Intellectual property is a focus area for regulators, and in some jurisdictions, copyright infringement litigation has already arisen over the use of copyrighted material in generative AI services.
While it remains to be seen how copyright issues will be regulated in different regions, companies engaging with generative AI should consider pathways to developing AI training sets with legally licensed data.
Data privacy and data security are additional areas of concern for regulators. The CAC has already highlighted expectations in the draft measures that generative AI services observe China’s personal information protection laws when using personal data to train AI. The EU AI Act is likely to impose similar compliance standards, in tandem with the General Data Protection Regulation (GDPR).
From a practical standpoint, complying with copyright laws and with personal data protection regulations governing user consent, the right to opt out of data collection and the right to correct inaccuracies in personal data could prove challenging for generative AI service providers. These areas have also historically been contentious in regulatory enforcement. As generative AI gains even wider adoption, businesses must stay up to date with regulatory developments and emerging enforcement trends.
This article first appeared on Thomson Reuters Regulatory Intelligence and is featured on Business Insight with permission.