Global AI Regulation: A Closer Look at the US, EU, and China

At a glance

  • As the generative artificial intelligence (AI) revolution continues to unfold, AI regulation has become a primary focus of regulators in the US and across the globe. 

  • Though headlines make AI seem like an unruly, unregulated Wild West, there are several laws in force today that directly affect the use and creation of AI, and can help guide companies towards effective AI governance.

  • This guide will explore key AI laws in the United States, the European Union, and China—looking at laws already on the books, as well as those coming down the pipeline. 

AI regulation in the United States

With a market-driven approach and a track record of difficulty building bipartisan consensus at the federal level, the United States has made the most progress on sector- and state-level AI regulation.

Despite several closed-door sessions on the topic of AI, with more on the horizon, no comprehensive federal AI legislation has been enacted in the U.S. However, several laws currently in force can help guide AI governance initiatives, as they directly affect how organizations can deploy and market AI systems and AI-powered tools.

AI and employment

Laws in Illinois and New York—the Illinois AI Video Interview Act and the Automated Employment Decision Tool Law, often referred to as the New York AI Bias Law—are already having an impact on how AI can be used in the context of employment.

Illinois AI Video Interview Act

The Illinois Artificial Intelligence Video Interview Act (AIVIA), passed in 2019 and in effect since 2020, is a pioneering piece of legislation that directly addresses the use of AI during the hiring process. This law requires that employers inform applicants if they intend to use AI systems when evaluating video interviews—ensuring that candidates are aware of their rights and can consent to the use of AI prior to the interview.

The AIVIA also outlines protocols for handling video interview data, including its destruction within 30 days of an applicant's request. An early effort to shape the ethical use of AI in employment, this law will likely set a precedent for future AI-related employment legislation.
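To make the destruction requirement concrete, here is a minimal sketch in Python of how a hiring platform might track the 30-day window once an applicant requests deletion. The record fields and helper names are hypothetical, not drawn from the statute or any specific platform:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record of a stored video interview; field names are
# illustrative only.
@dataclass
class VideoInterview:
    applicant_id: str
    video_uri: str
    deletion_requested_on: date | None = None

    def destruction_deadline(self) -> date | None:
        """AIVIA requires destruction within 30 days of an applicant's request."""
        if self.deletion_requested_on is None:
            return None
        return self.deletion_requested_on + timedelta(days=30)

    def is_overdue(self, today: date) -> bool:
        deadline = self.destruction_deadline()
        return deadline is not None and today > deadline

# Example: an applicant requested deletion on June 1st.
interview = VideoInterview("a-123", "s3://videos/a-123.mp4", date(2023, 6, 1))
print(interview.destruction_deadline())  # 2023-07-01
```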

New York AI Bias Law

In effect since July 5, 2023, New York City's AI Bias Law requires that companies conduct regular audits of their hiring algorithms for bias. These audits must look for bias, both intentional and unintentional, that could discriminate against protected classes.

The law also requires that companies publish the findings of these audits, to support greater transparency about how AI tools are used during the hiring process. By encouraging transparency and accountability, New York's AI bias law aims to minimize the perpetuation of systemic bias and inequality in employment processes through the use of AI.
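As one concrete illustration of what such an audit can report, here is a minimal sketch of a per-group selection-rate "impact ratio", a common disparate-impact metric. The sample data and the four-fifths benchmark mentioned in the comment are illustrative assumptions, not figures prescribed by the law:

```python
# A minimal sketch of the selection-rate "impact ratio" often reported in
# bias audits of hiring tools. The sample data is invented for illustration.

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (candidates_selected, candidates_total).

    Returns each group's selection rate divided by the highest group's
    selection rate. Ratios well below 1.0 (e.g., under the EEOC's
    informal four-fifths benchmark of 0.8) can flag potential bias.
    """
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Illustrative audit data: group -> (selected, total screened)
sample = {
    "group_a": (48, 100),
    "group_b": (30, 100),
}
for group, ratio in impact_ratios(sample).items():
    print(f"{group}: impact ratio = {ratio:.2f}")
# group_a: impact ratio = 1.00
# group_b: impact ratio = 0.62  -> worth investigating
```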

State privacy laws

While requirements vary from state to state, most US state privacy laws on the books today include stipulations around automated decision-making and profiling. Most require that companies disclose when they use AI for automated decision-making and give consumers a way to opt out of this type of data processing.

Some states go even further, requiring companies to disclose the logic their AI systems use when making decisions and to conduct assessments of how these processes may impact consumers. This additional transparency allows consumers to understand how decisions about them are made and to challenge decisions that appear unfair or discriminatory.
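For engineering teams, honoring these opt-outs typically means gating automated decisioning on a stored preference. A minimal sketch, assuming a hypothetical in-memory preference store and a human-review fallback:

```python
from enum import Enum

class DecisionPath(Enum):
    AUTOMATED = "automated"
    HUMAN_REVIEW = "human_review"

# Hypothetical in-memory preference store; a real system would read this
# from a consent-management platform or privacy-preferences service.
OPT_OUTS: set[str] = {"consumer-42"}

def route_decision(consumer_id: str) -> DecisionPath:
    """Route to human review if the consumer opted out of automated
    decision-making, as several state privacy laws require."""
    if consumer_id in OPT_OUTS:
        return DecisionPath.HUMAN_REVIEW
    return DecisionPath.AUTOMATED

print(route_decision("consumer-42"))  # DecisionPath.HUMAN_REVIEW
print(route_decision("consumer-7"))   # DecisionPath.AUTOMATED
```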

These privacy laws represent a significant step towards holding companies accountable for their use of AI and can serve as a model for comprehensive federal legislation in the future. However, as AI continues to evolve rapidly, these laws will need ongoing evaluation to ensure they adequately protect consumers and keep pace with technological advancements.

AI Bill of Rights

Released in October 2022 by the White House Office of Science and Technology Policy, the Blueprint for an AI Bill of Rights represents a significant stride towards the ethical use of artificial intelligence in the United States. The document lays out a set of voluntary, nonbinding principles for companies involved in the development, deployment, and management of AI technologies.

The AI Bill of Rights’ goal is to ensure fairness, inclusivity, and accountability in AI systems—emphasizing the principles of transparency and privacy, as well as advocating for users' rights to know when they are interacting with AI systems and how their personal information is being used.

AI, intellectual property, and copyright

Following several lawsuits from high-profile writers, comedians, and celebrities, intellectual property and copyright law has quickly become one of the most active fronts in the discussion about regulating AI.

In August 2023, a U.S. judge ruled that AI-generated artwork cannot be copyrighted. The judge was presiding over a case brought against the US Copyright Office by plaintiff Stephen Thaler, after the office repeatedly refused to issue a copyright for images created by Thaler's Creativity Machine algorithm.

The US Copyright Office stated that "only works created by a human can be copyrighted under United States law, which excludes paintings specifically produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author." 

This stance has sparked ongoing debates among legal scholars, with some arguing for the need to revise copyright laws in light of AI advancements.

AI and the Federal Trade Commission

The Federal Trade Commission (FTC) has ramped up its scrutiny of artificial intelligence and AI-powered products in recent years, aiming to ensure consumer protection and fair business practices. Recognizing the potential for misuse, the FTC has been proactive in its examination of AI ethics, algorithms, and data practices.

A notable example is the ongoing investigation into OpenAI, a leading AI research organization. The FTC's interest in OpenAI demonstrates its intent to ensure that AI systems are developed and used in a manner that does not infringe upon consumer rights or lead to unfair or deceptive practices.

In addition, the FTC has been vigilant in cracking down on baseless claims about AI-powered products. For instance, companies making false or misleading claims about the capabilities or benefits of their AI technologies have been subjected to FTC investigation and enforcement actions.

The FTC has also issued guidance for companies using AI, emphasizing the need for transparency, explainability, fairness, and robust data security practices. In this light, it is clear that the FTC's role will continue to evolve with AI advancements, reinforcing the need for businesses to stay updated with these regulatory shifts and comply with all FTC guidelines related to AI.

The EU AI Act

With the European Parliament adopting its negotiating position on the EU AI Act in June 2023, the EU has continued its role as a leader in passing strong legislation for emerging technologies. The EU AI Act takes a risk-based approach, establishing four risk categories for AI systems: unacceptable risk, high risk, limited risk, and minimal risk.

Unacceptable risk

This category refers to AI systems that pose extremely high levels of risk to consumers, including systems that manipulate human behavior, exploit vulnerable groups, or use social scoring as a means of consumer control and governmental decision making. AI systems that fall into this category are banned under the EU AI Act. 

High risk

High risk AI systems are defined as those that negatively impact consumer safety or fundamental rights; the EU AI Act divides them into two categories.

The first category covers AI systems used in products like planes, medical devices, vehicles, and toys, i.e. products regulated under the EU's product safety laws. The second covers AI systems used in the following industries or contexts:

  • Biometric identification

  • Border control, migration, and asylum

  • Critical infrastructure

  • Essential public and private services

  • Legal interpretation

  • Education and training

  • Employment

  • Law enforcement

AI systems in the high risk category must meet stringent requirements before they can be put on the market, including: 

  • Deploying adequate risk management systems

  • Using high-quality datasets to minimize bias

  • Providing detailed documentation on the system

  • Implementing appropriate human oversight measures

  • Demonstrating a commitment to system security and accuracy

Limited risk 

Limited risk AI systems include chatbots and systems that create or manipulate visual or audio content. These types of systems are subject to specific transparency requirements and must make users aware they're interacting with an AI.

Minimal risk 

Minimal risk AI systems, such as spam filters, represent the majority of AI systems in use today. These systems can operate freely as they pose little to no risk to citizens' rights.
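As a rough, non-authoritative illustration of how this tiered structure might translate into a first-pass compliance triage tool, here is a sketch in Python. The keyword categories are simplified assumptions condensed from the lists above; the Act's actual annexes are far more detailed, and real classification requires legal review:

```python
# Simplified triage of the EU AI Act's four risk tiers. Keyword sets are
# illustrative shorthand, not the Act's legal definitions.

BANNED_PRACTICES = {"social_scoring", "behavioral_manipulation"}
HIGH_RISK_CONTEXTS = {
    "biometric_identification", "border_control", "critical_infrastructure",
    "essential_services", "legal_interpretation", "education",
    "employment", "law_enforcement",
}
TRANSPARENCY_ONLY = {"chatbot", "synthetic_media"}

def risk_tier(use_case: str) -> str:
    if use_case in BANNED_PRACTICES:
        return "unacceptable (prohibited)"
    if use_case in HIGH_RISK_CONTEXTS:
        return "high (pre-market requirements apply)"
    if use_case in TRANSPARENCY_ONLY:
        return "limited (transparency obligations)"
    return "minimal (no specific obligations)"

print(risk_tier("employment"))   # high (pre-market requirements apply)
print(risk_tier("spam_filter"))  # minimal (no specific obligations)
```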

Outside of establishing a risk framework, the EU AI Act also establishes a European Artificial Intelligence Board—tasked with advising and assisting the Commission on matters related to AI. The Board plays a crucial role in ensuring a harmonized application of the regulation across the EU.

AI Regulation in China

China’s approach to AI is characterized by a dual emphasis on promoting AI innovation, while ensuring state control over the technology. This is evident in key policy documents such as the New Generation Artificial Intelligence Development Plan, which sets out China's strategy for becoming a global leader in AI by 2030.

In terms of regulation, China takes a more vertical approach—using discrete laws to tackle singular AI issues. This stands in contrast to the EU AI Act, which takes a notably more horizontal approach by applying flexible standards and requirements across a wide range of AI applications. 

So far, China’s AI regulation has addressed three distinct challenges: AI-driven recommendation algorithms, deep synthesis tools (often used to create deepfakes), and facial recognition.

AI-driven recommendation algorithms

The regulation around AI recommendation algorithms requires that service providers in this space limit discrimination, work to mitigate the spread of negative information, and address exploitative work conditions for delivery workers. This law also grants Chinese consumers the right to turn off algorithmic recommendations and receive explanations when an algorithm significantly impacts their interests.

Deep synthesis

China’s deep synthesis regulation covers the use of algorithms to synthetically generate or alter online content. This law requires that deep synthesis content conforms to information controls, is labeled as synthetically generated, and that providers take steps to prevent misuse. It also includes vague censorship requirements and mandates that deep synthesis providers register their algorithms.
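A minimal sketch of the labeling requirement, attaching a machine-readable provenance tag to generated content. The schema and field names are hypothetical; the regulation does not prescribe a specific format, and it also contemplates visible watermarks:

```python
import json
from datetime import datetime, timezone

# Hypothetical provenance label for synthetically generated content.
# Field names are illustrative assumptions, not a regulatory schema.
def label_synthetic(content: str, model_id: str) -> str:
    record = {
        "content": content,
        "synthetic": True,                    # disclose AI generation
        "generator": model_id,                # registered algorithm/model
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

print(label_synthetic("A rendered news anchor reads the forecast.", "model-x1"))
```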

Facial recognition

Though China’s law enforcement and surveillance systems make prolific use of AI, the country has introduced regulations meant to address the use of this technology by non-governmental actors. This regulation stipulates that facial recognition tools may only be used for a specific purpose and when other tools won’t do the job, and that use of these tools in public places must serve public safety.

In sum, China's approach to AI regulation exhibits a distinctive blend of innovation promotion, state control, and societal protection, reflecting its unique political, economic, and cultural context.

Conclusion

As AI continues to evolve, so too do the regulatory frameworks governing its use. 

While the US has a more decentralized approach focusing on specific applications of AI, the EU has taken a comprehensive and risk-based approach. In contrast, China combines national-level, provincial, and local regulations with an emphasis on upholding state power and cultural values. 

As these regulatory landscapes continue to evolve, they will undoubtedly influence AI development and deployment globally.


About Transcend

Transcend is the governance layer for enterprise data—helping companies automate and future-proof their privacy compliance and implement robust AI governance across an entire tech stack.

Transcend Pathfinder gives your company the technical guardrails to adopt new AI technologies with confidence, while Transcend Data Mapping goes beyond observability to power your privacy program with smart governance suggestions.

Ensure nothing is tracked without user consent using Transcend Consent, automate data subject request workflows with Privacy Requests, and mitigate risk with smarter privacy Assessments.

