
The Emergence of ChatGPT: Where Do EU and U.S. Regulations Stand?

Amid the excitement and alarm around large language models, regulators around the globe are considering how best to respond.

Since OpenAI debuted ChatGPT in November 2022, large language models (LLMs) have taken the world by storm with their seemingly superhuman abilities. Other tech companies have quickly followed in OpenAI’s footsteps with the releases of Bard, LLaMA, Sydney, and others. Although heralded for their use cases, from writing code to poetry, LLMs can also cause serious harm by facilitating financial crime, spreading false or biased information, compromising data privacy and security, and supplanting jobs. ChatGPT can draft convincing phishing messages, generate malware, and be leveraged to create deepfakes.

Amid the excitement and alarm, regulators around the globe are considering how best to respond. While some regulatory frameworks may already suffice to meet the emerging needs of this brave new world, others may have to be built from the ground up. Meanwhile, regulators must contend with the rapid-fire pace of change in AI technologies, which far outstrips the slow, incremental pace of regulatory change.

The European Union – Leading the Charge

The EU is leading the charge in global efforts to create a comprehensive regulatory framework for AI. Its Artificial Intelligence Act, first proposed in 2021 and expected to be adopted by the end of 2023, takes a risk-based, so-called “human-centric” approach to AI regulation. The Act classifies AI systems into four risk categories, from “minimal” to “unacceptable,” and bans systems in the unacceptable category, such as real-time biometric identification in public spaces and emotion recognition systems.

In response, OpenAI CEO Sam Altman noted that, although the company will attempt to comply, it has several criticisms of the bill’s wording. In its current version, the bill could classify ChatGPT as high-risk, which would impose additional transparency and safety requirements on the system and could potentially lead OpenAI to pull it from the 27-country bloc.

The newest draft of the bill, adopted May 11th, 2023, requires generative foundation models like ChatGPT to disclose that content is AI-generated and to be designed to prevent the generation of illegal content. During the amendment proceedings, members of the European Parliament (MEPs) also called for the creation of a standardized, technology-neutral definition of AI to account for future developments in the industry and ensure the continued relevance of the Act.

Although the EU is close to finalizing the bill, it will likely not be fully enforced until a couple of years after its adoption, giving regulated entities time to ensure compliance before penalties and bans come into play. Still, even a couple of years is a long time given the speed at which AI technologies advance, and this window could allow for unchecked and potentially harmful developments. (In March, a group of over 1,000 technology leaders issued an open letter calling for a pause in the development of advanced AI systems, speaking to the depth of concern surrounding them.)

The U.S. – Historically Slow, but Progress Is Picking Up Steam

In the U.S., attempts to regulate AI have historically been fragmented and slow, but progress has been brewing over the past eight months. In October 2022, the White House issued the Blueprint for an AI Bill of Rights, which aims to “guide the design, development, and deployment” of AI systems such that rights are safeguarded. The emergence of ChatGPT, a mere month after the release of the Blueprint, has put further pressure on U.S. regulators.

In January 2023, the National Institute of Standards and Technology (NIST) released the AI Risk Management Framework, and in April, Senator Chuck Schumer said in a statement that he had drafted a regulatory framework to both manage risks and encourage further innovation in AI. On May 16th, 2023, the U.S. Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing entitled “Oversight of AI: Rules for Artificial Intelligence,” during which Sam Altman appeared to discuss ChatGPT’s capabilities and risks.

Although the U.S. lags behind the EU in its efforts, these are promising first steps, especially with a concrete bill in the works. As the legislative process gains traction, U.S. regulators should pay special attention to the data privacy and security implications of ChatGPT and similar models. There is currently no national regulatory regime for data privacy and security, only a handful of state laws such as the California Consumer Privacy Act (CCPA). This month, the Federal Trade Commission (FTC) opened an investigation into OpenAI, asking the company to provide documentation on its data security and privacy practices.

Moving Forward

Going forward, regulators must seek to foster international cooperation on generative AI: as models like ChatGPT are increasingly adopted and fine-tuned, they will have strong ripple effects across the global economy. Even OpenAI has proposed the creation of an international regulatory body akin to the International Atomic Energy Agency, with the authority to inspect systems, impose restrictions, require audits, and test for compliance with safety standards. As the company’s founders wrote, “Given the possibility of existential risk, we can’t just be reactive.”

For their part, financial institutions and solution providers must stay informed, engage with regulators, and be methodical as they begin any adoption. To learn more about the risks associated with ChatGPT, read our report Large Language Model Threat: What CISOs Should Know About the World of ChatGPT.