Navigating AI Regulations: A Comparative Analysis of US and EU Frameworks


Posted by Greg McDonough

As artificial intelligence (AI) continues to drive almost every aspect of technology and expand into uncharted territory, there is growing global concern about the rules and regulations being put in place to protect the general public from its abuses. In the first of this two-part series, we will look at the regulatory landscape in the US, which recognizes the need for guardrails but has yet to develop a comprehensive plan for regulating the development and deployment of AI-related technology.

Despite this recognition, the current US approach relies on a patchwork of interpretations of existing laws and adaptations of regulations written before AI's emergence. The rapid pace at which AI is developing makes it necessary for the US to take a more proactive approach: regulation strong enough to protect against potential misuse, while still allowing enough flexibility for innovation to flourish.

A Broad-Stroke AI Blueprint

In January 2021, the National AI Initiative Office was established within the White House Office of Science and Technology Policy (OSTP) under the National Artificial Intelligence Initiative Act of 2020. As part of the government’s commitment to protecting Americans’ civil liberties and maintaining democratic principles, the OSTP identified five key areas that should shape the design and implementation of automated systems:

  • Safe and effective systems
  • Algorithmic discrimination protections
  • Data privacy
  • Notice and explanation
  • Human alternatives, consideration, and fallback

These guidelines form the Blueprint for an AI Bill of Rights. The blueprint, along with Executive Order 14110, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, forms the backbone of US AI policy. Although these documents provide a guiding vision, neither carries the force of legislation.

Sector-Specific AI Regulations

In addition to the executive orders, sector-specific regulations serve as a means of governing AI development in areas such as healthcare and finance. In healthcare, AI is often referred to as augmented intelligence, where it supports healthcare providers in treatment planning, regulatory adherence, and patient monitoring, as well as in the development of software as a medical device (SaMD) systems. However, the US healthcare system currently relies on interpretations of the Health Insurance Portability and Accountability Act (HIPAA), a law originally designed to protect patient privacy and rights, as a means of curtailing AI abuses in healthcare.

Similar to healthcare, financial regulation has yet to develop extensive policies addressing the specific uses of AI, which is poised to affect almost every aspect of the field. Instead, regulation has been left in the hands of entities such as the Securities and Exchange Commission (SEC) and the Federal Reserve, which are interpreting older policies and retroactively applying them to AI. As the US Department of the Treasury states in its report Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector, “although existing laws, regulations, and supervisory guidance may not expressly address AI, the principles contained therein can help promote safe, sound, and fair implementation of AI.”

The States Take on AI

With only a loose framework of policies in place at the federal level, many states have taken it upon themselves to write legislation regulating the use of AI. In 2024, 45 states introduced AI bills, and 31 states, as well as Puerto Rico and the Virgin Islands, adopted some form of AI regulation. California has led the country by enacting 18 AI-related laws, and Colorado and Utah have also passed noteworthy legislation regulating its use.

As AI expands into every aspect of our technological world, the speed at which the technology grows is difficult to track. Innovation produces exciting breakthroughs and remarkable applications, yet the pace of that evolution makes laws and policies governing AI development difficult to write. The US, like other governments around the world, recognizes the importance of giving AI room to grow without being stifled by short-sighted laws. However, the potential for AI abuse is growing as well, making a laissez-faire approach to its governance no longer feasible.

Legislators the world over need to write comprehensive policy that provides effective guardrails, anticipates the future development of AI, and safeguards the privacy and safety of individuals. In the meantime, those in the US looking to align with the guiding principles of the current administration can turn to the National Institute of Standards and Technology’s AI resources and its AI Risk Management Framework. In the second of this two-part blog series, we’ll look at AI legislation in the EU.


Contributors
Greg McDonough

Cybersecurity Writer, Freelance


Blogs posted to the RSAConference.com website are intended for educational purposes only and do not replace independent professional judgment. Statements of fact and opinions expressed are those of the blog author individually and, unless expressly stated to the contrary, are not the opinion or position of RSA Conference™, or any other co-sponsors. RSA Conference does not endorse or approve, and assumes no responsibility for, the content, accuracy or completeness of the information presented in this blog.

