
AI regulations are coming – are your customers ready?

Adnan Masood, PhD, Chief AI Architect, UST.

The time is now – this evolving landscape of AI regulations presents a pivotal moment for businesses, requiring a proactive approach to align with emerging legal frameworks before it's too late.

Adnan Masood, PhD – Chief AI Architect, UST

Adnan leads a team of engineers and data scientists at UST, building artificial intelligence solutions that produce insights and business value for sustainable transformation. His extensive experience in AI and machine learning, gained as a Stanford visiting scholar, Microsoft Regional Director, and machine learning PhD, together with his commitment to excellence, helps deliver high-quality products and services that meet the needs of UST clients.


Generative AI has dramatically expedited the adoption of artificial intelligence (AI) and machine learning in the enterprise, and the need for comprehensive, pro-innovation regulation has never been more critical. Most AI organizations practice self-regulation, but that isn't enough: the opacity of AI algorithms and their far-reaching implications necessitate enforceable governance frameworks that ensure transparency, safety, and respect for societal values.

Following the White House's AI Executive Order (EO) and NIST's AI Risk Management Framework advisories, the European Union's AI Act has become the first comprehensive regulation of artificial intelligence. Even though the NIST RMF, the AI EO, and the EU AI Act provide a high-level picture of what AI governance may entail, there is still a tremendous need to help both the public and private sectors understand, adopt, scale, and govern AI safely and effectively. This article reviews these latest regulations, what they mean for businesses, and the potential opportunities for UST in this dynamic AI governance landscape.


White House AI Executive Order

On October 30, 2023, President Biden issued an Executive Order to ensure that the United States remains at the forefront of artificial intelligence (AI) development while managing its risks. The order sets new standards for AI safety and security, upholds privacy, advances equity and civil rights, and supports innovation and competition. It requires developers of the most powerful AI systems to disclose safety test results to the U.S. government and institutes extensive safety testing and standards development led by the National Institute of Standards and Technology (NIST). The order also focuses on protecting against AI-enabled biological risks and fraudulent content and establishes advanced cybersecurity programs.

The directive includes measures to protect Americans' privacy from AI-enabled data exploitation, advances equity and civil rights to prevent AI from perpetuating discrimination, and supports consumers and workers by ensuring that AI's benefits do not come with increased harm or surveillance. It promotes AI research and competition, particularly for small developers, and advances American leadership in AI safety and ethics globally. Lastly, the order aims to modernize the federal government's use of AI, ensuring responsible procurement and deployment. This comprehensive strategy reflects the administration's commitment to leading AI innovation responsibly, with a clear focus on maintaining American values and standards in the face of rapid technological advancement.


NIST AI Risk Management Framework (AI RMF)

In January 2023, NIST released the AI Risk Management Framework (AI RMF), developed in response to the unique risks posed by AI technologies and shaped by the National Artificial Intelligence Initiative Act of 2020 and broader AI governance efforts. Developed collaboratively with the public and private sectors, the AI RMF provides a structured approach to managing AI risks, emphasizing responsible AI practices across diverse applications.

The AI RMF characterizes AI systems as engineered or machine-based systems that, with varying levels of autonomy, generate outputs influencing real or virtual environments. Due to their complexity and socio-technical nature, AI systems require thoughtful risk management to prevent undesirable outcomes, particularly those that could exacerbate inequity. Responsible AI practices, emphasizing human-centric and sustainable approaches, are crucial for aligning AI system design and use with societal values. The framework is flexible and voluntary, helping organizations manage AI risks effectively and build public trust in AI systems.

The framework outlines four core functions (Govern, Map, Measure, and Manage) to operationalize risk mitigation throughout the AI lifecycle. NIST will continuously update the framework to keep pace with technological evolution and international standards, ensuring AI's benefits are maximized while potential harms are minimized. With the AI RMF Roadmap, the AI RMF Crosswalk, and various other supporting artifacts, the AI RMF is arguably the most detailed AI governance guidance published in recent years and will significantly influence future rules.
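
For teams wondering how the four functions translate into day-to-day practice, the sketch below models a simple risk register organized around them. It is a minimal illustration in Python; the class names, fields, and severity scale are our own assumptions, not an official NIST artifact.

```python
from dataclasses import dataclass, field
from enum import Enum

class RMFFunction(Enum):
    # The four core functions of the NIST AI RMF.
    GOVERN = "govern"    # policies, roles, and accountability structures
    MAP = "map"          # establish context and identify risks
    MEASURE = "measure"  # analyze, assess, and track identified risks
    MANAGE = "manage"    # prioritize and act on risks

@dataclass
class AIRisk:
    description: str
    function: RMFFunction   # where in the lifecycle this risk is handled
    severity: int           # 1 (low) to 5 (critical); scale is our own
    mitigation: str = "unassigned"

@dataclass
class RiskRegister:
    risks: list[AIRisk] = field(default_factory=list)

    def add(self, risk: AIRisk) -> None:
        self.risks.append(risk)

    def open_items(self, min_severity: int = 3) -> list[AIRisk]:
        # Surface unmitigated risks at or above a severity threshold.
        return [r for r in self.risks
                if r.mitigation == "unassigned" and r.severity >= min_severity]

register = RiskRegister()
register.add(AIRisk("Training data may under-represent key demographics",
                    RMFFunction.MAP, severity=4))
register.add(AIRisk("No human-review step for adverse decisions",
                    RMFFunction.GOVERN, severity=5))
print([r.description for r in register.open_items()])
```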


European Union AI Act

The European Union has long been at the forefront of technological governance. With the rapid advancement of artificial intelligence, the EU is taking a bold step to harness AI's benefits while safeguarding fundamental rights and ensuring safety through the EU AI Act. This pioneering legislation balances technological innovation against ethical governance.

Addressing the Risks

The EU recognizes that while most AI systems pose minimal risk, specific applications – like biometric identification systems and AI in law enforcement – carry significant risks. These systems can impact everything from personal privacy to democracy itself. The AI Act categorizes risks into four levels, focusing mainly on high-risk and unacceptable-risk AI systems and setting stringent conditions for their use.
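
To make that triage concrete, here is a minimal sketch of how an internal team might bucket use cases into the Act's four tiers. The example mappings are simplified assumptions for illustration only, not legal classifications; real determinations depend on the Act's annexes and qualified legal review.

```python
from enum import Enum

class RiskTier(Enum):
    # The EU AI Act's four risk levels.
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict conformity obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Simplified, illustrative mapping only; actual classification
# requires reviewing the Act's annexes with legal counsel.
EXAMPLE_TIERS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "biometric identification in public spaces": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    # Unknown systems default to HIGH pending proper assessment,
    # a deliberately conservative choice.
    return EXAMPLE_TIERS.get(use_case, RiskTier.HIGH)

print(triage("CV screening for hiring").value)  # high
```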

Global Impact

The AI Act isn't just for Europe – it has global implications. Any AI system operating in the EU or affecting EU citizens falls under this legislation, regardless of where the provider is based. This extraterritorial effect mirrors the influential reach of the EU's General Data Protection Regulation (GDPR).

High-Risk AI Systems

Providers of high-risk AI systems must conduct thorough conformity assessments before market introduction. This includes systems in critical infrastructure, education, employment, law enforcement, and more. These assessments ensure compliance with mandatory requirements like data quality, transparency, and human oversight.

General-Purpose AI Models

A significant focus of the Act is on general-purpose AI models, especially those with systemic risks, like large generative AI models. Providers of these models must now disclose vital information, conduct risk assessments, report incidents, and engage with the European AI Office to develop Codes of Conduct. This ensures that AI advancements do not compromise safety and compliance.

Fines and Enforcement

Noncompliance with the AI Act can result in hefty fines of up to €35 million or 7% of global annual turnover, whichever is higher, with the applicable tier depending on the infringement. This stringent penalty regime underscores the EU's commitment to ensuring AI's ethical use.
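
To put those numbers in perspective, a quick illustrative calculation, assuming the GDPR-style "whichever is higher" rule for the most serious infringements:

```python
def max_fine_eur(global_turnover_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 turnover_pct: float = 0.07) -> float:
    """Illustrative ceiling for the gravest infringements:
    the higher of the fixed cap and the turnover percentage."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# A firm with EUR 2 billion in global turnover faces a ceiling of
# max(35M, 7% of 2B) = EUR 140 million for the most serious violations.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```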

Promoting Innovation

Despite its regulatory nature, the AI Act is designed to foster innovation. It provides for regulatory sandboxes and real-world testing, offering a controlled environment to trial innovative AI technologies. This approach aims to balance regulatory oversight with the encouragement of technological advancement.


Our Observations – EU vs. U.S. Perspectives

The EU AI Act and President Biden's Executive Order represent significant steps toward regulating artificial intelligence, aiming to balance harnessing AI's benefits with managing its potential risks.

We noticed that the EU AI Act and the Executive Order emphasize the importance of safety, security, and trustworthiness in AI systems, recognizing the need to protect citizens from the potential dangers of unregulated AI. They share a focus on safeguarding fundamental rights, including privacy and non-discrimination, and both contain provisions to protect against unfair and biased use of AI in critical areas like employment and law enforcement. Both frameworks also aim to foster innovation and ensure competitiveness in the AI sector while upholding ethical standards and promoting public trust in AI technologies.

The EU AI Act adopts a more granular risk-based classification system, explicitly categorizing AI systems by their level of risk to society and specifying different compliance requirements for each category. It offers a legal framework for enforcement across EU member states, with potential fines for noncompliance. In contrast, President Biden's Executive Order directs federal agencies to develop standards and best practices for AI, emphasizing the federal government's use of AI systems. It also focuses on specific American interests in AI, such as cybersecurity, national security, and the leadership of AI innovation both domestically and globally.

The Executive Order includes directives for advancing AI-related research and development and using AI in government to improve services to American citizens. Meanwhile, the EU AI Act focuses more on the commercial and public use of AI across various sectors, including stringent requirements for high-risk AI systems and prohibiting certain unacceptable AI practices.

While the EU AI Act and the Biden Executive Order share common goals regarding the ethical development and deployment of AI, their approaches diverge in scope, legal force, and focus areas, reflecting their respective governance contexts and regulatory environments.


Customers' Roadmap for AI Regulations

As AI continues to evolve, the EU AI Act serves as a blueprint for responsible and human-centric AI governance, ensuring that the AI of tomorrow benefits everyone, respects fundamental rights, and upholds the highest safety standards.

Here is a brief, non-exhaustive checklist of what our customers need to be thinking about regarding AI regulations and application compliance; a simple code sketch for tracking these items follows the list.

  1. Risk Classification: Determine whether your AI systems fall into the Act's risk categories (minimal, limited, high, or unacceptable) and understand the specific compliance requirements for each classification.
  2. Fundamental Rights Impact Assessment: For high-risk AI applications, be prepared to conduct and document thorough assessments that evaluate the impact of your AI systems on fundamental rights.
  3. Data Governance and Quality: Implement robust policies to ensure the quality, integrity, and representativeness of the data used in AI systems, minimizing biases and ensuring compliance with data protection laws.
  4. Transparency and Explainability: Develop mechanisms to provide transparent and understandable explanations of AI system processes and decisions, especially for high-risk AI that affects individuals' rights.
  5. Biometric Identification: Review and potentially revise the use of biometric identification systems to ensure they comply with the new limitations, particularly for law enforcement applications and in publicly accessible spaces.
  6. Consumer Rights: Establish clear procedures for handling consumer complaints related to AI decisions and ensure these procedures are easily accessible to users.
  7. Technical Documentation: Maintain detailed technical documentation and records of AI system training protocols, methodologies, and data sets to demonstrate compliance and support transparency.
  8. AI Training and Testing: Ensure that AI models are trained and tested with sufficiently diverse data sets to prevent discriminatory outcomes and are aligned with the EU's standards for high-risk AI systems.
  9. Cybersecurity and Robustness: Implement state-of-the-art cybersecurity measures for AI systems, regularly update them to mitigate vulnerabilities, and establish a protocol for reporting serious incidents.
  10. Financial and Legal Preparedness: Be financially and legally prepared for potential penalties due to noncompliance, which could be substantial, and stay informed about the evolving regulatory landscape to adapt business practices accordingly.
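
As promised above, one lightweight way to operationalize this checklist is a simple tracking structure. The sketch below is illustrative only; the owners, statuses, and fields are placeholder assumptions, not a compliance tool.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    NOT_STARTED = "not started"
    IN_PROGRESS = "in progress"
    COMPLETE = "complete"

@dataclass
class ChecklistItem:
    topic: str          # e.g., "Risk Classification"
    owner: str          # accountable team or person (placeholder names)
    status: Status = Status.NOT_STARTED
    evidence: str = ""  # link to documentation demonstrating compliance

CHECKLIST = [
    ChecklistItem("Risk Classification", owner="AI Governance"),
    ChecklistItem("Fundamental Rights Impact Assessment", owner="Legal"),
    ChecklistItem("Data Governance and Quality", owner="Data Engineering"),
    ChecklistItem("Transparency and Explainability", owner="ML Platform"),
    ChecklistItem("Technical Documentation", owner="ML Platform"),
    ChecklistItem("Cybersecurity and Robustness", owner="Security"),
]

def outstanding(items: list[ChecklistItem]) -> list[str]:
    # Report items that are not yet complete.
    return [i.topic for i in items if i.status is not Status.COMPLETE]

print(outstanding(CHECKLIST))
```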

Opportunities for UST

The broad nature of existing AI regulations, intentional as it may be, leaves specific areas unaddressed, resulting in gaps. These gaps offer UST a unique opportunity to introduce training programs, fast-track the regulations' application in enterprise environments, and help make compliance more adaptable.

Here is how we see those opportunities taking shape.

The time is now. The EU AI Act, the U.S. Executive Order, and the NIST AI RMF mark a significant shift toward structured AI governance, emphasizing risk management, fundamental rights, and ethical AI use. These developments underscore the necessity for businesses to understand and adapt to the new rules, which means thoroughly evaluating AI applications in light of risk classifications, data governance, transparency, and cybersecurity. For UST, this opens many opportunities, from compliance services to educational initiatives and risk assessment tools, aligned with our goal of fostering sustainable AI innovation while adhering to ethical standards. As we navigate this new regulatory environment, we are focused on leveraging these changes to enhance our offerings, ensuring our AI solutions are innovative, compliant, and responsible.