Maurice Sayinzoga

Navigating the EU AI Act: A Risk-Based Approach to AI Governance

The EU AI Act, which entered into force on August 1, 2024, is a pioneering, comprehensive framework for regulating artificial intelligence across the European Union, built around a human-centric approach that mitigates risk while fostering technological advancement. Unlike the United States, which leans heavily on voluntary frameworks and a patchwork of state rules, the EU has opted for a single, binding law that sets out clear obligations for anyone building or deploying AI within its borders. The goal is to strike a balance between encouraging innovation and keeping people safe, protecting rights, and ensuring that AI reflects European values.

As of September 2025, the Act is starting to take fuller shape. One of the biggest updates is that rules for general-purpose AI models, such as large language models and versatile chatbots, began applying on August 2, 2025. To make things smoother, the European Commission issued detailed guidelines in July explaining how these rules should work in practice. General-purpose models are treated with special attention because they can be adapted to so many uses, some of which carry systemic risks.

At the heart of the AI Act is a tiered system that classifies AI based on risk. It divides systems into four categories: unacceptable, high, limited, and minimal risk. Each category comes with different obligations or bans proportionate to the level of potential harm.

The highest bar is “unacceptable risk.” This applies to AI systems that go against core EU principles, such as tools that manipulate people’s behavior in harmful ways, exploit vulnerable groups like children, or enable governments to conduct social scoring that could lead to discrimination. These systems are banned outright, with only narrow exceptions. The thinking here is to prevent AI from undermining civil liberties, even if that means sacrificing certain business opportunities.

Just below that is “high risk,” which covers AI used in sensitive areas such as critical infrastructure, hiring processes, education, and law enforcement. Real-time facial recognition in public spaces is a prime example, permitted only in very narrow circumstances like investigating serious crimes. Companies offering high-risk AI must conduct risk assessments, ensure their data is high quality and unbiased, build human oversight into their systems, and undergo conformity checks before launching. This means more paperwork, audits, and testing, but the aim is to ensure reliability in contexts where mistakes could have major consequences.

The next tier is “limited risk.” These are AI systems that people interact with directly, like chatbots, emotion recognition tools, or generators of synthetic content such as deepfakes. Here the requirements are mostly about transparency. Users should know when they’re talking to or seeing something created by AI. In practice, that might mean a chatbot that says “I’m an AI” at the start of a conversation or metadata that labels a photo as AI-generated. The emphasis here is on honesty, so people are not misled.
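As a purely illustrative sketch (the Act does not prescribe any particular implementation, and every name below is invented for the example), those transparency measures might look something like this in code: a chatbot that opens with an AI disclosure, and an image pipeline that attaches an AI-generated flag to its output metadata.

# Hypothetical sketch of "limited risk" transparency measures.
# Nothing here comes from the AI Act or a real library; the function
# and field names are invented purely to illustrate the idea.

AI_DISCLOSURE = "I'm an AI assistant, not a human."

def open_conversation(user_message: str, generate_reply) -> str:
    """Prefix the first reply of a session with an explicit AI disclosure."""
    return f"{AI_DISCLOSURE}\n\n{generate_reply(user_message)}"

def label_synthetic_image(metadata: dict) -> dict:
    """Attach a provenance flag so viewers and platforms can mark the image as synthetic."""
    labeled = dict(metadata)
    labeled["ai_generated"] = True  # machine-readable flag
    labeled["disclosure"] = "This image was generated by an AI system."
    return labeled

if __name__ == "__main__":
    print(open_conversation("Hello!", lambda msg: "Hi, how can I help?"))
    print(label_synthetic_image({"title": "Harbor at dusk"}))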

Finally, there’s “minimal risk,” which makes up the vast majority of AI we use day to day: spam filters in email, AI opponents in video games, or navigation apps that suggest the quickest route. Because these applications are seen as posing little to no risk to people’s rights or safety, the Act does not impose binding obligations on their providers. Instead, it encourages the use of voluntary codes of conduct and best practices, leaving flexibility for innovation while still promoting responsible development.
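Read as a whole, the four tiers amount to a simple mapping from risk category to obligation. The sketch below summarizes that mapping in code; the tier names follow the Act, but the obligation summaries and example systems are informal shorthand taken from the paragraphs above, not legal text.

from enum import Enum

# Minimal summary of the AI Act's four risk tiers, as described above.
# The tier names mirror the Act; the obligation text and examples are
# informal shorthand, not legal language.

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited outright (e.g. social scoring, harmful manipulation).",
    RiskTier.HIGH: "Risk assessment, high-quality data, human oversight, conformity checks before launch.",
    RiskTier.LIMITED: "Transparency: tell users they are interacting with, or viewing content from, an AI system.",
    RiskTier.MINIMAL: "No binding obligations; voluntary codes of conduct encouraged.",
}

EXAMPLES = {
    RiskTier.UNACCEPTABLE: ["government social scoring", "harmful behavioral manipulation"],
    RiskTier.HIGH: ["hiring tools", "critical-infrastructure control", "law-enforcement systems"],
    RiskTier.LIMITED: ["chatbots", "deepfake generators", "emotion recognition tools"],
    RiskTier.MINIMAL: ["spam filters", "video-game opponents", "navigation apps"],
}

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value:>12}: {OBLIGATIONS[tier]} Examples: {', '.join(EXAMPLES[tier])}")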

Taken together, the EU AI Act is the world’s first attempt to create a sweeping, legally binding framework for AI. It combines strict bans with flexible requirements, depending on the level of risk, and extends its reach to general-purpose AI models that sit at the core of today’s technological ecosystem. While this approach could slow innovation in certain sectors compared to looser U.S. rules, Europe is betting that setting high standards now will pay off later, both in terms of building trust and in shaping the global conversation around responsible AI.

EU AI Act: https://eur-lex.europa.eu/eli/reg/2024/1689/oj

Maurice Sayinzoga

The US Approach to AI Regulation: Deregulation and Innovation

In 2025, the United States is still taking a very different approach to AI than many other parts of the world. Rather than passing a single, comprehensive law, the U.S. relies on a mix of frameworks, voluntary guidelines, state-level legislation, and targeted federal actions. America’s AI Action Plan, released in July 2025, reflects this philosophy, emphasizing economic competitiveness, national security, and technological leadership over prescriptive mandates. Its three guiding pillars, namely accelerating innovation, building infrastructure, and leading internationally, set the direction for how the U.S. intends to remain at the forefront of artificial intelligence.

The first pillar, accelerating innovation, is all about making it easier and faster to build new AI systems. The idea here is to cut through red tape and let companies and researchers move quickly. That means things like “regulatory sandboxes,” where organizations can test new AI tools in areas like healthcare or finance without getting bogged down by strict regulation, and speeding up approvals for new data centers, which are the backbone of AI computing. The plan also puts a spotlight on open-source AI projects so that more people and smaller players can contribute. In short, the goal is to give startups and researchers the freedom and resources to innovate without too many restrictions slowing them down.

The second pillar, building infrastructure, is about making sure the U.S. has the foundations in place to keep AI progress going over the long term. One of the most important pieces here is the National AI Research Resource pilot, a public toolkit for AI development that provides access to powerful computers, large datasets, and advanced software that normally only big tech companies can afford. By opening this up to universities, startups, and non-profits, the program levels the playing field and makes sure innovation isn’t concentrated in just a handful of corporations. Alongside this, the plan invests in training programs to grow the next generation of AI talent and includes safeguards like cybersecurity protections and biosecurity measures to prevent advanced technologies from being misused.

The third pillar, leading internationally, reflects the U.S. strategy to shape the rules of the game globally. This involves using export controls to limit access to advanced AI chips and systems by rival countries, while also working with allies to promote shared values and principles for AI use. Unlike the European Union, which is pushing ahead with binding laws like the AI Act, the U.S. approach prefers to promote voluntary standards and partnerships, aiming to influence the direction of AI worldwide without locking itself into rigid regulations.

America’s AI Action Plan embodies the view that innovation should lead regulation, not the reverse. Its lighter-touch model promises rapid progress, but it also carries risks. Reliance on voluntary standards may leave gaps in privacy and fairness protections, uneven state laws could create fragmented oversight, and the prioritization of national security might come at the expense of civil liberties.

America's AI Action Plan: https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf
