
Introduction
The European Union’s Artificial Intelligence Act (AI Act) represents a landmark effort to regulate AI technologies across member states. Yet, a detailed legal examination reveals substantial exemptions and procedural gaps that may significantly weaken its ability to safeguard fundamental rights—especially in contexts of national security, migration, and law enforcement.
1. Exemptions for National Security
Article 2(3) of the AI Act explicitly excludes AI systems “developed or used exclusively for military purposes or for the purposes of activities concerning national security” from its scope. This broad carve-out enables member states to deploy AI-based biometric surveillance, predictive policing, and other sensitive technologies without being bound by the Act’s core obligations on risk management, transparency, and human oversight.
The absence of oversight mechanisms or external audits for such systems risks unmonitored encroachments on privacy and civil liberties, raising critical concerns under the EU Charter of Fundamental Rights.
2. Delayed Compliance for Migration Technologies
AI systems used in migration-related processes—including visa issuance, asylum determination, and border management—are categorized as “high-risk” under Annex III of the AI Act. Providers of such systems are required to comply with stringent requirements, including conformity assessments, risk management systems, and data governance.
However, transitional provisions give certain existing high-risk AI systems a phased compliance schedule. The Regulation entered into force on 1 August 2024, and most obligations for high-risk systems apply from 2 August 2026. For migration and border management, the final text goes further still: under Article 111(1) of Regulation (EU) 2024/1689, AI systems that are components of the large-scale EU IT systems listed in Annex X (including databases such as SIS, VIS and Eurodac) and that are placed on the market before 2 August 2027 need not be brought into compliance until 31 December 2030.
The often-cited “2030” deadline is therefore not merely a proposal advocated by some member states or industry groups; for these legacy systems it is codified law. This extended grace period risks prolonging the unregulated use of AI systems that have already demonstrated discriminatory biases and adverse human rights impacts.
3. Self-Assessment of High-Risk AI Systems
While the AI Act establishes a list of high-risk AI systems in Annex III, Article 6(3) grants providers discretion to determine whether their AI systems qualify as high-risk, subject to certain criteria. This self-assessment model introduces potential conflicts of interest, as companies may be incentivized to underreport to avoid onerous compliance costs.
The Act tasks market surveillance authorities and notified bodies with verifying compliance, but the effectiveness of enforcement and audits depends on adequate resources and political will, both of which remain uncertain.
4. Limited Transparency in Law Enforcement Applications
The Act mandates registration of high-risk AI systems in a publicly accessible European database (Article 60 of the proposal; Article 71 of the final text). However, systems used by competent authorities for “the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties”, as well as in the areas of migration, asylum and border control, are exempt from several transparency obligations; under the final text they are registered only in a secure, non-public section of the database (Article 49(4)).
This carve-out means law enforcement and migration agencies are not required to disclose key details about their AI deployments, creating an opaque “silicon curtain” that hinders public scrutiny and accountability and contradicts principles of democratic oversight and the right to an effective remedy.
5. Influence of Corporate Lobbying
The legislative process for the AI Act has seen intense lobbying by large technology firms and industry associations, documented in reports by Corporate Europe Observatory and others. These actors successfully pushed for the inclusion of broad exemptions (notably for national security), extended transitional periods, and self-assessment regimes that limit regulatory burdens.
Such influence arguably dilutes the protective ambitions of the AI Act, risking regulatory capture that prioritizes corporate interests over public good and fundamental rights protections.

Conclusion
The EU AI Act, while a pioneering legal framework, currently harbours significant loopholes that threaten to undermine its core mission of ensuring trustworthy AI. Closing the national security exemption, tightening transitional timelines—especially for migration-related AI—reinforcing external oversight of self-assessment processes, and eliminating transparency exemptions for law enforcement are crucial steps to uphold democratic accountability and fundamental rights in the digital age.
Sources:
- European Commission, Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), COM(2021) 206 final
- Regulation (EU) 2024/1689 of the European Parliament and of the Council (Artificial Intelligence Act), OJ L, 2024/1689, 12.7.2024
- European Parliament Committee Reports and Amendments
- European Digital Rights (EDRi) Briefings
- Corporate Europe Observatory Investigations
- Statewatch Analyses