
The EU AI Act: What Software Teams Need to Know Before August 2026

The EU AI Act keeps rolling out in 2026. Here's what's already binding, what changes on August 2, and how the Digital Omnibus could shift the timeline.

BrotCode
Updated May 8, 2026

Three Months. That’s What’s Left Before the Big Deadline.

The EU AI Act’s high-risk obligations become applicable on August 2, 2026. As of today, that’s the date that legally binds. Most teams haven’t started.

The penalties aren’t gentle: up to EUR 35 million or 7% of global annual revenue, whichever is higher. And over half of organisations still lack a basic inventory of the AI systems running in their environment. You can’t classify risk for systems you don’t know exist.

This isn’t a future problem either. Prohibited practices have been illegal since February 2025, Article 4’s literacy duty has been in force just as long, and GPAI obligations kicked in last August.

If your system performs social scoring, runs real-time biometric surveillance in public spaces, or exploits vulnerabilities of specific groups, you’ve already been in violation for over a year.

Here’s the catch. On May 7, 2026, the Council and the Parliament reached a provisional political agreement on a Digital Omnibus on AI that would push several deadlines back.

Until that text is formally adopted, the original dates still bind. Plan for August 2, 2026. Watch the Omnibus.

The Timeline: What’s Already Active and What’s Coming

The AI Act rolls out in phases. Most are already live.

February 2, 2025: Prohibited AI practices became illegal. Article 4’s AI literacy duty also kicked in. The Commission published its non-binding guidelines on prohibited practices the same week.

August 2, 2025: Governance structures, penalty regime, and obligations for general-purpose AI models entered application. The EU AI Office (inside DG CNECT) began publishing guidelines and templates. The voluntary General-Purpose AI Code of Practice was published in July 2025 and signed by most major model providers.

August 2, 2026: The big one, as currently written. Full application of Annex III high-risk obligations: conformity assessments, technical documentation, human oversight, transparency under Article 50, registration in the EU database under Article 49.

August 2, 2027: Article 6(1) obligations for Annex I high-risk systems embedded in regulated products (medical devices, machinery, vehicles) under existing EU product safety legislation, plus full GPAI compliance for models placed on the market before August 2025.

What about the Digital Omnibus? The May 7 provisional agreement would push Annex III high-risk obligations to December 2, 2027, Annex I from August 2027 to August 2, 2028, and Article 50(2) watermarking duties to December 2, 2026.

The text needs to be formally adopted before August 2, 2026 to take effect. Until then, the original dates legally bind. Don’t slow your prep on the strength of a political agreement that hasn’t been signed.

What Counts as an “AI System”? Broader Than You Think.

The Act’s definition is broad: a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers from its inputs how to generate outputs like predictions, content, recommendations, or decisions.

It covers obvious things like large language models and computer vision, but it also captures traditional machine learning models, some rule-based systems, and advanced analytics that generate predictions or recommendations.

If your product uses any form of automated inference to produce outputs that influence decisions, it’s probably covered. Sound like a lot of modern software? It should.

The Four Risk Categories

Unacceptable risk: banned

Social scoring by governments or private companies. Real-time biometric identification in public spaces (with narrow exceptions for law enforcement). AI that exploits age, disability, or economic vulnerability.

Untargeted scraping to build facial recognition databases. Workplace and education emotion recognition (with narrow medical/safety exceptions).

These have been illegal since February 2025. Enforcement powers under Article 5 transferred to designated national authorities (or the EDPS, for EU institutions) on August 2, 2025. No public enforcement actions yet, but several investigations are reportedly underway around workplace emotion recognition and predictive policing.

High risk: heavy obligations

This is where most compliance work lives. High-risk AI systems include those used in:

  • Biometric identification and categorisation
  • Critical infrastructure management (energy, transport, water)
  • Education and vocational training (admissions, assessments)
  • Employment (recruitment, performance evaluation, task allocation)
  • Access to essential services (credit scoring, insurance pricing)
  • Law enforcement and migration
  • Administration of justice and democratic processes

If your AI system falls into any of these categories, the obligations are heavy: quality management, risk management, technical documentation, data governance, logging, human oversight, conformity assessments, and EU database registration under Article 49. Substantial engineering effort. Start now.

Limited risk: transparency required

AI systems that interact with people, generate or manipulate content, or detect emotions. Chatbots must clearly disclose they’re AI. Deepfake content must carry machine-readable watermarks under Article 50(2).

Emotion recognition systems must notify users before use.

If you’re running a customer-facing chatbot, this applies to you. The fix is usually straightforward: add a clear disclosure at the start of every interaction. Note: the Digital Omnibus would push the Article 50(2) watermarking deadline to December 2, 2026, but only if formally adopted.
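In code, that disclosure can be a few lines. A minimal sketch in Python, assuming a homegrown ChatSession wrapper of our own design rather than any particular chatbot framework:

```python
# Hypothetical wrapper that prepends an AI disclosure to the first
# reply of every session, per Article 50(1)-style transparency.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Responses are generated automatically."
)

class ChatSession:
    def __init__(self) -> None:
        self.disclosed = False

    def reply(self, model_output: str) -> str:
        # Disclose once, at the start of the interaction.
        if not self.disclosed:
            self.disclosed = True
            return f"{AI_DISCLOSURE}\n\n{model_output}"
        return model_output
```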

Minimal risk: no specific obligations

Most business software falls here: spam filters, inventory optimisation, recommendation engines for non-critical applications. No specific AI Act obligations, though voluntary codes of practice are encouraged.

Provider vs. Deployer: Your Role Matters

The Act distinguishes between providers (those who develop or commission AI systems) and deployers (those who use AI systems in their operations).

Most SMBs are deployers. You’re using GPT-4, Claude, or an open-source model within your product. You didn’t build the foundation model.

Deployer obligations are lighter but real:

  • Use the system according to the provider’s instructions.
  • Ensure human oversight for high-risk systems.
  • Monitor for risks during operation.
  • Maintain transparency with users.
  • Keep logs for the mandated retention period.
  • Conduct a fundamental rights impact assessment for certain high-risk uses.

Providers carry heavier obligations: conformity assessments, technical documentation, post-market monitoring, and incident reporting.

If you fine-tune a model or modify it substantially, you could be reclassified as a provider. That’s a significant liability shift. Know your role.

AI Literacy: Already In Force

Article 4 has been live since February 2, 2025. It requires organisations to ensure their staff have “sufficient AI literacy” to operate AI systems competently.

What does “sufficient” mean? The Act doesn’t specify hours or certifications. It means your team understands how the AI systems they use work, what their limitations are, and how to maintain human oversight. The AI Office published an FAQ in 2025 that pushes the same line: proportionate to risk and use case, not a checkbox.

For development teams, this means understanding model behaviour, bias detection, and failure modes. For business users, it means knowing when to trust AI outputs and when to override them.

Build it into your onboarding and ongoing training. Document it.

Bitkom’s KI-Studie 2026 found German enterprise AI use jumped from 17% in 2024 to 41% in early 2026. Most of that growth was deployer use of foundation models, and most of those teams have no documented literacy programme.

Practical Compliance Steps

Step 1: Inventory your AI. Every model, every automated decision system, every AI-powered feature. Map them. You can’t classify what you can’t see.
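A spreadsheet works, but a structured record in version control ages better. A sketch of what one inventory entry might look like; the AISystemRecord class and its fields are our own invention, not a mandated format:

```python
# Hypothetical AI inventory record: one per model, automated decision
# system, or AI-powered feature. Kept in the repo, reviewed like code.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str                  # e.g. "resume-screening-service"
    owner: str                 # team accountable for the system
    purpose: str               # what outputs or decisions it produces
    model_source: str          # "in-house", "fine-tuned", "third-party API"
    vendors: list[str] = field(default_factory=list)
    risk_category: str = "unclassified"   # filled in during Step 2

inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="support-chatbot",
        owner="platform-team",
        purpose="answers customer questions",
        model_source="third-party API",
        vendors=["OpenAI"],
    ),
]
```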

Step 2: Classify risk. For each AI system, determine the risk category. Most will fall into minimal or limited risk; the ones that don’t are where the urgent work lives.
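You can automate a first pass over the inventory from Step 1. This triage sketch uses made-up tags for the Annex III areas listed earlier; treat it as a way to sort the backlog, not as a legal determination:

```python
# Rough classification triage. Tags and categories are illustrative;
# real classification needs legal review, not keyword matching.

ANNEX_III_AREAS = {
    "biometrics", "critical-infrastructure", "education",
    "employment", "essential-services", "law-enforcement",
    "migration", "justice",
}

PROHIBITED_USES = {
    "social-scoring", "realtime-public-biometric-id",
    "workplace-emotion-recognition",
}

def classify(use_tags: set[str], interacts_with_people: bool) -> str:
    if use_tags & PROHIBITED_USES:
        return "unacceptable"    # banned since February 2025
    if use_tags & ANNEX_III_AREAS:
        return "high"            # heavy obligations, start now
    if interacts_with_people:
        return "limited"         # transparency duties (Article 50)
    return "minimal"

print(classify({"employment"}, interacts_with_people=True))  # -> "high"
```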

Step 3: Determine your role. Are you the provider or deployer for each system? This determines your specific obligations.

Step 4: Gap analysis. For high-risk systems, compare current documentation, oversight mechanisms, and logging against the Act’s requirements. Identify gaps.

Step 5: Build the documentation. Technical documentation for high-risk systems must cover: intended purpose, design specifications, training data and methodology, performance metrics, known limitations, human oversight instructions, and risk mitigation measures.
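One way to keep that documentation from rotting: store it as structured data next to the code it describes and review it like code. The field names below are shorthand for the categories above, not the Act’s legal headings:

```python
# Hypothetical skeleton for per-system technical documentation,
# versioned alongside the codebase it describes.

TECH_DOC_TEMPLATE = {
    "intended_purpose": "",       # what the system is for, and is not for
    "design_specifications": "",  # architecture, components, dependencies
    "training_data": "",          # sources, preprocessing, known gaps
    "methodology": "",            # training and evaluation approach
    "performance_metrics": {},    # accuracy, error rates, per subgroup
    "known_limitations": [],      # conditions where outputs degrade
    "human_oversight": "",        # who can intervene, and how
    "risk_mitigations": [],       # measures taken, residual risks
}
```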

Step 6: Implement oversight. Human-in-the-loop or human-on-the-loop for high-risk systems. Not a checkbox. A real mechanism where a human can understand, intervene, and override.
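A sketch of what that gate can look like in practice: decisions below a confidence threshold, or flagged as high-impact, go to a reviewer instead of executing automatically. The threshold and the queue are illustrative choices, not values the Act prescribes:

```python
# Human-in-the-loop gate: park uncertain or high-impact decisions
# for a reviewer instead of auto-applying them.
from queue import Queue

REVIEW_THRESHOLD = 0.85          # illustrative, not prescribed
review_queue: Queue = Queue()

def apply_decision(decision: dict) -> str:
    if decision["confidence"] < REVIEW_THRESHOLD or decision["high_impact"]:
        review_queue.put(decision)   # a human decides
        return "pending_human_review"
    return execute(decision)

def execute(decision: dict) -> str:
    ...  # hypothetical downstream effect
    return "applied"
```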

Step 7: Set up logging. High-risk AI systems must maintain automatic logs that enable post-market monitoring and incident investigation. Retention: a period appropriate to the system’s purpose, and at least six months under Articles 19 and 26. A logging sketch follows in the architecture section below.

Architecture Patterns for Compliance

Building AI Act compliance into your technical architecture doesn’t require a complete rewrite. But it does require intentional design.

Audit logging layer. Every AI inference should be logged: inputs, outputs, confidence scores, model version, timestamp. Immutable storage. This is your compliance evidence.
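A minimal sketch of such a layer, writing JSON Lines to local disk (production would target append-only or WORM storage). Note that it records the model version, which is half of the versioning pattern below:

```python
# Append-only audit log for AI inferences: inputs, outputs,
# confidence, model version, timestamp.
import json
import time
import uuid

def log_inference(path: str, *, model_version: str, inputs: dict,
                  outputs: dict, confidence: float) -> str:
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "model_version": model_version,  # reconstruct which model decided
        "inputs": inputs,
        "outputs": outputs,
        "confidence": confidence,
    }
    with open(path, "a", encoding="utf-8") as f:  # append-only
        f.write(json.dumps(record) + "\n")
    return record["id"]
```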

Explanation capabilities. For high-risk systems, you need to explain how a decision was reached. Not full interpretability for every neural network. But enough transparency that a human reviewer can understand and contest a decision.
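Reason codes are one pragmatic middle ground. Assuming you can get per-feature attributions from somewhere (SHAP values, a scorecard, whatever fits your model), a sketch:

```python
# Turn per-feature attributions into human-readable reason codes a
# reviewer can understand and contest. Attributions here are stubbed.

def reason_codes(attributions: dict[str, float], top_n: int = 3) -> list[str]:
    top = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return [f"{name} {'raised' if weight > 0 else 'lowered'} the score"
            for name, weight in top[:top_n]]

print(reason_codes({"income": 0.4, "debt_ratio": -0.7, "tenure": 0.1}))
# ['debt_ratio lowered the score', 'income raised the score', 'tenure raised the score']
```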

Model versioning. Track which model version produced which outputs. When regulators ask questions about a specific decision, you need to reconstruct what happened.

Kill switches. Human override mechanisms for high-risk systems. The ability to shut down or bypass AI decision-making when needed.
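A kill switch can be as simple as a runtime flag checked before every model call, with a non-AI fallback path. The flag store here is a stub; in practice it would be a feature-flag service or config system:

```python
# Kill-switch sketch: route around the model when a human pulls the flag.

AI_ENABLED = {"credit-scoring": True}   # mutable runtime flag (stub)

def score_application(app: dict) -> dict:
    if not AI_ENABLED.get("credit-scoring", False):
        return {"route": "manual_review", "reason": "ai_disabled"}
    return run_model(app)               # normal AI path

def run_model(app: dict) -> dict:
    ...  # hypothetical model call
    return {"route": "auto", "score": 0.0}

def kill_switch(system: str) -> None:
    AI_ENABLED[system] = False          # human override: bypass AI now
```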

For the broader compliance context, see our pillar guide on EU compliance for software teams. If you’re building GDPR-compliant AI systems, our architecture guide covers the data protection layer.

Operating connected products too? The EU Data Act guide covers that overlap.

What Happens If You Don’t Comply

The penalty structure is tiered.

Prohibited AI practices: up to EUR 35 million or 7% of global revenue, whichever is higher. Violations of high-risk obligations: up to EUR 15 million or 3%. Supplying incorrect information to authorities: up to EUR 7.5 million or 1%.

For SMBs, the percentages are what matter. 7% of revenue for a company doing EUR 5 million in annual revenue is EUR 350,000.

Not theoretical. The regulation is designed to hurt at every scale.

National market surveillance authorities will enforce the Act. In Germany, the federal government has designated the Bundesnetzagentur (BNetzA) as the lead market surveillance authority, with a coordination centre (KoKIVO) inside BNetzA supporting sector regulators.

The BfDI keeps oversight where AI systems process personal data, but it isn’t the lead AI Act regulator. The German implementing law (KI-Marktüberwachungs- und Innovationsförderungsgesetz, KI-MIG) was approved by the Federal Cabinet on February 11, 2026 and is moving through the Bundesrat and Bundestag.

Don’t Wait for August

The companies that scramble in July 2026 will pay consultants premium rates for rush assessments and find themselves patching systems under pressure. The companies that start now will build compliance into their architecture, train their teams gradually, and face enforcement with confidence.

And don’t bet on the Digital Omnibus saving you. Until the Council and Parliament formally adopt the new text, the August 2 deadline binds.

Start with the inventory. Everything else follows from knowing what AI you actually run.


Building with AI? Let’s make sure it’s compliant from the start. We design AI systems with EU AI Act requirements built into the architecture.
