Why European companies need to rethink where their AI actually runs

DNA Solutions

There's a question European enterprises aren't asking often enough: where does your AI run, and who controls the infrastructure underneath it?

Most companies adopting AI today rely on US-based hyperscalers. The model runs on someone else's servers, processes data under someone else's jurisdiction, and depends on someone else's infrastructure decisions. For a quick internal chatbot, that might be fine. For anything touching customer data, regulated processes, or strategic decision-making, it's a risk that deserves serious scrutiny.

The European Union has made its position clear. With the AI Act entering enforcement, GDPR still tightening its grip, and NIS2 expanding cybersecurity obligations, the regulatory environment is pushing organizations toward more control over their AI infrastructure. Not because sovereignty is trendy, but because the legal exposure of getting it wrong is becoming very real.

What does "sovereign AI" actually mean for a European business?

Strip away the political rhetoric and sovereign AI comes down to a practical question: can your organization guarantee that its AI systems process data within a legal and technical framework you fully control?

That means knowing where the data is stored, under which jurisdiction it falls, who can access it (including the cloud provider's own staff and government requests), and whether you can move to a different provider without rebuilding everything from scratch.

For most European companies today, the honest answer to at least one of those questions is no.
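Those four questions lend themselves to being tracked as structured data rather than tribal knowledge. The sketch below is a minimal, illustrative way to record them per workload in Python; all field and class names are hypothetical, not drawn from any regulation or vendor API.

```python
from dataclasses import dataclass

@dataclass
class SovereigntyRecord:
    """Illustrative per-workload record of the four sovereignty questions.

    All names here are hypothetical; adapt them to your own compliance
    register. This is a bookkeeping sketch, not legal advice.
    """
    workload: str
    storage_location: str           # e.g. "eu-central (Frankfurt)"
    governing_jurisdiction: str     # e.g. "EU / German law"
    provider_staff_access: bool     # can the provider's own staff read the data?
    foreign_disclosure_exposure: bool  # e.g. reachable via the US CLOUD Act
    portable_without_rebuild: bool  # can you switch providers without a rewrite?

    def open_questions(self) -> list[str]:
        """Return the questions this workload cannot answer confidently."""
        issues = []
        if self.provider_staff_access:
            issues.append("provider staff can access data")
        if self.foreign_disclosure_exposure:
            issues.append("exposed to extraterritorial disclosure orders")
        if not self.portable_without_rebuild:
            issues.append("migration would require a rebuild (lock-in)")
        return issues
```

Even a simple register like this makes the gaps visible: any workload whose `open_questions()` list is non-empty is one where the honest answer is still no.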

This isn't about rejecting cloud computing or building everything on-premises. It's about understanding the dependency chain. When your AI system runs on a US hyperscaler, you're subject to the CLOUD Act, which allows US authorities to compel American providers to disclose data regardless of where the servers are physically located. That's not a theoretical concern. It's a documented legal reality that directly conflicts with GDPR obligations.

Why is on-premises AI deployment gaining traction again?

Industry projections suggest that by 2028, 75% of enterprise AI workloads will run on purpose-built hybrid infrastructure, including significant on-premises components. That's a notable shift from the "everything to the cloud" narrative that dominated the past decade.

Several factors drive this trend.

Data gravity. Large enterprises generate massive volumes of data. Moving that data to a cloud provider for AI processing creates latency, bandwidth costs, and security exposure. Processing data where it already lives is often faster, cheaper, and safer.

Regulatory pressure. The AI Act classifies certain AI systems as high-risk and imposes strict requirements on data handling, documentation, and oversight. For sectors like healthcare, finance, and public administration, keeping AI infrastructure within controlled environments simplifies compliance significantly.

Cost predictability. Cloud AI costs scale with usage, which makes budgeting unpredictable for compute-intensive workloads. On-premises or private cloud deployments offer more predictable cost structures, especially for organizations running AI models continuously rather than sporadically.

Supply chain risk. The past few years have taught European businesses that depending on a single non-European provider for critical infrastructure creates strategic vulnerability. Diversification isn't paranoia. It's risk management.

What is the EU actually doing about AI infrastructure?

The European Commission recently launched EURO-3C, a 75 million euro project to build a federated telecom-edge-cloud infrastructure across Europe, involving more than 70 organizations in 13 countries. The goal is to create a European alternative for deploying AI workloads that meets sovereignty, performance, and regulatory requirements simultaneously.

This isn't the only initiative. The European Chips Act, Gaia-X, and various national cloud strategies (like France's "Cloud de Confiance" or Germany's Sovereign Cloud Stack) all point in the same direction: Europe is building the infrastructure layer to support AI deployment under European rules.

For enterprises, this means the "there's no European alternative" argument has an expiration date. The ecosystem is emerging. The question is whether your organization will be ready to leverage it when it matures, or whether you'll be locked into architectures that make migration painful.

How should companies approach hybrid AI infrastructure?

The pragmatic answer is not "move everything on-premises" or "stay fully in the cloud." It's to design an infrastructure strategy based on the sensitivity and regulatory classification of each workload.

Tier 1: regulated and sensitive workloads. AI systems that process personal data, make decisions affecting individuals, or fall under AI Act high-risk categories should run on infrastructure you control. That can be on-premises, a European sovereign cloud provider, or a private cloud environment with contractual guarantees on jurisdiction and access.

Tier 2: internal optimization workloads. AI used for internal processes (demand forecasting, resource planning, code assistance) where data sensitivity is moderate can run on hyperscaler infrastructure, provided data residency and access controls are properly configured.

Tier 3: experimental and non-sensitive workloads. POCs, research projects, and AI tools processing only public data can use any infrastructure that meets basic security requirements.

The mistake most organizations make is treating all AI workloads identically. Either everything goes to the cloud (ignoring sovereignty risks) or a blanket "no cloud" policy creates unnecessary friction for low-risk use cases. The right approach is granular, workload by workload.
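To make the granular approach concrete, the three-tier framework above can be sketched as a simple classification rule. This is an illustrative decision helper under the assumptions stated in the tiers, not a compliance tool; real AI Act classification requires legal review.

```python
from enum import Enum

class Tier(Enum):
    """Deployment tiers mirroring the three categories above."""
    REGULATED = 1     # controlled infrastructure only (on-prem / sovereign cloud)
    INTERNAL = 2      # hyperscaler acceptable with residency + access controls
    EXPERIMENTAL = 3  # any infrastructure meeting baseline security

def classify_workload(processes_personal_data: bool,
                      affects_individuals: bool,
                      ai_act_high_risk: bool,
                      public_data_only: bool) -> Tier:
    """Map a workload's properties to a deployment tier.

    The rules paraphrase the tiers described in the text; function and
    parameter names are illustrative, and this is not legal advice.
    """
    # Tier 1: any regulated or sensitive trigger forces controlled infrastructure.
    if processes_personal_data or affects_individuals or ai_act_high_risk:
        return Tier.REGULATED
    # Tier 3: public data only, no sensitivity triggers.
    if public_data_only:
        return Tier.EXPERIMENTAL
    # Tier 2: everything in between (moderately sensitive internal data).
    return Tier.INTERNAL
```

The design point is that the rule is deliberately asymmetric: a single sensitivity trigger escalates a workload to Tier 1, while Tier 3 requires an affirmative claim that only public data is involved. Defaults should fail toward more control, not less.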

What does this mean for companies still running legacy infrastructure?

Here's where it gets interesting for organizations with significant technical debt. Sovereign AI isn't just a cloud strategy question. It's an infrastructure modernization question.

If your current systems can't support AI workloads at all (because they're too old, too rigid, or too fragile), the sovereignty discussion becomes moot. You can't choose where your AI runs if your infrastructure can't run AI in the first place.

This creates a strategic opportunity. Organizations that modernize their infrastructure now can design for sovereignty from the start, rather than retrofitting compliance later. Building a hybrid architecture that accounts for data residency, regulatory classification, and workload sensitivity from day one is significantly cheaper than migrating an existing cloud-native AI stack to a sovereign setup after the fact.

We see this pattern with our clients. The ones who approach modernization with sovereignty requirements already in scope end up with cleaner architectures, lower long-term costs, and fewer compliance headaches. The ones who modernize first and think about sovereignty later end up doing parts of the work twice.

The real question European enterprises should be asking

The sovereign AI conversation is often framed as a geopolitical issue. It's not. For individual companies, it's a risk management issue with concrete financial and legal implications.

The question isn't "should we support European digital sovereignty?" The question is: "if a regulator asks us tomorrow where our AI processes customer data, under which jurisdiction, with what access controls, and with what documentation, can we answer confidently?"

For most European enterprises today, the answer is no. And the regulatory window to fix that is closing.

At DNA Solutions, we help organizations design and implement AI infrastructure that meets European regulatory requirements without sacrificing performance or agility. We've spent over six years working with the European Commission on critical systems. We understand what compliance looks like in practice, not just on paper.

The companies that get this right early won't just avoid regulatory risk. They'll build a trust advantage that becomes harder to replicate as the market matures.