For a decade, venture capital treated AI as a global abstraction: data flowed freely, models scaled endlessly, and cloud boundaries were someone else’s problem. That assumption is broken.
In Davos this year, Lu Zhang, founder and managing partner of Fusion Fund, described a world where AI adoption is increasingly shaped by national rules, regional trust, and energy constraints—not just technical performance. After meetings with European leaders questioning whether US-based AI systems are safe to deploy, she said geopolitical issues now directly shape "which type of solution they're going to use."
Zhang put it bluntly when we asked whether geopolitics is changing how she thinks about investing. In the near term, she argued, Silicon Valley’s innovation engine remains structurally global—“50% of the local resident is actually first-generation immigrants like me,” she said, pointing to the diversity of founders building the next wave of AI companies.
But then Zhang described a shift she says she only fully appreciated after stepping outside the Valley bubble and into Davos meeting rooms. “When I’m here talking to leader[s] from Europe… I do realize that geopolitical issues have influence on their decision-making process of which type of solution they’re going to use,” she said. And the questions she’s hearing aren’t abstract: “Are we safe to use a solution provider from the US? Are we safe to use our own model? Are we safe to use cloud services from the US… which is surprise news to me.”
The next AI platform cycle isn’t just a competition of models; it’s a competition of trust boundaries—who controls the data, who can access it under what laws, and which vendors can credibly promise sovereignty, compliance, and continuity if politics turn.
This is why Zhang’s firm reads like a Davos-native venture strategy. Fusion Fund just closed an oversubscribed $190 million Fund IV, explicitly focused on “next-generation technology and AI solutions” across healthcare, enterprise, and industrial tech. Those are precisely the sectors where data governance and regulatory exposure are not edge cases; they are the product surface.
From “AI Everywhere” to “AI within Borders”
Davos 2026 was framed by the World Economic Forum’s “age of competition” thesis, with “geoeconomic confrontation” ranked as the top near-term risk in the Global Risks Report. Zhang’s on-the-ground assessment of Europe’s posture aligns with the report’s macro diagnosis: fragmentation is no longer just about tariffs and supply chains. It’s moving into cloud procurement, model selection, and the rules governing where training data can reside.
Even the EU’s AI policy calendar sharpens the edge of that conversation. The European Commission notes the AI Act entered into force on August 1, 2024, with major obligations phasing in through 2025 and the act becoming fully applicable on August 2, 2026 (with certain high-risk system timelines extending further). For US AI vendors selling into Europe, “trust” increasingly means auditability, governance, and demonstrable controls—not just SOC 2 badges and security blog posts.
You can see the political logic bleeding into day-to-day tooling decisions. France has announced it will replace US videoconferencing tools such as Zoom and Microsoft Teams in government departments with a domestic alternative, citing sovereign control and security concerns, and pointing to broader European sovereignty arguments about dependence on non-European tech providers. Zhang’s point is that those instincts are now arriving at the AI stack itself—models, cloud, and the systems that move sensitive data through them.
Why Fusion’s Sectors Suddenly Look like the Frontier
Fusion Fund’s stated focus—enterprise AI, healthcare AI, and industrial automation—can sound conservative in an era of consumer chat products. Zhang frames it as consistency: “in the past 10, almost 11 years… we invest in consistent, three vertical… enterprise AI, healthcare AI, the industry automation.”
But in 2026, those sectors may be where the next venture-scale platforms are built, precisely because they demand sovereignty-grade deployment patterns: privacy-preserving architectures, secure edge inference, segmented networks, and robust governance.
Zhang also names a second constraint that Davos people understand viscerally: the world can’t scale AI as if energy and compute were infinite. “The competition of AI is competition of cost,” she said. And she immediately defines “cost” in infrastructural terms: “make AI cheaper… reduced energy consumption and the GPU consumption,” plus the ability to deploy “on the edge devices… [and] the private network,” and to secure “the data of the model.”
The data support her instinct. The IEA projects global data center electricity consumption will roughly double to about 945 TWh by 2030, with demand growing around 15% per year from 2024 to 2030—far faster than overall electricity demand growth. In other words: the cost curve is now physical, not just computational.
Model Monoculture Is Breaking
Zhang described another consensus shift: enterprises are increasingly questioning whether “a large language model solves all the problems,” or whether they need “multiple, small… language models focus[ed] on specific vertical applications.”
That’s not just an engineering choice; it’s a compliance posture. Smaller, specialized systems can be easier to validate, monitor, and constrain. They can also be deployed closer to the data—on private infrastructure or at the edge—reducing cross-border exposure and, in some cases, compute costs. In a world of contested cloud dependencies, “one model to rule them all” starts to look like a single point of geopolitical failure.
Zhang’s conversations with leaders in pharma, banking, and finance also point to a maturity transition: companies moving from experimentation to integration, and building internal capacity to manage it. “They were telling me how much they’re investing into AI integration,” she said, including “an in-house AI University for the employee to work through.”
This is where Fusion’s early-stage posture becomes legible. If you believe the AI economy is shifting from model capability to deployment capability—and from open globalism to a patchwork of regulatory zones—then the winners aren’t just the best researchers. They’re the teams who can ship infrastructure-adjacent product inside regulated workflows, under realistic compute budgets, and within increasingly hard national rules.
“My hope is we can really solve the disagreements and be able to continue working as an ecosystem together, driving the innovation of AI,” she said. But she ended with the line that investors in Davos repeat privately: “on the other side… we have to really be prepared for the worst case scenario.”