Opportuna Newsletter #7 | Mar-25 Edition

AI Infrastructure and the Rise of Reasoning Models

The start of 2025 has been extremely busy, as we continue to deploy into high quality venture opportunities. Contact us if you want to know how we approach them.

We are participating in an event in Zurich focused on the intersection of Semiconductors and AI. You are invited.

This month we also launched our first edition of The Assembler, a new collaboration with Mark Campbell which provides insights into software and cybersecurity in a shorter format. Mark is the CEO and Founder of 3dot Insights, which derives insights from the venture and startup communities, enabling us to further identify, evaluate and leverage emerging tech. Take a look here.

This edition remains focused on the convergence of private and public tech investing. We explore AI infrastructure trends, highlighting how, in our view, the rise of reasoning models is set to unlock a wave of investment opportunities. In “Chart of the Month” and “Current Topics”, we analyse hyperscaler Capex trends and explain why spending on AI infrastructure is set to keep increasing. Our “Long-Term Theme” examines how reasoning models are driving significant innovation and new opportunities.

You can see past editions here. Anybody can sign up by following this link.

🚨 Highlights of the Month

The attractiveness of secondaries as a liquidity solution to retain employees and satisfy investors is increasing. In a recent survey of European tech start-ups, 80% indicated they are considering a secondary share sale in 2025. Companies have increased the equity component of compensation packages, perhaps as a way to save cash. What also struck me is the gap between Continental Europe and the UK/US as IPO venues: Continental European scale-ups list in the UK or the US (see Klarna), while UK scale-ups are happy to list at home.

VC-backed buyers as an Exit Route. The share of global exits via acquisitions keeps increasing, representing 18% of global exit value and 27% of exit count. The burst of the bubble has not disrupted that trend. Maybe that presents an option for companies too small to IPO, not growing enough to raise VC money, yet too risky to attract PE. The poor track record of M&A for value creation certainly begs the question of whether this strategy will pay off for the sponsor.

Tricky IPOs. Turo pulled its IPO plans. It is difficult to know from the outside whether this decision relates to the two fatal incidents this New Year’s or to fundamentals. Apparently, the company delivered 7% sales growth recently, which will hardly draw crowds in an IPO. It is a difficult situation.

On the other hand, CoreWeave is going through. CoreWeave’s S-1 shows an asset-heavy balance sheet. Here are key facts from the filing. This asset resembles the infrastructure part of Azure, AWS or Google Cloud, without the internal customer. CoreWeave’s two largest customers accounted for 77% of revenue in 2024, with Microsoft alone making up 62%. It will be very important to understand the stickiness of that revenue stream; the FT reported that Microsoft was reducing its commitments. Is this simply a commodity serving marginal demand? Is that demand going away as hyperscalers build up their own capacity through the massive capex increases they have guided for 2025? In that context, buying an AI developer platform makes total sense.

Klarna to File for $1 Billion-Plus IPO as Soon as Next Week. The $1bn+ raise is small relative to the $800m they raised in July 2022 to expand in the US. Equally, the $15bn minimum valuation seems low relative to Affirm’s $27bn market cap, as they have similar gross profits. In our view, it indicates cautiousness around a high-profile IPO; it is very important that it pops on the day and shows appreciation over time for the gates to re-open.

📈 Chart of the Month: Hyperscalers Spent 50% of Operating Cash Flow on Capex in 2024

Capex gets funded by operating cash flow. That is the ceiling for hyperscaler capex spending. Microsoft and Google have the most room to increase.

Source: Companies Financials

🌐 Current Topics: The Sustainability of AI Capex - A Reality Check

The launch of DeepSeek has not dented hyperscalers’ capital expenditures. Meta, Google, Microsoft, and Amazon have collectively projected $304 billion in capex for 2025—$75 billion more than in 2024. The AI arms race is in full swing, and these companies show no signs of slowing down.

As Andy Jassy put it in Amazon’s last earnings call:

“AI represents for sure the biggest opportunity since cloud—probably the biggest technology shift and opportunity in business since the internet.”

Three of these four companies were born from the internet; now, they face their most disruptive business model shift yet. They will spend whatever it takes to stay ahead.

And they can still afford to. Capex will consume 57% of operating cash flow in 2025—historically high but not unsustainable. On earnings calls, analysts probe the return on investment, but only at the margins. For now, the market is giving them the benefit of the doubt. Valuations suggest investors see AI infrastructure as a necessary cost of future dominance. The moment when shareholders push for spending cuts has not yet arrived.
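As a rough back-of-the-envelope check on those figures, the implied operating cash flow can be derived from the stated capex and ratios. A minimal sketch using only the numbers quoted above (all figures are approximate aggregates, not per-company data):

```python
# Back-of-the-envelope: implied operating cash flow (OCF) from stated capex ratios.
# Figures from the text: $304bn projected 2025 capex ($75bn more than 2024),
# capex at 57% of OCF in 2025 and 50% in 2024.

capex_2025 = 304.0               # $bn, combined Meta/Google/Microsoft/Amazon guidance
capex_2024 = capex_2025 - 75.0   # $bn, per the stated year-over-year increase

ocf_2025 = capex_2025 / 0.57     # implied 2025 operating cash flow
ocf_2024 = capex_2024 / 0.50     # implied 2024 operating cash flow

print(f"Implied OCF 2025: ${ocf_2025:.0f}bn")                 # ~$533bn
print(f"Implied OCF 2024: ${ocf_2024:.0f}bn")                 # ~$458bn
print(f"Implied OCF growth: {ocf_2025 / ocf_2024 - 1:.0%}")   # ~16%
```

The implied ~16% growth in operating cash flow shows why the ratio can rise from 50% to 57% even as the absolute capex increase is large: the funding base is growing too.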

Underneath the surface, Capex is shifting away from training towards inference, driven by the rise of reasoning models (more about this in the next section). These models have higher computational-to-token ratios, which weaken inference economics. Infrastructure teams must rethink deployment architectures, capacity planning models, and scaling strategies to handle computationally intensive, unpredictable workloads. At both the hardware and software levels, inference infrastructure for reasoning models will demand significant innovation—from semiconductor design to resource management and load balancing. This evolving landscape will continue to generate investment opportunities in companies developing the next generation of AI infrastructure.

🧭 Long-Term Theme: The Rise of Reasoning Models

Over the past six months, the most significant shift in AI has been the growing prominence of reasoning models. When OpenAI o1 launched in September, it took the industry by storm. More recently, DeepSeek R1 forced many to reconsider their assumptions about what large language models (LLMs) can achieve. This section provides a primer on reasoning models—how they differ from general-purpose LLMs, where they are best applied, and how their adoption will shape the broader tech ecosystem.

What Are Reasoning Models?

Reasoning models excel at multi-step problem-solving and logical reasoning. Unlike general-purpose LLMs, which rely on statistical pattern matching, reasoning models break problems down into explicit steps, leading to more accurate results. The concept of Chain-of-Thought (CoT) originally emerged as a prompting technique to encourage LLMs to decompose problems into intermediate steps before arriving at an answer. In reasoning models, however, this approach is built into the model’s architecture and training, rather than applied externally through prompts.
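To make the distinction concrete: CoT as an external prompting technique simply prepends a decomposition instruction to the user's question. The wording below is a typical illustrative example, not any model's official template:

```python
# Chain-of-Thought as a prompting technique: the step-by-step decomposition
# is requested externally in the prompt, rather than being built into the model.

question = "A train travels 120 km in 1.5 hours. What is its average speed?"

# A general-purpose LLM answers the bare question in one shot.
direct_prompt = question

# CoT prompting wraps the same question in an explicit instruction to
# produce intermediate steps before the final answer.
cot_prompt = (
    "Think step by step. Break the problem into intermediate steps "
    "before giving the final answer.\n\n" + question
)

# A reasoning model internalizes this behaviour: it generates the
# intermediate steps without being asked, as part of its training.
print(cot_prompt)
```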

Examples of Reasoning Models

Name                 Producer     Launch Date
o1                   OpenAI       Sept-24
R1                   DeepSeek     Jan-25
Claude 3.7 Sonnet    Anthropic    Feb-25

These models perform best in specialized domains. In an October blog post, Sequoia outlined its vision, offering one of the clearest articulations of the future: LLMs serve as infrastructure—the AWS of the AI era—while agentic applications are built on top. However, integrating LLMs into workflows is far from straightforward, creating opportunities for startups to simplify adoption. Following this logic, reasoning models are likely to drive a new wave of domain-specific agentic applications. Is Vertical Agentic AI the new Vertical SaaS?

The Computational Cost of Reasoning Models. Reasoning models require significantly more computational power than general-purpose LLMs. They often perform multiple internal passes or intermediate computations per output token. While a general-purpose LLM may generate a token in a single forward pass, a reasoning model may require 2–4 forward passes to produce a well-reasoned response. This results in 2–4× the FLOPs per token during inference.
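A rough way to quantify this: a dense transformer needs on the order of 2 FLOPs per parameter per generated token per forward pass (a common rule of thumb). The sketch below, using an illustrative 70B-parameter model (an assumption, not a figure from the text), shows how 2–4 internal passes multiply per-token inference cost:

```python
# Rough per-token inference cost for a dense transformer.
# Rule of thumb: ~2 FLOPs per parameter per token per forward pass.
# The 70B model size and pass counts are illustrative assumptions.

def flops_per_token(params: float, passes: int = 1) -> float:
    """Approximate inference FLOPs to generate one output token."""
    return 2 * params * passes

PARAMS = 70e9  # hypothetical 70B-parameter model

baseline = flops_per_token(PARAMS, passes=1)   # general-purpose LLM: one pass
reasoning = flops_per_token(PARAMS, passes=3)  # reasoning model: mid-range estimate

print(f"Baseline:  {baseline:.2e} FLOPs/token")    # 1.40e+11
print(f"Reasoning: {reasoning:.2e} FLOPs/token")   # 4.20e+11
print(f"Cost multiple: {reasoning / baseline:.0f}x")  # 3x
```

At millions of tokens per second in production, that multiple compounds directly into fleet size, power draw, and memory-bandwidth requirements.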

During training, which involves backpropagation through these additional reasoning steps, the computational overhead is even higher. In production environments, real-time constraints become a challenge: when processing millions of tokens per second, even a small increase in per-token compute cost can drive up infrastructure demands and energy consumption. Beyond sheer compute power, reasoning models also place greater stress on memory bandwidth and cache performance, necessitating state-of-the-art GPUs with larger memory pools.

The rise of reasoning models marks a fundamental shift in AI, moving beyond pattern recognition toward structured, multi-step problem-solving. These models offer a glimpse into the future of AI applications: more precise, more capable, but also more computationally intensive. Scaling reasoning models requires rethinking how AI is deployed in production. While general-purpose LLMs will continue to serve broad use cases, reasoning models are poised to unlock new efficiencies in finance, law, scientific research, and other high-stakes fields.

📌 Conclusion

As we progress deeper into 2025, AI infrastructure remains a dominant investment theme. This edition has highlighted the sustained Capex commitments of hyperscalers, the impact of reasoning models, and the resulting opportunities emerging in semiconductor and cloud infrastructure.

The AI landscape is evolving rapidly, and now is the time to position ahead of the curve.  

If you are seeking tailored liquidity solutions or quality private market exposure, we would love to hear from you and explore new opportunities together. Please get in touch here.

We look forward to an exciting year ahead.

Warmest regards,
The Opportuna Team