There is a particular kind of power that accrues not to the company whose name is on the product, but to the one whose engineering is inside every competing product. Intel had it in the PC era. ARM has it in mobile. In the age of custom AI silicon, that company is Broadcom — and its Q1 fiscal 2026 earnings, reported March 4, laid bare just how far the shift has progressed.
The headline numbers are striking enough: $19.3 billion in total revenue, up 29% year-over-year, with AI semiconductor revenue specifically reaching $8.4 billion — a 106% increase. But the numbers that matter most are the ones that describe a structural change in how AI compute gets built. Broadcom now has six major customers designing custom AI accelerators, up from what was effectively a Google-only story two years ago. CEO Hock Tan told analysts the company has a line of sight to more than $100 billion in AI chip revenue by 2027. That figure — chips alone, not systems — would represent one of the fastest scaling trajectories in semiconductor history.
The XPU Rotation
The story begins with a simple economic pressure. Nvidia’s general-purpose GPUs are extraordinary machines, but they’re also extraordinarily expensive — and for hyperscalers running specific workloads at planet-scale, they’re overprovisioned. A company training a particular family of models doesn’t need a chip that can do everything. It needs a chip optimized for exactly what it does, manufactured at lower cost per unit of useful compute.
This is the logic driving what analysts covering Broadcom call the “XPU rotation” — the migration from off-the-shelf GPUs toward application-specific integrated circuits designed by the hyperscalers themselves. Google has been on this path longest with its Tensor Processing Unit, now in its seventh generation (Ironwood). But the roster has expanded dramatically. Anthropic has placed custom chip orders reportedly worth $21 billion, with Tan projecting one gigawatt of TPU compute for the company in 2026 and over three gigawatts in 2027. Meta’s MTIA custom accelerator program — which some analysts had written off — is, according to Tan, “alive and well.” OpenAI is co-developing what Broadcom described as a massive ten-gigawatt compute system, with volume production ramping through 2026.
The common thread: none of these companies fabricate their own chips. They design them. Broadcom provides the intellectual property, the backend engineering, and the translation layer that turns a hyperscaler’s architectural vision into physical silicon manufactured at TSMC. It is, in effect, the universal adapter between AI ambition and semiconductor reality.
The Networking Moat
Custom accelerators are only half the picture. The harder problem in modern AI infrastructure isn’t making chips faster — it’s connecting them. When a data center houses more than a million processing units, the networking fabric that links those units becomes the binding constraint on system performance. This is Broadcom’s second structural advantage.
The company’s Tomahawk switching platform operates at 102.4 terabits per second. Its 200-gigabit SerDes (serializer/deserializer) technology enables the high-speed interconnects that stitch million-XPU clusters together. Tan told analysts that AI networking is growing as a share of total AI revenue, projected to reach 40% of the segment in Q2 — and that Broadcom is “clearly gaining share” as the industry migrates from InfiniBand, the interconnect Nvidia has dominated since acquiring Mellanox, toward open Ethernet.
That migration matters enormously. InfiniBand locked customers into Nvidia’s ecosystem. Ethernet, standardized through the Ultra Ethernet Consortium, opens the back-end fabric to competition — and Broadcom’s switches, optics, and SerDes IP sit at the center of every major Ethernet-based AI cluster deployment. Much as Ethernet became the universal standard for networking three decades ago, it is now standardizing AI’s physical infrastructure. Broadcom doesn’t merely participate in that transition. It architects it.
The Valuation Puzzle
Despite all this, Broadcom’s stock has declined roughly 8% year-to-date through early March 2026. The disconnect between operational trajectory and market price has a few explanations. High-bandwidth memory shortages and advanced packaging constraints at TSMC have created real bottlenecks. And investors worry that hyperscaler capital expenditure — with consensus estimates for Google’s 2026 capex up 117% year-over-year and Oracle’s up 264% — cannot sustain this pace into 2028.
Those concerns aren’t frivolous. But they apply to the entire AI infrastructure complex, Nvidia included. What they miss is Broadcom’s particular resilience: an AI-related backlog exceeding $73 billion, multi-year supply agreements securing key components through 2028, and a customer base that is structurally locked in through multi-generation chip design cycles lasting three to five years. A hyperscaler that has spent two years co-designing its next-generation XPU with Broadcom doesn’t switch vendors on a quarterly earnings miss.
The Q2 guidance underscored this momentum — approximately $22 billion in revenue, 47% above the prior year, with AI semiconductor revenue alone projected at $10.7 billion, representing 140% growth. The board authorized $10 billion in share buybacks, a signal of confidence that reads differently from a company generating $8 billion in quarterly free cash flow than it would from one leveraging its balance sheet.
The Invisible Kingmaker
The AI chip narrative has been a two-character play: Nvidia as protagonist, everyone else as aspiring competitor. That framing was always incomplete, but it’s becoming actively misleading. The real structural shift isn’t about who makes the best general-purpose GPU. It’s about who controls the translation layer between silicon design and silicon fabrication — and who owns the networking standard that connects it all at scale.
Broadcom now occupies both positions. It doesn’t compete with Nvidia for GPU sales. It competes for something more durable: architectural relevance in a world where every major AI platform is designing its own compute. The company’s customers aren’t buying chips off a shelf. They’re commissioning bespoke silicon — and Broadcom is the only firm executing those commissions at scale across six simultaneous programs. In the custom silicon wars, the winner isn’t the army with the most soldiers. It’s the forge that makes all the swords.