We are excited to announce that Premji Invest is co-leading Upscale AI's Series A, and that we are partnering with founders Rajiv Khemani and Barun Kar, along with the rest of the Upscale AI team.
We are witnessing a massive architectural shift within the data center. As we enter a multi-trillion dollar capex cycle, the ratio of network spend to compute spend is rising alongside cluster sizes. This creates an opportunity for Upscale AI to become the critical silicon provider, delivering a step-change in performance and efficiency in a market that will be worth tens of billions of dollars. Members of the founding team previously architected Innovium, the only startup in the last decade to take significant market share from Broadcom (Premji led their Series E prior to their $1.1B acquisition by Marvell). We are thrilled to back this world-class team again as they revolutionize the AI networking market.
History & Problem Statement: the Rise of the AI Factory
The deployment of GenAI is driving one of the largest, fastest, and most coordinated capex cycles in the modern history of technology. We are transitioning from the era of general-purpose cloud computing, characterized by virtualization of CPU-centric workloads, to the era of the "AI Factory." In this new paradigm, the fundamental unit of compute is not a single server but a tightly synchronized cluster of tens of thousands of accelerators, where massive parallelism requires low-latency coordination across devices.
While the rise of LLMs recently resurfaced memory bottlenecks (driving demand for HBM), the continued scaling of accelerator compute has now pushed the bottleneck to I/O: the emerging "networking wall." As monolithic chip scaling approaches the reticle limit and Moore's Law slows, further scale can no longer be easily achieved by simply making individual accelerators larger or packing transistors more densely. Instead, improvements increasingly depend on efficiently distributing computation across multiple devices.
In the CPU era, the challenge was instruction feeding. In the early deep learning era, the challenge was matrix multiplication throughput (FLOPS). Today, with mixture-of-experts (MoE) models and massive context windows, the challenge is data movement. Models grow 10x per year, requiring constant exchange of gradients and weights via model and tensor parallelism. GPUs often spend significant cycles idle, waiting for data to arrive from their neighbors. This "traffic jam" caps the ROIC for the entire AI industry.
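The economics of this "traffic jam" can be sketched with back-of-the-envelope arithmetic: whenever communication cannot be fully hidden behind compute, the exposed network time directly subtracts from accelerator utilization. The numbers below are hypothetical, chosen only to illustrate the shape of the problem:

```python
# Illustrative model (hypothetical numbers, not vendor data): estimate the
# fraction of a training step an accelerator spends idle when gradient
# exchange cannot be fully overlapped with compute.

def idle_fraction(compute_ms: float, comm_ms: float, overlap: float = 0.0) -> float:
    """Fraction of a step spent waiting on the network.

    overlap: share of communication hidden behind compute (0.0 to 1.0).
    """
    exposed_comm = comm_ms * (1.0 - overlap)
    return exposed_comm / (compute_ms + exposed_comm)

# Hypothetical step: 10 ms of compute, 4 ms of all-to-all / all-reduce traffic.
print(f"{idle_fraction(10.0, 4.0):.0%}")       # no overlap: ~29% idle
print(f"{idle_fraction(10.0, 4.0, 0.5):.0%}")  # half hidden: ~17% idle
```

Even with aggressive overlap, a slow fabric leaves a double-digit percentage of very expensive silicon waiting, which is why network performance now shows up directly in cluster-level ROIC.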
Scale-up vs. scale-out
The industry distinguishes between two networking systems, "scale-up" and "scale-out". While these terms are used informally to describe communication within a single rack vs. across multiple racks, that physical distinction is becoming increasingly blurred. A clearer conceptual definition of a scale-up domain is one that shares memory semantics, i.e. a system that presents distributed accelerators as a single machine with a "shared pool" of memory. In contrast, scale-out networks rely on explicit data transfers and are more forgiving of latency.
In the near term, "rack-scale" competitiveness relies on performant scale-up silicon, a requirement shared by both hyperscalers and merchant XPU vendors. Furthermore, while the space is undoubtedly competitive, the recent proliferation of open networking standards reflects growing commercial appetite for high-performance and interoperable alternatives to vertically integrated solutions.
Upscale AI has emerged as the definitive answer to this mandate. Their SkyHammer (scale-up) architecture represents flexible silicon purpose-built for AI workloads, combining the competitive performance traditionally associated with proprietary fabrics with the ecosystem benefits of open, standards-based protocols.
Opportunity
There is no merchant silicon provider today that offers a ground-up scale-up switch that is:
- Open: compatible with UALink (the AMD/Intel standard)
- Optimized: free from Ethernet’s legacy baggage
- Neutral: unaligned with a specific GPU vendor
Enter Upscale AI.
First, Upscale AI is the only switch engineered to support both major open standards for scale-up: UALink (backed by AMD, Intel, and others) and ESUN (Ethernet for Scale-Up Networking, backed by AMD, Meta, Microsoft, and others). Upscale AI also supports UEC (Ultra Ethernet Consortium, backed by AMD, Intel, Broadcom, and others).
Second, Upscale AI's approach outperforms Ethernet. SkyHammer is a purpose-built ASIC designed from the ground up for scale-up networking. By avoiding legacy networking bloat (IP routing, legacy MACs), Upscale AI achieves strong "cut-through" latency, supporting deterministic flow control and adaptive load handling, while outperforming Broadcom's Ethernet adaptations. The technical moat is how SkyHammer handles memory semantics. In a scale-up pod, GPU A needs to read data from the memory of GPU B. NVLink does this natively. Ethernet does not: it wraps the data in a packet, sends it, unwraps it, and acknowledges it. Critically, SkyHammer reduces this overhead to near zero for massive "all-to-all" bandwidth.
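The contrast between the two models can be made concrete with a toy sketch. The classes below are conceptual illustrations only, our own simplification rather than SkyHammer's or NVLink's actual internals: both deliver the same bytes, but the packetized path pays per-transfer protocol steps and header overhead that a memory-semantic read avoids:

```python
# Conceptual sketch (not real SkyHammer/NVLink internals): contrast a
# memory-semantic remote read with an Ethernet-style packetized transfer
# by counting the protocol steps each per-transfer read incurs.

class MemorySemanticFabric:
    """Scale-up model: remote memory is addressed like local memory."""
    def read(self, remote_mem: bytes, offset: int, length: int) -> tuple[bytes, int]:
        # A load-style read: the fabric services the request directly,
        # with no software packetization or acknowledgement round-trip.
        steps = 1  # issue load, data returns
        return remote_mem[offset:offset + length], steps

class PacketFabric:
    """Ethernet-style model: wrap, send, unwrap, acknowledge."""
    HEADER = 64  # illustrative bytes of framing + transport headers
    def read(self, remote_mem: bytes, offset: int, length: int) -> tuple[bytes, int]:
        payload = remote_mem[offset:offset + length]
        packet = b"\x00" * self.HEADER + payload  # 1. wrap in a packet
        steps = 4                                 # wrap, send, unwrap, ack
        return packet[self.HEADER:], steps        # payload recovered at B

gpu_b_memory = bytes(range(256))
data1, steps1 = MemorySemanticFabric().read(gpu_b_memory, 16, 8)
data2, steps2 = PacketFabric().read(gpu_b_memory, 16, 8)
assert data1 == data2        # same bytes delivered...
print(steps1, steps2)        # ...at very different protocol cost: 1 vs 4
```

At all-to-all scale, that per-transfer overhead is multiplied across every GPU pair on every step, which is why stripping it out matters so much for scale-up fabrics.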
Third, Upscale AI is architecting the "Switzerland" of the AI data center. They hope to partner with all major clouds, working across any type of GPU (including custom ASICs at the large hyperscalers). Beyond silicon, Upscale AI is the 2nd largest contributor to SONiC, the open-source network OS that ensures interoperability across different ASIC platforms (vendors supply their own SAI libraries, which SONiC uses to program each ASIC) and fast time-to-market. Their switches are also future-proofed for optical engines.
Finally, Upscale's team has a stellar track record of taping out high-performance switches, a highly specialized area with a steep learning curve and little margin for error. Their founders and lead engineers bring rare, first-hand experience building leading-edge networking silicon and shipping reliably in complex production environments.
Conclusion
The "I/O wall" is now one of the defining challenges of the next decade. As AI workloads drive larger, more tightly coupled accelerator fabrics, networking has moved from a supporting role to a core architectural constraint. In response, the industry demands a dedicated, low-latency, memory-semantic fabric open to all.
Upscale AI is a first-mover that has positioned itself at the center of the AI infrastructure shift. They provide the critical connectivity that allows companies to compete with established players on a rack-scale level. This neutrality is Upscale AI’s biggest strength: in a market eager to diversify away from proprietary solutions, Upscale AI represents a rare and high-value alternative. Combining this scarcity with a massive TAM and a superior open-standard approach, we are convinced this is the team destined to build a defining infrastructure company in the AI era.