Cerebras IPO 2025: What the $24.6B Backlog Means

Key Takeaways

- Cerebras grew revenue from $24.6M to $510M in three years but remains unprofitable
- 86% customer concentration creates significant business risk despite impressive growth numbers
- New AWS and OpenAI deals could transform Cerebras from a niche player to mainstream AI infrastructure provider

Read in Short
Cerebras is going public with a $24.6 billion backlog and deals with OpenAI and AWS. Revenue grew 20x in three years. But 86% of that revenue comes from just two Middle Eastern customers, and the company is still burning cash. For CTOs evaluating AI infrastructure and investors watching the AI hardware space, this IPO represents both massive opportunity and concentrated risk.
According to [Tom's Hardware](https://www.tomshardware.com/tech-industry/artificial-intelligence/cerebras-files-for-ipo-company-remains-unprofitable-despite-20x-revenue-growth), Cerebras has filed for an IPO for the second time, revealing financial results that show the company grew revenue from $24.6 million in 2022 to $510 million in 2025. That's 20x growth in three years. It's also one of the most lopsided customer concentration ratios you'll see in any tech IPO filing.
The Cerebras IPO story is a case study in AI market dynamics. On one hand, you have genuinely innovative technology that solves real problems in AI training. On the other, you have a business model that relies heavily on a few large customers in geopolitically complex regions. Let's break down what this means for your investment thesis and your AI infrastructure decisions.
Why Did Cerebras File for IPO Now?
This is Cerebras' second attempt at going public. The company cancelled its first IPO plans due to scrutiny around its ties with G42, an Abu Dhabi-based AI company backed by sovereign wealth fund Mubadala. Those ties haven't disappeared. They've just been joined by more palatable names.
The timing makes strategic sense. Cerebras recently signed agreements with Amazon Web Services and OpenAI. These deals do more than add revenue. They add credibility. When a company's customer list includes the world's largest cloud provider and the company that kicked off the generative AI revolution, the investment story gets much easier to tell.
The company expects to recognize about 15% of the $24.6 billion backlog within the first 24 months, through December 2027. Another 43% will come during months 25 to 48. That's a long revenue runway, assuming the deals hold.
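Those percentages translate into concrete dollar figures. A minimal back-of-envelope sketch in Python, assuming the disclosed 15% and 43% splits apply to the full $24.6 billion backlog:

```python
# Back-of-envelope: dollar value of the disclosed recognition schedule.
# Assumes the 15% / 43% splits apply to the full $24.6B backlog.
BACKLOG_B = 24.6  # total backlog, in billions of USD

first_24_months = 0.15 * BACKLOG_B   # recognized through December 2027
months_25_to_48 = 0.43 * BACKLOG_B   # recognized over the following two years
beyond_month_48 = BACKLOG_B - first_24_months - months_25_to_48

print(f"First 24 months:  ${first_24_months:.1f}B")   # ~$3.7B
print(f"Months 25 to 48:  ${months_25_to_48:.1f}B")   # ~$10.6B
print(f"Beyond month 48:  ${beyond_month_48:.1f}B")   # ~$10.3B
```

Note that roughly 42% of the backlog, over $10 billion, sits beyond month 48. That's what "long revenue runway" means in practice.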
How Does Cerebras Technology Compare to Nvidia?
Cerebras takes a fundamentally different approach to AI hardware. While Nvidia sells GPUs that get networked together into clusters, Cerebras builds wafer-scale engines. That means an entire silicon wafer becomes a single processor instead of being cut into hundreds of individual chips.
| Factor | Cerebras WSE | Nvidia GPU Clusters |
|---|---|---|
| Architecture | Single wafer-scale chip (900K cores) | Multiple GPUs networked together |
| Memory | 44 GB on-chip SRAM | 80-192 GB HBM per GPU |
| Bottleneck | Manufacturing yield | Inter-chip communication |
| Bandwidth | 21 PB/s on-chip | Limited by interconnects |
| Sales Model | Full rack-scale systems only | Individual GPUs to complete systems |
| Customer Type | Large infrastructure buyers | Everyone from researchers to enterprises |
The business implications are significant. Cerebras trades system complexity for silicon complexity. You don't have to figure out how to network thousands of GPUs together. But you do have to trust that Cerebras can manufacture these massive chips reliably. That's not trivial. Wafer-scale chips are notoriously difficult to yield, though Cerebras uses redundant cores and memory cells to work around defects.
For CTOs evaluating AI infrastructure, this means Cerebras is best suited for specific workloads where inter-chip communication is the bottleneck. That's often the case with large language model training. It's less compelling for inference workloads where Nvidia's broader ecosystem and software support matter more.
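To make the bandwidth gap concrete, here's a rough data-movement comparison. The WSE figures come from the table above; the GPU-side numbers (HBM and NVLink bandwidth) are illustrative assumptions for a current-generation datacenter GPU, not figures from the filing:

```python
# Rough data-movement ceilings: wafer-scale fabric vs. GPU memory and links.
# WSE numbers are from the spec table above; GPU numbers are illustrative
# assumptions for a modern datacenter GPU, not figures from the IPO filing.
PB, TB, GB = 1e15, 1e12, 1e9

wse_fabric_bw = 21 * PB    # on-wafer bandwidth, bytes/s
gpu_hbm_bw = 3.35 * TB     # assumed HBM bandwidth per GPU, bytes/s
gpu_link_bw = 900 * GB     # assumed NVLink bandwidth per GPU, bytes/s

payload = 44 * GB  # time to move a WSE-sized working set (44 GB) once

print(f"On-wafer fabric: {payload / wse_fabric_bw * 1e3:6.3f} ms")  # ~0.002 ms
print(f"Single-GPU HBM:  {payload / gpu_hbm_bw * 1e3:6.1f} ms")     # ~13 ms
print(f"GPU-to-GPU link: {payload / gpu_link_bw * 1e3:6.1f} ms")    # ~49 ms
```

That multi-order-of-magnitude gap between on-wafer bandwidth and inter-chip links is exactly the bottleneck the table points to.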
What's the Customer Concentration Risk?
Here's where the Cerebras IPO story gets complicated. About 86% of the company's revenue comes from just two customers: G42 and Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). Both are based in the UAE.
This level of customer concentration would be a red flag in any industry. In AI hardware, where geopolitical tensions around chip exports are constant, it's particularly concerning. The remaining 14% of revenue comes from a fragmented base of smaller enterprise, government, and cloud customers. None contribute enough individually to reduce the heavy reliance on the top two clients.
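Put in dollar terms, the exposure is stark. A quick sketch using the filing's reported revenue and the 86% figure (the split between the two customers isn't disclosed, so this only sizes the combined exposure):

```python
# Size the revenue exposure implied by the disclosed concentration figure.
# The individual split between G42 and MBZUAI is not disclosed, so this
# only computes the combined top-two exposure versus everyone else.
revenue_2025_m = 510.0   # reported revenue, $M
top_two_share = 0.86     # combined share from G42 and MBZUAI

top_two_revenue = revenue_2025_m * top_two_share
all_other_revenue = revenue_2025_m - top_two_revenue

print(f"Top two customers: ${top_two_revenue:.0f}M")    # ~$439M
print(f"All other revenue: ${all_other_revenue:.0f}M")  # ~$71M
```

Roughly $71 million from everyone else. That's the entire business outside the top two customers today.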
The AWS and OpenAI deals are meant to address this. If Cerebras can execute on these contracts, the customer concentration picture looks very different in 24 to 48 months. But execution is the key word. These are complex, long-term infrastructure deals with sophisticated buyers who have alternatives.
Is Cerebras Worth the Investment at IPO?
The investment case for Cerebras comes down to one question: Can the company diversify its customer base before the current concentration becomes a problem? The numbers tell a story of explosive growth built on a fragile foundation.
✅ Pros
- 20x revenue growth from $24.6M to $510M in three years
- $24.6 billion backlog provides demand visibility through 2027+
- Deals with AWS and OpenAI add credibility and diversification
- Unique technology solves real AI training bottlenecks
- No direct competitor in wafer-scale AI processing

❌ Cons
- Still unprofitable despite massive revenue growth
- 86% customer concentration in a geopolitically sensitive region
- Manufacturing complexity limits ability to scale quickly
- Systems-only sales model limits the addressable market
- Nvidia's ecosystem advantages in software and integration
The backlog provides some comfort. A $24.6 billion order book isn't something companies walk away from easily. But backlog isn't revenue. The expected recognition schedule of 15% in the first 24 months and 43% in months 25 to 48 means investors need patience. That's a long time in AI hardware, where the landscape shifts quarterly.

What Does This Mean for AI Infrastructure Buyers?
If you're evaluating AI infrastructure for your organization, the Cerebras IPO filing provides useful intelligence about the market. The fact that AWS and OpenAI are buying Cerebras systems suggests the technology solves problems that even Nvidia's most advanced offerings don't address completely.
Key Decision Factors for CTOs
Consider Cerebras if:

- You're training very large models
- Inter-GPU communication is your bottleneck
- You can commit to full rack-scale deployments
- You have the engineering team to work with a less mature software ecosystem

Stick with Nvidia if:

- You need flexibility in deployment size
- Inference is your primary workload
- Software ecosystem maturity matters
- You're building for multiple AI use cases
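For teams that want the checklist explicit, here's a toy encoding of the criteria above. The criteria names and the all-or-nothing logic are our own framing for illustration, not vendor guidance:

```python
# Toy decision helper encoding the checklist above. The criteria and the
# all-or-nothing logic are our own illustrative framing, not vendor guidance.
from dataclasses import dataclass

@dataclass
class Workload:
    training_very_large_models: bool
    comm_bound: bool              # inter-GPU communication is the bottleneck
    can_commit_rack_scale: bool   # full rack-scale deployment is acceptable
    strong_platform_team: bool    # can absorb a less mature software stack

def recommend(w: Workload) -> str:
    cerebras_fit = all([
        w.training_very_large_models,
        w.comm_bound,
        w.can_commit_rack_scale,
        w.strong_platform_team,
    ])
    return "Evaluate Cerebras" if cerebras_fit else "Default to Nvidia"

# Example: a communication-bound LLM training shop with a capable team.
print(recommend(Workload(True, True, True, True)))     # Evaluate Cerebras
# Example: an inference-heavy shop that needs deployment flexibility.
print(recommend(Workload(False, False, False, True)))  # Default to Nvidia
```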
The AWS partnership is particularly interesting for enterprise buyers. If AWS offers Cerebras as a cloud service, the barrier to trying the technology drops significantly. You could test wafer-scale processing for specific workloads without the capital commitment of buying systems outright.
How Long Until Cerebras Becomes Profitable?
The IPO filing reveals that Cerebras remains unprofitable despite the revenue growth. This isn't unusual for a company at this stage. Wafer-scale chip manufacturing requires massive upfront investment in R&D and production capabilities. The question is whether the backlog and new customer wins can drive margin improvement.
The path to profitability likely depends on three factors. First, manufacturing yield improvements. As Cerebras refines its production processes, the cost per system should decline. Second, software and services revenue. Like Nvidia, Cerebras can potentially build recurring revenue streams around its hardware. Third, customer diversification. More customers typically means better pricing power and more predictable revenue.
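The yield lever is easy to illustrate. A minimal sketch, assuming per-system cost is dominated by wafer cost divided by yield; the wafer cost and system price below are made-up placeholders, since the filing discloses neither:

```python
# Illustrate how manufacturing yield drives per-system cost and gross margin.
# Wafer cost and system price are hypothetical placeholders; the filing
# discloses neither. Only the cost ~ 1/yield relationship is the point.
wafer_cost_k = 500.0      # hypothetical cost per wafer, $K
system_price_k = 2500.0   # hypothetical system selling price, $K

for yield_rate in (0.5, 0.7, 0.9):
    cost_per_good_system = wafer_cost_k / yield_rate
    gross_margin = 1 - cost_per_good_system / system_price_k
    print(f"yield {yield_rate:.0%}: cost ${cost_per_good_system:.0f}K, "
          f"gross margin {gross_margin:.0%}")
```

Under these toy numbers, moving from 50% to 90% yield lifts gross margin from 60% to 78%, which is why manufacturing maturity is the first lever on the list.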
The Bigger Picture for AI Hardware Markets
The Cerebras IPO matters beyond just one company. It tests whether the AI hardware market can support multiple winners or whether Nvidia's dominance is permanent. Every major cloud provider and AI company is looking for alternatives to reduce dependence on a single supplier.
AMD has made progress in this space. Google has its TPUs. Amazon is developing Trainium. But Cerebras represents something different: a fundamentally new architecture that doesn't just compete with Nvidia on specifications but rethinks the problem entirely.
For business leaders, this diversification matters. A market with multiple viable AI hardware suppliers means better pricing, more innovation, and reduced supply chain risk. The Cerebras IPO, whatever its outcome, advances that diversification.
Frequently Asked Questions
How much does a Cerebras system cost?
Cerebras doesn't publicly disclose system pricing, but based on the revenue and deployment numbers in the IPO filing, we can estimate rack-scale systems run into the millions of dollars. The company sells complete systems, not individual chips, so the entry point is higher than buying Nvidia GPUs but potentially lower total cost of ownership for large-scale AI training.
Is Cerebras a good investment compared to Nvidia stock?
The risk profiles are very different. Nvidia is a proven market leader with diversified customers and consistent profitability. Cerebras offers higher growth potential but comes with 86% customer concentration risk and no profitability track record. Cerebras is a higher-risk bet on AI infrastructure diversification.
Should my company evaluate Cerebras for AI infrastructure?
Consider Cerebras if you're training very large language models and inter-GPU communication is limiting your performance. The AWS partnership may soon offer a lower-risk way to test the technology before committing to purchased systems. For most enterprise AI workloads, Nvidia remains the safer choice.
What happens to Cerebras if the OpenAI deal falls through?
The $20 billion OpenAI deal represents most of the backlog. If it doesn't convert to revenue as expected, Cerebras would face significant challenges. However, the deal structure likely includes milestones and commitments that make complete cancellation unlikely. Partial reduction or delays are more realistic concerns.
How does Cerebras technology work for AI training?
Cerebras builds entire silicon wafers into single processors with 900,000 cores, 44 GB of on-chip memory, and 21 PB/s of internal bandwidth. This eliminates the communication bottleneck that occurs when networking thousands of separate GPUs together. The tradeoff is manufacturing complexity, as wafer-scale chips are much harder to produce reliably.
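To see why this matters for training, consider the gradient all-reduce step in data-parallel training on a GPU cluster. A rough sketch using the standard ring all-reduce cost model; the model size and effective link bandwidth are illustrative assumptions:

```python
# Estimate the per-step gradient all-reduce time in a GPU cluster using the
# standard ring all-reduce cost model: each GPU moves ~2*(N-1)/N of the
# gradient bytes over its slowest link. All parameters are assumptions.
GB = 1e9

params = 70e9            # assumed 70B-parameter model
bytes_per_grad = 2       # fp16 gradients
grad_bytes = params * bytes_per_grad

n_gpus = 1024
link_bw = 50 * GB        # assumed effective inter-node bandwidth per GPU, B/s

ring_time = 2 * (n_gpus - 1) / n_gpus * grad_bytes / link_bw
print(f"Per-step all-reduce: ~{ring_time:.1f} s")  # ~5.6 s, paid every step

# On a single wafer-scale chip there is no equivalent cross-chip step:
# the reduction traverses the 21 PB/s on-wafer fabric instead.
```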
Logicity's Take
We build AI automation systems for businesses using Claude, n8n, and custom integrations. We're not hardware investors. But we watch infrastructure trends closely because they affect what's possible for our clients. The Cerebras IPO signals that the AI hardware market is maturing beyond Nvidia-or-nothing. That's good for everyone building AI applications. More competition means better economics for compute-intensive workloads over time.

For Indian tech companies specifically, the diversification of AI hardware suppliers could eventually mean more options for domestic AI infrastructure. Right now, getting Nvidia H100s is hard and expensive. If Cerebras succeeds and expands through cloud providers like AWS, that's another path to advanced AI compute.

Our practical advice: don't wait for hardware market dynamics to change. Build your AI capabilities with what's available now. The software layer, including agents, automation, and integration, is where most businesses create value anyway. Hardware gets cheaper and better every year. Your competitive advantage comes from what you build on top of it.
Need Help Implementing This?
At Logicity, we help businesses build AI automation systems that don't depend on having the latest hardware. Our Claude-powered agents and n8n workflows run on standard cloud infrastructure while delivering enterprise-grade results. Whether you're evaluating AI infrastructure options or ready to build, we can help you make smart decisions. Contact us at logicity.in to discuss your AI strategy.
Manaal Khan
Tech & Innovation Writer