OpenAI has 800 million weekly active users. Only 5% of them pay.
That ratio should make you pause. The company that defined the AI era, that gave the world ChatGPT, that raised over $60 billion in aggregate funding, is subsidizing 95% of its user base. And to hit its own revenue targets by 2030, it needs to nearly quadruple its total users while doubling its paid conversion rate. All while competitors close the gap from every direction.
I spend my days building with AI tools and teaching executives how to think about AI adoption. The question I keep hearing has shifted. A year ago, people asked "which AI model should we use?" Now they ask "is OpenAI going to survive?" The answer is more complicated than the headlines suggest, and more interesting.
The Numbers Behind the Curtain
Caleb Wright's Code recently did a detailed breakdown of OpenAI's financial trajectory that's worth unpacking, because the raw numbers tell a story that gets lost in the hype cycle.
Here's the setup. OpenAI targets $174 billion in annual revenue by 2030. To get there through subscriptions alone, HSBC estimates they'd need 3 billion users with 10% paying. That's roughly 35% of the global population using ChatGPT weekly, and 300 million of them handing over $20 a month. Even at that optimistic ceiling, subscription revenue tops out around $72 billion. Which means OpenAI still needs to find another $100 billion in annual revenue from somewhere else.
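The subscription ceiling is simple arithmetic. A quick sanity check using the figures quoted above (these are the article's numbers, not independent estimates):

```python
# Sanity-check the subscription ceiling using the figures quoted above.
users = 3_000_000_000          # HSBC's hypothetical 2030 weekly user base
paid_rate = 0.10               # 10% paid conversion
monthly_price = 20             # dollars per subscriber per month

subscription_revenue = users * paid_rate * monthly_price * 12
target = 174_000_000_000       # OpenAI's 2030 annual revenue target

print(f"Subscription ceiling: ${subscription_revenue / 1e9:.0f}B/year")
print(f"Gap to target:        ${(target - subscription_revenue) / 1e9:.0f}B/year")
```

Run it and the gap is $102 billion a year that has to come from somewhere other than subscriptions.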
Where does that gap get filled? The list of potential revenue streams reads like a wish list: consumer hardware (rumored for late 2026 with 40 million devices), API and agentic applications, enterprise licensing deals (companies like Perplexity and Harvey already use OpenAI's models under the hood), advertising for free users, and a speculative royalty model that CFO Sarah Friar floated for breakthroughs that achieve market adoption.
Each of these is plausible individually. Together, they need to produce over $100 billion in annual revenue on top of subscriptions. That's a lot of things that need to go right simultaneously.
I think about this through the lens of what I see in enterprise. When I advise companies on AI tool selection, the conversation has changed dramatically. A year ago, OpenAI was the default. "We'll use ChatGPT" was the whole strategy. Today, teams run benchmarks across Claude, Gemini, and ChatGPT before committing. The moat is thinner than it looks.
The Market Share Problem
The competition picture is where things get really uncomfortable for OpenAI's projections.
Gemini recently hit 650 million monthly active users. Claude's user base is growing fast, particularly among developers and power users. Grok rides on X's distribution. The underlying models are getting good across the board, and that's the core issue: when the quality gap narrows, the product around the model matters more than the model itself.
Google has its entire productivity suite baked into Gemini's pricing. Anthropic has Claude Code, which has quietly become the default coding tool for a growing segment of developers (I use it every day, and 4% of public GitHub repositories already use it). OpenAI has ChatGPT: a great product, but one that doesn't lock you into anything. There's no switching cost beyond habit.
I see this play out in my own workflow. A year ago, I used GPT-4 for nearly everything. Today, my primary tool is Claude with Opus, and I use Gemini for specific tasks where its context window or multimodal capabilities fit better. I didn't leave OpenAI because it got worse. I left because everything else got better, and nothing about ChatGPT made leaving difficult.
Scale that behavior across millions of users, and OpenAI's path to 3 billion weekly users starts looking less like a growth plan and more like a prayer. They don't just need to grow. They need to grow four times their current size while competitors are pulling users in the opposite direction.
The API side faces similar pressure. APIs are inherently commoditized. Developers switch between providers with minimal friction because the integration patterns are nearly identical. When Anthropic or Google matches OpenAI's model quality (which happens more frequently now), the deciding factor becomes price and reliability, not brand loyalty. OpenAI still dominates API revenue, but dominance built on being first isn't the same as dominance built on being irreplaceable.
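How little friction there is becomes obvious at the request level. As a rough illustration (the second provider's base URL and model name below are placeholders, and real deployments differ in auth and feature coverage), many hosted LLM APIs accept an OpenAI-style chat payload, so moving traffic can be close to a config change:

```python
# Illustrative only: many hosted LLM APIs accept an OpenAI-style chat
# payload, so switching providers is often just a base URL + model swap.
# The "other" entry is a placeholder, not a real endpoint.
PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1",  "model": "gpt-4o"},
    "other":  {"base_url": "https://api.example.com/v1", "model": "some-model"},
}

def build_request(provider: str, prompt: str) -> dict:
    """Assemble a chat-completions request for the given provider."""
    cfg = PROVIDERS[provider]
    return {
        "url": f"{cfg['base_url']}/chat/completions",
        "json": {
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# The payload shape is identical across providers; only routing changes.
a = build_request("openai", "Summarize this memo.")
b = build_request("other", "Summarize this memo.")
assert a["json"]["messages"] == b["json"]["messages"]
```

When the integration surface is this thin, "our customers are on our API" is a weaker moat than it sounds.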
The Funding Tightrope
OpenAI's Series F, led by SoftBank, closed at the end of 2025. SoftBank had to liquidate positions elsewhere just to get the deal done by year end. That level of financial gymnastics from your lead investor isn't a great signal about the ease of future fundraising.
The company has raised over $60 billion so far. Estimates suggest they need an additional $207 billion by 2030, roughly three and a half times their total raise to date. Their next round targets $50 billion, with additional capital expected from UAE sovereign funds.
The part that fascinates me: prediction markets on Polymarket already host active bets on what OpenAI's market cap will be at IPO, whether the US government will backstop it before a certain date, and whether it gets acquired before 2027. When betting markets start pricing in government bailouts and acquisitions alongside a standard IPO, the range of outcomes is wider than the company's PR would suggest.
The talent situation adds another dimension. Key AI researchers have been leaving OpenAI. While this isn't unique (talent rotates across all frontier labs), it's harder to replace researchers at this level than it is to replace capital. There are only so many people on the planet who can push the frontier of AI capabilities, and they have more options than ever.
Then there's the Elon Musk lawsuit asking for $134 billion in damages. Whether it succeeds legally is debatable. That it exists at all, filed by someone with the resources and motivation to make it as painful as possible, is another variable that no financial projection can fully account for.
Too Big to Fail, Too Expensive to Sustain
I keep arriving at the same conclusion: OpenAI probably survives, but not on the terms it's currently projecting.
The "too big to fail" logic is compelling. Over $60 billion in invested capital. Major banks, sovereign wealth funds, and some of the largest VCs in history have their reputations tied to this outcome. The US government has strategic interest in maintaining a leading AI company. The economic and political costs of letting OpenAI collapse are high enough that someone will step in before it happens.
But surviving and thriving are different things. The most likely path looks something like this: OpenAI rushes toward an IPO while public enthusiasm for AI is still high, shifting the burden of funding from private investors to public markets. This is probably their best move. The window for an AI IPO at a premium valuation is open now, but the market's patience with unprofitable AI companies isn't infinite. Waiting until 2030 to go public, after securing 20 to 30 gigawatts of data center capacity, risks running into a market that's already digested the AI hype and wants to see actual margins.
The alternative scenarios are less attractive. Government funding comes with strings. Acquisition by Microsoft (or anyone else) means losing independence and likely losing Sam Altman's vision for the company's trajectory. More private funding means more dilution and increasingly desperate deal terms, like SoftBank's last-minute scramble.
What This Means If You Work With AI
For practitioners and business leaders, the takeaway isn't "avoid OpenAI." It's "don't build your AI strategy around any single provider."
I learned this through direct experience. When I moved from GPT-4 to Claude for my daily work, the transition was seamless precisely because I hadn't locked myself into OpenAI's proprietary features. My prompts, my workflows, my integrations all worked across providers with minor adjustments. The teams I advise who went deep on OpenAI-specific APIs and tooling face a harder migration path.
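What portability looks like in practice is a thin seam between your workflow and any single vendor. A minimal sketch (every class and function name here is hypothetical, not any real SDK):

```python
# A minimal provider-agnostic seam. Nothing below is a real SDK;
# the point is that application code depends on an interface, not a vendor.
from typing import Protocol

class ChatProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider:
    def complete(self, prompt: str) -> str:
        # In real code: call the OpenAI API here.
        return f"[openai] {prompt}"

class ClaudeProvider:
    def complete(self, prompt: str) -> str:
        # In real code: call the Anthropic API here.
        return f"[claude] {prompt}"

def summarize(provider: ChatProvider, text: str) -> str:
    # Application logic never names a vendor, so swapping is one line
    # at the call site, not a migration project.
    return provider.complete(f"Summarize: {text}")

print(summarize(OpenAIProvider(), "quarterly results"))
print(summarize(ClaudeProvider(), "quarterly results"))
```

The seam costs almost nothing to maintain and turns a provider switch from a rewrite into a configuration decision.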
The broader lesson is about how AI infrastructure economics are shaking out. The cost of inference is dropping fast (I wrote recently about Minimax's model costing $1,892 per year for continuous operation). The models themselves are converging in quality. What matters increasingly is the application layer: how you use AI, not which company's model sits underneath.
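To put that inference figure in perspective, $1,892 a year for continuous operation works out to about 22 cents an hour:

```python
# Convert the quoted annual cost of continuous operation to hourly terms.
annual_cost = 1_892            # dollars per year, figure quoted above
hours_per_year = 365 * 24      # 8,760 hours

hourly_cost = annual_cost / hours_per_year
print(f"${hourly_cost:.2f}/hour")
```

At those prices, the model underneath is closer to a utility than a differentiator.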
OpenAI might IPO at a massive valuation and prove everyone wrong. Or it might become the next cautionary tale about scaling costs outpacing revenue. Either way, the AI capabilities it helped create aren't going anywhere. The models will keep getting better, cheaper, and more accessible regardless of what happens to any single company's balance sheet.
Build your systems to be portable. Invest in understanding AI architecture, not just AI brands. And watch the IPO filing closely when it drops, because the S-1 will contain the most honest accounting of OpenAI's position that we've ever seen.
The company that started the AI revolution might not be the one that finishes it. That's not a failure story. That's how technology markets work.
