Estimated Read Time: 6 minutes

While competitors chase app store glory with viral video generators, Anthropic is doing something decidedly less flashy: building AI for banks, hospitals, and Fortune 500 companies. It's the kind of strategic choice that makes for terrible Twitter moments but excellent balance sheets.

The company behind Claude has essentially placed a bet that the future of AI isn't in consumer novelty—it's in the unglamorous work of enterprise transformation. Recent conversations with Anthropic's leadership reveal the architecture behind this approach: "unconflicted" relationships with clients, surgical international expansion, and a counterintuitive thesis that safety features aren't compliance burdens but competitive weapons.

The numbers suggest they're onto something. Anthropic is now the fastest-growing frontier AI company in absolute terms, and the gap widens further when you isolate enterprise revenue.

The Circular Economy Problem

Here's an uncomfortable truth about the AI industry: much of it operates as a closed loop. Model companies sell to startups, which sell to other startups, which sell to yet more startups. Actual end users—people with real problems and real budgets—remain somewhere off in the distance.

That end user looks different from what you'd expect if your mental model of AI adoption comes from tech Twitter. It's Commonwealth Bank running fraud detection systems. It's pharmaceutical companies accelerating drug development timelines by months. It's global investment banks streamlining know-your-customer processes that used to require armies of analysts.

These aren't experimental deployments where someone's testing whether AI can do something interesting. They're mission-critical operations where failure means regulatory problems, customer losses, or worse. Which means the requirements are fundamentally different from consumer AI: you need reliability, explainability, and above all, trust.

The contrast with competitors is deliberate. While OpenAI and Google capture headlines with image and video models designed to generate viral moments, Anthropic has doubled down on what its Chief Commercial Officer, Paul Smith, calls "enterprise security and safety." It's not that they couldn't build Sora. It's that building Sora would compromise everything else they're trying to do.

The Unconflicted Advantage

The AI industry has developed a platform risk problem that's getting harder to ignore. You're a founder building on OpenAI's API. Things are going great. Then OpenAI launches a consumer product that competes directly with yours. Suddenly your foundation model provider is also your competitor, and that partnership you built your company on feels considerably less stable.

Anthropic has structured itself to make this scenario impossible. When an enterprise works with Anthropic, they're not lying awake wondering whether next quarter will bring a competing product launch. The focus is singular: providing tools that make the client's business more successful.

"We want to be unconflicted," Paul emphasizes. "When someone works with us, they can trust that we're an enterprise AI provider focused on driving their success. It's not about us. It's about making them more successful—whether that's a startup built on top of Claude or a Fortune 100 software company or bank."

This sounds like corporate speak until you look at where it's actually creating advantage. Some of Anthropic's fastest-growing segments are the most heavily regulated industries: banking, insurance, healthcare, life sciences. These sectors won't adopt AI from a provider that might decide to compete with them six months from now. In regulated industries, trust isn't a differentiator—it's a prerequisite.

The approach is showing up in unexpected places. Anthropic has partnerships with both AWS and Google Cloud Platform that feel genuinely collaborative rather than the tense arrangements you see elsewhere in tech. "We're not trying to go around each other," Paul notes. "We're very clear and very deliberate in that."

The Coding Flywheel

Anthropic's enterprise strategy isn't just about avoiding consumer markets. It's built on a specific technical foundation that creates what the company calls a flywheel effect, and that foundation is Claude Code.

The pattern starts simply enough. An enterprise puts Claude Code in the hands of its ten thousand software engineers. Those engineers use it for coding tasks and experience immediate productivity gains. Standard AI adoption story so far.

But then something more interesting happens. "The natural thing that happens after that is, okay, I've done that—now what are these developers working on?" Paul explains. "They're working on transforming this line of business process. Then the question becomes: how do I start deploying other agents across the enterprise?"

This is where the flywheel accelerates. Commonwealth Bank uses Claude for fraud detection. Investment banks simplify know-your-customer processes. Pharmaceutical companies eliminate months of manual documentation analysis from drug development cycles—potentially shaving three months off timelines that determine when life-saving medications reach patients.

Each solved business problem reveals the next one. The enterprise isn't making a one-time AI deployment. They're building momentum across multiple use cases, creating what Anthropic describes as "a flywheel of just solving the next business problem and solving the next business problem."

The recent launch of Haiku 4.5 fits into this framework. While Sonnet 4.5 delivers cutting-edge intelligence for complex tasks, Haiku serves a different segment: high-volume, cost-sensitive applications that still need reliable performance. The message is clear: Anthropic is building a full-stack enterprise offering, not a one-size-fits-all consumer product.

Scaling Through Partners, Not Armies

Anthropic has tripled its international workforce over the past year, but not in the way traditional enterprise software companies scale. The company isn't building the massive sales armies that have defined enterprise go-to-market for decades. Instead, it's pursuing something more surgical: boots on the ground where absolutely necessary, combined with aggressive partner-led scaling everywhere else.

The boots-on-the-ground piece is non-negotiable for major enterprise clients. "If you're dealing with a major enterprise, they want to see the team that's going to be managing them," Paul Smith explains. "The number one question out of any meeting is: can we have your applied AI team member on site, helping our teams create that flywheel of innovation?"

That requirement means building teams in the UK, France, Germany, India—where the company recently opened an office during a visit by CEO Dario Amodei—and Japan, with more countries planned. But these aren't traditional sales teams. They're applied AI specialists who work alongside enterprise teams to unlock use cases and accelerate deployment.

The real scale comes from partners. Deloitte's recent rollout of Claude to 470,000 employees across 150 countries exemplifies the model. Anthropic continues building its direct capabilities, but the company recognizes it can't keep pace with customer demand through direct sales alone—nor should it try.

This creates an interesting dynamic. Anthropic maintains its technical focus and safety-first culture while partners like Salesforce, Deloitte, and major consultancies handle the complexities of enterprise deployment at scale. It's a model that learns from traditional enterprise software without simply replicating what came before.

Safety as Accelerant

The AI industry's conversation about safety has become tediously binary. One camp warns that aggressive regulation will stifle innovation and hurt startups. The other pushes for comprehensive oversight. Anthropic's position is more interesting: the two goals aren't in conflict—they're complementary.

"We don't want to be a bottleneck on any innovation at all," Paul Smith clarifies. "We want safe and trusted AI, and I think those two things go entirely well together. You can have safe, trusted AI and an incredible amount of innovation."

The proof shows up in deployment speed. Constitutional AI—a set of principles baked into Claude's architecture from day one—prevents the model from taking users down paths that might create problems with employees, customers, or regulators. There's overhead to this approach initially, but it creates downstream advantages that matter more.

Anthropic's work on mechanistic interpretability illustrates this dynamic perfectly. The research team is essentially building an MRI scan that peers into Claude's decision-making process, revealing how the model arrives at specific outputs. This isn't just academic research; it's exactly what banks and healthcare providers need to demonstrate model behavior to regulators.

When a financial institution can show a regulator precisely how an AI model made a lending decision, or when a hospital can explain how a diagnostic AI reached its conclusion, deployment barriers dissolve. Safety investments become adoption accelerators in heavily regulated industries, which happen to be some of the most valuable enterprise markets.

A Different Scoreboard

There's a fundamental tension in the AI industry between reach and revenue, between virality and viability. Watching Sora skyrocket to the top of the app store generates headlines and social media buzz. Building fraud detection systems for Commonwealth Bank generates sustainable enterprise contracts and recurring revenue.

Anthropic has chosen the latter path with unusual discipline. That choice flows from the founding team's mission—Dario and Daniela Amodei's focus on safe, trusted AI—but it's been operationalized into every aspect of how the business actually runs. Product decisions. Partnership strategy. Geographic expansion. Hiring priorities.

Most importantly, it's created a competitive position that's increasingly difficult to challenge. Enterprise clients in regulated industries aren't shopping for the flashiest AI—they're looking for the one they can trust with mission-critical operations. Trust isn't built through viral moments on social media. It's built through consistent performance, transparent decision-making, and an unconflicted focus on client success over an extended period.

The question for the rest of the AI industry isn't whether Anthropic's approach is working—the revenue growth and enterprise adoption numbers answer that clearly enough. The question is whether other frontier AI companies can maintain their dual focus on consumer virality and enterprise viability, or whether they'll eventually need to choose one.

Anthropic already made that choice. And they're betting that in the long run, enterprises will matter more than app store rankings. Given where the revenue is actually coming from, it's hard to argue they're wrong.
