AI agents are no longer a speculative corner of the software market. According to an analysis cited by EigenCloud, a growing group of products led by Cursor, Harvey, Replit Agent, Sierra Agent and others is already generating close to $2.9 billion in annual recurring revenue. That scale has turned autonomous software from an interesting experiment into a commercial force, but it has also exposed a more awkward problem: the systems making increasingly important decisions are still hard to verify.
Cursor’s ascent has been especially striking. EigenCloud’s analysis says the coding agent passed $2 billion in annual recurring revenue in February 2026, after doubling in just three months. Harvey, which focuses on legal work, reportedly reached $195 million after tripling revenue in 2025. Replit Agent and Sierra Agent are each said to be at about $150 million, while other names in the sector, including Fin, Cognigy and Devin, are also contributing to the total.
What distinguishes this wave from earlier enterprise software is that these products are doing far more than answering prompts. Cursor can now manage substantial coding tasks from requirements gathering through testing and deployment. Harvey is being used to draft language in major disputes. Cognigy, meanwhile, is handling more than a billion customer interactions a year, underscoring how quickly AI agents are moving into operational roles once reserved for humans.
The economics help explain the pace. EigenCloud’s analysis says Anysphere, Cursor’s parent company, generates roughly $13.7 million in annual recurring revenue per employee, far above Salesforce’s roughly $290,000. In customer service, Intercom’s Fin charges by resolution rather than by message, a model the analysis argues can translate into a large reduction in cost versus human support staff. Klarna has made the same argument about its own AI deployment, saying it replaced the equivalent of 700 full-time customer service roles and saved about $60 million annually.
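The gap between those two per-employee figures can be made concrete with back-of-the-envelope arithmetic. The sketch below uses the numbers quoted above; the implied headcount and the efficiency ratio are derived estimates for illustration, not figures reported in the analysis:

```python
# Illustrative arithmetic on the article's quoted figures; variable
# names are ours, and the derived values are rough estimates only.
cursor_arr = 2_000_000_000        # Cursor's reported ARR (Feb 2026)
arr_per_employee = 13_700_000     # Anysphere, per EigenCloud's analysis
salesforce_per_employee = 290_000 # Salesforce comparison figure

# ARR divided by ARR-per-employee implies a strikingly small team.
implied_headcount = cursor_arr / arr_per_employee

# How many times more revenue per head than the incumbent comparison.
efficiency_ratio = arr_per_employee / salesforce_per_employee

print(round(implied_headcount))   # ≈ 146 employees implied
print(round(efficiency_ratio))    # ≈ 47x Salesforce's revenue per employee
```

The point of the exercise is not the exact headcount but the order of magnitude: a two-billion-dollar revenue line supported by a staff measured in the low hundreds.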
Yet the same qualities that make AI agents commercially attractive also make them harder to trust. A mistake in code can often be rolled back. A poor customer service answer can be corrected. But in settings such as logistics, law or finance, an autonomous error can have immediate and costly consequences. The EigenCloud analysis points to scenarios such as a freight-routing agent mishandling temperature-sensitive pharmaceuticals or a legal agent inventing a precedent in settlement paperwork.
That is where verifiable compute comes in. According to EigenCloud, its approach relies on hardware-isolated execution inside trusted environments, where software runs in an enclave and produces a cryptographic attestation of what was executed. The company says those attestations are tied to the deployed code and recorded onchain, creating a permanent trail that can be audited later. It also says the system is backed by $17.5 billion in restaked assets, with penalties for operators that produce false attestations.
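The attestation pattern described above can be sketched in a few lines. This is a simplified illustration of the general idea, not EigenCloud's actual protocol: the function names are invented, and an HMAC with a shared key stands in for the hardware-rooted signing key a real trusted execution environment would use.

```python
import hashlib
import hmac

# Hypothetical stand-in for an enclave signing key; real TEEs derive
# attestation keys from hardware, not from application code.
ENCLAVE_KEY = b"hardware-protected-signing-key"

def attest(deployed_code: bytes, agent_output: bytes) -> dict:
    """Inside the enclave: bind the exact deployed code to what it produced."""
    digest = hashlib.sha256(deployed_code + agent_output).hexdigest()
    signature = hmac.new(ENCLAVE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {
        "code_hash": hashlib.sha256(deployed_code).hexdigest(),
        "output_digest": digest,
        "signature": signature,   # in practice, recorded onchain for later audit
    }

def verify(attestation: dict, deployed_code: bytes, agent_output: bytes) -> bool:
    """Later audit: recompute the digest and check the enclave signature."""
    digest = hashlib.sha256(deployed_code + agent_output).hexdigest()
    expected = hmac.new(ENCLAVE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return (digest == attestation["output_digest"]
            and hmac.compare_digest(expected, attestation["signature"]))

code = b"def route(shipment): ..."
att = attest(code, b"route: cold-chain lane 7")
assert verify(att, code, b"route: cold-chain lane 7")       # genuine record passes
assert not verify(att, code, b"route: ambient lane 2")      # tampered output fails
```

The property the sketch captures is the one that matters for audits: any change to either the code or the output invalidates the signature, so the recorded trail proves what was actually run.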
Other approaches have struggled to solve the same problem at scale. Zero-knowledge proofs are still too expensive for most large-model inference, while optimistic fraud proofs can require rerunning work that may not be deterministic on GPUs. That leaves a gap between what AI agents can do and what users can prove they did.
The result is a new infrastructure race. The market has already answered one question: autonomous agents can generate serious revenue. The bigger unanswered question is who can provide the proof layer that makes their decisions acceptable in higher-stakes settings. For now, that remains the missing piece.
Source: Noah Wire Services