Box chief executive Aaron Levie is not losing sleep over how many AI tokens his engineers are burning through, arguing that a high bill can be a sign the company is pushing into new territory rather than wasting money.
Speaking on a recent episode of the a16z Show, Levie said: “We should probably waste a lot of tokens because that means that we’re trying new things.” The comment reflects a broader mood in Silicon Valley, where some executives are increasingly treating heavy token usage as a proxy for ambition, experimentation and speed.
That attitude is not universal, but it is gaining ground. Nvidia chief executive Jensen Huang recently said he would be “deeply alarmed” if an engineer on a $500,000 salary were not using the equivalent of $250,000 in tokens, while companies such as Meta and OpenAI have reportedly encouraged more aggressive usage by displaying internal leaderboards for heavy users.
Levie’s stance fits with how he has framed AI more generally over the past year. In a September 2025 interview, he said there is “no free lunch right now in AI”, stressing that the value of agents depends heavily on access to the right context, especially when they are working across unstructured enterprise data. He has also argued that AI agents are more likely to sit on top of existing software systems than replace them outright.
At Box, that means the debate is not just about engineering teams or model costs. Levie said the same questions are now spreading across the business, with legal, sales and other departments also drawing on AI tools and driving up usage. In his telling, the shift is forcing companies to rethink their spending assumptions as AI becomes embedded in everyday operations.
That, in turn, raises operational problems that go well beyond the price of inference. Levie said companies are having to decide whether a task should run as a long prompt or as a longer-lived agent, whether work should be parallelised, and how much inefficiency they are willing to tolerate in exchange for discovery and automation.
He also pointed to a more basic constraint: capacity. In his view, many of these questions will remain unresolved until the industry can build far more data centre infrastructure. Only then, he suggested, will AI providers be in a position to ease pricing pressure and stop treating tokens as such a scarce resource.
The issue is becoming more urgent as agentic AI spreads through enterprise software. Box’s own roadmap has increasingly centred on AI-enabled workflows, including a multi-year collaboration with Amazon Web Services announced in late 2025 to deepen its use of AI infrastructure and tools such as Bedrock and Anthropic’s Claude. The company has said that work is aimed at automating workflows, creating FAQ-style assistance and improving content analysis inside its platform.
For Levie, the bigger worry is not token consumption itself but the governance problems that come with agents acting at scale. He said finance chiefs and technology leaders are scrambling to determine whether their current IT and integration controls are fit for an environment in which systems may be hit thousands of times an hour by autonomous software.
The practical concern, he suggested, is less about speed than about coordination: making sure one agent does not move a file while another is writing to it, or delete something while a third process is still depending on it. In that sense, the token debate may be a proxy for a larger transition. The question for companies is no longer simply how much AI they can afford, but how much autonomy they are willing to let it have.
Source: Noah Wire Services