Major technology leaders forecast that by 2026, AI will achieve widespread enterprise adoption only through strict governance, secure environments and demonstrable business value, signalling a shift towards controlled, trustworthy AI systems.
Leaders from major technology companies are signalling a clear shift in how enterprises will deploy artificial intelligence in 2026: the priority will be strong governance and demonstrable return on investment, with many executives arguing that controlled, well-governed environments will be a precondition for scaling AI safely.
According to The Register, Dell chief technology officer John Roese argues that AI must move closer to the enterprise if organisations are to maintain control over security, governance and costs. “This is not just risky; it’s unsustainable,” Roese wrote, warning that AI agents and chatbots rushed into production without adequate policies create exposure. He urged a move towards running models locally, “on-premises or in controlled AI factories”, to provide “a stable foundation and insulate organisations from external disruptions,” a prescription he framed as both prediction and “urgent appeal.” Entrepreneur similarly reported Roese’s insistence that robust governance frameworks and controlled environments will be essential by 2026 to mitigate risk and build trust.
That emphasis on governance is echoed across the sector. Snowflake’s chief information security officer, Brad Jones, told The Register that data governance must strike a balance between restraining agent behaviour with guardrails and allowing operators to experiment. “There are likely to be many documents or data sets in a company that don’t have permissions correctly locked down,” he said, warning that poorly protected material could be exposed if fed into generative or agentic AI.
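To make that guardrail principle concrete, the short Python sketch below shows a default-deny permission check of the kind Jones describes, where a document's access list is compared against the requesting agent before any content reaches a model. The class and function names (Document, AgentContext, retrieve_for_agent) are invented for illustration and do not describe Snowflake's implementation.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the guardrail Jones describes: check a
# document's access-control list against the requesting agent before the
# content is ever handed to a generative or agentic AI pipeline.

@dataclass
class Document:
    doc_id: str
    content: str
    allowed_roles: set = field(default_factory=set)  # roles permitted to read this document

@dataclass
class AgentContext:
    agent_id: str
    roles: set  # roles the agent inherits from its operator

def retrieve_for_agent(doc: Document, ctx: AgentContext) -> str:
    """Return document content only if the agent's roles intersect the ACL."""
    if not doc.allowed_roles & ctx.roles:
        # Deny by default: a data set with missing or wrong permissions never
        # reaches the model, which is the exposure Jones warns about.
        raise PermissionError(f"agent {ctx.agent_id} may not read {doc.doc_id}")
    return doc.content

if __name__ == "__main__":
    hr_doc = Document("salary-review", "sensitive contents", allowed_roles={"hr"})
    helper = AgentContext("expense-bot", roles={"finance"})
    try:
        retrieve_for_agent(hr_doc, helper)
    except PermissionError as exc:
        print(exc)  # the guardrail blocks the mismatched request
```

The value of the default-deny pattern is that a data set with misconfigured permissions is blocked rather than silently exposed to the agent.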
Microsoft, meanwhile, has framed trust and identity as core to safe agent deployment. Vasu Jakkal, corporate vice president of Microsoft Security, said every agent needs “a clear identity, limits on accessing systems, protocols for managing data they create, and ways to protect that information from attackers,” adding that agents should have “similar security protections as humans, to ensure agents don’t turn into ‘double agents’ carrying unchecked risk,” according to The Register. Microsoft’s Azure CTO, Mark Russinovich, described the next phase of AI infrastructure as one of efficiency and density, predicting “the rise of flexible, global AI systems, a new generation of linked AI ‘superfactories’” that will drive down costs. The Register noted Microsoft has already begun linking large-scale facilities, citing its Wisconsin supercluster unveiled in September as an early example.
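As a rough illustration of Jakkal’s “clear identity, limits on access” principle, the sketch below (with invented class and scope names, not Microsoft’s API) gives an agent a scoped, expiring credential and refuses any action outside those bounds.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of the principle that every agent needs a clear
# identity, access limits, and protections comparable to a human account.
# The classes and scope names here are invented for illustration only.

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    owner: str                 # the human or team accountable for the agent
    scopes: frozenset          # systems and actions the agent may touch
    expires_at: datetime       # credentials that lapse, like a human session

def authorize(identity: AgentIdentity, requested_scope: str) -> bool:
    """Allow an action only if the scope was granted and the identity is current."""
    if datetime.now(timezone.utc) >= identity.expires_at:
        return False  # expired agents lose access automatically
    return requested_scope in identity.scopes

if __name__ == "__main__":
    ticket_bot = AgentIdentity(
        agent_id="ticket-triage-01",
        owner="it-ops@example.com",
        scopes=frozenset({"tickets:read", "tickets:comment"}),
        expires_at=datetime.now(timezone.utc) + timedelta(hours=8),
    )
    print(authorize(ticket_bot, "tickets:comment"))   # True: within granted scope
    print(authorize(ticket_bot, "payroll:write"))     # False: never granted
```

Treating agent credentials like human ones, with an accountable owner and automatic expiry, is one way to keep a compromised agent from becoming the “double agent” Jakkal warns about.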
For business leaders, however, governance alone is insufficient; AI must prove its value. ServiceNow’s Heath Ramsey told The Register that proving value is “the only question that matters,” urging organisations to start with tasks that “are bleeding time and money” and fix them end-to-end. According to a ServiceNow press release, the company expects enterprise AI adoption to be defined by ROI and trust, and it recommends a single entry point with clear policies and approvals so successful pilots scale into repeatable, enterprise-wide patterns. Commercial signals back that focus on value: Bloomberg reports ServiceNow projects its Now Assist product will reach $1 billion in annual contracted business by 2026, up from the more than $250 million in annual contract value reported earlier, while investment analysts cited by TipRanks and The Motley Fool highlight ServiceNow’s AI monetisation and growth as drivers of its market outlook. Axios has also reported on ServiceNow’s strategic US$2.85 billion acquisition of Moveworks, a deal made to bolster its enterprise AI assistant capabilities and accelerate automation across business workflows.
The interplay between protecting intellectual assets and preserving business continuity is another emerging theme. Roese said the proliferation of AI hardware and services changes disaster recovery priorities: protecting vectorised data and AI artefacts will be essential to ensure intelligence persists through outages, a point he framed as requiring innovation across data protection, cybersecurity and core AI technology providers.
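As an illustrative sketch only, the Python below shows the sort of step Roese’s point implies: taking timestamped, restorable snapshots of vectorised data so the “intelligence” survives an outage. The snapshot format and function name are assumptions for this example, not any vendor’s backup API.

```python
import json
import pathlib
from datetime import datetime, timezone

# Illustrative sketch: vectorised data and AI artefacts belong in the
# disaster-recovery plan. The snapshot format and function names are
# assumptions for this example, not a real backup product's interface.

def snapshot_vector_store(vectors: dict[str, list[float]], backup_dir: str) -> pathlib.Path:
    """Write a timestamped copy of embeddings so they can be restored after an
    outage, alongside the source data they index."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = pathlib.Path(backup_dir) / f"vector-snapshot-{stamp}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(vectors))
    return path

if __name__ == "__main__":
    demo = {"doc-1": [0.12, 0.98, -0.33], "doc-2": [0.05, -0.71, 0.44]}
    print(snapshot_vector_store(demo, "backups"))
```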
Taken together, these corporate forecasts sketch a 2026 in which enterprises demand both stronger guardrails and concrete outcomes. Industry executives emphasise that the winners will be those who can combine secure, governed environments with focused, measurable use cases that scale. According to The Register and company statements, the conversation has shifted from “can we build it?” to “can we build it safely and make it pay?”, and that, they say, will determine which organisations and brands succeed in the age of agentic AI.
Source: Noah Wire Services