Enterprise adoption of autonomous, agentic AI systems is accelerating, promising enhanced efficiency and real-time automation. But organisations moving from pilot projects to large-scale deployment face significant hurdles in data infrastructure, governance, and security.
Enterprise adoption of Large Language Models (LLMs) has evolved significantly, from exploratory experiments to production-scale tools that primarily aid content creation and data synthesis. Yet such tools remain largely reactive, generating output in response to prompts rather than pursuing goals on their own.
This challenge has sparked interest in what is known as agentic AI, a paradigm shift that transcends mere generative assistance. Unlike generative AI, which primarily responds creatively to prompts, agentic AI functions as an autonomous execution engine. It leverages LLMs not just to generate content but to reason through goals, decompose them into actionable tasks, interface independently with various enterprise systems like CRM and ERP platforms, and dynamically respond to evolving business contexts. Such systems act as proactive orchestrators, for example identifying a sales lead, crafting and sending personalised communications grounded in real-time customer data, and updating records automatically, thus embedding a cognitive control layer absent from traditional automation.
Described as digital co-workers, these agentic systems possess the capacity for ongoing learning and adaptation. They cycle through perceiving their environment, planning steps, executing actions, and reflecting on outcomes to refine future performance. This adaptive loop mirrors trends in other fast-paced industries such as mobile gaming and streaming, where platforms continuously evolve to meet user needs. The agentic AI paradigm demands robust real-time data infrastructures and dynamic governance models that enable real-time supervision rather than post-hoc auditing, given its broad integration with key operational systems and the criticality of compliance and security.
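The perceive-plan-execute-reflect cycle described above can be sketched in a few lines. This is a minimal, illustrative skeleton, not a real agent framework: all class and method names are hypothetical, the "plan" step stands in for an LLM decomposing a goal into tasks, and "execute" stands in for calls into systems like CRM or ERP platforms.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Illustrative perceive-plan-execute-reflect loop; names are hypothetical."""
    goal: str
    memory: list = field(default_factory=list)

    def perceive(self, environment: dict) -> dict:
        # Gather whatever signals the agent can observe this cycle.
        return {"goal": self.goal, "observations": environment}

    def plan(self, state: dict) -> list:
        # A real agent would ask an LLM to decompose the goal into tasks;
        # here we return a fixed placeholder task list.
        return [f"step for: {state['goal']}"]

    def execute(self, tasks: list) -> list:
        # Stand-in for interfacing with CRM, ERP, or other enterprise systems.
        return [f"done: {task}" for task in tasks]

    def reflect(self, results: list) -> None:
        # Persist outcomes so later cycles can adapt to what happened.
        self.memory.extend(results)

    def run_cycle(self, environment: dict) -> list:
        state = self.perceive(environment)
        results = self.execute(self.plan(state))
        self.reflect(results)
        return results

agent = Agent(goal="follow up on new sales lead")
results = agent.run_cycle({"new_leads": 1})
```

The point of the sketch is the shape of the loop: each cycle's outcomes land in persistent memory, which is what lets an agentic system refine future behaviour rather than treating every prompt as a blank slate.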
Despite the promising vision, practical adoption of agentic AI remains emergent and uneven. While some early adopters in IT operations, financial services, and supply chain management report encouraging benefits—like autonomous infrastructure monitoring, real-time regulatory compliance automation, and predictive demand forecasting—many initiatives are still pilots or proofs of concept. Reports from Gartner forecast that over 40 percent of current agentic AI projects will be discontinued by 2027, largely due to high costs, murky business value, and operational complexity. Gartner also highlights the problem of “agent washing,” where vendors inaccurately brand conventional AI tools as agentic without genuine autonomous capabilities, noting that out of thousands of vendors, only a small fraction offer authentic agentic AI solutions.
Experts emphasise that widespread, effective deployment of agentic AI hinges on several critical factors: trustworthy, unified data environments with real-time accessibility; tightly integrated systems that facilitate seamless agent communication; persistent memory capabilities; and strong governance frameworks to mitigate risks like hallucinations or misuse. Emerging technologies such as data fabric architectures—which provide holistic, context-rich data layers—are gaining prominence over more fragmented models, supporting the scalability and reliability essential for agentic workflows. Standards like the Model Context Protocol (MCP) and Agent-to-Agent (A2A) communication protocols are also pivotal for enabling intelligent agent collaboration.
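To make the protocol point concrete, the snippet below shows the general shape of an agent-to-tool request, loosely modelled on JSON-RPC-style protocols in the spirit of MCP. The field names, tool name, and arguments here are illustrative assumptions, not the actual MCP wire format.

```python
import json

# Hypothetical agent-to-tool call; structure is illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "crm.update_record",          # hypothetical tool identifier
        "arguments": {"lead_id": "L-1042", "status": "contacted"},
    },
}

# Serialise for transport; the receiving tool server would validate
# the tool name and arguments before acting.
payload = json.dumps(request)
```

Standardising this request/response envelope is what lets agents discover and invoke tools across vendors, rather than each integration being a bespoke API binding.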
Nonetheless, security concerns loom large. Industry voices like Haider Pasha, EMEA CISO at Palo Alto Networks, caution that agentic AI poses significant cybersecurity challenges, potentially exceeding predicted failure rates if not carefully managed. Risks include unauthorised memory or tool access, objective drift where agents diverge from intended goals, and exploitation by malicious actors via prompt injections or compromised privileges. Pasha advocates for practical governance akin to managing interns: restricting agent capabilities, enforcing strict identity and access management, and integrating advanced security tools such as runtime monitoring and firewall protections. Given the explosive growth in machine identities, which now outnumber human ones by over 80 to 1, strengthening digital identity frameworks is fundamental to safeguarding agentic AI deployments.
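The "manage agents like interns" principle reduces, in its simplest form, to deny-by-default authorisation: each agent identity carries an explicit allowlist of tools, and anything unlisted is refused. The sketch below illustrates that idea under assumed names; it is not drawn from any particular security product.

```python
# Hypothetical per-agent tool allowlists; deny by default.
AGENT_PERMISSIONS = {
    "sales-agent-01": {"crm.read", "crm.update_record", "email.send"},
}

def authorize(agent_id: str, tool: str) -> bool:
    # Unknown agent identities and unlisted tools are both refused,
    # mirroring strict identity and access management for machine identities.
    return tool in AGENT_PERMISSIONS.get(agent_id, set())
```

In practice this check would sit inside the runtime that brokers every tool call, so that objective drift or a prompt injection cannot grant an agent capabilities it was never issued.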
Looking forward, professional services projections suggest that agentic AI adoption will accelerate notably. Deloitte forecasts that by 2027, half of companies currently using generative AI will have launched agentic AI pilots or proofs of concept. Gartner projects that by 2028, one-third of enterprise software will embed agentic AI, and 15 percent of routine business decisions will be autonomously made by these agents. Yet the journey to broad operationalisation is expected to be gradual, as organisations grapple with technical, financial, and governance complexities.
In summary, agentic AI represents a profound evolution in enterprise AI applications—from reactive content generation to autonomous, context-aware digital co-workers capable of optimising workflows end-to-end. Its promise includes improved efficiency, reduced human bottlenecks, and new possibilities for real-time, cognitive automation across industries. However, realising this potential demands significant shifts in data infrastructure, governance philosophy, integration capabilities, and security posture. As enterprises cautiously advance from experimentation to scaled use, they must balance ambition with diligence to navigate the considerable hurdles that remain.
Source: Noah Wire Services