As organisations explore agentic AI capabilities beyond text generation, experts emphasise the importance of layered architectures, robust data governance, and cautious deployment strategies amid high costs and regulatory complexities.
When industry practitioners speak of agentic AI they are describing a step-change: systems that do more than generate text or answers, and instead plan and execute multi-step tasks on behalf of users. According to the original report by R...
The architectural reality is complex. Industry data shows agentic systems typically pair a planning LLM with specialist smaller models and a web of connectors and APIs; Retrieval‑Augmented Generation (RAG) patterns and disciplined data pipelines are essential to avoid “hallucination” and ensure outputs are traceable to source records. Jerrom stresses the importance of model resilience, capacity and distributed inference, and points to open source inference engines and projects that enable in‑house, Kubernetes‑friendly serving as a way for organisations to control costs and sovereignty risks. The company said in a statement that platforms built on open principles can help deploy and manage agents across hybrid cloud and edge environments, which is especially relevant where connectivity or power constraints make local inference desirable.
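The RAG pattern described above can be sketched in miniature. The keyword retriever and prompt builder below are illustrative stand-ins (a production system would use a vector store and an LLM client), but they show the core discipline: every answer is grounded in, and cited back to, identifiable source records.

```python
# Minimal RAG sketch. retrieve() is a toy keyword ranker standing in for
# a vector search; the prompt tags each passage with its record ID so
# generated answers remain traceable to source.

def retrieve(query: str, documents: dict, top_k: int = 2) -> list:
    """Rank source records by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = [
        (doc_id, text, len(terms & set(text.lower().split())))
        for doc_id, text in documents.items()
    ]
    scored.sort(key=lambda item: item[2], reverse=True)
    return [(doc_id, text) for doc_id, text, score in scored[:top_k] if score > 0]

def build_prompt(query: str, passages: list) -> str:
    """Cite each passage by record ID so outputs stay auditable."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    return (
        "Answer using only the sources below, citing their IDs.\n"
        f"{context}\nQuestion: {query}"
    )

documents = {
    "policy-7": "Refunds are issued within 14 days of a valid return request.",
    "faq-2": "Support hours are 09:00 to 17:00 on weekdays.",
}
query = "How long do refunds take?"
passages = retrieve(query, documents)
prompt = build_prompt(query, passages)
```

Only records that actually match the query reach the prompt, which is what keeps a downstream model from answering beyond its evidence.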
Cost and complexity, however, are not theoretical. A Gartner report warned in June 2025 that more than 40% of agentic AI projects will be scrapped by 2027 because of high costs and unclear business outcomes. At the same time, Gartner forecasts that agentic AI will handle a growing share of routine business decisions: 15% of day‑to‑day decisions made autonomously, and agentic capabilities in a third of enterprise software, by 2028. That gulf between potential and practical delivery highlights why careful use‑case selection, staged pilots and rigorous governance are not optional. Cognizant’s enterprise guide recommends a phased rollout from foundations and copilots through to supervised and collaborative autonomy, emphasising early prototypes, measurable KPIs and governance frameworks that build trust as capabilities scale.
Data quality and document fidelity remain decisive variables. Reporting on failures across sectors makes the point bluntly: poor data and weak OCR or extraction processes can turn autonomous workflows into sources of costly error, from flawed loan assessments in banking to misinformation‑driven clinical decisions in healthcare. TechRadar and other analyses argue that advanced scanning, authoritative metadata, lineage tracking and enterprise data governance must be part of any agent programme before operators grant agents meaningful autonomy.
Regulation and sovereignty further shape deployment choices. Jerrom notes that many EMEA businesses prioritise AI sovereignty, and that national laws such as South Africa’s Protection of Personal Information Act (POPIA) impose obligations on how agents access, process and store personal data. The company said that robust guardrails are essential: content filtering, policy enforcement, audit trails and consent mechanisms must be embedded into agent workflows so escalation thresholds and human‑in‑the‑loop controls are clear. In regulated industries such controls are prerequisites rather than conveniences.
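A minimal sketch of how such guardrails compose in code follows. The blocked field names, the escalation threshold and the action labels are assumptions for illustration, not drawn from any vendor implementation; the point is that policy checks, escalation and the audit trail wrap every action the agent attempts.

```python
# Illustrative guardrail wrapper: policy enforcement, a human-in-the-loop
# escalation threshold, and an append-only audit trail. Field names and
# the threshold value are hypothetical.

import json
from datetime import datetime, timezone

AUDIT_LOG = []
BLOCKED_FIELDS = {"id_number", "medical_record"}  # e.g. POPIA-sensitive data
ESCALATION_THRESHOLD = 10_000  # actions above this value go to a human

def execute_action(action: str, payload: dict) -> str:
    # Policy enforcement: refuse to touch blocked personal-data fields.
    if BLOCKED_FIELDS & payload.keys():
        decision = "blocked"
    # Human-in-the-loop: high-value actions escalate instead of running.
    elif payload.get("amount", 0) > ESCALATION_THRESHOLD:
        decision = "escalated_to_human"
    else:
        decision = "executed"
    # Audit trail: every decision is recorded with a timestamp.
    AUDIT_LOG.append(json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "decision": decision,
    }))
    return decision

decision = execute_action("refund", {"amount": 500})  # routine action runs
```

Because the log captures blocked and escalated attempts as well as successes, it doubles as the evidence base a regulator or reviewer would ask for.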
Major vendor activity and public demonstrations illustrate both progress and caution. OpenAI unveiled a ChatGPT agent in July 2025 that can run multi‑step tasks using an internal “virtual computer” and integrate with external apps, and demonstrations show capabilities such as calendar planning and content creation. However, OpenAI’s CEO Sam Altman cautioned against relying on such agents for high‑stakes uses, a warning echoed in independent reporting that highlighted exploitation risks and the need for human oversight. The mixed signals (impressive demos coupled with explicit cautions) reinforce that enterprises must limit agent authority until reliability at scale can be proven.
Practical deployment therefore comes down to three linked priorities. First, pick use cases where value is quantifiable and risks are manageable: routine triage, data gathering, appointment scheduling, automated monitoring and low‑risk customer interactions are the typical starting points. Second, invest in the plumbing: RAG, connectors, identity, encryption, logging and Model Context Protocol‑style interfaces that standardise how agents discover and interact with enterprise systems. According to the original report, MCP is emerging as a promising standard to reduce bespoke integrations and vendor lock‑in. Third, govern tightly and iterate: supervised autonomy with well‑defined escalation, auditability and metrics for accuracy, cost and customer impact.
If implemented thoughtfully, agentic AI is complementary rather than replacementist. Jerrom frames agents as tools that “make organisations more capable by making their people more capable”, automating the routine and surfacing relevant information so experts can apply judgement. Security and fraud use cases also show upside: autonomous detection and near‑real‑time response can materially reduce attack windows when agents are constrained by clear policy and oversight.
But the path to scale is neither cheap nor guaranteed. Analyst warnings about project cancellations, vendor demonstrations that include public caveats, and repeated findings about data quality all point to a simple conclusion: agentic AI will deliver when organisations align ambition with realistic planning, invest in infrastructure and governance, and start small with measurable outcomes. For enterprises in South Africa and beyond, hybrid deployments that balance local inference for continuity with cloud capabilities for scale, reinforced by open, interoperable platforms, appear to be the pragmatic route to turning agentic promise into sustained value.
Source: Noah Wire Services



