CNCF warns that while generative AI boosts creativity and throughput, much of the business value remains unrealised unless organisations add agentic layers that perceive, decide and act — requiring new infrastructure, governance and measurable outcomes.
According to a CNCF blog post published on 18 August 2025, a familiar pattern is emerging across enterprises: heavy investment in generative AI (GenAI) is delivering clear gains in creativity and throughput, yet much of that value is left unrealised because systems are not being paired with agentic AI — software that can perceive, decide and act autonomously. The distinction matters. GenAI excels at generation; agentic AI turns those generations into outcomes.
What agentic AI is and why it matters
Agentic AI architectures, as described by CNCF and outlined in more technical depth by Markovate, combine three core layers: perception, cognition and action. The perception layer ingests signals — sensors, logs, vision and text — to form a situational picture. The cognition layer interprets context, sets goals and plans multi‑step interventions. The action layer executes changes via actuators or APIs and then feeds outcomes back into the loop for continuous learning. Markovate additionally emphasises system orchestration, multi‑agent coordination and the need for robust data storage and retrieval for scalable deployments.
This pipeline is what allows systems to move beyond being prompt‑driven tools to becoming autonomous, outcome‑oriented agents. In logistics, for example, perception could mean live telematics and inventory telemetry; cognition would plan reorder points and route optimisations; action would place orders and instruct carriers. The result is an operational cadence that adapts in real time rather than waiting for human direction.
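To make the three layers concrete, here is a minimal sketch of a perceive–decide–act loop in Python, using the logistics example. Everything in it (the telemetry dictionary, the reorder heuristic, the print standing in for an ordering API) is a hypothetical placeholder; CNCF describes the pattern, not this code.

```python
import time

class InventoryAgent:
    """A minimal perceive-decide-act loop. All integrations are hypothetical."""

    def __init__(self, reorder_point: int, order_qty: int):
        self.reorder_point = reorder_point
        self.order_qty = order_qty
        self.history = []  # outcomes fed back into the loop for learning

    def perceive(self) -> dict:
        # Perception layer: ingest live signals (faked telemetry here).
        return {"sku": "WIDGET-1", "stock": 42, "daily_demand": 17}

    def decide(self, state: dict) -> list:
        # Cognition layer: interpret context and plan an intervention.
        days_of_cover = state["stock"] / max(state["daily_demand"], 1)
        if state["stock"] < self.reorder_point or days_of_cover < 2:
            return [("place_order", state["sku"], self.order_qty)]
        return []

    def act(self, actions: list) -> None:
        # Action layer: execute via APIs and record outcomes for the feedback loop.
        for name, sku, qty in actions:
            print(f"executing {name}: {qty} x {sku}")  # stand-in for an API call
            self.history.append((time.time(), name, sku, qty))

    def run_once(self) -> None:
        self.act(self.decide(self.perceive()))


if __name__ == "__main__":
    InventoryAgent(reorder_point=50, order_qty=100).run_once()
```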
How GenAI and agentic AI complement each other
GenAI and agentic AI are not substitutes but complements. GenAI — large language models and multimodal generators — produces text, code, dialogue and proposals rapidly. GitHub’s Copilot is a practical example: its documentation shows how generative assistants speed engineers by offering completions, conversational help and even an agent mode that can edit files and open pull requests autonomously. But generation alone does not close the loop.
Combine generation with agentic control and you can automatically generate proposals, validate them against live data and execute changes. CNCF points to e‑commerce: GenAI crafts personalised recommendations; an agentic layer adjusts prices, manages inventory and triggers fulfilment actions. The combined system delivers not just an output but a measurable business outcome — conversion lift, fewer stockouts, lower fulfilment costs.
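The shape of that closed loop is simple to express: the generative step proposes, live data validates, and only a validated proposal becomes an action. The sketch below uses invented functions (propose_price, validate, update_price) and an arbitrary margin floor; it illustrates the control pattern, not any vendor's API.

```python
def propose_price(sku: str, current: float) -> float:
    # Stand-in for a GenAI call that drafts a personalised price proposal.
    return round(current * 0.93, 2)

def validate(proposed: float, cost: float, min_margin: float = 0.15) -> bool:
    # Agentic control: check the proposal against live business data
    # before it is allowed to become an action.
    return (proposed - cost) / proposed >= min_margin

def update_price(sku: str, price: float) -> None:
    # Stand-in for the action layer (pricing API, fulfilment trigger).
    print(f"{sku}: price set to {price}")

def run_pipeline(sku: str, current: float, cost: float) -> None:
    proposed = propose_price(sku, current)
    if validate(proposed, cost):
        update_price(sku, proposed)  # a measurable outcome, not just an output
    else:
        print(f"{sku}: proposal {proposed} rejected by margin guard")

run_pipeline("TSHIRT-9", current=20.00, cost=14.00)
```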
Techniques and practical limits
Engineering such systems draws on a range of machine‑learning techniques. Reinforcement learning (RL) sits at the heart of many agentic approaches because agents learn through action and feedback; DeepMind’s research overview charts both the promise and the caveats of deep RL — notable successes in games and control, but also challenges in sample efficiency, stability and compute cost. Supervised and unsupervised methods remain essential for perception and representation learning, while retrieval‑augmented generation (RAG) is an effective way to ground generative outputs in up‑to‑date, private data. Microsoft Azure’s RAG explainer highlights how retrieval improves factual accuracy, reduces hallucinations and supports domain customisation — important when agents must act on enterprise knowledge.
But there are trade‑offs. Agentic systems are complex and resource intensive: they require reliable telemetry, vector stores and retrieval services, continuous training pipelines and orchestration layers to manage distributed agents. They can be brittle if reward signals are poorly designed, and RL’s sample requirements can make training expensive. Markovate and DeepMind both warn that safety, governance and performance engineering are first‑order concerns when moving from research prototypes to production.
Regulation, explainability and human oversight
Where autonomous systems make high‑risk decisions — in finance, healthcare, critical infrastructure — the need for explainability and governance is acute. The European Union’s AI Act, which entered into force on 1 August 2024, adopts a risk‑based approach that places obligations on high‑risk systems, including transparency, documentation and human oversight requirements. That regulatory backdrop changes deployment calculus: organisations must be able to demonstrate how agents reach decisions, retain auditable records and preserve human‑in‑the‑loop control for regulated actions.
CNCF and others therefore recommend pragmatic patterns: deploy GenAI for augmentation in low‑risk settings, introduce agentic automation in controlled subdomains with strong monitoring, and keep human review gates where stakes are high. GitHub’s own guidance for Copilot reiterates the necessity of human review for code suggested or acted upon by agents — a reminder that autonomy does not absolve responsibility.
A practical playbook for leaders
For senior technology leaders considering agentic systems, several practical steps reduce risk and accelerate value:
- Start with clear outcomes. Identify a measurable process where generation plus action can change a KPI (e.g. reduced fulfilment time, fewer service escalations, faster incident resolution).
- Prototype in a narrow domain. Build a bounded agent that integrates perception, a simple cognition layer and a constrained action set; this limits blast radius and improves observability (see the guardrail sketch after this list).
- Ground generation. Use RAG and domain retrieval to ensure outputs are factual and auditable before they become inputs to actions.
- Instrument and measure. Capture telemetry, rewards and failure modes; tune reward functions carefully to avoid perverse incentives.
- Preserve human oversight. Keep human review on dangerous or irreversible actions and log decision trails for compliance.
- Prepare infrastructure and skills. Invest in vector databases, model ops, orchestration and personnel who understand RL, safety and systems engineering.
- Align governance with regulation. Map agent behaviours to regulatory categories such as those in the EU AI Act and document mitigations.
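Several of these steps can be combined into one guardrail pattern: an explicit allow-list of actions, a human approval gate on anything irreversible, and an audit log of every decision. The sketch below is a minimal, hypothetical shape for that pattern; the action names and the console-based approval are invented for illustration.

```python
import json
import time

ALLOWED_ACTIONS = {"restart_service", "scale_up"}   # constrained action set
IRREVERSIBLE = {"delete_data", "refund_customer"}   # always require a human

AUDIT_LOG = []  # decision trail retained for compliance

def human_approves(action: str) -> bool:
    # Human review gate; in production this would page an operator.
    return input(f"approve '{action}'? [y/N] ").strip().lower() == "y"

def execute(action: str, params: dict) -> str:
    if action in IRREVERSIBLE and not human_approves(action):
        outcome = "blocked_by_reviewer"
    elif action in ALLOWED_ACTIONS or action in IRREVERSIBLE:
        outcome = "executed"  # stand-in for the real API call
    else:
        outcome = "rejected_not_on_allow_list"
    AUDIT_LOG.append({"ts": time.time(), "action": action,
                      "params": params, "outcome": outcome})
    return outcome

print(execute("scale_up", {"replicas": 3}))
print(execute("drop_table", {"name": "orders"}))    # blocked: not allow-listed
print(json.dumps(AUDIT_LOG, indent=2))
```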
Looking ahead
The most disruptive AI systems of the coming years will be those that blend creative generation with autonomous execution. But the journey from promise to production is neither trivial nor risk‑free. As CNCF and its technical counterparts observe, success requires selective use cases, rigorous engineering, explainability and governance — and, crucially, organisational patience to build the plumbing that converts generative suggestions into reliable business outcomes.
For leaders, the imperative is clear: move beyond experiments that only explore GenAI's creative potential and ask where automated action would materially change outcomes. Where it does, pair generation with a carefully constrained agentic architecture, instrument it, and hold it to legal and ethical standards. The result can be not only higher productivity but also systems that learn to operate at scale, provided they are built with the rigour that enterprise reality demands.
Source: Noah Wire Services