The swift rise of agentic AI, driven by user-friendly tools like ChatGPT, is reshaping enterprise procurement and operations. Industry leaders emphasise embedding AI into workflows, prioritising data quality, human expertise, and trust to harness its full potential as AI capabilities become integral to modern technology infrastructure.
Over the past decade, the evolution of artificial intelligence (AI), particularly agentic AI, has dramatically reshaped enterprise functions.
Research underscores the unprecedented speed of generative AI adoption. A study co-conducted by the St. Louis Federal Reserve, Vanderbilt University, and Harvard Kennedy School found that generative AI reached a 39.5% adoption rate within just two years, far exceeding the roughly 20% adoption that PCs and the internet achieved over comparable periods. This rapid uptake has been driven by user-friendly tools such as ChatGPT, which have catalysed the integration of AI technologies into everyday and professional environments with remarkable speed.
Despite such advancements, the practical challenge for IT leaders and organisations lies in harnessing AI’s potential while managing its complexity and achieving measurable business outcomes. This is especially true for agentic AI, which goes beyond answering questions to acting autonomously and intelligently—echoing early AI experiments like Shakey the Robot from the 1970s, which demonstrated rudimentary agency through physical interaction with its environment.
Experience from industry players like Globality highlights key lessons gleaned from building agentic AI systems over the last decade. The company stresses the importance of integrating AI into products from the outset rather than layering it on as an afterthought. Centralised AI teams, while valuable, often lack the embedded domain knowledge critical for developing relevant, scalable models. This results in slower turnarounds and less impactful solutions. In contrast, tightly integrating AI development with specific workflows ensures systems address real-world enterprise needs effectively.
Human expertise remains indispensable in this process. Globality’s approach leaned heavily on recruiting PhDs and specialists skilled in AI and machine learning to develop its natural language processing and interactive dialogue capabilities. This deep integration of data science expertise enabled the creation of AI agents capable of managing complex, multi-step sourcing workflows in procurement—far beyond the scripted responses typical of earlier chatbot technology.
Data quality also emerges as a cornerstone of agentic AI performance. While large language models (LLMs) exhibit resilience to imperfect, unstructured inputs, consistent, clean, and domain-specific data remain essential for delivering reliable, enterprise-grade results. Globality’s proprietary dataset—a structured collection shaped by procurement experts—exemplifies the value of tailored data over generic or web-scraped information. This domain expertise ensures AI agents comprehend nuanced procurement challenges, from compliance constraints to negotiation dynamics, reinforcing their trustworthiness and effectiveness.
Moreover, understanding the limitations of AI technology is critical for building user confidence. In practice, large language models still fall short in tasks requiring consistent precision, such as mathematical calculations. Trust is built by combining these models with specialised tools designed for reliability, supporting the kind of consistent performance necessary to meet stringent enterprise security and regulatory requirements.
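To make the idea of pairing a model with a specialised tool concrete, the following is a minimal illustrative sketch (not any vendor's actual implementation): queries that parse as arithmetic are handled by a deterministic evaluator, and everything else is passed to the language model. The `answer` and `safe_eval` names, and the placeholder `llm` callable, are hypothetical.

```python
import ast
import operator

# Illustrative sketch only: route precision-sensitive arithmetic to a
# deterministic evaluator rather than asking the language model,
# which may answer such queries inconsistently.

_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_eval(expr: str) -> float:
    """Evaluate a basic arithmetic expression exactly, without an LLM."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def answer(query: str, llm=lambda q: "(model response)") -> str:
    """Send arithmetic to the exact tool; defer everything else to the model."""
    try:
        return str(safe_eval(query))
    except (ValueError, SyntaxError):
        return llm(query)
```

The design point is that the routing decision is made outside the model: the tool guarantees exactness for the class of inputs it accepts, while the model handles open-ended language.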
Client concerns about data privacy also play a central role. Globality’s clear policy of not using customer data for model training, coupled with agreements preventing data leakage via LLM providers, reflects a broader industry emphasis on safeguarding sensitive information while leveraging the power of foundation models through carefully designed prompting.
As AI-enabled personal computers gain traction—accounting for 14% of PC shipments in mid-2024 and projected by Gartner to constitute over half the market by 2026—organisations face a landscape where AI capabilities are becoming intrinsic to every layer of technology infrastructure. This makes the lessons from building agentic AI systems, which balance innovation with foundational rigor, all the more relevant.
Ultimately, while the pace of AI adoption is unprecedented, the journey underscores a vital truth: meaningful returns on agentic AI investments depend not solely on speed but on integrating core principles—robust human expertise, clean data, domain-specific knowledge, technological humility, and unwavering attention to trust and reliability. These principles, exemplified by firms like Globality, pave the way for transforming complex enterprise functions into smarter, more efficient, and fairer processes, fulfilling AI’s long-promised potential across industries.
Source: Noah Wire Services



