Despite rapid advances in generative AI, strategic missteps, technical reliability problems, and inadequate employee training threaten to undermine the benefits many organisations hope to realise from the technology.
Generative artificial intelligence (genAI) is rapidly gaining prominence in corporate settings, finding applications across a range of functions, from customer service chatbots to content creation tools and decision-support systems. Yet despite this growing enthusiasm, many organisations struggle to translate early experiments into measurable business value.
A recurring problem among companies is a misplaced focus on the technology itself rather than on the specific business challenges it should address. As Diego Garagorry, writing in Diario Los Andes, highlights, many firms ask “What can this technology do?” rather than “What problem do we need to solve?” This inversion often results in generic, disconnected solutions that come with high implementation costs, low returns, and limited internal uptake. Thus, the key to realising impact lies not merely in deploying genAI tools, but in integrating them effectively within an organisation’s operational framework.
Common pitfalls identified include launching projects without a clear proof of concept (PoC), rushing into customising models without sufficient validation, and neglecting to establish concrete success metrics. Furthermore, companies frequently underestimate the critical importance of data governance, privacy, and security. Generative AI models depend fundamentally on access to well-organised, relevant, and protected data, which demands time, structured processes, and strategic oversight.
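As an illustration of what “concrete success metrics” can look like in practice, the minimal Python sketch below encodes a PoC gate that blocks progression until every agreed threshold is met. The metric names and threshold values are hypothetical examples, not figures from the article.

```python
# Illustrative sketch: making PoC success criteria explicit before any
# model customisation begins. All metric names and thresholds below are
# hypothetical examples.

POC_SUCCESS_CRITERIA = {
    "answer_accuracy": 0.90,     # minimum share of responses rated correct
    "deflection_rate": 0.30,     # minimum share of tickets resolved unaided
    "p95_latency_seconds": 5.0,  # maximum acceptable response latency
    "cost_per_query_usd": 0.05,  # maximum acceptable unit cost
}

# Metrics where lower is better (thresholds act as ceilings, not floors).
UPPER_BOUND_METRICS = {"p95_latency_seconds", "cost_per_query_usd"}

def poc_passes(measured: dict) -> bool:
    """Return True only if every agreed metric meets its threshold."""
    for metric, threshold in POC_SUCCESS_CRITERIA.items():
        value = measured.get(metric)
        if value is None:
            return False  # an unmeasured metric counts as a failure
        if metric in UPPER_BOUND_METRICS:
            if value > threshold:
                return False
        elif value < threshold:
            return False
    return True

# A pilot that misses its deflection target does not advance.
print(poc_passes({
    "answer_accuracy": 0.93,
    "deflection_rate": 0.22,
    "p95_latency_seconds": 3.1,
    "cost_per_query_usd": 0.04,
}))  # False
```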
A significant technical challenge remains the phenomenon of “hallucinations”, where AI systems produce confidently presented but erroneous or fabricated information. Recent progress has brought hallucination rates down substantially: according to an ACM article cited by Garagorry, state-of-the-art models such as OpenAI’s GPT-5 and Google’s Gemini 2.5 Pro report hallucination rates below 1.5%, a marked improvement on earlier ranges of 2.5% to 8.5%. Despite these advances, mitigation strategies remain essential, especially in critical sectors that demand the highest reliability.
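The article does not specify which mitigation strategies are meant, but one widely used pattern is self-consistency sampling: query the model several times and accept an answer only when most samples agree, since fabricated details tend to vary between runs while grounded answers repeat. Below is a minimal Python sketch of that idea, with `call_model` as a hypothetical stand-in for a real model client.

```python
from collections import Counter

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a generative model API call; a real
    deployment would invoke an actual client here. For demonstration
    it returns a canned answer."""
    return "24 months"

def self_consistent_answer(prompt: str, samples: int = 5,
                           min_agreement: float = 0.6) -> str | None:
    """Sample the model several times and keep the majority answer only
    if enough samples agree; otherwise signal for human review."""
    answers = [call_model(prompt) for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    if count / samples >= min_agreement:
        return best
    return None  # disagreement across samples: escalate to a person

answer = self_consistent_answer("How long does the standard warranty last?")
print(answer if answer is not None else "Escalated for human review")
```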
Supporting this perspective, academic research also underscores the persistent risks hallucinations pose. A 2023 study involving GPT-3.5 found that novel validation techniques reduced hallucination rates from nearly half of outputs to around 15%, significantly enhancing trustworthiness. Meanwhile, research on multi-modal large language models (MLLMs) reveals that hallucinations can be exacerbated in scenarios involving degraded input data, such as blurry or cropped images, pointing to ongoing reliability challenges in complex AI deployments.
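The study’s validation techniques are not described in the article, so the Python sketch below is only an illustrative stand-in: it flags generated sentences whose content words barely overlap the source material. Production systems would typically use entailment models or citation checks rather than raw token overlap, but the shape of the loop (generate, verify, flag) is the same.

```python
import re

def token_set(text: str) -> set[str]:
    """Lower-cased alphanumeric tokens from a piece of text."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def unsupported_sentences(output: str, sources: list[str],
                          min_overlap: float = 0.5) -> list[str]:
    """Flag output sentences poorly supported by the source documents.
    A deliberately crude proxy for real claim verification."""
    source_tokens = set().union(*(token_set(s) for s in sources))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", output.strip()):
        words = token_set(sentence)
        if words and len(words & source_tokens) / len(words) < min_overlap:
            flagged.append(sentence)
    return flagged

sources = ["The warranty covers manufacturing defects for 24 months."]
draft = ("The warranty covers manufacturing defects for 24 months. "
         "It also includes free annual servicing.")
print(unsupported_sentences(draft, sources))
# ['It also includes free annual servicing.']
```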
Beyond technical hurdles, a broader cultural and organisational gap hampers genAI integration. Surveys across various sectors reveal a stark lack of comprehensive training on how to use these systems effectively. Industry data indicates that fewer than half of the companies surveyed provide any form of generative AI education, leaving many employees to teach themselves, which often results in underutilised investments and unrealised productivity gains. Time constraints and a shortage of skilled trainers further complicate the picture.
Adding to these difficulties, the corporate world remains cautious about genAI adoption overall. Reports from global business environments show that 95% of pilot genAI projects fail to deliver measurable outcomes, hampered by challenges including poor system integration, lack of traceability, and compliance concerns. A growing concern is the rise of “shadow AI,” where employees independently use generative AI tools without organisational oversight, presenting security and governance risks.
In regions such as Chile, for example, many companies are still hesitant to embrace genAI beyond exploratory phases. According to surveys, over half of workforce respondents lacked encouragement or clear directives regarding generative AI usage, underscoring the need for coherent strategies and structured training to move from curiosity to effective deployment.
Expert opinion further reinforces the importance of a measured, strategic approach. Computing industry commentators warn against common missteps: neglected standardised workflows, premature adoption of advanced prompt engineering, insufficient monitoring, and overlooked privacy considerations. Such failures can stall or even reverse progress.
To overcome these challenges, thought leaders recommend a phased, evidence-based adoption framework encompassing initial problem identification, rigorous PoC stages, defined success metrics, robust data governance, and continuous feedback loops. This approach helps organisations learn and adapt, ensuring genAI’s potential is realised not as a one-off technological novelty but as a sustainable, value-adding asset.
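To show how such a phased framework might be operationalised, the Python sketch below models the stages as ordered gates a project cannot skip; the stage names and checks are illustrative choices, not a methodology prescribed by the article.

```python
# Illustrative stage gates for phased genAI adoption; names and checks
# are examples, not a prescribed methodology.
ADOPTION_STAGES = [
    ("problem_identification", lambda ctx: bool(ctx.get("business_problem"))),
    ("proof_of_concept",       lambda ctx: ctx.get("poc_passed", False)),
    ("success_metrics",        lambda ctx: bool(ctx.get("metrics_defined"))),
    ("data_governance",        lambda ctx: ctx.get("governance_signoff", False)),
    ("production_rollout",     lambda ctx: ctx.get("monitoring_live", False)),
]

def advance(ctx: dict) -> str:
    """Walk the stages in order and report the first gate that fails,
    so a project cannot skip ahead of its evidence."""
    for stage, gate in ADOPTION_STAGES:
        if not gate(ctx):
            return f"Blocked at: {stage}"
    return "All gates passed; continue the feedback loop."

print(advance({"business_problem": "reduce ticket backlog",
               "poc_passed": True, "metrics_defined": True}))
# Blocked at: data_governance
```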
In summary, while generative AI is poised to transform multiple sectors, its effective adoption requires careful alignment with business goals, disciplined validation processes, sound data practices, and comprehensive workforce readiness. Only through such integrated strategies can companies truly scale genAI innovations from pilot curiosities to core drivers of competitive advantage.
Source: Noah Wire Services



