A survey reveals that fewer than a quarter of UK chief information officers can oversee all AI activity within their organisations in real time, highlighting mounting regulatory and operational risks as AI becomes integral to business processes.
A survey commissioned by Dataiku and conducted online by The Harris Poll between December 2025 and January 2026 polled 600 CIOs at organisations with annual revenues above USD 500 million, including 75 from the United Kingdom. It found that only 23% of UK respondents reported full real-time monitoring of their AI agents, even as 92% said agent-based tools are embedded in business-critical workflows.
The gap between rapid deployment and available controls is drawing heightened board scrutiny. The Dataiku research found 85% of UK CIOs reporting increased pressure from boards to demonstrate measurable returns on AI investment since 2024, compared with 74% globally. UK IT leaders also signalled growing alarm about employee-created tools: 84% agreed that staff are building agents and apps faster than IT can govern them, while 83% warned that citizen-developed AI may put sensitive corporate data at risk.
That dynamic reflects a broader industry concern about "shadow AI": automations constructed outside formal IT oversight, which can complicate traceability, incident response and regulatory compliance. According to the Dataiku findings, 84% of UK CIOs said shortcomings in traceability or explainability have delayed or prevented AI projects from entering production.
Florian Douetteau, co-founder and CEO of Dataiku, framed the shift as a change in accountability. “CIOs are moving from experimentation into accountability faster than most organizations expected,” he said. “The pressure is real, and the timeline is tight, but there is a path to success. It favors CIOs who act decisively now, building AI systems they can explain, govern, and stand behind before accountability is imposed rather than chosen.”
Industry analysts and consultancy reports underscore the urgency. McKinsey highlights that durable governance requires clear objectives, robust data practices and systems for ongoing oversight, arguing that governance models must evolve as AI moves from pilot projects into routine operations. Gartner has predicted that 80% of organisations will have adopted AI governance frameworks by 2026, emphasising policies, assigned responsibilities and auditing mechanisms to mitigate risk.
Commentators have outlined practical elements of a governance programme. A piece in CIO argued that enterprises need defined roles, monitoring tools and transparency to avoid unintended harms such as biased decisions or security vulnerabilities. A Forbes analysis similarly recommended regular audits, strong data controls and a culture of accountability to support safe integration of AI into business processes.
The growing expectation of formal oversight is also reflected in expectations of regulatory change. Dataiku’s respondents expect new audit and explainability requirements within the next 12 months, a view echoed by public reporting from outlets such as Reuters and the BBC that have documented mounting calls for clearer rules and stronger corporate controls as incidents and public scrutiny increase.
For many organisations, the governance challenge is operational as well as strategic. Unmonitored agents and rapidly developed internal applications create potential new pathways into protected datasets and can hamper the ability to reconstruct how decisions were reached. Explainability (the capability to show which data, prompts, models and approvals produced an outcome) is emerging as a gating factor for production deployments.
As enterprises confront these pressures, advisers recommend a combination of technical and organisational measures: inventory and discovery tools to map agent activity; logging and provenance systems to enable forensic reconstruction; model risk-management practices borrowed from financial services; and governance bodies that bring together IT, legal, risk and business stakeholders. Such approaches reflect the prescriptions set out by consultancy and industry commentary while acknowledging that adoption remains uneven.
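As an illustration only (not drawn from the Dataiku research, and with all names hypothetical), the logging-and-provenance measure described above can be sketched as an append-only, hash-chained record of which agent, model, prompt, data sources and approvals produced a given outcome, so that a decision can later be reconstructed forensically:

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """One auditable entry: agent, model, prompt, data and approvals behind an outcome."""
    agent_id: str
    model: str
    prompt: str
    data_sources: list
    approvals: list
    outcome: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        # Stable content hash of this record, so later tampering is detectable.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

class ProvenanceLog:
    """Append-only log; each entry's hash is chained to the previous one."""

    def __init__(self):
        self._entries = []  # list of (record, chain_hash) tuples

    def append(self, record: ProvenanceRecord) -> str:
        prev = self._entries[-1][1] if self._entries else "genesis"
        chain = hashlib.sha256((prev + record.fingerprint()).encode()).hexdigest()
        self._entries.append((record, chain))
        return chain

    def trace(self, agent_id: str) -> list:
        # Reconstruct every recorded decision attributed to one agent.
        return [rec for rec, _ in self._entries if rec.agent_id == agent_id]

log = ProvenanceLog()
log.append(ProvenanceRecord(
    agent_id="invoice-agent",
    model="internal-llm-v2",
    prompt="Flag invoices over approval threshold",
    data_sources=["erp.invoices"],
    approvals=["finance-risk-team"],
    outcome="flagged 3 invoices for review",
))
```

This is a minimal sketch: a production system would also need tamper-resistant storage, access controls and retention policies, but it shows how chaining hashes makes silent edits to the audit trail detectable.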
The tension between innovation and control is now a boardroom issue. Dataiku’s survey suggests UK CIOs face an especially intense version of that dilemma, pressured to translate experimentation into measurable business value while establishing the assurances auditors, regulators and customers will increasingly demand. How quickly organisations close the monitoring and traceability gaps will determine which AI initiatives reach production and which are kept on hold until governance catches up.
Source: Noah Wire Services