CrowdStrike and Cisco warn that advancing AI systems are doubling both productivity and attack surfaces, with many organisations unprepared for emerging threats and manipulation tactics.
Two new vendor reports underline a widening problem: AI systems that act on behalf of users are becoming both powerful productivity tools and attractive attack surfaces, yet many organisations remain unprepared to secure them.
According to a CrowdStrike threat brief, autonomous a...
Continue Reading This Article
Enjoy this article as well as all of our content, including reports, news, tips and more.
By registering or signing into your SRM Today account, you agree to SRM Today's Terms of Use and consent to the processing of your personal information as described in our Privacy Policy.
Cisco reaches similar conclusions in its State of AI Security 2026 report. The research finds that while the majority of organisations intend to deploy agentic AI, a much smaller share believe they can do so safely; the report highlights supply‑chain fragility, the expanding risks tied to Model Context Protocols, and the rapid evolution of prompt‑injection and jailbreak techniques. Cisco has begun repositioning its product portfolio around these risks, describing additions such as AI supply‑chain governance, runtime protections and AI‑aware traffic controls in its Secure Access Service Edge offerings to reduce manipulation and exploitation of agentic workflows.
The combined message from both vendors is stark: the ability to act multiplies risk, and current tooling lags behind attackers’ creativity. Industry incidents from 2025 illustrate the danger. High‑severity vulnerabilities and exploitation chains, ranging from an email that triggered automatic data exfiltration from a major Copilot deployment to widespread cascade compromises via a chat‑agent integration, and high success rates in data‑exfiltration tests against other agent platforms, have shown how quickly an automated assistant can become a conduit for breach.
A common criticism of enterprise responses is scope and access: vendor solutions such as endpoint sensors or cloud controls are effective for organisations that can afford and deploy them, but they do not address the full ecosystem of developers, researchers and smaller teams building agents. In a developer blog on dev.to the author behind an open‑source project called ClawMoat argues for tooling that inspects agents at runtime and in session transcripts. The project aims to detect and flag patterns associated with prompt injection and jailbreaks, search agent input/output for exposed credentials, monitor for unauthorised outbound data flows, identify malicious instructions embedded in persistent memory or context files, enforce policy between agents and external tools, and surface privilege boundary violations. The author says the tool is distributed under an MIT licence and is intended to sit in the execution path so operators can catch attacks before they escalate.
There is no single technical fix. Cisco and CrowdStrike both emphasise a mix of approaches: stronger runtime controls, supply‑chain governance, agent‑aware network and access policies, and improved telemetry to spot anomalous behaviour. CrowdStrike’s agentic threat‑intelligence approach illustrates one way vendors expect to use AI to help secure AI, while Cisco’s product updates signal a push to bake AI‑specific protections into networking and access layers.
For teams building agentic systems, the immediate priorities are practical: minimise execution privileges, avoid persisting sensitive context where it can be tampered with, apply strict allowlists and rate limits for tool integrations, and instrument agents so their actions are observable. Where organisations cannot rely on enterprise suites, community tools that monitor sessions and enforce policy can help close gaps, though they are not a substitute for architectural hardening and threat‑informed design.
The technological momentum behind agentic AI is accelerating both utility and risk. As vendors publish more detailed threat analyses and update product roadmaps in response, the critical work will be translating those findings into the controls and observability that actually prevent abuse in real deployments.
Source: Noah Wire Services



