Industry experts warn that autonomous AI agents will revolutionise software development and security, escalating threats from deepfakes to malwareless attacks while demanding new resilience and reskilling strategies ahead of 2026.
The technology landscape is approaching 2026 with a sense of consolidation and reckoning: organisations will no longer be judged by how quickly they adopt AI but by how strategically they fold it into existing processes, while adversaries expl...
Continue Reading This Article
Enjoy this article as well as all of our content, including reports, news, tips and more.
By registering or signing into your SRM Today account, you agree to SRM Today's Terms of Use and consent to the processing of your personal information as described in our Privacy Policy.
Emilio Salvador, VP of Strategy and Developer Relations at GitLab, argues that the winners will be those that embed AI throughout the software development lifecycle rather than rushing to automate discrete tasks. He forecasts a shift toward roughly “50/50 human-agent collaboration” and the emergence of “meta-agents”, autonomous agents that manage other agents, operate with their own communication identities and proactively take on work. According to Salvador’s view, these systems will change how teams schedule work and how development “shifts” occur across time zones.
That agentic future raises immediate security questions. Matt Mullins, Head Hacker and Offensive SME at Reveal Security, warns that adversaries will industrialise AI-enabled attacks. He expects deepfakes and AI-assisted exploitation to move from research prototypes to operational use, shortening exploitation timelines and enabling lone operators to achieve the effects that once required organised teams. Mullins also highlights a rise in “malwareless” attacks that abuse legitimate remote-access tools, already often whitelisted by IT, to reduce detection risk.
Tim Erlin, VP of Product at Wallarm, foresees attackers chaining AI-driven steps into multi-stage, autonomous campaigns. Where previously attackers needed human judgement at each stage, generative agents can now reason about follow-on actions, increasing the sophistication and reach of automated compromises. Erlin expects the security market to consolidate around application protection and for community-driven standards to emerge; he specifically cites momentum behind the A2AS standard from a.org as an example of nascent governance for AI-security interactions.
The scale of these threats is reflected in broader industry research. The State of AI 2025 report from AItechnologies documents sharp year-on-year rises in cyber incidents, a 15% increase in ransomware, a 53% rise in DDoS attacks and a 190% surge in phishing from 2023 to 2024, while noting new targets such as connected vehicles are suffering steep growth in remote attacks. Meta-Techs’ threat outline similarly lists autonomous agent compromise via prompt injection, hyper-realistic AI-generated social engineering and AI-enabled polymorphic malware as top risks for 2026, underscoring the variety of attack vectors that defenders must face.
Defenders are planning countermeasures that mirror attackers’ use of autonomy. KnowBe4’s 2026 predictions, compiled from its global CISO advisers, foresee agentic AI systems reducing mean time to respond (MTTR) by 30–50% in mature security operations centres by autonomously performing tier-one triage, enrichment and containment while producing immutable audit trails and regulator-ready incident summaries. But KnowBe4 also warns that the same shift will change workforce dynamics, with autonomous tools introducing new attack surfaces such as model context protocol servers and prompt-injection vectors, and enabling the formation of “shadow syndicates” that blend organised crime with cyber operations.
Those workforce dynamics are a recurrent theme. Joseph Kim, CEO of Druid AI, warns of massive reskilling as agentic assistants proliferate, engineers will spend more time reviewing and curating AI output than hand-coding, and teams will need new capabilities to manage thousands of “junior” AI contributors. Kim also calls attention to infrastructure pressures: soaring AI workloads will create data-centre capacity and energy-cost challenges that, he predicts, will drive rapid investment in energy efficiency.
The production code base itself is changing, with operational consequences. VMblog’s summary cites JJ Tang, founder and CEO of Rootly, who points to AWS figures showing that 75% of production code is now AI-generated and research indicating roughly 25% productivity gains. Tang warns that as engineers rely on AI to produce code, familiarity with the resulting artefacts will decline and incidents may become harder to diagnose. He draws attention to an emergent class of AI-driven incident response tools, so-called AISR, that automatically investigate root causes and propose fixes, potentially broadening incident response beyond specialist SRE teams.
Academic work is attempting to systematise these new risks. A 2025 arXiv paper, “Securing Agentic AI,” lays out a purpose-built threat model for generative agents and proposes frameworks (ATFAA and SHIELD) to catalogue agent-specific risks, ranging from cognitive architecture vulnerabilities to governance circumvention, and to prescribe mitigations tailored to agents’ autonomy, memory persistence and tool integration. The paper reinforces the consensus that conventional threat models and controls will not be sufficient for agentic systems.
Taken together, these perspectives suggest practical priorities for organisations planning for 2026: invest in strategic, lifecycle-wide AI adoption rather than point solutions; harden identity and remote-access controls to counter malwareless campaigns; adopt or contribute to emergent standards for AI security; accelerate workforce reskilling; and plan capacity and energy investments for rising AI compute demand. The interplay between agentic defenders and agentic attackers means the coming year will be one of both consolidation and contestation, markets and standards are likely to converge even as adversaries refine ways to weaponise the same technologies defenders use to protect systems.
Source: Noah Wire Services



