National Grid is piloting generative artificial intelligence to manage cybersecurity exposures and adapt to shifting regulatory obligations, aiming to accelerate data analysis and improve risk prioritisation.
National Grid is piloting artificial intelligence across its risk and compliance functions to help manage cybersecurity exposures and keep pace with shifting regulatory obligations, according to a report by Think Digital Partners from the ServiceNow AI Summit in London.
Jody Elliott, the company’s head of risk and sustainability, told the summit that the volume and velocity of operational data produced by infrastructure operators outstrip what traditional human-led oversight can reliably absorb. Utilities maintain extensive digital estates supporting electricity transmission in the UK and parts of the United States, creating hundreds of technology initiatives and associated backlogs that complicate governance. “In large organisations you’ve got multiple agile projects running. From a risk perspective, how do I have my sight of every story and feature in every planning session and every backlog that’s continuously running?” Elliott said.
National Grid is experimenting with generative AI to sift unstructured information from development pipelines and operational systems, flagging emergent risks so human teams can concentrate on the most consequential issues. “Generative AI in particular gives us the opportunity to analyse all that unstructured data,” Elliott said. The approach aims to reduce manual review of thousands of updates and to surface priority items for investigation.
A practical application under trial combines endpoint telemetry with vulnerability feeds and exploit reports. Elliott described an AI agent that integrates operating-system and patch-level data with disclosed weakness information, producing near‑real‑time risk rankings. “We built the agent in about an hour,” he said, adding that once active it took roughly “90 seconds to run and output the results.” Operational teams then spent several days validating outputs to establish accuracy and to ensure business context was reflected in remediation priorities. “If you overlay that with HR data,” Elliott said, organisations can determine whether at‑risk devices belong to senior executives or critical operational staff, enabling response based on potential business impact rather than technical severity alone. “It’s that business context piece that AI really elevates,” he added.
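The report does not detail how National Grid's agent is built, but the workflow Elliott describes, blending technical vulnerability severity with business context from HR data, can be sketched in outline. The device fields, feed contents, role weights and scoring formula below are all illustrative assumptions, not the company's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Device:
    host: str
    missing_patches: int
    owner_role: str  # hypothetical HR-derived context: "executive", "operations", "standard"

# Illustrative vulnerability feed: CVE id -> (CVSS base score, actively exploited?)
VULN_FEED = {
    "CVE-2024-0001": (9.8, True),
    "CVE-2024-0002": (6.5, False),
}

# Assumed business-impact weights derived from role data
ROLE_WEIGHT = {"executive": 2.0, "operations": 1.5, "standard": 1.0}

def risk_score(device: Device, cves: list[str]) -> float:
    """Rank by technical severity, doubled for known exploitation,
    then scaled by the device owner's business impact."""
    technical = sum(
        score * (2 if exploited else 1)
        for cve in cves
        for score, exploited in [VULN_FEED.get(cve, (0.0, False))]
    )
    technical += device.missing_patches * 0.5  # small penalty per unpatched item
    return technical * ROLE_WEIGHT.get(device.owner_role, 1.0)

fleet = [
    (Device("hq-laptop-01", 3, "executive"), ["CVE-2024-0001"]),
    (Device("ops-ws-07", 1, "standard"), ["CVE-2024-0002"]),
]
# Highest business-adjusted risk first
ranked = sorted(fleet, key=lambda pair: risk_score(*pair), reverse=True)
for device, cves in ranked:
    print(f"{device.host}: {risk_score(device, cves):.1f}")
```

The point of the sketch is the final multiplication: the same technical finding ranks higher on an executive's device than on a standard workstation, which is the "business context piece" Elliott credits AI with elevating.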
The company is also using AI to track regulatory change across jurisdictions. Elliott explained that an agent scans government and regulator updates, including frameworks such as SIP, SOX and PCI, against National Grid’s internal control framework, analysing a rolling 12‑month window and projecting likely developments over the coming year to pinpoint where policies or controls may require revision. “That agent is looking at a 12-month trailing update of all of those regulations,” he said, while also analysing the company’s control framework to determine “what we need to think about changing”.
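Mechanically, the "12-month trailing window mapped against an internal control framework" pattern amounts to a date-filtered join between a feed of regulator updates and the controls each regulation touches. The feed entries, control identifiers and mappings below are invented for illustration; they do not reflect National Grid's actual framework:

```python
from datetime import date, timedelta

# Hypothetical feed of regulator updates: (published date, framework, summary)
UPDATES = [
    (date(2025, 3, 1), "SOX", "Expanded audit-trail retention requirements"),
    (date(2024, 1, 10), "PCI", "Older item that falls outside the trailing window"),
    (date(2025, 6, 15), "PCI", "Stronger multi-factor authentication mandates"),
]

# Illustrative internal control framework: control id -> frameworks it addresses
CONTROLS = {
    "CTRL-LOG-01": {"SOX"},
    "CTRL-AUTH-04": {"PCI"},
}

def controls_to_review(today: date, window_days: int = 365) -> dict[str, list[str]]:
    """Flag controls touched by regulatory updates in the trailing 12 months."""
    cutoff = today - timedelta(days=window_days)
    recent = [(fw, summary) for published, fw, summary in UPDATES if published >= cutoff]
    flagged: dict[str, list[str]] = {}
    for control, frameworks in CONTROLS.items():
        hits = [summary for fw, summary in recent if fw in frameworks]
        if hits:
            flagged[control] = hits
    return flagged

print(controls_to_review(date(2025, 9, 1)))
```

A generative agent would add summarisation and forward projection on top of this filtering step; the deterministic join is what determines which controls "we need to think about changing" at all.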
National Grid has long presented cybersecurity and resilience as strategic priorities. According to material on the company’s website, it invests in prevention, detection, response and recovery capabilities and runs training to raise employee awareness of cyber risks. Industry guidance the company publishes highlights trends such as cloud migration, rising privacy and compliance demands, and the need for advanced security tools to protect applications and data both within and beyond traditional perimeters, factors that help explain the drive to automate risk analysis.
The company’s formal information security programme has previously been described to the National Institute of Standards and Technology as aligned with recognised enterprise risk frameworks and integrated with compliance, audit and the software development lifecycle, supported by governance tooling such as the RSA Archer eGRC Suite.
Elliott cautioned that embedding AI into risk processes requires parallel investment in employee literacy to prevent unwarranted trust in machine outputs. “There’s a risk that people become subject matter experts when they’re not subject matter experts,” he said. National Grid has rolled out AI training across its workforce, from executives to technical staff, and Elliott stressed the need for ongoing reinforcement: “It’s not a one-and-done. We need to reinforce that continually.”
While the trials point to faster synthesis of data and more targeted remediation, the company’s public statements frame these tools as accelerants to, rather than replacements for, human judgement. According to National Grid’s published material, that blend of automated analysis and human oversight underpins its broader cybersecurity strategy as it adapts to evolving threats and regulatory complexity.
Source: Noah Wire Services



