Drata, a company specialising in AI-native Trust Management, has introduced an AI Agent aimed at revolutionising Vendor Risk Management (VRM) by automating and accelerating risk assessments of third-party vendors. The announcement, made in early August 2025, positions this development as a step towards Drata’s wider ambition to transform governance, risk, compliance, and assurance (GRC-A) into a continuous, autonomous process powered by specialised AI agents.
According to Drata’s announcement, the VRM Agent uses AI to cut the traditionally weeks-long vendor risk assessment process down to minutes. It automates key tasks such as extracting assessment criteria from vendor questionnaires, reviewing vendor documents against predefined risk benchmarks, assigning risk scores, and generating dynamic reports. The agent also handles follow-up activities by issuing additional questionnaires and re-assessing vendors in real time, promising greater consistency and scalability for organisations managing large supplier networks.
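At its simplest, the kind of automated scoring described in the announcement can be pictured as a weighted tally of unmet criteria across a vendor’s documentation. The sketch below is purely illustrative: the class names, criteria, and weights are hypothetical assumptions for this article and do not reflect Drata’s actual implementation.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: names, criteria, and weights are hypothetical,
# not Drata's actual implementation.

@dataclass
class Finding:
    criterion: str          # e.g. "Encryption at rest documented"
    satisfied: bool         # did the vendor's documentation meet the benchmark?
    weight: float = 1.0     # relative importance of the criterion

@dataclass
class Assessment:
    vendor: str
    findings: list[Finding] = field(default_factory=list)

    def risk_score(self) -> float:
        """Weighted share of unmet criteria, scaled to 0-100 (higher = riskier)."""
        total = sum(f.weight for f in self.findings) or 1.0
        unmet = sum(f.weight for f in self.findings if not f.satisfied)
        return round(100 * unmet / total, 1)

    def report(self) -> str:
        """Produce a short plain-text summary of the assessment."""
        lines = [f"Vendor: {self.vendor}", f"Risk score: {self.risk_score()}/100"]
        gaps = [f.criterion for f in self.findings if not f.satisfied]
        lines.append("Gaps requiring follow-up: " + (", ".join(gaps) or "none"))
        return "\n".join(lines)


if __name__ == "__main__":
    assessment = Assessment(
        vendor="ExampleCloud Inc.",
        findings=[
            Finding("SOC 2 Type II report provided", satisfied=True, weight=2.0),
            Finding("Encryption at rest documented", satisfied=True),
            Finding("Incident response SLA defined", satisfied=False, weight=1.5),
        ],
    )
    print(assessment.report())
```

In practice, the difficult step is the one this sketch glosses over: turning unstructured vendor documents into reliable satisfied-or-not findings in the first place, which is precisely where the AI component is meant to add value.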
Drata’s CEO described the initiative as a defining moment in realising the company’s vision of “Agentic Trust Management,” in which AI agents autonomously manage trust-related tasks and provide continuous, actionable insights. The new capability builds on Drata’s existing features, including AI-generated summaries for SOC 2 audits and continuous control monitoring, and integrates with the Model Context Protocol (MCP), an open standard for supplying live context to AI workflows.
The launch of this AI Agent aligns with broader trends in the industry, where other firms are also deploying AI to streamline third-party risk assessments. For example, competitors have introduced AI tools that dramatically cut timescales for vendor risk evaluations while maintaining or enhancing report comprehensiveness and accuracy. This reflects a growing acknowledgement across the sector that traditional GRC tools—often reliant on manual effort and fragmented data—struggle to keep pace with dynamic risk environments and complex supply chains.
While Drata emphasises the agent’s potential to shift trust management from a compliance cost centre to a business enabler, experts caution that the success of such AI-driven solutions will depend on several critical factors. These include the AI’s ability to interpret diverse and unstructured vendor data accurately, integrate seamlessly with existing enterprise systems, and maintain transparency and auditability in its decision-making processes.
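One concrete way to address the auditability concern raised above is to record every individual assessment decision alongside the evidence that drove it, so a human reviewer can trace how a score was reached. The helper below is a minimal sketch assuming a simple JSON-lines log; the field names and file path are hypothetical and are not drawn from Drata’s product.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit-trail helper: field names and file path are illustrative
# only and do not describe Drata's product.

def log_decision(vendor: str, criterion: str, evidence: str,
                 outcome: str, path: str = "vrm_audit_log.jsonl") -> None:
    """Append one assessment decision as a JSON line for later human review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "vendor": vendor,
        "criterion": criterion,
        "evidence_excerpt": evidence[:200],   # short pointer back to the source text
        "outcome": outcome,                   # e.g. "satisfied", "gap", "needs follow-up"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


log_decision("ExampleCloud Inc.", "Incident response SLA defined",
             "Section 4.2 of the vendor's security whitepaper...", "gap")
```

A log of this shape is deliberately boring: the point is that each automated judgement remains inspectable after the fact, which is what auditors and risk teams will ask of any agentic system.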
Earlier in 2025, Drata had already been enhancing its platform with expanded automation features, improved user experiences, and support for emerging compliance frameworks such as ISO 42001, which addresses responsible AI governance. The company also previewed its AI capabilities at high-profile cybersecurity events, signalling a strategic focus on embedding AI deeply into trust and risk management workflows.
In summary, Drata’s new AI Agent represents a notable advance in applying agentic AI to Vendor Risk Management, aiming to address long-standing challenges in manual and fragmented GRC processes. However, as with all AI implementations in risk and compliance sectors, practical effectiveness will ultimately hinge on real-world adoption, ongoing refinement of AI models, and careful governance to ensure reliability and accountability.
Source: Noah Wire Services