Harnessing AI in Third-Party Risk Management: Maximizing Value While Mitigating Risk

This is the third installment in OCEG™’s expert panel blog series, showcasing the accomplished professionals from OCEG™’s Solution Council member companies and giving you direct access to the industry leaders who shape our standards and drive innovation in governance, risk, and compliance. Through these insights, you’ll discover the connections and expertise available through your OCEG™ membership. In this post, Dave Stapleton from ProcessUnity explores how artificial intelligence is transforming Third-Party Risk Management (TPRM) - both as a powerful tool for enhancing risk assessment capabilities and as a new source of risk when third parties adopt AI in their own operations.
Artificial intelligence transforms nearly every aspect of business operations, and Third-Party Risk Management (TPRM) is no exception. In an increasingly complex ecosystem of partners, vendors, and service providers, AI offers powerful capabilities to enhance how organizations manage third-party risks. From automating ratings and assessments to rapidly analyzing emerging threats, AI redefines the TPRM space.
While AI is shaping up to be an excellent tool for TPRM professionals, it also introduces new risks. As third parties use AI to support their product and service delivery, there are reasonable concerns about the security of their use cases and the quality of the deliverables they produce. This uptick in AI adoption, particularly among IT service providers, expands the risk attack surface and drives heightened regulatory measures.
In this article, we explore how AI changes TPRM in two major dimensions: the transformation of work for TPRM professionals and the introduction of new risks. We'll also offer practical guidance to maximize AI's value while managing the risks that come along with it.
AI-Powered Transformation of TPRM Practices
Traditionally, TPRM relied heavily on static, point-in-time questionnaires, manual document reviews, and subjective scoring models. These methods are time-consuming, inconsistent, and poorly suited to assess and protect against today’s fast-moving digital threat landscape.
AI addresses many of these limitations by enabling:
• Risk Ratings and Third-Party Tiering: AI assembles data about third parties in near real-time. As procurement requests emerge, teams gain access to inherent risk ratings that instantly prioritize their vendor portfolio. • Predictive Assessments: AI, with proper human oversight, can instantly predict how any third party will answer a standard security questionnaire.• Continuous Monitoring and Ongoing Assessments: AI models can process vast volumes of data from external and attested data sources to provide near real-time, adaptive risk scores that reflect current conditions and changing threats.• Evidence Evaluation: AI accelerates the tedious job of validating the accuracy of third-party assessment responses against provided documentation. • Contract Analysis and Policy Review: AI tools extract and compare key clauses from vendor contracts or policies to flag compliance gaps.
What’s the Risk of Third Parties Using AI?
While AI offers immense potential for greater efficiency, it also introduces new risk vectors, especially when third parties begin to use AI in their own operations or offerings. These risks include:
- Opacity: Many AI systems, especially those based on large language models (LLMs), function as black boxes, making it difficult to assess their decision-making processes or derive confidence that outputs will be accurate.
- Bias and Fairness: AI models may unintentionally encode or amplify biases present in their training data, leading to unfair outcomes or reputational damage.
- Security and Privacy: Generative AI tools can inadvertently expose sensitive data or be manipulated to produce harmful outputs, multiplying an already common threat to target data at higher and more accurate rates.
- Drift and Unpredictability: AI behavior can change over time, particularly with continuous learning models, complicating compliance and assurance efforts.
- Data Ownership and IP Rights: When third parties use AI to process or generate content, questions can arise around who owns the input data and the resulting outputs and insights, introducing potential legal disputes or exposure of proprietary information.
- Novel Threats and Attacks: The emergence of AI introduced a host of new risks, such as model theft, prompt injection, tool poisoning, etc.
These risks are compounded by the fact that many organizations lack visibility into how their third parties develop, test, or govern their AI systems or the internal proficiency and skillset to adequately assess AI-related risks. Without robust oversight and contingency planning, a third party's AI failure can quickly become your problem.
Where to Start: Frameworks for AI Risk Evaluation
Many organizations would like to evaluate third-party AI risk but may not know where to start. Several frameworks provide a solid foundation:
- NIST AI Risk Management Framework (AI RMF): This voluntary framework provides a structured approach to mapping, measuring, and managing AI-related risks. It emphasizes governance, transparency, and stakeholder engagement.
- OWASP Top 10 for LLM and Generative AI: This resource outlines the most common vulnerabilities and risk scenarios associated with large language models. It is particularly useful when evaluating third parties with embedded generative AI in their products.
Incorporating these frameworks into TPRM practices allows organizations to ask better questions during due diligence, assess the sufficiency of vendor controls, and benchmark risk across different third-party relationships.
Regulatory Considerations
As AI adoption grows, so does regulatory scrutiny. For example, the EU's AI Act establishes strict requirements for high-risk AI applications and mandates transparency and documentation.
The EU’s Digital Operational Resilience Act (DORA) includes provisions that affect how financial institutions manage risk from ICT (Information and Communication Technology) third parties, including those using AI. Importantly, all ICT third parties deemed Critical to business operations must have a plan for substitution if they suffer a failure. As niche AI technologies become increasingly ‘critical’ to business operations, organizations will have to contend with substitutability where it may not yet be possible.
In the U.S., regulators are increasingly focused on how AI affects consumer outcomes, cybersecurity, and compliance. They also expect organizations to extend this vigilance to their third-party relationships. Ignoring AI-related risks in TPRM is no longer an option.
Strategies to Maximize Value and Manage AI Risk
To fully realize AI's potential in TPRM while managing its risks, organizations should:
- Establish AI Governance: Integrate AI oversight into TPRM processes, including procurement, risk assessment, and escalation procedures. Where AI can be implemented almost universally throughout TPRM programs, prioritize the areas with greatest ROI, such as the most tedious assessment processes or time-consuming and error-prone human review cycles.
- Expand Due Diligence: Leverage an existing AI security framework and customize it to include questions about AI that are specifically relevant to your organization. These might include intended use, development and training practices, and model governance.
- Leverage AI for Assurance: Use AI to validate vendor-provided evidence, flag anomalies, and monitor third-party behavior over time.
- Stay Informed: Regularly review updates to benchmark frameworks such as NIST AI RMF and OWASP, and monitor the regulatory landscape for new requirements. Using a TPRM platform that maintains up-to-date regulatory information in one place can save your team valuable time.
Conclusion: AI TPRM Technology Maximizes Efficiency and Reduces Error (if Trained and Monitored)
AI is fundamentally changing how organizations structure and run their TPRM programs, arguably for the better.
It allows for faster, more accurate, and more adaptive risk management. It also introduces novel risks, especially when third parties deploy AI without proper controls. Organizations must approach this new frontier with both enthusiasm and caution, embracing AI's potential while building the capabilities and frameworks needed to manage its downsides.
Adopting productive AI technology while maintaining safe and compliant practices for you and your third parties, is made easier with the right AI TPRM tools and automated platforms at your disposal. ProcessUnity’s suite of AI TPRM tools and automated third-party risk and assessment workflows helps ease the growing burden of third-party management and compliance, while adopting the benefits of AI and automation for managing risk.
By accessing the best technology, TPRM professionals can position themselves as forward-looking stewards of enterprise resilience in an AI-driven world.
Learn more about ProcessUnity’s AI capabilities for TPRM on our website, or by speaking with the team today.
Featured in: Third Party Management