Five Key Themes from Our Conversation with PwC
This is the eleventh installment in OCEG™'s expert panel blog series, showcasing the accomplished professionals from OCEG™'s Solution Council member companies and giving you direct access to the industry leaders who shape our standards and drive innovation in governance, risk, and compliance. Through these insights, you'll discover the connections and expertise available through your OCEG™ membership. In this post, Dean Alms from Aravo explores five critical themes for managing AI risk effectively, from implementing phased governance approaches and addressing third-party AI concerns to navigating global regulatory complexity and building cultures of accountability with continuous monitoring.
It’s remarkable to pause and consider how quickly AI has become part of everyday life. Today, people routinely rely on AI for tasks like shopping, planning meals, finding efficient routes, brainstorming ideas, and supporting countless professional functions. Just five years ago, this level of integration would have seemed unimaginable.
AI’s relevance has accelerated in organizations as well. Companies are increasingly using AI to develop products and services, improve operations, and enhance decision-making. With this expansion comes a growing need for strong governance. Organizations now face critical questions: Who is responsible for managing AI? How should potential risks be identified, assessed, and mitigated? How can organizations ensure that AI is used responsibly, ethically, and in compliance with emerging regulations? These are not hypothetical questions—they are central to managing AI risk effectively.
Here are five key themes from our recent conversation with PwC on managing AI risk and assessing third-party AI use:
Governance Needs to Be Proactive and Phased
Effective AI governance is not a one-time initiative; it requires a deliberate, phased approach. Organizations should start with lower-risk AI use cases, such as machine learning decision models, to build trust, validate performance, and demonstrate value. These initial steps allow organizations to test governance frameworks, establish accountability, and refine processes in a controlled environment.
Once this foundation is secure, organizations can scale into more advanced AI applications, including generative AI tools and agentic AI systems. Scaling responsibly requires tighter controls on data quality, human oversight, performance validation, and ethical safeguards. A phased approach ensures that risk management evolves alongside technology adoption, reducing the likelihood of unintended consequences and creating a sustainable governance model.
Third-Party AI Risk Is a Core Concern
Organizations rarely operate AI in isolation; third-party vendors often play a key role in building, deploying, or maintaining AI systems. This reliance introduces additional risks that cannot be addressed through standard vendor assessments alone.
Effective third-party AI risk management requires understanding how vendors build, govern, and evolve their models. Organizations need visibility into training data sources, explainability practices, bias detection and mitigation measures, and ongoing performance monitoring. Contracts should include explicit governance requirements, attestation obligations, and documented processes for AI use.
Ongoing validation and oversight are critical. AI models evolve continuously as they are exposed to new data, algorithms, and updates. Organizations must ensure that vendor systems remain compliant, accurate, and aligned with organizational policies over time. This proactive approach minimizes exposure and ensures that third-party AI contributes positively to overall risk management objectives.
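To make these due-diligence dimensions more tangible, here is a minimal sketch of how a risk team might capture a third-party AI assessment as a structured record and flag gaps for escalation. The field names and escalation rule are illustrative assumptions, not a prescribed framework from Aravo, PwC, or OCEG.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VendorAIAssessment:
    """Illustrative record of a third-party AI due-diligence review.
    Field names are hypothetical, not a standard template."""
    vendor_name: str
    model_use_case: str
    training_data_sources_documented: bool   # provenance of training data disclosed
    explainability_evidence: bool            # vendor can explain how outputs are produced
    bias_testing_performed: bool             # evidence of bias detection and mitigation
    governance_attestation_on_file: bool     # contractual attestation obligations met
    last_performance_review: date            # ongoing monitoring cadence
    open_findings: list[str] = field(default_factory=list)

    def requires_escalation(self) -> bool:
        # Escalate if any core control is missing or findings remain open
        controls = (
            self.training_data_sources_documented,
            self.explainability_evidence,
            self.bias_testing_performed,
            self.governance_attestation_on_file,
        )
        return not all(controls) or bool(self.open_findings)

# Example usage with hypothetical values
assessment = VendorAIAssessment(
    vendor_name="ExampleVendor",
    model_use_case="invoice classification",
    training_data_sources_documented=True,
    explainability_evidence=True,
    bias_testing_performed=False,
    governance_attestation_on_file=True,
    last_performance_review=date(2024, 6, 30),
)
print(assessment.requires_escalation())  # True: bias testing evidence is missing
```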
Regulatory Complexity Is Real—and It’s Global
AI regulations are becoming increasingly complex, and the regulatory landscape varies significantly across jurisdictions. Organizations cannot rely on a one-size-fits-all approach. In Europe, the United States, and other markets, regulations differ on transparency, human oversight, documentation, accountability, and reporting obligations.
To navigate this complexity, organizations should establish cross-functional AI governance bodies. These groups should include representatives from risk, compliance, legal, data science, and business units, ensuring that emerging regulatory requirements are monitored and integrated into operational practices. Proactive governance allows organizations to implement controls and policies before compliance gaps appear, reducing the risk of enforcement actions, reputational harm, or operational disruption.
Data Quality, Transparency, and Auditability Are Non-Negotiable
Data is the foundation of AI, and poor-quality data can lead to inaccurate, biased, or unsafe outcomes. Governance frameworks must prioritize data quality, transparency, and auditability. Organizations should understand where data originates, how it is processed, how models are trained, and how outputs are validated.
For third-party AI, this extends to contractual requirements for data provenance, model documentation, and evidence of controls. Audit trails are essential for internal review, regulatory inspection, and stakeholder assurance. Organizations that maintain rigorous documentation and transparent practices reduce operational risk, improve decision-making, and strengthen trust in AI systems.
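As a simple illustration of what auditability can look like in practice, the sketch below records each AI-assisted decision with a timestamp, model version, and a hash of the inputs, so a decision can be traced later without storing raw data. The record structure and file format are assumptions for illustration only, not a regulatory template.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_decision(model_id: str, model_version: str, inputs: dict, output: str,
                    audit_log_path: str = "ai_audit_log.jsonl") -> dict:
    """Append an audit record for a single AI-assisted decision (illustrative)."""
    payload = json.dumps(inputs, sort_keys=True)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hashing the inputs preserves traceability without retaining raw data
        "input_hash": hashlib.sha256(payload.encode()).hexdigest(),
        "output": output,
    }
    with open(audit_log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example usage with hypothetical values
log_ai_decision("credit_scoring", "v2.3", {"applicant_id": 123, "score_inputs": [0.4, 0.7]}, "approved")
```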
Culture, Accountability, and Continuous Monitoring
Effective AI governance is not only about processes—it is also about culture. Clear ownership of AI risks, well-defined decision-making roles, and shared accountability are critical. Organizations must continuously monitor AI performance, detect drift, and refine controls as systems evolve.
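One lightweight way to watch for drift is to compare the distribution of current model scores against an established baseline. The sketch below uses the population stability index (PSI) for that comparison; the roughly 0.2 review threshold noted in the comments is a common rule of thumb, not a universal standard.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Rough drift signal: compare current score distribution to a baseline.
    Values above ~0.2 are often flagged for review (rule of thumb, not a standard)."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor small values to avoid division by zero or log of zero
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Example: last quarter's scores vs. this week's (synthetic data, shifted distribution)
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5000)
current_scores = rng.beta(2.5, 4, size=1000)
print(round(population_stability_index(baseline_scores, current_scores), 3))
```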
Governance frameworks should encourage feedback, learning, and adaptation. Teams need the authority and support to escalate issues, refine policies, and implement new controls in response to emerging risks. Organizations that embed accountability and continuous improvement into their culture build resilience, ensuring that AI risk management remains effective even as technology and business environments change.
Looking Ahead: What This Means for AI Risk Management
The AI risk landscape is evolving rapidly, requiring governance frameworks that are adaptable, proactive, and rigorous. Success depends on implementing phased governance, treating third-party AI risk with the same diligence as internal systems, staying ahead of regulatory developments, maintaining transparency in data and modeling practices, and fostering a culture of accountability and continuous oversight.
Organizations that integrate these principles into their third-party risk management (TPRM) programs will be well-positioned to manage AI responsibly while maximizing the benefits of innovation. Just as technology continues to accelerate, risk management practices must evolve in parallel, ensuring that AI adoption is both powerful and secure.
Forward-thinking organizations will continue to refine their AI governance frameworks, continuously strengthening oversight, improving data and model transparency, and embedding a culture of accountability. By doing so, they can confidently leverage AI technology to drive growth, efficiency, and innovation while mitigating risk effectively.
Ready to accelerate AI in your TPRM program?
Watch the on-demand webinar with Aravo and PwC, “Manage AI Risk: Understand the Importance of Internal AI Governance and Assessing Third-Party Use of AI,” to learn more about a phased approach for risk professionals to advance AI initiatives and create guidelines for managing AI risks.