Managing operational risk in the current era of enforcement, shareholder suits and explosive class-action activity poses huge risks if you fail, and presents game-changing opportunities if you choose to embrace it. Over the past few years, organizations have focused a lot of time, energy, and resources on designing, implementing and improving governance, risk and compliance programs to address operational risks. Now executives are appropriately asking “Is all of this work really working? Are we delivering outcomes that really matter?”
While the art, science and practice of program evaluation are still in their infancy, there are several sound practices that organizations of all sizes can use to get answers to these questions. As we approach program evaluation, remember that managing governance, risk and compliance is fundamentally similar to, not fundamentally different from, other enterprise processes. Therefore, we can use tried-and-true techniques to evaluate our approach.
That said, what should we evaluate? What are the goals of the evaluation? How should we do it? What measures must we keep to provide the information for a meaningful evaluation? Are organizations doing an effective job of evaluation today?
What to Evaluate?
Generally speaking, there are two types of evaluations to undertake: “effectiveness evaluation” and “performance evaluation.” The former helps an organization meet minimum requirements and receive credit for putting in place a program that is logically designed using sound practices. The latter helps an organization understand if the program is truly delivering business benefits and identifies where investments can be optimized.
In the world of compliance and internal control, “effectiveness” is a term of art with a specific meaning. Although legal compliance (including issues associated with preventing and detecting fraud) represents a subset of the issues typically included in operational risk, it is important that organizations use this common denominator when evaluating the GRC program, for it is this definition that will be used by enforcement entities when (not if) things go awry.
It is important that we accept this definition, and not attempt to expand it. Doing so only invites regulatory uncertainty and confusion. And, most importantly, redefining “program effectiveness” is unnecessary as most will find more value in using “program performance” as a more powerful concept.
Performance brings into view the totality of the program and determines if it is delivering real business value. This concept certainly includes “effectiveness,” as a solid program must meet the minimum legal requirements. However, as most executives know, performance helps an organization dig in to the issues that matter most and answer, “Is our program delivering business value? Where should we focus our time and resources to make it better?”
Taking a Step Back
Take a step back and consider the goals of organizational performance. At the highest level, all organizations are in business to achieve objectives while staying within specific conduct boundaries.
The governance, risk and compliance approach fits into this picture by providing a capability to identify the boundaries and obstacles and establishing a system to let management know when it is getting close to (or crossing) a boundary or approaching an obstacle. As issues are encountered and addressed, management can continuously improve the program to reduce the likelihood that prior issues resurface, or new issues arise unexpectedly.
“Effectiveness” looks at whether the program is logically designed to address all mandated and voluntary requirements (design effectiveness), and whether the program is actually operating as designed (operating effectiveness). In this sense, the evaluation helps to determine if the program is delivering required legal and regulatory outcomes and appropriately reflecting the organization’s voluntary promises regarding its approach to governance, risk and compliance. This is the evaluation contemplated by the U.S. Sentencing Guidelines and is a critical process to undertake.
Today, though, shareholders and other stakeholders are demanding more. At a practical level, neither design nor operating effectiveness will help management and the board judge performance or allocate scarce capital. Beyond design and operating effectiveness, there is a need and demand for Total Program Performance.
Yet, it is clear from a preliminary review of responses to the OCEG Proving Value Study that most entities have not yet mastered the effectiveness evaluation phase, and virtually none are undertaking the steps necessary to ensure high performance levels.
Total Program Performance
Total Program Performance looks not only at the effectiveness of the program, but also its efficiency, responsiveness and the degree to which it delivers business outcomes that go beyond legal and regulatory requirements: outcomes that really matter to stakeholders. These dimensions are similar to the classic performance triangle of quality, cost and speed.
There are numerous benefits and challenges to measuring the performance of a program. A well-known maxim is, “what gets measured gets done... what gets rewarded gets repeated.” The governance, risk and compliance capability and approach are no different.
Ideally, performance measurement will help an organization:
• demonstrate that the program meets minimum legal requirements (effectiveness),
• demonstrate how program results support objectives and create or preserve value,
• highlight what works and what doesn’t (improvement opportunities),
• justify capital allocation,
• demonstrate accountability,
• motivate and provide tangible feedback to employees, and
• enrich communications with stakeholders.
Measuring Program Performance
The measurement planning process defines the overall measurement strategy, approach, required resources and information. These activities are conducted periodically to ensure that what you are measuring remains salient to both the program and its role in the organization.
1. Align program objectives to enterprise objectives.
2. Define indicators and targets to measure performance.
Once you understand what the program is trying to accomplish and how it relates to enterprise performance, define indicators that will help you evaluate performance and that can be linked or correlated with the indicators and targets used to measure the business objectives.
Once indicators are defined, management should identify targets that the program intends to deliver. Prioritize the targets based upon their degree of alignment to the business objectives. For example, if financial objectives carry the greatest weight within your organization, then attempt to set your most significant program target in this area. In this way, the other valuable contributions of your program will not be as readily discounted; they will be seen as enhancing the value of your program beyond its threshold “high-profile” contribution.
3. Measure indicators.
Once indicators are in place, management should establish mechanisms to collect the appropriate data and monitor performance. Be on guard against those who can make numbers mean just about anything; management will also be meticulous about whether the processes used to collect and generate the program measures are valid. Management will be on the lookout to see whether you use reliable data sources, repeatable approaches and consistent aggregation/calculation methods that will allow for year-over-year analysis.
Naturally, people will want to say that data quality is paramount. However, the reality is that if you are using the same data sources that are used for measuring performance of the business objectives, then, even if the quality of the data in those sources is poor, it is still consistent and comparable (equally bad, so to speak).
4. Analyze indicators.
The significance of an indicator lies in the ability to report period-over-period to show directional performance. This cannot be achieved unless the approach for gathering the information for the indicator is repeatable. Repeatability is a function of how often the data will be gathered. If you intend to report an indicator monthly, then the approach must be geared to collecting the same data at that same frequency. In a dynamic business environment, identifying aggregation and calculation methods that can be applied across an enterprise presents a significant challenge. So, calculation methods have to be normalized in the same manner, or in a manner consistent with the way in which business performance measures are normalized.
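To make the normalization and period-over-period ideas above concrete, here is a minimal sketch in Python. All measure names and figures are hypothetical illustrations, not data from any real program; the point is only that a repeatable calculation method (here, incidents per 1,000 employees) keeps periods comparable even as headcount changes.

```python
# A minimal sketch of period-over-period indicator reporting.
# The measure (incidents per 1,000 employees) and all figures
# are hypothetical, chosen only to illustrate the mechanics.

def normalize(raw_count, headcount, per=1000):
    """Normalize a raw incident count to a rate per `per` employees,
    so periods with different headcounts remain comparable."""
    return raw_count / headcount * per

def period_over_period(series):
    """Return the directional change between consecutive periods."""
    return [round(curr - prev, 2) for prev, curr in zip(series, series[1:])]

# Hypothetical monthly data: (raw incident count, headcount)
monthly = [(12, 4000), (15, 4200), (9, 4100)]
rates = [round(normalize(count, heads), 2) for count, heads in monthly]

print(rates)                      # normalized monthly rates
print(period_over_period(rates))  # directional performance, month over month
```

Because both the collection frequency (monthly) and the calculation (per-1,000 normalization) are fixed, the resulting deltas are meaningful period over period; changing either mid-stream would break the year-over-year comparison the text calls for.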
Measurement Presents Challenges
To effectively measure the program, you will have to overcome a number of challenges associated with performance measurement, including:
• Unintended consequences.
These can occur when inappropriate or “perverse” incentives or measures are put in place. In one professional services firm, contract compliance was historically measured in the first quarter of each year. When the firm switched to continuous monitoring of contract compliance, it found that contracts closed in the first quarter were five times as likely to comply with standard terms and conditions as contracts closed in the other three quarters. Knowing that the first quarter was all that really mattered had led many to focus only on the first quarter when it came to contract compliance.
• Perception versus fact.
Several program outcomes require measuring the perceptions of stakeholders, typically via surveys and ethnographic research. These tools do not necessarily indicate fact. For instance, a survey may ask an employee if he or she has observed misconduct, and the employee may not have the appropriate knowledge to know if something is actually misconduct. Nonetheless, surveys do provide an adequate proxy for information. In some cases, the perception is the “fact” that management is looking to measure. For example, if employees perceive there is some type of misconduct going on in the organization, the perception exists and must be addressed in some manner, even if the underlying assumption is incorrect.
• Long-term results.
In some cases, the outcome of a program may not be realized for many years, which can make it difficult to obtain measurement data. For example, it may take several years to actually see that the implementation of a certain initiative (e.g., training program on fraud prevention) has helped to prevent, reduce or detect incidents of fraud. In some cases, this can be addressed by identifying meaningful output-oriented milestones that lead to achieving the long-term outcome goal (e.g., keeping track of training data that will help with the long-term goal of reducing fraud in the workplace).
To address this issue, a program should define the specific short- and medium-term steps or milestones to accomplish the long-term goal. A road map can identify these interim goals, suggest how they will be measured, and establish a schedule to assess their impact on the long-term goal. These steps must be meaningful to the program, measurable and linked to the desired outcome.
• Prevention and deterrence.
By definition, a key outcome of the program is the deterrence or prevention of negative events. It is very difficult to prove a negative. Deterrence measurement requires consideration of what would happen in the absence of the program. It is often difficult to isolate the impact of the individual program element on any behavior that may be affected by multiple other factors.
For areas where non-compliance does not threaten physical, environmental or other significant harm, a legitimate long-term target may fall short of 100 percent compliance. In these cases, short-term targets that demonstrate forward progress toward the acceptable long-range goal may make sense.
For areas where failure to prevent a negative outcome would be catastrophic (including programs to prevent life-threatening incidents), traditional outcome measurement might lead to an “all-or-nothing” goal. As long as the negative outcome is prevented, the program might be considered successful, regardless of the costs incurred in prevention or any close calls experienced that could have led to a catastrophic failure. This can be a dangerous and costly practice.
More appropriately, proxy measures can be used to determine how well the deterrence process is functioning. These proxy measures should be closely tied to the outcome, and the program should be able to demonstrate, such as through the use of modeling and/or factor and correlation analysis, how the proxies tie to the eventual outcome. Because failure to prevent a negative outcome is catastrophic, it may be necessary to have a number of proxy measures to help ensure that sufficient safeguards are in place. Failure in one of the proxy measures would not lead, in itself, to catastrophic failure of the program as a whole; however, failure in any one of the safeguards would be indicative of the risk of an overall failure.
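The correlation analysis mentioned above can be sketched briefly. This hypothetical example checks how well one proxy (unresolved near-miss reports) tracks the outcome it stands in for (recordable incidents) across historical periods; the measure names and data are illustrative assumptions, not drawn from any real program.

```python
# A minimal sketch of tying a proxy measure to an eventual outcome
# via correlation analysis. Data and measure names are hypothetical.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical historical periods: proxy = unresolved near-miss reports,
# outcome = recordable incidents in the following period.
near_misses = [4, 7, 2, 9, 5]
incidents   = [1, 2, 0, 3, 1]

r = pearson(near_misses, incidents)
print(round(r, 2))  # a value near 1 supports using the proxy
```

A strong historical correlation lends credibility to the proxy; a weak one signals that the proxy may not reflect the safeguard it is meant to measure. As the text notes, several such proxies should be tracked in parallel when the underlying outcome would be catastrophic.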
• Multiple contributors.
Often several business processes and capabilities contribute to achieving the same goal. The contribution of any one program may be difficult to measure. One approach to this situation is to develop broad, yet measurable, outcome goals for the collection of programs, while also having program-specific performance goals.
One example of this is culture. Ideally, the program will help to develop an environment of trust, accountability and integrity. This, in turn will contribute to talent attraction, retention and satisfaction.
That said, it is difficult to prove that the program is the only contributor to those outcomes. Nevertheless, management should collaborate to better understand how the full complement of processes and programs (human resource processes, evaluation processes, compliance and ethics processes, etc.) work together to achieve desired outcomes, and, if appropriate, assign some value to the contribution of the program.
• Inconsistent or incompatible information.
Data may be inconsistent or incompatible across the enterprise, and apples are not always compared to apples. For instance, the methodology used to evaluate information privacy risks may be completely different from the methodology used for employment compliance. This is especially true when analyzing information from more than one organization. Extra care should be given to normalizing data so that accurate analysis can be conducted.
OCEG has begun the work of identifying useful approaches to metric analysis by releasing the OCEG Measurement and Metrics Guide (OMG) and undertaking a benchmarking study of more than 150 entities entitled Proving the Value of GRC: Measures and Metrics. Both the Guide and the Study are available at www.oceg.org.