
Vendor Model Validation

Many of the models we validate on behalf of our clients are developed and maintained by third-party vendors. These validations present a number of complexities that are less commonly encountered when validating “home-grown” models. These often include:

  1. Inability to interview the model developer
  2. Inability to review the model code
  3. Inadequate documentation
  4. Lack of developmental evidence and data sets
  5. Lack of transparency into the impact of custom settings

Notwithstanding these challenges, the OCC’s Supervisory Guidance on Model Risk Management (OCC 2011-12)[1] specifies that, “Vendor products should nevertheless be incorporated into a bank’s broader model risk management framework following the same principles as applied to in-house models, although the process may be somewhat modified.”

The extent of these modifications depends on the complexity of the model and the cooperation afforded by the model’s vendor. We have found the following general principles and practices to be useful.

Model Validation for Vendor Models

Vendor Documentation is Not a Substitute for Model Documentation

Documentation provided by model vendors typically includes user guides and other materials designed to help users navigate applications and make sense of outputs. These documents are written for a diverse group of model users and are not designed to identify and address particular model capabilities specific to the purpose and portfolio of an individual bank. A bank’s model documentation package should delve into its specific implementation of the model, as well as the following:

  • Discussion of the model’s purpose and specific application, including business and functional requirements achieved by the model
  • Discussion of model theory and approach, including algorithms, calculations, formulas, functions and programming
  • Description of the model’s structure
  • Identification of model limitations and weaknesses
  • Comprehensive list of model inputs and assumptions, including their sources
  • Comprehensive list of outputs and reports and how they are used, including downstream systems that rely on them
  • Description of testing (benchmarking and back-testing)

Because documentation provided by the vendor is likely to include few, if any, of these items, it falls to the model owner (at the bank) to generate this documentation. While some of these items (specific algorithms, calculations, formulas, and programming, for example) are likely to be deemed proprietary and will not be disclosed by the vendor, most of these components are obtainable and should be requested and documented.

Model documentation should also clearly lay out all model settings (e.g., knobs) and justification for the use of (or departure from) vendor default settings.
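
As a rough illustration, one way a validation team might flag departures from vendor defaults so that each can be documented and justified is sketched below in Python. The setting names and values are hypothetical, not any actual vendor's configuration.

    # Minimal sketch: flag departures from vendor default settings so each
    # one can be documented and justified. All setting names and values are
    # hypothetical illustrations, not an actual vendor's configuration.

    VENDOR_DEFAULTS = {
        "prepay_dial": 1.00,    # multiplier on baseline prepayment speeds
        "default_dial": 1.00,   # multiplier on baseline default rates
        "rate_shock_bps": 100,  # parallel rate shock used in sensitivity runs
    }

    bank_settings = {
        "prepay_dial": 1.15,    # tuned to the bank's observed portfolio speeds
        "default_dial": 1.00,
        "rate_shock_bps": 200,
    }

    def departures(defaults, configured):
        """Return settings where the bank departs from the vendor default."""
        return {
            name: (defaults[name], configured[name])
            for name in defaults
            if configured.get(name, defaults[name]) != defaults[name]
        }

    for name, (default, actual) in departures(VENDOR_DEFAULTS, bank_settings).items():
        print(f"{name}: default={default}, configured={actual} -> justify in model docs")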

Model Validation Testing Results Should Be Requested of the Vendor

OCC 2011-12 states that “Banks should expect vendors to conduct ongoing performance monitoring and outcomes analysis, with disclosure to their clients, and to make appropriate modifications and updates over time.” Many vendors publish the results of their own internal testing of the model. For example, a prepayment model vendor is likely to include back-testing results of the model’s forecasts for certain loan cohorts against actual, observed prepayments. An automated valuation model (AVM) vendor might publish the results of testing comparing the property values it generates against sales data. If a model’s vendor does not publish this information, model validators should request it and document the response in the model validation report. Where available, this information should be obtained and incorporated into the model validation process, along with a discussion of its applicability to data the bank is modeling. Model validators should attempt to replicate the results of these studies, where feasible, and use them to enhance their own independent benchmarking and back-testing activities.
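
As a rough illustration, a validator replicating a vendor's published prepayment back-test might compute cohort-level forecast errors along the following lines; the cohort labels and speeds are hypothetical placeholders.

    # Minimal sketch of replicating a vendor's published back-test: compare
    # the model's forecast prepayment speeds (CPR) against observed speeds
    # by cohort. The cohorts and numbers are hypothetical, for illustration.

    cohorts = {
        # cohort: (forecast CPR %, observed CPR %)
        "30yr fixed, 2012 vintage": (12.0, 13.5),
        "30yr fixed, 2013 vintage": (9.0, 8.2),
        "15yr fixed, 2012 vintage": (15.0, 14.1),
    }

    for name, (forecast, observed) in cohorts.items():
        error = forecast - observed
        pct = 100.0 * error / observed
        print(f"{name}: forecast {forecast:.1f} vs observed {observed:.1f} "
              f"CPR ({pct:+.1f}% error)")

    # A validator would compare these errors with the vendor's published
    # figures and with tolerances set in the bank's validation test plan.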

Developmental Evidence Should Be Requested of the Vendor

OCC 2011-12 directs banks to “require the vendor to provide developmental evidence explaining the product components, design, and intended use.” This should be incorporated into the bank’s model documentation. Where feasible, model validators should also ask model vendors to provide information about data sets that were used to develop and test the model.

Contingency Plans Should Be Maintained

OCC 2011-12 cites the importance of a bank’s having “as much knowledge in-house as possible, in case the vendor or the bank terminates the contract for any reason, or if the vendor is no longer in business. Banks should have contingency plans for instances when the vendor model is no longer available or cannot be supported by the vendor.” For simple applications whose inner workings are well understood and replicable, a contingency plan may be as simple as a replica built in Microsoft Excel. This requirement can pose a significant challenge, however, for banks that purchase off-the-shelf asset-liability and market risk models and do not have the in-house expertise to quickly and adequately replicate these models’ complex computations. Situations such as this argue for the implementation of reliable challenger models, which not only assist in meeting benchmarking requirements but can also function as a contingency backup.

Consult the Model Risk Management Group When Procuring Any Application That Might Be Classified as a “Model”

In a perfect world, model validation considerations would be contemplated as part of the procurement process. An agreement to provide developmental evidence, testing results, and cooperation with future model validation efforts would ideally figure into the negotiations before the purchase of any application is finalized. Unfortunately, our experience has shown that banks often acquire what they think of as a simple third-party application, only to be informed after the fact, by either a regulator or the model risk management group, that they have in fact purchased a model requiring validation. A model vendor, particularly one not inclined to think of its product as a “model,” may be less responsive to requests for development and testing data after the sale if those items were not made a condition of the sale. It is, therefore, prudent for procurement departments to maintain open lines of communication with model risk management groups so that the right questions can be asked and requirements established prior to application acquisition.


[1] See also: Federal Reserve Board of Governors Guidance on Model Risk Management (SR 11-7)


Model Validation: Is This Spreadsheet a Model?

As model validators, we frequently find ourselves in the middle of debates between spreadsheet owners and enterprise risk managers over the question of whether a particular computing tool rises to the level of a “model.” To the uninitiated, the semantic question, “Is this spreadsheet a model?” may appear to be largely academic and inconsequential. But its ramifications are significant, and getting the answer right is of critical importance to model owners, to enterprise risk managers, and to regulators.

Stakeholders of Model Validation

In the most important respects, the incentives of these stakeholder groups are aligned. Everybody has an interest in knowing that the spreadsheet in question is functioning as it should and producing accurate and meaningful outputs. Appropriate steps should be taken to ensure that every computing tool does this, regardless of whether it is ultimately deemed a model. But classifying something as a model carries with it important consequences related to cost and productivity, as well as overall model risk management.

It is here that incentives begin to diverge. Owners and users of spreadsheets, in particular, are generally inclined to classify them as simple applications or end-user computing (EUC) tools whose reliability can (and ought to) be ascertained using testing measures that do not rise to the level of the formal model validation procedures required by regulators.[1] These formal procedures can be both expensive for the institution and onerous for the model owner. Models require meticulous documentation of their approach, economic and financial theory, and code. Painstaking statistical analysis is frequently required to generate the necessary developmental evidence, and further cost is then incurred to validate all of it.

Enterprise risk managers and regulators, who do not necessarily feel these added costs and burdens, may be inclined to err on the side of classifying spreadsheets as models “just to be on the safe side.” But incurring unnecessary costs is not a prudent course of action for a financial institution (or any institution). And producing more model validation reports than necessary can have other unintended, negative consequences. Model validations pull model owners away from their everyday work, adversely affecting productivity and, sometimes, quality of work. Virtually every model validation report identifies issues that must be reviewed and addressed by management. Too many unnecessary reports containing comparatively unimportant findings can bury enterprise risk managers and distract them from the most urgent findings.

Definition of a Model

So what, then, are the most important considerations in determining which spreadsheets are in fact models that should be subject to formal validation procedures? OCC and FRB guidance on model risk management defines a model as follows:[2]

A quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.

The same guidance refers to models as having three components:

  1. An information input component, which delivers assumptions and data to the model
  2. A processing component, which transforms inputs into estimates
  3. A reporting component, which translates the estimates into useful business information
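
A minimal sketch of this three-component decomposition, with hypothetical assumptions and arithmetic standing in for a real model:

    # Minimal sketch of the three components described above. The assumption
    # names and arithmetic are hypothetical placeholders, not a real model.

    def input_component():
        """Information input: deliver assumptions and data to the model."""
        return {"balance": 1_000_000.0, "annual_growth": 0.03, "horizon_years": 1}

    def processing_component(inputs):
        """Processing: transform inputs into a quantitative estimate."""
        return inputs["balance"] * (1 + inputs["annual_growth"]) ** inputs["horizon_years"]

    def reporting_component(estimate):
        """Reporting: translate the estimate into useful business information."""
        return f"Projected balance in one year: ${estimate:,.0f}"

    print(reporting_component(processing_component(input_component())))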

This definition and guidance leave managers with some latitude. Financial institutions employ many applications that apply mathematical concepts to defined inputs in order to generate outputs. But the existence of inputs, outputs, and mathematical concepts alone does not necessarily justify classifying a spreadsheet as a model.

Note that the regulatory definition of a model includes the concept of quantitative estimates. The term quantitative estimate implies a level of uncertainty about the outputs. If an application is generating outputs about which there is little or no uncertainty, then one can argue the output is not a quantitative estimate but, rather, simply a defined arithmetic result. While quantitative estimates typically result from arithmetic processes, not every defined arithmetic result is a quantitative estimate.

For example, a spreadsheet that sums all the known balances of ten bank accounts as of a given date, even if it is supplied by automated feeds and performs the summations in a completely lights-out process, likely would not rise to the level of a model requiring validation because it is performing a simple arithmetic function; it is not generating a quantitative estimate.[3]

In contrast, a spreadsheet that projects what the sum of the same ten bank balances will be as of a given future date (based on assumptions about interest rates, expected deposits, and decay rates, for example) generates quantitative estimates and would, therefore, qualify as a model requiring validation. Management and regulators would want to have comfort that the assumptions used by this spreadsheet model are reasonable and that they are being applied and computed appropriately.
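
The contrast between the two spreadsheets can be made concrete with a short sketch; the balances, rate, and decay assumptions below are hypothetical. The first function is pure arithmetic; the second layers economic assumptions on top and yields a quantitative estimate.

    # Minimal sketch of the contrast drawn above, with hypothetical numbers.

    balances = [120.0, 85.5, 240.0, 10.2, 55.0, 78.9, 310.4, 42.1, 95.0, 63.3]

    def current_total(bals):
        """Defined arithmetic result: one correct answer, no uncertainty."""
        return sum(bals)

    def projected_total(bals, annual_rate=0.02, annual_decay=0.05, years=1.0):
        """Quantitative estimate: depends on rate and decay assumptions."""
        growth = (1 + annual_rate) ** years
        retention = (1 - annual_decay) ** years
        return sum(b * growth * retention for b in bals)

    print(f"Sum today (non-model): {current_total(balances):.1f}")
    print(f"Projected sum in 1 year (model): {projected_total(balances):.1f}")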

Is This Spreadsheet a Model?

We have found the following questions to be particularly enlightening in helping our clients determine whether a spreadsheet should be classified as 1) a model that transforms inputs into quantitative estimates or 2) a non-model spreadsheet that generates defined arithmetic results.

Question 1: Does the Spreadsheet Produce a Demonstrably “Right” Answer?

If spreadsheets designed by ten different people can reasonably be expected to produce precisely the same result (because there is only one generally accepted way of calculating it), then the result probably does not qualify as a quantitative estimate and the spreadsheet probably should not be classified as a model. A related question is whether benchmarking yields results that are merely comparable, as opposed to exactly the same.

Example 1 (Non-Model): Mortgage Amortization Calculator: Ten different applications would be expected to transform the same loan amount, interest rate, and term information into precisely the same amortization table. A spreadsheet that differed from this expectation would be considered “wrong.” We would not consider this output to be a quantitative estimate and would be inclined to classify such a spreadsheet as something other than a model.
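
A minimal sketch of such a calculator: given the same loan amount, rate, and term, any correct implementation produces exactly this table, so there is nothing to estimate.

    # Minimal sketch of a level-payment amortization schedule. There is one
    # generally accepted answer; any correct implementation reproduces it.

    def amortization_schedule(principal, annual_rate, months):
        r = annual_rate / 12.0
        payment = principal * r / (1 - (1 + r) ** -months)
        balance = principal
        rows = []
        for month in range(1, months + 1):
            interest = balance * r
            principal_paid = payment - interest
            balance -= principal_paid
            rows.append((month, payment, interest, principal_paid, max(balance, 0.0)))
        return rows

    # Print the first three rows of a 30-year, 4.5% loan for $200,000.
    for month, pmt, interest, prin, bal in amortization_schedule(200_000, 0.045, 360)[:3]:
        print(f"month {month}: payment={pmt:.2f} interest={interest:.2f} "
              f"principal={prin:.2f} balance={bal:.2f}")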

Example 2 (Model): Spreadsheet projecting the expected UPB of a mortgage portfolio in 12 months: Such a spreadsheet would likely need to apply and incorporate prepayment and default assumptions. Different spreadsheets could compute and apply these assumptions differently, without any one of them necessarily being recognized as “wrong.” We would consider the resulting UPB projections to be quantitative estimates and would be likely to classify such a spreadsheet as a model.
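
A minimal sketch of such a projection, assuming hypothetical annual prepayment (CPR) and default (CDR) speeds and, for brevity, ignoring scheduled amortization:

    # Minimal sketch of the model example: project a portfolio's UPB 12
    # months out under prepayment (CPR) and default (CDR) assumptions. The
    # speeds are hypothetical; another builder could reasonably pick others.

    def project_upb(upb, annual_cpr=0.10, annual_cdr=0.01, months=12):
        smm = 1 - (1 - annual_cpr) ** (1 / 12)   # single monthly mortality
        mdr = 1 - (1 - annual_cdr) ** (1 / 12)   # monthly default rate
        for _ in range(months):
            upb *= (1 - smm) * (1 - mdr)         # ignores scheduled amortization
        return upb

    print(f"Projected UPB: {project_upb(500_000_000):,.0f}")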

Note that the spreadsheets in both examples tell their users what a loan balance will be in the future. But only the second example layers economic assumptions on top of its basic arithmetic calculations. Economic assumptions can be subjected to verification after the fact, which relates to our second question:

Question 2: Can the Spreadsheet’s Output Be Back-Tested?

Another way of stating this question would be, “Is back-testing required to gauge the accuracy of the spreadsheet’s outputs?” This is a fairly unmistakable indicator of a forward-looking quantitative estimate. A spreadsheet that generates forward-looking estimates is almost certainly a model and should be subjected to formal model validation.

Back-testing would not be of any particular value in our first (non-model) example, above, as the spreadsheet is simply calculating a schedule. In our second (model) example, however, back-testing would be an invaluable tool for judging the reliability of the prepayment and default assumptions driving the balance projection.
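
A back-test of the projection above might be as simple as the following sketch; both figures are hypothetical placeholders.

    # Minimal sketch of back-testing the projection: once the 12 months have
    # elapsed, compare the projected UPB with the observed UPB.

    projected_upb = 447_000_000.0   # the spreadsheet's 12-month-ahead estimate
    observed_upb = 438_500_000.0    # actual UPB realized 12 months later

    error = projected_upb - observed_upb
    pct_error = 100.0 * error / observed_upb
    print(f"Back-test error: {error:,.0f} ({pct_error:+.2f}%)")
    # Persistent one-sided errors would suggest the prepayment or default
    # assumptions need recalibration.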

Question 3: Is the Spreadsheet Simply Applying a Defined Set of Business Rules?

Spreadsheets are sometimes used to automate the application of defined business rules in order to arrive at a prescribed course of action. This question is a corollary to the first question about whether the spreadsheet produces output that is, by definition, “correct.”

Examples of business-rule calculators are spreadsheets that determine a borrower’s eligibility for a particular loan product or loss mitigation program. Such spreadsheets are also used to determine how much of a haircut to apply to various collateral types based on defined rules.
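
A minimal sketch of such a haircut calculator, with a hypothetical haircut schedule; given the rules, its output is “right” by definition.

    # Minimal sketch of a business-rule calculator: apply a defined haircut
    # schedule to collateral values. The haircut table is hypothetical, but
    # the point stands: given the rules, the output is correct by definition.

    HAIRCUTS = {
        "treasury": 0.02,
        "agency_mbs": 0.05,
        "corporate_bond": 0.10,
        "equity": 0.25,
    }

    def collateral_value(positions):
        """Apply the defined haircut to each position; no estimation involved."""
        return sum(value * (1 - HAIRCUTS[kind]) for kind, value in positions)

    positions = [("treasury", 1_000_000), ("equity", 250_000)]
    print(f"Post-haircut collateral value: {collateral_value(positions):,.0f}")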

These spreadsheets do not generate quantitative estimates and we would not consider them models subject to formal regulatory validation.

Should I Validate This Spreadsheet?

All spreadsheets that perform calculations should be subject to review. Any spreadsheet that produces incorrect or otherwise unreliable outputs should not be used until its errors are corrected. Formal model validation procedures, however, should be reserved for spreadsheets that meet certain criteria. Subjecting non-model spreadsheets to model validation unnecessarily drives up costs and dilutes the findings of bona fide model validations by cluttering enterprise risk management’s radar with an unwieldy number of formal issues requiring tracking and resolution.

Spreadsheets should be classified as models (and validated as such) when they produce forward-looking estimates that can be back-tested. This excludes simple calculators that do not rely on economic assumptions, as well as tools that merely apply business rules whose outputs can be definitively identified in advance as “right” or “wrong.”

We believe that the systematic application of these principles will alleviate much of the tension between spreadsheet owners, enterprise risk managers, and regulators as they work together to identify those spreadsheets that should be subject to formal model validation.


[1] In the United States, most model validations are governed by one of the following sets of guidelines: 1) OCC 2011-12 (institutions regulated by the OCC), 2) SR 11-7 (institutions regulated by the Federal Reserve), and 3) FHFA 2013-07 (Fannie Mae, Freddie Mac, and the Federal Home Loan Banks). These documents have much in common, and the OCC and FRB guidelines are identical to one another.

[2] See footnote 1.

[3] Management would nevertheless want to obtain assurances that such an application was functioning correctly. This, however, can be achieved via means less intrusive than a formal model validation process, such as conventional auditing, SOX reviews, or EUC quality gates.


Reducing the Cost of Model Validation Programs

Across the financial services industry, increased oversight has led to significant increases in expenses related to assessing and monitoring risk. We see over and over that institutions are held to significantly higher regulatory standards but are not given commensurate financial resources. Model validation is an area where banks are incurring significant expenses to meet regulatory and internal requirements. In response to client demand, RiskSpan makes the following recommendations to institutions that are looking to maintain the quality of the model validation process while reducing the associated costs.

Model Governance Policy

The first step is devising a model governance policy that is aligned with regulations and the institution’s approach to risk management. Fundamental to the policy is the identification of the models themselves. Once the models are identified, model owners must be notified that their respective tools or applications are defined as models and, as such, are expected to adhere to the institution’s model governance policy. Model owners will need to fully understand the expectations of the model validation regulatory guidance in order to prepare their business units for successful model validation reviews.

Model Documentation

Once the policy is created and expectations are communicated to model owners, a risk ranking will need to be performed (for example, High, Medium, and Low), which will shape the scope and prioritization of model validation activities. Risk managers within the organization may want to consider different standards for model documentation and the detail of model validation reports based on the risk ranking of the model.

Model documentation is one area with significant cost-saving opportunities. Model validation is less costly when business owners have been given easy-to-follow documentation templates based on model governance policies. Template-building is an up-front activity that guides business units to produce quality documentation. Conversely, the lack of proper model documentation exposes the business to risk and drags on financial resources. When there is limited communication of the model’s capabilities, purpose, and limitations, the model owner’s workload ends up being transferred to the model validation team as the validators attempt to gain a basic understanding of the model. This can be costly and consume valuable resources.

Validation Scheduling

Scheduling of the validation itself should take place after model documentation is complete. In fact, the price of admission into a model validation program should be a robust set of model documents. Risk managers who coordinate the validation must be sensitive to the business cycles of model owners, the availability of validators, and regulator expectations. At the same time, validation activities should not be pushed to the very end of the year.

Validation Test Plans

An additional element that can be developed early on is a validation test plan, which increases transparency (particularly when third-party validators are used) and allows testing to be run more efficiently on a periodic basis. Test plans can be used for years after they are first developed and may be modified to account for market changes that could impact model performance.
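
One way a test plan might be expressed so that it can be re-run each cycle is sketched below; the check names, tolerances, and sample results are all hypothetical illustrations.

    # Minimal sketch of a reusable validation test plan: named checks with
    # tolerances that can be re-run each validation cycle.

    TEST_PLAN = [
        # (test name, metric extractor, tolerance)
        ("back-test MAPE under 5%",        lambda r: r["mape"],          0.05),
        ("benchmark divergence under 10%", lambda r: r["benchmark_gap"], 0.10),
    ]

    results = {"mape": 0.032, "benchmark_gap": 0.12}  # hypothetical cycle results

    for name, metric, tolerance in TEST_PLAN:
        status = "PASS" if metric(results) <= tolerance else "FAIL"
        print(f"{name}: {metric(results):.3f} (tolerance {tolerance}) -> {status}")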

Buy versus Build?

Bank executives are faced with a choice: outsource model validation or maintain an internal staff to perform model validation activities. Depending on the complexity of a model and the technical expertise required to understand it, a bank may not have the specific expertise within an internal model validation department, in which case outsourcing all or part of the validation may be necessary. Alternatively, banks that prefer to keep validation resources “in-house” may consider periodic staffing support from subject matter experts to make it through periods of high validation volume (for many banks, validations cluster in the second half of the year, creating a crunch at year-end).

About the Author

Pat Greene currently supports strategic and tactical initiatives by RiskSpan to enhance a suite of valuation tools that provide pricing, analytics, and risk reporting for multiple asset classes, including mortgages and structured securities. He has delivered technology solutions and provided financial model validation support to multiple RiskSpan clients whose business practices rely on credit models, interest-rate models, prepayment models, income simulation models, counterparty risk models, whole loan valuation models, and bond redemption forecasting models. Pat is an experienced executive who has been responsible for the management of a multi-billion-dollar asset securitization program for a national financial institution. He has experience in the development and implementation of business unit objectives, management of a $4 million operating budget, and the oversight and monitoring of service levels with legal resources, accountants, and other financial institutions that supported an industry-leading asset sales program. He is a skilled manager experienced in the development of business strategy that leads to business process change and technology implementation. Pat is a graduate of the United States Naval Academy and received an M.B.A. from Loyola College in Baltimore, Maryland.


