Feature Selection – Machine Learning Methods

Feature selection in machine learning refers to the process of isolating only those variables (or “features”) in a dataset that are pertinent to the analysis. Failure to do this effectively has many drawbacks, including: 1) unnecessarily complex models with difficult-to-interpret outcomes, 2) longer computing time, and 3) collinearity and overfitting. Effective feature selection eliminates redundant variables and keeps only the best subset of predictors in the model, thus making it possible to represent the data in the simplest way. This post begins by identifying steps that must be taken to prepare datasets for meaningful analysis—and how machine learning can help. We then introduce and discuss some commonly used machine learning techniques for variable selection.

Data Cleansing

Real world data contains a wide range of holes, noise, and inconsistencies. Before doing any statistical analysis, it is crucial to ensure that the data can be meaningfully analyzed. In practice, data cleansing is often the most time-consuming part of data analysis. This upfront investment is necessary, however, because the quality of data has a direct bearing on the reliability of model outputs.

Various machine learning projects require different sorts of data cleansing steps, but in general, when people speak of data cleansing, they are referring to the following specific tasks.

Cleaning Missing Values

Many machine learning techniques do not support data with missing values. To address this, we first need to understand why data are missing. Missing values usually occur simply because no information is provided, but other circumstances can lead to data holes as well. For instance, setting incorrect data types for attributes when data is extracted and integrated from multiple sources can cause data loss.

One way to investigate missing values is to identify patterns for missing data. For example, missing answers for certain questions from female respondents in a survey may indicate that those questions are only asked of male respondents. Another example might involve two loan records that share the same ID. If the second record contains blank values for every attribute except ‘Market Price,’ then the second record is likely simply updating the market price of the first record.

Once the early-stage evaluation of missing data is complete, we can set about determining how to address the problem. The easiest way to handle missing values is simply to ignore the records that contain them. However, this solution is not always practical. If a relatively large portion of the dataset contains missing values, then removing all of those records could leave data that no longer represents the initial population well. In that case, rather than filtering out relevant rows or attributes, a better approach is to impute the missing values with sensible estimates.

A typical imputation method for categorical variables involves replacing the missing values with the most frequent value or with a newly created “unknown” category. For numeric variables, missing values might be replaced with the mean or median. Other methods for dealing with missing values exist as well, e.g., listwise deletion (removing rows with missing data) and multiple imputation (substituting missing values with model-based estimates).
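
As a minimal sketch of these imputation rules, assuming a hypothetical pandas DataFrame with made-up column names and values, the mode (or an explicit “unknown” label) can fill categorical gaps and the median can fill numeric ones:

```python
import pandas as pd

# Hypothetical loan dataset with missing values (column names and values are assumptions).
df = pd.DataFrame({
    "property_type": ["SFR", None, "Condo", "SFR", None],
    "market_price":  [250_000, 310_000, None, 410_000, 275_000],
})

# Categorical: fill with the most frequent value, or with an explicit "unknown" category.
df["property_type"] = df["property_type"].fillna(df["property_type"].mode()[0])
# Alternatively: df["property_type"] = df["property_type"].fillna("unknown")

# Numeric: fill with the median (less sensitive to outliers than the mean).
df["market_price"] = df["market_price"].fillna(df["market_price"].median())

print(df)
```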

Reducing Noise in Data

“Noise” in data refers to erroneous values and outliers. Noise is an unavoidable problem which can be caused by human mistakes in data entry, technical problems, and many other factors. Noisy data adversely influences model performance, so its detection and removal has a key role to play in the data cleaning process.

There are two major noise types in data: class noise and attribute noise. Class noise often occurs in categorical variables and can include: 1) non-standardized class labels, 2) duplicate records mapping to different class labels, and 3) mislabeled records. Attribute noise refers to corrupted values and outliers, such as percentages inappropriately greater than 100% and placeholders (e.g., 999,000).1

There are many ways to deal with noisy data. Certain types of noise can be easily identified by sorting the data—thus isolating text input where numeric input is expected and other placeholders. Other noise can be addressed only using statistical methods. Clustering analysis groups the data by similarity and can help with detecting irrelevant objects and outliers. Data binning reduces the impact of observation errors by combining ‘neighborhood’ data into a small number of bins. Advanced smoothing algorithms, including moving averages and loess, fit the data to regression functions to eliminate the effect of random variation and allow important patterns to stand out.
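
The sketch below illustrates two of these smoothing ideas, equal-width binning and a centered moving average, on a made-up noisy series; the bin count and window size are arbitrary assumptions:

```python
import pandas as pd

# Hypothetical noisy series (values are made up for illustration; 9.9 and 8.7 act as outliers).
s = pd.Series([1.2, 1.3, 9.9, 1.1, 1.4, 1.2, 1.5, 1.3, 8.7, 1.4])

# Equal-width binning: replace each observation with the mean of its bin.
bins = pd.cut(s, bins=3)
binned = s.groupby(bins).transform("mean")

# Moving-average smoothing: a centered rolling mean damps random variation.
smoothed = s.rolling(window=3, center=True, min_periods=1).mean()

print(pd.DataFrame({"raw": s, "binned": binned, "smoothed": smoothed}))
```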

Data Normalization

Data normalization converts numerical values into specific ranges to meet the needs of a model. Performing data normalization makes it possible to aggregate data with different scales. Several algorithms require normalized data. For example, it is necessary to normalize data before feeding it into principal component analysis (PCA) so that all variables have zero mean and unit variance and therefore the same weight. The same applies to support vector machines (SVM), which assume the input data is in the range [0,1] or [-1,1]. Unnormalized data slows model convergence and skews results.

The most common way of normalizing data involves the Z-score. Also known as standard-score normalization, this approach divides the difference between each value and the mean by the standard deviation. Z-score normalization is often used when the min and max are unknown. Another common method is feature scaling, which brings all values into the range [0,1] by dividing the difference between each value and the min by the difference between the max and the min. Other normalization methods include the studentized residual, t-statistics, and the coefficient of variation.
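
Both normalizations reduce to one line each; the sketch below applies them to a made-up feature vector:

```python
import numpy as np

# Hypothetical feature vector (values are made up for illustration).
x = np.array([12.0, 15.0, 20.0, 35.0, 18.0])

# Z-score (standard-score) normalization: (x - mean) / standard deviation.
z = (x - x.mean()) / x.std()

# Feature scaling (min-max): (x - min) / (max - min) maps values into [0, 1].
scaled = (x - x.min()) / (x.max() - x.min())

print(z, scaled, sep="\n")
```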

Feature Selection Methods2

Stepwise Procedures

A stepwise procedure adds or subtracts individual features from a model until the optimal mix is identified. Stepwise procedures take three forms: backward elimination, forward selection, and stepwise regression.

Backward elimination is the simplest method. It fits the model using all available features and then systematically removes features one at a time, beginning with the feature with the highest p-value (provided the p-value exceeds a given threshold, usually 5%). The model is refit after each elimination, and the process repeats until every remaining feature’s p-value falls below the threshold.
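
As a rough illustration of backward elimination, the sketch below (using statsmodels on a made-up dataset; the threshold and variable names are assumptions) repeatedly refits an OLS model and drops the least significant feature:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_elimination(X: pd.DataFrame, y: pd.Series, threshold: float = 0.05) -> list:
    """Drop the feature with the highest p-value until all p-values fall below threshold."""
    features = list(X.columns)
    while features:
        model = sm.OLS(y, sm.add_constant(X[features])).fit()
        pvalues = model.pvalues.drop("const")          # ignore the intercept
        worst = pvalues.idxmax()
        if pvalues[worst] <= threshold:
            break                                      # every remaining feature is significant
        features.remove(worst)                         # refit without the weakest feature
    return features

# Toy example with one informative and one pure-noise feature.
rng = np.random.default_rng(0)
X = pd.DataFrame({"useful": rng.normal(size=200), "noise": rng.normal(size=200)})
y = 2.0 * X["useful"] + rng.normal(size=200)
print(backward_elimination(X, y))                      # typically ['useful']
```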

Forward selection is the opposite of backward elimination. It starts with no variables in the model and then systematically adds features one at a time, beginning with the feature with the lowest p-value (provided the p-value falls below a threshold). The model is refit after each addition, and the process repeats until adding features no longer improves model performance.

Stepwise regression combines backward elimination and forward selection by allowing a feature to be added or dropped at each iteration. Using this method, a newly added variable in an early stage may be removed later, and vice versa.

Criterion-Based Procedures

A variable’s p-value is not the only statistic that can be used for feature selection. Penalized-likelihood criteria, such as the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), are also valuable. Lower AICs and BICs indicate that a model is more likely to be true. They are given as n log(RSS/n) + kp, where RSS is the residual sum of squares (which decreases as model complexity increases), n is the sample size, p is the number of predictors, and k is 2 for AIC and log(n) for BIC. Both criteria penalize larger models as p goes up, and BIC penalizes model complexity more heavily, which explains why BIC tends to favor smaller models in comparison to AIC. Other criteria are 1) adjusted R², which increases only if a new feature improves model performance more than expected, 2) PRESS, which sums the squares of the predicted residuals, and 3) Mallows’ Cp statistic, which estimates the average MSE of prediction.
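
To make the trade-off concrete, here is a minimal sketch (with made-up RSS values) of the n log(RSS/n) + kp formula, comparing a small and a large candidate model:

```python
import numpy as np

def information_criterion(rss: float, n: int, p: int, criterion: str = "AIC") -> float:
    """n*log(RSS/n) + k*p, with k = 2 for AIC and k = log(n) for BIC."""
    k = 2.0 if criterion == "AIC" else np.log(n)
    return n * np.log(rss / n) + k * p

# Hypothetical fits: the larger model reduces RSS only slightly.
n = 500
small_model = information_criterion(rss=120.0, n=n, p=3)
large_model = information_criterion(rss=118.0, n=n, p=10)
print(small_model, large_model)   # the smaller model wins: the RSS gain is not worth 7 extra terms
```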

Lasso and Ridge Regression

Lasso and ridge regression are powerful techniques for dealing with large feature coefficients. Both approaches reduce overfitting by penalizing features with large coefficients while minimizing the difference between predicted values and observations, but they differ in the penalty term they add. Lasso adds a penalty equivalent to the absolute value of the magnitude of the coefficients, which can shrink some coefficients to exactly zero and thereby eliminate those features from the model. Ridge assigns a penalty equivalent to the square of the magnitude of the coefficients. Even though it does not shrink coefficients to zero, it regularizes and constrains them to control variance.
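
A minimal scikit-learn sketch, using a synthetic dataset, shows the difference in behavior: the L1 (lasso) penalty zeroes out irrelevant coefficients while the L2 (ridge) penalty only shrinks them. The column choices and penalty strengths here are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.preprocessing import StandardScaler

# Toy data: only the first two of ten features actually drive the response.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(size=200)

X_std = StandardScaler().fit_transform(X)   # penalized regressions assume comparable scales

lasso = Lasso(alpha=0.1).fit(X_std, y)      # L1 penalty: irrelevant coefficients shrink to zero
ridge = Ridge(alpha=1.0).fit(X_std, y)      # L2 penalty: coefficients shrink but stay non-zero

print("Lasso:", np.round(lasso.coef_, 2))
print("Ridge:", np.round(ridge.coef_, 2))
```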

Lasso and ridge regression models have been widely used in finance since their introduction. A recent example used both these methods in predicting corporate bankruptcy.3 In this study, the authors discovered that these regression methods are optimal as they handle multicollinearity and minimize the numerical instability that may occur due to overfitting.

Dimensionality Reduction

“Dimensionality reduction” is a process of transforming an extraordinarily complex, “high-dimensional” dataset (i.e., one with thousands of variables or more) into a dataset that can tell the story using a significantly smaller number of variables.

The most popular linear technique for dimensionality reduction is principal component analysis (PCA). It converts complex dataset features into a new set of coordinates named principal components (PCs). PCs are created in such a way that each succeeding PC preserves the largest possible variance under the condition that it is uncorrelated with the preceding PCs. Keeping only the first several PCs in the model reduces data dimensionality and eliminates multi-collinearity among features.

PCA has a couple of potential pitfalls: 1) PCA is sensitive to the scale effects of the original variables (data normalization is required before performing PCA), and 2) applying PCA to the data makes it harder to interpret the influence of individual features, since the PCs are no longer the original variables. For these reasons, PCA is not a good choice for feature selection if interpretation of results is important.
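
A short scikit-learn sketch (on synthetic, correlated data) illustrates the standard workflow: standardize first, then keep only enough PCs to explain a chosen share of the variance. The 95% threshold is an assumption for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Toy data: ten correlated features driven by two underlying factors.
rng = np.random.default_rng(0)
factors = rng.normal(size=(300, 2))
X = factors @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(300, 10))

X_std = StandardScaler().fit_transform(X)   # PCA is scale-sensitive, so standardize first

pca = PCA(n_components=0.95)                # keep enough PCs to explain 95% of the variance
scores = pca.fit_transform(X_std)

print("PCs retained:", pca.n_components_)
print("Explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
```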

Dimensionality reduction and specifically PCA have practical applications to fixed income analysis, particularly in explaining term-structure variation in interest rates. Dimensionality reduction has also been applied to portfolio construction and analytics. It is well known that the first eigenvector identified by PCA maximally captures the systematic risk (variation of returns) of a portfolio.4 Quantifying and understanding this risk is essential when balancing a portfolio.


[1] http://sci2s.ugr.es/noisydata
[2] http://www.biostat.jhsph.edu/~iruczins/teaching/jf/ch10.pdf
[3] Pereira, J. M., Basto, M., & da Silva, A. F. (2016). The Logistic Lasso and Ridge Regression in Predicting Corporate Failure. Procedia Economics and Finance, v.39, pp.634-641.
[4] Alexander, C. (2001). Market models: A guide to financial data analysis. John Wiley & Sons.


What is an “S-Curve” and Does it Matter if it Varies by Servicer?

Mortgage analysts refer to graphs plotting prepayment rates against the interest rate incentive for refinancing as “S-curves” because the resulting curve typically (vaguely) resembles an “S.” The curve takes this shape because prepayment rates vary positively with refinance incentive, but not linearly. Very few borrowers refinance without an interest rate incentive for doing so. Consequently, on the left-hand side of the graph, where the refinance incentive is negative or out of the money, prepayment speeds are both low and fairly flat. This is because a borrower with a rate 1.0% lower than market rates is not very much more likely to refinance than a borrower with a rate 1.5% lower. They are both roughly equally unlikely to do so.

As the refinance incentive crosses over into the money (i.e., when prevailing interest rates fall below rates the borrowers are currently paying), the prepayment rate spikes upward, as a significant number of borrowers take advantage of the opportunity to refinance. But this spike is short-lived. Once the refinance incentive gets above 1.0% or so, prepayment rates begin to flatten out again. This reflects a segment of borrowers that do not refinance even when they have an interest rate incentive to do so. Some of these borrowers have credit or other issues preventing them from refinancing. Others are simply disinclined to go through the trouble. In either case, the growing refinance incentive has little impact and the prepayment rate flattens out.

These two bends—moving from non-incentivized borrowers to incentivized borrowers and then from incentivized borrowers to borrowers who can’t or choose not to refinance—are what gives the S-curve its distinctive shape.

Figure 1: S-Curve Example

An S-Curve Example – Servicer Effects

Interestingly, the shape of a deal’s S-curve tends to vary depending on who is servicing the deal. Many things contribute to this difference, including how actively servicers market refinance opportunities. How important is it to be able to evaluate and analyze the S-curves for the servicers specific to a given deal? It depends, but it could be imperative.

In this example, we’ll analyze a subset of the collateral (“Group 4”) supporting a recently issued Fannie Mae deal, FNR 2017-11. This collateral consists of four Fannie multi-issuer pools of recently originated jumbo-conforming loans with a current weighted average coupon (WAC) of 3.575% and a weighted average maturity (WAM) of 348 months. The table below shows the breakout of the top six servicers in these four pools based on the combined balance.

Figure 2: Breakout of Top Six Servicers

Over half (54%) of the Group 4 collateral is serviced by these six servicers. To begin the analysis, we pulled all jumbo-conforming, 30-year loans originated between 2015 and 2017 for the six servicers and bucketed them based on their refi incentive. A longer timeframe is used to ensure that there are sufficient observations at each point. The graph below shows the prepayment rate relative to the refi incentive for each of the servicers as well as the universe.

Figure 3: S-curve by Servicer

For loans that are at the money—i.e., the point at which the S-curve would be expected to begin spiking upward—only those serviced by IMPAC prepay materially faster than the entire cohort. However, as the refi incentive increases, IMPAC, Seneca Mortgage, and New American Funding all experience a sharp pick-up in speeds, while loans serviced by Pingora, Lakeview, and Wells behave comparably to the market.

The last step is to compute the weighted average S-curve for the top six servicers using the current UPB percentages as the weights, shown in Figure 4 below. On the basis of the individual servicer observations, prepays for out-of-the-money loans should mirror the universe, but as loans become more re-financeable, speeds should accelerate faster than the universe. The difference between the six-servicer average and the universe reaches a peak of approximately 4% CPR between 50 bps and 100 bps in the money. This is valuable information for framing expectations for future prepayment rates. Analysts can calibrate prepayment models (or their outputs) to account for observed differences in CPRs that may be attributable to the servicer, rather than loan characteristics.
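
A weighted-average S-curve of this kind could be computed with a few lines of pandas; the sketch below uses made-up servicer buckets, CPRs, and UPB shares rather than the actual deal data:

```python
import pandas as pd

# Hypothetical servicer-level S-curve data: CPR by refi-incentive bucket, plus current UPB shares.
scurves = pd.DataFrame({
    "servicer":         ["A", "A", "B", "B", "C", "C"],
    "incentive_bucket": ["0-50bps", "50-100bps"] * 3,
    "cpr":              [8.0, 14.0, 9.5, 20.0, 7.5, 12.0],
    "upb_share":        [0.50, 0.50, 0.30, 0.30, 0.20, 0.20],
})

# Weighted-average CPR per incentive bucket, using current UPB shares as the weights.
weighted = scurves.groupby("incentive_bucket").apply(
    lambda g: (g["cpr"] * g["upb_share"]).sum() / g["upb_share"].sum()
)
print(weighted)
```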

Figure 4: Weighted Average vs. Universe

This analysis was generated using RiskSpan’s data and analytics platform, RS Edge.


Validating Interest Rate Models

Many model validations—particularly validations of market risk models, ALM models, and mortgage servicing rights valuation models—require validators to evaluate an array of sub-models. These almost always include at least one interest rate model, designed to predict the movement of interest rates.

Validating interest rate models (i.e., short-rate models) can be challenging because many different ways of modeling how interest rates change over time (“interest rate dynamics”) have been created over the years. Each approach has advantages and shortcomings, and it is critical to understand the limitations and advantages of each in order to determine whether the short-rate model being used is appropriate to the task. This can be accomplished via the basic tenets of model validation—evaluation of conceptual soundness, replication, benchmarking, and outcomes analysis. Applying these concepts to interest rate models, however, poses some unique complications.

A Brief Introduction to the Short-Rate Model

In general, a short-rate model solves the short-rate evolution as a stochastic differential equation. Short-rate models can be categorized based on their interest rate dynamics.

A one-factor short-rate model has only one diffusion term. The biggest limitation of one-factor models is that the correlation between two continuously compounded spot rates at two dates is equal to one, which means a shock at one maturity is transmitted uniformly across the entire curve, which is not realistic in the market.

A multi-factor short-rate model, as its name implies, contains more than one diffusion term. Unlike one-factor models, multi-factor models consider the correlation between forward rates, which makes a multi-factor model more realistic and consistent with actual multi-dimension yield curve movements.

Validating Conceptual Soundness

Validating an interest rate model’s conceptual soundness includes reviewing its data inputs, mean-reversion feature, distributions of short rate, and model selection. Reviewing these items sufficiently requires a validator to possess a basic knowledge of stochastic calculus and stochastic differential equations.

Data Inputs

The fundamental data inputs to the interest rate model could be the zero-coupon curve (also known as the term structure of interest rates) or historical spot rates. Let’s take the Hull-White (H-W) one-factor model (H-W: dr_t = k(θ – r_t)dt + σ_t dW_t) as an example. H-W is an affine term structure model, and its analytical tractability is one of its most favorable properties. Analytical tractability is a valuable feature to model validators because it enables calculations to be replicated. We can calibrate the level parameter (θ) and the rate parameter (k) from the input curve. Commonly, the volatility parameter (σ_t) can be calibrated from historical data or swaption volatilities. In addition, analytical formulas are also available for zero-coupon bonds, caps/floors, and European swaptions.

Mean Reversion

Given the nature of mean reversion, both the level parameter and the rate parameter should be positive. Therefore, an appropriate calibration method should be selected accordingly. Note that the common approaches for the one-factor model—least squares estimation and maximum likelihood estimation—can generate negative estimates, which are inconsistent with the mean-reversion assumption. The model validator should compare the calibration results from different methods to determine which best satisfies the model’s assumptions.

Short-Rate Distribution and Model Selection

The distribution of the short rate is another feature to consider when validating short-rate model assumptions. The original short-rate models—Vasicek and H-W, for example—presume the short rate to be normally distributed, allowing for the possibility of negative rates. Because negative rates were not expected to appear in simulated term structures, the Cox-Ingersoll-Ross model (CIR, non-central chi-squared distributed) and the Black-Karasinski model (BK, lognormally distributed) were invented to preclude negative rates. Compared to the normally distributed models, the non-normally distributed models forfeit a certain degree of analytical tractability, which makes validating them less straightforward. In recent years, as negative rates became a reality in the market, the shifted lognormal model was introduced. This model depends on a shift size, which determines a lower bound on rates in the simulation process. Note that there is no analytical formula for the shift size. Ideally, the shift size should equal the absolute value of the minimum negative rate in the historical data. However, not every country has experienced negative interest rates, so the shift size is generally determined by the user’s judgment and fundamental analysis.

The model validator should develop a method to quantify the risk from any analytical judgement. Because the interest rate model often serves as a sub-model in a larger module, the model selection should also be commensurate with the module’s ultimate objectives.

Replication

Effective model validation frequently relies on a replication exercise to determine whether a model follows the building procedures stated in its documentation. In general, the model documentation provides the estimation method and assorted data inputs. The model validator could consider recalibrating the parameters from the provided interest rate curve and volatility structures. This process helps the model validator better understand the model, its limitations, and potential problems.

Ongoing Monitoring & Benchmarking

Interest rate models are generally used to simulate term structures in order to price caps/floors and swaptions and measure the hedge cost. Let’s again take the H-W model as an example. Two standard simulation methods are available for the H-W model: 1) Monte Carlo simulation and 2) trinomial lattice method. The model validator could use these two methods to perform benchmarking analysis against one another.

The Monte Carlo simulation works ideally for path-dependent interest rate derivatives. The Monte Carlo method is mathematically easy to understand and convenient to implement. At each time step, a random variable is simulated and added into the interest rate dynamics. A Monte Carlo simulation is usually considered for products that can only be exercised at maturity. Because the Monte Carlo method simulates future rates path by path, we cannot determine at which point early exercise of an option would be optimal. Hence, a standard Monte Carlo approach cannot be used for derivatives with early-exercise features.
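
For illustration, a minimal Monte Carlo sketch of the H-W dynamics is shown below, using an Euler discretization with constant θ and σ for simplicity (in practice θ(t) is calibrated to the initial zero-coupon curve); all parameter values are assumptions:

```python
import numpy as np

def simulate_hull_white(r0, k, theta, sigma, T, n_steps, n_paths, seed=0):
    """Euler discretization of dr_t = k*(theta - r_t)*dt + sigma*dW_t.

    Theta and sigma are held constant here for simplicity; in practice theta(t)
    is calibrated so the model reproduces today's zero-coupon curve.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    rates = np.empty((n_paths, n_steps + 1))
    rates[:, 0] = r0
    for i in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
        rates[:, i + 1] = rates[:, i] + k * (theta - rates[:, i]) * dt + sigma * dw
    return rates

paths = simulate_hull_white(r0=0.02, k=0.1, theta=0.03, sigma=0.01, T=5.0,
                            n_steps=60, n_paths=10_000)
# Price a 5-year zero-coupon bond as the average of exp(-integral of r dt) across paths.
discount = np.exp(-paths[:, :-1].sum(axis=1) * (5.0 / 60))
print("Simulated 5y zero-coupon bond price:", discount.mean().round(4))
```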

On the other hand, we can price early-exercise products by means of the trinomial lattice method. The trinomial lattice method constructs a trinomial tree under the risk-neutral measure, in which the value at each node can be computed. Because the tree is evaluated backward, at each node we can compare the intrinsic value (current value) with the backward-inducted value (continuation value) to determine whether to exercise at that node. This comparison continues backward until it reaches the initial node and returns the final estimated value. The trinomial lattice therefore works ideally for non-path-dependent interest rate derivatives. Nevertheless, the lattice method can also be implemented for path-dependent derivatives for the purpose of benchmarking.

Normally, we would expect the result from the lattice method to be less accurate and more volatile than the result from the Monte Carlo method, because a larger number of simulated paths can be selected in the Monte Carlo method, making its result more stable for the same computing cost and time step.

Outcomes Analysis

The most straightforward method for outcomes analysis is to perform sensitivity tests on the model’s key drivers. A standardized one-factor short-rate model usually contains three parameters. For the level parameter (θ), we can calibrate the equilibrium rate level from the simulated term structure and compare it with θ. For the mean-reversion speed parameter (k), we can examine the half-life, which equals ln(2)/k, and compare it with the realized half-life from the simulated term structure. For the volatility parameter (σ_t), we would expect a larger volatility to yield a larger spread in the simulated term structure. We can also recalibrate the volatility surface from the simulated term structure to examine whether the number of simulated paths is sufficient to capture the volatility assumption.
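
As a small self-contained check of the half-life relationship, the expected short rate under constant-θ H-W dynamics is E[r_t] = θ + (r_0 − θ)e^(−kt), so the time needed to close half the gap to θ should equal ln(2)/k; the parameter values below are assumptions:

```python
import numpy as np

# Hypothetical mean-reversion parameters.
k, theta, r0 = 0.1, 0.03, 0.02
t = np.linspace(0, 20, 2001)
expected_rate = theta + (r0 - theta) * np.exp(-k * t)    # E[r_t] under constant-theta H-W

halfway = r0 + 0.5 * (theta - r0)
realized_half_life = t[np.argmax(expected_rate >= halfway)]
print("Theoretical half-life ln(2)/k:", np.log(2) / k)   # ~6.93 years
print("Half-life read off E[r_t]:   ", round(realized_half_life, 2))
```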

As mentioned above, an affine term structure model is analytically tractable, which means we can use the analytical formula to price zero-coupon bonds and other interest rate derivatives. We can compare the model results with the market prices, which can also verify the functionality of the given short-rate model.

Conclusion

The popularity of certain types of interest rate models changes as fast as the economy. In order to keep up, it is important to build a wide range of knowledge and continue learning new perspectives. Validation processes that follow the guidelines set forth in the OCC’s and FRB’s Supervisory Guidance on Model Risk Management (OCC 2011-12 and SR 11-7) seek to answer questions about the model’s conceptual soundness, development, process, implementation, and outcomes.  While the details of the actual validation process vary from bank to bank and from model to model, an interest rate model validation should seek to address these matters by asking the following questions:

  • Are the data inputs consistent with the assumptions of the given short-rate model?
  • What distribution does the interest rate dynamics imply for the short-rate model?
  • What kind of estimation method is applied in the model?
  • Is the model analytically tractable? Are there explicit analytical formulas for zero-coupon bond or bond-option from the model?
  • Is the model suitable for the Monte Carlo simulation or the lattice method?
  • Can we recalibrate the model parameters from the simulated term structures?
  • Does the model address the needs of its users?

These are the fundamental questions that we need to think about when we are trying to validate any interest rate model. Combining these with additional questions specific to the individual rate dynamics in use will yield a robust validation analysis that will satisfy both internal and regulatory demands.


AML Model Validation: Effective Process Verification Requires Thorough Documentation

Increasing regulatory scrutiny due to the catastrophic risk associated with anti-money-laundering (AML) non-compliance is prompting many banks to tighten up their approach to AML model validation. Because AML applications would be better classified as highly specialized, complex systems of algorithms and business rules than as “models,” applying model validation techniques to them presents some unique challenges that make documentation especially important.

In addition to devising effective challenges to determine the “conceptual soundness” of an AML system and whether its approach is defensible, validators must determine the extent to which various rules are firing precisely as designed. Rather than commenting on the general reasonableness of outputs based on back-testing and sensitivity analysis, validators must rely more heavily on a form of process verification that requires precise documentation.

Vendor Documentation of Transaction Monitoring Systems

Above-the-line and below-the-line testing—the backbone of most AML transaction monitoring testing—amounts to a process verification/replication exercise. For any model replication exercise to return meaningful results, the underlying model must be meticulously documented. If not, validators are left to guess at how to fill in the blanks. For some models, guessing can be an effective workaround. But it seldom works well when it comes to a transaction monitoring system and its underlying rules. Absent documentation that describes exactly what rules are supposed to do, and when they are supposed to fire, effective replication becomes nearly impossible.

Anyone who has validated an AML transaction monitoring system knows that they come with a truckload of documentation. Vendor documentation is often quite thorough and does a reasonable job of laying out the solution’s approach to assessing transaction data and generating alerts. Vendor documentation typically explains how relevant transactions are identified, what suspicious activity each rule is seeking to detect, and (usually) a reasonably detailed description of the algorithms and logic each rule applies.

This information provided by the vendor is valuable and critical to a validator’s ability to understand how the solution is intended to work. But because so much more is going on than what can reasonably be captured in vendor documentation, it alone provides insufficient information to devise above-the-line and below-the-line testing that will yield worthwhile results.

Why An AML Solution’s Vendor Documentation is Not Enough

Every model validator knows that model owners must supplement vendor-supplied documentation with their own. This is especially true with AML solutions, in which individual user settings—thresholds, triggers, look-back periods, white lists, and learning algorithms—are arguably more crucial to the solution’s overall performance than the rules themselves.

Comprehensive model owner documentation helps validators (and regulatory supervisors) understand not only that AML rules designed to flag suspicious activity are firing correctly, but also that each rule is sufficiently understood by those who use the solution. It also provides the basis for a validator’s testing that rules are calibrated reasonably. Testing these calibrations is analogous to validating the inputs and assumptions of a predictive model. If they are not explicitly spelled out, then they cannot be evaluated.

Here are some examples.

Transaction Input Transformations

Details about how transaction data streams are mapped, transformed, and integrated into the AML system’s database vary by institution and cannot reasonably be described in generic vendor documentation. Consequently, owner documentation needs to fully describe this. To pass model validation muster, the documentation should also describe the review process for input data and field mapping, along with all steps taken to correct inaccuracies or inconsistencies as they are discovered.

Mapping and importing AML transaction data is sometimes an inexact science. To mitigate risks associated with missing fields and customer attributes, risk-based parameters must be established and adequately documented. This documentation enables validators who test the import function to go into the analysis with both eyes open. Validators must be able to understand the circumstances under which proxy data is used in order to make sound judgments about the reasonableness and effectiveness of established proxy parameters and how well they are being adhered to. Ideally, documentation pertaining to transaction input transformation should describe the data validations that are performed and define any error messages that the system might generate.

Risk Scoring Methodologies and Related Monitoring

Specific methodologies used to risk score customers and countries and assign them to various lists (e.g., white, gray, or black lists) also vary enough by institution that vendor documentation cannot be expected to capture them. Processes and standards employed in creating and maintaining these lists must be documented. This documentation should include how customers and countries get on these lists to begin with, how frequently they are monitored once they are on a list, what form that monitoring takes, the circumstances under which they can move between lists, and how these circumstances are ascertained. These details are often known and usually coded (to some degree) in BSA department procedures. This is not sufficient. They should be incorporated in the AML solution’s model documentation and include data sources and a log capturing the history of customers and countries moving to and from the various risk ratings and lists.

Output Overrides

Management overrides are more prevalent with AML solutions than with most models. This is by design. AML solutions are intended to flag suspicious transactions for review, not to make a final judgment about them. That job is left to BSA department analysts. Too often, important metrics about the work of these analysts are not used to their full potential. Regular analysis of these overrides should be performed and documented so that validators can evaluate AML system performance and the justification underlying any tuning decisions based on the frequency and types of overrides.

Successful AML model validations require rule replication, and incompletely documented rules simply cannot be replicated. Transaction monitoring is a complicated, data-intensive process, and getting everything down on paper can be daunting, but AML “model” owners can take stock of where they stand by asking themselves the following questions:

  1. Are my transaction monitoring rules documented thoroughly enough for a qualified third-party validator to replicate them? (Have I included all systematic overrides, such as white lists and learning algorithms?)
  2. Does my documentation give a comprehensive description of how each scenario is intended to work?
  3. Are thresholds adequately defined?
  4. Are the data and parameters required for flagging suspicious transactions described well enough to be replicated?

If the answer to all these questions is yes, then AML solution owners can move into the model validation process reasonably confident that the state of their documentation will not be a hindrance to the AML model validation process.


Machine Learning and Portfolio Performance Analysis

Attribution analysis of portfolios typically aims to discover the impact that a portfolio manager’s investment choices and strategies had on overall profitability. It can help determine whether success was the result of an educated choice or simply good luck. Usually a benchmark is chosen and the portfolio’s performance is assessed relative to it.

This post, however, considers the question of whether a non-referential assessment is possible. That is, can we deconstruct and assess a portfolio’s performance without employing a benchmark? Such an analysis would require access to historical returns as well as the portfolio’s weights and perhaps the volatility of interest rates, if some of the components exhibit a dependence on them. This list of required variables is by no means exhaustive.

There are two prevalent approaches to attribution analysis—one based on factor models and the other on return decomposition. The factor model approach considers the equities in a portfolio at a single point in time and attributes performance to various macro- and micro-economic factors prevalent at that time. The effects of these factors are aggregated at the portfolio level and a qualitative assessment is done. Return decomposition, on the other hand, explores the manner in which positive portfolio returns are achieved across time. The principal drivers of performance are separated and further analyzed. In addition to a year’s worth of time series data for the variables listed in the previous paragraph, covariance, correlation, and cluster analyses and other mathematical methods would likely be required.

Normality Assumption

Is the normality assumption for stock returns fully justified? Are sample means and variances good proxies for population means and variances? This assumption is worth testing because Normality and the Central Limit Theorem are widely assumed when dealing with financial data. The Delta-Normal Value at Risk (VaR) method, which is widely used to compute portfolio VaR, assumes that stock returns and allied risk factors are normally distributed. Normality is also implicitly assumed in financial literature. Consider the distribution of S&P returns from May 1980 to May 2017 displayed in Figure 1.

Figure One: Distribution of S&P Returns

Panel (a) is a histogram of S&P daily returns from January 2001 to January 2017. The red curve is a Gaussian fit. Panel (b) shows the same data on a semi-log plot (logarithmic Y axis). The semi-log plot emphasizes the tail events.

The returns displayed in the left panel of Figure 1 have a higher central peak and the “shoulders” are somewhat wider than what is predicted by the Gaussian fit. This mismatch in the tails is more visible in the semi-log plot shown in panel (b). This demonstrates that a normal distribution is probably not a very accurate assumption. Sigma, the standard deviation, is typically used as a measure of the relative magnitude of market moves and as a rough proxy for the occurrence of such events. The normal distribution places the odds of a minus-5 sigma swing at only 2.86×10⁻⁵%. In other words, assuming 252 trading days per year, a drop of this magnitude should occur once in every 13,000 years! However, an examination of S&P returns over the 37-year period cited shows drops of 5 standard deviations or greater on 15 occasions. Assuming a normal distribution would consistently underestimate the occurrence of tail events.

We conducted a subsequent analysis focusing on the daily returns of SPY, a popular exchange-traded fund (ETF) that tracks 503 component instruments. Using returns from July 1, 2016 through June 30, 2017, we tested each component instrument’s return vector for normality using the Chi-Square test, the kurtosis estimate, and a visual inspection of the Q-Q plot. Brief explanations of these methods are provided below.

Chi-Square Test

This is a goodness-of-fit test that assumes a specific data distribution (Null hypothesis) and then tests that assumption. The test evaluates the deviations of the model predictions (Normal distribution, in this instance) from empirical values. If the resulting computed test statistic is large, then the observed and expected values are not close and the model is deemed a poor fit to the data. Thus, the Null hypothesis assumption of a specific distribution is rejected.

Kurtosis

The kurtosis of any univariate Normal distribution is 3. Deviations from this value imply that the data distribution is correspondingly non-Normal. An example is illustrated in Figures 2, 3, and 4, below.

Q-Q Plot

Quantile-quantile (QQ) plots are graphs on which quantiles from two distributions are plotted relative to each other. If the distributions correspond, then the plot appears linear. This is a visual assessment rather than a quantitative estimation. A sample set of results is shown in Figures 2, 3, and 4, below.
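
A compact sketch of how these three checks might be run on a single return vector is shown below (using scipy on a synthetic, fat-tailed series; the binning scheme and degrees-of-freedom adjustment are simplifying assumptions):

```python
import numpy as np
from scipy import stats

# Synthetic daily return vector standing in for one SPY component (fat-tailed by construction).
returns = stats.t.rvs(df=4, size=250, random_state=0) * 0.01

# Chi-square goodness-of-fit: compare observed bin counts against Normal-implied counts.
z = np.clip((returns - returns.mean()) / returns.std(), -4, 4)   # fold extreme tails into end bins
edges = np.linspace(-4, 4, 9)
observed, _ = np.histogram(z, bins=edges)
expected = len(z) * np.diff(stats.norm.cdf(edges))
chi2_stat = ((observed - expected) ** 2 / expected).sum()
p_value = stats.chi2.sf(chi2_stat, df=len(observed) - 3)         # lose 2 dof for estimated mean/std

# Kurtosis: scipy reports excess kurtosis, so add 3 to compare with the Normal value of 3.
kurt = stats.kurtosis(returns) + 3

print(f"chi-square p-value: {p_value:.4f}   kurtosis: {kurt:.2f}")
# For the visual check, stats.probplot(returns, dist="norm") returns the quantile pairs for a Q-Q plot.
```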

Figure Two: Year’s Returns for Exxon

Figure 2. The left panel shows the histogram of a year’s returns for Exxon (XOM). The null hypothesis was rejected, with the conclusion that the data is not normally distributed. The kurtosis was 6, which implies a deviation from normality. The Q-Q plot in the right panel reinforces these conclusions.

Figure Three: Year’s Returns for Boeing

Figure 3. The left panel shows the histogram of a year’s returns for Boeing (BA). The data is not normally distributed and also shows significant skewness. The kurtosis was 12.83, implying a significant deviation from normality. The Q-Q plot in the right panel confirms this.

For the sake of comparison, we also show returns that exhibit normality in the next figure.

Figure Four: Year’s Returns for Xerox

The left panel shows the histogram of a year’s returns for Xerox (XRX). The data is normally distributed, which is apparent from a visual inspection of both panels. The kurtosis was 3.23, which is very close to the value for a theoretical normal distribution.

Machine learning literature has several suggestions for addressing this problem, including Kernel Density Estimation and Mixture Density Networks. If the data exhibits multi-modal behavior, learning a multi-modal mixture model is a possible approach.

Stationarity Assumption

In addition to normality, we also make untested assumptions regarding stationarity. This critical assumption is implicit when computing covariances and correlations. We also tend to overlook insufficient sample sizes. As observed earlier, the SPY dataset we had at our disposal consisted of 503 instruments, with around 250 returns per instrument. The number of observations is much lower than the dimensionality of the data. This produces a covariance matrix that is not full-rank, and consequently its inverse does not exist. Singular covariance matrices are highly problematic when computing the risk-return efficiency loci in the analysis of portfolios. We tested the returns of all instruments for stationarity using the Augmented Dickey-Fuller (ADF) test. Several return vectors were non-stationary. Non-stationarity and sample-size issues cannot be wished away, because the financial markets are fluid, with new firms coming into existence and existing firms disappearing due to bankruptcies or acquisitions. Consequently, limited financial histories will be encountered and must be dealt with.
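
A stationarity screen of this sort can be run with the ADF test in statsmodels; the sketch below uses a synthetic price series (a non-stationary random walk) and its log returns rather than the actual SPY components:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Hypothetical price and return series standing in for one SPY component.
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=250)))   # random walk: non-stationary
returns = np.diff(np.log(prices))                                  # log returns: typically stationary

for name, series in [("prices", prices), ("returns", returns)]:
    stat, pvalue, *_ = adfuller(series)
    verdict = "stationary" if pvalue < 0.05 else "non-stationary"
    print(f"{name}: ADF statistic {stat:.2f}, p-value {pvalue:.3f} -> {verdict}")
```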

This is a problem where machine learning can be profitably employed. Shrinkage methods, latent factor models, empirical Bayes estimators, and random-matrix-theory-based models are widely published techniques that are applicable here.

Portfolio Performance Analysis

Once issues surrounding untested assumptions have been addressed, we can focus on portfolio performance analysis, a subject with a vast collection of books and papers devoted to it. We limit our attention here to one aspect of portfolio performance analysis: an inquiry into the clustering behavior of stocks in a portfolio.

Books on portfolio theory devote substantial space to the discussion of asset diversification to achieve an optimum balance of risk and return. To properly diversify assets, we need to know if resources have been over-allocated to a specific sector and, consequently, under-allocated to others. Cluster analysis can help to answer this. A pertinent question is how to best measure the difference or similarity between stocks. One way would be to estimate correlations between stocks. This approach has its own weaknesses, some of which have been discussed in earlier sections. Even if we had a statistically significant set of observations, we are faced with the problem of changing correlations during the course of a year due to structural and regime shifts caused by intermittent periods of stress. Even in the absence of stress, correlations can break down or change due to factors that are endogenous to individual stocks.

We can estimate similarity and visualize clusters using histogram analysis. However, histograms eliminate temporal information. To overcome this constraint, we used Spectral Clustering, which is a machine learning technique that explores cluster formation without neglecting temporal information.
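
A minimal sketch of this idea, using scikit-learn’s SpectralClustering on a synthetic return matrix with three built-in sector factors, is shown below; the correlation-based affinity is one simple choice and is an assumption here, not necessarily the affinity used in our analysis:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical return matrix: 12 stocks x 250 days, built from three sector factors
# so that genuine clusters exist (all values are illustrative).
rng = np.random.default_rng(0)
sectors = rng.normal(size=(3, 250))
returns = np.vstack([sectors[i // 4] + 0.5 * rng.normal(size=250) for i in range(12)])

# Affinity from the correlation matrix: similar return histories -> high affinity.
corr = np.corrcoef(returns)
affinity = (corr + 1) / 2                      # map correlations from [-1, 1] into [0, 1]

labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(labels)                                  # stocks sharing a factor land in the same cluster
```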

Figures 5 to 7 display preliminary results from our cluster analysis. Analyses like this will enable portfolio managers to recognize clustering patterns in their portfolios and the strength of those patterns. They will also help guide decisions on reweighting portfolio components and diversification.

Figures 5-7: Cluster Analyses

Figure 5. Cluster analysis of a limited set of stocks is shown here. The labels indicate the names of the firms. Clusters are illustrated by various colored bullets, and increasing distances indicate decreasing similarities. Within clusters, stronger affinities are indicated by greater connecting line weights.

The following figures display magnified views of individual clusters.

Figure 6. We can see that Procter & Gamble, Kimberly Clark and Colgate Palmolive form a cluster (top left, dark green bullets). Likewise, Bank of America, Wells Fargo and Goldman Sachs form a cluster (top right, light green bullets). This is not surprising as these two clusters represent two sectors: consumer products and banking. Line weights are correlated to affinities within sectors.

Figure 7. The cluster on the left displays stocks in the technology sector, while the clusters on the right represent firms in the defense industry (top) and the energy sector (bottom).

In this post, we raised questions about standard assumptions that are made when analyzing portfolios. We also suggested possible solutions from machine learning literature. We subsequently analyzed one year’s worth of returns of SPY to identify clusters and their strengths and discussed the value of such an analysis to portfolio managers in evaluating risk and reweighting or diversifying their portfolios.


Prepayment Speeds Analysis of Freddie Mac Specified Pools

Are there differences in the prepayment speeds of various Freddie Mac specified pools?

Specified pools are composed of loans with more homogeneous characteristics than the typical TBA pool. For example, loans with original balances of less than $85,000 comprise a low loan balance, or LLB, specified pool. For this analysis, we will look at the prepayment rates of low loan balance (LLB), medium loan balance (MLB, $85,000 < original loan balance <= $110,000), high loan balance (HLB, $110,000 < original loan balance <= $150,000), investor (original loan balance < $150,000 and 100% investment properties), and low-FICO pools (original loan balance < $150,000 and FICO < 700). To ensure an apples-to-apples comparison, we will restrict the analysis to 4%, 2014-vintage pools.

Background information on specified pools can be found at https://www.fanniemae.com/content/news/specified-pool-pay-up-commentary.pdf.

Watch the short video to find out the answer and to see just how easily this can be done with RS Edge.


Mitigating EUC Risk Using Model Validation Principles

The challenge of simply gauging the risk associated with “end user computing” applications (EUCs), let alone managing it, is both alarming and overwhelming. Scanning tools designed to detect EUCs can routinely turn up tens of thousands of potential files, even at not especially large financial institutions. Despite the risks inherent in using EUCs for mission-critical calculations, EUCs are prevalent in nearly any institution due to their ease of use and wide-ranging functionality.

This reality has spurred a growing number of operational risk managers to action. And even though EUCs, by definition, do not rise to the level of models, many of these managers are turning to their model risk departments for assistance. This is sensible in many cases because the skills associated with effectively validating a model translate well to reviewing an EUC for reasonableness and accuracy.  Certain model risk management tools can be tailored and scaled to manage burgeoning EUC inventories without breaking the bank.

Identifying an EUC

One risk of reviewing EUCs using personnel accustomed to validating models is the tendency of model validators to do more than is necessary. Subjecting an EUC to a full battery of effective challenges, conceptual soundness assessments, benchmarking, back-testing, and sensitivity analyses is not an efficient use of resources, nor is it typically necessary. To avoid this level of overkill, reviewers ought to be able to quickly recognize when they are looking at an EUC and when they are looking at something else.

Sometimes the simplest definitions work best: an EUC is a spreadsheet.

While neither precise, comprehensive, nor 100 percent accurate, that definition is a reasonable approximation. Not every EUC is a spreadsheet (some are Access databases) but the overwhelming majority of EUCs we see are Excel files. And not every Excel file is an EUC—conference room schedules and other files in Excel that do not do any serious calculating do not pose EUC risk. Some Excel spreadsheets are models, of course, and if an EUC review discovers quantitative estimates in a spreadsheet used to compute forecasts, then analysts should be empowered to flag such applications for review and possible inclusion in the institution’s formal model inventory. Once the dust has settled, however, the final EUC inventory is likely to contain almost exclusively spreadsheets.

Building an EUC Inventory

EUCs are not models, but much of what goes into building a model inventory applies equally well to building an EUC inventory. Because the overwhelming majority of EUCs are Excel files, the search for latent EUCs typically begins with an automated search for files with .xls and .xlsx extensions. Many commercially available tools conduct these sorts of scans. The exercise typically returns an extensive list of files that must be sifted through.
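
The first pass of such a scan can be as simple as walking a shared drive and listing candidate files; in the sketch below the root path and the extension list (extended to include Access databases) are assumptions:

```python
from pathlib import Path

# Minimal first pass of an EUC inventory scan: walk a shared drive and list Excel
# (and Access) files for later triage. The root path here is a hypothetical example.
root = Path(r"\\fileserver\shared")
extensions = {".xls", ".xlsx", ".xlsm", ".accdb", ".mdb"}

candidates = sorted(p for p in root.rglob("*") if p.suffix.lower() in extensions)
for path in candidates:
    print(path)
print(f"{len(candidates)} candidate EUC files found")
```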

Simple analytical tools, such as Excel’s “Inquire” add-in, are useful for identifying the number and types of unique calculations in a spreadsheet as well as a spreadsheet’s reliance on external data sources. Spreadsheets with no calculations can likely be excluded from further consideration for the EUC inventory. Likewise, spreadsheets with no data connections (i.e., links to or from other spreadsheets) are unlikely to qualify for the EUC inventory because such files do not typically have significant downstream impact. Spreadsheets with many tabs and hundreds of unique calculations are likely to qualify as EUCs (if not as models) regardless of their specific use.

Most spreadsheets fall somewhere between these two extremes. In many cases, questioning the owners and users of identified spreadsheets is necessary to determine how each is used and to ascertain the potential institutional risk if the spreadsheet does not work as intended. When making inquiries of spreadsheet owners, open-ended questions may not always be as helpful as those designed to elicit a narrow band of responses. Instead of asking, “What is this spreadsheet used for?” a more effective request would be, “What other systems and files is this spreadsheet used to populate?”

Answers to these sorts of questions aid not only in determining whether a spreadsheet qualifies as an EUC but the risk-rating of the EUC as well.

Testing Requirements

For now, regulator interest in seeing that EUCs are adequately monitored and controlled appears to be outpacing any formal guidance on how to go about doing it.

Absent such guidance, many institutions have started approaching EUC testing like a limited-scope model validation. Effective reviews include a documentation review, a tie-out of input data to authorized, verified sources, an examination of formulas and coding, a form of benchmarking, and an overview of spreadsheet governance and controls.

Documentation Review

Not unlike a model, each EUC should be accompanied by documentation that explains its purpose and how it accomplishes what it intends to do. Documentation should describe the source of input data and what the EUC does with it. Sufficient information should be provided for a reasonably informed reviewer to re-create the EUC based solely on the documentation. If a reviewer must guess the purpose of any calculation, then the EUC’s documentation is likely deficient.

Input Review

The reviewer should be able to match input data in the EUC back to an authoritative source. This review can be performed manually; however, any automated lookups used to pull data in from other files should be thoroughly reviewed, as well.

Formula and Function Review

Each formula in the EUC should be independently reviewed to verify that it is consistent with its documented purposes. Reviewers do not need to test the functionality of Excel—e.g., they do not need to test arithmetic functions on a calculator—however, formulas and functions should be reviewed for reasonableness.

Benchmarking

A model validation benchmarking exercise generally consists of comparing the subject model’s forecasts with those of a challenger model designed to do the same thing, but perhaps in a different way. Benchmarking an EUC, in contrast, typically involves constructing an independent spreadsheet based on the EUC documentation and making sure it returns the same answers as the EUC.

Governance and Controls

An EUC should ideally be subjected to the same controls requirements as a model. Procedures designed to ensure process checks, access and change control management, output reconciliation, and tolerance levels should be adequately documented.

The extent to which these tools should be applied depends largely on how much risk an EUC poses. Properly classifying EUCs as high-, medium-, or low-risk during the inventory process is critical to determining how much effort to invest in the review.

Other model validation elements, such as back-testing, stress testing, and sensitivity analysis, are typically not applicable to an EUC review. Because EUCs are not predictive by definition, these sorts of analyses are not likely to bring much value to an EUC review.

Striking an appropriate balance — leveraging effective model risk management principles without doing more than needs to be done — is the key to ensuring that EUCs are adequately accounted for, well controlled, and functioning properly without incurring unnecessary costs.


Validating Model Inputs: How Much Is Enough?

In some respects, the OCC 2011-12/SR 11-7 mandate to verify model inputs could not be any more straightforward: “Process verification … includes verifying that internal and external data inputs continue to be accurate, complete, consistent with model purpose and design, and of the highest quality available.” From a logical perspective, this requirement is unambiguous and non-controversial. After all, the reliability of a model’s outputs cannot be any better than the quality of its inputs.

From a functional perspective, however, it raises practical questions around the amount of work that needs to be done in order to consider a particular input “verified.” Take the example of a Housing Price Index (HPI) input assumption. It could be that the modeler obtains the HPI assumption from the bank’s finance department, which purchases it from an analytics firm. What is the model validator’s responsibility? Is it sufficient to verify that the HPI input matches the data of the finance department that supplied it? If not, is it enough to verify that the finance department’s HPI data matches the data provided by its analytics vendor? If not, is it necessary to validate the analytics firm’s model for generating HPI assumptions?

It depends.

Just as model risk increases with greater model complexity, higher uncertainty about inputs and assumptions, broader use, and larger potential impact, input risk increases with increases in input complexity and uncertainty. The risk of any specific input also rises as model outputs become increasingly sensitive to it.

Validating Model Inputs Best Practices

So how much validation of model inputs is enough? As with the management of other risks, the level of validation or control should be dictated by the magnitude or impact of the risk. Like so much else in model validation, no ‘one size fits all’ approach applies to determining the appropriate level of validation of model inputs and assumptions. In addition to cost/benefit considerations, model validators should consider at least four factors for mitigating the risk of input and assumption errors leading to inaccurate outputs.

  • Complexity of inputs
  • Manual manipulation of inputs from source system prior to input into model
  • Reliability of source system
  • Relative importance of the input to the model’s outputs (i.e., sensitivity)

Consideration 1: Complexity of Inputs

The greater the complexity of the model’s inputs and assumptions, the greater the risk of errors. For example, complex yield curves with multiple data points will be inherently subject to greater risk of inaccuracy than binary inputs such as “yes” and “no.” In general, the more complex an input is, the more scrutiny it requires and the “further back” a validator should look to verify its origin and reasonability.

Consideration 2: Manual Manipulation of Inputs from Source System Prior to Input into Model

Input data often requires modification from the source system to facilitate input into the model. More handling and manual modifications increase the likelihood of error. For example, if a position input is manually copied from Bloomberg and then subjected to a manual process of modification of format to enable uploading to the model, there is a greater likelihood of error than if the position input is extracted automatically via an API. The accuracy of the input should be verified in either case, but the more manual handling and manipulation of data that occurs, the more comprehensive the testing should be. In this example, more comprehensive testing would likely take the form of a larger sample size.

In addition, the controls over the processes to extract, transform, and load data from a source system into the model will impact the risk of error. More mature and effective controls, including automation and reconciliation, will decrease the likelihood of error and therefore likely require a lighter verification procedure.
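
Where reconciliation controls exist, part of the verification can be automated. The sketch below is a minimal illustration only, assuming hypothetical file names, a shared position_id key, and a market_value field to reconcile; it compares the file actually fed to the model against the source-system extract and flags exceptions.

```python
import pandas as pd

# Hypothetical file names and columns -- substitute the actual source extract
# and the file that is actually loaded into the model.
source = pd.read_csv("source_system_extract.csv")
model_input = pd.read_csv("model_input_file.csv")

# Join on a shared key and compare the field that feeds the model.
merged = source.merge(model_input, on="position_id", suffixes=("_src", "_mdl"))
tolerance = 0.01  # acceptable rounding difference
value_mismatches = merged[
    (merged["market_value_src"] - merged["market_value_mdl"]).abs() > tolerance
]

# Records present in the source but absent from the model input are also exceptions.
missing_from_model = set(source["position_id"]) - set(model_input["position_id"])

print(f"{len(value_mismatches)} value mismatches, {len(missing_from_model)} missing records")
```

Exceptions flagged this way can then direct where manual sample testing should be concentrated.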

Consideration 3: Reliability of Source Systems

More mature and stable source systems generally produce more consistently reliable results. Conversely, newer systems and those that have produced erroneous results in the past increase the risk of error. The results of previous input validations, whether from prior model validations or from third parties such as internal audit and compliance, can serve as an indicator of the reliability of information from source systems and of the magnitude of input risk. The greater the number of issues identified, the greater the risk, and the more likely it is that the validator should drill deeper into the underlying sources of the data.

Consideration 4: Output Sensitivity to Inputs

No matter how reliable an input’s source system is deemed to be, or how much manual manipulation the input undergoes, perhaps the most important consideration is the individual input’s power to affect the model’s outputs. Returning to our original example, if a 50 percent change in the HPI assumption has only a negligible impact on the model’s outputs, then a quick verification against the report supplied by the finance department may be sufficient. If, however, the model’s outputs are extremely sensitive to even small shifts in the HPI assumption, then additional testing is likely warranted, perhaps even extending to a validation of the analytics vendor’s HPI model (along with all of its inputs).
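
A simple one-at-a-time sensitivity check is often enough to gauge how much scrutiny an input deserves: shock the input, re-run the model, and compare the change in output. The sketch below assumes a hypothetical run_model function and an hpi_growth assumption; the toy model is a stand-in used only to illustrate the mechanics.

```python
def sensitivity(run_model, base_inputs, name, shock_pct):
    """Relative change in model output when one input is shocked by shock_pct."""
    base_output = run_model(base_inputs)
    shocked_inputs = dict(base_inputs)
    shocked_inputs[name] = base_inputs[name] * (1 + shock_pct)
    return (run_model(shocked_inputs) - base_output) / base_output

# Toy stand-in for the real model: a loss estimate that depends on HPI growth.
def run_model(inputs):
    return 1_000_000 * (1 - 2.0 * inputs["hpi_growth"])

base = {"hpi_growth": 0.03}
print(sensitivity(run_model, base, "hpi_growth", 0.50))  # impact of a 50% shock to HPI
```

If the resulting change in output is negligible, a light-touch verification of the input may suffice; if it is material, deeper testing of the input’s provenance is warranted.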

A Cost-Effective Model Input Validation Strategy

When it comes to verifying model inputs, there is no theoretical limit to the lengths to which a model validator can go. Model risk managers do not have unlimited time or budgets, however, and benefit from placing practical, risk-based limits on validation procedures in order to identify the most cost-effective way of ensuring that models are sufficiently validated. Applying the considerations listed above on a case-by-case basis will help validators define and scope model input reviews in a manner commensurate with sound risk management principles.


Performance Testing: Benchmarking Vs. Back-Testing

When someone asks you what a model validation is, what is the first thing you think of? If you are like most, you immediately think of performance metrics: the quantitative indicators that tell you not only whether the model is working as intended, but also how accurate it is over time and relative to alternatives. Performance testing is the core of any model validation and generally consists of the following components:

  • Benchmarking
  • Back-testing
  • Sensitivity Analysis
  • Stress Testing

Sensitivity analysis and stress testing, while critical to any model validation’s performance testing, will be covered by a future article. This post will focus on the relative virtues of benchmarking versus back-testing—seeking to define what each is, when and how each should be used, and how to make the best use of the results of each.

Benchmarking

Benchmarking compares the model being validated against some other model or metric. The type of benchmark used will vary, as all model validation performance testing does, with the nature, use, and type of model being validated. Because of the performance information it provides, benchmarking should be used in some form whenever a suitable benchmark can be found.

Choosing a Benchmark

Choosing what kind of benchmark to use within a model validation can sometimes be a very daunting task. Like all testing within a model validation, the kind of benchmark to use depends on the type of model being tested. Benchmarking takes many forms and may entail comparing the model’s outputs to:

  • The model’s previous version
  • An externally produced model
  • A model built by the validator
  • Other models and methodologies considered by the model developers, but not chosen
  • Industry best practice
  • Thresholds and expectations of the model’s performance

One of the most common benchmarking approaches is to compare a new model’s outputs to those of the version it is replacing. It remains very common throughout the industry for models to be replaced because of deteriorating performance, a change in risk appetite, new regulatory guidance, the need to capture new variables, or the availability of new sets of information. In these cases, it is important not only to document but also to demonstrate that the new model performs better and does not exhibit the same issues that triggered the old model’s replacement.

Another common benchmarking approach compares the model’s outputs to those of an external “challenger” model (or one built by the validator) that serves the same objective and uses the same data. Because the challenger is developed and updated with the same data as the champion model, this approach is likely to yield more apt output comparisons than benchmarking against older model versions, which may be out of date.
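
In practice, a champion-versus-challenger comparison often reduces to scoring both models on the same observations and summarizing how far apart their outputs are. The sketch below assumes both models expose a predict method and share an input dataset X; the names and the summary statistics chosen are illustrative, not prescriptive.

```python
import numpy as np

def compare_champion_challenger(champion, challenger, X):
    """Summarize differences between two models' outputs on the same input data."""
    champ_out = np.asarray(champion.predict(X), dtype=float)
    chall_out = np.asarray(challenger.predict(X), dtype=float)
    diff = champ_out - chall_out
    return {
        "mean_abs_diff": float(np.mean(np.abs(diff))),
        "max_abs_diff": float(np.max(np.abs(diff))),
        "correlation": float(np.corrcoef(champ_out, chall_out)[0, 1]),
    }
```

Large or systematic differences do not by themselves indicate which model is wrong, but they identify where the validator should dig further.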

Another possible benchmark set includes the alternative models or methodologies that the model developers considered for the model being validated but ultimately did not use. As a best practice, model developers should always document any alternative methodologies, theories, or data that were omitted from the model’s final version. Additionally, model validators should leverage their experience and understanding of current industry best practices, along with any analysis previously completed on similar models. The validation can then use these alternatives as benchmarks for the model being validated.

Model validators have multiple, distinct ways to incorporate benchmarking into their analysis. The use of the different types of benchmarking discussed here should be based on the type of model, its objective, and the validator’s best judgment. If a model cannot be reasonably benchmarked, then the validator should record why not and discuss the resulting limitations of the validation.

Back-Testing

Back-testing measures model outcomes. Here, instead of measuring performance by comparison to another model, the validator measures whether the model is working as intended and whether its outputs are accurate. Back-testing can take many forms depending on the model’s objective. As with benchmarking, back-testing should be a part of every full-scope model validation to the extent possible.

What Back-Tests to Perform

As a form of outcomes analysis, back-testing provides quantitative metrics which measure the performance of a model’s forecast, the accuracy of its estimates, or its ability to rank-order risk. For instance, if a model produces forecasts for a given variable, back-testing would involve comparing the model’s forecast values against actual outcomes, thus indicating its accuracy.
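
For a forecasting model, the comparison of forecasts to realized outcomes can be summarized with standard error metrics. The sketch below assumes aligned arrays of actuals and forecasts; the metrics are standard, but the thresholds a validator applies to them are model-specific.

```python
import numpy as np

def backtest_forecast(actuals, forecasts):
    """Basic accuracy metrics comparing forecasts to realized outcomes."""
    actuals = np.asarray(actuals, dtype=float)
    forecasts = np.asarray(forecasts, dtype=float)
    errors = forecasts - actuals
    return {
        "mean_error": float(np.mean(errors)),                   # bias
        "mae": float(np.mean(np.abs(errors))),                   # mean absolute error
        "rmse": float(np.sqrt(np.mean(errors ** 2))),            # root mean squared error
        "mape": float(np.mean(np.abs(errors / actuals)) * 100),  # percentage error (nonzero actuals)
    }

print(backtest_forecast([100, 110, 120], [98, 113, 119]))
```

Tracking these metrics across successive back-testing windows also reveals whether accuracy is deteriorating over time.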

A related function of model back-testing evaluates a model’s ability to adequately measure risk. This risk could take any of several forms, from the probability that a given borrower will default to the likelihood of a large loss on a given trading day. To back-test a model’s ability to capture risk exposure, it is important first to collect the right data. To back-test a probability of default model, for example, the data collected would need to contain cases where borrowers actually defaulted so that the model’s predictions can be tested against them.

Back-testing models that assign borrowers to various risk levels necessitates some special considerations. Back-testing these and other models that seek to rank-order risk involves looking at the model’s performance history and examining its ability to rank and order risk accurately. This can involve analyzing both Type 1 (false positive) and Type 2 (false negative) statistical errors against the true positive and true negative rates for a given model. Common statistical tests used for this type of back-testing analysis include, but are not limited to, the Kolmogorov-Smirnov (KS) statistic, the Brier score, and the area under the Receiver Operating Characteristic (ROC) curve.
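
For a probability of default model, the metrics above can be computed directly from predicted probabilities and observed default flags. The sketch below uses SciPy and scikit-learn; the arrays are illustrative placeholders rather than real portfolio data.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import roc_auc_score, brier_score_loss

# Illustrative placeholders: observed defaults (1) / non-defaults (0)
# and the model's predicted default probabilities.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 1])
p_pred = np.array([0.05, 0.10, 0.80, 0.20, 0.65, 0.15, 0.30, 0.70, 0.25, 0.55])

# KS statistic: separation between the score distributions of defaulters and non-defaulters.
ks_stat, _ = ks_2samp(p_pred[y_true == 1], p_pred[y_true == 0])

# AUC measures rank-ordering ability; the Brier score measures probability calibration.
auc = roc_auc_score(y_true, p_pred)
brier = brier_score_loss(y_true, p_pred)

print(f"KS: {ks_stat:.2f}  AUC: {auc:.2f}  Brier: {brier:.3f}")
```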

Benchmarking vs. Back-Testing

Back-testing measures a model’s outcome and accuracy against real-world observations, while benchmarking measures those outcomes against those of other models or metrics. Some overlap exists when the benchmarking includes comparing how well different models’ outputs back-test against real-world observations and the chosen benchmark. This overlap sometimes leads people to mistakenly conclude that model validations can rely on just one method. In reality, however, back-testing and benchmarking should ideally be performed together in order to bring their individual benefits to bear in evaluating the model’s overall performance. The decision, optimally, should not be whether to create a benchmark or to perform back-testing. Rather, the decision should be what form both benchmarking and back-testing should take.

While benchmarking and back-testing are complementary exercises that should not be viewed as mutually exclusive, their outcomes sometimes appear to produce conflicting results. What should a model validator do, for example, if the model back-tests well against real-world observations but does not benchmark particularly well against similar model outputs? What about a model that returns results similar to those of other benchmark models but does not back-test well? In the first scenario, the model owner can derive a measure of comfort from the knowledge that the model performs well in hindsight. But the owner also runs the very real risk of being “out on an island” if the model turns out to be wrong. The second scenario affords the comfort of company in the model’s projections. But what if the models are all wrong together?

Scenarios where benchmarking and back-testing do not produce complementary results are not common, but they do happen. In these situations, it becomes incumbent on model validators to determine whether back-testing results should trump benchmarking results (or vice-versa) or if they should simply temper one another. The course to take may be dictated by circumstances. For example, a model validator may conclude that macro-economic indicators are changing to the point that a model which back-tests favorably is not an advisable tool because it is not tuned to the expected forward-looking conditions. This could explain why a model that back-tests favorably remains a benchmarking outlier if the benchmark models are taking into account what the subject model is missing. On the other hand, there are scenarios where it is reasonable to conclude that back-testing results trump benchmarking results. After all, most firms would rather have an accurate model than one that lines up with all the others.

As discussed here, benchmarking and back-testing can produce metrics that either diverge or align, depending on the model being validated. Whether the results differ or agree, both benchmarking and back-testing provide critical, complementary information about a model’s overall performance. So when approaching a model validation and determining its scope, the choice should be what form of benchmarking and back-testing needs to be done, rather than whether to perform one instead of the other.


4 Questions to Ask When Determining Model Validation Scope

Model risk management is a necessary undertaking for which model owners must prepare on a regular basis. Model risk managers frequently struggle to strike an appropriate cost-benefit balance in determining whether a model requires validation, how frequently a model needs to be validated, and how detailed subsequent and interim model validations need to be. The extent to which a model must be validated is a decision that affects many stakeholders in terms of both time and dollars. Everyone has an interest in knowing that models are reliable, but bringing the time and expense of a full model validation to bear on every model, every year is seldom warranted. What are the circumstances under which a limited-scope validation will do and what should that validation look like? We have identified four considerations that can inform your decision on whether a full-scope model validation is necessary:

  1. What about the model has changed since the last full-scope validation?
  2. How have market conditions changed since the last validation?
  3. How mission-critical is the model?
  4. How often have manual overrides of model output been necessary?

What Constitutes a Model Validation

Comprehensive model validations consist of three main components: conceptual soundness, ongoing monitoring and benchmarking, and outcomes analysis and back-testing.[1] A comprehensive validation encompassing all these areas is usually required when a model is first put into use. Any validation that does not fully address all three of these areas is by definition a limited-scope validation. Comprehensive validations on ‘black box’ models developed and maintained by third-party vendors are therefore problematic because the mathematical code and formulas are not typically available for review (in many cases a validator can only hypothesize the cause-and-effect relationships between the inputs and outputs based on a reading of the model’s documentation). Ideally, regular comprehensive validations are supplemented by limited-scope validations and outcomes analyses on an ongoing, interim basis to ensure that the model performs as expected.

Key Considerations for Model Validation

There is no ‘one size fits all’ rule for determining when a comprehensive validation is necessary versus when a limited-scope review would be appropriate. Beyond the obvious time and cost considerations, model validation managers would benefit from asking themselves at least four questions in making this determination:

Question 1: What about the model has changed since the last full-scope validation?

Many models layer economic assumptions on top of arithmetic equations. Most models consist of three principal components:

  1. inputs (assumptions and data)
  2. processing (underlying mathematics and code that transform inputs into estimates)
  3. output reporting (processes that translate estimates into useful information)

Changes to either of the first two components are more likely to require a comprehensive validation than changes to the third component. A change that materially impacts how the model output is computed, either by changing the inputs that drive the calculation or by changing the calculations themselves, is more likely to merit a more comprehensive review than a change that merely affects how the model’s outputs are interpreted.

For example, consider a model that assigns a credit rating to a bank’s counterparties on a 100-point scale. The requirements the bank establishes for a counterparty are driven by how the model rates that counterparty. Say, for example, that the bank lends to counterparties scoring between 90 and 100 with no restrictions, to those between 80 and 89 against pledged collateral, and to those between 70 and 79 against delivered collateral, and that it does not lend to counterparties scoring below 70. Consider two possible changes to the model:

  1. Changes in model calculations that result in what used to be a 65 now being a 79.
  2. Changes in grading scale that result in a counterparty that receives a rating of 65 now being deemed creditworthy.

While the second change impacts the interpretation of model output and may require only a limited-scope validation to determine whether the amended grading scale is defensible, the first change is almost certain to require that the validator go deeper ‘under the hood’ to verify that the model is working as intended. Assuming that the inputs did not change, the first type of change may be the result of changes to assumptions (e.g., weighting schemes) or simply a correction of a perceived calculation error. The second is a change to the reporting component, where a comparison of the model’s forecasts to those of challenger models and back-testing with historical data may be sufficient for validation. Not every change that affects model outputs necessarily requires a full-scope validation. The insertion of recently updated economic forecasts into a recently validated model may require only a limited set of tests to demonstrate that changes in the model estimates are consistent with the new economic forecast inputs. The magnitude of the impact on output also matters: altering several input parameters in a way that materially changes model output is more likely to require a full validation.
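
To make the distinction concrete, the grading scale from the example can be expressed as a simple mapping from score to lending terms. The second type of change alters only the thresholds in this mapping, leaving the score calculation untouched, whereas the first type of change alters the score itself. The code below is a minimal sketch of the hypothetical scale, not any particular institution’s policy.

```python
def lending_terms(score: int) -> str:
    """Map a counterparty's 100-point credit score to lending terms."""
    if score >= 90:
        return "lend with no restrictions"
    if score >= 80:
        return "lend against pledged collateral"
    if score >= 70:
        return "lend against delivered collateral"
    return "do not lend"

# Under this scale, a 65 falls below every lending threshold.
print(lending_terms(65))  # -> do not lend
print(lending_terms(79))  # -> lend against delivered collateral
```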

Question 2: How have market conditions changed since the last validation?

Even models that do not change at all require periodic, full-scope validations because macroeconomic conditions or other external factors call one or more of the model’s underlying assumptions into question. The 2008 global financial crisis is a perfect example. Mortgage credit and prepayment models prior to 2008 were built on assumptions that appeared reasonable and plausible based on market observations prior to 2008. Statistical models based solely on historical data before, during, or after the crisis are likely to require full-scope validations as their underlying datasets are expanded to capture a more comprehensive array of observed economic scenarios. It doesn’t always have to be bad news in the economy to instigate model changes that require full-scope validations. The federal funds rate has been hovering near zero since the end of 2008. With a period of gradual and sustained recovery potentially on the horizon, many models are beginning to incorporate rising interest rates into their current forecasts. These foreseeable model adjustments will likely require more comprehensive validations geared toward verifying that model outputs are appropriately sensitive to the revised interest rate assumptions.

Question 3: How mission-critical is the model?

The more vital a model’s outputs are to financial statements or mission-critical business decisions, the greater the need for frequent and detailed third-party validations. Model risk is amplified when model outputs inform reports provided to investors, regulators, or compliance authorities. Particular care should be given when deciding whether to only partially validate models with such high-stakes outputs. Models whose outputs are used for internal strategic planning are also important. That being said, some models are more critical to a bank’s long-term success than others. Ensuring the accuracy of the risk algorithms used for DFAST stress testing is more imperative than ensuring the accuracy of a model that predicts wait times in a customer service queue. Consequently, DFAST models, regardless of their complexity, are likely to require more frequent full-scope validations than models whose results undergo less scrutiny.

Question 4: How often have manual overrides of model output been necessary?

Another issue to consider revolves around the use of manual overrides of the model’s output. In cases where expert opinion is permitted to supersede the model outputs on a regular basis, more frequent full-scope validations may be necessary in order to determine whether the model is performing as intended. Counterparty credit scoring models, cited in our earlier example, are frequently subject to manual overrides by human underwriters to account for new or qualitative information that the model cannot process. Whether it is necessary to revise or re-estimate a model is frequently a function of how often such overrides are required and how large they tend to be. Models whose outputs are frequently overridden should be subjected to more frequent full-scope validations. And models that are revised as a result of numerous overrides should also likely be fully validated, particularly when the revision includes significant changes to input variables and their respective weightings.

Full or Partial Model Validation?

Model risk managers need to perform a delicate balancing act in order to ensure that an enterprise’s models are sufficiently validated while keeping to a budget and not overly burdening model owners. In many cases, limited-scope validations are the most efficient means to this end. Such validations allow for the continuous monitoring of model performance without bringing in a Ph.D. with a full team of experts to opine on a model whose conceptual approach, inputs, assumptions, and controls have not changed since its last full-scope validation. While gray areas abound and the question of full versus partial validation needs to be addressed on a case-by-case basis, the four basic considerations outlined above can inform and facilitate the decision. Incorporating these considerations into your model risk management policy will greatly simplify the decision of how detailed your next model validation needs to be. An informed decision to perform a partial model validation can ultimately save your business the time and expense required to execute a full model validation.


[1] In the United States, most model validations are governed by the following sets of guidelines: 1) OCC 2011-12 (institutions regulated by the OCC), and 2) Federal Reserve SR 11-7 (institutions regulated by the Federal Reserve). These guidelines are effectively identical to one another. Model validations at government-sponsored enterprises, including Fannie Mae, Freddie Mac, and the Federal Home Loan Banks, are governed by FHFA Advisory Bulletin 2013-07, which, while different from the OCC and Fed guidance, shares many of the same underlying principles.


   
