
Single Family Rental Securitization Market

The Single Family Rental Market

The single-family rental market has existed for decades as a thriving part of the U.S. housing market. Investment in single-family homes for rental purposes has given many American “mom and pop” investors opportunities to build and maintain wealth, prepare for retirement, and hold residual cash-flow-producing assets. According to the National Rental Home Council (NRHC) (“Single-Family Rental Primer”; Green Street Advisors, June 6, 2016), as of year-end 2015 the single-family rental market comprised approximately 13% of all occupied housing (16 million detached single-family rentals) and roughly 37% of the entire United States rental market.

Single-Family Rental Securitization Structure

The credit crisis of 2008 changed this landscape. Limited credit for non-prime borrowers, in combination with record-setting delinquency and foreclosure rates, prompted a significant reduction in housing prices. According to the S&P CoreLogic Case-Shiller U.S. National Home Price NSA Index, national house prices had dropped 25% (index value = 138.5) by April 2012 from the index’s launch on May 18, 2006 (initial index value = 184.38).

The combination of low prices, post-crisis rental demand, and highly restrictive mortgage credit qualifications alerted particular investors to an opportunity. Specific private institutional investors, mostly private equity firms, began acquiring large quantities of distressed single-family homes. According to the working paper “The Emerging Economic Geography of Single-Family Rental Securitization” from the Federal Reserve Bank of San Francisco (Fields, Kohli, Schafran; January 2016), the entrance of these “large institutional investors into their new role as ‘corporate landlords’ [represented] a paradigm shift for the single-family rental market.”

Not only did these investors rehabilitate the homes and rent them out to tenants, they then introduced these assets into the capital markets by pledging the collateral and rental receipts into publicly issued REITs and by issuing single-family rental (SFR) securitizations. SFR securitization was a new concept built on an old vehicle: forming a bankruptcy-remote special purpose vehicle for the purpose of issuing debt against pledged collateral assets.

In this case, the collateral is generally a loan secured by a first-priority mortgage (held in an LP or LLC) and backed by the pledge or sale of the underlying single-family homes operated as rental properties (also typically held in an LP or LLC). This structure provided a strong exit strategy for investors: it allowed them to obtain immediate capital while also increasing their leveraged return on equity.

When Did Single-Family Rental Securitization Begin?

The first securitization transaction was issued in November 2013 by Invitation Homes (IH 2013-1), a subsidiary of the Blackstone Group (NYSE: BX). As of July 2016, 32 SFR transactions had been issued: 26 single-borrower and six multi-borrower. The table below lists all SFR single- and multi-borrower securitization transactions rated as of July 2016.[1][2][3]

Table: SFR Securitization Transactions Rated as of July 2016

Interestingly, the inventory currently owned, as well as securitized, represents only approximately 1% to 2% of the overall market. Also of particular interest are the recent consolidation of institutions active in this market and the introduction of new participants. American Homes 4 Rent (AM4R) acquired Beazer Rental Homes in July 2014, and Colony American Homes (Colony) merged with Starwood Waypoint Residential Trust (SWAY) in January 2016. Following the merger, the newly formed company issued its own SFR securitization in June 2016, comprising approximately 3,600 properties with a loan balance of $536 million (CSH 2016-1). Also new to the SFR securitization market is Home Partners of America (formerly Hyperion Homes, Inc.), which issued its first single-family rental securitization earlier this year (approximately $654 million across 2,232 properties).

Single-Family Rental Securitization Market Outlook

The question remains, is the SFR securitization market here to stay? On the one hand, issuance still appears to be strong; however, SFRs could be an efficient market’s response to the market dislocation of 2008, the effects of which may now appear to be fading away.  At a minimum this type of securitization demonstrates the effectiveness of the capital markets in moving quickly to fill the gaps left by the bursting of the housing bubble.

[1] Source: Kroll Bond Rating Agency, Inc. (KBRA)

[2] Source:

[3] Source: Yahoo Finance

Sample Size Requirements for CECL Modeling

Part One of a Two-Part Series on CECL Data Requirements

With CECL implementation looming, many bankers are questioning whether they have enough internal loan data for CECL modeling. Ensuring your data is sufficient is a critical first step in meeting the CECL requirements, as you will need to find and obtain relevant third-party data if it isn’t. This article explains in plain English how to calculate statistically sufficient sample sizes to determine whether third-party data is required. More importantly, it shows modeling techniques that reduce the required sample size. Investing in the right modeling approach could ultimately save you the time and expense of obtaining third-party data.

CECL Data Requirements: Sample Size for a Single Homogenous Pool

Exhibit 1: Required Sample Size

Let’s first consider the sample required for a single pool of nearly identical loans. In the case of a uniform pool of loans — with the same FICO, loan-to-value (LTV) ratio, loan age, etc. — there is a straightforward formula to calculate the sample size we need to estimate the pool’s default rate, shown in Exhibit 1. As the formula shows, the sample size depends on several variables, some of which must be estimated:

  • Materiality Threshold and Confidence Level: Suppose you have a $1 billion loan portfolio and you determine that, from a financial statement materiality standpoint, your ALLL estimate needs to be reliable to within +/- $2.5 million. Statistically, we would say that we need to be 95% confident that our loss reserve estimate is within an error margin of +/- $2.5 million of the true figure. The wider our materiality thresholds and lower our required confidence levels, the smaller the sample size we need.
  • Loss Severity: As your average loss severity increases, you need a greater sample size to achieve the same error margin and confidence level. For example, if your average loss severity is 0%, you will estimate zero losses regardless of your default rates. Theoretically, you don’t even need to perform the exercise of estimating default rates, and your required sample size is zero. On the opposite end, if your average loss severity is 100%, every dollar of defaulted balance translates into a dollar of loss, so you can least afford to misestimate default rates. Your required sample size will therefore be large.
  • Default Rates: Your preliminary estimate of default rate, based on your available sample, also affects the sample size you will require. (Of course, if you lack any internal sample, you already know you need to obtain third-party data for CECL modeling.) Holding dollar error margin constant, you need fewer loans for low default-rate populations.

Example: Suppose we have originated a pool of low-risk commercial real estate loans. We have historical observations for 500 such loans, of which 495 paid off and five defaulted, so our preliminary default rate estimate is 1%. Of the five defaults, loss severity averaged 25% of original principal balance. We deem ALLL estimate errors within 0.25% of the relevant principal balance to be immaterial. Is our internal sample of 500 loans enough for CECL modeling purposes, or do we need to obtain proxy data? Simply apply the formula from Exhibit 1: In this case, our internal sample of 500 loans is more than enough to give us a statistical confidence interval that is narrower than our materiality thresholds. We do not need proxy data to inform our CECL model in this case.
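The formula in Exhibit 1 is not reproduced in the text, so the sketch below assumes the standard normal-approximation sample-size formula, n = z²·p(1−p)/e², where the tolerable error e on the default rate is the materiality threshold divided by loss severity. The function name and parameters are ours, for illustration:

```python
import math

def required_sample_size(default_rate, loss_severity, materiality_pct, z=1.96):
    """Loans needed to estimate a pool's default rate within materiality.

    materiality_pct is the tolerable ALLL error as a fraction of principal.
    A dollar error equals (default-rate error x loss severity), so the
    tolerable error on the default rate itself is materiality / severity.
    """
    if loss_severity == 0:
        return 0  # zero severity: losses are zero regardless of default rate
    error_margin = materiality_pct / loss_severity
    n = (z ** 2) * default_rate * (1 - default_rate) / error_margin ** 2
    return math.ceil(n)

# Worked example above: 1% default rate, 25% severity, 0.25% materiality
print(required_sample_size(0.01, 0.25, 0.0025))  # 381 -- under the 500 loans on hand
```

With these inputs the formula calls for roughly 381 loans, which is why the internal sample of 500 suffices in the example.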

CECL Data Requirements: Sample Size Across an Asset Class

If we have an asset class with loans of varying credit risk characteristics, one way to determine the needed sample is just to carve up the portfolio into many buckets of loans with like-risk characteristics, determine the number of loans needed for each bucket on a standalone basis per the formula above, and then sum these amounts. The problem with this approach – assuming our concern is to avoid material ALLL errors at the asset class level – is that it will dramatically overstate the aggregate number of loans required.

A better approach, which still involves segregating the portfolio into risk buckets, is to assign varying margins of error across the buckets in a way that minimizes the aggregate sample required while maintaining a proportional portfolio mix and keeping the aggregate margin of error within the aggregate materiality threshold. A tool like Solver within Microsoft Excel can perform this optimization task with precision. The resulting error margins (as a percentage of each bucket’s default rate estimates) are much wider than they would be on a standalone basis for buckets with low frequencies and slightly narrower for buckets with high default frequencies.

Even at its most optimized, though, the total number of loans needed to estimate the default rates of multiple like-risk buckets will skyrocket as the number of key credit risk variables increases. A superior approach to bucketing is loan-level modeling, which treats the entire asset class as one sample but estimates loan-specific default rates according to the individual risk characteristics of each loan.

Loan-Level Modeling


Suppose within a particular asset class, FICO is the only factor that affects default rates, and we segregate loans into four FICO buckets based on credit performance. (Assume for simplicity that each bucket holds an equal number of loans.) The buckets’ default rates range from 1% to 7%. As before, average loss severity is 25% and our materiality threshold is 0.25% of principal balance. Whether with a bucketing approach or loan-level modeling, we need a sample of about 5,000 loans total across the asset class. (We calculate the sample required for bucketing with Solver as described above and the sample required for loan-level modeling with an iterative approach described below.)

Now suppose we discover that loan age is another key performance driver. We want to incorporate this into our model because an accurate ALLL minimizes earnings volatility and thereby minimizes excessive capital buffers. We create four loan age buckets, leaving us with 4 × 4 = 16 buckets (again, assume the buckets hold equal loan counts). With four categories in each of two variables, we would need around 9,000 loans for loan-level modeling but 20,000 loans for a bucketing approach, with around 1,300 in each bucket. (These are ballpark estimates that assume your loan-level model has been properly constructed and fits the data reasonably well. Your estimates will vary somewhat with the default rates and loss severities of your available sample. Also, while this article deals with loan count sufficiency, we have noted previously that the same dataset must also cover a sufficient timespan, whether you are using loan-level modeling or bucketing.)

Finally, suppose we include a third variable, perhaps stage in the economic cycle, LTV, debt service coverage ratio, or something else.

Exhibit 2: Loan-Level Modeling Yields Greater Insight from Smaller Samples

Again assume we segregate loans into four categories based on this third variable. Now we have 4 × 4 × 4 = 64 equal-sized buckets. With loan-level modeling we need around 12,000 loans. With bucketing we need around 100,000 loans, an average of around 1,600 per bucket. As the graph in Exhibit 2 shows, a bucketing approach forces us to choose between less insight and an astronomical sample size requirement. As we increase the number of variables used to forecast credit losses, the sample needed for loan-level modeling increases slightly, but the sample needed for bucketing explodes. This points to loan-level modeling as the best solution, because well-performing CECL models incorporate many variables. (Another benefit of loan-level credit models, one that is of particular interest to investors, is that the granular intelligence they provide can facilitate better loan screening and pricing decisions.)

CECL Data Requirements: Sample Size for Loan-Level Modeling

Determining the sample size needed for loan-level modeling is an iterative process based on the standard errors reported in the model output of a statistical software package. After estimating and running a model on your existing sample, convert the error margin of each default rate (1.96 × the standard error of the default rate estimate to generate a 95% confidence interval) into an error margin of dollars lost by multiplying the default rate error margin by loss severity and the relevant principal balance. Next, sum each dollar error margin to determine whether the aggregate dollar error margin is within the materiality threshold, and adjust the sample size up or down as necessary. The second part in our series on CECL data requirements will lay out the data fields that should be collected and preserved to support CECL modeling.
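The aggregation step in the iterative process above can be sketched as follows; the function name and the three-loan inputs are hypothetical. Each loan’s default-rate standard error is converted to a 95% confidence half-width, translated into dollars, and summed for comparison against the materiality threshold:

```python
import numpy as np

def aggregate_dollar_error(std_errors, balances, loss_severity, z=1.96):
    """Sum the dollar error margins implied by loan-level default-rate
    standard errors reported by a statistical software package."""
    half_widths = z * np.asarray(std_errors)               # 95% CI half-widths
    dollar_margins = half_widths * loss_severity * np.asarray(balances)
    return dollar_margins.sum()

# Hypothetical model output for three loans
err = aggregate_dollar_error([0.002, 0.004, 0.003],
                             [100_000, 250_000, 150_000], 0.25)
print(err)  # 808.5 -- compare against the portfolio's materiality threshold
```

If the resulting aggregate dollar error exceeds the threshold, the sample is enlarged and the model re-estimated; if it is comfortably inside, the existing sample suffices.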


Mortgage Insurance and Loss Severity: Causes and Effects of Mortgage Insurance Shortfalls

Mortgage Insurance and Loss Severity

This blog post is the first in a two-part series about Mortgage Insurance and Loss Severity. During the implementation of RiskSpan’s Credit Model, which enables users to estimate loan-level default, prepayment, and loss severity based on loan-level credit characteristics and macroeconomic forecasts, our team explored the many variables that affect loss severity. This series will highlight what our team discovered about Mortgage Insurance and loss severity, enabling banks to use this GSE data to benchmark their own MI recovery rates and help estimate their credit risk from MI shortfalls.

RiskSpan reviewed the historical performance of Mortgage Insurers providing loan loss benefits between 1999 and 2015. Our analysis centered on Borrower- and Lender-Paid Mortgage Insurance (referred to collectively as MI in this post) in Freddie Mac’s Single Family Loan-Level Dataset. Similar data is available from Fannie Mae; however, we’ve limited our initial analysis to Freddie Mac because its data more clearly reports the recovery amounts coming from Mortgage Insurers.

Mortgage Insurance Benefit Options

Exhibit 1: Mortgage Insurance Percentage Option Benefit Calculation

Mortgage Insurance Benefit = Calculated Losses x MI Percent Coverage

Calculated Losses include:

  • UPB at time of default
  • Unpaid Interest
  • Other costs, such as attorney and statutory fees, taxes, insurance, and property maintenance.

Mortgage insurance protects investors in the event a borrower defaults. Mortgage Insurers have many options for resolving MI claims and determining the expected benefit, the amount the insurer pays in the event of a defaulted loan. The primary claim option is the Percentage Option, in which the loan loss is multiplied by the MI percentage, as shown in Exhibit 1. Freddie Mac’s dataset includes the MI percentage and several loss fields, as well as other loan characteristics necessary to calculate the loss amount for each loan.

The Mortgage Insurer will elect to use other claim options if they result in a lower claim than the Percentage Option. For example, if the Calculated Losses less the Net Proceeds from the liquidation of the property (i.e., net losses) are less than the Mortgage Insurance Benefit under the Percentage Option, the Mortgage Insurer can elect to reimburse the net losses. Mortgage Insurers can also choose to acquire the property, known as the Acquisition Option: the Mortgage Insurer acquires the property after paying the investor the full amount of the Calculated Losses on the loan. There were no instances in the data of Mortgage Insurers exercising the Acquisition Option after 2006.
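The insurer’s choice between the Percentage Option and reimbursing net losses can be sketched as a simple comparison. This is an illustration of the logic described above, not Freddie Mac’s or any insurer’s exact claim algorithm:

```python
def mi_claim_benefit(calculated_losses, mi_coverage_pct, net_proceeds):
    """Insurer pays the lesser of the Percentage Option benefit and the
    investor's actual net losses after liquidation."""
    percentage_option = calculated_losses * mi_coverage_pct
    net_losses = max(calculated_losses - net_proceeds, 0.0)
    return min(percentage_option, net_losses)

# Hypothetical loan: $150,000 in Calculated Losses, 25% coverage, and
# $130,000 of Net Proceeds. Net losses ($20,000) fall below the
# Percentage Option benefit ($37,500), so the insurer pays net losses.
print(mi_claim_benefit(150_000, 0.25, 130_000))  # 20000.0
```

With weaker liquidation proceeds (say $100,000), net losses would exceed $37,500 and the Percentage Option would cap the benefit instead.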

Causes of Mortgage Insurance Shortfalls

Freddie Mac’s loan-level dataset allows us to examine loans with MI that experienced default and sustained losses. We find cases in which mortgages with MI coverage did not receive their expected MI benefits after liquidation. These occurrences can be explained by servicing and business factors not provided in the data, for example:

Cancellation: Mortgage Insurance may be cancelled either by non-payment of the MI premium to the insurer or by the loan reaching a certain CLTV threshold. Per the Homeowners Protection Act of 1998, servicers automatically terminate private mortgage insurance (PMI) once the principal balance of the mortgage reaches 78% of the original value, or the borrower may ask for MI to be cancelled once the mark-to-market loan-to-value ratio falls below 80%.

Denial: Mortgage Insurers may deny a claim for multiple reasons, such as

  • not filing a Notice of Default with the Mortgage Insurer within the time frame set by the MI policy guidelines,
  • not submitting the claim within a timely period after the liquidation event,
  • inability to transfer title, or
  • not providing the necessary claim documentation (usually underwriting documentation from loan origination) to the Mortgage Insurer at the time of claim.

Rescission: Mortgage Insurers will rescind an MI claim but refund the MI premiums to the servicer. Rescission of claims is usually linked to the original underwriting of the loan and might be caused by multiple factors, such as

  • underwriting negligence by the lender,
  • third-party fraud, or
  • misrepresentation by the borrower.

Curtailment: Mortgage Insurers will partially reimburse the filed claim if some expenses are outside the scope of the MI policy. Examples of curtailment of MI claims include

  • excess interest, tax, and insurance expenses beyond the coverage provisions of the Master Policy (most current MI policies do not have these restrictions),
  • non-covered expenses such as costs associated with physical damage to the property, tax penalties, etc., and
  • delays in reaching foreclosure in a timely manner.

Receivership: During the mortgage crisis, several Mortgage Insurers (for instance, Triad, PMI, and RMIC) became insolvent, and state insurance regulators placed them into receivership. For loans insured by Mortgage Insurers in receivership, claims are currently being partially paid (at around 50% of the expected benefit) with the unpaid benefit deferred. This deferred benefit runs the risk of never being paid.

These factors are evident in the data and our analysis as follows:

Cancellations: The Freddie Mac dataset does not provide the MI in force at the time of default, so we cannot identify cases of cancellation. These cases would show up as an instance of no MI payment.

Denials & Rescissions: Our analysis excludes any loans that were repurchased by the lender, which would likely exclude most instances of MI rescission and denial. In instances where the Mortgage Insurer found sufficient cause to rescind or deny, Freddie Mac would most likely find sufficient evidence for a lender repurchase as well.

Curtailments: The analysis includes the impact of MI curtailment.

Receivership: The analysis includes the impact of Mortgage Insurers going into receivership.

Shortfalls of Expected Mortgage Insurance Recoveries

In the exhibits below, we provide the calculated MI Haircut Rate by Vintage Year and by Disposition Year for the loans in our analysis. We define the MI Haircut Rate as the shortfall between our calculated expected MI proceeds and the actual MI proceeds reported in the dataset.

The shortfall in MI recoveries is separated into two categories: MI Gap and No MI Payment. 

  • MI Gap represents instances where some actual MI proceeds exist, but they are less than our calculated expected amount. The shortfall in actual MI benefit could be due to either Curtailment or partial payment due to Receivership. 
  • No MI Payment represents instances where there was no MI recovery associated with loans that experienced losses and had MI at origination. No payment could be due to Rescission, Cancellation, Denial, or Receivership.
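The two shortfall categories above can be expressed as a simple per-loan classification. The function and labels are ours, following the definitions just given:

```python
def classify_mi_recovery(expected_mi, actual_mi):
    """Bucket a liquidated loan's actual MI recovery against expectation."""
    if expected_mi <= 0:
        return "No MI Expected"
    if actual_mi == 0:
        # Rescission, Cancellation, Denial, or Receivership
        return "No MI Payment"
    if actual_mi < expected_mi:
        # Curtailment or partial payment under Receivership
        return "MI Gap"
    return "Full MI Payment"

print(classify_mi_recovery(37_500, 0))       # No MI Payment
print(classify_mi_recovery(37_500, 30_000))  # MI Gap
```

Applying this tag to every liquidated loan with MI at origination is what allows the haircut rates in the exhibits below to be decomposed by cause.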

For purposes of this analysis, the Severity Rate represented below does not include the portion of the loss outside the scope of MI. For example, in 2001 the average severity rate was 30%, but only 19% was eligible to be offset by MI. This was done to give a better understanding of the MI haircut’s effect on the Severity Rate.

Exhibit 2: Mortgage Insurance Haircut Rate by Vintage Year

We observe an MI Haircut Rate averaging 19.50% for vintages 1999 to 2011, with higher haircuts, averaging 23.50%, for the distressed vintages 2003 to 2008.

Exhibit 3: Mortgage Insurance Haircut Rate by Disposition Year

Our analysis shows the MI Haircut Rate averaged 6.5% prior to 2008 and steadily increased to an average of 25% from 2009 through 2014. We explain this increase below.

Exhibit 4: Mortgage Insurance Haircut Rate and Expense to Delinquent UPB Percentage by Months Non-Performing

In this analysis, we observe that the MI Haircut Rate increased steadily with the number of months between when a loan was first classified as non-performing and when the loan liquidated. This increase can be explained by increased curtailments tied to expenses that grow over time, such as expenses associated with physical damage to the property, tax penalties, delinquent interest, insurance and taxes outside the coverage period, and excessive maintenance or attorney fees. Interest, taxes, and insurance typically constitute 85% of all loss expenses.

This analysis of mortgage insurance is an exploratory post into what causes the shortfall in MI claims and how those shortfalls can affect loss severity. RiskSpan will be addressing a series of topics related to Mortgage Insurance and loss severity.  In our next post we will address how banks can use this GSE data to benchmark their own MI recovery rates and help estimate their credit risk from MI shortfalls.

What CECL Means To Investors

Recent updates to U.S. GAAP will dramatically change the way financial institutions incorporate credit risk into their financial statements. The new method is called the Current Expected Credit Loss (CECL) model and will take effect over the next few years. For many institutions, CECL will mean a one-time reduction in book equity and lower stated earnings during periods of portfolio growth. These reductions occur because CECL implicitly double-counts credit risk from the time of loan origination, as we will meticulously demonstrate. But for investors, will the accounting change alter the value of your shares?

Three Distinct Measures of Value

To answer this question well, we need to parse three distinct measures of value:

1.      Book Value: This is total shareholders’ equity as reported in financial reports like 10-Ks and annual reports prepared in accordance with U.S. GAAP.

2.      Current Market Value (also known as Market Cap): Current share price multiplied by the number of outstanding shares. This is the market’s collective opinion of the value of your institution. It could be very similar to, or quite different from, book value, and may change from minute to minute.

3.      Intrinsic Value (also known as Fundamental Value or True Value): The price that a rational investor with perfect knowledge of an institution’s characteristics would be willing to pay for its shares. It is by comparing an estimate of intrinsic value versus current market value that we deem a stock over- or under-priced. Investors with a long-term interest in a company should be concerned with its intrinsic or true value.

How Does an Accounting Change Affect Each Measure of Value?

Accounting standards govern financial statements, which investors then interpret. An informed, rational investor will “look through” any accounting quirk that distorts the true economics of an enterprise. Book value, therefore, is the only measure of value that an accounting change directly affects.

An accounting change may indirectly affect the true value of a company if costly regulations kick in as a result of a lower book value or if the operational cost of complying with the new standard is cumbersome. These are some of the risks to fundamental value from CECL, which we discuss later, along with potential mitigants.

Key Feature of CECL: Double-Counting Credit Risk

The single-most important thing for investors to understand about CECL is that it double-counts the credit risk of loans in a way that artificially reduces stated earnings and the book values of assets and equity at the time a loan is originated. It is not the intent of CECL to double-count credit risk, but it has that effect, as noted by no less authorities than the two members of the Financial Accounting Standards Board (FASB) who dissented from the rule. (CECL was adopted by a 5-2 vote.)

Consider this simple example of CECL accounting: A bank makes a loan with an original principal balance of $100. CECL requires the bank to recognize an expense equal to the present value of expected credit losses[i] and to record a credit allowance that reduces net assets by this same amount. Suppose we immediately reserve our $100 loan down to a net book value of $99 and book a $1 expense. Why did we even make the loan? Why did we spend $100 on something our accountant says is worth $99? Is lending for suckers?

Intuitively, consider that to make a loan of $100 is to buy a financial asset for a price of $100. If other banks would have made the same loan at the same interest rate (which is to say, they would have paid the same price for the same asset), then our loan’s original principal balance was equal to its fair market value at the time of origination. It is critical to understand that an asset’s fair market value is the price which market participants would pay after considering all of the asset’s risks, including credit risk. Thus, any further allowance for credit risk below the original principal balance is a double-counting of credit risk.

Here’s the underlying math: Suppose the $100 loan is a one-year loan, with a single principal and interest payment due at maturity. If the note rate is 5%, the contractual cash flow is $105 next year. This $105 is the most we can receive; we receive it if no default occurs. What is the present value of the $105 we hope to receive? One way to determine it is to discount the full $105 amount by a discount rate that reflects the risk of nonpayment. We established that 5% is the rate of return that banks are requiring of borrowers presenting similar credit risk, so an easy present value calculation is to discount next year’s contractual $105 cash flow by the 5% contractual interest rate, i.e., $105 / (1 + 5%) = $100.

Alternatively, we could reduce the contractual cash flow of $105 by some estimate of credit risk. Say we estimate that if we made many loans like this one, we would collect an average of $104 per loan. Our expected future cash flow, then, is $104. If we take the market value of $100 for this loan as an anchor point, then the market’s required rate of return for expected cash flows must be 4% ($104 / (1 + 4%) = $100). It is only sensible that the market requires a lower rate of return on cash flows with greater certainty of collection.

What the CECL standard does is require banks to discount the lower expected cash flows at the higher contractual rate (or to use non-discounting techniques that have the same effect). This would be like discounting $104 at 5% and calculating a fair market value for the asset of $104 / (1 + 5%) ≈ $99. This (CECL’s) method double-counts credit risk by $1. The graph below shows the proper relationship between cash flow forecasts and discount rates when performing present value calculations, and shows how CECL plots off the line.
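The three discounting combinations in this example can be checked directly. Only the matched pairings of cash flow and discount rate recover the $100 market value; the CECL-style mismatch produces the roughly $1 shortfall:

```python
contractual_cf = 105.0  # $100 one-year loan at a 5% note rate
expected_cf = 104.0     # contractual cash flow net of expected credit losses

consistent_1 = contractual_cf / 1.05  # contractual CF at the contractual rate
consistent_2 = expected_cf / 1.04     # expected CF at the credit-adjusted rate
cecl_style = expected_cf / 1.05       # CECL: expected CF at the contractual rate

print(round(consistent_1, 2))  # 100.0
print(round(consistent_2, 2))  # 100.0
print(round(cecl_style, 2))    # 99.05 -- the ~$1 gap double-counts credit risk
```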

Graph: Proper Valuation Combinations

FASB Vice Chairman James Kroeker and Board member Lawrence Smith described the double-counting issue in their dissent to the standards update: “When performing a present value calculation of future cash flows, it is inappropriate to reflect credit risk in both the expected future cash flows and the discount rate because doing so effectively double counts the reflection of credit risk in that present value calculation. If estimates of future cash flows reflect the risk of nonpayment, then the discount rate should be closer to risk-free. If estimates of future cash flows are based on contractual amounts (and thus do not reflect a nonpayment risk), the discount rate should be higher to reflect assumptions about future defaults.” Ultimately, the revised standard “results in financial reporting that does not faithfully reflect the economics of lending activities.”[ii]

The Accounting Standards Update notes two tangential counterpoints to Kroeker and Smith’s dissent. The first point is that banks would find alternative methods challenging, which may be true but is irrelevant to the question of whether CECL faithfully reflects true economics. The second point is that the valuation principles Kroeker and Smith lay out are for fair value estimates, whereas the accounting standard is not intended to produce fair value estimates. This concedes the only point we are trying to make, which is that the accounting treatment deviates (downwardly, in this case) from the fundamental and market value that an investor should care about.

How CECL Affects Each Measure of Value

As noted previously, the direct consequences of CECL will hit book value. Rating agency Fitch estimates that the initial implementation of CECL would shave 25 to 50 bps off the aggregate tangible common equity ratio of US banks if applied in today’s economy. The ongoing impact of CECL will be less dramatic because the annual impact to stated earnings is just the year-over-year change in CECL. Still, a growing portfolio would likely add to its CECL reserve every year.[iii]

There are many indirect consequences of CECL that may affect market and true value:

1.      Leverage: The combination of lower book values from CECL with regulations that limit leverage on the basis of book value could force some banks to issue equity or retain earnings to de-leverage their balance sheet. Consider these points:

a.      There is a strong argument to be made to regulators that the capital requirements that pre-dated CECL, if not adjusted for the more conservative asset calculations of CECL, will have become more conservative de facto than they were meant to be. There is no indication that regulators are considering such an adjustment, however. A joint statement on CECL from the major regulators tells financial institutions to “[plan] for the potential impact of the new accounting standard on capital.”[iv]

b.      Withholding a dividend payment does not automatically reduce a firm’s true value. If the enterprise can put retained earnings to profitable use, the dollar that wasn’t paid out to investors this year can appreciate into a larger payment later.

c.       The deeper threat to value (across all three measures) comes if regulations force a permanent de-leveraging of the balance sheet. This action would shift the capital mix away from tax-advantaged debt and toward equity, increase the after-tax cost of capital and decrease earnings and cash flow per share, all else equal.

Because banks face the shift to CECL together, however, they may be able to pass greater capital costs on to their borrowers in the form of higher fees or higher interest rates.

d.      Banks can help themselves in a variety of ways. The more accurate a bank’s loss forecasts prove to be, the more stable its loss reserve will be, and the less likely regulators are to require additional capital buffers. Management can also disclose whether their existing capital buffers are sufficient to absorb the projected impact of CECL without altering capital plans. Conceivably, management could elect to account for its loans under the fair value option to avoid CECL’s double-counting bias, but this would introduce market volatility to stated earnings, which could prompt capital buffer requirements of its own.

2.      Investor Perception of Credit Risk: Investors’ perception of the credit risk a bank faces affects its market value. If an increase in credit allowance due to CECL causes investors to worry that a bank faces greater credit risk than they previously understood, the bank’s market value will fall on this reassessment. On the other hand, if investors have independently assessed the credit risk borne by an institution, a mere change in accounting treatment will not affect their view. An institution’s true value comes from the cash flows that a perfectly informed investor would expect. Unless CECL changes the kinds of loans an institution makes or the securities it purchases, its true credit risk has not changed, and nothing the accounting statements say can change that.

3.      Actual Changes in Credit Risk: Some banks may react to CECL by shifting their portfolio mix toward shorter-duration or less credit-risky investments, in an effort to mitigate CECL’s impact on their book value. If underwriting unique and risky credits were a core competency of these banks, and they shift toward safer assets with which they have no special advantage, this change could hurt their market and fundamental value.

4.      Volatility: The American Bankers Association (ABA) argues that the inherent inaccuracies of forecasts over long time horizons will increase the volatility of the loss reserve under CECL.[vi] Keefe, Bruyette & Woods (KBW) goes the other way, writing that CECL should reduce the cyclicality of stated earnings.[vii] KBW’s point can loosely be understood by considering that long-term averages are more stable than short-term averages, and short-term averages drive existing loss reserves. Certainly, if up-front CECL estimates are accurate, even major swings in charge-offs can be absorbed without a change in the reserve as long as the pattern of charge-offs evolves as expected. While cash flow volatility would hurt fundamental value, the concern with volatility of stated earnings is that it could exacerbate capital buffers required by regulators.
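KBW’s intuition about long-term averages can be illustrated with a toy simulation. The loss-rate series below is synthetic, and the two- and ten-year windows are stand-ins (our choice, not KBW’s) for reserves keyed to recent versus long-run experience:

```python
# Toy simulation (synthetic data): a reserve keyed to a long-run average
# loss rate moves less than one keyed to recent losses.
import random
from statistics import pstdev

random.seed(7)
# 40 years of synthetic annual charge-off rates around a 1% mean
loss_rates = [max(0.0, random.gauss(0.010, 0.004)) for _ in range(40)]

def rolling_mean(series, window):
    """Trailing average of the last `window` observations at each point."""
    return [sum(series[i - window:i]) / window for i in range(window, len(series) + 1)]

short_view = rolling_mean(loss_rates, 2)    # stand-in for an incurred-loss view
long_view = rolling_mean(loss_rates, 10)    # stand-in for a lifetime-average view

# The long-run average is the steadier of the two series
print(pstdev(short_view) > pstdev(long_view))
```

The same underlying charge-off experience produces a much calmer reserve path when it is averaged over a longer window, which is the loose sense in which accurate lifetime estimates can dampen cyclicality.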

5.      Transparency: All else equal, investors prefer a company whose risks are more fully and clearly disclosed. KBW reasons that the increased transparency required by CECL will have a favorable impact on financial stock prices.[viii]

6.      Comparability Hindered: CECL allows management to choose from a range of modeling techniques and even to choose the macroeconomic assumptions that influence its loss reserve, so long as the forecast is defensible and used firm-wide. Given this flexibility, two identical portfolios could show different loss reserves based on the conservatism or aggressiveness of management. This situation will make peer comparisons impossible unless disclosures are adequate and investors put in the work to interpret them. Management can help investors understand, for example, whether its loss reserve is larger because its economic forecast is more conservative or because its portfolio is riskier.

7.      Operational Costs: Complying with CECL requires data capacity and modeling resources that could increase operational costs. The ABA notes that such costs could be “huge.”[ix] Management can advise stakeholders whether it expects CECL to raise its operational costs materially. If compliance costs are material, they will affect all measures of value to the extent that they cannot be passed on to borrowers. As noted earlier, the fact that all US financial institutions face the shift to CECL together increases the likelihood of their being able to pass costs on to borrowers.

8.      Better Intelligence: Conceivably, the enhancements to data collection and credit modeling required by CECL could improve banks’ ability to price loans and screen credit risks. These effects would increase all three measures of value.


CECL is likely to reduce the book value of most financial institutions. If regulators limit leverage because of lower book equity, or if the operational costs of CECL are material and cannot be passed on to borrowers, then market values and fundamental values will also sag. If banks react to the standard by pulling back from the kinds of loans that have been their core competency, this, too, will hurt fundamental value. On the positive side, the required investment in credit risk modeling offers banks the opportunity to better screen and price their loans.

Bank management can provide disclosures to analysts and investors to help them understand any changes to the bank’s loan profile, fee and interest income, capital structure and operational costs. Additionally, by optimizing the accuracy of its loss forecasts, management can contain the volatility of its CECL estimate and minimize the likelihood of facing further limitations on leverage.

[i] The term “expected loss” can be confusing; it does not necessarily mean that default is likely. If you have a 1% chance of losing $100, then your “expected loss” is 1% × $100 = $1. As long as a loan is riskier than a Treasury, your expected loss is greater than zero.
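The footnote’s arithmetic in two lines:

```python
# Expected loss is the probability-weighted loss, not a prediction
# that default will actually occur.
probability_of_loss = 0.01   # 1% chance of losing the amount at risk
amount_at_risk = 100.0       # $100

expected_loss = probability_of_loss * amount_at_risk
print(expected_loss)  # 1.0 -- one dollar of expected loss on an unlikely default
```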

[ii] FASB Accounting Standards Update 2016-13, p. 237 and p. 235

[iii] By the end of a loan’s life, all interest actually collected and credit losses realized have been reflected in book income, and associated loss reserves are released, so lifetime interest income and credit losses are the same under any standard.
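The lifetime-equivalence point in this footnote can be sketched with a toy three-year loan. The figures and the two provisioning schedules below are hypothetical; the only constraint is the one the footnote states, namely that every schedule ultimately provisions exactly the losses realized, with any excess reserve released:

```python
# Toy illustration of footnote [iii]: provisioning timing changes the path
# of book income, not its lifetime total. All figures are hypothetical.
interest_collected = [8.0, 8.0, 8.0]   # cash interest per year
charge_offs = [0.0, 1.0, 2.0]          # realized credit losses per year

# Two provision schedules: an up-front lifetime estimate (CECL-like) versus
# provisions recognized as losses emerge (incurred-loss-like). Each sums to
# total charge-offs because unused reserves are released at the end.
provisions_cecl = [3.0, 0.0, 0.0]
provisions_incurred = [0.0, 1.0, 2.0]

def lifetime_income(interest, provisions):
    """Cumulative book income over the loan's life."""
    return sum(i - p for i, p in zip(interest, provisions))

print(lifetime_income(interest_collected, provisions_cecl))      # 21.0
print(lifetime_income(interest_collected, provisions_incurred))  # 21.0
```

The yearly income paths differ sharply, but both sum to the same lifetime figure: cash interest collected minus credit losses realized.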

[iv] Joint Statement on the New Accounting Standard on Financial Instruments – Credit Losses.

[v] Modigliani, Franco and Miller, Merton H. (1963) Corporate Income Taxes and the Cost of Capital: A Correction. The American Economic Review, Vol. 53, No. 3, pp. 433-443.

[vi] Gullette, Mike. (2016) FASB’s Current Expected Credit Loss Model for Credit Loss Accounting (CECL). American Bankers Association.

[vii] Kleinhanzl, Brian, et al. FASB is About to Accelerate Loan Loss Recognition for the Financial Industry. Keefe, Bruyette & Woods.

[viii] Kleinhanzl, Brian, et al, p. 1.

[ix] Gullette, Mike. (2016), p. 4.

Managing Model Risk and Model Validation

Over the course of several hundred model validations we have observed a number of recurring themes and challenges that appear to be common to almost every model risk management department. At one time or another, every model risk manager will puzzle over questions around whether an application is a model, whether a full-scope validation is necessary, how to deal with challenges surrounding “black box” third-party vendor models, and how to elicit assistance from model owners. This series of blog posts aims to address these and other related questions with what we’ve learned while helping our clients think through these issues.

As model validators, we frequently find ourselves in the middle of debates between spreadsheet owners and enterprise risk managers over the question of whether a particular computing tool rises to the level of a “model.” To the uninitiated, the semantic question, “Is this spreadsheet a model?” may appear to be largely academic and inconsequential. But its ramifications are significant, and getting the answer right is of critical importance to model owners, to enterprise risk managers, and to regulators.

Part 2: Validating Vendor Models: Special Considerations

Many of the models we validate on behalf of our clients are developed and maintained by third-party vendors. These validations present a number of complexities that are less commonly encountered when validating “home-grown” models.

Notwithstanding these challenges, the OCC’s Supervisory Guidance on Model Risk Management (OCC 2011-12) specifies that “Vendor products should nevertheless be incorporated into a bank’s broader model risk management framework following the same principles as applied to in-house models, although the process may be somewhat modified.”

Part 3: Preparing for Model Validation: Ideas for Model Owners

Though not its intent, model validation can be disruptive to model owners and others seeking to carry out their day-to-day work. We have performed enough model validations over the past decade to have learned how cumbersome the process can be to business unit model owners and others we inconvenience with what at times must feel like an endless barrage of touch-point meetings, documentation requests and other questions relating to modeling inputs, outputs, and procedures.

Part 4: 4 Questions to Ask When Determining Model Scope

Model risk management is a necessary undertaking for which model owners must prepare on a regular basis. Model risk managers frequently struggle to strike an appropriate cost-benefit balance in determining whether a model requires validation, how frequently a model needs to be validated, and how detailed subsequent and interim model validations need to be. The extent to which a model must be validated is a decision that affects many stakeholders in terms of both time and dollars. Everyone has an interest in knowing that models are reliable, but bringing the time and expense of a full model validation to bear on every model, every year is seldom warranted. What are the circumstances under which a limited-scope validation will do and what should that validation look like?

We have identified four considerations that can inform your decision on whether a full-scope model validation is necessary…

Part 5: Performance Testing: Benchmarking vs. Back-Testing

When someone asks you what a model validation is, what is the first thing you think of? If you are like most, you immediately think of performance metrics—those quantitative indicators that tell you not only whether the model is working as intended, but also how it performs over time and compared to others. Performance testing is the core of any model validation and generally consists of the following components:

  • Benchmarking
  • Back-testing
  • Sensitivity Analysis
  • Stress Testing

Sensitivity analysis and stress testing, while critical to any model validation’s performance testing, will be covered by a future article. This post will focus on the relative virtues of benchmarking versus back-testing—seeking to define what each is, when and how each should be used, and how to make the best use of the results of each.

Part 6: Model Input Data Validation – How Much is Enough?

In some respects, the OCC 2011-12/SR 11-7 mandate to verify model inputs could not be any more straightforward: “Process verification … includes verifying that internal and external data inputs continue to be accurate, complete, consistent with model purpose and design, and of the highest quality available.” From a logical perspective, this requirement is unambiguous and non-controversial. After all, the reliability of a model’s outputs cannot be any better than the quality of its inputs.

Preparing for Model Validation: Ideas for Model Owners

Though not its intent, model validation can be disruptive to model owners and others seeking to carry out their day-to-day work. We have performed enough model validations over the past decade to have learned how cumbersome the process can be to business unit model owners and others we inconvenience with what at times must feel like an endless barrage of touch-point meetings, documentation requests and other questions relating to modeling inputs, outputs, and procedures.

We recognize that the only thing these business units did to deserve this inconvenience was to devise or procure a methodology for systematically improving how something gets estimated. In some cases, the business owner of an application tagged for validation may view it simply as a calculator or other tool, and not as a “model.” And in some cases we agree with the business owner. But in every case, the system under review has been designated as a model requiring validation either by an independent risk management department within the institution or (worse) by a regulator, and so, the validation project must be completed.

As with so many things in life, when it comes to model validation preparation, an ounce of prevention goes a long way. Here are some ideas model owners might consider for making their next model validation a little less stressful.

Overall Model Documentation

Among the first questions we ask at the beginning of a model validation is whether the model has been validated before. In reality, however, we can make a fairly reliable guess about the model’s validation history simply by reading the model owner’s documentation. A comprehensive set of documentation that clearly articulates the model’s purpose, its inputs’ sources, how it works, what happens to the outputs and how the outputs are monitored is an almost sure sign that the model in question has been validated multiple times.

In contrast, it’s generally apparent that the model is being validated for the first time when our initial request for documentation yields one or more of the following:

  • An 800-page user guide from the model’s vendor, but no internally developed documentation or procedures
  • Incomplete (or absent) lists of model inputs with little or no discussion of how inputs and assumptions are obtained, verified, or used in the model
  • No discussion of the model’s limitations
  • Perfunctory monitoring procedures, such as, “The outputs are reviewed by an analyst for reasonableness”
  • Vague (or absent) descriptions of the model’s outputs and how they are used
  • Change logs with just one or two entries

No one likes to write model documentation. There never seems to be enough time to write model documentation. Compounding this challenge is the fact that model validations frequently seem to occur at the most inopportune moments for model owners. A bank’s DFAST models, for example, often undergo validation while the business owners who use them are busy preparing the bank’s DFAST submission. This is not the best time to be tweaking documentation and assembling data for validators.

Documentation would ideally be prepared during periods of lower operational stress. Model owners can accomplish this by anticipating requests from model risk management and independently generating documentation for all their models that satisfies the following basic criteria:

  • Identifies the model’s purpose, including its business and functional requirements, and who is responsible for using and maintaining the model
  • Comprehensively lists and justifies the model’s inputs and assumptions
  • Describes the model’s overall theory and approach, i.e., how the model goes about transforming the inputs and assumptions into reliable outputs (including VBA or other computer code if the model was developed in house)
  • Lays out the developmental evidence supporting the model
  • Identifies the limitations of the model
  • Explains how the model is controlled—who can access it, who can change it, what sorts of approvals are required for different types of changes
  • Comprehensively identifies and describes the model’s outputs, how they are used, and how they are tested

Any investment of time beforehand to incorporate the items above into the model’s documentation will pay dividends when the model validation begins. Being able to simply hand this information over to the validators will likely save model owners hours of attending follow-up meetings and fielding requests. Additional suggestions for getting the model’s inputs and outputs in order follow below.

Model Inputs

All of the model’s inputs and assumptions need to be explicitly spelled out, as well as their relevance to the model, their source(s), and any processes used to determine their reliability. Simply emailing an Excel file containing the model and referring the validator to the ‘Inputs’ tab is probably going to result in more meetings, more questions, and more time siphoned out of the model owner’s workday by the validation team.

A useful approach for consolidating inputs and assumptions that might be scattered around different areas of the model involves the creation of a simple table that captures everything a validator is likely to ask about each of the model’s inputs and assumptions.

Systematically capturing all of the model’s inputs and assumptions in this way enables the validators to quickly take inventory of what needs to be tested without subjecting the model owner to a time-consuming battery of questions designed to make sure they haven’t missed anything.

Model Outputs

Being prepared to explain to the validator all the model’s outputs individually and how each is used in reporting and downstream applications greatly facilitates the validation process. Accounting for all the uses of every output becomes more complicated when they are used outside the business unit, including as inputs to another model. At the discretion of the institution’s model risk management group, it may be sufficient to limit this exercise only to uses within the model owner’s purview and to reports provided to management. As with inputs, this can be facilitated by a table.

Outputs that bear directly on financial statements are especially important. Model validators are likely to give these outputs particular scrutiny, and model owners would do well to be prepared to explain not only how such outputs are computed and verified, but also how the audit trails surrounding them are maintained.

To the extent that outputs are subjected to regular benchmarking, back-testing, or sensitivity analyses, these should be gathered as well.

A Series of Small Investments

A model owner might look at these suggestions and conclude that they seem like a lot of work just to get ready for a model validation. We agree. Bear in mind, however, that the model validator is almost certain to ask for these things at some point during the validation, when, chances are, the model owner would rather be doing her real job. Making a series of small investments of time to assemble these items well in advance of the validator’s arrival will not only make the validation more tolerable for model owners but will likely improve the overall modeling process as well.

New Capital Planning Expectations for Large Financial Institutions and What It Means For You

The Federal Reserve Board (FRB) recently released regulatory guidance outlining its capital planning expectations for large financial companies. The guidance addresses many areas of the capital planning process where regulators are looking for continued improvement within large bank holding companies and attempts to clarify differences in the Fed’s expectations based on firm size and complexity. The guidance is effective for the 2016 CCAR cycle.

The Federal Reserve has provided separate guidance for two different categories of large financial institutions:

  1. LISCC Firms[1] and ‘Large and Complex’ firms were provided capital planning guidance under SR 15-18, and
  2. ‘Large and Noncomplex’ firms were provided capital planning guidance under SR 15-19.

SR 15-18 Summary

Specifically, SR 15-18 applies to firms that:

  • Are subject to the LISCC framework,
  • Have total consolidated assets of $250 billion or more, or
  • Have consolidated total on-balance sheet foreign exposure of $10 billion or more.

For the largest and most complex firms, the guidance clarifies expectations that have been previously communicated to firms, including through past Comprehensive Capital Analysis and Review (CCAR) exercises and related supervisory reviews.

SR 15-19 Summary

SR 15-19 applies to ‘Large and Noncomplex’ firms that:

  • Are not otherwise subject to the LISCC framework,
  • Have total consolidated assets between $50 billion and $250 billion, and
  • Have total consolidated on-balance-sheet foreign exposure of less than $10 billion.

Implications of these capital planning guidelines

Both sets of guidelines (SR 15-18 and SR 15-19) lay out the governance, risk management, internal controls, capital policy, scenario design, and projection methodology expectations relating to the capital planning process. They also lay out some important distinctions between the two institution types relating to how models and model risk management are expected to be used.

We summarize some of the key differences between what is required of these two institution types in the table below. 

Current 2017 LISCC Portfolio Firms

According to the Federal Reserve, here are the current LISCC firms:

  • American International Group, Inc.
  • Bank of America Corporation
  • The Bank of New York Mellon Corporation
  • Barclays PLC
  • Citigroup Inc.
  • Credit Suisse Group AG
  • Deutsche Bank AG
  • The Goldman Sachs Group, Inc.
  • JP Morgan Chase & Co.
  • Morgan Stanley
  • Prudential Financial, Inc.
  • State Street Corporation
  • UBS AG
  • Wells Fargo & Company

[1] Large Institution Supervision Coordinating Committee (LISCC) – the Board of Governors of the Federal Reserve has the responsibility for the supervision of systemically important financial institutions, including large bank holding companies, the U.S. operations of certain foreign banking organizations, and nonbank financial companies that are designated by the Financial Stability Oversight Council (FSOC) for supervision by the Board of Governors. A list of LISCC firms can be found at

RDARR: Principles for Effective Risk Data Aggregation and Risk Reporting

Background and Impetus for RDARR

The global financial crisis revealed that many banks had inadequate practices for timely, complete, and accurate aggregation of risk exposures. These limitations impaired their ability to generate reliable information to manage risks, especially during times of economic stress, and resulted in severe consequences for individual banks and the entire financial system.


Responding to this pervasive systemic issue, the Basel Committee on Banking Supervision (BCBS) issued the “Principles for Effective Risk Data Aggregation and Risk Reporting” (RDARR).

Objectives of RDARR

The BCBS RDARR prescribes principles (the Principles) with the objective of strengthening risk data aggregation capabilities and internal risk reporting practices. Implementation of the Principles is expected to enhance risk management and decision-making processes in order to:

  • Enhance infrastructure for reporting key information, particularly that used by the board and senior management to identify, monitor and manage risks;
  • Improve decision-making processes;
  • Enhance the management of information across legal entities, while facilitating a comprehensive assessment of risk exposures at a consolidated level;
  • Reduce the probability and severity of losses resulting from risk management weaknesses;
  • Improve the speed at which information is available and hence decisions can be made; and
  • Improve the organization’s quality of strategic planning and the ability to manage the risk of new products and services.

The Principles of RDARR

Fourteen Principles are structured in four sections:

Overarching governance and infrastructure

1. Governance
2. Architecture/Infrastructure

Risk data aggregation capabilities

3. Data Accuracy and Integrity
4. Completeness
5. Timeliness
6. Adaptability

Risk reporting practices

7. Reports Accuracy
8. Comprehensiveness
9. Clarity and Usefulness
10. Frequency
11. Distribution

Supervisory review, tools and cooperation

12. Review
13. Remediation
14. Cooperation

The BCBS prescribes requirements and practices for each Principle that define compliance.

Scope of RDARR

The Principles initially apply to systemically important banks (SIBs) as designated by the international Financial Stability Board (FSB), which were expected to implement them fully by January 1, 2016.

The BCBS “strongly” suggests that supervisory bodies apply the Principles to a wider range of banks, proportionate to the size, nature, and complexity of these banks’ operations.

Consistent with other recent supervisory pronouncements, we expect these principles to eventually be applied by other regulators.

Progress in Adopting RDARR

The BCBS has conducted multiple self-assessment surveys of SIBs to measure preparedness for compliance with the Principles and identify common challenges, along with potential strategies for compliance.

The survey results indicate many banks continue to encounter difficulties in establishing strong data aggregation governance, architecture and processes, often relying on manual workarounds. Many banks failed to recognize that governance/infrastructure practices are important prerequisites for facilitating compliance with the Principles.

Many banks indicated that they will be unable to comply with at least one Principle by the January 2016 deadline.

Impact of the Principles

This guidance has increased the capabilities banks must maintain for measuring and reporting risks.

The new paradigm for risk data aggregation and risk reporting imposes many new standards, most notably:

  • A bank’s senior management should be fully aware of and understand the limitations that prevent full risk data aggregation.
  • Controls surrounding risk data need to be as robust as those applicable to accounting data.
  • Risk data should be reconciled with source systems, including accounting data where appropriate, to ensure that the risk data is accurate.
  • A bank should strive towards a single authoritative source for risk data per each type of risk.
  • Supervisors expect banks to document and explain all of their risk data aggregation processes whether automated or manual.
  • Supervisors expect banks to consider accuracy requirements analogous to accounting materiality.

Due to the wide and comprehensive scope of the RDARR Principles, many SIBs have struggled to identify and implement the enhancements needed to facilitate full compliance.

Examples of RiskSpan RDARR Assistance Include:

  • Interpret Principles and Requirements – Interpret the Principles and their application to your existing risk, data, risk reporting, IT infrastructure, data architecture, and quality.
  • Assess Current Capabilities – Assess your existing risk data, risk reporting, IT infrastructure, data architecture, and data quality to identify gaps in the capabilities prescribed by the Principles.
  • Develop and Implement Remediation – Develop and implement remediation plans to eliminate gaps and facilitate compliance.
  • Develop and Implement Standard Risk Taxonomies – Develop standard risk taxonomies to meet the needs of risk reporting and regulatory compliance.
  • Develop or Enhance Risk Reporting – Develop automated risk reporting dashboards for market, credit, and operational risk that are supported by reliable source data.
  • Document and Assess End State RDARR – Develop good documentation of the end state to demonstrate compliance to regulators.

RiskSpan RDARR Advisory Services

Whether or not your bank is designated as a SIB, recent trends indicate that your regulator may soon expect you to apply the Principles. You will need to proactively enhance your RDARR.

The Basel Committee on Banking Supervision Principles for Effective Risk Data Aggregation and Risk Reporting guidance has increased the burden on you for measuring and reporting risks.  This new paradigm for risk data aggregation and risk reporting imposes many new standards.

RiskSpan’s RDARR Advisory Services team has decades of finance, accounting, data, and technology expertise to help banks meet these increasing supervisory expectations.

About The Author

Steve Sloan, Director, CPA, CIA, CISA, CIDA, has extensive experience in the professional practices of risk management and internal audit, collaborating with management and audit committees to design and implement the infrastructures to obtain the required assurances over risk and controls.

He prescribes a disciplined approach, aligning stakeholders’ expectations with leading practices, to maximize the return on investment in risk functions. Steve holds a Bachelor of Science from Pennsylvania State University and has multiple certifications.

Vendor Model Validation

Many of the models we validate on behalf of our clients are developed and maintained by third-party vendors. These validations present a number of complexities that are less commonly encountered when validating “home-grown” models. These often include:

  1. Inability to interview the model developer
  2. Inability to review the model code
  3. Inadequate documentation
  4. Lack of developmental evidence and data sets
  5. Lack of transparency into the impact of custom settings

Notwithstanding these challenges, the OCC’s Supervisory Guidance on Model Risk Management (OCC 2011-12)1 specifies that, “Vendor products should nevertheless be incorporated into a bank’s broader model risk management framework following the same principles as applied to in-house models, although the process may be somewhat modified.”

The extent of these modifications depends on the complexity of the model and the cooperation afforded by the model’s vendor. We have found the following general principles and practices to be useful.

Model Validation for Vendor Models

Vendor Documentation is Not a Substitute for Model Documentation

Documentation provided by model vendors typically includes user guides and other materials designed to help users navigate applications and make sense of outputs. These documents are written for a diverse group of model users and are not designed to identify and address particular model capabilities specific to the purpose and portfolio of an individual bank. A bank’s model documentation package should delve into its specific implementation of the model, as well as the following:

  • Discussion of the model’s purpose and specific application, including business and functional requirements achieved by the model
  • Discussion of model theory and approach, including algorithms, calculations, formulas, functions and programming
  • Description of the model’s structure
  • Identification of model limitations and weaknesses
  • Comprehensive list of model inputs and assumptions, including their sources
  • Comprehensive list of outputs and reports and how they are used, including downstream systems that rely on them
  • Description of testing (benchmarking and back-testing)

Because documentation provided by the vendor is likely to include very few, if any, of these items, it falls to the model owner (at the bank) to generate this documentation. While some of these items (specific algorithms, calculations, formulas, and programming, for example) are likely to be deemed proprietary and will not be disclosed by the vendor, most of these components are obtainable and should be requested and documented.

Model documentation should also clearly lay out all model settings (e.g., knobs) and justification for the use of (or departure from) vendor default settings.

Model Validation Testing Results Should Be Requested of the Vendor

OCC 2011-12 states that “Banks should expect vendors to conduct ongoing performance monitoring and outcomes analysis, with disclosure to their clients, and to make appropriate modifications and updates over time.” Many vendors publish the results of their own internal testing of the model. For example, a prepayment model vendor is likely to include back-testing results of the model’s forecasts for certain loan cohorts against actual, observed prepayments. An automated valuation model (AVM) vendor might publish the results of testing comparing the property values it generates against sales data. If a model’s vendor does not publish this information, model validators should request it and document the response in the model validation report. Where available, this information should be obtained and incorporated into the model validation process, along with a discussion of its applicability to data the bank is modeling. Model validators should attempt to replicate the results of these studies, where feasible, and use them to enhance their own independent benchmarking and back-testing activities.

Developmental Evidence Should Be Requested of the Vendor

OCC 2011-12 directs banks to “require the vendor to provide developmental evidence explaining the product components, design, and intended use.” This should be incorporated into the bank’s model documentation. Where feasible, model validators should also ask model vendors to provide information about data sets that were used to develop and test the model.

Contingency Plans Should Be Maintained

OCC 2011-12 cites the importance of a bank’s having “as much knowledge in-house as possible, in case the vendor or the bank terminates the contract for any reason, or if the vendor is no longer in business. Banks should have contingency plans for instances when the vendor model is no longer available or cannot be supported by the vendor.” For simple applications whose inner workings are well understood and easily replicable, a contingency plan may be as simple as a Microsoft Excel workbook. This requirement can pose a significant challenge, however, for banks that purchase off-the-shelf asset-liability and market risk models and do not have the in-house expertise to quickly and adequately replicate those models’ complex computations. Situations such as this argue for the implementation of reliable challenger models, which not only assist in meeting benchmarking requirements but can also function as a contingency backup.

Consult the Model Risk Management Group When Procuring Any Application That Might Be Classified as a “Model”

In a perfect world, model validation considerations would be contemplated as part of the procurement process. An agreement to provide developmental evidence, testing results, and cooperation with future model validation efforts would ideally figure into the negotiations before the purchase of any application is finalized. Unfortunately, our experience has shown that banks often acquire what they think of as a simple third-party application, only to be informed after the fact, by either a regulator or the model risk management group, that they have in fact purchased a model requiring validation. A model vendor, particularly one not inclined to think of its product as a “model,” may not always be as responsive to requests for development and testing data after the sale if those items have not been requested as a condition of the sale. It is, therefore, a prudent practice for procurement departments to have open lines of communication with model risk management groups so that the right questions can be asked and requirements established prior to application acquisition.

[1] See also: Federal Reserve Board of Governors Guidance on Model Risk Management (SR 11-7)

Model Validation: Is This Spreadsheet a Model?

As model validators, we frequently find ourselves in the middle of debates between spreadsheet owners and enterprise risk managers over the question of whether a particular computing tool rises to the level of a “model.” To the uninitiated, the semantic question, “Is this spreadsheet a model?” may appear to be largely academic and inconsequential. But its ramifications are significant, and getting the answer right is of critical importance to model owners, to enterprise risk managers, and to regulators.

Stakeholders of Model Validation

In the most important respects, the incentives of these stakeholder groups are aligned. Everybody has an interest in knowing that the spreadsheet in question is functioning as it should and producing accurate and meaningful outputs. Appropriate steps should be taken to ensure that every computing tool does this, regardless of whether it is ultimately deemed a model. But classifying something as a model carries with it important consequences related to cost and productivity, as well as overall model risk management.

It is here that incentives begin to diverge. Owners and users of spreadsheets, in particular, are generally inclined to classify them as simple applications or end-user computing (EUC) tools whose reliability can (and ought to) be ascertained using testing measures that do not rise to the level of formal model validation procedures required by regulators.[1] These formal procedures can be both expensive for the institution and onerous for the model owner. Models require meticulous documentation of their approach, economic and financial theory, and code. Painstaking statistical analysis is frequently required to generate the necessary developmental evidence, and further cost is then incurred to validate all of it.

Enterprise risk managers and regulators, who do not necessarily feel these added costs and burdens, may be inclined to err on the side of classifying spreadsheets as models “just to be on the safe side.” But incurring unnecessary costs is not a prudent course of action for a financial institution (or any institution). And producing more model validation reports than necessary can have other unintended, negative consequences. Model validations pull model owners away from their everyday work, adversely affecting productivity and, sometimes, quality of work. Virtually every model validation report identifies issues that must be reviewed and addressed by management. Too many unnecessary reports containing findings that are comparatively unimportant can bury enterprise risk managers and distract them from the most urgent findings.

Definition of a Model

So what, then, are the most important considerations in determining which spreadsheets are in fact models that should be subject to formal validation procedures? OCC and FRB guidance on model risk management defines a model as follows:[2]

A quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.

The same guidance refers to models as having three components:

  1. An information input component, which delivers assumptions and data to the model
  2. A processing component, which transforms inputs into estimates
  3. A reporting component, which translates the estimates into useful business information

This definition and guidance leave managers with some latitude. Financial institutions employ many applications that apply mathematical concepts to defined inputs in order to generate outputs. But the existence of inputs, outputs, and mathematical concepts alone does not necessarily justify classifying a spreadsheet as a model.

Note that the regulatory definition of a model includes the concept of quantitative estimates. The term quantitative estimate implies a level of uncertainty about the outputs. If an application is generating outputs about which there is little or no uncertainty, then one can argue the output is not a quantitative estimate but, rather, simply a defined arithmetic result. While quantitative estimates typically result from arithmetic processes, not every defined arithmetic result is a quantitative estimate.

For example, a spreadsheet that sums all the known balances of ten bank accounts as of a given date, even if it is supplied by automated feeds and performs the summations in a complete lights-out process, likely would not rise to the level of a model requiring validation because it is performing a simple arithmetic function; it is not generating a quantitative estimate.[3]

In contrast, a spreadsheet that projects what the sum of the same ten bank balances will be as of a given future date (based on assumptions about interest rates, expected deposits, and decay rates, for example) generates quantitative estimates and would, therefore, qualify as a model requiring validation. Management and regulators would want to have comfort that the assumptions used by this spreadsheet model are reasonable and that they are being applied and computed appropriately.
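The difference between the two spreadsheets can be sketched in a few lines of Python; all balances and rates below are hypothetical:

```python
# Defined arithmetic result: summing known balances involves no uncertainty,
# so this is simple arithmetic, not a quantitative estimate.
def total_balance(balances):
    return sum(balances)

# Quantitative estimate: projecting the same balances forward depends on
# assumed (hypothetical) annual interest and deposit decay rates, so the
# output is uncertain and the spreadsheet behaves like a model.
def projected_balance(balances, months, annual_rate=0.02, annual_decay=0.05):
    monthly_factor = ((1 + annual_rate) * (1 - annual_decay)) ** (1 / 12)
    return total_balance(balances) * monthly_factor ** months

balances = [1200.0, 950.0, 3000.0]
print(total_balance(balances))          # exact, verifiable result
print(projected_balance(balances, 12))  # changes whenever the assumptions change
```

Ten builders of the first function must agree to the penny; ten builders of the second could defensibly disagree, which is exactly the property that makes the second a model.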

Is this Spreadsheet a Model?

We have found the following questions to be particularly enlightening in helping our clients determine whether a spreadsheet should be classified as 1) a model that transforms inputs into quantitative estimates or 2) a non-model spreadsheet that generates defined arithmetic results.

Question 1: Does the Spreadsheet Produce a Demonstrably “Right” Answer?

If so, the spreadsheet is probably not a model. A related question is whether benchmarking would be expected to yield results that are merely comparable, as opposed to exactly the same. If spreadsheets designed by ten different people can reasonably be expected to produce precisely the same result (because there is only one generally accepted way of calculating it), then the result probably does not qualify as a quantitative estimate and the spreadsheet probably should not be classified as a model.

Example 1 (Non-Model): Mortgage Amortization Calculator: Ten different applications would be expected to transform the same loan amount, interest rate, and term information into precisely the same amortization table. A spreadsheet that differed from this expectation would be considered “wrong.” We would not consider this output to be a quantitative estimate and would be inclined to classify such a spreadsheet as something other than a model.

Example 2 (Model): Spreadsheet projecting the expected UPB of a mortgage portfolio in 12 months: Such a spreadsheet would likely need to apply and incorporate prepayment and default assumptions. Different spreadsheets could compute and apply these assumptions differently, without any one necessarily being recognized as “wrong.” We would consider the resulting UPB projections to be quantitative estimates and would be likely to classify such a spreadsheet as a model.
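A minimal Python sketch makes the contrast between the two examples concrete. The loan terms, CPR, and CDR values below are hypothetical, and the UPB projection ignores scheduled amortization for brevity:

```python
# Example 1 (non-model): the level payment on a fixed-rate loan has a single
# generally accepted closed form, so every correct implementation must agree.
def monthly_payment(principal, annual_rate, term_months):
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -term_months)

# Example 2 (model): projecting portfolio UPB requires prepayment (CPR) and
# default (CDR) assumptions; the defaults below are hypothetical, and other
# reasonable choices would yield different, equally defensible estimates.
def project_upb(upb, months, cpr=0.08, cdr=0.01):
    smm = 1 - (1 - cpr) ** (1 / 12)  # single monthly mortality (prepayment)
    mdr = 1 - (1 - cdr) ** (1 / 12)  # monthly default rate
    return upb * ((1 - smm) * (1 - mdr)) ** months

print(round(monthly_payment(200_000, 0.06, 360), 2))  # 1199.1 -- the one right answer
print(round(project_upb(1_000_000, 12), 2))           # one estimate among many defensible ones
```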

Note that the spreadsheets in both examples tell their users what a loan balance will be in the future. But only the second example layers economic assumptions on top of its basic arithmetic calculations. Economic assumptions can be subjected to verification after the fact, which relates to our second question:

Question 2: Can the Spreadsheet’s Output Be Back-Tested?

Another way of stating this question would be, “Is back-testing required to gauge the accuracy of the spreadsheet’s outputs?” An affirmative answer is a fairly unmistakable indicator of a forward-looking quantitative estimate. A spreadsheet that generates forward-looking estimates is almost certainly a model and should be subjected to formal model validation.

Back-testing would not be of any particular value in our first (non-model) example, above, as the spreadsheet is simply calculating a schedule. In our second (model) example, however, back-testing would be an invaluable tool for judging the reliability of the prepayment and default assumptions driving the balance projection.
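In code, a back-test of the second example reduces to comparing prior projections against observed outcomes; the figures below are invented for illustration:

```python
# Hypothetical month-end UPB projections versus observed balances.
projected = [980_000, 955_000, 931_000]
actual = [978_500, 949_800, 926_300]

# Percentage error of each projection; a persistent sign or a widening drift
# suggests the prepayment/default assumptions need recalibration.
errors = [(a - p) / p for p, a in zip(projected, actual)]
mean_error = sum(errors) / len(errors)
print(f"mean projection error: {mean_error:.2%}")
```

No comparable exercise exists for the amortization calculator: its schedule is definitionally correct, so there is nothing for observed data to confirm or refute.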

Question 3: Is the Spreadsheet Simply Applying a Defined Set of Business Rules?

Spreadsheets are sometimes used to automate the application of defined business rules in order to arrive at a prescribed course of action. This question is a corollary to the first question about whether the spreadsheet produces output that is, by definition, “correct.”

Examples of business-rule calculators are spreadsheets that determine a borrower’s eligibility for a particular loan product or loss mitigation program. Such spreadsheets are also used to determine how much of a haircut to apply to various collateral types based on defined rules.

These spreadsheets do not generate quantitative estimates and we would not consider them models subject to formal regulatory validation.
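A business-rule calculator of the kind described above might look like the following sketch; the FICO, LTV, and DTI thresholds are hypothetical, not drawn from any actual program:

```python
# Business-rule calculator: the decision is fully determined by the rules, so
# the output is a defined result that is either right or wrong, not an estimate.
def eligible_for_program(fico, ltv, dti):
    return fico >= 620 and ltv <= 0.95 and dti <= 0.43

print(eligible_for_program(fico=680, ltv=0.90, dti=0.40))  # True
print(eligible_for_program(fico=600, ltv=0.90, dti=0.40))  # False
```

Verifying such a tool is a matter of checking the coded thresholds against the written policy, which is an audit task rather than a model validation.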

Should I Validate This Spreadsheet?

All spreadsheets that perform calculations should be subject to review. Any spreadsheet that produces incorrect or otherwise unreliable outputs should not be used until its errors are corrected. Formal model validation procedures, however, should be reserved for spreadsheets that meet certain criteria. Subjecting non-model spreadsheets to model validation unnecessarily drives up costs and dilutes the findings of bona fide model validations by cluttering enterprise risk management’s radar with an unwieldy number of formal issues requiring tracking and resolution.

Spreadsheets should be classified as models (and validated as such) when they produce forward-looking estimates that can be back-tested. This excludes simple calculators that do not rely on economic assumptions, as well as spreadsheets that merely apply business rules to produce outputs that can be definitively identified before the fact as “right” or “wrong.”

We believe that the systematic application of these principles will alleviate much of the tension between spreadsheet owners, enterprise risk managers, and regulators as they work together to identify those spreadsheets that should be subject to formal model validation.

[1] In the United States, most model validations are governed by one of the following sets of guidelines: 1) OCC 2011-12 (institutions regulated by the OCC), 2) FRB SR 11-7 (institutions regulated by the Federal Reserve), and 3) FHFA 2013-07 (Fannie Mae, Freddie Mac, and the Federal Home Loan Banks). These documents have much in common, and the OCC and FRB guidelines are identical to one another.

[2] See footnote 1.

[3] Management would nevertheless want to obtain assurances that such an application was functioning correctly. This, however, can be achieved via means less intrusive than a formal model validation process, such as conventional auditing, SOX reviews, or EUC quality gates.
