Choosing a CECL Methodology
CECL presents institutions with a vast array of loss estimation methodologies, and it can seem a daunting challenge to winnow down the list of possible methods. Institutions must weigh competing concerns – including soundness and auditability, cost and feasibility, and the value of model reusability – and must convince not only themselves but also external stakeholders that their methodology choices are reasonable, often on a segment-by-segment basis, as methodology can vary by segment. It benefits banks, however, to narrow the field of CECL methodology choices soon so that they can finalize data preparation and begin parallel testing (generating CECL results alongside incurred-loss allowance estimates). Parallel testing generates advance signals of CECL impact and may itself play a role in the final choice of allowance methodology. In this post, we provide an overview of some of the most common loss estimation methodologies that banks and credit unions are considering for CECL, and outline the requirements, advantages, and challenges of each.
Methods to Estimate Lifetime Losses
The CECL standard explicitly mentions five loss estimation methodologies, and these are the methodologies most commonly considered by practitioners. Different practitioners define them differently. Additionally, many sound approaches combine elements of each method. For this analysis, we will discuss them as separate methods, and use the definitions that most institutions have in mind when referring to them:
- Vintage,
- Loss Rate,
- PDxLGD,
- Roll Rate, and
- Discounted Cash Flow (DCF).
While CECL allows the use of other methods—for example, for estimating losses on individual collateral-dependent loans—these five methodologies are the most applicable to the largest subset of assets and institutions. For most loans, the allowance estimation process entails grouping loans into segments, and for each segment, choosing and applying one of the methodologies above. A common theme in FASB’s language regarding CECL methods is flexibility: rather than prescribing a formula, FASB expects banks to consider historical patterns and the macroeconomic and credit policy drivers behind them, and then to extrapolate based on those patterns as well as each institution’s macroeconomic outlook. The discussion that follows demonstrates some of this flexibility within each methodology but focuses on the approach chosen by RiskSpan based on our view of CECL and our industry experience. We will first outline the basics of each methodology, followed by their data requirements, and end with the advantages and challenges of each approach.
Vintage Method
Using the Vintage method, historical losses are tabulated by vintage and by loan age, as a percentage of origination balances by vintage year. In the example below, known historical values appear in the white cells, and forecasted values appear in shaded cells. We will refer to the entire shaded region as the “forecast triangle” and the cells within the forecast triangle as “forecast cells.”
A simple way to populate the forecast cells is with the average of the known values from the same column. In other words, we calculate the average marginal loss rate for loans of each age and extrapolate that forward. The limitation of this approach is that it does not differentiate loss forecasts based on the bank’s macroeconomic outlook, which is a core requirement of CECL, so a bank using this method will need to incorporate its macroeconomic outlook via management adjustments and qualitative factors (Q-factors).
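To make the arithmetic concrete, here is a minimal sketch that fills a hypothetical forecast triangle with column averages (all values are invented for illustration):

```python
import numpy as np

# Hypothetical marginal loss rates (as % of origination balance) by vintage
# (rows) and loan age in years (columns). NaN marks forecast cells: newer
# vintages have fewer observed ages, forming the familiar triangle.
triangle = np.array([
    [0.10, 0.30, 0.25, 0.15, 0.05],          # oldest vintage: fully observed
    [0.12, 0.35, 0.28, 0.18, np.nan],
    [0.09, 0.28, 0.22, np.nan, np.nan],
    [0.11, 0.32, np.nan, np.nan, np.nan],
    [0.10, np.nan, np.nan, np.nan, np.nan],  # newest vintage: one observed age
])

# Fill each forecast cell with the average of the known values in its column,
# i.e., the average marginal loss rate observed for loans of that age.
col_means = np.nanmean(triangle, axis=0)
filled = np.where(np.isnan(triangle), col_means, triangle)
print(filled)
```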
As an alternative, RiskSpan has developed an approach to forecast the loss triangle using statistical regression. We develop a regression model that estimates the historical loss rates in the vintage matrix as a function of loan age, a credit indicator, and a macroeconomic variable, and then apply that regression equation, along with a forecast for the macroeconomic variable (and a mean-reversion process), to populate the forecast triangle. The forecast cells can still be adjusted by management as desired, and/or Q-factors can be used. We caution, however, that management should take care not to double-count the influence of macroeconomics on allowance estimates (i.e., once via models, and again via Q-factors).
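Here is a minimal, purely illustrative sketch of the regression idea; the data are invented, and a production model would involve more careful variable selection and a mean-reversion path for the macro forecast:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical history: one row per (vintage, age) cell, with the credit mix
# of the vintage and the unemployment rate observed at that age.
hist = pd.DataFrame({
    "loss_rate":    [0.10, 0.30, 0.25, 0.12, 0.35, 0.28, 0.09, 0.28, 0.11],
    "loan_age":     [1, 2, 3, 1, 2, 3, 1, 2, 1],
    "avg_fico":     [720, 720, 720, 700, 700, 700, 735, 735, 710],
    "unemployment": [5.0, 5.5, 6.0, 5.5, 6.0, 6.5, 6.0, 6.5, 6.5],
})

# Estimate loss rate as a function of loan age, a credit indicator, and a
# macroeconomic variable.
model = smf.ols("loss_rate ~ loan_age + avg_fico + unemployment", data=hist).fit()

# Populate one forecast cell: age-4 performance of a 720-FICO vintage under
# management's unemployment outlook for that period.
cell = pd.DataFrame({"loan_age": [4], "avg_fico": [720], "unemployment": [5.8]})
print(model.predict(cell))
```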
Once the results of the regression are ready and adjustments are applied where needed, the final allowance can be derived as follows:
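In symbols (our notation), the calculation applies each vintage’s forecasted remaining marginal loss rates to its origination balance:

$$\text{Allowance} = \sum_{v \,\in\, \text{vintages}} \left( \text{OriginationBalance}_v \times \sum_{a \,\in\, \text{forecast cells of } v} \widehat{\text{LossRate}}_{v,a} \right)$$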
Loss Rate Method
Using the Loss Rate method, the average lifetime loss rate is calculated for historical static pools within a segment. This average lifetime loss rate is used as the basis to predict the lifetime loss rate of the current static pool—that is, the loans on the reporting-date balance sheet.
In this context, a static pool refers to a group of loans that were on the balance sheet as of a particular date, regardless of when they were originated. For example, within an institution’s owner-occupied commercial real estate portfolio, the 12/31/06 static pool would refer to all such loans that were on the institution’s balance sheet as of December 31, 2006. We would measure the lifetime losses of such a static pool beginning on the static pool date (December 31, 2006, in this example) and express those losses as a percentage of the balance that existed on the static pool date. This premise is consistent with what CECL asks us to do, i.e., estimate all future credit losses on the loans on the reporting-date balance sheet.
A historical static pool is fully aged if all loans that made up the pool are either paid in full or charged off, where payments in full include renewals that satisfy the original contract. We should be wary of including partially aged static pools in the development of average lifetime loss estimates, because the cumulative loss rates of partially aged pools are life-to-date loss rates rather than complete lifetime loss rates and inherently understate the lifetime loss rate that CECL requires.
To generate the most complete picture of historical losses, RiskSpan constructs multiple overlapping static pools within the historical dataset of a given segment and calculates the average of the lifetime loss rates of all fully aged static pools. This provides an average lifetime loss rate over a business cycle as the soundest basis for a long-term forecast. This technique also allows, but does not require, the use of statistical techniques to estimate lifetime loss rate as a function of the credit mix of a static pool.
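The sketch below illustrates the construction with invented loan-level data: each snapshot date defines a static pool, the pool’s lifetime losses are the chargeoffs observed after that date on its loans, and the lifetime loss rates of the pools (assumed fully aged here) are averaged:

```python
import pandas as pd

# Hypothetical data: balance snapshots (one per static pool date) and the
# dated chargeoffs observed on each loan over its remaining life.
snapshots = pd.DataFrame({
    "loan_id":   [1, 2, 1, 2, 3],
    "pool_date": pd.to_datetime(["2006-12-31", "2006-12-31",
                                 "2007-12-31", "2007-12-31", "2007-12-31"]),
    "balance":   [100_000, 50_000, 95_000, 48_000, 75_000],
})
chargeoffs = pd.DataFrame({
    "loan_id":        [2, 3],
    "chargeoff_date": pd.to_datetime(["2009-06-30", "2010-03-31"]),
    "amount":         [4_000.0, 1_500.0],
})

def lifetime_loss_rate(pool_date):
    """Chargeoffs after pool_date on the pool's loans, as a % of pool balance."""
    pool = snapshots[snapshots["pool_date"] == pool_date]
    losses = chargeoffs[
        chargeoffs["loan_id"].isin(pool["loan_id"])
        & (chargeoffs["chargeoff_date"] > pool_date)
    ]["amount"].sum()
    return losses / pool["balance"].sum()

# Overlapping static pools: average the lifetime loss rates of fully aged pools.
rates = [lifetime_loss_rate(d) for d in snapshots["pool_date"].unique()]
print(sum(rates) / len(rates))
```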
After the average lifetime loss rate has been determined, we can incorporate management’s view of how the forward-looking environment will differ from the lookback period over which the lifetime loss rates were calculated, via Q-Factors.
The final allowance can be derived as follows:
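In symbols (our notation, treating Q-factors as an adjustment to the rate):

$$\text{Allowance} = \text{Balance}_{\text{reporting date}} \times \left( \overline{\text{LifetimeLossRate}} + \text{Q-factor adjustment} \right)$$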
PDxLGD Method
Methods ranging from very simple to very sophisticated go by the name “PD×LGD.” At the most sophisticated end of the spectrum are models that calculate loan-by-loan, month-by-month, macro-conditioned probabilities of default and corresponding loss given default estimates. Such estimates can be used in a discounted cash flow context. These estimates can also be used outside of a cash flow context; we can summarize these monthly estimates into a cumulative default probability and corresponding exposure-at-default and loss-given-default estimates, which yield a single lifetime loss rate estimate. At the simpler end of the spectrum are calculations of the lifetime default rates and corresponding loss given default rates of static pools (not marginal monthly or annual default rates). This simpler calculation is the method that most institutions have in mind when referring to “PD×LGD methods,” so it is the definition we will use here.
Using this PDxLGD method, the loss rate is calculated based on the same static pool concept as that of the Loss Rate method. As with the Loss Rate method, we can use the default rates and loss given default rates of different static pools to quantify the relationship between those rates and the credit mix of the segment, and to use that relationship going forward based on the credit mix of today’s portfolio. However, under PDxLGD, the loss rate is a function of two components: the lifetime default rate (PD), and the loss given default (LGD). The final allowance can be derived as follows:
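In symbols (our notation), with the segment’s Expected Loss Rate decomposed into its two components:

$$\text{Allowance} = \text{Balance}_{\text{reporting date}} \times \text{Lifetime PD} \times \text{LGD}$$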
Because the PDxLGD and Loss Rate methods derive the Expected Loss Rate for the segment using different but related approaches, an important quality control is to verify that the final calculated rates are equal under both methodologies and to investigate the cause of any discrepancies.
Roll Rate Method
Using the Roll Rate method, ultimate losses are predicted based on historical roll rates and the historical loss given default estimate. Roll rates are either (a) the frequency with which loans transition from one delinquency status to another, or (b) the frequency with which loans “migrate” or “transition” from one risk grade to another. While the former is preferred due to its transparency and objectivity, for institutions with established risk grades, the latter is an appropriate metric.
Under this method, management can apply adjustments for macroeconomic and other factors at the individual roll rate level, as well as on-top adjustments as needed. Roll rate matrices can include prepayment as a possible transition, thereby incorporating prepayment probabilities. Roll rates can be used in a cash flow engine that incorporates contractual loan features and generates probabilistic (expected) cash flows, or outside of a cash flow engine to generate expected chargeoffs of amortized cost. Finally, it is possible to use statistical regression techniques to express roll rates as a function of macroeconomic variables, and thus to condition future roll rates on macroeconomic expectations.
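To illustrate the roll-forward mechanics, here is a minimal sketch with an invented monthly transition matrix; prepaid and charged-off balances are modeled as absorbing states, and the historical LGD is an assumed placeholder:

```python
import numpy as np

# Hypothetical monthly roll-rate matrix over delinquency statuses
# (rows: from-status, columns: to-status). "Prepaid" and "Charged off"
# are absorbing states, so prepayment is built into the transitions.
states = ["Current", "30-89 DPD", "90+ DPD", "Prepaid", "Charged off"]
roll = np.array([
    [0.960, 0.020, 0.000, 0.020, 0.000],
    [0.300, 0.550, 0.130, 0.010, 0.010],
    [0.050, 0.100, 0.700, 0.000, 0.150],
    [0.000, 0.000, 0.000, 1.000, 0.000],
    [0.000, 0.000, 0.000, 0.000, 1.000],
])

# Balance distribution of today's portfolio across statuses.
balances = np.array([9_000_000, 600_000, 400_000, 0, 0])

# Roll the portfolio forward until effectively resolved; the lifetime
# defaulted balance is whatever accumulates in the absorbing state.
dist = balances.astype(float)
for _ in range(360):          # 360 monthly transitions (30 years)
    dist = dist @ roll
lifetime_defaults = dist[states.index("Charged off")]

lgd = 0.40                    # assumed historical loss given default
allowance = lifetime_defaults * lgd
print(round(allowance))
```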
The final allowance can be derived as follows:
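In symbols (our notation), when roll rates are applied outside of a cash flow engine:

$$\text{Allowance} = \text{Projected Lifetime Defaulted Balance} \times \text{LGD}$$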
Discounted Cash Flow (DCF) Method
Discounting cash flows is a way of translating expected future cash flows into a present value. DCF is a loan-level method (even for loans grouped into segments), and thus requires loan-by-loan, month-by-month forecasts of prepayment, default, and loss given default to translate contractual cash flows into prepay-, default-, and loss-given-default-adjusted cash flows. Although such loan-level, monthly forecasts could be derived using any method, most institutions have statistical forecasting techniques in mind when thinking about a DCF approach. Thus, even though statistical forecasting techniques and cash flow discounting are not inextricably linked, we will treat them as a pair here.
The most complex, and the most robust, of the five methodologies, DCF (paired with statistical forecasting techniques) is generally used by larger institutions that have the capacity and the need for the greatest amount of insight and control. Critically, DCF capabilities give institutions the ability (when substituting the effective interest rate for a market-observed discount rate) to generate fair value estimates that can serve a host of accounting and strategic purposes.
To estimate future cash flows, RiskSpan uses statistical models, which comprise:
- Prepayment sub-models
- Probability-of-default or roll rate sub-models
- Loss-given-default sub-models
Allowance is then determined based on the expected cash flows, which, similarly to the Roll Rate method, are generated based on the rates predicted by the statistical models, contractual loan terms, and the loan status at the reporting date.
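Here is a minimal sketch of the final discounting step, assuming the monthly expected cash flows have already been produced by the sub-models above and using invented values throughout. Under a DCF approach, the allowance is the amount by which the amortized cost basis exceeds the present value of expected cash flows discounted at the effective interest rate:

```python
# Minimal sketch of the final DCF step. Assumes the prepay-, default-, and
# LGD-adjusted monthly cash flows were already produced by the sub-models.
amortized_cost = 1_000_000.00
annual_eir = 0.06                       # effective interest rate (assumed)
monthly_eir = annual_eir / 12

# Hypothetical expected (probabilistic) cash flows for months 1..N.
expected_cash_flows = [28_500.0] * 36   # placeholder path

present_value = sum(
    cf / (1 + monthly_eir) ** t
    for t, cf in enumerate(expected_cash_flows, start=1)
)

# Allowance is the shortfall of the present value of expected cash flows
# below the amortized cost basis.
allowance = amortized_cost - present_value
print(round(allowance, 2))
```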
Some argue that an advantage of the discounted cash flow approach is lower Day 1 losses. In fact, whether DCF or non-DCF methods produce a lower Day 1 allowance, all else equal, depends upon the length of the assumed liquidation timeline, the discount rate, and the recovery rate. This underdiscussed topic merits its own treatment, which we will provide in a future post.
The statistical models often used with DCF methods use historical data to express the likelihood of default or prepayment as a mathematical function of loan-level credit factors and macroeconomic variables.
For example, the probability of transitioning from “Current” to “Delinquent” status at month t can be calculated as a function of the loan’s age at month t multiplied by a sensitivity factor β1 derived from the historical dataset, the loan’s FICO score multiplied by a sensitivity factor β2, and the projected unemployment rate at month t (based on management’s macroeconomic assumptions) multiplied by a sensitivity factor β3.
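Mathematically, one representative form (our notation, assuming a logistic link function $F$ and adding an intercept $\beta_0$ for concreteness) is:

$$P\big(\text{Current} \rightarrow \text{Delinquent}\big)_t = F\big(\beta_0 + \beta_1 \cdot \text{LoanAge}_t + \beta_2 \cdot \text{FICO} + \beta_3 \cdot \text{Unemployment}_t\big), \qquad F(x) = \frac{1}{1 + e^{-x}}$$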
Because macroeconomic and loan-level credit factors are explicitly and transparently incorporated into the forecast, such statistical techniques reduce reliance on Q-Factors. This is one of the reasons these methods are considered the most scientifically rigorous.
Historical Data Requirements
The table below summarizes the historical data requirements for each methodology, including the dataset type, the minimum required data fields, and the timespan.
In general, having the most robust data allows the most options; for institutions with moderately complex historical datasets, the Loss Rate, PDxLGD, and Vintage methods are excellent options. Even with limited historical data, the Vintage method can produce a sound allowance under CECL.
While the data requirements may be daunting, it is important to keep in mind that proxy data can be used in place of, or alongside, institutional historical data, and RiskSpan can help identify and fill your data needs. Some of the proxy data options are summarized below:
Advantages and Challenges of CECL Methodologies
Each methodology has advantages, and each carries its own set of challenges. While the Vintage method, for example, is forgiving of limited historical data, it also provides limited insight and control for further analysis. On the other hand, the DCF method provides significant insight and control, as well as early model performance indicators, but requires a robust dataset and advanced statistical expertise.
We have summarized some of the advantages and challenges for each method below.
In addition to the considerations summarized in the table, it is important to consider audit and regulatory requirements. Generally, institutions facing higher audit and regulatory scrutiny will be steered toward more complex methods. Also, bankers who intend to leverage the loan forecasting model they use for CECL for strategic decision-making (for example, loan screening and pricing decisions), and who desire granular insight and dials around their allowance numbers, will gravitate toward methodologies that afford more precision. At the other end of the spectrum, the methods that provide less precision and insight generally come with lighter operational burden.
Choosing Your CECL Methodology
Choosing the method that’s right for you depends on many factors, from historical data availability to management objectives and associated operational costs.
In many cases, management can gain a better understanding of the institutional allowance requirements after analyzing the results determined by multiple complementary approaches.
RiskSpan is willing to talk further with individual institutions about their circumstances, as well as generate sample results using a set of various methodologies.