Part One of a Two-Part Series on CECL Data Requirements

With CECL implementation looming, many bankers are questioning whether they have enough internal loan data for CECL modeling. Determining whether your data is sufficient is a critical first step in meeting the CECL requirements; if it is not, you will need to find and obtain relevant third-party data. This article explains in plain English how to calculate statistically sufficient sample sizes to determine whether third-party data is required. More importantly, it shows modeling techniques that reduce the required sample size. Investing in the right modeling approach could ultimately save you the time and expense of obtaining third-party data.

CECL Data Requirements: Sample Size for a Single Homogenous Pool

Exhibit 1: Required Sample Size

Let’s first consider the sample required for a single pool of nearly identical loans. In the case of a uniform pool of loans with the same FICO, loan-to-value (LTV) ratio, loan age, and so on, there is a straightforward formula to calculate the sample size we need to estimate the pool’s default rate, shown in Exhibit 1.[1] As the formula shows, the sample size depends on several variables, some of which must be estimated (a sketch of the calculation follows the list below):

  • Materiality Threshold and Confidence Level: Suppose you have a $1 billion loan portfolio and you determine that, from a financial statement materiality standpoint, your ALLL estimate needs to be reliable to within +/- $2.5 million. Statistically, we would say that we need to be 95% confident that our loss reserve estimate is within an error margin of +/- $2.5 million of the true figure. The wider our materiality thresholds and lower our required confidence levels, the smaller the sample size we need.
  • Loss Severity: As your average loss severity increases, you need a greater sample size to achieve the same error margin and confidence level. For example, if your average loss severity is 0%, you will estimate zero losses regardless of your default rates. Theoretically, you don’t even need to perform the exercise of estimating default rates, and your required sample size is zero. On the opposite end, if your average loss severity is 100%, every dollar of defaulted balance translates into a dollar of loss, so you can least afford to misestimate default rates. Your required sample size will therefore be at its largest.
  • Default Rates: Your preliminary estimate of the default rate, based on your available sample, also affects the sample size you will require. (Of course, if you lack any internal sample, you already know you need to obtain third-party data for CECL modeling.) Holding the dollar error margin constant, you need fewer loans for low default-rate populations.
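For readers who prefer code to formulas, here is a minimal sketch in Python of the calculation these inputs feed. It assumes Exhibit 1 uses the standard sample-size formula for estimating a proportion (per the reference in footnote [1]), with the allowable error margin on the default rate equal to the materiality threshold (as a share of principal balance) divided by loss severity; the function name and defaults are illustrative only.

```python
import math

def required_sample(default_rate, loss_severity, materiality_pct, z=1.96):
    """Loans needed so that, at the chosen confidence level (z = 1.96 for 95%),
    the dollar ALLL error stays within the materiality threshold."""
    if loss_severity == 0:
        return 0  # zero severity means zero loss, whatever the default rate
    # Allowable error margin on the default rate itself
    rate_error_margin = materiality_pct / loss_severity
    return math.ceil(z ** 2 * default_rate * (1 - default_rate) / rate_error_margin ** 2)
```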

Example: Suppose we have originated a pool of low-risk commercial real estate loans. We have historical observations for 500 such loans, of which 495 paid off and five defaulted, so our preliminary default rate estimate is 1%. Of the five defaults, loss severity averaged 25% of original principal balance. We deem ALLL estimate errors within 0.25% of the relevant principal balance to be immaterial. Is our internal sample of 500 loans enough for CECL modeling purposes, or do we need to obtain proxy data? When we apply the formula from Exhibit 1 (worked through in the snippet below), we find that our internal sample of 500 loans is more than enough to give us a statistical confidence interval narrower than our materiality threshold. We do not need proxy data to inform our CECL model in this case.
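Using the hypothetical helper sketched above, the example’s inputs work out roughly as follows:

```python
# 1% default rate, 25% loss severity, 0.25% materiality threshold, 95% confidence
print(required_sample(0.01, 0.25, 0.0025))   # prints 381, comfortably below the 500 loans we hold
```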

CECL Data Requirements: Sample Size Across an Asset Class

If we have an asset class with loans of varying credit risk characteristics, one way to determine the needed sample is simply to carve the portfolio into many buckets of loans with like-risk characteristics, determine the number of loans needed for each bucket on a standalone basis per the formula above, and then sum those amounts. The problem with this approach (assuming our concern is to avoid material ALLL errors at the asset class level) is that it dramatically overstates the aggregate number of loans required.

A better approach, which still involves segregating the portfolio into risk buckets, is to assign varying margins of error across the buckets in a way that minimizes the aggregate sample required while maintaining a proportional portfolio mix and keeping the aggregate margin of error within the aggregate materiality threshold. A tool like Solver within Microsoft Excel can perform this optimization task with precision. The resulting error margins (as a percentage of each bucket’s default rate estimate) are much wider than they would be on a standalone basis for buckets with low default frequencies and slightly narrower for buckets with high default frequencies.

Even at its most optimized, though, the total number of loans needed to estimate the default rates of multiple like-risk buckets skyrockets as the number of key credit risk variables increases. A superior approach to bucketing is loan-level modeling, which treats the entire asset class as one sample but estimates loan-specific default rates according to the individual risk characteristics of each loan.
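To make the optimization concrete, below is a minimal sketch in Python, using scipy’s minimize in place of Excel’s Solver. The bucket default rates, balances, and severity are hypothetical placeholders, and the aggregate error margin is conservatively treated as the simple sum of the bucket-level dollar error margins; treat it as an illustration of the idea rather than a production implementation.

```python
import numpy as np
from scipy.optimize import minimize

Z = 1.96  # 95% confidence

# Hypothetical bucket-level inputs (placeholders, not figures from this article)
default_rates = np.array([0.01, 0.03, 0.05, 0.07])   # preliminary default rate per bucket
balances = np.array([250e6, 250e6, 250e6, 250e6])    # principal balance per bucket
severity = 0.25                                      # average loss severity
threshold = 0.0025 * balances.sum()                  # materiality threshold, in dollars

def total_sample(error_margins):
    """Total loans required: n_i = Z^2 * p_i * (1 - p_i) / E_i^2, summed over buckets."""
    return np.sum(Z ** 2 * default_rates * (1 - default_rates) / error_margins ** 2)

def dollar_error(error_margins):
    """Aggregate dollar error margin: sum of E_i * severity * balance_i."""
    return np.sum(error_margins * severity * balances)

result = minimize(
    total_sample,
    x0=np.full(4, 0.01),                  # start from a 1% error margin in every bucket
    bounds=[(1e-5, 0.5)] * 4,
    constraints=[{"type": "ineq",         # require: threshold - aggregate error >= 0
                  "fun": lambda e: threshold - dollar_error(e)}],
)

print("Optimized error margins by bucket:", np.round(result.x, 4))
print("Total loans required:", int(np.ceil(total_sample(result.x))))
```

Under these placeholder inputs, the optimizer assigns the low-default buckets relatively wide error margins (as a share of their default rates), consistent with the behavior described above.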

Loan-Level Modeling

Suppose within a particular asset class, FICO is the only factor that affects default rates, and we segregate loans into four FICO buckets based on credit performance. (Assume for simplicity that each bucket holds an equal number of loans.) The buckets’ default rates range from 1% to 7%. As before, average loss severity is 25% and our materiality threshold is 0.25% of principal balance. Whether we use a bucketing approach or loan-level modeling, we need a sample of about 5,000 loans in total across the asset class. (We calculate the sample required for bucketing with Solver, as described above, and the sample required for loan-level modeling with the iterative approach described below.)

Now suppose we discover that loan age is another key performance driver. We want to incorporate this into our model because an accurate ALLL minimizes earnings volatility and thereby helps avoid excessive capital buffers. We create four loan-age buckets, leaving us with 4 × 4 = 16 buckets (again, assume the buckets hold equal loan counts). With four categories each of two variables, we would need around 9,000 loans for loan-level modeling but 20,000 loans for a bucketing approach, or around 1,300 per bucket. (These are ballpark estimates that assume your loan-level model has been properly constructed and fits the data reasonably well. Your estimates will vary somewhat with the default rates and loss severities of your available sample. Also, while this article deals with loan count sufficiency, we have noted previously that the same dataset must also cover a sufficient timespan, whether you are using loan-level modeling or bucketing.)

Finally, suppose we include a third variable, perhaps stage in the economic cycle, LTV, debt service coverage ratio, or something else.

Exhibit 2: Loan-Level Modeling Yields Greater Insight from Smaller Samples

Again assume we segregate loans into four categories based on this third variable. Now we have 4 × 4 × 4 = 64 equal-sized buckets. With loan-level modeling we need around 12,000 loans. With bucketing we need around 100,000 loans, an average of around 1,600 per bucket. As the graph in Exhibit 2 shows, a bucketing approach forces us to choose between less insight and an astronomical sample size requirement. As we increase the number of variables used to forecast credit losses, the sample needed for loan-level modeling increases slightly, but the sample needed for bucketing explodes. This points to loan-level modeling as the best solution, because well-performing CECL models incorporate many variables. (Another benefit of loan-level credit models, one that is of particular interest to investors, is that the granular intelligence they provide can facilitate better loan screening and pricing decisions.)

CECL Data Requirements: Sample Size for Loan-Level Modeling

Determining the sample size needed for loan-level modeling is an iterative process based on the standard errors reported in the model output of a statistical software package. After estimating and running a model on your existing sample, convert the error margin of each default rate (1.96 × the standard error of the default rate estimate, for a 95% confidence interval) into an error margin of dollars lost by multiplying the default rate error margin by loss severity and the relevant principal balance. Next, sum the dollar error margins to determine whether the aggregate dollar error margin is within the materiality threshold, and adjust the sample size up or down as necessary.

The second part in our series on CECL data requirements will lay out the data fields that should be collected and preserved to support CECL modeling.
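As a closing illustration, here is a minimal sketch of that iterative check, assuming a loan-level logistic regression fit with statsmodels on a synthetic portfolio; all field names, coefficients, and dollar figures below are hypothetical. It converts each loan’s predicted default probability and its standard error (via the delta method) into a dollar error margin, sums them, and compares the total with the materiality threshold.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# --- Synthetic portfolio standing in for your internal sample (hypothetical) ---
rng = np.random.default_rng(0)
n = 5000
loans = pd.DataFrame({
    "fico": rng.normal(720, 40, n),
    "ltv": rng.uniform(0.5, 0.9, n),
    "age_months": rng.integers(1, 120, n).astype(float),
    "balance": rng.uniform(1e5, 2e6, n),
})
true_logit = -6 + 0.01 * (720 - loans["fico"]) + 3 * loans["ltv"]
loans["defaulted"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# --- Fit a loan-level default model ---
X = sm.add_constant(loans[["fico", "ltv", "age_months"]])
model = sm.Logit(loans["defaulted"], X).fit(disp=0)

# --- Convert standard errors into dollar error margins ---
p = model.predict(X)                              # loan-level default probabilities
cov = model.cov_params().values                   # coefficient covariance matrix
xv = X.values
var_xb = np.einsum("ij,jk,ik->i", xv, cov, xv)    # Var(x'beta) for each loan
se_p = p * (1 - p) * np.sqrt(var_xb)              # delta method: SE of each probability

severity = 0.25
dollar_error = 1.96 * se_p * severity * loans["balance"]   # 95% error margin, in dollars
aggregate_error = dollar_error.sum()
threshold = 0.0025 * loans["balance"].sum()

print(f"Aggregate error margin: ${aggregate_error:,.0f} vs threshold ${threshold:,.0f}")
print("Sample is sufficient" if aggregate_error <= threshold
      else "Consider a larger internal sample or proxy data")
```

If the aggregate error margin exceeds the threshold, the sketch would be rerun on a larger (internal or proxy) sample until it falls within the threshold, which is the iterative adjustment described above.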


[1] https://onlinecourses.science.psu.edu/stat506/node/11