Articles Tagged with: Credit Analytics

RiskSpan Partners with S&P Global Market Intelligence

ARLINGTON, Va., December 5, 2018 /PRNewswire/ — Virginia-based modeling and analytics SaaS vendor RiskSpan announced today that it will be partnering with S&P Global Market Intelligence to expand the capabilities of its commercially available RS Edge Platform.

RS Edge is a SaaS platform that integrates normalized loan and securities data, predictive models and complex scenario analytics for commercial banks, credit unions, insurance companies, and other financial institutions. The RS Edge Platform solves the hardest data management and analytical problem – affordable off-the-shelf integration of clean data and reliable models.

RiskSpan’s CECL module features broad-based methodologies covering all loan types and security types. The integration of S&P Global Market Intelligence’s C&I and CRE CECL models, built on 36 years of default and recovery data, adds loan-level, econometric models for these major asset classes from a globally recognized credit ratings institution. These enhancements further equip RiskSpan clients to navigate FASB’s impending CECL standard as well as IFRS 9 requirements.

“We’re very excited to leverage S&P Global Market Intelligence’s CECL credit models and methodologies on our SaaS platform,” said RiskSpan CEO Bernadette Kogler. “Coupled with RiskSpan’s technology capabilities and risk management expertise, our CECL solution is set up to provide unmatched value to the market.”

Bob Durante, Senior Director of Risk Solutions at S&P Global Market Intelligence added, “We are pleased to offer our CECL credit models through partners such as RiskSpan. This partnership brings our best of breed CECL models directly through RiskSpan to a wide array of customers in the commercial banking, community banking, and insurance industries.”

Learn more about our CECL module here.


About RiskSpan

RiskSpan simplifies the management of complex data and models in the capital markets, commercial banking, and insurance industries. We transform seemingly unmanageable loan data and securities data into productive business analytics.

About S&P Global Market Intelligence

At S&P Global Market Intelligence, we know that not all information is important—some of it is vital. Accurate, deep and insightful. We integrate financial and industry data, research and news into tools that help track performance, generate alpha, identify investment ideas, understand competitive and industry dynamics, perform valuations and assess credit risk. Investment professionals, government agencies, corporations and universities globally can gain the intelligence essential to making business and financial decisions with conviction.

S&P Global Market Intelligence, a division of S&P Global (NYSE: SPGI), provides essential intelligence for individuals, companies and governments to make decisions with confidence. For more information, visit www.spglobal.com/marketintelligence.


CECL: DCF vs. Non-DCF Allowance — Myth and Reality

FASB’s CECL standard allows institutions to calculate their allowance for credit losses as either “the difference between the amortized cost basis and the present value of the expected cash flows” (ASC 326-20-30-4) or “expected credit losses of the amortized cost basis” (ASC 326-20-30-5). The first approach is commonly called the discounted cash flow or “DCF approach” and the second the “non-DCF approach.” In the second approach, the allowance equals the undiscounted sum of the amortized cost basis projected not to be collected. For the purposes of this post, we will equate amortized cost with unpaid principal balance.

A popular misconception – even among savvy professionals – is that a DCF-based allowance is always lower than a non-DCF allowance given the same performance forecast. In fact, a DCF allowance is sometimes higher and sometimes lower than a non-DCF allowance, depending upon the remaining life of the instrument, the modeled recovery rate, the effective interest rate (EIR), and the time from default until recovery (liquidation lag).

Below we will compare DCF and non-DCF allowances while systematically varying these key differentiators. Our DCF allowances reflect cash inflows that follow the SIFMA standard formulas. We systematically vary time to maturity, recovery rate, liquidation lag and EIR to show their impact on DCF vs. non-DCF allowances (see Table 1 for definitions of these variables). We hold default rate and voluntary prepayment rate constant at reasonable levels across the forecast horizon. See Table 2 for all loan features and behavioral assumptions held constant throughout this exercise.

For clarity, we reiterate that the DCF allowances we will compare to non-DCF allowances reflect amortized cost minus discounted cash inflows, per ASC 326-20-30-4. A third approach, which is unsound and therefore excluded, is the discounting of accounting losses. This approach will understate expected credit losses by using the interest rate to discount principal losses while ignoring lost interest itself.

Table 1 – Key Drivers of DCF vs. Non-DCF Allowance Differences (Systematically Varied Below)

Variable | Definitions and Notes
Months to Maturity | Months from reporting date until last scheduled payment
Effective Interest Rate (EIR) | The rate of return implicit in the financial asset. Per CECL, this is the rate used to discount expected cash flows when using the DCF approach and, by rule, is calculated using the asset’s contractual or prepay-adjusted cash flows. In this exercise, we set unpaid principal balance equal to amortized cost, so the EIR is the same assuming either contractual or prepay-adjusted cash flows and matches the instrument’s note rate.
Liquidation Lag (Months) | Months between first missed payment and receipt of recovery proceeds
Recovery Rate | Net cash inflow at liquidation, divided by the principal balance of the loan at the time it went into default. Note that 100% recovery will not include recovery of unpaid interest.

Table 2 – Loan Features and Behavioral Assumptions Held Constant

Book Value on Reporting Date | Par (Amortized Cost = Unpaid Principal Balance)
Performance Status on Reporting Date | Current
Amortization Type | Level pay, fully amortizing, zero balloon
Conditional Default Rate (Annualized) | 0.50%
Conditional Voluntary Prepayment Rate (Annualized) | 10.00%
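To make the mechanics concrete before turning to the results, the following minimal Python sketch (our own illustration, not the engine behind Figure 1) projects level-pay cash flows under constant CDR and CPR, discounts them at the EIR, and returns both allowances. Parameter names and defaults are illustrative; varying recovery_rate, liq_lag, n_periods, and note_rate lets you explore the patterns discussed below.

```python
# A minimal sketch, not RiskSpan's production engine. Projects periodic cash
# flows for a level-pay loan under constant CDR/CPR, then computes:
#   DCF allowance     = amortized cost - PV of expected cash inflows (ASC 326-20-30-4)
#   non-DCF allowance = undiscounted principal not expected to be collected
#                       (ASC 326-20-30-5)
def allowances(upb=10_000.0, note_rate=0.04, n_periods=360, freq=12,
               cdr=0.005, cpr=0.10, recovery_rate=0.75, liq_lag=12):
    r = note_rate / freq                      # per-period EIR (note rate at par)
    smm = 1 - (1 - cpr) ** (1 / freq)         # per-period voluntary prepay rate
    pdr = 1 - (1 - cdr) ** (1 / freq)         # per-period default rate
    bal, pv, lost = upb, 0.0, 0.0
    for t in range(1, n_periods + 1):
        n = n_periods - t + 1                 # scheduled payments remaining
        defaulted = bal * pdr                 # balance defaulting this period
        perf = bal - defaulted                # surviving, performing balance
        pmt = perf * r / (1 - (1 + r) ** -n)  # level payment on that balance
        interest = perf * r
        sched_prin = pmt - interest
        prepay = (perf - sched_prin) * smm    # voluntary prepayment
        bal = perf - sched_prin - prepay
        pv += (interest + sched_prin + prepay) / (1 + r) ** t
        pv += defaulted * recovery_rate / (1 + r) ** (t + liq_lag)  # lagged recovery
        lost += defaulted * (1 - recovery_rate)  # undiscounted principal shortfall
    return upb - pv, lost                     # (DCF allowance, non-DCF allowance)
```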

Figure 1 compares DCF versus non-DCF allowances. It is organized into nine tables covering the landscape of loan characteristics that drive DCF vs. non-DCF allowance differences. The cells of the tables show the DCF allowance minus the non-DCF allowance in basis points; positive values mean that the DCF allowance is greater.

  • Tables A, B and C show loans with 100% recovery rates. For such loans, ultimate recovery proceeds match exposure at default. Under the non-DCF approach, as long as recovery proceeds eventually cover principal balance at the time of default, the allowance will be zero. Accordingly, the non-DCF allowance is 0 in every cell of Tables A, B and C. Longer liquidation lags, however, diminish present value and thus increase DCF allowances. The greater the discount rate (the EIR), the deeper the hit to present value. Thus, the DCF allowance increases as we move from the top-left to the bottom-right of Tables A, B and C. Note that even when liquidation lag is 0, 100% recovery still excludes the final month’s interest, and a DCF allowance (which reflects total cash flows) will accordingly reflect a small hit. Tables A, B and C differ in one respect – the life of the loan. Longer lives translate to greater total defaulted dollars, greater amounts exposed to the liquidation lags, and greater DCF allowances.
  • Tables G, H and I show loans with 0% recovery rates. While 0% recovery rates may be rare, it is instructive to understand the zero-recovery case to sharpen our intuitions around the comparison between DCF and non-DCF allowances. With zero recovery proceeds, the loans produce only monthly (or periodic) payments until default. Liquidation lag, therefore, is irrelevant. As long as the EIR is positive and there are defaults in payment periods besides the first, the present value of the periodic cash flow stream (using EIR as the discount rate) will exceed cumulative principal collected. Book value minus the present value of the periodic cash flow stream, therefore, will be less than the cumulative principal not collected, and thus the DCF allowance will be lower. Appendix A explains why this is the case. As Tables G, H and I show, the advantage (if we may be permitted to characterize a lower allowance as an advantage) of the DCF approach on 0% recovery loans is greater with greater discount rates and greater loan terms.
  • Tables D, E and F show a more complex (and more realistic) scenario where the recovery rate is 75% (loss-given-default rate is 25%). Note that each cell in Table D falls in between the corresponding values from Table A and Table G; each cell in Table E falls in between the corresponding values from Table B and Table H; and each cell in Table F falls in between the corresponding values from Table C and Table I. In general, we can see that long liquidation lags will hurt present values, driving DCF allowances above non-DCF allowances. Short (zero) liquidation lags allow the DCF advantage from the periodic cash flow stream (described above in the comments about Tables G, H and I) to prevail, but the size of the effect is much smaller than with 0% recovery rates because allowances in general are much lower. With moderate liquidation lags (12 months), the two approaches are nearly equivalent. Here the difference is made by the loan term, where shorter loans limit the periodic cash flow stream that advantages the DCF allowances, and longer loans magnify the impact of the periodic cash flow stream to the advantage of the DCF approach.

[Figure 1 – DCF Allowance Relative to Non-DCF Allowance (difference in basis points), with tables arranged by liquidation lag]

Conclusion

  • Longer liquidation lags will increase DCF allowances relative to non-DCF allowances as long as recovery rate is greater than 0%.
  • Greater EIRs will magnify the difference (in either direction) between DCF and non-DCF allowances.
  • At extremely high recovery rates, DCF allowances will always exceed non-DCF allowances; at extremely low recovery rates, DCF allowances will always be lower than non-DCF allowances. At moderate recovery rates, other factors (loan term and liquidation lag) make the difference as to whether DCF or non-DCF allowance is higher.
  • Longer loan terms both a) increase allowance in general, by exposing balances to default over a longer time horizon; and b) magnify the significance of the periodic cash flow stream relative to the liquidation lag, which advantages DCF allowances.
    • Where recovery rates are extremely high (and so non-DCF allowances are held low or to zero) the increase to defaults from longer loan terms will drive DCF allowances further above non-DCF allowances.
    • Where recovery rates are moderate or low, the increase to loan term will lower DCF allowances relative to non-DCF allowances.[1]

Note that we have not specified the asset class of our hypothetical instrument in this exercise. Asset class by itself does not influence the comparison between DCF and non-DCF allowances. However, asset class (for example, a 30-year mortgage secured by a primary residence, versus a five-year term loan secured by business equipment) does influence the variables (loan term, recovery rate, liquidation lag, and effective interest rate) that drive DCF vs. non-DCF allowance differences. Knowledge of an institution’s asset mix would enable us to determine how DCF vs. non-DCF allowances will compare for that portfolio.

Appendix A

The present value of a periodic cash flow stream, as discounted per CECL at the Effective Interest Rate (EIR), will always exceed cumulative principal collected when the following conditions are met: recovery rate is 0%, EIR is positive, and there are defaults in payment periods other than the first.

To understand why this is the case, note that the difference between the present value of cash flows and cumulative principal collected has two components: cumulative interest collected, which accrues to the present value of cash flows but not to cumulative principal collected; and the cumulative dollar impact of discounting future cash flows, which lowers present value but does not touch cumulative principal collected. The present value of cash flows will exceed cumulative principal collected when the interest impact exceeds the discounting impact.

The interest impact is always greater in the early months of a loan forecast because interest makes up a large share of the total payment and value lost to discounting is minimal. As the loan ages, the interest share diminishes and the discount impact grows. In the pristine case, where book value equals unpaid principal balance and defaults are zero, the discount effect will finally catch up to the interest effect with the final payment. The present value of the total cash flow stream will thus equal the cumulative principal collected and equal the beginning unpaid principal balance. If there are any defaults in periods later than the first, however, the discount effect can never fully catch up to the interest effect. Table 3 provides one such example.

Table 3 – Cash Flow, Principal Losses, Present Value and Allowance under 0% Recovery

Loan Features and Assumptions:

  • Reporting-date amortized cost and unpaid principal balance = $10,000
  • 5-year, annual-pay, fully amortizing loan
  • Fixed note rate (and effective interest rate) of 4%
  • 10% conditional voluntary prepayment rate, 0.50% conditional default rate, 0% recovery rate

DCF allowance = $10,000 − $9,872 = $128
Non-DCF allowance = sum of principal losses = $134
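As a check, running the sketch from earlier in this post with Table 3’s assumptions (annual pay, five-year term, 4% note rate, 10% CPR, 0.5% CDR, 0% recovery) reproduces these figures:

```python
dcf, non_dcf = allowances(upb=10_000, note_rate=0.04, n_periods=5, freq=1,
                          cdr=0.005, cpr=0.10, recovery_rate=0.0, liq_lag=0)
print(round(dcf), round(non_dcf))  # 128 134
```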

We make the following important notes:

  • First-period defaults effectively make the loan a smaller-balance loan and will not cause a difference between the DCF allowance and non-DCF allowance; only defaults subsequent to the first period will drive a difference between the two approaches.
  • Interest-only loans will exacerbate the advantage of DCF allowances relative to non-DCF allowances.
  • For floating-rate instruments, a projected change in coupon rate (based on the known level of the underlying index as of the reporting date) does not change the fact that DCF allowance will be lower than non-DCF allowance if the conditions of 0% recovery rate, positive EIR, and presence of non-first-period defaults are met.

Finally, the discounting approach under CECL is different from that used in finance to assess the fundamental value of a loan. A loan’s fundamental value can be determined by discounting its expected cash flows at a market-observed rate of return (i.e., the rate that links recent market prices on similar-risk instruments to the expected cash flows on those instruments). As we have noted in other blogs, CECL’s DCF method does not produce the fundamental value of a loan.

    [1] We see just one case in Figure 1 that appears to be an exception to this rule, as we compare the lower-right corner of Table D to the lower-right corner of Table E. What happens between these two cells is that the DCF allowance grows from 36.8 basis points in Table D to 58.9 basis points in Table E (a 60% increase in ratio terms), while the non-DCF allowance grows from 28.4 basis points in Table D to 50.1 basis points in Table E (a 77% increase in ratio terms). Because the allowances rise in general, the subtractive difference between them increases, but we see more rapid growth of the non-DCF allowance as we continue moving from the lower-right corner of Table E to the same corner of Table F.



CRT Exposure to Hurricane Michael

[Interactive chart]

With Hurricane Michael approaching the Gulf Coast, we put together some interactive charts looking at the affected metro areas and their related CRT exposure (both CAS and STACR). Given the large area of impact of Hurricane Michael, we have included a nearly exhaustive selection of MSAs. Click on a deal ID along the left-hand side of the plot to view its exposure to each MSA. For context, most of the mortgage delinquencies in the wake of Hurricane Harvey quickly cured, and holders of securities backed by loans that ultimately defaulted (typically because the property was completely destroyed) had much of their exposure mitigated by insurance proceeds, government intervention, and other relief provisions.






Analytics-as-a-Service – CECL Forecasting

The RiskSpan Edge Platform CECL Module delivers the technology platform and expertise to take you from where you are today to producing audit-ready CECL estimates. Our dedicated CECL Module executes your monthly loss reserving and reporting process under the new CECL standard, covering data intake, segmentation, modeling, and report generation within a single platform. Watch RiskSpan Director David Andrukonis explain the Edge CECL Module in this video.

 



CRT Deal Monitor: Understanding When Credit Becomes Risky

This analysis tracks several metrics related to deal performance and credit profile, putting them into a historical context by comparing the same metrics for recent-vintage deals against those of ‘similar’ cohorts in the time leading up to the 2008 housing crisis. You’ll see how credit metrics are trending today and understand the significance of today’s shifts in the context of historical data. Some of the charts in this post have interactive features, so click around! We’ll be tweaking the analysis and adding new metrics in subsequent months. Please shoot us an email if you have an idea for other metrics you’d like us to track.

Highlights

  • Performance metrics signal steadily increasing credit risk, but no cause for alarm.
    • We’re starting to see the hurricane-related (2017 Harvey and Irma) delinquency spikes subside in the deal data. Investors should expect a similar trend in 2019 due to Hurricane Florence.
    • The overall percentage of delinquent loans is increasing steadily due to the natural age ramp of delinquency rates and the ramp-up of the program over the last 5 years.
    • Overall delinquency levels are still far lower than historical rates.
    • While the share of delinquency is increasing, loans that go delinquent are ending up in default at a lower rate than before.
  • Deal Profiles are becoming riskier as new GSE acquisitions include higher-DTI business.
    • It’s no secret that both GSEs started acquiring a lot of high-DTI loans (for Fannie this moved from around 16% of MBS issuance in Q2 2017 to 30% of issuance as of Q2 this year). We’re starting to see a shift in CRT deal profiles as these loans are making their way into CRT issuance.
    • The credit profile chart toward the end of this post compares the credit profiles of recently issued deals with those of the most recent three months of MBS issuance data to give you a sense of the deal profiles we’re likely to see over the next 3 to 9 months. We also compare these recently issued deals to a similar cohort from 2006 to give some perspective on how much the credit profile has improved since the housing crisis.
    • RiskSpan’s Vintage Quality Index reflects an overall loosening of credit standards, reminiscent of 2003 levels, driven by this increase in high-DTI originations.
  • Fannie and Freddie have fundamental differences in their data disclosures for CAS and STACR.
    • Delinquency rates and loan performance all appear slightly worse for Fannie Mae in both the deal and historical data.
    • Obvious differences in reporting (e.g., STACR reporting a delinquent status in a terminal month) have been corrected in this analysis, but some less obvious differences in reporting between the GSEs may persist.
    • We suspect there is something fundamentally different about how Freddie Mac reports delinquency status—perhaps related to cleaning servicing reporting errors, cleaning hurricane delinquencies, or the way servicing transfers are handled in the data. We are continuing our research on this front and hope to follow up with another post to explain these anomalies.

The exceptionally low rate of delinquency, default, and loss among CRT deals at the moment makes analyzing their credit-risk characteristics relatively boring. Loans in any newly issued deal have already seen between 6 and 12 months of home price growth, and so if the economy remains steady for the first 6 to 12 months after issuance, then that deal is pretty much in the clear from a risk perspective. The danger comes if home prices drift downward right after deal issuance. Our aim with this analysis is to signal when a shift may be occurring in the credit risk inherent in CRT deals. Many data points related to the overall economy and home prices are available to investors seeking to answer this question. This analysis focuses on what the Agency CRT data—both the deal data and the historical performance datasets—can tell us about the health of the housing market and the potential risks associated with the next deals that are issued.

Current Performance and Credit Metrics

Delinquency Trends

The simplest metric we track is the share of loans across all deals that are 60+ days past due (DPD). The charts below compare STACR (Freddie) vs. CAS (Fannie), with separate charts for high-LTV deals (G2 for CAS and HQA for STACR) vs. low-LTV deals (G1 for CAS and DNA for STACR). Both time series show a steadily increasing share of delinquent loans. This slight upward trend is related to the natural aging curve of delinquency and the ramp-up of the CRT program. Both time series also show a significant spike in delinquency around January of this year due to the 2017 hurricane season. Most of these delinquent loans are expected to eventually cure or prepay.

For comparative purposes, we include a historical time series of the share of loans 60+ DPD for each LTV group, derived from the Fannie Mae and Freddie Mac loan-level performance datasets. Comparatively, today’s deal performance is much better than even the pre-2006 era. You’ll note the systematically higher delinquency rates of CAS deals. We suspect this is due to reporting differences rather than actual differences in deal performance. We’ll continue to investigate and report back on our findings.
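For readers who want to replicate this metric from loan-level data, here is a hedged sketch; the column names are placeholders, not the actual CRT disclosure fields.

```python
# Hedged sketch: share of loans 60+ DPD by program and month. 'perf' is a
# long-format DataFrame with placeholder columns 'program' ('CAS'/'STACR'),
# 'period' (month), and 'dpd' (days past due); actual field names differ.
import pandas as pd

def share_60_plus(perf: pd.DataFrame) -> pd.Series:
    return (perf.assign(d60=perf["dpd"] >= 60)
                .groupby(["program", "period"])["d60"].mean())
```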

Delinquency Outcome Monitoring

While delinquency rates might be trending up, loans that roll to 60 DPD are ultimately defaulting at lower and lower rates. The charts below track the status of loans that were 60+ DPD. Each bar in the chart represents the population of loans that were 60+ DPD exactly 6 months prior to the x-axis date. Over time, we see growing 60-DPD and 60+ DPD groups and a shrinking Default group. This indicates that a majority of delinquent loans wind up curing or prepaying, rather than proceeding to default. The choppiness and high default rates in the first few observations of the data are related to the very low counts of delinquent loans as the CRT program ramped up.

The following chart repeats the 60-DPD delinquency analysis for the Freddie Mac Loan Level Performance dataset leading up to and following the housing crisis. (The Fannie Mae loan-level performance set yields a nearly identical chart.) Note how many more loans in these cohorts remained delinquent (rather than curing or defaulting) relative to the more recent CRT loans.

[Chart: https://plot.ly/~dataprep/30.embed]
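A hedged sketch of this six-month outcome tracking appears below. It assumes a long-format performance table with hypothetical columns 'loan_id', 'period' (a month-start Timestamp), and 'status'; the actual disclosure field names and status codes differ.

```python
# Hedged sketch of six-month outcome tracking for 60+ DPD cohorts.
import pandas as pd

def outcomes_six_months_later(perf: pd.DataFrame) -> pd.DataFrame:
    later = perf.copy()
    later["period"] = later["period"] - pd.DateOffset(months=6)  # align m+6 onto m
    merged = perf.merge(later, on=["loan_id", "period"], suffixes=("", "_6mo"))
    dlq = merged[merged["status"].isin(["60", "90+"])]   # 60+ DPD at month m
    return (dlq.groupby([dlq["period"].dt.to_period("M"), "status_6mo"])
               .size().unstack(fill_value=0))            # outcome counts per cohort
```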

Vintage Quality Index

RiskSpan’s Vintage Quality Index (VQI) reflects a reversion to the looser underwriting standards of the early 2000s as a result of the GSEs’ expansion of high-DTI lending. RiskSpan introduced the VQI in 2015 as a way of quantifying the underwriting environment of a particular vintage of mortgage originations. We use the metric as an empirically grounded way to control for vintage differences within our credit model.

While both GSEs increased high-DTI lending in 2017, it’s worth noting that Fannie Mae saw a relatively larger surge in loans with DTIs greater than 43%. The chart below shows the share of loans backing MBS with DTI > 43. We use the loan-level MBS issuance data to track what’s being originated and acquired by the GSEs because it is the timeliest data source available. CRT deals are issued with loans that are between 6 and 20 months seasoned, and so tracking MBS issuance provides a preview of what will end up in the next cohort of deals.
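One way to compute such a share from issuance data, sketched under assumed column names ('issue_month', 'dti', and 'orig_upb' are placeholders, not the actual disclosure fields):

```python
# Illustrative computation of the balance-weighted high-DTI share from
# loan-level MBS issuance data; column names are hypothetical.
import pandas as pd

def high_dti_share(issuance: pd.DataFrame, threshold: float = 43.0) -> pd.Series:
    high = issuance["orig_upb"].where(issuance["dti"] > threshold, 0.0)
    return (issuance.assign(high_bal=high)
                    .groupby("issue_month")
                    .apply(lambda g: g["high_bal"].sum() / g["orig_upb"].sum()))
```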

Deal Profile Comparison

The tables below compare the credit profiles of recently issued deals. We focus on the key drivers of credit risk, highlighting the comparatively riskier features of a deal. Each table separates the high-LTV (80%+) deals from the low-LTV deals (60%-80%).

We add two additional columns for comparison purposes. The first is the ‘Coming Cohort,’ which is meant to give an indication of what upcoming deal profiles will look like. The data in this column is derived from the most recent three months of MBS issuance loan-level data, controlling for the LTV group. These loans are newly originated and acquired by the GSEs—considering that CRT deals are generally issued with an average loan age between 6 and 15 months, these are the loans that will most likely wind up in future CRT transactions. The second comparison cohort consists of 2006 originations in the historical performance datasets (Fannie and Freddie combined), controlling for the LTV group. We supply this comparison as context for the level of risk that was associated with one of the worst-performing cohorts.

The latest CAS deals—both high- and low-LTV—show the impact of increased >43% DTI loan acquisitions. Until recently, STACR deals typically had a higher share of high-DTI loans, but the latest CAS deals have surpassed STACR in this measure, with nearly 30% of their loans having DTI ratios in excess of 43%. CAS high-LTV deals carry more risk in LTV metrics, such as the percentage of loans with a CLTV > 90 or CLTV > 95. However, STACR includes a greater share of loans with a less-than-standard level of mortgage insurance, which would provide less loss protection to investors in the event of a default.

Low-LTV deals generally appear more evenly matched in terms of risk factors when comparing STACR and CAS. STACR does display the same DTI imbalance as seen in the high-LTV deals, but that may change as the high-DTI group makes its way into deals.

Deal Tracking Reports

Please note that defaults are reported on a delay for both GSEs, and so while we have CPR numbers available for August, CDR numbers are not provided because they are not fully populated yet. Fannie Mae CAS default data is delayed an additional month relative to STACR. We’ve left loss and severity metrics blank for fixed-loss deals.



Data-as-a-Service – Credit Risk Transfer Data

Watch RiskSpan Managing Director Janet Jozwik explain our recent Credit Risk Transfer data (CRT) additions to the RS Edge Platform.

Each dataset has been normalized to the same standard for simpler analysis in RS Edge, enabling users to compare GSE performance with just a few clicks. The data has also been enhanced to include helpful variables, such as mark-to-market loan-to-value ratios based on the most granular house price indexes provided by the Federal Housing Finance Agency. 



RiskSpan to Offer Credit Risk Transfer Data Through Edge Platform

ARLINGTON, VA, September 6, 2018 — RiskSpan announced today its rollout of Credit Risk Transfer (CRT) datasets available through its RS Edge Platform. The datasets include over seventy million Agency loans that will expand the RS Edge Platform’s data library and add key enhancements for credit risk analysis. RS Edge is a SaaS platform that integrates normalized data, predictive models and complex scenario analytics for customers in the capital markets, commercial banking, and insurance industries. The Edge Platform solves the hardest data management and analytical problem – affordable off-the-shelf integration of clean data and reliable models.

New additions to the RS Edge Data Library will include key GSE loan-level performance datasets going back eighteen years. RiskSpan is also adding Fannie Mae’s Connecticut Avenue Securities (CAS) and Credit Insurance Risk Transfer (CIRT) datasets as well as the Freddie Mac Structured Agency Credit Risk (STACR) datasets.

Each dataset has been normalized to the same standard for simpler analysis in RS Edge. This will allow users to compare GSE performance with just a few clicks. The data has also been enhanced to include helpful variables, such as mark-to-market loan-to-value ratios based on the most granular house price indexes provided by the Federal Housing Finance Agency.

Managing Director and Co-Head of Quantitative Analytics Janet Jozwik said of the new CRT data, “Our data library is a great, cost-effective resource that can be leveraged to build models, understand assumptions around losses on different vintages, and benchmark performance of their own portfolio against the wider universe.”

RiskSpan’s Edge API also makes it easier than ever to access large datasets for analytics, model development and benchmarking. Major quant teams that prefer APIs now have access to normalized and validated data to run scenario analytics, stress testing or shock analysis. RiskSpan makes data available through its proprietary instance of RStudio and Python.



Here Come the CECL Models: What Model Validators Need to Know

As it turns out, model validation managers at regional banks didn’t get much time to contemplate what they would do with all their newly discovered free time. Passage of the Economic Growth, Regulatory Relief, and Consumer Protection Act appears to have relieved many model validators of the annual DFAST burden. But as one class of models exits the inventory, a new class enters—CECL models.

Banks everywhere are nearing the end of a multi-year scramble to implement a raft of new credit models designed to forecast life-of-loan performance for the purpose of determining appropriate credit-loss allowances under the Financial Accounting Standards Board’s new Current Expected Credit Loss (CECL) standard, which takes full effect in 2020 for public filers and 2021 for others.

The number of new models CECL adds to each bank’s inventory will depend on the diversity of asset portfolios. More asset classes and more segmentation will mean more models to validate. Generally, model risk managers should count on having to validate at least one CECL model for every loan and debt security type (residential mortgage, CRE, plus all the various subcategories of consumer and C&I loans), plus potentially any challenger models the bank may have developed.

In many respects, tomorrow’s CECL model validations will simply replace today’s allowance for loan and lease losses (ALLL) model validations. But CECL models differ from traditional allowance models. Under the current standard, allowance models typically forecast losses over a one-to-two-year horizon. CECL requires a life-of-loan forecast, and a model’s inputs are explicitly constrained by the standard. Accounting rules also dictate how a bank may translate the modeled performance of a financial asset (the CECL model’s outputs) into an allowance. Model validators need to be just as familiar with the standards governing how these inputs and outputs are handled as they are with the conceptual soundness and mathematical theory of the credit models themselves.

CECL Model Inputs – And the Magic of Mean Reversion

Not unlike DFAST models, CECL models rely on a combination of loan-level characteristics and macroeconomic assumptions. Macroeconomic assumptions are problematic with a life-of-loan credit loss model (particularly with long-lived assets—mortgages, for instance) because no one can reasonably forecast what the economy is going to look like six years from now. (No one really knows what it will look like six months from now, either, but we need to start somewhere.) The CECL standard accounts for this reality by requiring modelers to consider macroeconomic input assumptions in two separate phases: 1) a “reasonable and supportable” forecast covering the time frame over which the entity can make or obtain such a forecast (two or three years is emerging as common practice for this time frame), and 2) a “mean reversion” forecast based on long-term historical averages for the out years. As an alternative to mean reverting by the inputs, entities may instead bypass their models in the out years and revert to long-term average performance outcomes by the relevant loan characteristics.
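As a toy illustration of this two-phase construction (not a prescription, and with made-up values), one might splice a reasonable and supportable forecast onto a long-run mean with a short transition ramp:

```python
# Toy macro-input path per the two-phase approach above: a "reasonable and
# supportable" forecast for the first k periods, then straight-line reversion
# to the long-run mean. All values are illustrative, not recommendations.
import numpy as np

def macro_path(rs_forecast, long_run_mean, horizon, transition=4):
    k = len(rs_forecast)
    ramp = np.linspace(rs_forecast[-1], long_run_mean, transition + 1)[1:]
    tail = np.full(horizon - k - transition, long_run_mean)
    return np.concatenate([rs_forecast, ramp, tail])

# e.g., four quarters of forecast, reverting to a hypothetical 5.8% long-run
# unemployment rate over a 40-quarter horizon
unemployment = macro_path([3.9, 4.0, 4.1, 4.3], long_run_mean=5.8, horizon=40)
```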

Assessing these assumptions (and others like them) requires a model validator to simultaneously wear a “conceptual soundness” testing hat and an “accounting policy” compliance hat. Because the purpose of the CECL model is to produce an accounting answer and satisfy an accounting requirement, what can validators reasonably conclude when confronted with an assumption that may seem unsound from a purely statistical point of view but nevertheless satisfies the accounting standard?

Taking the mean reversion requirement as an example, the projected performance of loans and securities beyond the “reasonable and supportable” period is permitted to revert to the mean in one of two ways: 1) modelers can feed long-term history into the model by supplying average values for macroeconomic inputs, allowing modeled results to revert to long-term means in that way, or 2) modelers can mean revert “by the outputs” – bypassing the model and populating the remainder of the forecast with long-term average performance outcomes (prepayment, default, recovery and/or loss rates depending on the methodology). Either of these approaches could conceivably result in a modeler relying on assumptions that may be defensible from an accounting perspective despite being statistically dubious, but the first is particularly likely to raise a validator’s eyebrow. The loss rates that a model will predict when fed “average” macroeconomic input assumptions are always going to be uncharacteristically low. (Because credit losses are generally large in bad macroeconomic environments and low in average and good environments, long-term average credit losses are higher than the credit losses that occur during average environments. A model tuned to this reality—and fed one path of “average” macroeconomic inputs—will return credit losses substantially lower than long-term average credit losses.) A credit risk modeler is likely to think that these are not particularly realistic projections, but an auditor following the letter of the standard may choose not to find any fault with them. In such situations, validators need to fall somewhere in between these two extremes—keeping in mind that the underlying purpose of CECL models is to reasonably fulfill an accounting requirement—before hastily issuing a series of high-risk validation findings.

CECL Model Outputs: What are they?

CECL models differ from some other models in that the allowance (the figure that modelers are ultimately tasked with getting to) is not itself a direct output of the underlying credit models being validated. The expected losses that emerge from the model must be subject to a further calculation in order to arrive at the appropriate allowance figure. Whether these subsequent calculations are considered within the scope of a CECL model validation is ultimately going to be an institutional policy question, but it stands to reason that they would be.

Under the CECL standard, banks will have two alternatives for calculating the allowance for credit losses: 1) the allowance can be set equal to the sum of the expected credit losses (as projected by the model), or 2) the allowance can be set equal to the cost basis of the loan minus the present value of expected cash flows. While a validator would theoretically not be in a position to comment on whether the selected approach is better or worse than the alternative, principles of process verification would dictate that the validator ought to determine whether the selected approach is consistent with internal policy and that it was computed accurately.

When Policy Trumps Statistics

The selection of a mean reversion approach is not the only area in which a modeler may make a statistically dubious choice in favor of complying with accounting policy.

Discount Rates

Translating expected losses into an allowance using the present-value-of-future-cash-flows approach (option 2, above) obviously requires selecting an appropriate discount rate. What should it be? The standard stipulates the use of the financial asset’s Effective Interest Rate (or “yield,” i.e., the rate of return that equates an instrument’s cash flows with its amortized cost basis). Subsequent accounting guidance affords quite a bit of flexibility in how this rate is calculated. Institutions may use the yield that equates contractual cash flows with the amortized cost basis (we can call this “contractual yield”), or the rate of return that equates cash flows adjusted for prepayment expectations with the cost basis (“prepayment-adjusted yield”).
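For illustration, here is a small, dependency-free solver for the EIR concept just described: passing contractual cash flows yields the contractual yield, while passing prepayment-adjusted cash flows yields the prepayment-adjusted yield. The function and its bisection bounds are our own illustrative choices, not language from the standard.

```python
# Bisection for the per-period rate that equates a cash flow stream to the
# amortized cost basis (PV is monotonically decreasing in the rate).
def effective_interest_rate(cost_basis, cashflows, lo=0.0, hi=1.0, tol=1e-10):
    pv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows, 1))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if pv(mid) > cost_basis else (lo, mid)
    return lo

# e.g., contractual yield of a 3-period instrument purchased above par:
eir = effective_interest_rate(102.0, [4.0, 4.0, 104.0])
```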

The use of the contractual yield (which has been adjusted for neither prepayments nor credit events) to discount cash flows that have been adjusted for both prepayments and credit events will allow the impact of prepayment risk to be commingled with the allowance number. For any instruments where the cost basis is greater than unpaid principal balance (a mortgage instrument purchased at 102, for instance) prepayment risk will exacerbate the allowance. For any instruments where the cost basis is less than the unpaid principal balance, accelerations in repayment will offset the allowance. This flaw has been documented by FASB staff, with the FASB Board subsequently allowing but not requiring the use of a prepay-adjusted yield.

Multiple Scenarios

The accounting standard neither prohibits nor requires the use of multiple scenarios to forecast credit losses. Using multiple scenarios is likely more supportable from a statistical and model validation perspective, but it may be challenging for a validator to determine whether the various scenarios have been weighted properly to arrive at the correct, blended, “expected” outcome.

Macroeconomic Assumptions During the “Reasonable and Supportable” Period

Attempting to quantitatively support the macro assumptions during the “reasonable and supportable” forecast window (usually two to three years) is likely to be problematic both for the modeler and the validator. Such forecasts tend to be more art than science, and validators are likely best off trying to benchmark them against what others are using rather than attempting to justify them using elaborately contrived quantitative methods. The data that is most likely to be used may turn out to be simply the data that is available. Validators must balance skepticism of such approaches with pragmatism. Modelers have to use something, and they can only use the data they have.

Internal Data vs. Industry Data

The standard allows for modeling using internal data or industry proxy data. Banks often operate under the dogma that internal data (when available) is always preferable to industry data. This seems reasonable on its face, but it only really makes sense for institutions with internal data that is sufficiently robust in terms of quantity and history. And the threshold for what constitutes “sufficiently robust” is not always obvious. Is one business cycle long enough? Is 10,000 loans enough? These questions do not have hard and fast answers.

———-

Many questions pertaining to CECL model validations do not yet have hard and fast answers. In some cases, the answers will vary by institution as different banks adopt different policies. For others, industry best practices will doubtless emerge. For the rest, model validators will need to rely on judgment, sometimes having to balance statistical principles with accounting policy realities. The first CECL model validations are around the corner. It’s not too early to begin thinking about how to address these questions.


Houston Strong: Communities Recover from Hurricanes. Do Mortgages?

The 2017 hurricane season devastated individual lives, communities, and entire regions. As one would expect, dramatic increases in mortgage delinquencies accompanied these events. But the subsequent recoveries are a testament both to the resilience of the people living in these areas and to relief mechanisms put into place by the mortgage holders.

Now, nearly a year later, we wanted to see what the credit-risk transfer data (as reported by Fannie Mae CAS and Freddie Mac STACR) could tell us about how these borrowers’ mortgage payments are coming along.

The timing of the hurricanes’ impact on mortgage payments can be approximated by identifying when Current-to-30 days past due (DPD) roll rates began to spike. Barring other major macroeconomic events, we can reasonably assume that most of this increase is directly due to hurricane-related complications for the borrowers.
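For readers who want to reproduce this kind of roll-rate series, here is a hedged sketch against a long-format performance table; the columns 'loan_id', 'period', 'status', and 'msa' are placeholders rather than actual disclosure field names.

```python
# Hedged sketch: current-to-30 DPD roll rate by MSA and month.
import pandas as pd

def current_to_30_roll(perf: pd.DataFrame) -> pd.Series:
    nxt = perf.copy()
    nxt["period"] = nxt["period"] - pd.DateOffset(months=1)   # align m+1 onto m
    m = perf.merge(nxt, on=["loan_id", "period"], suffixes=("", "_next"))
    cur = m[m["status"] == "current"]                         # current at month m
    return (cur.assign(rolled=cur["status_next"] == "30")     # 30 DPD at month m+1
               .groupby(["msa", cur["period"].dt.to_period("M")])["rolled"].mean())
```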

[Chart]

The effect of the hurricanes is clear—Puerto Rico, the U.S. Virgin Islands, and Houston all experienced delinquency spikes in September. Puerto Rico and the Virgin Islands then experienced a second wave of delinquencies in October due to Hurricanes Irma and Maria.

But what has been happening to these loans since entering delinquency? Have they been getting further delinquent and eventually defaulting, or are they curing? We focus our attention on loans in Houston (specifically the Houston-The Woodlands-Sugar Land Metropolitan Statistical Area) and Puerto Rico because of the large number of observable mortgages in those areas.

First, we look at Houston. Because the 30-DPD peak was in September, we track that bucket of loans. To help us understand the path 30-DPD might reasonably be expected to take, we compared the Houston delinquencies to 30-DPD loans in the 48 states other than Texas and Florida.

[Charts]

Of this group of loans in Houston that were 30 DPD in September, we see that while many go on to be 60+ DPD in October, over time this cohort is decreasing in size.

Recovery is slower than for the non-hurricane-affected U.S. loans, but persistent. The biggest difference is that a significant number of 30-DPD loans in the rest of the country continue to hover at 30 DPD (rather than curing or progressing to 60 DPD), while the Houston cohort is more evenly split between the growing number of loans that cure and the shrinking number of loans progressing to 60+ DPD.

Puerto Rico (which experienced its 30 DPD peak in October) shows a similar trend:

[Charts]

To examine loans even more affected by the hurricanes, we can perform the same analysis on loans that reached 60 DPD status.

[Chart]

Here, Houston’s peak is in October while Puerto Rico’s is in November.

Houston vs. the non-hurricane-affected U.S.:

[Charts]

Puerto Rico vs. the non-hurricane-affected U.S.:

[Charts]

In both Houston and Puerto Rico, we see a relatively small 30-DPD cohort across all months and a growing Current cohort. This indicates that many borrowers are paying their way back to Current from 60+ DPD status. Compare this to the rest of the U.S., where more borrowers pay just enough to move to 30 DPD, but not enough to become Current.

The lack of defaults in post-hurricane Houston and Puerto Rico can be explained by several relief mechanisms Fannie Mae and Freddie Mac have in place. Chiefly, disaster forbearance gives borrowers some breathing room with regard to payment. The difference is even more striking among loans that were 90 days delinquent, where eventual default is not uncommon in the non-hurricane-affected U.S. grouping:

[Charts]

And so, both 30-DPD and 60-DPD loans in Houston and Puerto Rico proceed to more serious levels of delinquency at a much lower rate than similarly delinquent loans in the rest of the U.S. To see if this is typical for areas affected by hurricanes of a similar scale, we looked at Fannie Mae loan-level performance data for the New Orleans MSA after Hurricane Katrina in August 2005.

As the following chart illustrates, current-to-30 DPD roll rates peaked in New Orleans in the month following the hurricane:

[Chart]

What happened to these loans?

[Chart]

Here we see a relatively speedy recovery, with large decreases in the number of 60+ DPD loans and a sharp increase in prepayments. Compare this to non-hurricane-affected states over the same period, where the number of 60+ DPD loans held relatively constant, and the number of prepayments grew at a noticeably slower rate than in New Orleans.

[Chart]

The remarkable number of prepayments in New Orleans was largely due to flood insurance payouts, which effectively prepay delinquent loans. Government assistance lifted many others back to current. As of March, we do not see this behavior in Houston and Puerto Rico, where recovery is moving much more slowly. Flood insurance incidence rates are known to have been low in both areas, a likely suspect for this discrepancy.

While loans are clearly moving out of delinquency in these areas, it is at a much slower rate than the historical precedent of Hurricane Katrina. In the coming months we can expect securitized mortgages in Houston and Puerto Rico to continue to improve, but getting back to normal will likely take longer than what was observed in New Orleans following Katrina. Of course, the impending 2018 hurricane season may complicate this matter.

—————————————————————————————————————-

Note: The analysis in this blog post was developed using RiskSpan’s Edge Platform. The RiskSpan Edge Platform is a module-based data management, modeling, and predictive analytics software platform for loans and fixed-income securities. Click here to learn more.

 


Augmenting Internal Loan Data to Comply with CECL and Boost Profit

The importance of sound internal data gathering practices cannot be overstated. However, in light of the new CECL standard, many lending institutions have found themselves unable to meet its data requirements. This may have served as a wake-up call for organizations at all levels to look at their internal data warehousing systems and to identify and remedy the gaps in their strategies. For some institutions, it may be difficult to consolidate data siloed within various stand-alone systems. Other institutions, even after consolidating all available data, may lack sufficient loan count, timespan, or data elements to meet the CECL standard with internal data alone. This post will discuss some of the strategies for making up these shortfalls while data gathering systems and procedures are built and implemented for the future.

Identify Your Data

The first step is to identify the data that is available. As with many tasks, this is easier said than done. Often, organizations without formal data gathering practices and without a centralized data warehouse find themselves looking at multiple data storage systems across various departments and a multitude of ad-hoc processes implemented in times of need and never upgraded to a standardized solution. However, it is important to begin this process now, if it is not already underway.

As part of the data identification phase, it is important to keep track of not only the available variables, but also the length of time for which the data exists, and whether any time periods have missing or unreliable information. In most circumstances, to meet the CECL standard, institutions should have loan performance data that will cover a full economic cycle by the time of CECL adoption. Such data enables an institution to form grounded expectations of how assets will perform over their full contractual lives, across a variety of potential economic climates.

Some data points are required regardless of the CECL methodology, while others are necessary only for certain approaches. At this part of the data preparation process, it is more important to understand the big picture than it is to confirm only some of the required fields—it is wise to see what information is available, even if it may not appear relevant at this time. This will prove very useful for drafting the data warehousing procedures, and will allow for a more transparent understanding of requirements should the bank decide to use a different methodology in the future.

Choose Your CECL Methodology

There are many intricacies involved in choosing a CECL methodology. Each organization should determine both its capabilities and its needs. For example, the Vintage method has relatively simple calculations and limited data requirements, but provides little insight and control for management, and does not yield early model performance indicators. On the other hand, the Discounted Cash Flow method provides many insights and controls, and identifies model performance indicators preemptively, but requires more complex calculations and a very robust data history.

It is acceptable to implement a relatively simple methodology at first and begin utilizing more advanced methodologies in the future. Banks with limited historical data, but with procedures in place to ramp up data gathering and data warehousing capabilities, would be well served to implement a method for which all data needs are met. They can then work toward the goal of implementing a more robust methodology once enough historical data is available. However, if insufficient data exists to effectively implement a satisfactory methodology, it may be necessary to augment existing historical data with proxy data as a bridge solution while your data collections mature.

Augment Your Internal Data

Choose Proxy Data

Search for cost-effective datasets that give historical loan performance information about portfolios that are reasonably similar to your go-forward portfolio. Note that proxy portfolios do not need to perfectly resemble your portfolio, so long as either a) the data provider offers filtering capability that enables you to find the subset of proxy loans that matches your portfolio’s characteristics, or b) you employ segment- or loan-level modeling techniques that apply the observations from the proxy dataset in the proportions that are relevant to your portfolio. RiskSpan’s Edge platform contains a Data Library that offers historical loan performance datasets from a variety of industry sources covering multiple asset classes:

  • For commercial real estate (CRE) portfolios, we host loan-level data on all CRE loans guaranteed by the Small Business Administration (SBA) dating back to 1990. Data on loans underlying CMBS securitizations dating back to 1998, compiled by Trepp, is also available on the RiskSpan platform.
  • For commercial and industrial (C&I) portfolios, we also host loan-level data on all C&I loans guaranteed by the SBA dating back to 1990.
  • For residential mortgage loan portfolios, we offer large agency datasets (excellent, low-cost options for portfolios that share many characteristics with GSE portfolios) and non-agency datasets (for portfolios with unique characteristics or risks).
  • By Q3 2018, we will also offer data for auto loan portfolios and reverse mortgage portfolios (Home Equity Conversion Mortgages).

Note that for audit purposes, limitations of proxy data and consequent assumptions for a given portfolio need to be clearly outlined, and all severe limitations addressed. In some cases, multiple proxy datasets may be required. At this stage, it is important to ensure that the proxy data contains all the data required by the chosen CECL methodology. If such proxy data is not available, a different CECL model may be best.  

Prepare Your Data

The next step is to prepare internal data for augmentation. This includes standard data-keeping practices, such as accurate and consistent data headers, unique keys such as loan numbers and reporting dates, and confirmation that no duplicates exist. Depending on the quality of internal data, additional analysis may also be required. For example, all data fields need to be displayed in a consistent format according to the data type, and invalid data points, such as FICO scores outside the acceptable range, need to be cleansed. If the data is assembled manually, it is prudent to automate the process to minimize the possibility of user error. If automation is not possible, it is important to implement data quality controls that verify that the dataset is generated according to the metadata rules.

This stage provides the final opportunity to identify any data quality issues that may have been missed. For example, if, after cleansing the data for invalid FICO scores, it appears that the dataset has many invalid entries, further analysis may be required, especially if borrower credit score is one of the risk metrics used for CECL modeling. Once internal data preparation is complete, proxy metadata may need to be modified to be consistent with internal standards. This includes data labels and field formats, as well as data quality checks to ensure that consistent criteria are used across all datasets.
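Below is a minimal sketch of the kinds of checks described above, under hypothetical column names ('loan_id', 'report_date', 'fico'); a production pipeline would enforce a fuller metadata contract.

```python
# Minimal data-preparation checks; column names and valid ranges are
# illustrative assumptions, not a prescribed schema.
import pandas as pd

def prepare(df: pd.DataFrame) -> pd.DataFrame:
    dupes = df.duplicated(["loan_id", "report_date"])
    if dupes.any():
        raise ValueError(f"{dupes.sum()} duplicate loan-month records")
    df = df.assign(report_date=pd.to_datetime(df["report_date"]))  # consistent type
    bad_fico = ~df["fico"].between(300, 850)
    df.loc[bad_fico, "fico"] = pd.NA        # cleanse out-of-range credit scores
    return df
```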

Identify Your Augmentation Strategy

Once the internal data is ready and its limitations identified, analysts need to confirm that the proxy data addresses these gaps. (It is assumed at this stage that the proxy data chosen contains information for loans that are consistent with the internal portfolio, and that all proxy metadata items are consistent with internal metadata.) For example, if internal data is robust but has a short history, proxy data needs to cover the additional time periods for the life of the asset. In such cases, augmenting internal data is relatively simple: the datasets are joined and tested to ensure that the join was successful. Testing should also cover the known limitations of the proxy data, such as missing non-required fields or other data quality issues deemed acceptable during the research and analysis phase.

More often, however, a combination of data shortfalls leads to proxy data needs, including time-related gaps, data element gaps, or both. In such cases, the augmentation strategy is more complex. For optional data elements, a decision to exclude certain data columns is acceptable. However, when incorporating required elements that are inputs to the allowance calculation, the data must be used in a way that complies with regulatory requirements. If internal data has incomplete information for a given variable, statistical methods and machine learning tools are useful for incorporating the proxy data with the internal data and approximating the missing variable fields. Statistical testing is then used to verify that the relationships between actual and approximated figures are consistent with expectations, which are then verified by management or expert analysis. External research on economic or agency data, where applicable, can further be used to justify the estimated data assumptions. While rigorous statistical analysis is integral for the most accurate metrics, the qualitative analysis that follows is imperative for CECL model documentation and review.

Justify Your Proxy Data

Overlaps in time periods between internal loan performance datasets and proxy loan performance datasets are critical in establishing the applicability of the proxy dataset. A variety of similarity metrics can be calculated that compare the performance of the proxy loans with that of the internal loans during the period of overlap. Such similarity metrics can be put forward to justify the use of the proxy dataset. The proxy dataset can be useful for predictions even if the performance of the proxy loans is not identical to the performance of the institution’s loans. As long as there is a reliable pattern linking the performance of the two datasets, and no reason to think that pattern will discontinue, a risk-adjusting calibration can be justified and applied to the proxy data, or to the results of models built thereon.
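One simple version of such a justification, sketched with pandas under the assumption that monthly default-rate Series (indexed by month) have already been computed for the overlap window:

```python
# Sketch: similarity and calibration over the overlap window. The correlation
# gauges whether a reliable pattern links the two datasets; the ratio of means
# is one crude risk-adjusting calibration. Both are illustrative choices.
import pandas as pd

def similarity_and_scale(internal_dr: pd.Series, proxy_dr: pd.Series):
    overlap = internal_dr.index.intersection(proxy_dr.index)
    corr = internal_dr[overlap].corr(proxy_dr[overlap])               # pattern reliability
    scale = internal_dr[overlap].mean() / proxy_dr[overlap].mean()    # risk adjustment
    return corr, scale
```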

Why Augment Internal Data?

While the task of choosing an augmentation strategy may seem daunting, there are concrete benefits to supplementing internal data with a proxy rather than simply using the proxy data on its own. Most importantly, for the purpose of calculating the allowance for a given portfolio, incorporating some of the actual values will in most cases produce the most accurate estimate. For example, your institution may underwrite loans conservatively relative to the rest of the industry; incorporating at least some of the actual data associated with those lending practices will make it easier to understand how the proxy data differs from characteristics unique to your business.

More broadly, proxy data is useful beyond CECL reporting and has other applications that can boost bank profits. For example, lending institutions can build better predictive models based on richer datasets to calibrate loan screening and loan pricing decisions. These datasets can also be built into existing models to provide better insight on risk metrics and other asset characteristics, and to allow for more fine-tuned management decisions.

