
Calculating VaR: A Review of Methods


CONTRIBUTOR

Don Brown
Co-Head of Quantitative Analytics


Chapter 1
Introduction

Many firms now use Value-at-Risk (“VaR”) for risk reporting. Banks need VaR to report regulatory capital usage under the Market Risk Rule, as outlined in Fed and OCC regulations. Additionally, hedge funds now use VaR to report a unified risk measure across multiple asset classes. There are multiple approaches to VaR, so which method should we choose? In this brief paper, we outline a case for full revaluation VaR, in contrast to a simulated VaR using a “delta-gamma” approach to value assets.

The VaR for a position or book of business can be defined as some threshold T (in dollars) where the existing position, when faced with market conditions similar to some given historical period, will have P/L greater than T with probability k. Typically, k is chosen to be 95% or 99%. To compute this threshold T, we need to:

  1. Set a significance percentile k, a market observation period, and a holding period n.1
  2. Generate a set of future market conditions (“scenarios”) from today to period n.
  3. Compute a P/L on the position for each scenario

After computing each position’s P/L, we sum the P/L for each scenario and then rank the scenarios’ P/L to find the kth percentile (worst) loss.2 This loss defines our VaR T at the kth percentile for observation-period length n. Determining what significance percentile k and observation length n to use is straightforward and is often dictated by regulatory rules, for example 99th percentile 10-day VaR is used for risk-based capital under the Market Risk Rule. Generating the scenarios and computing P/L under these scenarios is open to interpretation. We cover each of these in the next two sections, with their advantages and drawbacks.
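For illustration, here is a minimal sketch of steps 1 through 3 as a historical, full-revaluation calculation. The risk-factor names, the toy pricing function, and the simulated history are hypothetical placeholders; only the scenario-building and ranking logic follows the description above.

```python
import numpy as np

def historical_var(base_factors, factor_history, revalue, k=0.99, horizon=10):
    """Full-revaluation historical VaR (hypothetical sketch).

    base_factors   : dict of current risk-factor levels
    factor_history : 2-D array (dates x factors) of historical factor levels
    revalue        : function mapping a dict of factor levels to a book value
    k              : significance percentile, e.g. 0.99
    horizon        : holding period n, in days
    """
    base_value = revalue(base_factors)
    names = list(base_factors)

    # Step 2: scenarios are the actual n-day changes over the look-back period
    changes = factor_history[horizon:] - factor_history[:-horizon]

    # Step 3: full revaluation of the position under each scenario
    pnl = [revalue({name: base_factors[name] + dx for name, dx in zip(names, row)}) - base_value
           for row in changes]

    # Rank the scenario P/Ls and take the k-th percentile (worst) loss
    return -np.percentile(pnl, 100 * (1 - k))

# Hypothetical usage: a toy book priced off two rate factors
rng = np.random.default_rng(0)
history = np.cumsum(rng.normal(0, 0.0005, size=(750, 2)), axis=0) + [0.02, 0.025]
base = {"rate_2y": 0.02, "rate_10y": 0.025}
price = lambda f: 1_000_000 * (1 - 4 * f["rate_2y"] - 7 * f["rate_10y"])  # toy pricer
print(round(historical_var(base, history, price), 0))
```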

Chapter 2
Generating Scenarios

To compute VaR, we first need to generate projected scenarios of market conditions. Broadly speaking, there are two ways to derive this set of scenarios:3

  1. Project future market conditions using a Monte Carlo simulation framework
  2. Project future market conditions using historical (actual) changes in market conditions

MONTE CARLO SIMULATION

Many commercial providers simulate future market conditions using Monte Carlo simulation. To do this, they must first estimate the distributions of risk factors, including correlations between risk factors. Using correlations that are derived from historical data makes the general assumption that correlations are constant within the period. As shown in the academic literature, correlations tend to change, especially in extreme market moves – exactly the kind of moves that tend to define the VaR threshold.4 By constraining correlations, VaR may be either overstated or understated depending on the structure of the position. To account for this, some providers allow users to “stress” correlations by increasing or decreasing them. Such a stress scenario is either arbitrary, or is informed by using correlations from yet another time-period (for example, using correlations from a time of market stress), mixing and matching market data in an ad hoc way.

Further, many market risk factors are highly correlated, which is especially true on the interest rate curve. To account for this, some providers use a single factor for rate-level and then a second or third factor for slope and curvature of the curve. While this may be broadly representative, this approach may not capture subtle changes on other parts of the curve. This limited approach is acceptable for non-callable fixed income securities, but proves problematic when applying curve changes to complex securities such as MBS, where the security value is a function of forward mortgage rates, which in turn is a multivariate function of points on the curve and often implied volatility.


HISTORICAL SIMULATION

RiskSpan projects future market conditions by using actual (observed) n-day changes in market conditions over the look-back period. For example, if we are computing 10-day VaR for regulatory capital usage under the Market Risk Rule, RiskSpan takes actual 10-day changes in market variables. This approach allows our VaR scenarios to account for natural changes in correlation under extreme market moves, such as occurs during a flight-to-quality where risky assets tend to underperform risk-free assets, and risky assets tend to move in a highly correlated manner. RiskSpan believes this is a more natural way to capture changing correlations, without the arbitrary overlay of how to change correlations in extreme market moves. This, in turn, will more correctly capture VaR.5
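A minimal sketch of this scenario-building step, assuming market history is held in a pandas DataFrame with one column per risk factor (the factor names in the usage comment are placeholders):

```python
import pandas as pd

def historical_scenarios(market_data: pd.DataFrame, horizon: int = 10) -> pd.DataFrame:
    """Build joint n-day change scenarios from observed market history.

    market_data : DataFrame indexed by date, one column per risk factor.
    horizon     : holding period n in days.

    Because each scenario row is taken from the same pair of historical dates
    across all factors, the co-movement observed in the market -- including
    correlation shifts during stress periods -- carries into the scenario set
    without any explicit correlation assumption.
    """
    return market_data.diff(horizon).dropna()

# Hypothetical usage with placeholder factor names
# scenarios = historical_scenarios(market_df[["ust_10y", "mbs_oas", "swaption_vol"]], horizon=10)
```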

 


Chapter 3
Calculating Simulated P/L


With the VaR scenarios defined, we move on to computing P/L under these scenarios. Generally, there are two methods employed:

  1. A Taylor approximation of P/L for each instrument, sometimes called “delta-gamma”
  2. A full revaluation of each instrument using its market-accepted technique for valuation

Market practitioners sometimes blend these two techniques, employing full revaluation where the valuation technique is simple (e.g. yield + spread) and using delta-gamma where revaluation is more complicated (e.g. OAS simulation on MBS).

 

DELTA-GAMMA P/L APPROXIMATION

Many market practitioners use a Taylor approximation or “delta-gamma” approach to valuing an instrument under each VaR scenario. For instruments whose price function is approximately linear across each of the m risk factors, users tend to use the first-order Taylor approximation, where the instrument price under the kth VaR scenario is given by

Pk ≈ P0 + Σi=1..m (∂P/∂xi) Δxi

making the price change in the kth scenario

ΔPk ≈ Σi=1..m (∂P/∂xi) Δxi

where ΔPk is the simulated price change, Δxi is the change in the ith risk factor under that scenario, and ∂P/∂xi is the price delta with respect to the ith risk factor evaluated at the base case. In many cases, these partial derivatives are approximated by bumping the risk factors up/down.6 If the instrument is slightly non-linear, but not non-linear enough to use a higher order approximation, then approximating a first derivative can be a source of error in generating simulated prices. For instruments that are approximately linear, using first order approximation is typically as good as full revaluation. From a computation standpoint, it is marginally faster but not significantly so. Instruments whose price function is approximately linear also tend to have analytic solutions to their initial price functions, for example yield-to-price, and these analytic solutions tend to be as fast as a first-order Taylor approximation. If the instrument is non-linear, practitioners must use a higher order approximation, which introduces second-order partial derivatives. For an instrument with m risk factors, we can approximate the price change in the kth scenario by using the multivariate second-order Taylor approximation

ΔPk ≈ Σi (∂P/∂xi) Δxi + ½ ΣiΣj (∂²P/∂xi∂xj) Δxi Δxj
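As a concrete illustration, the sketch below compares the first- and second-order approximations with full revaluation for a stylized non-callable bond and a large rate shock; the pricer and bump size are illustrative stand-ins, not a production model. For a simple, positively convex instrument like this, delta-gamma tracks full revaluation closely; the problems discussed next arise for instruments such as MBS, whose second derivatives are neither constant nor uniform across the curve.

```python
import numpy as np

def bond_price(y, coupon=0.035, maturity=30, freq=2):
    """Stylized fixed-coupon bond pricer (per 100 face) as a function of yield y."""
    periods = maturity * freq
    c = 100 * coupon / freq
    t = np.arange(1, periods + 1)
    return float(np.sum(c / (1 + y / freq) ** t) + 100 / (1 + y / freq) ** periods)

def taylor_pnl(y0, dy, bump=0.0001):
    """First- and second-order ('delta-gamma') P/L approximations.

    Partials are approximated with finite (secant-line) shifts, as noted in the
    text, using a +/- 1bp bump.
    """
    p0 = bond_price(y0)
    up, dn = bond_price(y0 + bump), bond_price(y0 - bump)
    delta = (up - dn) / (2 * bump)            # dP/dy
    gamma = (up - 2 * p0 + dn) / bump ** 2    # d2P/dy2
    first = delta * dy
    second = first + 0.5 * gamma * dy ** 2
    return first, second

y0, dy = 0.035, 0.0150                        # a large 150bp rate shock
full = bond_price(y0 + dy) - bond_price(y0)   # full revaluation P/L
d, dg = taylor_pnl(y0, dy)
print(f"full reval: {full:.2f}  delta only: {d:.2f}  delta-gamma: {dg:.2f}")
```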

To simplify the application of the second-order Taylor approximation, practitioners tend to ignore many of the cross-partial terms. For example, in valuing MBS under delta-gamma, practitioners tend to simplify the approximation by using the first derivatives and a single “convexity” term, which is the second derivative of price with respect to overall rates. Using this short-cut raises a number of issues:

  1. It assumes that the cross-partials have little impact. For many structured products, this is not true.7
  2. Since deltas for structured products are calculated using finite shifts, how exactly does one calculate second-order mixed partials?8
  3. For structured products, using a single, second-order “convexity” term assumes that the second order term with respect to rates is uniform across the curve and does not vary by where you are on the curve. For complex mortgage products such as mortgage servicing rights, IOs and Inverse IOs, convexity can vary greatly depending on where you look at the curve.

Using a second-order approximation assumes that the second order derivatives are constant as rates change. For MBS, this is not true in general. For example, in the graphs below we show a constant-OAS price curve for TBA FNMA 30yr 3.5%, as well as a graph of its “DV01”, or first derivative with respect to rates. As you can see, the DV01 graph is non-linear, implying that the convexity term (second derivative of the price function) is non-constant, rendering a second-order Taylor approximation a weak assumption. This is especially true for large moves in rate, the kind of moves that dominate the computation of the VaR.9

In addition to the assumptions above, we occasionally observe that commercial VaR providers compute 1-day VaR and, in the interest of computational savings, scale this 1-day VaR by √10 to generate 10-day VaR. This approximation only works if

  1. Changes in risk factors are all independently, identically distributed (no autocorrelation or heteroscedasticity)
  2. The asset price function is linear in all risk factors

In general, neither of these conditions holds, and using a scaling factor of √10 will likely yield an incorrect value for 10-day VaR.10
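A small simulation makes the point about condition (1): when daily P/L is autocorrelated (a simple AR(1) process here, purely for illustration), the √10-scaled 1-day VaR no longer matches the directly computed 10-day VaR.

```python
import numpy as np

rng = np.random.default_rng(1)
n, phi = 250_000, 0.3                         # sample size and AR(1) autocorrelation

# Simulate autocorrelated daily P/L:  x[t] = phi * x[t-1] + e[t]
eps = rng.normal(0.0, 1.0, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

ten_day = x.reshape(-1, 10).sum(axis=1)       # non-overlapping 10-day P/L

var_1d = -np.percentile(x, 1)                 # 99% 1-day VaR
var_10d = -np.percentile(ten_day, 1)          # 99% 10-day VaR computed directly
print(f"sqrt(10)-scaled 1-day VaR: {var_1d * np.sqrt(10):.2f}   direct 10-day VaR: {var_10d:.2f}")
```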

 

RATIONALIZING WEAKNESS IN THE APPROXIMATION

With the weaknesses in the Taylor approximation cited above, why do some providers still use delta-gamma VaR? Most practitioners will cite that the Taylor approximation is much faster than full revaluation for complex, non-linear instruments. While this seems true at first glance, you still need to:

  1. Compute or approximate all the first partial derivatives
  2. Compute or approximate some of the second partial derivatives and decide which are relevant or irrelevant. This choice may vary from security type to security type.

Neither of these tasks is computationally simple for the complex, path-dependent securities found in many portfolios. Further, the choice of which second-order terms to ignore has to be supported by documentation to satisfy regulators under the Market Risk Rule.

Even after approximating partials and making multiple, qualitative assessments of which second-order terms to include/exclude, we are still left with error from the Taylor approximation. This error grows with the size of the market move, which also tends to be the scenarios that dominate the VaR calculation. With today’s flexible cloud computation and ultra-fast, cheap processing, the Taylor approximation and its computation of partials ends up being only marginally faster than a full revaluation for complex instruments.11

With the weaknesses in Taylor approximation, especially with non-linear instruments, and the speed and cheapness of full revaluation, we believe that fully revaluing each instrument in each scenario is both more accurate and more straightforward than having to defend a raft of assumptions around the Taylor approximation.

Chapter 4
Conclusion


With these points in mind, what is the best method for computing VaR? Considering the complexity of many instruments, and considering the comparatively cheap and fast computation available through today’s cloud computing, we believe that calculating VaR using a historical-scenario, full revaluation approach provides the most accurate and robust VaR framework.

From a scenario generation standpoint, using historical scenarios allows risk factors to evolve in a natural way. This in turn captures actual changes in risk factor correlations, changes which can be especially acute in large market moves. In contrast, a Monte Carlo simulation of scenarios typically allows users to “stress” correlations, but these stress scenarios are arbitrary, which may ultimately lead to misstated risk.

From a valuation standpoint, we feel that full revaluation of assets provides the most accurate representation of risk, especially for complex instruments such as ABS and MBS. The assumptions and errors introduced in the Taylor approximation may overwhelm any minor savings in run-time, given today’s powerful and cheap cloud analytics. Further, the Taylor approximation forces users to make and defend qualitative judgements of which partial derivatives to include and which to ignore. This greatly increases the management burden around VaR, as well as the regulatory scrutiny around justifying these assumptions.

In short, we believe that a historical scenario, full-revaluation VaR provides the most accurate representation of VaR, and that today’s cheap and powerful computing make this approach feasible for most books and trading positions. For VaR, it’s no longer necessary to settle for second-best.


ENDNOTES

1 The holding period n is typically one day, ten days, or 21 days (a business-month) although in theory it can be any length period.
 
2 We can also partition the book into different sub-books or “equivalence classes” and compute VaR on each class in the partition. The entire book is the trivial partition.
 
3 There is a third approach to VaR: parametric VaR, where the distributions of asset prices are described by well-known distributions such as the Gaussian. Given the often-observed heavy-tail distributions, combined with difficulties in valuing complex assets with non-linear payoffs, we will ignore parametric VaR in this review.
 
4 The academic literature contains many papers on increased correlation during extreme market moves, for example [5]

5 For example, a bank may have positions in two FX pairs that are poorly correlated in normal times and highly negatively correlated in times of stress. If a 99%ile worst-move coincides with a stress period, then the aggregate P/L from the two positions may offset each other. If we used the overall correlation to drive a Monte Carlo simulated VaR, the calculated VaR could be much higher.

6 This is especially common in MBS, where the first and second derivatives are computed using a secant-line approximation after shifting risk factors, such as shifting rates ± 25bp.

7 For example, as rates fall and a mortgage becomes more refinanceable, the mortgage’s exposure to implied volatility also increases, implying that the cross-partial for price with respect to rates and vol is non-zero.

8 Further, since we are using finite shifts, the typical assumption that ƒxy = ƒyx which is based on the smoothness of ƒ(x,y) does not necessarily hold. Therefore, we need to compute two sets of cross partials, further increasing the initial setup time.

9 Why is the second derivative non-constant? As rates move significantly, prepayments stop rising or falling. At these “endpoints,” cash flows on the mortgage change little, making the instrument positively convex like a fixed-amortization-schedule bond. In between, changes in prepayments cause the mortgage to extend or shorten as rates rise or fall, respectively, which in turn makes the MBS negatively convex.

10 Much has been written on the weakness of this scaling, see for example [7]

11 For example, using a flexible computation grid RiskSpan can perform a full OAS revaluation on 20,000 MBS passthroughs using a 250-day lookback period in under one hour. Lattice-solved options are an order of magnitude faster, and analytic instruments such as forwards, European options, futures and FX are even faster.



Residential Mortgage REIT: End to End Loan Data Management and Analytics

An inflexible, locally installed risk management system with dated technology required a large IT staff to support it and was incurring high internal maintenance costs.

Absent a single solution, the use of multiple vendors for pricing and risk analytics, prepay/credit models and data storage created inefficiencies in workflow and an administrative burden to maintain.

Inconsistent data and QC across the various sources was also creating a number of data integrity issues.

The Solution

An end-to-end data and risk management solution. The REIT implemented RiskSpan’s Edge Platform, which provides value, cost and operational efficiencies.

  • Scalable, cloud-native technology
  • Increased flexibility to run analytics at loan level; additional interactive / ad-hoc analytics
  • Reliable, accurate data with more frequent updates

Deliverables 

Consolidating from five vendors down to a single platform enabled the REIT to streamline workflows and automate processes, resulting in a 32% annual cost savings and 46% fewer resources required for maintenance.


RS Edge for Loans & Structured Products: A Data Driven Approach to Pre-Trade and Pricing  

The non-agency residential mortgage-backed securities (RMBS) market has high expectations for increased volume in 2020. Driven largely by expected changes to the qualified mortgage (QM) patch, private-label securities (PLS) issuers and investors are preparing for a 2020 surge. The tight underwriting standards of the post-crisis era are loosening and will continue to loosen if debt-to-income restrictions are lifted with changes to the QM patch.

PLS programs can differ greatly. It’s increasingly important to understand the risks inherent in each underlying pool. At the same time, investment opportunities with substantial yield are becoming harder to find without developing a deep understanding of the riskier components of the capital structure. A structured approach to pre-trade and portfolio analytics can help mitigate some of these challenges. Using a data-driven approach, portfolio managers can gain confidence in the positions they take and make data-influenced pricing decisions.

Industry best practice for pre-trade analysis is to employ a holistic approach to RMBS. To do this, portfolio managers must combine analysis of loan collateral, historical data for similar cohorts of loans (within previous deals), and scenario analysis for projected performance. The foundation of this approach is:

  • Historical data can ground assumptions about projected performance 
  • A consistent approach from deal to deal will illuminate shifting risks from shifting collateral 
  • Scenario analysis will inform risk assessment and investment decisions

Analytical Framework 

RiskSpan’s modeling and analytics expert, Janet Jozwik, suggests a framework for analyzing a new RMBS deal with analysis of 3 main components:  deal collateral, historical performance, and scenario forecasting. Combined, these three components give portfolio managers a present, past, and future view into the deal.  

Present: Deal Collateral Analysis 

Deal collateral analysis consists of: 1) a deep dive into the characteristics of the collateral underlying the deal itself, and 2) a comparison of the collateral characteristics of the deal being analyzed to similar deals. A comparison to recently issued deals can highlight shifts in underlying collateral risk within a particular shelf or across issuers.  

Below, RiskSpan’s RS Edge provides the portfolio manager with a dashboard highlighting key collateral characteristics that may influence deal performance. 

Example 1: Deal Profile Stratification


Example 2: Deal Comparative Analysis


Past: Historical Performance Analysis 

Historical analysis informs users of a deal’s potential performance under different scenarios by looking at how similar loan cohorts from prior deals have performed. Jozwik recommends analyzing historical trends both from the recent past and from historical stress vintages to give a sense of what the expected performance of the deal will be, and what the worst-case performance would be under stress scenarios.

Recent Trend Analysis:  Portfolio managers can understand expected performance by looking at how similar deals have been performing over the prior 2 to 3 years. There are a significant number of recently issued PLS that can be tracked to understand recent prepayment and default trends in the market. While the performance of these recent deals doesn’t definitively determine expectations for a new deal (as things can change, such as rate environment), it provides one data point to help ground data-driven analyses. This approach allows users to capitalize on the knowledge gained from prior market trends.  

Historical Vintage Proxy Analysis:  Portfolio managers can understand stressed performance of the deal by looking at the performance of similar loans from vintages that experienced the stress environment of the housing crisis. Though potentially cumbersome to execute, this approach leverages the rich set of historical performance data available in the mortgage space.

For a new RMBS deal, portfolio managers can review the distribution of key features, such as FICO, LTV, and documentation type. They can calculate performance metrics, such as cumulative loss and default rates, from a wide set of historical performance data on RMBS, cut by vintage. When pulling these historical numbers, portfolio managers can adjust the population of loans to better align with the distribution of key loan features in the deal they are analyzing. In this way, they can get a view into how a similar pool of loans originated in historical vintages, such as 2007, performed. There are certainly underwriting changes that have occurred in the post-crisis era that would likely make this analysis ultraconservative. These ‘proxy cohorts’ from historical vintages can provide an alternative insight into what could happen in a worst-case scenario.
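A minimal pandas sketch of this vintage-proxy idea. The table layout and column names (vintage, fico, ltv, defaulted, loss_amount, orig_upb) are hypothetical, not a specific dataset schema:

```python
import pandas as pd

def vintage_proxy_metrics(perf: pd.DataFrame, vintage: int,
                          fico_range=(680, 740), ltv_range=(75, 95)) -> pd.Series:
    """Cumulative default and loss rates for a proxy cohort from a stress vintage.

    perf : hypothetical loan-level history, one row per loan, with columns
           vintage, fico, ltv, defaulted (0/1), loss_amount, orig_upb.
    """
    cohort = perf[
        (perf["vintage"] == vintage)
        & perf["fico"].between(*fico_range)
        & perf["ltv"].between(*ltv_range)
    ]
    return pd.Series({
        "loan_count": len(cohort),
        "cum_default_rate": cohort["defaulted"].mean(),
        "cum_loss_rate": cohort["loss_amount"].sum() / cohort["orig_upb"].sum(),
    })

# Hypothetical usage: a 2007-vintage proxy aligned to the new deal's FICO/LTV profile
# print(vintage_proxy_metrics(rmbs_history, vintage=2007))
```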

Future: Forecasting Scenario Analysis 

Forecasting analysis should come in two flavors. First, very straightforward scenarios that are explicitly transparent about assumptions for CPR, CDR, and severity. These assumptions-based scenarios can be informed with outputs from the Historical Performance Analysis above.  

Second, forecasting analysis can leverage statistical models that consider both loan features and macroeconomic inputs. Scenarios can be built around macroeconomic inputs to the model to better understand how collateral and bond performance will change with changing economic conditions.  Macroeconomic inputs, such as mortgage rates and home prices, can be specified to create particular scenario runs. 

How RiskSpan Can Help 

Pulling the required data and models together is typically a burden. RiskSpan’s RS Edge has solved these issues and now offers one integrated solution for:

  • Historical Data: Loan-level performance and collateral data on historical and pre-issue RMBS deals 
  • Predictive Models: Credit and Prepayment models for non-agency collateral types 
  • Deal Cashflow Engine: Seamless integration with Intex, the leading source for RMBS deal cashflow libraries 

There is a rich source of data, models, and analytics that can support decision making in the RMBS market. The challenge for a portfolio manager is piecing these often-disparate pieces of information together into a cohesive analysis that can provide a consistent view from deal to deal. Further, there is a massive amount of historical data in the mortgage space, containing a vast wealth of insight to help inform investment decisions. However, these datasets are notoriously unwieldy. Users of RS Edge cut through the complications of large, disparate datasets for clear, informative analysis, without the need for custom-built technology or analysts with advanced coding skills.


Introducing: RS Edge for Loans and Structured Products

RiskSpan Introduces RS Edge for Loans and Structured Products  

RiskSpan, the leading mortgage data and analytics provider, is excited to announce the release of RS Edge for Loans and Structured Products. 

RS Edge is the next generation of RiskSpan’s data, modeling, and analytics platform that manages portfolio risk and delivers powerful analysis for loans and structured products.  Users can derive insights from historical trends and powerful predictive forecasts under a range of economic scenarios on our cloud-native solution. RS Edge streamlines analysis by bringing together key industry data and integrations with leading 3rd party vendors. 

An on-demand team of data scientists, quants, and technologists with fixed-income portfolio expertise supports the integration, calibration, and operation across all RS Edge modules.

RMBS Analytics in Action 

RiskSpan has developed a holistic approach to RMBS analysis that combines loan collateral, historical, and scenario analysis with deal comparison tools to more accurately predict future performance. Asset managers can define an acceptable level of risk and ground pricing decisions with data-driven analysis. This approach illuminates risk from shifting collateral and provides investors with confidence in their positions. 

Loan Analytics in Action 

Whole loan asset managers and investors use RiskSpan’s Loan Analytics to enhance and automate partnerships with Non-Qualified Mortgage originators and servicers. The product enhances the on-boarding, pricing analytics, forecasting, and storage of loan data for historical trend analytics. RS Edge forecasting analytics support ratesheet validation and loan pricing.

About RiskSpan 

RiskSpan provides innovative technology and services to the financial services industry. Our mission is to eliminate inefficiencies in loans and structured finance markets to improve investors’ bottom line through incremental cost savings, improved return on investment, and mitigated risk.  

RiskSpan is holding a webinar on November 6 to show how RS Edge pulls together past, present, and future for insights into new RMBS deals. Click below to register.


Navigating the Impact of ASU 2016-13 on AFS Securities

In Collaboration With Our Partners at Grant Thornton

Navigating the impact of ASU 2016-13 on the impairment of AFS debt securities

When the Financial Accounting Standards Board (FASB) issued Accounting Standards Update (ASU) 2016-13, Financial Instruments – Credit Losses, in June of 2016, most of the headlines regarding the ASU focused on its introduction of Subtopic 326-20, commonly referred to as the Current Expected Credit Losses (or, “CECL”) framework.  The CECL framework requires entities to measure lifetime expected credit losses on all financial instruments measured at amortized cost – financial assets like loans receivable and held-to-maturity debt securities.  The focus on the CECL framework was understandable – it represents a sea change in the accounting for a significant class of assets for many entities, particularly lending institutions.

However, ASU 2016-13 affected the accounting for credit losses on other financial instruments as well, such as debt securities held as available-for-sale (or “AFS”).  Below, we will discuss how ASU 2016-13 changed the accounting for credit losses on AFS debt securities.

AFS Framework prior to adopting ASU 2016-13:  OTTI

Prior to an entity’s adoption of ASU 2016-13, the guidance concerning impairment of AFS debt securities is found in Subtopic 320-10, particularly in paragraphs 320-10-35-18 through 35-34, and is known as the Other-Than-Temporary Impairment (or “OTTI”) framework.

Generally, AFS debt securities are carried on the balance sheet at fair value, and changes in the fair value of AFS debt securities are recognized outside of earnings as a component of Other Comprehensive Income (OCI). However, if an AFS debt security’s fair value is less than its amortized cost – that is, the AFS debt security is impaired – the entity must evaluate whether the impairment is an OTTI.

An entity should recognize an OTTI on an impaired security when one of three conditions exists:

  1. The entity intends to sell the security
  2. It is more likely than not the entity will be required to sell the security prior to recovery of the amortized cost basis of the security
  3. The entity does not expect to recover the amortized cost basis of the security

If condition (1) or (2) exists, then the entity will reduce the amortized cost basis of the AFS debt security to its current fair value.  Any subsequent increases in the fair value of the AFS debt security would be recognized outside of earnings as a component of OCI until the gains are realized via cash collection or sale.

If neither condition (1) nor (2) exists, then the entity must evaluate whether it does not expect to recover the amortized cost basis of the security.  The entity may perform a qualitative analysis, considering factors such as the magnitude of the impairment, the duration of the impairment, factors relevant to the issuer of the security, factors relevant to the industry in which the issuer of the security operates, and any other relevant information.  Alternatively, an entity may perform a quantitative analysis by comparing the net present value (NPV) of expected cash flows of the AFS debt security to its amortized cost basis, as described below.

If the entity does not expect to recover the amortized cost basis of the security, an OTTI exists and the security should be written down to its fair value.  The entity must then separate the total impairment (the amount by which the AFS debt security’s amortized cost exceeds its fair value) between the amount of impairment related to (a) credit losses and (b) all other factors.  To make this distinction, the entity compares the NPV of the expected future cash flows on the debt security, discounted at the security’s effective interest rate (or “EIR”), to the amortized cost basis of the security.   The amount by which the amortized cost of the AFS debt security exceeds its NPV is recognized in earnings as a credit loss, while any remaining impairment is recognized outside of earnings as a component of OCI.

AFS Framework upon adopting ASU 2016-13

ASU 2016-13 largely keeps the OTTI framework from Subtopic 320-10 intact.  If either (1) an entity intends to sell, or (2) it is more likely than not that it will be required to sell an AFS debt security whose amortized cost exceeds its fair value, the entity shall write that AFS debt security’s amortized cost basis down to its fair value through earnings.  For AFS debt securities that are impaired, but for which neither (1) the entity intends to sell, nor (2) it is more likely than not that it will be required to sell an AFS debt security whose amortized cost exceeds its fair value, the entity will still need to assess whether it expects to recover the amortized cost basis of the impaired AFS debt security either via a qualitative analysis or via the same quantitative framework in Subtopic 320-10 today (as described above).

However, ASU 2016-13 makes a few important changes.  The most significant changes include:

  • Entities may no longer consider the duration of an impairment when qualitatively assessing whether the entity does not expect to recover the amortized cost basis of an impaired AFS debt security.
  • If an entity recognizes a credit loss on an AFS debt security, the entity will establish an allowance for credit loss (or “ACL”) rather than perform a direct write-down of the amortized cost basis of the AFS debt security. Accordingly, subsequent reductions in the estimated ACL will be recognized in earnings as they occur.
  • The amount of credit losses to be recognized is limited by a “fair value floor” – that is, total credit losses cannot exceed the total amount by which the amortized cost of the AFS debt security exceeds its fair value.
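To summarize the evaluation sequence described above, here is a minimal sketch of the logic in code. The function signature and inputs are hypothetical and purely illustrative; it is not an authoritative implementation of ASC 326-30.

```python
def afs_impairment(amortized_cost, fair_value, intends_to_sell,
                   likely_required_to_sell, credit_loss_indicated,
                   npv_expected_cash_flows=None):
    """Returns (writedown_through_earnings, allowance_for_credit_loss, oci_amount)."""
    if fair_value >= amortized_cost:
        return 0.0, 0.0, 0.0                      # not impaired

    impairment = amortized_cost - fair_value

    # (1) intends to sell, or (2) more likely than not required to sell:
    # write the amortized cost basis down to fair value through earnings
    if intends_to_sell or likely_required_to_sell:
        return impairment, 0.0, 0.0

    # Otherwise, assess (qualitatively or quantitatively) whether a credit loss exists
    if not credit_loss_indicated:
        return 0.0, 0.0, impairment               # entire impairment stays in OCI

    # Credit loss: ACL = amortized cost - NPV of expected cash flows at the EIR,
    # limited by the "fair value floor"; the remainder of the impairment goes to OCI
    acl = min(amortized_cost - npv_expected_cash_flows, impairment)
    return 0.0, acl, impairment - acl
```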

The following flow chart illustrates how an entity would evaluate an AFS debt security for impairment upon adoption of ASU 2016-13:

Example

[Flow chart: evaluating an AFS debt security for impairment under ASU 2016-13]

Background

  • Entity A has an investment in an AFS debt security issued by Company X with an amortized cost of $100
  • At 12/31/X1, the fair value of the AFS debt security is $90
  • The effective interest rate (EIR) of the AFS debt security is 10% (as determined in accordance with ASC 310-20)

Entity A does not intend to sell the AFS debt security, nor is it more likely than not that Entity A will be required to sell the AFS debt security prior to recovery of the amortized cost basis.  Entity A elects to perform a qualitative analysis to determine whether the AFS debt security has experienced a credit loss.  In performing that qualitative assessment, Entity A considers the following:

  • Extent of impairment: 10%
  • Adverse conditions: Company X is in an industry that is in decline
  • Company X’s credit rating was recently downgraded

Accordingly, Entity A determines that a credit loss has occurred.  Next, Entity A makes its best estimate of expected future cash flows, and discounts those cash flows to their NPV at the AFS debt security’s EIR of 10% as follows:

Future Expected Cash Flows

In this case, the NPV is $85, which would indicate a $15 ACL.  However, the fair value of the AFS debt security is $90, so the ACL is limited to $10 due to the “fair value floor”.  Accordingly, Entity A would recognize a credit loss expense of $10 and create an ACL, also for $10.
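The arithmetic of this example can be sketched as follows. The individual expected cash flow amounts are hypothetical values chosen so that the NPV at a 10% EIR comes to roughly $85, consistent with the figures above.

```python
# Hypothetical expected cash flows at the end of years 1, 2, and 3
expected_cash_flows = [10.0, 10.0, 90.0]
eir = 0.10
amortized_cost = 100.0
fair_value = 90.0

npv = sum(cf / (1 + eir) ** t for t, cf in enumerate(expected_cash_flows, start=1))
# npv is approximately 85

total_impairment = amortized_cost - fair_value          # 10 (the fair value floor)
acl = min(amortized_cost - npv, total_impairment)       # min(~15, 10) = 10

print(f"NPV = {npv:.2f}, ACL = {acl:.2f}, credit loss expense = {acl:.2f}")
```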

In subsequent periods, Entity A would continue to determine the NPV of future expected cash flows and adjust the ACL up or down as those changes occur, subject to the fair value floor.

About the Author

Graham Dyer, CPA, Grant Thornton

Graham is a partner in Grant Thornton, LLP’s national office where he provides technical accounting guidance to clients across the globe.  Graham has a particular focus on financial institutions, including matters such as the ALLL, consolidations, Purchased Credit Impaired loan income recognition, complex financial instruments, business combinations, and SOX/FDICIA matters. ​

Graham also serves on a number of industry technical committees, including the IASB’s IFRS 9 Impairment Transition Group and the FASB’s CECL Transition Resource Group.  Graham was previously a professional accounting fellow at the OCC.


RiskSpan Joins AICPA for CECL Task Force Auditing Subgroup Meeting

RiskSpan joined a dozen other vendors and auditors from the top-ten accounting firms for the AICPA’s CECL Task Force Auditing Subgroup meeting at Ernst & Young’s offices in New York on April 29th. The AICPA just released the “Key takeaways” from the meeting.

Among those key takeaways are:

  • Overarching Themes:
    • CECL is a “fresh start” from the incurred loss model.
      • CECL model estimates will be evaluated against ASC 326, not anchored to incurred loss model estimates.
      • Management may find it useful in validating their CECL model to understand what drove changes from ALLL levels today. However, management should be aware of potential anchoring, confirmation, availability biases that might occur when implementing the new standard.
  • Qualitative Adjustment Factors:
    • Conceptually, qualitative adjustments compensate for known limitations of the model. A less sophisticated model will likely require more qualitative adjustments and those adjustments may be greater in magnitude. Conversely, a more sophisticated model will likely require fewer qualitative adjustments and those adjustments may be less in magnitude
    • Due to fundamental changes in the model, nature and magnitude of the qualitative adjustments in the CECL model should be independently generated and not anchored to, or grounded in, the qualitative adjustments used in the current incurred loss model.
    • Management should not pre-determine the magnitude of the adjustment and then produce documentation to support it – the amount should be determined by a rigorous, repeatable, well documented process with appropriate internal controls around that process.
    • Adjustments to historical information and forecasts could be negative, positive, or no change. Regardless, it is important for management to understand, document, and support their rationale in all three scenarios.
  • Forecasting/Reversion
    • Forecasting
      • Reasonable and supportable forecasts should be objectively supported, analyzed and appropriately updated in a timely manner.
        • Adjustments should be determined through a concrete sequential thought process (rather than calculated and backed into).
        • Transition from reasonable and supportable forecasts to reversion techniques should be specific to the circumstances (i.e. reversion period and method may change, depending on economic conditions).
      • Should be developed by parties with relevant expertise
      • Should have internal controls in place over the selection of forecasted data and the source
      • Forecasted economic data utilized should be relevant to the portfolio (i.e. data specific to lending market may be more relevant than general, country-wide data).
      • Multiple scenarios
        • No requirement to consider multiple scenarios but may be helpful
        • Need robust support for the weighting used, which may be challenging
  • Data
    • Data used in models should be subject to controls that are designed to ensure completeness, accuracy and relevance to the portfolio (i.e., similar economic conditions, loan structure and underwriting). Data will also need to be available to external auditors for substantive testing.
    • Data should be evaluated for consistency – is the data consistent period over period (i.e., definition of default)?
    • Data aggregated by vendors may not have previously been subject to traceable, internal controls. Vendors, management, auditors and other interested parties must consider how to address such industry limitations prior to standard implementation.
    • If management is not able to validate the data (relevance, reliability and consistency), that data may be difficult to use in the financial reporting process.

RiskSpan joined the AICPA’s CECL Task Force Auditing Subgroup for a second meeting on June 27th. We will publish the “Key Takeaways” from that meeting when they are released.

Institutions are invited to reach out to us with any questions.


CRT Deal Monitor: April 2019 Update


CRT Deal Monitor: Understanding When Credit Becomes Risky 

This analysis tracks several metrics related to deal performance and credit profile, putting them into a historical context by comparing the same metrics for recent-vintage deals against those of ‘similar’ cohorts in the time leading up to the 2008 housing crisis.  

Some of the charts in this post have interactive features, so click around! We’ll be tweaking the analysis and adding new metrics in subsequent months. Please shoot us an email if you have an idea for other metrics you’d like us to track. 

Monthly Highlights: 

The seasonal nature of recoveries is an easy-to-spot trend in our delinquency outcome charts (loan performance 6 months after being 60 days past due). Viewed from a very high level, both Fannie Mae and Freddie Mac display this trend, with visible oscillations in the split between loans that end up current and those that become more delinquent (move to 90+ days past due (DPD)). This trend is also consistent both before and after the crisis – the shares of loans that stay 60 DPD and move to 30 DPD are relatively stable. You can explore the full history of the FNMA and FHLMC Historical Performance Datasets by clicking the 6-month roll links below, and then clicking the “Autoscale” button in the top-right of the graph.

This trend is salient in April of 2019, as both Fannie Mae Connecticut Avenue Securities (CAS) and Freddie Mac Structured Agency Credit Risk (STACR) have seen 6 months of steady decreases in loans curing, and a steady increase in loans moving to 90+ DPD. While both CAS and STACR hit lows for recovery to current – similar to lows at the beginning of 2018 – it is notable that both CAS and STACR saw multi-year highs for recovery to current in October of 2018 (see Delinquency Outcome Monitoring links below). While continued US economic strength is likely responsible for the improved performance in October, it is not exactly clear why the oscillation would move the recoveries to current back to the same lows experienced in early 2018.  

Current Performance and Credit Metrics

Delinquency Trends:

The simplest metric we track is the share of loans across all deals that is 60+ days past due (DPD). The charts below compare STACR (Freddie) vs. CAS (Fannie), with separate charts for high-LTV deals (G2 for CAS and HQA for STACR) vs. low-LTV deals (G1 for CAS and DNA for STACR).

For comparative purposes, we include a historical time series of the share of loans 60+ DPD for each LTV group. These charts are derived from the Fannie Mae and Freddie Mac loan-level performance datasets. Comparatively, today’s deal performance is much better than even the pre-2006 era.

Low LTV Deals 60 DPD

High LTV Deals 60 DPD

Delinquency Outcome Monitoring:

The tables below track the status of loans that were 60+ DPD. Each bar in the chart represents the population of loans that were 60+ DPD exactly 6 months prior to the x-axis date.  

The choppiness and high default rates in the first few observations of the data are related to the very low counts of delinquent loans as the CRT program ramped up.  
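A pandas sketch of this roll analysis, assuming a hypothetical loan-level performance table with one row per loan per month; the column names and status labels are placeholders, not the actual CRT dataset layout:

```python
import pandas as pd

def six_month_roll(perf: pd.DataFrame) -> pd.DataFrame:
    """Distribution of outcomes 6 months after a loan was 60+ days past due.

    perf : one row per loan per month, with columns loan_id, activity_period
           (month-end Timestamp), and dpd_status (e.g., 'current', '30_dpd',
           '60_dpd', '90+_dpd', 'default', 'prepaid').
    """
    start = perf[perf["dpd_status"].isin(["60_dpd", "90+_dpd"])].copy()
    start["outcome_period"] = start["activity_period"] + pd.DateOffset(months=6)

    outcomes = start.merge(
        perf[["loan_id", "activity_period", "dpd_status"]],
        left_on=["loan_id", "outcome_period"],
        right_on=["loan_id", "activity_period"],
        suffixes=("", "_after"),
        how="left",
    )
    # Share of each outcome, grouped by the month the loans were observed 60+ DPD
    return (outcomes.groupby("activity_period")["dpd_status_after"]
                    .value_counts(normalize=True)
                    .unstack(fill_value=0))
```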

STACR 6 Month Roll

CAS 6 Month Roll

The table below repeats the 60-DPD delinquency analysis for the Freddie Mac Loan Level Performance dataset leading up to and following the housing crisis. (The Fannie Mae loan level performance set yields a nearly identical chart.) Note how many more loans in these cohorts remained delinquent (rather than curing or defaulting) relative to the more recent CRT loans.

Fannie Performance 6 Month Roll

Freddie Performance 6 Month Roll

Deal Profile Comparison:

The tables below compare the credit profiles of recently issued deals. We focus on the key drivers of credit risk, highlighting the comparatively riskier features of a deal. Each table separates the high-LTV (80%+) deals from the low-LTV deals (60%-80%). We add two additional columns for comparison purposes. The first is the ‘Coming Cohort,’ which is meant to give an indication of what upcoming deal profiles will look like. The data in this column is derived from the most recent three months of MBS issuance loan-level data, controlling for the LTV group. These are loans newly originated and acquired by the GSEs; considering that CRT deals are generally issued with an average loan age between 6 and 15 months, these are the loans that will most likely wind up in future CRT transactions. The second comparison cohort consists of 2006 originations in the historical performance datasets (Fannie and Freddie combined), controlling for the LTV group. We supply this comparison as context for the level of risk that was associated with one of the worst-performing cohorts.

Credit Profile LLTV – Click to see all deals

Credit Profile HLTV – Click to see all deals

Deal Tracking Reports:

Please note that defaults are reported on a delay for both GSEs, and so while we have CPR numbers available for the most recent month, CDR numbers are not provided because they are not fully populated yet. Fannie Mae CAS default data is delayed an additional month relative to STACR. We’ve left loss and severity metrics blank for fixed-loss deals.

STACR Performance – Click to see all deals

CAS Performance – Click to see all deals


RiskSpan Edge & CRT Data

For participants in the credit risk transfer (CRT) market, managing the massive quantity of data to produce clear insights into deal performance can be difficult and demanding on legacy systems. Complete analysis of the deals involves bringing together historical data, predictive models, and deal cash flow logic, often leading to a complex workflow in multiple systems. RiskSpan’s Edge platform (RS Edge) solves these challenges, bringing together all aspects of CRT analysis. RiskSpan is the only vendor to bring together everything a CRT analyst needs:  

  • Normalized, clean, enhanced data across programs (STACR/CAS/ACIS/CIRT),
  • Historical Fannie/Freddie performance data normalized to a single standard,
  • Ability to load loan-level files related to private risk transfer deals,
  • An Agency-specific, loan-level, credit model,
  • Seamless Intex integration for deal and portfolio analysis,
  • Scalable scenario analysis at the deal or portfolio level, and
  • Vendor and client model integration capabilities.

Deal Comparison Table

All of these features are built into RS Edge, a cloud-native data and analytics platform for loans and securities. The RS Edge user interface is accessible via any web browser, and the processing engine is accessible via an application programming interface (API). Accessing RS Edge via the API opens up the full functionality of the platform, with direct integration into existing workflows in legacy systems such as Excel, Python, and R (a simple illustration follows the list below). To tailor RS Edge to the specific needs of a CRT investor, RiskSpan is rolling out a series of Excel tools, built using our APIs, which allow for powerful loan-level analysis from the tool everyone knows and loves. Accessing RS Edge via our new Excel templates, users can:

  • Track deal performance,
  • Compare deal profiles,
  • Research historical performance of the full GSE population,
  • Project deal and portfolio performance with our Agency-specific credit model or with user-defined CPR/CDR/severity vectors, and
  • Analyze various macro scenarios across deals or a full portfolio
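As a simple illustration of API-based access, the sketch below pulls deal-level performance into pandas over HTTP and writes it to Excel. The endpoint URL, query parameters, and response layout shown are hypothetical placeholders, not the actual RS Edge API.

```python
import pandas as pd
import requests

# Hypothetical endpoint and parameters -- for illustration only
BASE_URL = "https://api.example.com/rsedge/v1"          # placeholder, not the real host
params = {"deal": "STACR 2019-DNA1", "metric": "cpr_cdr", "as_of": "2019-04-30"}
headers = {"Authorization": "Bearer <api-token>"}       # placeholder token

resp = requests.get(f"{BASE_URL}/deal-performance", params=params,
                    headers=headers, timeout=30)
resp.raise_for_status()

# Assume the response is a JSON list of records; load it for use in Excel/Python/R workflows
perf = pd.DataFrame(resp.json())
perf.to_excel("deal_performance.xlsx", index=False)
```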

Loan Attribute Distributions

The web-based user interface allows for on-demand analytics, giving users specific insights on deals as the needs arise. The Excel template built with our API allows for a targeted view tailored to the specific needs of a CRT investor.

For teams that prefer to focus their time on outcomes rather than the build, RiskSpan’s data team can build custom templates around specific customer processes. RiskSpan offers support from premier data scientists who work with clients to understand their unique concerns and objectives to integrate our analytics with their legacy system of choice.

Loan Performance History

The images are examples of a RiskSpan template for CRT deal comparison: profile comparison, loan credit score distribution, and delinquency performance for five Agency credit risk transfer deals, pulled via the RiskSpan Data API and rendered in Excel.



Case Study: RS Edge – Analytics and Risk

The Client

Large Life Insurance Company – Investment Group

 

The Problem

The Client was shopping around for an analytics and risk platform to be used by both the trading desk and risk managers.

RiskSpan Edge Platform enabled highly scalable analytics and risk modeling providing visibility and control to address investment analysis, risk surveillance, stress testing and compliance requirements.

The Solution

Initially, the solution was intended for both the trading desk (for pre-trade analysis) and risk management (for running scenarios on the existing portfolio). Ultimately, the system was used exclusively by risk management, with heavy use by mid-level risk managers.

Cloud Native Risk Service

We have transformed portfolio risk analytics through distributed cloud computing. Our optimized infrastructure powers risk and scenario analytics at speed and cost never before possible in the industry.

Perform advanced portfolio analysis to achieve risk oversight and regulatory compliance with confidence. Access reliable results with cloud-native interactive dashboards that satisfy investors, regulators, and clients.

Two Flexible Options
Fund Subscriber Service + Managed Service

Each deployment option includes on-demand analytics, standard batch and over-night processing or a hybrid model to suit your specific business needs. Our team will work with customers to customize deployment and delivery formats, including investor-specific reporting requirements.

Easy Integration + Delivery
Access Your Risk

Accessing the results of your risk run is easy via several different supported delivery channels. We can accommodate your specific needs – whether you’re a new hedge fund, fund-of-funds, bank or other Enterprise-scale customer.

“We feel the integration of RiskSpan into our toolkit will enhance portfolio management’s trading capabilities as well as increase the efficiency and scalability of the downstream RMBS analysis processes.  We found RiskSpan’s offering to be user-friendly, providing a strong integration of market / vendor data backed by a knowledgeable and responsive support team.”

The Deliverables

  • Enabled running various HPI scenarios and tweaking the credit model knobs to change the default curve across a portfolio of a couple hundred non-agency RMBS
  • Scaled processing power up and down via the cloud, letting the team iterate through runs, changing conditions until they obtained the risk numbers they needed
  • Simplified integration into the client’s risk reporting system, external to RiskSpan


Choosing a CECL Methodology | Doable, Defensible, Choices Amid the Clutter

CECL advice is hitting financial practitioners from all sides. As an industry friend put it, “Now even my dentist has a CECL solution.” With many high-level commentaries on CECL methodologies in publication (including RiskSpan’s), we introduce this specific framework to help practitioners eliminate ill-fitting methodologies until one remains per segment. We focus on the commercially available methods implemented in the CECL Module of our RS Edge Platform, enabling us to be precise about which methods cover which asset classes, require which data fields, and generate which outputs. Our decision framework covers each asset class under the CECL standard and considers data availability, budgetary constraints, value placed on precision, and audit and regulatory scrutiny.

Performance Estimation vs. Allowance Calculations

Before evaluating methods, it is clarifying to distinguish performance estimation methods from allowance calculation methods (or simply allowance calculations). Performance estimation methods forecast the credit performance of a financial asset over the remaining life of the instrument, and allowance calculations translate that performance forecast into a single allowance number. There are only two allowance calculations allowable under CECL: the discounted cash flow (DCF) calculation (ASC 326-20-30-4) and the non-DCF calculation (ASC 326-20-30-5). Under the DCF allowance calculation, allowance equals amortized cost minus the present value of expected cash flows. The expected cash flows (the extent to which they differ from contractual cash flows) must first be driven by some performance estimation method. Under the non-DCF allowance calculation, allowance equals cumulative expected credit losses of amortized cost (roughly equal to future principal losses). These future losses of amortized cost, too, must first be generated by a performance estimation method.
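To make the distinction concrete, here is a minimal sketch of the two allowance calculations, assuming a performance estimation method has already produced expected cash flows (for the DCF calculation) or expected losses of amortized cost (for the non-DCF calculation). The numbers in the usage lines are hypothetical.

```python
def dcf_allowance(amortized_cost, expected_cash_flows, eir):
    """DCF calculation (ASC 326-20-30-4): amortized cost minus PV of expected cash flows."""
    pv = sum(cf / (1 + eir) ** t for t, cf in enumerate(expected_cash_flows, start=1))
    return max(amortized_cost - pv, 0.0)

def non_dcf_allowance(expected_losses_of_amortized_cost):
    """Non-DCF calculation (ASC 326-20-30-5): cumulative expected losses of amortized cost."""
    return sum(expected_losses_of_amortized_cost)

# Hypothetical example: a loan with amortized cost 100 and an EIR of 6%
print(dcf_allowance(100.0, [6.0, 6.0, 6.0, 6.0, 97.0], eir=0.06))
print(non_dcf_allowance([0.4, 0.6, 0.8, 0.7, 0.5]))
```

Next, we show how to select performance estimation methods, then allowance calculations.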
Double checkmarks (✔✔) indicate methods that, at the user’s option, can be executed using historical performance data from industry sources and therefore do not require the customer to supply historical performance data. All methods require the customer to provide basic positional data as of the reporting date (outstanding balance amounts, the asset class of each instrument, etc.).

Figure 1 – Performance Estimation Methods in RiskSpan’s CECL Module

[1] Commercial real estate
[2] Commercial and industrial loans

To help customers choose their performance estimation methods, we walk them through the decision tree shown in Figure 3. These steps should be followed for each portfolio segment, one at a time. As shown, the first step to shorten the menu of methods is to choose between Practical Methods and Premier Methods. Premier Methods available today in the RS Edge Platform include both methods built by RiskSpan (prefixed RS) and methods built by our partner, S&P Global Market Intelligence (S&P).

The choice between Premier Methods and Practical Methods is primarily a tradeoff between instrument-level precision and scientific incorporation of macroeconomic scenarios on the Premier side versus lower operational costs on the Practical side. Because Premier Methods produce instrument-specific forecasts, they can be leveraged to accelerate and improve credit screening and pricing decisions in addition to solving CECL. The results of Premier Methods reflect the macroeconomic outlook using consensus statistical techniques, whereas Practical Methods generate average, segment-level historical performance that management then adjusts via Q-Factors. Such adjustments may not withstand the intense audit and regulatory scrutiny that larger institutions face. Also implicit in instrument-level precision and scientific macroeconomic conditioning is that Premier Methods are built on large-count, multi-cycle, granular performance datasets. While there are Practical Methods that reference third-party data such as Call Reports, Call Report data represents a shorter economic period and lacks granularity by credit attributes. The Practical Methods have two advantages. First, they are easier for non-technical stakeholders to understand. Second, license fees for Practical Methods are lower than for Premier Methods.

Suppose that for a particular asset class, an institution wants a Premier Method. For most asset classes, RiskSpan’s CECL Module selectively features one Premier Method, as shown in Figure 1. In cases where the asset class is not covered by a Premier Method in Edge, the next question becomes: does a suitable, affordable vendor model exist? We are familiar with many models in the marketplace and can advise on the benefits, drawbacks, and pricing of each. Vendor models come with explanatory documentation that institutions can review pre-purchase to determine comfort. Where a viable vendor model exists, we assist institutions by integrating that model as a new Premier Method, accessible within their CECL workflow. Where no viable vendor model exists, institutions must evaluate their internal historical performance data. Does it contain enough instruments, span enough time, and include enough fields to build a valid model? If so, we assist institutions in building custom models and integrating them within their CECL workflows.
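To make the Premier-side branch of this decision tree concrete, here is a highly simplified sketch in Python. The function name, inputs, and outcome strings are paraphrases of the prose above; they are illustrative assumptions, not the actual logic of the RS Edge CECL Module or of Figure 3.

```python
# Highly simplified sketch of the Premier-side decision flow described above.
# The questions and outcomes paraphrase the prose; this is not the decision
# logic of the RS Edge CECL Module.

def premier_path(asset_class_covered_in_edge: bool,
                 viable_vendor_model: bool,
                 internal_data_supports_modeling: bool) -> str:
    """Rough outcome for a segment whose institution wants a Premier Method."""
    if asset_class_covered_in_edge:
        return "Use the Premier Method featured for this asset class in RS Edge"
    if viable_vendor_model:
        return "Integrate the vendor model as a new Premier Method in the CECL workflow"
    if internal_data_supports_modeling:
        return "Build a custom model and integrate it into the CECL workflow"
    return "Begin/continue data collection and apply a Practical Method in the meantime"

# Example: no Edge coverage, no affordable vendor model, but rich internal data.
print(premier_path(False, False, True))
```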
Where the internal data will not support a custom model, it is time to begin or continue a data collection process that will eventually support modeling and, in the meantime, apply a Practical Method.

To choose among Practical Methods, we first distinguish between debt securities and other asset classes. Debt securities do not require internal historical data because more robust, relevant data is available from industry sources. We offer one Practical Method for each class of debt security, as shown in Figure 1. For asset classes other than debt securities, the next step is to evaluate internal data. Is there enough of it (segment-level summary data is fine for Practical Methods) to drive meaningful results? If not, we suggest applying the Remaining Life Method, a method that has been showcased by regulators and that references Call Report data (which the Edge platform can filter by institution size and location). If adequate internal data exists, eliminate methods that are not asset-class-appropriate (see Figure 1) or that require specific data fields the institution lacks. Figure 2 summarizes data requirements for each Practical Method, with a tally of required fields by field type. RiskSpan can provide institutions with detailed data templates for any method upon request. From among the remaining Practical Methods, we recommend institutions apply this hierarchy:

  • Vintage Loss Rate: This method makes the most of recent observations and of datasets that are shorter in timespan, whereas the Snapshot Loss Rate requires frozen pools to age substantially before they count toward historical performance averages. The Vintage Loss Rate explicitly considers the age of outstanding loans and leases and requires relatively few data fields. (A minimal computational sketch of this method appears after this list.)
  • Snapshot Loss Rate: This method has the drawbacks described above, but for well-aged datasets it produces stable results and is an intuitive method that is familiar to financial institution stakeholders.
  • Remaining Life: This method ignores the effect of loan seasoning on default rates and requires user assumptions about prepayment rates, but it has been put forward by regulators and is a necessary and defensible option for institutions that lack the data to use the methods above.

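As referenced in the Vintage Loss Rate item above, the sketch below illustrates, under simplified and hypothetical assumptions, how a vintage-style average loss rate might be computed from segment-level history, extrapolated over a segment's remaining life, and adjusted by a management Q-factor. The data, names, and arithmetic are illustrative only and do not reflect the RS Edge implementation.

```python
# Minimal sketch of a vintage-style average loss rate, extrapolated over the
# remaining life of a segment. Hypothetical data and names; this is NOT the
# RS Edge implementation.

# Historical net charge-offs by loan age (years), observed across several
# origination vintages, as a fraction of each vintage's original balance.
historical_loss_by_age = {
    1: [0.0010, 0.0012, 0.0008],
    2: [0.0022, 0.0025, 0.0019],
    3: [0.0015, 0.0017, 0.0014],
}

# Average loss rate at each age across vintages (the "vintage curve").
vintage_curve = {
    age: sum(obs) / len(obs) for age, obs in historical_loss_by_age.items()
}

def vintage_loss_rate(current_age: int, remaining_life: int,
                      q_factor: float = 0.0) -> float:
    """Cumulative expected loss rate for a segment of the given age, summing
    the remaining points on the vintage curve and applying a management
    (Q-factor) adjustment."""
    remaining_ages = range(current_age + 1, current_age + remaining_life + 1)
    base = sum(vintage_curve.get(age, 0.0) for age in remaining_ages)
    return base * (1.0 + q_factor)

# Example: a 1-year-old segment with 2 years of remaining life and a +10%
# qualitative adjustment for a weakening macroeconomic outlook.
ecl_rate = vintage_loss_rate(current_age=1, remaining_life=2, q_factor=0.10)
allowance = 25_000_000 * ecl_rate   # applied to the segment's amortized cost
print(f"Expected loss rate: {ecl_rate:.4%}, allowance: ${allowance:,.0f}")
```

The Snapshot Loss Rate and Remaining Life methods form their historical averages differently, but the final step of applying an adjusted cumulative loss rate to a segment's amortized cost is broadly analogous.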
Figure 2 – Data Requirements for Practical Methods (Number of Data Fields Required)

[3] Denotes fields required to perform the method with the customer’s historical performance data. If the customer’s data lacks the necessary fields, this method can alternatively be performed using Call Report data.

Figure 3 – Methodology Selection Framework

Selecting Your Allowance Calculation

After selecting a performance estimation method for each portfolio segment, we must select the corresponding allowance calculations. Note that all performance estimation methods in RS Edge generate, among their outputs, undiscounted expected credit losses of amortized cost. Therefore, users can elect the non-DCF allowance calculation for any portfolio segment regardless of the performance estimation method, as Figure 5 shows.

A DCF allowance calculation requires the elements shown in Figure 4. Among the Premier (performance estimation) Methods, RS Resi, RS RMBS, and RS Structured Finance require contractual features as inputs and generate among their outputs the other elements of a DCF allowance calculation. Therefore, users can elect the DCF allowance calculation in combination with any of these methods without providing additional inputs or assumptions. For these methods, the choice between the DCF and non-DCF allowance calculation often comes down to the anticipated impact on allowance level.

The remaining Premier Methods to discuss are the S&P commercial and industrial (C&I) method, which covers all corporate entities, financial and non-financial, and applies to both loans and bonds, and the S&P commercial real estate (CRE) method. These methods do not require all of the instruments’ contractual features as inputs (an advantage in terms of reducing the input data requirements). They project periodic default and LGD rates, but not voluntary prepayments or liquidation lags. Therefore, users provide the additional contractual features as inputs, along with voluntary prepayment rate and liquidation lag assumptions. The CECL Module’s cash flow engine then integrates the periodic default and LGD rates produced by the S&P C&I and CRE methods, together with the user-supplied contractual features and prepayment and liquidation lag assumptions, to produce expected cash flows. The Module discounts these cash flows according to the CECL requirements and differences the present values from amortized cost to calculate allowance. In considering this DCF allowance calculation with the S&P performance estimation methods, users typically weigh the impact on allowance level against the task of supplying the additional data and assumptions.

To use a DCF allowance calculation in concert with a Practical (performance estimation) Method, the user must provide contractual features (up to 20 additional data fields), liquidation lags, and monthly voluntary prepayment, default, and LGD rates that reconcile to the cumulative expected credit loss rate from the performance estimation method. This makes the allowance calculation a multi-step process. It is therefore usually simpler and less costly overall to use a Premier Method if the institution wants to enable a DCF allowance calculation. The non-DCF allowance calculation is the natural complement to the Practical Methods.
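To tie the two allowance calculations together (see also the Performance Estimation vs. Allowance Calculations discussion above), the sketch below shows, for a single hypothetical interest-only instrument with a bullet maturity, how monthly default, LGD, and voluntary prepayment assumptions might be rolled into expected cash flows and discounted for the DCF calculation, while the non-DCF calculation simply accumulates expected losses of amortized cost. The inputs, the one-instrument amortization logic, and the zero liquidation lag are simplifying assumptions; this is not the RS Edge cash flow engine.

```python
# Minimal sketch of the two CECL allowance calculations for a single
# interest-only instrument with a bullet maturity. Hypothetical inputs;
# this is not the RS Edge cash flow engine.

def allowances(amortized_cost: float, coupon_rate: float, eir: float,
               months: int, default_rates: list[float],
               prepay_rates: list[float], lgd: float) -> tuple[float, float]:
    """Return (dcf_allowance, non_dcf_allowance).

    default_rates / prepay_rates are monthly rates applied to the surviving
    balance; lgd is the loss severity on defaulted balance; eir is the annual
    effective interest rate used for discounting under the DCF calculation.
    """
    balance = amortized_cost
    pv_expected_cf = 0.0
    cumulative_loss = 0.0

    for m in range(1, months + 1):
        interest = balance * coupon_rate / 12.0
        defaults = balance * default_rates[m - 1]
        prepays = (balance - defaults) * prepay_rates[m - 1]
        recovery = defaults * (1.0 - lgd)    # recovered same month (no liquidation lag)
        cumulative_loss += defaults * lgd    # principal lost on defaulted balance
        principal = prepays + recovery
        if m == months:                      # bullet repayment of surviving balance
            principal += balance - defaults - prepays
        cash_flow = interest + principal
        pv_expected_cf += cash_flow / (1.0 + eir / 12.0) ** m
        balance -= defaults + prepays

    dcf_allowance = amortized_cost - pv_expected_cf   # ASC 326-20-30-4
    non_dcf_allowance = cumulative_loss                # ASC 326-20-30-5
    return dcf_allowance, non_dcf_allowance

# Example: $1mm instrument, 5% coupon, 36-month remaining term, flat 0.2%
# monthly default rate, 1% monthly voluntary prepayment rate, 40% LGD.
dcf, non_dcf = allowances(1_000_000, 0.05, 0.05, 36,
                          [0.002] * 36, [0.01] * 36, 0.40)
print(f"DCF allowance: ${dcf:,.0f}  Non-DCF allowance: ${non_dcf:,.0f}")
```

This also illustrates why any performance estimation method that outputs undiscounted expected credit losses can feed the non-DCF calculation, while the DCF calculation additionally needs contractual features, prepayment behavior, liquidation timing, and a discount rate to build and discount the full cash flow vector.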
Figure 4 – Elements of a DCF Allowance Calculation

One caveat worth noting: the S&P ECL approach itself, even with the added prepayment information, is closely related to, but not strictly, a discounted cash flow method, because in the S&P approach the allowance for credit losses is calculated directly from expected credit losses rather than as amortized cost minus the present value of future cash flows. This is not a drawback: it requires fewer inputs and is easier to relate to macroeconomic factors than a pure DCF, and it is consistent with Figure 5.

Figure 5 – Allowance Calculations Compatible with Each Performance Estimation Method

Once you have selected a performance estimation method and an allowance calculation for each segment, you can begin the next phase: comparing modeled results to expectations and historical performance and tuning model settings and management inputs accordingly. We are available to discuss CECL methodology further with you; don’t hesitate to get in touch!
