
Guide to the LIBOR Transition


CONTRIBUTORS

Patrick Greene
Managing Director

Rachel Fetrow
Analyst


Chapter 1
What is LIBOR?

The London Interbank Offered Rate (LIBOR) is a reference rate that, since the 1980s, has become the dominant benchmark for most adjustable-rate financial products. A group of banks (panel banks) voluntarily report the estimated cost of unsecured bank-to-bank borrowing for terms ranging from overnight to one year in various currencies. The number of currencies and maturities has fluctuated over time, but LIBOR is currently produced across seven maturities: overnight/spot, one week, one month, two months, three months, six months, and one year. Rates are produced for the U.S. dollar, the British pound sterling, the euro, the Japanese yen, and the Swiss franc, resulting in the current 35 rates.[1][2] The aggregated calculations behind the rates are supposed to reflect the average of what banks believe they would have to pay to borrow currency (their cost of funds) for a specified period. However, because the contributions are voluntary and the submitted rates are subjective assessments of probable cost, LIBOR indices do not reflect actual transactions.

LIBOR rates became heavily used in trading in the 1980s; the benchmark was officially launched by the British Bankers’ Association (BBA) in 1986 and has been regulated since April 2013 by the Financial Conduct Authority (FCA), the independent UK body that regulates financial firms.[3] Until 2014, LIBOR was produced by a group of UK banks under the BBA. ICE Benchmark Administration, a subsidiary of the Intercontinental Exchange (ICE), took over administration of the rate in 2014 in an effort to give the rate credible internal governance and oversight. ICE introduced third-party oversight, which resolved the BBA’s inherent conflict of interest in generating a sound rate while also protecting its member institutions.

Chapter 2
Why is LIBOR changing?

International investigations into LIBOR began in 2012 and revealed widespread efforts to manipulate the rates for profit, with issues discovered as far back as 2003. The investigations resulted in billions of dollars in fines for involved banks globally and jail time for some traders. Most recently, in October 2018, a Deutsche Bank trading supervisor and derivatives trader were convicted of conspiracy and wire fraud in relation to LIBOR rigging.[4]

The scandal challenged the validity of LIBOR and deterred panel banks from continuing their involvement in LIBOR generation. Because LIBOR rates are collected by voluntary contribution, the number of contributing banks, and therefore the number of underlying transactions, has been waning in recent years. In July 2017, Andrew Bailey, Chief Executive of the FCA, announced that the FCA would formally sustain LIBOR only through the end of 2021, citing limited market activity around LIBOR benchmarks and the waning contributions of panel banks. The FCA has negotiated agreements with the current panel banks to continue contributing data toward LIBOR rate generation through the end of 2021.[5]

Even without the challenge of collecting contributions from panel banks, many regulators have expressed concern that LIBOR is not representative of the market it references, which raises stability concerns. The market of products referencing LIBOR dwarfs the transactions that LIBOR is supposed to represent: the New York Fed has estimated that underlying transaction volumes for USD LIBOR range from $250 million to $500 million per day, while exposure to USD LIBOR at the end of 2016 was nearly $200 trillion.[6]

Chapter 3
What are regulators proposing as the solution?

In 2014, the Board of Governors of the Federal Reserve System and the Federal Reserve Bank of New York (New York Fed) convened the Alternative Reference Rates Committee (ARRC) to identify best practices for alternative reference rates and contract robustness, develop an adoption plan, and create an implementation plan with metrics of success and a timeline. The Committee was created in the wake of the LIBOR scandals with the intention of vetting alternatives, though no formal change to LIBOR was announced until 2017. The Federal Reserve reconstituted the committee with a broader set of market participants in March 2018, with the updated objective of developing a transition plan away from LIBOR and providing guidance on how affected parties can address risks in legacy contract language that references LIBOR.

In June 2017, the ARRC announced the Secured Overnight Financing Rate (SOFR) as its recommended alternative rate, and the New York Fed began publishing the rate on April 3, 2018. In October 2017, the ARRC adopted a “Paced Transition Plan” with specific steps and timelines designed to encourage use of its recommended rate.[7]

Chapter 4
What is SOFR?

The Secured Overnight Financing Rate (SOFR) is a broad measure of the cost of borrowing cash overnight collateralized by U.S. Treasury securities. As such, it will reflect an economic cost of lending and borrowing relevant to the wide array of market participants active in the financial markets. However, SOFR is fundamentally different from LIBOR. SOFR is an overnight, secured, nearly risk-free rate, while LIBOR is an unsecured rate published at several different maturities. It is a fully transaction-based rate incorporating data from transactions across three segments of the U.S. Treasury Repo market (tri-party repo, General Collateral Finance (GCF) repo and bilateral repo cleared through the Fixed Income Clearing Corporation (FICC)).[8]

The ARRC noted the need for replacement rate spreads due to the differences between rates:

Because LIBOR is unsecured and therefore includes an element of bank credit risk, it is likely to be higher than SOFR and prone to widen when there is severe credit market stress. In contrast, because SOFR is secured and nearly risk-free, it is expected to be lower than LIBOR and may stay flat (or potentially even tighten) in periods of severe credit stress. Market participants are considering certain adjustments, referenced in the fallback proposal as the applicable ‘Replacement Benchmark Spread’, which would be intended to mitigate some of the differences between LIBOR and SOFR.[9]
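As a rough illustration of how such a spread adjustment might be computed, the sketch below takes a hypothetical history of 3-month USD LIBOR and compounded SOFR and returns the median basis over a lookback window. This is an illustration only, not the ARRC's prescribed methodology; the column names, the five-year window, and the use of a simple median are all assumptions.

import pandas as pd

def replacement_benchmark_spread(rates: pd.DataFrame, lookback_years: int = 5) -> float:
    """Median LIBOR-minus-SOFR basis over the lookback window.
    `rates` is assumed to have a DatetimeIndex and two columns, in percent:
    'usd_libor_3m' and 'sofr_compound_3m' (hypothetical field names)."""
    cutoff = rates.index.max() - pd.DateOffset(years=lookback_years)
    window = rates.loc[rates.index >= cutoff]
    basis = window["usd_libor_3m"] - window["sofr_compound_3m"]
    return float(basis.median())  # spread added to SOFR in fallback language

Under a construction like this, the adjusted fallback rate would simply be SOFR plus the computed spread.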

While the ARRC’s selection of SOFR as the preferred U.S. replacement rate is final, it is only a recommendation that LIBOR be replaced with SOFR. This creates a precarious outlook for the transition: financial institutions have to choose to take the transition seriously, and if they choose to employ rates other than SOFR, the transition could be longer and more complicated than many expect. That said, the cost-benefit case for choosing a different alternative reference rate is increasingly difficult to make. With the selection of SOFR as the recommended rate, the New York Fed established an industry standard, and did so through a lengthy process that included market participants and a public comment period. It also began publishing SOFR regularly on April 3, 2018.[10]

Additional steps taken by government-sponsored enterprises (GSEs) have built momentum for the SOFR market. In July 2018, Fannie Mae issued the first SOFR-denominated securities, leading the way for other institutions that have since followed suit. In November 2018, the Federal Home Loan Banks (FHLBs) issued $4 billion in debt tied to SOFR. The action was taken to support liquidity and demonstrate SOFR demand, helping develop the SOFR market for the approximately 7,000 member institutions (banks, credit unions, and insurers) that are in the process of transitioning away from LIBOR.[11] CME Group, a derivatives and futures exchange company, launched three-month and one-month SOFR futures contracts in 2018.[12] All of these steps to build out the market create a strong start for a rate that is already more stable than LIBOR: the transaction volume underpinning SOFR rates is around $750 billion daily, compared to USD LIBOR’s estimated $500 million.[13]

The ARRC has begun publishing guidance for fallback language and in the fall of 2018 published consultations on recommended language for floating rate notes and syndicated business loans.[14][15]

These initial steps to build out the necessary SOFR market put the United States ahead of the ARRC transition plan schedule and position the market well to begin SOFR implementation. However, a successful transition will require extensive engagement from other institutions. Affected institutions need to begin their transition now in order to make the gradual transition in time for the 2021 deadline.

Chapter 5
Who does this transition affect?

The transition affects any institutions that hold contracts, products, or tools that reference LIBOR and will not reach full maturity or phase out before the end of 2021. 

 

WHAT ACTIONS DO AFFECTED INSTITUTIONS NEED TO TAKE?

  1. Establish a Sponsor and Project Team:  Affected institutions need to take a phased approach to the transition away from LIBOR. Because of the need for continuous oversight, they should begin by identifying an executive sponsor and establishing a project team. The team should be responsible for all transition-related activities across the organization, including assessment of exposure and the applicability of alternative reference rates where necessary, planning the steps and timing of transition, and coordinating the implementation of transition away from LIBOR.
  2. Conduct an Impact Assessment:  The first task of the project team is to complete an impact assessment to determine the institution’s LIBOR exposure across all financial products and existing contracts that mature after 2021, as well as any related models and business processes (including third-party vendors and data providers). Regarding contracts, the team should identify and categorize all variants of legacy fallback language in existing contracts. Additionally, the assessment should analyze the basis risk and operational risk that the LIBOR transition poses across the institution’s financial holdings. (A minimal illustrative sketch of such an inventory scan follows this list.)
  3. Mitigate Risks:  Using results from the LIBOR exposure assessment, the project team should develop a plan running through 2021 that prioritizes transition activities in a way that best mitigates risk from LIBOR exposure and communicates transition activities to employees and clients with ample time for them to learn about and buy into the transition objectives.
  4. Prepare new products and tools linked to alternative reference rates: This mitigates risk by limiting the number of legacy exposures that will still be in effect in 2021 and creates a clear direction for transition activities. New references may include financial instruments and products, contract language, models, pricing, risk, operational and technological processes and applications to support the new rates.
  5. Develop and Implement Transition Contract Terms: In legacy contracts that will mature after 2021, the project team will need to amend contracts and fallback language. The ARRC has begun to provide guidance for amendments or transitions related to some financial products and will continue to publish legacy transition guidance as it fulfills its mandate. Where necessary, products must move to ARRs.
  6. Update Business Processes: Based on the impact assessment, various business processes surrounding the management of interest rate changes, including those built into models and systems will require updating to accommodate the switch away from LIBOR. For new products utilizing the new index rate, procedures, processes and policies will need to be established and tested before rollout to clients. 
  7. Manage Change and Communicate:  The project team will need to develop educational materials explaining specific changes and their impacts to stakeholders. The materials must be distributed as part of an outreach strategy to external stakeholders, including clients and investors, as well as rating agencies and regulatory bodies. The outreach strategy should help to ensure that the transition message is consistent and clear as it is communicated from executives and board members to operational personnel, other stakeholders and outer spheres of influence.  
  8. Test: Financial institutions will want to prepare for regulatory oversight by testing business processes in advance. Regulators may look for documentation of the processes used to identify and remediate LIBOR risks, as well as for any risk exposure whose remediation has not been completed.
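As referenced in step 2, the sketch below illustrates one way a project team might scan a contract inventory for LIBOR exposure maturing after 2021 and group it by fallback-language variant. The field names ('reference_rate', 'maturity_date', 'product_type', 'fallback_language', 'notional') are hypothetical; a real inventory would use the institution's own schema.

import pandas as pd

def libor_exposure_summary(contracts: pd.DataFrame) -> pd.DataFrame:
    """Contracts referencing LIBOR that mature after 2021, grouped by
    product type and legacy fallback-language variant."""
    contracts = contracts.copy()
    contracts["maturity_date"] = pd.to_datetime(contracts["maturity_date"])
    at_risk = contracts[
        contracts["reference_rate"].str.contains("LIBOR", case=False, na=False)
        & (contracts["maturity_date"] > pd.Timestamp("2021-12-31"))
    ]
    return (at_risk.groupby(["product_type", "fallback_language"])["notional"]
                   .agg(contracts="count", total_notional="sum"))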

Chapter 6
Resources

Alternative Reference Rates Committee. “Frequently Asked Questions.” Federal Reserve Bank of New York. 20 September 2018. https://www.newyorkfed.org/medialibrary/Microsites/arrc/files/2018/ARRC-Sept-20-2018-FAQ.pdf, Accessed November 2018.

Bloomberg. “SOFR’s growing use means it’s when, not if, it replaces LIBOR.” 15 October 2018. https://www.bloomberg.com/professional/blog/sofrs-growing-use-means-not-replaces-libor/, Accessed November 2018.

Bloomberg. “The long hunt of an incorruptible successor to LIBOR.” 17 October 2018. https://www.bloomberg.com/professional/blog/long-hunt-incorruptible-successor-libor-quicktake/, Accessed November 2018.

Bloomberg. “America’s LIBOR successor is racing to gain traction.” 11 October 2018. https://www.bloomberg.com/professional/blog/americas-libor-successor-racing-gain-traction/, Accessed November 2018.

Exchanges at Goldman Sachs. “Episode 107: LIBOR’s Long Goodbye.” 29 October 2018. https://www.goldmansachs.com/insights/podcasts/episodes/10-29-2018-granet-and-hammack.html, Accessed November 2018

LSTA. “LIBOR Replacement: Understanding the ARRC’s Loan Fallback Consultation.” 4 October 2018. https://event.webcasts.com/viewer/event.jsp?ei=1214424&tp_key=8e9f891c9a, Accessed November 2018.

McBride, James. “Understanding the LIBOR Scandal.” Council on Foreign Relations. 12 October 2016. https://www.cfr.org/backgrounder/understanding-libor-scandal, Accessed November 2018.

SIFMA Podcast. “The Transition from LIBOR.” 31 October 2018.

ENDNOTES

1. Kiff, John. “Back to Basics: What is LIBOR?” International Monetary Fund. December 2012. https://www.imf.org/external/pubs/ft/fandd/2012/12/basics.htm, Accessed November 2018.

2. “LIBOR – current LIBOR interest rates.” Global Rates. https://www.global-rates.com/interest-rates/libor/libor.aspx, Accessed November 2018.

3. Bailey, Andrew. “The Future of LIBOR.” Financial Conduct Authority. 27 July 2017. https://www.fca.org.uk/news/speeches/the-future-of-libor, Accessed November 2018.

4. “Two Former Deutsche Bank Traders Convicted for Role in Scheme to Manipulate a Critical Global Benchmark Interest Rate.” U.S. Department of Justice press release. 17 October 2018. https://www.justice.gov/opa/pr/two-former-deutsche-bank-traders-convicted-role-scheme-manipulate-critical-global-benchmark, Accessed November 2018.

5. Bailey, Andrew. “The Future of LIBOR.” Financial Conduct Authority. 27 July 2017. https://www.fca.org.uk/news/speeches/the-future-of-libor, Accessed November 2018.

6. Alternative Reference Rates Committee. “Second Report.” Federal Reserve Bank of New York. March 2018. https://www.newyorkfed.org/medialibrary/Microsites/arrc/files/2018/ARRC-Second-report, Accessed November 2018.

7. Alternative Reference Rates Committee. Federal Reserve Bank of New York. https://www.newyorkfed.org/arrc/index.html, Accessed November 2018.

8. Federal Reserve Bank of New York. “Secured Overnight Financing Rate Data.” https://apps.newyorkfed.org/markets/autorates/sofr, Accessed November 2018.

9. Federal Reserve Bank of New York. “ARRC Consultation: Regarding more robust LIBOR fallback contract language for new originations of LIBOR syndicated business loans.” 24 September 2018. https://www.newyorkfed.org/medialibrary/Microsites/arrc/files/2018/ARRC-Syndicated-Business-Loans-Consultation.pdf, Accessed November 2018.

10. Federal Reserve Bank of New York. “Statement Introducing the Treasury Repo Reference Rates.” 3 April 2018. https://www.newyorkfed.org/markets/opolicy/operating_policy_180403, Accessed November 2018.

11. Guida, Victoria. “Federal Home Loan Banks boost LIBOR replacement with $4B debt issuance.” Politico. 13 November 2018. https://www.politico.com/story/2018/11/13/federal-home-loan-banks-libor-replacement-939489, Accessed November 2018.

12. CME Group. “Secured Overnight Financing Rate (SOFR) Futures.” https://www.cmegroup.com/trading/interest-rates/secured-overnight-financing-rate-futures.html, Accessed November 2018.

13. Graph: LSTA. “LIBOR and the Loan Market.” 24 April 2018. https://www.lsta.org/uploads/DocumentModel/3523/file/libor-in-the-loan-market_042418.pdf, Accessed November 2018.

14. Federal Reserve Bank of New York. “ARRC Consultation: Regarding more robust LIBOR fallback contract language for new issuances of LIBOR floating rate notes.” 24 September 2018. https://www.newyorkfed.org/medialibrary/Microsites/arrc/files/2018/ARRC-FRN-Consultation.pdf, Accessed November 2018.

15. Federal Reserve Bank of New York. “ARRC Consultation: Regarding more robust LIBOR fallback contract language for new originations of LIBOR syndicated business loans.” 24 September 2018. https://www.newyorkfed.org/medialibrary/Microsites/arrc/files/2018/ARRC-Syndicated-Business-Loans-Consultation.pdf, Accessed November 2018.

16. Federal Reserve Bank of New York. “Minutes.” Alternative Reference Rates Committee (ARRC). 31 October 2017. https://www.newyorkfed.org/medialibrary/microsites/arrc/files/2017/October-31-2017-ARRC-minutes.pdf, Accessed November 2018.

Navigating the Challenges of CECL

CONTRIBUTORS

David Andrukonis, CFA
Director

John Vandermeulen, CPA, CFA, FRM, CIA, CAMS
Managing Director

Janet Jozwik
Co-Head of Quantitative Analytics

Kishore Konuri
Database and Data Applications Lead


Chapter 1
Introduction

The Current Expected Credit Loss (CECL) model has gone through final deliberations at the Financial Accounting Standards Board (FASB); the dramatic impact of this change on loan accounting will be far-reaching for banks of all sizes.

A Roadmap Forward

BACKGROUND

Properly accounting for troubled loans has always posed a challenge for banks. The 2008 financial crisis may have been just jarring enough for regulators to do something about it.

The crisis prompted the Financial Accounting Standards Board (FASB) as well as U.S. and international regulatory bodies to enact a better process for the estimation of potential credit loss within a portfolio of loans. Hence, in 2012, the FASB set out on a course to remedy loan accounting treatment via a project related to the impairment of financial instruments.

The FASB teamed up with the International Accounting Standards Board (IASB) on a joint project aimed at assisting financial institutions with estimations around allowance for loan losses.

This proposed accounting change has had a dramatic impact throughout the industry, requiring significant operational changes, as well as data management and systems enhancements that are still being worked out, as banks seek to understand all the ramifications of implementing CECL models. 

CECL is perhaps the largest change to bank accounting in decades. While everyone would agree that improved methodologies for the recognition and measurement of credit losses for loans and debt securities is a good idea in theory, there is industry angst around how sweeping the operational and process changes will be, and how costly a proposition this can become. 

 

HOW DID WE GET HERE?

 

Technically known as Accounting for Financial Instruments – Credit Losses (Subtopic 825-15), CECL came about as a result of a desire to improve investors’ ability to comprehend bank financial statements.

The FASB stated their objectives as follows:

“The objective of this project is to significantly improve the decision usefulness of financial instrument reporting for users of financial statements.

The Boards believe that simplification of the accounting requirements for financial instruments should be an outcome of this improvement.

The Boards’ goal is to develop a single credit loss model for financial assets that enables more timely recognition of credit losses.”

 

WHAT ARE THE BASICS?

 

CECL is a fundamental change to the way banks estimate losses within their loan portfolios. 

Existing bank methods for loss accounting are based on an “incurred loss” model. Said another way, banks currently do not have to estimate potential losses on a loan unless the losses are probable and reasonably estimable.

The new approach, in contrast, is an “expected loss” methodology. 

In its simplest terms, CECL requires a credit loss to be booked for accounting purposes at the origination of a loan, based upon what is expected to happen many years in the future. 

For practical purposes, both models can be performed at the portfolio level, using historical loss performance as an anchor point.

Given the scale of the change, the FASB has tried to manage some of the many misunderstandings that come with such a dramatic adjustment in approach. Of primary concern among industry participants, in addition to the operational changes necessary, is the potential for large increases to banks’ allowance for loan and lease losses. 

 

WHAT IS THE TIMING?

 

Originally proposed to the industry in December of 2012, the CECL rollout encountered numerous delays and industry discussions, as the accounting board sought input to ensure that the intent is realized through implementation. 

The new standard underwent multiple iterations and was put out for industry feedback, as well as roundtable discussions. FASB received dozens of comment letters from industry participants regarding concerns with implementation. 

With the final standard now in place, the formal implementation timetable varies slightly, depending on a bank’s SEC registrant status. For SEC registrants, implementation of the new standard is set to commence with the 2020 financial statement period. For non-SEC registrants, it will be implemented in the 2021 financial statement period.

 

WHAT ARE THE MARKET IMPACTS?

 

Data Management is Pivotal

CECL will require organizations to be more focused on data management than ever before. Entities of all sizes will need to assess required field-level information and, to the extent that data collection, assimilation, cleansing, and organization need to be improved, develop a well-thought-out program to enhance their ability to turn data into information.

Forecasting Methodologies Will Need to Change

Loss estimation methodologies for financial services firms of all sizes will need to be modified.

While in some respects the new requirement will simplify accounting treatment (for instance, impaired assets and non-impaired assets will no longer be treated separately), methodologies for life-of-loan loss estimation will take on a new dimension.

Banks that are suitably equipped to assess historical loan-loss performance (overall and by origination year cohort) and those with the most efficient methods for predictive analytics will have a distinct competitive advantage over less-sophisticated competitors.

Process, Process, Process

Like most paradigm shifts of this magnitude, a well-documented game plan for conversion is essential. A thorough, documented, supportable, auditable, and repeatable process will be critical.

Requirements, methodologies, systems inputs, roles and responsibilities, and approval methods will all need to be transparent throughout the organization in order to appease financial statement auditors and regulators alike.

Top 10 Organizational Impacts

#1 – ESTIMATES OF IMPACT ARE DIVERSE

 

Portfolio composition clearly plays a significant role in determining how CECL will impact loss reserves. While the general consensus holds that reserves will need to increase, estimates range from as little as 2-3% to as much as 50-60%. Balance sheets with longer-term assets will generally see a larger impact. Prepayment assumptions, which are factored into CECL, take on new importance for longer-lived assets such as 30-year mortgages. Certain loan types, such as unconditionally cancellable lines of credit and renewable loans, may see allowance levels drop, because no losses beyond the contractual end date are permitted under CECL guidance, regardless of how unlikely cancellation of the line or how likely renewal of the loan may be. Additionally, methodologies will have to take multiple asset classes into account, including commercial real estate and consumer loans.

It is never too early to begin assessing the impact and evaluating the various modeling options from an operational perspective. Internal and external parties are sure to be interested in the results, and the methodology employed can significantly affect the overall impact of adoption.

 

#2 – CECL GUIDANCE DOES NOT PRESCRIBE A LOSS METHODOLOGY, BUT THE CURRENT CONSENSUS LEANS TOWARD A DISCOUNTED CASH FLOW MODEL

 

While many methodologies can be utilized, including vintage analysis, loss rate method, roll-rate method or a probability of default method, the discounted cash flow methodology looks to be the most reasonable approach based on CECL guidance, particularly for portfolios with longer-term assets. While a discounted cash flow methodology may be more complex to implement, this methodology appears to yield an allowance that most closely reflects the true economics of the financial instrument. This is primarily due to the present value discounting inherent in this methodology, which is not explicitly considered in the other methodologies.

Overall, the FASB expects the sophistication of the methodology employed to be commensurate with the complexity of the institution and to reasonably reflect its expectations of future credit losses. The various modeling types and segmentation methods should be evaluated prior to implementation to determine which methodology is most suitable.

 

#3 – LOL MODELS ALREADY IN PRODUCTION WILL LIKELY NEED TO BE MODIFIED TO BE UTILIZED FOR CECL

 

Banks currently employing life-of-loan (LOL) models will likely need to modify them to comply with CECL. Many of these models include forecasts of new production as well as anticipated loan renewals, neither of which is included in the CECL calculation. Banks should perform a gap analysis for any model being considered for CECL. Future model validation requirements should also be taken into account.

 

#4 – ORGANIZATIONS NEED TO PREPARE DATA AND MODELS FOR SOX AND MODEL VALIDATION REVIEW

 

Datasets and models that were not previously subject to SOX and financial reporting control testing will now need to be reviewed. Data storage needs will be significant, and current databases should be evaluated for auditability and scalability. Controls around the databases supporting the life of loan calculation may need to be enhanced to meet financial reporting expectations.

 

#5 – ACCOUNTING CLOSE PROCESSES WILL NEED TO BE ENHANCED

 

ALLL processes have historically been able to largely ignore originations occurring near the end of the accounting period. CECL closes this loophole by requiring lenders to book a day-one loss upon origination. Banks have traditionally had very short closing cycles, so systems will need to provide the data necessary to book the lifetime loss, potentially requiring origination systems to be enhanced to capture that data in real time.

 

#6 – DATA NEEDS TO BE ENHANCED TO SUPPORT LOL LOSS CALCULATION

 

Portfolio data covering a full business cycle will be needed to support CECL calculations. Twelve years is a reasonable starting point for covering a full business cycle, though this could vary by asset type; twelve years captures results leading up to, during, and after the financial crisis. Additional history could reduce volatility in CECL calculations. Assessing data gaps in the near term and developing project plans based on that assessment are necessary steps to be operationally ready for CECL.

 

#7 – CREDIT DISCLOSURES NEED TO INCREASE SIGNIFICANTLY

 

Because no single prescribed methodology accompanies CECL, disclosures on how banks actually perform the calculation need to be robust and address how the forecast was derived, the time period for which a “reasonable and supportable” forecast was determined, and when historical loss rates were utilized.

The historic relationship between loan-level credit performance indicators (current delinquencies, defaults, LTV ratios, etc.) and the overall allowance level will not necessarily continue to hold in the future. Period-over-period improvements may occur (e.g., delinquency rates may decrease); however, these could be more than offset by a worsening forecast. Disclosures need to bridge the gap when this occurs.

Credit-quality-indicators disclosures by year of origination (vintage) are required for SEC filers and are optional for non-public companies. Overall, the level and sophistication of these disclosures needs to be commensurate with the complexity and size of the institution.

 

#8 – ALLOWANCE WILL LIKELY BE MUCH MORE VOLATILE, POTENTIALLY NECESSITATING ADDITIONAL REGULATORY CAPITAL BUFFERS

 

Small changes to future forecasts will typically have significant impacts on the reserve. This could make the allowance significantly more volatile, and regulators may impose additional capital buffers to absorb this volatility. Operationally speaking, banks need to assess capital impacts to determine whether additional regulatory capital is needed to compensate for the initial impact and the increased volatility going forward.

 

#9 – ASSUMPTIONS WILL NEED TO ALIGN WITH OTHER PROCESSES SUCH AS ALM AND FORECASTING

 

Assumptions used to calculate the LOL loss are expected to be aligned with those used for other forecasts within the institution (e.g., income forecast, ALM, etc.). Auditors and regulators will not look favorably on utilizing one forecast for CECL purposes and a different forecast for other purposes.

 

#10 – ENTITIES NEED TO MONITOR CHANGES FROM THE TRANSITIONS RESOURCE GROUP (TRG)

 

The FASB established the TRG, made up of bankers, auditors, and regulators, to address issues and questions associated with CECL implementation. While the LOL concept is unlikely to change, banks would do well to follow the activities of the TRG as specific implementation guidance is likely to be issued over the coming years.

Chapter 2
Data Requirements

IS YOUR ORGANIZATION READY?

Sample Size Requirements for CECL Modeling

BETTER MODELING LOWERS SAMPLE REQUIREMENTS AND RELIANCE ON PROXY DATA

Many bankers are questioning whether they have enough internal loan data for CECL modeling. Ensuring data sufficiency is a critical first step in meeting the CECL requirements; if internal data falls short, banks need to find and obtain relevant third-party data. This section explains in plain English how to calculate statistically sufficient sample sizes to determine whether third-party data is required. More importantly, it shows modeling techniques that reduce the required sample size. Investing in the right modeling approach could ultimately save the time and expense of obtaining third-party data.

 

CECL DATA REQUIREMENTS: SAMPLE SIZE FOR A SINGLE HOMOGENOUS POOL

Let’s first consider the sample required for a single pool of nearly identical loans. In the case of a uniform pool of loans — with the same FICO, loan-to-value (LTV) ratio, loan age, etc. — there is a straightforward formula to calculate the sample size necessary to estimate the pool’s default rate, shown in Exhibit 1.

Exhibit 1

PD = probability of default as estimated from existing data

3.84 is the multiple associated with the standard 95% confidence level (the square of the 1.96 z-score)

%Error Margin = material error threshold in ALLL as a percentage of the pool’s principal balance. For example, if a bank has a $1 billion portfolio and ALLL estimation errors of less than $2.5 million are immaterial, then the %Error Margin is 0.25%

Average Loss Severity is expressed as a percentage of defaulted principal.

As the formula shows, the sample size depends on several variables, some of which must be estimated:

  • Materiality Threshold and Confidence Level: Suppose the bank has a $1 billion loan portfolio and determines that, from a financial statement materiality standpoint, the ALLL estimate needs to be reliable to within +/- $2.5 million. Statistically, we would say that we need to be 95% confident that our loss reserve estimate is within an error margin of +/- $2.5 million of the true figure. The wider our materiality thresholds and lower our required confidence levels, the smaller the sample size we need.
  • Loss Severity: As average loss severity increases, a greater sample size is needed to achieve the same error margin and confidence level. For example, if average loss severity is 0%, then an estimate of zero losses would be appropriate regardless of default rates; theoretically, the exercise of estimating default rates does not even need to be performed, and the required sample size is zero. On the opposite end, if average loss severity is 100%, every dollar of defaulted balance translates into a dollar of loss, so modelers can least afford to misestimate default rates, and the required sample size is at its greatest.
  • Default Rates: A preliminary estimate of default rate, based on the available sample, also affects the required sample size. Holding dollar error margin constant, fewer loans are needed for low-default-rate populations.

Example: Suppose we have originated a pool of low-risk commercial real estate loans. We have historical observations for 500 such loans, of which 495 paid off and five defaulted, so our preliminary default rate estimate is 1%. Of the five defaults, loss severity averaged 25% of original principal balance. We deem ALLL estimate errors within 0.25% of the relevant principal balance to be immaterial. Is our internal sample of 500 loans enough for CECL modeling purposes, or do we need to obtain proxy data?

 

Simply apply the formula from Exhibit 1:

In this case, our internal sample of 500 loans is more than enough to give us a statistical confidence interval that is narrower than our materiality thresholds. We do not need proxy data to inform our CECL model in this case.
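The calculation can be scripted directly. The sketch below assumes Exhibit 1 is the standard sample-size formula for a proportion, with the default-rate error margin backed out by dividing the dollar-based %Error Margin by average loss severity; the numbers reproduce the example above.

import math

def required_sample_size(pd_est, error_margin_pct, avg_loss_severity, z_squared=3.84):
    """Loans needed to estimate the pool default rate within materiality.
    pd_est: preliminary default rate (e.g., 0.01)
    error_margin_pct: material ALLL error as a fraction of pool principal (e.g., 0.0025)
    avg_loss_severity: loss as a fraction of defaulted principal (e.g., 0.25)"""
    pd_error_margin = error_margin_pct / avg_loss_severity  # allowed error on the default rate itself
    return math.ceil(z_squared * pd_est * (1 - pd_est) / pd_error_margin ** 2)

# Worked example from the text: 1% default rate, 25% severity, 0.25% threshold
print(required_sample_size(0.01, 0.0025, 0.25))  # roughly 380 loans, well under the 500 available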

CECL DATA REQUIREMENTS: SAMPLE SIZE ACROSS AN ASSET CLASS

If we have an asset class with loans of varying credit risk characteristics, one way to determine the needed sample is just to carve up the portfolio into many buckets of loans with like-risk characteristics, determine the number of loans needed for each bucket on a standalone basis per the formula above, and then sum these amounts. The problem with this approach – assuming our concern is to avoid material ALLL errors at the asset class level – is that it will dramatically overstate the aggregate number of loans required.

A better approach, which still involves segregating the portfolio into risk buckets, is to assign varying margins of error across the buckets in a way that minimizes the aggregate sample required while maintaining a proportional portfolio mix and keeping the aggregate margin of error within the aggregate materiality threshold. A tool like Solver within Microsoft Excel can perform this optimization task with precision. The resulting error margins (as a percentage of each bucket’s default rate estimates) are much wider than they would be on a standalone basis for buckets with low frequencies and slightly narrower for buckets with high default frequencies.
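The same optimization can be set up outside of Excel. The sketch below uses scipy in place of Solver to choose bucket-level error margins that minimize the total loans required while holding the summed dollar error margins within the aggregate threshold. The bucket default rates and balances are illustrative, and treating dollar error margins as simply additive is a conservative simplifying assumption.

import numpy as np
from scipy.optimize import minimize

pd_est = np.array([0.01, 0.03, 0.05, 0.07])        # illustrative default rates by bucket
balance = np.array([250e6, 250e6, 250e6, 250e6])   # principal per bucket
severity = 0.25
threshold = 0.0025 * balance.sum()                 # aggregate dollar materiality threshold

def total_loans(e):                                # e = default-rate error margin per bucket
    return np.sum(3.84 * pd_est * (1 - pd_est) / e ** 2)

dollar_margin_cap = {"type": "ineq",               # threshold minus summed dollar error margins >= 0
                     "fun": lambda e: threshold - np.sum(e * severity * balance)}

result = minimize(total_loans, x0=np.full(4, 0.01),
                  bounds=[(1e-4, 0.5)] * 4, constraints=[dollar_margin_cap])
print(result.x)                                    # optimized error margin per bucket
print(int(total_loans(result.x)))                  # total loans required across the asset class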

Even at its most optimized, though, the total number of loans needed to estimate the default rates of multiple like-risk buckets will skyrocket as the number of key credit risk variables increases. A superior approach to bucketing is loan-level modeling, which treats the entire asset class as one sample but estimates loan-specific default rates according to the individual risk characteristics of each loan.

LOAN-LEVEL MODELING

Suppose within a particular asset class, FICO is the only factor that affects default rates, and we segregate loans into four FICO buckets based on credit performance. (Assume for simplicity that each bucket holds an equal number of loans.) The buckets’ default rates range from 1% to 7%. As before, average loss severity is 25% and our materiality threshold is 0.25% of principal balance. Whether with a bucketing approach or loan-level modeling, either way we need a sample of about 5,000 loans total across the asset class. (We calculate the sample required for bucketing with Solver as described above and calculate the sample required for loan-level modeling with an iterative approach described below.)

Now suppose we discover that loan age is another key performance driver. We want to incorporate this into our model because an accurate ALLL minimizes earnings volatility and thereby minimizes excessive capital buffers. We create four loan age buckets, leaving us now with 4 × 4 = 16 buckets (again, assuming the buckets hold equal loan counts). With four categories each of two variables, we would need around 9,000 loans for loan-level modeling but 20,000 loans for a bucketing approach, with around 1,300 in each bucket. (These are ballpark estimates that assume the loan-level model has been properly constructed and fits the data reasonably well. Estimates will vary by bank with the default rates and loss severities of the available sample. Also, while this article deals with loan count sufficiency, we have noted previously that the same dataset must also cover a sufficient timespan, whether the bank is using loan-level modeling or bucketing.)

Finally, suppose we include a third variable, perhaps stage in the economic cycle, LTV, Debt Service Coverage Ratio, or something else.

Again assume we segregate loans into four categories based on this third variable. Now we have 4 × 4 × 4 = 64 equal-sized buckets. With loan-level modeling we need around 12,000 loans. With bucketing we need around 100,000 loans, an average of around 1,600 per bucket.

As the graph shows, a bucketing approach forces us to choose between less insight and an astronomical sample size requirement. As we increase the number of variables used to forecast credit losses, the sample needed for loan-level modeling increases slightly, but the sample needed for bucketing explodes. This points to loan-level modeling as the best solution because well-performing CECL models incorporate many variables. (Another benefit of loan-level credit models, one that is of particular interest to investors, is that the granular intelligence they provide can facilitate better loan screening and pricing decisions.)

Exhibit 2: Loan-Level Modeling Yields Greater Insight from Smaller Samples

CECL DATA REQUIREMENTS: SAMPLE SIZE FOR LOAN-LEVEL MODELING

Determining the sample size needed for loan-level modeling is an iterative process based on the standard errors reported in the model output of a statistical software package. After estimating and running a model on the existing sample, convert the error margin of each default rate (1.96 × the standard error of the default rate estimate to generate a 95% confidence interval) into an error margin of dollars lost by multiplying the default rate error margin by loss severity and the relevant principal balance. Next, sum each dollar error margin to determine whether the aggregate dollar error margin is within the materiality threshold, and adjust the sample size up or down as necessary.
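A minimal sketch of that check follows, assuming the fitted model (for example, a logistic regression) reports a standard error for each loan's estimated default rate; the variable names are placeholders.

import numpy as np

def aggregate_dollar_error_margin(pd_std_errors, loss_severity, principal_balance):
    """Sum of loan-level dollar error margins at 95% confidence."""
    pd_error_margin = 1.96 * np.asarray(pd_std_errors)  # half-width of each default-rate confidence interval
    return float(np.sum(pd_error_margin * np.asarray(loss_severity) * np.asarray(principal_balance)))

def sample_is_sufficient(aggregate_margin, total_principal, threshold_pct=0.0025):
    """Compare the aggregate dollar error margin to the materiality threshold.
    If it exceeds the threshold, enlarge the sample (or add proxy data) and re-fit."""
    return aggregate_margin <= threshold_pct * total_principal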

Chapter 3
What Data do I need for CECL Modeling?

A DETAILED GUIDE TO CECL DATA COLLECTION

Even with CECL compelling banks to collect more internal loan data, we continue to emphasize profitability as the primary benefit of robust, proprietary, loan-level data. Make no mistake, the data template we outline below is for CECL modeling. CECL compliance, however, is a prerequisite to profitability. Also, while third-party data may suffice for some components of the CECL estimate, especially in the early years of implementation, reliance on third-party data can drag down profitability. Third-party data is often expensive to buy, may be unsatisfactory to an auditor, and can yield less accurate forecasts. Inaccurate forecasts mean volatile loss reserves and excessive capital buffers that dilute shareholder profitability. An accurate forecast built on internal data not only solves these problems but can also be leveraged to optimize loan screening and loan pricing decisions.

Below is a detailed table of data fields to collect. Banks should collect this dataset whether they plan to build credit models themselves or hire a vendor. A good vendor would expect a dataset like the one outlined below.

The table is not exhaustive for every asset class and circumstance, but it covers the basics (and then some) and is plenty serviceable for CECL modeling. A regional bank that had collected and preserved these data fields at a loan level over this past business cycle would be in a league of its own. In our new CECL world, it would hold a data asset worth perhaps more than its loan portfolio.

The following variables are useful in building probability of default, loss severity, and prepayment models – the models that inform credit loss forecasts. Note that variables in italics can be calculated from the collected variables.

Below the table are a few important notes.

Notes:

  • Preserve the full time series: All data should be preserved, beginning with the origination data. Each new month of data should add to, not overwrite, prior data. If the loan servicing system cannot accommodate this, other reasonably priced databases are available. A full time series of data is required to establish delinquency roll rates, the impact of macroeconomics on delinquency transitions, and prepayment patterns. The cost of anything less than a full time series of data is lost accuracy. We have previously written that datasets should span at least ten years.
  • Preserve original credit characteristics: A CECL model needs to forecast credit losses long into the future, based on credit characteristics available at the time the model is run. It does little good to learn relationships between today’s FICO scores and short-term default probabilities over the next twelve months. For the most part, the model must predict default based on static credit characteristics, with dynamism entering the model through the macroeconomic inputs, which might assume improving, deteriorating, or stable conditions. An exception exists where the credit characteristic itself can be reasonably predicted, as is the case with LTV on the basis of real estate indices.
  • Capture updated credit characteristics: When in doubt, capture the data. We noted in the prior bullet point that updated credit characteristics may not always be useful, especially if they are not captured systematically and regularly across the portfolio. But a credit modeler might discover that, in certain cases, a better model can be built using “most recent” credit characteristics rather than original credit characteristics. Also, updated credit characteristics can be useful for portfolio segmentation.
  • Notes on specific variables: The reasons for collecting some of the variables will be apparent. Here are the less self-explanatory CECL data requirements and the reasons for collecting these variables:
    • Payment date permits matching loan outcomes with macroeconomic factors and calculating loan age, an important explanatory variable.
    • Loan age is useful in establishing default probabilities as most assets exhibit different default probabilities at different stages in their life.
    • Interest rate information is useful in confirming scheduled payment and as an explanatory variable in default and prepayment models. Loans exhibit higher default and prepayment probabilities, all else equal, when charged higher interest rates.
    • Scheduled payment permits calculating prepayment and underpayment.
    • TDRs and modifications inform default probabilities as they are signs of distress.
    • Outstanding principal balance at end of period is useful to confirm scheduled payment, to measure loss severity, and possibly as an explanatory variable in prepayment and credit models.
  • Why No Risk Ratings? It wouldn’t hurt to include risk ratings in the monthly data, and modelers who are committed to building a risk ratings migration model will need them. We prefer delinquency state transition models, however, because they are objective. Risk ratings are either subjective or else an amalgamation of metrics that could be disaggregated and modeled individually. For most banks, a risk rating is akin to a prediction – it ranks likelihood of future defaults or magnitudes of forecasted losses. Predicting future risk rating is thus like predicting a prediction. It is both more useful and more doable to predict objective outcomes, which is why we prefer to model based on delinquency status. (A minimal sketch of a delinquency-state transition matrix follows this list.)
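As referenced above, here is a minimal sketch of estimating a monthly delinquency-state transition matrix from the loan-level time series; the column names 'loan_id', 'month', and 'delinquency_status' are assumptions about the dataset layout.

import pandas as pd

def monthly_transition_matrix(history: pd.DataFrame) -> pd.DataFrame:
    """Row-normalized probabilities of moving from one delinquency state to
    the next over one month, estimated from observed loan-month pairs."""
    history = history.sort_values(["loan_id", "month"])
    history = history.assign(
        next_status=history.groupby("loan_id")["delinquency_status"].shift(-1))
    observed = history.dropna(subset=["next_status"])
    counts = pd.crosstab(observed["delinquency_status"], observed["next_status"])
    return counts.div(counts.sum(axis=1), axis=0)

# Example: matrix.loc["30-59 DPD", "60-89 DPD"] would be the one-month roll rate
# from 30-59 days past due to 60-89 days past due.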

Chapter 4
How to Select Your CECL Methodology

DOABLE, DEFENSIBLE CHOICES AMID THE CLUTTER

CECL advice is hitting financial practitioners from all sides. As an industry friend put it, “Now even my dentist has a CECL solution.”

With many high-level commentaries on CECL methodologies in publication (including RiskSpan’s), we introduce this specific framework to help practitioners eliminate ill-fitting methodologies until one remains per segment. We focus on the commercially available methods implemented in the CECL Module of the RiskSpan (RS) Edge Platform, enabling us to be precise about which methods cover which asset classes, require which data fields, and generate which outputs. Our decision framework covers each asset class under the CECL standard and considers data availability, budgetary constraints, value placed on precision, and audit and regulatory scrutiny.

PERFORMANCE ESTIMATION VS. ALLOWANCE CALCULATIONS

Before evaluating methods, it is clarifying to distinguish performance estimation methods from allowance calculation methods (or simply allowance calculations). Performance estimation methods forecast the credit performance of a financial asset over the remaining life of the instrument, and allowance calculations translate that performance forecast into a single allowance number.

There are only two allowance calculations allowable under CECL: the discounted cash flow (DCF) calculation (ASC 326-20-30-4), and the non-DCF calculation (ASC 326-20-30-5). Under the DCF allowance calculation, allowance equals amortized cost minus the present value of expected cash flows. The expected cash flows (the extent to which they differ from contractual cash flows) must first be driven by some performance estimation method. Under the non-DCF allowance calculation, allowance equals cumulative expected credit losses of amortized cost (roughly equal to future principal losses). These future losses of amortized cost, too, must first be generated by a performance estimation method.

Next, we show how to select performance estimation methods, then allowance calculations.


SELECTING YOUR PERFORMANCE ESTIMATION METHOD

Figure 1 below lays out the performance estimation methods available in RiskSpan’s CECL Module. We group methods into “Practical Methods” and “Premium Methods.”

In general, Practical Methods calculate average credit performance from a user-selected historical performance data set and extrapolate those historical averages – as adjusted by user-defined management adjustments for macroeconomic expectations and other factors – across the future life of the asset. When using a Practical Method, every instrument in the same user-defined segment will have the same allowance ratio.

Premium Methods involve statistical models built on large performance datasets containing instrument-level credit attributes, instrument-level performance outcomes, and contemporaneous macroeconomic data. While vendor-built Premium Methods come pre-built on large industry datasets, they can be tuned to institution-specific performance if the user supplies performance data. Premium Methods take instrument-level attributes and forward-looking macroeconomic scenarios as inputs and generate instrument-level, macro-conditioned results based on statistically valid methods. Management adjustments are possible, but the model results already reflect the input macroeconomic scenario(s).

Most Practical Methods require the customer to supply historical performance data. All methods require the customer to provide basic positional data as of the reporting date (outstanding balance amounts, the asset class of each instrument, etc.).

FIGURE 1 – PERFORMANCE ESTIMATION METHODS IN RISKSPAN’S CECL MODULE

To help customers choose their performance estimation methods, we walk them through the decision tree shown in Figure 3. These steps to select a performance estimation method should be followed for each portfolio segment, one segment at a time. As shown, the first step to shorten the menu of methods is to choose between Practical Methods and Premium Methods. Premium Methods available today in the RS Edge Platform include both methods built by RiskSpan (prefixed “RS”) and methods built by our partner, S&P Global Market Intelligence (“S&P”).

The choice between Premium Methods and Practical Methods is primarily a tradeoff between instrument-level precision and scientific incorporation of macroeconomic scenarios on the Premium side versus lower operational costs on the Practical side. Because Premium Methods produce instrument-specific forecasts, they can be leveraged to accelerate and improve credit screening and pricing decisions in addition to solving CECL. Practical Methods, by contrast, rely on user-defined management adjustments to reflect macroeconomic expectations, and such adjustments may not withstand the intense audit and regulatory scrutiny that larger institutions face.

Suppose that for a particular asset class, an institution wants a Premium Method. For most asset classes, RiskSpan’s CECL Module selectively features one Premium Method, as shown in Figure 1. In cases where the asset class is not covered by a Premium Method in Edge, the next question becomes: does a suitable, affordable vendor model exist? We are familiar with many models in the marketplace, and can advise on the benefits, drawbacks, and pricing of each. Vendor models come with explanatory documentation that institutions can review pre-purchase to determine comfort. Where a viable vendor model exists, we assist institutions by integrating that model as a new Premium Method, accessible within their CECL workflow. Where no viable vendor model exists, institutions must evaluate their internal historical performance data. Does it contain enough instruments, span enough time, and include enough fields to build a valid model? If so, we assist institutions in building custom models and integrating them within their CECL workflows. If not, it’s time to begin or to continue a data collection process that will eventually support modeling, and in the meantime, apply a Practical Method.

To choose among Practical Methods, we first distinguish between debt securities and other asset classes. Debt securities do not require internal historical data because more robust, relevant data is available from industry sources. We offer one Practical Method for each class of debt security, as shown in Figure 1.

For asset classes other than debt securities, the next step is to evaluate internal data. Does it represent enough instruments (segment-level summary data is fine for Practical Methods) and span enough time to drive meaningful results? If not, we suggest applying the Remaining Life Method, a method that has been showcased by regulators and that references Call Report data (which the Edge platform can filter by institution size and location). If adequate internal data exists, eliminate methods that are not asset class-appropriate (see Figure 1) or that require specific data fields the institution lacks. Figure 2 summarizes data requirements for each Practical Method, with a tally of required fields by field type. RiskSpan can provide institutions with detailed data templates for any method upon request. From among the remaining Practical Methods, we recommend institutions apply this hierarchy:

  1. Vintage Loss Rate: This method makes the most of recent observations and datasets that are shorter in timespan, whereas the Snapshot Loss Rate requires frozen pools to age substantially before counting toward historical performance averages. The Vintage Loss Rate explicitly considers the age of outstanding loans and leases and requires relatively few data fields.
  2. Snapshot Loss Rate: This method has the drawbacks described above, but for well-aged datasets produces stable results and is a very intuitive and familiar method to financial institution stakeholders.
  3. Remaining Life: This method ignores the effect of loan seasoning on default rates and requires user assumptions about prepayment rates, but it has been put forward by regulators and is a necessary and defensible option for institutions who lack the data to use the methods above.
 

FIGURE 2 – DATA REQUIREMENTS FOR PRACTICAL METHODS

 

FIGURE 3 – METHODOLOGY SELECTION FRAMEWORK

 

SELECTING YOUR ALLOWANCE CALCULATION

After selecting a performance estimation method for each portfolio segment, we must select our corresponding allowance calculations.

Note that all performance estimation methods in RS Edge generate, among their outputs, undiscounted expected credit losses of amortized cost. Therefore, users can elect the non-DCF allowance calculation for any portfolio segment regardless of the performance estimation method. Figure 5 shows this.

A DCF allowance calculation requires the elements shown in Figure 4. Among the Premium (performance estimation) Methods, RS Resi, RS RMBS, and RS Structured Finance require contractual features as inputs and generate among their outputs the other elements of a DCF allowance calculation. Therefore, users can elect the DCF allowance calculation in combination with any of these methods without providing additional inputs or assumptions. For these methods, the choice between the DCF and non-DCF allowance calculation often comes down to anticipated impact on allowance level.

The remaining Premium Methods to discuss are the S&P C&I method – which covers all corporate entities, financial and non-financial, and applies to both loans and bonds – and the S&P CRE method. These methods do not require all the instruments’ contractual features as inputs (an advantage in terms of reducing the input data requirements). They do project periodic default and LGD rates, but not voluntary prepayments or liquidation lags. Therefore, to use the DCF allowance calculation in combination with the S&P C&I and CRE performance estimation methods, users provide additional contractual features as inputs and voluntary prepayment rate and liquidation lag assumptions. The CECL Module’s cash flow engine then integrates the periodic default and LGD rates produced by the S&P C&I and CRE methods, together with user-supplied contractual features and prepayment and liquidation lag assumptions, to produce expected cash flows. The Module discounts these cash flows according to the CECL requirements and differences the present values from amortized cost to calculate allowance. In considering this DCF allowance calculation with the S&P performance estimation methods, users typically weigh the impact on allowance level against the task of supplying the additional data and assumptions.

To use a DCF allowance calculation in concert with a Practical (performance estimation) Method requires the user to provide contractual features (up to 20 additional data fields), liquidation lags, as well as monthly voluntary prepayment, default, and LGD rates that reconcile to the cumulative expected credit loss rate from the performance estimation method. This makes the allowance a multi-step process. It is therefore usually simpler and less costly overall to use a Premium Method if the institution wants to enable a DCF allowance calculation. The non-DCF allowance calculation is the natural complement to the Practical Methods.

 

FIGURE 4 – ELEMENTS OF A DCF ALLOWANCE CALCULATION

 

FIGURE 5 – ALLOWANCE CALCULATIONS COMPATIBLE WITH EACH PERFORMANCE ESTIMATION METHOD

Once you have selected a performance estimation method and an allowance calculation for each segment, you can begin the next phase: comparing modeled results to expectations and historical performance, and tuning model settings and management inputs accordingly. We are available to discuss CECL methodology further with you; don’t hesitate to get in touch!

Chapter 5
DCF vs. Non-DCF Allowance

MYTH AND REALITY

FASB’s CECL standard allows institutions to calculate their allowance for credit losses as either “the difference between the amortized cost basis and the present value of the expected cash flows” (ASC 326-20-30-4) or “expected credit losses of the amortized cost basis” (ASC 326-20-30-5). The first approach is commonly called the discounted cash flow or “DCF approach” and the second approach the “non-DCF approach.” In the second approach, the allowance equals the undiscounted sum of the amortized cost basis projected not to be collected. For the purposes of this post, we will equate amortized cost with unpaid principal balance.

A popular misconception – even among savvy professionals – is that a DCF-based allowance is always lower than a non-DCF allowance given the same performance forecast. In fact, a DCF allowance is sometimes higher and sometimes lower than a non-DCF allowance, depending upon the remaining life of the instrument, the modeled recovery rate, the effective interest rate (EIR), and the time from default until recovery (liquidation lag). Below we will compare DCF and non-DCF allowances while systematically varying these key differentiators.

Our DCF allowances reflect cash inflows that follow the SIFMA standard formulas. We systematically vary time to maturity, recovery rate, liquidation lag and EIR to show their impact on DCF vs. non-DCF allowances (see Table 1 for definitions of these variables). We hold default rate and voluntary prepayment rate constant at reasonable levels across the forecast horizon. See Table 2 for all loan features and behavioral assumptions held constant throughout this exercise.

For clarity, we reiterate that the DCF allowances we will compare to non-DCF allowances reflect amortized cost minus discounted cash inflows, per ASC 326-20-30-4. A third approach, which is unsound and therefore excluded, is the discounting of accounting losses. This approach will understate expected credit losses by using the interest rate to discount principal losses while ignoring lost interest itself.

 

TABLE 1 – KEY DRIVERS OF DCF VS. NON-DCF ALLOWANCE DIFFERENCES (SYSTEMATICALLY VARIED BELOW)

 

TABLE 2 – LOAN FEATURES AND BEHAVIORAL ASSUMPTIONS HELD CONSTANT

Figure 1 compares DCF versus non-DCF allowances. It is organized into nine tables, covering the landscape of loan characteristics that drive DCF vs. non-DCF allowance differences. The cells of the tables show DCF allowance minus Non-DCF allowance in basis points. Thus, positive values mean that the DCF allowance is greater.

  • Tables A, B and C show loans with 100% recovery rates. For such loans, ultimate recovery proceeds match exposure at default. Under the non-DCF approach, as long as recovery proceeds eventually cover principal balance at the time of default, allowance will be zero. Accordingly, the non-DCF allowance is 0 in every cell of tables A, B and C. Longer liquidation lags, however, diminish present value and thus increase DCF allowances. The greater the discount rate (the EIR), the deeper the hit to present value. Thus, the DCF allowance increases as we move from the top-left to the bottom-right of tables A, B and C. Note that even when liquidation lag is 0, 100% recovery still excludes the final month’s interest, and a DCF allowance (which reflects total cash flows) will accordingly reflect a small hit. Tables A, B and C differ in one respect – the life of the loan. Longer lives translate to greater total defaulted dollars, greater amounts exposed to the liquidation lags, and greater DCF allowances.
  • Tables G, H and I show loans with 0% recovery rates. While 0% recovery rates may be rare, it is instructive to understand the zero-recovery case to sharpen our intuitions around the comparison between DCF and non-DCF allowances. With zero recovery proceeds, the loans produce only monthly (or periodic) payments until default. Liquidation lag, therefore, is irrelevant. As long as the EIR is positive and there are defaults in payment periods besides the first, the present value of a periodic cash flow stream (using EIR as the discount rate) will exceed cumulative principal collected. Book value minus the present value of the periodic cash flow stream, therefore, will be less than the cumulative principal not collected, and thus the DCF allowance will be lower. Appendix A explains why this is the case. As Tables G, H and I show, the advantage (if we may be permitted to characterize a lower allowance as an advantage) of the DCF approach on 0% recovery loans is greater with greater discount rates and greater loan terms.
  • Tables D, E and F show a more complex (and more realistic) scenario where the recovery rate is 75% (loss-given-default rate is 25%). Note that each cell in Table D falls in between the corresponding values from Table A and Table G; each cell in Table E falls in between the corresponding values from Table B and Table H; and each cell in Table F falls in between the corresponding values from Table C and Table I. In general, we can see that long liquidation lags will hurt present values, driving DCF allowances above non-DCF allowances. Short (zero) liquidation lags allow the DCF advantage from the periodic cash flow stream (described above in the comments about Tables G, H and I) to prevail, but the size of the effect is much smaller than with 0% recovery rates because allowances in general are much lower. With moderate liquidation lags (12 months), the two approaches are nearly equivalent. Here the difference is made by the loan term, where shorter loans limit the periodic cash flow stream that advantages the DCF allowances, and longer loans magnify the impact of the periodic cash flow stream to the advantage of the DCF approach. A brief numerical sketch following this list illustrates these recovery-rate and liquidation-lag effects.
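As a minimal numerical sketch of these effects (an interest-only loan with a bullet principal payment, a flat monthly default rate, and an EIR equal to the note rate; toy assumptions rather than the analysis behind Figure 1), the code below reproduces the qualitative pattern described above: with 100% recovery the non-DCF allowance is zero while the DCF allowance grows with the liquidation lag, and with 0% recovery the DCF allowance falls below the non-DCF allowance.

```python
def allowances(recovery_rate, liq_lag_months, eir_annual=0.06,
               term_months=36, monthly_default=0.005):
    """Toy DCF vs. non-DCF allowance for a $100 interest-only bullet loan."""
    r = eir_annual / 12
    bal, surv, pv, undisc_loss = 100.0, 1.0, 0.0, 0.0
    for m in range(1, term_months + 1):
        defaulted = surv * bal * monthly_default          # exposure defaulting in month m
        undisc_loss += defaulted * (1 - recovery_rate)    # non-DCF: undiscounted shortfall
        pv += defaulted * recovery_rate / (1 + r) ** (m + liq_lag_months)  # lagged recovery
        surv *= 1 - monthly_default
        pv += surv * bal * r / (1 + r) ** m               # interest from performing loans
    pv += surv * bal / (1 + r) ** term_months             # bullet principal from survivors
    return round(bal - pv, 2), round(undisc_loss, 2)      # (DCF, non-DCF) per $100 of balance

for rec, lag in [(1.0, 0), (1.0, 24), (0.0, 0)]:
    dcf, non_dcf = allowances(rec, lag)
    print(f"recovery {rec:.0%}, lag {lag:>2}m:  DCF {dcf:5.2f}   non-DCF {non_dcf:5.2f}")
```

Even in this stripped-down setting, 100% recovery with no lag produces a small positive DCF allowance (the lost final month of interest), lengthening the lag widens it, and 0% recovery flips the comparison so the DCF allowance sits below the non-DCF figure.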
 

FIGURE 1 – DCF ALLOWANCE RELATIVE TO NON-DCF ALLOWANCE (DIFFERENCE IN BASIS POINTS)

 

Conclusion

  • Longer liquidation lags will increase DCF allowances relative to non-DCF allowances as long as recovery rate is greater than 0%.
  • Greater EIRs will magnify the difference (in either direction) between DCF and non-DCF allowances.
  • At extremely high recovery rates, DCF allowances will always exceed non-DCF allowances; at extremely low recovery rates, DCF allowances will always be lower than non-DCF allowances. At moderate recovery rates, other factors (loan term and liquidation lag) make the difference as to whether DCF or non-DCF allowance is higher.
  • Longer loan terms both a) increase allowance in general, by exposing balances to default over a longer time horizon; and b) magnify the significance of the periodic cash flow stream relative to the liquidation lag, which advantages DCF allowances.
    • Where recovery rates are extremely high (and so non-DCF allowances are held low or to zero) the increase to defaults from longer loan terms will drive DCF allowances further above non-DCF allowances.
    • Where recovery rates are moderate or low, the increase to loan term will lower DCF allowances relative to non-DCF allowances.

Note that we have not specified the asset class of our hypothetical instrument in this exercise. Asset class by itself does not influence the comparison between DCF and non-DCF allowances. However, asset class (for example, a 30-year mortgage secured by a primary residence, versus a five-year term loan secured by business equipment) does influence the variables (loan term, recovery rate, liquidation lag, and effective interest rate) that drive DCF vs. non-DCF allowance differences. Knowledge of an institution’s asset mix would enable us to determine how DCF vs. non-DCF allowances will compare for that portfolio. 

Chapter 6
Stakeholders

WHAT CECL MEANS TO INVESTORS

For many institutions, CECL will mean a one-time reduction in book equity and lower stated earnings during periods of portfolio growth. These reductions occur because CECL implicitly double-counts credit risk from the time of loan origination, as we will meticulously demonstrate. But for investors, will the accounting change alter the value of your shares?
 

THREE DISTINCT MEASURES OF VALUE

To answer this question well, we need to parse three distinct measures of value:

  1. Book Value: This is total shareholders’ equity as reported in financial reports like 10-Ks and annual reports prepared in accordance with U.S. GAAP.
  2. Current Market Value (also known as Market Cap): Current share price multiplied by the number of outstanding shares. This is the market’s collective opinion of the value of an institution. It could be very similar to, or quite different from, book value, and may change from minute to minute.
  3. Intrinsic Value (also known as Fundamental Value or True Value): The price that a rational investor with perfect knowledge of an institution’s characteristics would be willing to pay for its shares. It is by comparing an estimate of intrinsic value versus current market value that we deem a stock overpriced or underpriced. Investors with a long-term interest in a company should be concerned with its intrinsic or true value.
 

HOW DOES AN ACCOUNTING CHANGE AFFECT EACH MEASURE OF VALUE

Accounting standards govern financial statements, which investors then interpret. An informed, rational investor will “look through” any accounting quirk that distorts the true economics of an enterprise. Book value, therefore, is the only measure of value that an accounting change directly affects.

An accounting change may indirectly affect the true value of a company if costly regulations kick in as a result of a lower book value or if the operational cost of complying with the new standard is cumbersome. These are some of the risks to fundamental value from CECL, which we discuss later, along with potential mitigants.

 

KEY FEATURE OF CECL: DOUBLE-COUNTING CREDIT RISK

The single-most important thing for investors to understand about CECL is that it double-counts the credit risk of loans in a way that artificially reduces stated earnings and the book values of assets and equity at the time a loan is originated. It is not the intent of CECL to double-count credit risk, but it has that effect, as noted by the two members of the Financial Accounting Standards Board (FASB) who dissented from the rule. (CECL was adopted by a 5-2 vote.)

Consider this simple example of CECL accounting: A bank makes a loan with an original principal balance of $100. CECL requires the bank to recognize an expense equal to the present value of expected credit lossesi and to record a credit allowance that reduces net assets by this same amount. Suppose we immediately reserve our $100 loan down to a net book value of $99 and book a $1 expense. Why did we even make the loan? Why did we spend $100 on something our accountant says is worth $99? Is lending for suckers?

Intuitively, consider that to make a loan of $100 is to buy a financial asset for a price of $100. If other banks would have made the same loan at the same interest rate (which is to say, they would have paid the same price for the same asset), then our loan’s original principal balance was equal to its fair market value at the time of origination. It is critical to understand that an asset’s fair market value is the price which market participants would pay after considering all of the asset’s risks, including credit risk. Thus, any further allowance for credit risk below the original principal balance is a double-counting of credit risk.

Here’s the underlying math: Suppose the $100 loan is a one-year loan, with a single principal and interest payment due at maturity. If the note rate is 5%, the contractual cash flow is $105 next year. This $105 is the most we can receive; we receive it if no default occurs. What is the present value of the $105 we hope to receive? One way to determine it is to discount the full $105 amount by a discount rate that reflects the risk of nonpayment. We established that 5% is the rate of return that banks are requiring of borrowers presenting similar credit risk, so an easy present value calculation is to discount next year’s contractual $105 cash flow by the 5% contractual interest rate, i.e., $105 / (1 + 5%) = $100. Alternatively, we could reduce the contractual cash flow of $105 by some estimate of credit risk. Say we estimate that if we made many loans like this one, we would collect an average of $104 per loan. Our expected future cash flow, then, is $104. If we take the market value of $100 for this loan as an anchor point, then the market’s required rate of return for expected cash flows must be 4% ($104 / (1 + 4%) = $100). It is only sensible that the market requires a lower rate of return on cash flows with greater certainty of collection.

What the CECL standard does is require banks to discount the lower expected cash flows at the higher contractual rate (or to use non-discounting techniques that have the same effect). This would be like discounting $104 at 5% and calculating a fair market value for the asset of $104 / (1 + 5%) ≈ $99. This (CECL’s) method double-counts credit risk by $1. The graph below shows the proper relationship between cash flow forecasts and discount rates when performing present value calculations, and shows how CECL plots off the line.
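Restating the arithmetic above as a short script (the figures come directly from the example; nothing here is new data):

```python
contractual_cf = 105.0   # promised principal + interest due in one year
expected_cf = 104.0      # expected collections after credit losses
note_rate = 0.05         # contractual rate demanded for this credit risk

fair_value = contractual_cf / (1 + note_rate)      # 105 / 1.05 = 100.00
expected_yield = expected_cf / fair_value - 1      # market return on expected cash: 4%
check = expected_cf / (1 + expected_yield)         # 104 / 1.04 = 100.00 (consistent)
cecl_value = expected_cf / (1 + note_rate)         # 104 / 1.05 ~= 99.05

print(fair_value, round(expected_yield, 4), round(check, 2), round(cecl_value, 2))
# Discounting expected cash flows at the contractual rate books roughly $1 of
# allowance (100.00 - 99.05) on a loan whose market value is $100 -- the double-count.
```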

FASB Vice Chairman James Kroeker and Board member Lawrence Smith described the double-counting issue in their dissent to the standards update: “When performing a present value calculation of future cash flows, it is inappropriate to reflect credit risk in both the expected future cash flows and the discount rate because doing so effectively double counts the reflection of credit risk in that present value calculation. If estimates of future cash flows reflect the risk of nonpayment, then the discount rate should be closer to risk-free. If estimates of future cash flows are based on contractual amounts (and thus do not reflect a nonpayment risk), the discount rate should be higher to reflect assumptions about future defaults.” Ultimately, the revised standard “results in financial reporting that does not faithfully reflect the economics of lending activities.”ii

The Accounting Standards Update notes two tangential counterpoints to Kroeker and Smith’s dissent. The first point is that banks would find alternative methods challenging, which may be true but is irrelevant to the question of whether CECL faithfully reflects true economics. The second point is that the valuation principles Kroeker and Smith lay out are for fair value estimates, whereas the accounting standard is not intended to produce fair value estimates. This concedes the only point we are trying to make, which is that the accounting treatment deviates (downwardly, in this case) from the fundamental and market value that an investor should care about.

 

HOW CECL AFFECTS EACH MEASURE OF VALUE

As noted previously, the direct consequences of CECL will hit book value. Rating agency Fitch estimates that the initial implementation of CECL would shave 25 to 50 bps off the aggregate tangible common equity ratio of US banks if applied in today’s economy. The ongoing impact of CECL will be less dramatic because the annual impact to stated earnings is just the year-over-year change in CECL. Still, a growing portfolio would likely add to its CECL reserve every year.iii 

There are many indirect consequences of CECL that may affect market and true value: 

  1. Leverage: The combination of lower book values from CECL with regulations that limit leverage on the basis of book value could force some banks to issue equity or retain earnings to de-leverage their balance sheet. Consider these points:
    1. There is a strong argument to be made to regulators that the capital requirements that pre-dated CECL, if not adjusted for the more conservative asset calculations of CECL, will have become more conservative de facto than they were meant to be. There is no indication that regulators are considering such an adjustment, however. A joint statement on CECL from the major regulators tells financial institutions to “[plan] for the potential impact of the new accounting standard on capital.”
    2. Withholding a dividend payment does not automatically reduce a firm’s true value. If the enterprise can put retained earnings to profitable use, the dollar that wasn’t paid out to investors this year can appreciate into a larger payment later.
    3. The deeper threat to value (across all three measures) comes if regulations force a permanent de-leveraging of the balance sheet. This action would shift the capital mix away from tax-advantaged debt and toward equity, increase the after-tax cost of capital and decrease earnings and cash flow per share, all else equal.v Because banks face the shift to CECL together, however, they may be able to pass greater capital costs on to their borrowers in the form of higher fees or higher interest rates.
    4. Banks can help themselves in a variety of ways. The more accurate a bank’s loss forecasts prove to be, the more stable its loss reserve will be, and the less likely regulators are to require additional capital buffers. Management can also disclose whether their existing capital buffers are sufficient to absorb the projected impact of CECL without altering capital plans. Conceivably, management could elect to account for its loans under the fair value option to avoid CECL’s double-counting bias, but this would introduce market volatility to stated earnings which could prompt its own capital buffers.
  2. Investor Perception of Credit Risk: Investors’ perception of the credit risk a bank faces affects its market value. If an increase in credit allowance due to CECL causes investors to worry that a bank faces greater credit risk than they previously understood, the bank’s market value will fall on this reassessment. On the other hand, if investors have independently assessed the credit risk borne by an institution, a mere change in accounting treatment will not affect their view. An institution’s true value comes from the cash flows that a perfectly informed investor would expect. Unless CECL changes the kinds of loans an institution makes or the securities it purchases, its true credit risk has not changed, and nothing the accounting statements say can change that.
  3. Actual Changes in Credit Risk: Some banks may react to CECL by shifting their portfolio mix toward shorter duration or less credit risky investments, in an effort to mitigate CECL’s impact on their book value. If underwriting unique and risky credits was a core competency of these banks, and they shift toward safer assets with which they have no special advantage, this change could hurt their market and fundamental value.
  4. Volatility: The ABA argues that the inherent inaccuracies of forecasts over long time horizons will increase the volatility of the loss reserve under CECL.vi Keefe, Bruyette & Woods (KBW) goes the other way, writing that CECL should reduce the cyclicality of stated earnings.vii KBW’s point can loosely be understood by considering that long-term averages are more stable than short-term averages, and short-term averages drive existing loss reserves. Certainly, if up-front CECL estimates are accurate, even major swings in charge-offs can be absorbed without a change in the reserve as long as the pattern of charge-offs evolves as expected. While cash flow volatility would hurt fundamental value, the concern with volatility of stated earnings is that it could increase the capital buffers required by regulators.
  5. Transparency: All else equal, investors prefer a company whose risks are more fully and clearly disclosed. KBW reasons that the increased transparency required by CECL will have a favorable impact on financial stock prices.
  6. Comparability Hindered: CECL allows management to choose from a range of modeling techniques and even to choose the macroeconomic assumptions that influence its loss reserve, so long as the forecast is defensible and used firm-wide. Given this flexibility, two identical portfolios could show different loss reserves based on the conservatism or aggressiveness of management. This situation will make peer comparisons impossible unless disclosures are adequate and investors put in the work to interpret them. Management can help investors understand, for example, if its loss reserve is larger because its economic forecast is more conservative, as opposed to because its portfolio is riskier.
  7. Operational Costs: Complying with CECL requires data capacity and modeling resources that could increase operational costs. The American Bankers Association notes that such costs could be “huge.”ix Management can advise stakeholders whether it expects CECL to raise its operational costs materially. If compliance costs are material, they will affect all measures of value to the extent that they cannot be passed on to borrowers. As noted earlier, the fact that all US financial institutions face the shift to CECL together increases the likelihood of their being able to pass costs on to borrowers.
  8. Better Intelligence: Conceivably, the enhancements to data collection and credit modeling required by CECL could improve banks’ ability to price loans and screen credit risks. These effects would increase all three measures of value.
 

CONCLUSION

CECL is likely to reduce the book value of most financial institutions. If regulators limit leverage because of lower book equity, or if the operational costs of CECL are material and cannot be passed on to borrowers, then market values and fundamental values will also sag. If banks react to the standard by pulling back from the kinds of loans that have been their core competency, this, too, will hurt fundamental value. On the positive side, the required investment in credit risk modeling offers the opportunity for banks to better screen and price their loans.

Bank management can provide disclosures to analysts and investors to help them understand any changes to the bank’s loan profile, fee and interest income, capital structure and operational costs. Additionally, by optimizing the accuracy of its loss forecasts, management can contain the volatility of its CECL estimate and minimize the likelihood of facing further limitations on leverage.

Chapter 7
Resources

Financial institutions can partner with consulting firms to get ahead of the new CECL standard. A comprehensive CECL solution requires expertise from a wide range of disciplines, including data management, econometric and credit risk modeling, accounting, and model risk governance.

Different financial institutions will need more outside resources in some of these areas than others. The ideal one-stop CECL partner, therefore, will have a breadth and depth of expertise such that your financial institution can trust substantial CECL-related work to them, but enough modularity in their offering so that you only pay for the services you need.

A final consideration is whether a firm can provide the level of customization in CECL modeling that fits your circumstances, from an off-the-shelf model to a custom build.

EXTENSIVE EXPERTISE

While the CECL standard is new, the data management and modeling skills it requires are not, nor is the principle of aligning model assumptions across departments or the need to satisfy the demands of auditors and regulators. Consultants and industry veterans who are experienced at helping financial institutions solve such problems can help prepare you for CECL.

For as sweeping a change as CECL is, it is important to partner with a firm with accounting and regulatory expertise that understands the standard and its implications.

 
 

Besides a strong understanding of CECL and its specific requirements, CECL demands expertise in the following specific areas:

 

DATA MANAGEMENT. Data management will be at the forefront of tackling the transition from an “incurred loss” approach to an “expected lifetime loss” approach. Institutions will need to store extensive data in a controlled environment which will be subject to heightened scrutiny due to its impact on financial reporting.

The process of gathering, consolidating, and organizing data from disparate systems can prove painful for most organizations. Financial institutions that need to build or enhance their database to support CECL estimates should seek a partner with data hosting platforms or data management experts who can deploy to the client site.

MODELING & ANALYTICS. Models that translate raw data into value-added, decision-useful information can set leading entities apart from their peers both in terms of performance and in the eyes of regulators. CECL will require major upgrades and extensions to existing models or, in many cases, new models. A CECL model must forecast life-of-loan cash flows, considering both the credit characteristics of the loan portfolio and short-term and long-term economic forecasts. A vendor with credit and econometric modeling expertise can provide the necessary modeling help.

CECL also marks an opportunity for firms of all sizes to address shortcomings in their analytic capabilities. Methods for analyzing big data, surveillance capabilities, and decision support visualization tools have improved dramatically and should now be viewed as “must-haves” for organizations interested in differentiating themselves from their competition. The ideal CECL partner will have analytics expertise along these lines.

RISK & CONTROLS ADVISORY. Meeting the CECL standard will involve operational changes. Models, assumptions, and datasets supporting CECL will be subject to a high level of scrutiny due to their impacts on financial reporting. A consulting firm with accounting expertise can aid in the interpretation of the new standard and in understanding requisite process changes.

Additionally, documentation will be a critical component to a company’s transition to the new standard. Being able to support new methodologies with thorough documentation will be central to avoiding regulatory turbulence.

MODULARITY

As noted previously, different financial institutions will need help with different aspects of CECL preparedness. The ideal consulting partner will have a flexible solution and service offering that can be targeted to meet the institution’s need.

In some cases it may be appropriate for an outside consulting firm to lead a specific activity, and in other cases to augment and complement full-time staff.

APPROPRIATE LEVELS OF CUSTOMIZATION

Do you need a credit model custom-built from your internal data, or will an off-the-shelf model do the job? A good CECL partner can help you think through these questions and provide a model with the appropriate level of customization.

As a CECL partner, RiskSpan can assist with data organization, portfolio stratification, model development, and loss reserving. CECL requires coordination among different departments. RiskSpan’s core competencies consist of finance and accounting, credit risk modeling, technology and data infrastructure. Our firm can share expertise and align efforts across departments to solve challenges that might arise as a result of CECL requirements.

CECL INVOLVES WELL-STRUCTURED DATA AND MODELING. A FEW NOTES ON RISKSPAN’S EXPERTISE IN THESE AREAS:

  1. Modeling: Established frameworks and expertise in analyzing loan performance, developing custom models and integrating off-the-shelf solutions.
  2. Data Management: Domain knowledge in data intake, storage, and optimized extraction for loan performance analytics.
  • Project Management & Gap Assessment
  • Calibrate and Deploy OTS (Off-The-Shelf) Model
  • Assess Data
  • Establish Database
  • Collect Data Until Sample Spans Business Cycle
  • Build Custom Model
  • Document and Assess Accounting Process
  • Validate Model (Independent Party)
  • Monitor and Maintain Model


eBook: Navigating the Challenges of CECL With Ease


The new Current Expected Credit Loss (CECL) model is perhaps the largest change to bank accounting in decades. While everyone would agree that improved methodologies for the recognition and measurement of credit losses for loans and debt securities is a good idea in theory, there is industry angst around how sweeping the operational and process changes will be, and how costly a proposition this can become. 

Learn from RiskSpan experts the timeline, market impacts, top organizational challenges, data and modeling requirements, and how to select your methodology.


eBook: Machine Learning in Modeling Loan Data


Understanding the challenges of implementing a machine learning solution is critical to yielding leverageable results. In this three part eBook, we cover the fundamentals of machine learning, a use case with modeling loan data as well as how machine learning can be used for data visualization.


eBook: Machine Learning in Model Risk Management


In this eBook, we first address some of the ways in which machine learning techniques can be leveraged by model validators to assess models developed using conventional means. We then tackle several considerations that model validators should take into account when independently assessing machine learning models that appear in their inventories. 

In this eBook, you’ll learn:

  • Real-world examples illustrating how machine learning models can be used to solve financial problems
  • Procedures for validating machine learning models
  • Machine learning methods that can be applied during model validation to understand and mitigate model risk

eBook: A Validator’s Guide to Model Risk Management


Learn from RiskSpan model validation experts what constitutes a model, considerations for validating vendor models, how to prepare, how to determine scope, comparisons of performance metrics, and considerations for evaluating model inputs.


Calculating VaR: A Review of Methods

Calculating VaR

A Review of Methods

CONTRIBUTOR

Don Brown
Co-Head of Quantitative Analytics

TABLE OF CONTENTS

Have questions about calculating VaR?

Talk Scope

Chapter 1
Introduction

Many firms now use Value-at-Risk (“VaR”) for risk reporting. Banks need VaR to report regulatory capital usage under the Market Risk Rule, as outlined in the Fed and OCC regulations. Additionally, hedge funds now use VaR to report a unified risk measure across multiple asset classes. There are multiple approaches to VaR, so which method should we choose? In this brief paper, we outline a case for full-revaluation VaR, in contrast to a simulated VaR using a “delta-gamma” approach to value assets.

The VaR for a position or book of business can be defined as some threshold T (in dollars) where the existing position, when faced with market conditions similar to some given historical period, will have a P/L better than -T with probability k. Typically, k is chosen to be 95% or 99%. To compute this threshold T, we need to

  1. Set a significance percentile k, a market observation period, and a holding period n.1
  2. Generate a set of future market conditions (“scenarios”) from today to period n.
  3. Compute a P/L on the position for each scenario

After computing each position’s P/L, we sum the P/L across positions for each scenario and then rank the scenarios’ P/L to find the kth percentile (worst) loss.2 This loss defines our VaR T at the kth percentile for holding period n. Determining what significance percentile k and holding period n to use is straightforward and is often dictated by regulatory rules; for example, 99th percentile 10-day VaR is used for risk-based capital under the Market Risk Rule. Generating the scenarios and computing P/L under these scenarios is open to interpretation. We cover each of these in the next two sections, along with their advantages and drawbacks.
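As a minimal sketch of the ranking step (assuming the scenario-level book P/L has already been computed; the helper name, toy numbers, and percentile convention are illustrative only, since conventions vary by implementation):

```python
import math

def historical_var(scenario_pl, k=0.99):
    """Given total book P/L under each scenario, return the kth-percentile
    (worst) loss as a positive dollar amount."""
    ranked = sorted(scenario_pl)                          # most negative first
    idx = max(0, math.ceil((1 - k) * len(ranked)) - 1)    # index of the kth-percentile loss
    return -ranked[idx]

# Toy book P/L (dollars) across ten historical scenarios
pl = [-120_000, -85_000, 43_000, 10_000, -5_000,
      60_000, -200_000, 15_000, 33_000, -48_000]
print(historical_var(pl, k=0.90))   # -> 200000: the worst loss at the 90th percentile
```

The substance of the calculation lies in the two steps that precede this ranking: generating the scenarios and revaluing the book under each one, which the next sections address.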

Chapter 2
Generating Scenarios

To compute VaR, we first need to generate projective scenarios of market conditions. Broadly speaking, there are two ways to derive this set of scenarios3

  1. Project future market conditions using a Monte Carlo simulation framework
  2. Project future market conditions using historical (actual) changes in market conditions

MONTE CARLO SIMULATION

Many commercial providers simulate future market conditions using Monte Carlo simulation. To do this, they must first estimate the distributions of risk factors, including correlations between risk factors. Using correlations that are derived from historical data makes the general assumption that correlations are constant within the period. As shown in the academic literature, correlations tend to change, especially in extreme market moves – exactly the kind of moves that tend to define the VaR threshold.4 By constraining correlations, VaR may be either overstated or understated depending on the structure of the position. To account for this, some providers allow users to “stress” correlations by increasing or decreasing them. Such a stress scenario is either arbitrary, or is informed by using correlations from yet another time-period (for example, using correlations from a time of market stress), mixing and matching market data in an ad hoc way.

Further, many market risk factors are highly correlated, which is especially true on the interest rate curve. To account for this, some providers use a single factor for rate-level and then a second or third factor for slope and curvature of the curve. While this may be broadly representative, this approach may not capture subtle changes on other parts of the curve. This limited approach is acceptable for non-callable fixed income securities, but proves problematic when applying curve changes to complex securities such as MBS, where the security value is a function of forward mortgage rates, which in turn is a multivariate function of points on the curve and often implied volatility.

HISTORICAL SIMULATION

RiskSpan projects future market conditions by using actual (observed) n-day changes in market conditions over the look-back period. For example, if we are computing 10-day VaR for regulatory capital usage under the Market Risk Rule, RiskSpan takes actual 10-day changes in market variables. This approach allows our VaR scenarios to account for natural changes in correlation under extreme market moves, such as occur during a flight-to-quality, where risky assets tend to underperform risk-free assets and to move in a highly correlated manner. RiskSpan believes this is a more natural way to capture changing correlations, without the arbitrary overlay of how to change correlations in extreme market moves. This, in turn, will more correctly capture VaR.5
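As a simplified sketch of this scenario construction (absolute n-day changes in a single market variable; a production process would cover every risk factor and might use relative changes where appropriate, and the data below are invented):

```python
def historical_scenarios(levels, n=10):
    """Build overlapping n-day changes from a daily history of a market
    variable and apply each change to today's level to form scenario values."""
    changes = [levels[i + n] - levels[i] for i in range(len(levels) - n)]
    today = levels[-1]
    return [today + c for c in changes]

# Toy daily rate history (percent); a real look-back period would be much longer
rates = [2.00, 2.02, 1.98, 1.95, 2.01, 2.05, 2.10, 2.07, 2.03, 2.00, 1.97, 2.04]
print(historical_scenarios(rates, n=10))   # two 10-day scenarios from 12 observations
```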

 

Chapter 3
Calculating Simulated P/L


With the VaR scenarios defined, we move on to computing P/L under these scenarios. Generally, there are two methods employed

  1. A Taylor approximation of P/L for each instrument, sometimes called “delta-gamma”
  2. A full revaluation of each instrument using its market-accepted technique for valuation

Market practitioners sometimes blend these two techniques, employing full revaluation where the valuation technique is simple (e.g. yield + spread) and using delta-gamma where revaluation is more complicated (e.g. OAS simulation on MBS).

 

DELTA-GAMMA P/L APPROXIMATION

Many market practitioners use a Taylor approximation or “delta-gamma” approach to valuing an instrument under each VaR scenario. For instruments whose price function is approximately linear across each of the m risk factors, users tend to use the first-order Taylor approximation, where the instrument price under the kth VaR scenario is given by

Pₖ ≈ P₀ + Σᵢ (∂P/∂xᵢ) Δxᵢ

making the price change in the kth scenario

ΔP ≈ Σᵢ (∂P/∂xᵢ) Δxᵢ

where ΔP is the simulated price change, Δxᵢ is the change in the ith risk factor under the scenario, and ∂P/∂xᵢ is the price delta with respect to the ith risk factor evaluated at the base case. In many cases, these partial derivatives are approximated by bumping the risk factors up/down.6

If the instrument is slightly non-linear, but not non-linear enough to warrant a higher-order approximation, then approximating a first derivative can be a source of error in generating simulated prices. For instruments that are approximately linear, a first-order approximation is typically as good as full revaluation; from a computation standpoint it is marginally faster, but not significantly so. Instruments whose price function is approximately linear also tend to have analytic solutions to their initial price functions (for example, yield-to-price), and these analytic solutions tend to be as fast as a first-order Taylor approximation.

If the instrument is non-linear, practitioners must use a higher-order approximation, which introduces second-order partial derivatives. For an instrument with m risk factors, we can approximate the price change in the kth scenario using the multivariate second-order Taylor approximation

ΔP ≈ Σᵢ (∂P/∂xᵢ) Δxᵢ + ½ ΣᵢΣⱼ (∂²P/∂xᵢ∂xⱼ) Δxᵢ Δxⱼ

To simplify the application of the second-order Taylor approximation, practitioners tend to ignore many of the cross-partial terms. For example, in valuing MBS under delta-gamma, practitioners tend to simplify the approximation by using the first derivatives and a single “convexity” term, which is the second derivative of price with respect to overall rates. Using this short-cut raises a number of issues:

  1. It assumes that the cross-partials have little impact. For many structured products, this is not true.7
  2. Since deltas on structured products are calculated using finite shifts, how exactly does one calculate a second-order mixed partial?8
  3. For structured products, using a single, second-order “convexity” term assumes that the second order term with respect to rates is uniform across the curve and does not vary by where you are on the curve. For complex mortgage products such as mortgage servicing rights, IOs and Inverse IOs, convexity can vary greatly depending on where you look at the curve.

Using a second-order approximation assumes that the second order derivatives are constant as rates change. For MBS, this is not true in general. For example, in the graphs below we show a constant-OAS price curve for TBA FNMA 30yr 3.5%, as well as a graph of its “DV01”, or first derivative with respect to rates. As you can see, the DV01 graph is non-linear, implying that the convexity term (second derivative of the price function) is non-constant, rendering a second-order Taylor approximation a weak assumption. This is especially true for large moves in rate, the kind of moves that dominate the computation of the VaR.9
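To make the approximation error concrete, the following minimal sketch (a hypothetical option-free 10-year bullet bond, not an MBS or a production pricer) compares delta-gamma P/L against full revaluation for progressively larger rate shocks, with deltas and gammas estimated from finite ±25bp bumps as described in the endnotes. Even for this well-behaved instrument the error grows with the size of the move; for negatively convex instruments such as MBS the divergence is far larger.

```python
def price(yld, coupon=0.04, face=100.0, years=10):
    """Full revaluation: discount each cash flow at the flat yield."""
    cfs = [coupon * face] * (years - 1) + [face * (1 + coupon)]
    return sum(cf / (1 + yld) ** t for t, cf in enumerate(cfs, start=1))

y0, bump = 0.04, 0.0025
p0 = price(y0)
delta = (price(y0 + bump) - price(y0 - bump)) / (2 * bump)           # ~ dP/dy (secant)
gamma = (price(y0 + bump) - 2 * p0 + price(y0 - bump)) / bump ** 2   # ~ d2P/dy2

for dy in (0.0025, 0.01, 0.03):   # 25bp, 100bp, 300bp upward rate shocks
    full = price(y0 + dy) - p0                    # full-revaluation P/L
    dg = delta * dy + 0.5 * gamma * dy ** 2       # delta-gamma approximation
    print(f"shock {dy:.2%}: full {full:8.3f}  delta-gamma {dg:8.3f}  error {dg - full:7.3f}")
```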

In addition to the assumptions above, we occasionally observe that commercial VaR providers compute 1-day VaR and, in the interest of computational savings, scale this 1-day VaR by √10 to generate 10-day VaR. This approximation only works if

  1. Changes in risk factors are all independently, identically distributed (no autocorrelation or heteroscedasticity)
  2. The asset price function is linear in all risk factors

In general, neither of these conditions holds, and using a scaling factor of √10 will likely yield an incorrect value for 10-day VaR.10
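The toy sketch below (simulated, autocorrelated daily P/L; illustrative only, not a calibration to any real book) shows how the shortcut and a direct 10-day calculation can diverge when the i.i.d. condition fails:

```python
import math, random

def var_from_pl(pl, k=0.99):
    """kth-percentile worst loss, as in the earlier ranking sketch."""
    ranked = sorted(pl)
    return -ranked[max(0, math.ceil((1 - k) * len(ranked)) - 1)]

random.seed(7)
daily_pl, prev = [], 0.0
for _ in range(2000):                       # AR(1)-style daily P/L: violates i.i.d.
    prev = 0.6 * prev + random.gauss(0, 1000)
    daily_pl.append(prev)

ten_day_pl = [sum(daily_pl[i:i + 10]) for i in range(len(daily_pl) - 9)]
print("sqrt(10)-scaled 1-day VaR:", round(math.sqrt(10) * var_from_pl(daily_pl)))
print("direct 10-day VaR:        ", round(var_from_pl(ten_day_pl)))
# With positively autocorrelated P/L, the scaled figure materially understates
# the directly computed 10-day VaR.
```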

 

RATIONALIZING WEAKNESS IN THE APPROXIMATION

With the weaknesses in the Taylor approximation cited above, why do some providers still use delta-gamma VaR? Most practitioners will cite that the Taylor approximation is much faster than full revaluation for complex, non-linear instruments. While this seems true at first glance, you still need to:

  1. Compute or approximate all the first partial derivatives
  2. Compute or approximate some of the second partial derivatives and decide which are relevant or irrelevant. This choice may vary from security type to security type.

Neither of these tasks is computationally simple for the complex, path-dependent securities found in many portfolios. Further, the choice of which second-order terms to ignore has to be supported by documentation to satisfy regulators under the Market Risk Rule.

Even after approximating partials and making multiple, qualitative assessments of which second-order terms to include or exclude, we are still left with error from the Taylor approximation. This error grows with the size of the market move, and large moves tend to be exactly the scenarios that dominate the VaR calculation. With today’s flexible cloud computation and ultra-fast, cheap processing, the Taylor approximation and its computation of partials ends up being only marginally faster than a full revaluation for complex instruments.11

With the weaknesses in Taylor approximation, especially with non-linear instruments, and the speed and cheapness of full revaluation, we believe that fully revaluing each instrument in each scenario is both more accurate and more straightforward than having to defend a raft of assumptions around the Taylor approximation.

Chapter 4
Conclusion


With these points in mind, what is the best method for computing VaR? Considering the complexity of many instruments, and considering the comparatively cheap and fast computation available through today’s cloud computing, we believe that calculating VaR using a historical-scenario, full revaluation approach provides the most accurate and robust VaR framework.

From a scenario generation standpoint, using historical scenarios allows risk factors to evolve in a natural way. This in turn captures actual changes in risk factor correlations, changes which can be especially acute in large market moves. In contrast, a Monte Carlo simulation of scenarios typically allows users to “stress” correlations, but these stressed scenarios are arbitrary and may ultimately lead to misstated risk.

From a valuation standpoint, we feel that full revaluation of assets provides the most accurate representation of risk, especially for complex instruments such as ABS and MBS securities. The assumptions and errors introduced in the Taylor approximation may overwhelm any minor savings in run-time, given today’s powerful and cheap cloud analytics. Further, the Taylor approximation forces users to make and defend qualitative judgements about which partial derivatives to include and which to ignore. This greatly increases the management burden around VaR, as well as the regulatory scrutiny around justifying these assumptions.

In short, we believe that a historical scenario, full-revaluation VaR provides the most accurate representation of VaR, and that today’s cheap and powerful computing make this approach feasible for most books and trading positions. For VaR, it’s no longer necessary to settle for second-best.

References

ENDNOTES

1 The holding period n is typically one day, ten days, or 21 days (a business-month) although in theory it can be any length period.
 
2 We can also partition the book into different sub-books or “equivalence classes” and compute VaR on each class in the partition. The entire book is the trivial partition.
 
3 There is a third approach to VaR: parametric VaR, where the distributions of asset prices are described by the well-known distributions such as Gaussian. Given the often-observed heavy-tail distributions, combined with difficulties in valuing complex assets with non-linear payoffs, we will ignore parametric VaR in this review.
 
4 The academic literature contains many papers on increased correlation during extreme market moves, for example [5]

5 For example, a bank may have positions in two FX pairs that are poorly correlated in normal times and highly negatively correlated in times of stress. If a 99th-percentile worst move coincides with a stress period, then the aggregate P/L from the two positions may offset each other. If we used the overall correlation to drive a Monte Carlo simulated VaR, the calculated VaR could be much higher.

6 This is especially common in MBS, where the first and second derivatives are computed using a secant-line approximation after shifting risk factors, such as shifting rates ± 25bp

7 For example, as rates fall and a mortgage becomes more refinancible, the mortgage’s exposure to implied volatility also increases, implying that the cross-partial for price with respect to rates and vol is non-zero.

8 Further, since we are using finite shifts, the typical assumption that ƒ_xy = ƒ_yx, which is based on the smoothness of ƒ(x,y), does not necessarily hold. Therefore, we need to compute two sets of cross-partials, further increasing the initial setup time.

9 Why is the second derivative non-constant? As rates move significantly, prepayments stop rising or falling. At these “endpoints,” cash flows on the mortgage change little, making the instrument positively convex, like a fixed-amortization-schedule bond. In between, changes in prepayments cause the mortgage to extend or shorten as rates rise or fall, respectively, which in turn makes the MBS negatively convex.

10 Much has been written on the weakness of this scaling, see for example [7]

11 For example, using a flexible computation grid RiskSpan can perform a full OAS revaluation on 20,000 MBS passthroughs using a 250-day lookback period in under one hour. Lattice-solved options are an order of magnitude faster, and analytic instruments such as forwards, European options, futures and FX are even faster.
