Articles Tagged with: Credit Analytics

RiskSpan VQI: Current Underwriting Standards Q3 2020

RiskSpan’s Vintage Quality Index, which had declined sharply in the first half of the year, leveled off somewhat in the third quarter, falling just 2.8 points between June and September, in contrast to its 12-point drop in Q2.

This change, which reflects a relative slowdown in the tightening of underwriting standards, signals something of a return to stability in the Agency origination market.

Driven by a drop in cash-out refinances (down 2.3% in the quarter), the VQI’s gradual decline left the standard credit-related risk attributes (FICO, LTV, and DTI) largely unchanged.

The share of High-LTV loans (loans with loan-to-value ratios over 80%), which fell 1.3% in Q3, has fallen dramatically over the last year, 11.7% in total. More than half of this drop (6.1%) occurred before the start of the COVID-19 crisis. This suggests that, even though the Q3 VQI reflects tightening underwriting standards, the stability of the credit-related components, coupled with huge volumes from the GSEs, reflects a measure of stability in credit availability.

Risk Layers – September 20 – All Issued Loans By Count


Analytical And Data Assumptions

Population assumptions:

  • Monthly data for Fannie Mae and Freddie Mac.

  • Loans originated more than three months prior to issuance are excluded because the index is meant to reflect current market conditions.

  • Loans likely to have been originated through the HARP program, as identified by LTV, MI coverage percentage, and loan purpose, are also excluded. These loans do not represent credit availability in the market, as they likely would not have been originated today but for the existence of HARP.

Data assumptions:

  • Freddie Mac data goes back to 12/2005; Fannie Mae data goes back only to 12/2014.

  • Certain fields in the Freddie Mac data are missing prior to 6/2008.

The GSE historical loan performance data released in support of GSE risk transfer activities was used to help back-fill missing data.

An outline of our approach to data imputation can be found in our VQI Blog Post from October 28, 2015.                                                


Consistent & Transparent Forbearance Reporting Needed in the PLS Market

There is justified concern within the investor community regarding the residential mortgage loans currently in forbearance and their ultimate resolution. Although most of the 4 million loans in forbearance are in securities backed by Fannie Mae, Freddie Mac, or Ginnie Mae, approximately 400,000 loans currently in forbearance represent collateral that backs private-label residential mortgage-backed securities (PLS). The PLS market operates without clear, articulated standards for forbearance programs and lacks the reporting practices that exist in Agency markets. This leads to disparate practices for granting forbearance to borrowers and a broad range of investor reporting by different servicers. COVID-19 has highlighted the need for transparent, consistent reporting of forbearance data to investors to support a more efficient PLS market.

Inconsistent investor reporting leaves too much open to interpretation. It creates investor angst and makes it harder to understand the credit risk associated with the underlying mortgage loans. RiskSpan performed an analysis of 2,542 PLS deals (U.S. only) for which loan-level forbearance metrics are available. The data shows that approximately 78% of loans reported to be in forbearance back deals issued between 2005 and 2008 (“Legacy Bonds”). As you would expect, new-issue PLS has a smaller percentage of loans reported to be in forbearance.

% of Total Forbearance UPB

Not all loans in forbearance will perform the same, and it is critical for investors to receive transparent reporting of the underlying collateral in forbearance within their PLS portfolios. These are uncharted times: unlike historical observations of borrowers requesting forbearance, many loans presently in forbearance are still current on their mortgage payments. In these cases, borrowers have elected to join a forbearance program in case they need it at some future point. Improved forbearance reporting will help investors better understand whether borrowers will eventually need to defer payments, modify loan terms, or default, leading to foreclosure or sale of the property.

In practice, servicers have followed GSE guidance when conducting forbearance reviews and approvals. However, without specific guidance, servicers are working with inconsistent policies and procedures developed company by company to support the COVID forbearance process. For example, borrowers can be forborne for 12 months under FHFA guidance. Some servicers have elected to take a more conservative approach, providing forbearance in 3-month increments with extensions possible once a borrower confirms they remain financially impacted by the COVID pandemic.

Servicers have the data that investors want to analyze. Inconsistent practices in the reporting of COVID forbearances by servicers and trustees have resulted in forbearance data being unavailable for certain transactions. This means investors are not able to get a clear picture of the financial health of borrowers in those transactions. In some cases, trustees are not reporting forbearance information to investors at all, which makes it nearly impossible to obtain a reliable credit assessment of the underlying collateral.

The PLS market has attempted to identify best practices for monthly loan-level reporting to properly assess the risk of loans where forbearance has been granted. Unfortunately, the current market crisis has highlighted that not all market participants have adopted these best practices and that issuers and servicers see no clear advantage in providing clear, transparent forbearance reporting. At a minimum, RiskSpan recommends that servicers report the following forbearance data elements for PLS transactions (a simple data-structure sketch follows the list):

  • Last Payment Date: The last contractual payment date for a loan (i.e., the loan’s “paid-through date”).
  • Loss Mitigation Type: A code indicating the type of loss mitigation the servicer is pursuing with the borrower, loan, or property.
  • Forbearance Plan Start Date: The start date when either a) no payment or b) a payment amount less than the contractual obligation has been granted to the borrower.
  • Forbearance Plan Scheduled End Date: The date on which a Forbearance Plan is scheduled to end.
  • Forbearance Exit – Reason Code: The reason provided by the borrower for exiting a forbearance plan.
  • Forbearance Extension Requested: Flag indicating the borrower has requested one or more forbearance extensions.
  • Repayment Plan Start Date: The start date for when a borrower has agreed to make monthly mortgage payments greater than the contractual installment in an effort to repay amounts due during a Forbearance Plan.
  • Repayment Plan Scheduled End Date: The date at which a Repayment Plan is scheduled to end.
  • Repayment Plan Violation Date: The date when the borrower ceased complying with the terms of a defined repayment plan.
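
To make the recommendation concrete, below is a minimal sketch of how these nine elements might be captured in a loan-level record. The class and field names are illustrative only; they do not represent an adopted industry standard or a RiskSpan specification.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ForbearanceRecord:
    """Illustrative loan-level forbearance reporting record.

    Mirrors the recommended data elements above; all names are
    hypothetical, not an adopted industry standard.
    """
    loan_id: str
    last_payment_date: date                        # the loan's "paid-through date"
    loss_mitigation_type: Optional[str] = None     # e.g., "FORBEARANCE", "REPAYMENT_PLAN"
    forbearance_plan_start_date: Optional[date] = None
    forbearance_plan_scheduled_end_date: Optional[date] = None
    forbearance_exit_reason_code: Optional[str] = None
    forbearance_extension_requested: bool = False
    repayment_plan_start_date: Optional[date] = None
    repayment_plan_scheduled_end_date: Optional[date] = None
    repayment_plan_violation_date: Optional[date] = None
```

A standardized record along these lines, delivered monthly by servicers, would let investors aggregate forbearance status consistently across deals and trustees.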

The COVID pandemic has highlighted monthly reporting weaknesses by servicers in PLS transactions. Based on investor discussions, additional information is needed to accurately assess the financial health of the underlying collateral. Market participants should take the lessons learned from the current crisis to re-examine prior attempts to define monthly reporting best practices. This includes working with industry groups and regulators to implement consistent, transparent reporting policies and procedures that provide investors with improved forbearance data.


The Why and How of a Successful SAS-to-Python Model Migration

A growing number of financial institutions are migrating their modeling codebases from SAS to Python. There are many reasons for this, some of which may be unique to the organization in question, but many apply universally. Because of our familiarity not only with both coding languages but with the financial models they power, my colleagues and I have had occasion to help several clients with this transition.

Here are some things we’ve learned from this experience and what we believe is driving this change.

Python Popularity

The popularity of Python has skyrocketed in recent years. Its intuitive syntax and a wide array of packages available to aid in development make it one of the most user-friendly programming languages in use today. This accessibility allows users who may not have a coding background to use Python as a gateway into the world of software development and expand their toolbox of professional qualifications.

Companies appreciate this as well. As an open-source language with tons of resources and low overhead costs, Python is also attractive from an expense perspective. A cost-conscious option that resonates with developers and analysts is a win-win when deciding on a codebase.

Note: R is another popular and powerful open-source language for data analytics. Unlike R, however, which is specifically used for statistical analysis, Python can be used for a wider range of uses, including UI design, web development, business applications, and others. This flexibility makes Python attractive to companies seeking synchronicity — the ability for developers to transition seamlessly among teams. R remains popular in academic circles where a powerful, easy-to-understand tool is needed to perform statistical analysis, but additional flexibility is not necessarily required. Hence, we are limiting our discussion here to Python.

Python is not without its drawbacks. As an open-source language, less oversight governs newly added features and packages. Consequently, while updates may be quicker, they are also more prone to error than SAS’s, which are always thoroughly tested prior to release.

Visualization Capabilities

While both codebases support data visualization, Python’s packages are generally viewed more favorably than SAS’s, which tend to be on the more basic side. More advanced visuals are available from SAS, but they require the SAS Visual Analytics platform, which comes at an added cost.

Python’s popular visualization packages — matplotlib, plotly, and seaborn, among others — can be leveraged to create powerful and detailed visualizations by simply importing the libraries into the existing codebase.
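
As a quick illustration of how little code this requires, the sketch below plots a FICO distribution by delinquency status with seaborn. The dataset is invented for the example; only the package imports and a single plotting call are needed.

```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Hypothetical loan-level data: FICO scores and a delinquency flag.
loans = pd.DataFrame({
    "fico": [620, 660, 700, 740, 780, 640, 710, 755],
    "delinquent": [1, 1, 0, 0, 0, 1, 0, 0],
})

# One call produces a layered density plot of FICO by delinquency status.
sns.kdeplot(data=loans, x="fico", hue="delinquent", fill=True)
plt.title("FICO Distribution by Delinquency Status")
plt.show()
```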

Accessibility

SAS is a command-driven software package used for statistical analysis and data visualization. Though a proprietary, licensed product, it remains one of the most widely used statistical software packages in both industry and academia.

It’s not hard to see why. For financial institutions with large amounts of data, SAS has been an extremely valuable tool. It is a well-documented language with many online resources, and it is relatively intuitive to pick up and understand, especially for users with prior experience with SQL. SAS is also one of the few tools with a customer support line.

SAS, however, is a paid service, and at a standalone level, the costs can be quite prohibitive, particularly for smaller companies and start-ups. Complete access to the full breadth of SAS and its supporting tools tends to be available only to larger and more established organizations. These costs are likely fueling its recent drop-off in popularity. New users simply cannot access it as easily as they can Python. While an academic/university version of the software is available free of charge for individual use, its feature set is limited. Therefore, for new users and start-up companies, SAS may not be the best choice, despite being a powerful tool. Additionally, with the expansion and maturity of the variety of packages that Python offers, many of the analytical abilities of Python now rival those of SAS, making it an attractive, cost-effective option even for very large firms.

Future of tech

Many of the expected advances in data analytics and tech generally point toward deep learning, machine learning, and artificial intelligence. These are especially attractive to companies dealing with large amounts of data.

While the technology to analyze data with complete independence is still emerging, Python is better situated to support companies that have begun laying the groundwork for these developments. Python’s rapidly expanding libraries for artificial intelligence and machine learning will likely make future transitions to deep learning algorithms more seamless.

While SAS has made some strides toward adding machine learning and deep learning functionalities to its repertoire, Python remains ahead and consistently ranks as the best language for deep learning and machine learning projects. This creates a symbiotic relationship between the language and its users. Developers use Python to develop ML projects since it is currently best suited for the job, which in turn expands Python’s ML capabilities — a cycle which practically cements Python’s position as the best language for future development in the AI sphere.

Overcoming the Challenges of a SAS-to-Python Migration

SAS-to-Python migrations bring a unique set of challenges that need to be considered. These include the following.

Memory overhead

Server space is getting cheaper but it’s not free. Although Python’s data analytics capabilities rival SAS’s, Python requires more memory overhead. Companies working with extremely large datasets will likely need to factor in the cost of extra server space. These costs are not likely to alter the decision to migrate, but they also should not be overlooked.

The SAS server

All SAS commands are run on SAS’s own server. This tightly controlled ecosystem makes SAS much faster than Python, which does not have the same infrastructure out of the box. Therefore, optimizing Python code can be a significant challenge during SAS-to-Python migrations, particularly when tackling it for the first time.

SAS packages vs Python packages

Calculations performed using SAS packages vs. Python packages can result in differences, which, while generally minuscule, cannot always be ignored. Depending on the type of data, this can pose an issue. And getting an exact match between values calculated in SAS and values calculated in Python may be difficult.

For example, a computation in SAS that should return exactly 0 can instead leave a floating-point residue on the order of 3.552714E-15, while in Python the float 0.1 is actually stored as 3602879701896397/2^55. These values do not create noticeable differences in most calculations. But some financial models demand more precision than others. And over the course of multiple calculations that build upon each other, they can create differences in fractional values. These differences must be reconciled and accounted for.
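
One practical mitigation during a migration (offered here as a sketch, not a prescription) is to reconcile SAS and Python outputs using tolerance-based comparisons rather than exact equality:

```python
import math

# Stand-ins for the same quantity computed in SAS and in Python.
sas_value = 0.1 + 0.2     # carries a tiny binary floating-point residue
python_value = 0.3

print(sas_value == python_value)                  # False: exact equality fails
print(math.isclose(sas_value, python_value,
                   rel_tol=1e-9, abs_tol=1e-12))  # True: matches within tolerance
```

The appropriate tolerances depend on the model; a valuation model that compounds small differences across thousands of iterations may need tighter thresholds and periodic reconciliation checkpoints.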

Comparing large datasets

One of the most common functions when working with large datasets involves evaluating how they change over time. SAS has a built-in procedure (PROC COMPARE) which compares datasets swiftly and easily. Python has packages for this as well; however, these packages are not as robust as their SAS counterparts.
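
For example, pandas (1.1 and later) provides DataFrame.compare, which reports only the cells that differ between two identically labeled datasets. It is a rough analogue of PROC COMPARE, though less feature-rich. The toy loan tape below is invented for illustration.

```python
import pandas as pd

# Two snapshots of the same hypothetical loan tape.
old = pd.DataFrame({"loan_id": [1, 2, 3],
                    "balance": [100.0, 250.0, 300.0]}).set_index("loan_id")
new = pd.DataFrame({"loan_id": [1, 2, 3],
                    "balance": [100.0, 240.0, 300.0]}).set_index("loan_id")

# Prints only the changed cells: loan 2's balance, 250.0 ("self") vs 240.0 ("other").
print(old.compare(new))
```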

Conclusion

In most cases, the benefits of migrating from SAS to Python outweigh the challenges associated with going through the process. The envisioned savings can sometimes be attractive enough to cause firms to trivialize the transition costs. This should be avoided. A successful migration requires taking full account of the obstacles and making plans to mitigate them. Involving the right people from the outset — analysts well versed in both languages who have encountered and worked through the pitfalls — is key.


August 12 Webinar: Good Models, Bad Scenarios? Delinquency, Forbearance, and COVID

Recorded: August 12th | 1:00 p.m. EDT

Business-as-usual macroeconomic scenarios that seemed sensible a few months ago are now obviously incorrect. Off-the-shelf models likely need enhancements. How can institutions adapt?

Credit modelers don’t need to predict the future. They just need to forecast how borrowers are likely to respond to changing economic conditions. This requires robust datasets and insightful scenario building.

Let our panel of experts walk you through how they approach scenario building, including:

  • How mortgage delinquencies have traditionally tracked unemployment and how these assumptions may need to be altered when unemployment is concentrated in non-homeowning population segments.
  • The likely impacts of home purchases and HPI on credit performance.
  • Techniques for translating macroeconomic scenarios into prepayment and default vectors.


Featured Speakers

Shelley Klein


VP of Loss Forecast and Allowance, Fannie Mae

Janet Jozwik


Managing Director, RiskSpan


Suhrud Dagli

Co-founder and CIO, RiskSpan

Michael Neal


Senior Research Associate, The Urban Institute


COVID-19 and the Cloud

COVID-19 creates a need for analytics in real time

Regarding the COVID-19 pandemic, Warren Buffett has observed that “we haven’t faced anything that quite resembles this problem” and that the fallout is “still hard to evaluate.”

The pandemic has created an unprecedented shock to economies and asset performance. The recent unemployment data, although encouraging, has only added to the uncertainty. Furthermore, impact and recovery are uneven, often varying considerably from county to county and city to city. Consider:

  1. COVID-19 cases and fatalities were initially concentrated in just a few cities and counties resulting in almost a total shutdown of these regions. 
  2. Certain sectors, such as travel and leisure, have been hit harder than others, while sectors such as oil and gas face additional issues. Regions with exposure to these sectors have higher unemployment rates even with fewer COVID-19 cases.
  3. Timing of reopening and recoveries has also varied due to regional and political factors. 

Regional employment, business activity, consumer spending and several other macro factors are changing in real time. This information is available through several non-traditional data sources. 

Legacy models are not working, and several known correlations are broken. 

Determining value and risk in this environment is requiring unprecedented quantities of analytics and on-demand computational bandwidth. 

COVID-19 in the Cloud

Need for on-demand computation and storage across the organization 

“I don’t need a hard disk in my computer if I can get to the server faster… carrying around these non-connected computers is byzantine by comparison.” ~ Steve Jobs


Front office, risk management, quants, and model risk management – every aspect of the analytics ecosystem requires the ability to run large numbers of scenarios quickly.

Portfolio managers need to recalibrate asset valuation, manage hedges, and answer questions from senior management, all while looking for opportunities to find cheap assets. Risk managers are working closely with quants and portfolio managers to better understand the impact of this unprecedented environment on assets. Quants must not only support existing risk and valuation processes but also be able to run new estimations and explain model behavior as data streams in from a variety of sources.

These activities require several processors and large storage units to be stood up on demand. Even in normal times, infrastructure teams require at least 10 to 12 weeks to procure and deploy additional hardware. With most of the financial services world now working remotely, this time lag is further exacerbated.

No individual firm maintains enough excess capacity to accommodate such a large and urgent need for data and computation. 

The work-from-home model has proven that we have sufficient internet bandwidth to enable the fast access required to host and use data on the cloud. 

Cloud is about how you do computing

“Cloud is about how you do computing, not where you do computing.” ~ Paul Maritz, CEO of VMware 


Cloud computing is now part of everyday vocabulary and powers even the most common consumer devices. However, financial services firms are still in early stages of evaluating and transitioning to a cloud-based computing environment. 

Cloud is the only way to procure the level of surge capacity required today. At RiskSpan we are computing an average of half a million additional scenarios per client on demand. Users don’t have the luxury of waiting for an overnight batch process to react to changing market conditions. End users fire off a new scenario assuming that the hardware will scale up automagically.

When searching Google’s large dataset or using Salesforce to run analytics we expect the hardware scaling to be limitless. Unfortunately, valuation and risk management software are typically built to run on a pre-defined hardware configuration.  

Cloud native applications, in contrast, are designed and built to leverage the on-demand scaling of a cloud platform. Valuation and risk management products offered as SaaS scale on-demand, managing the integration with cloud platforms. 

Financial services firms don’t need to take on the burden of rewriting their software to work on the cloud. Platforms such as RS Edge enable clients to plug their existing data, assumptions, and models into a cloud-native platform. This enables them to get all the analytics they’ve always had—just faster and cheaper.

Serverless access can also help companies provide access to their quant groups without incurring additional IT resource expense. 

A recent survey from Flexera shows that 30% of enterprises have increased their cloud usage significantly due to COVID-19.


Cloud is cost effective 

“In 2000, when my partner Ben Horowitz was CEO of the first cloud computing company, Loudcloud, the cost of a customer running a basic Internet application was approximately $150,000 a month.” ~ Marc Andreessen, Co-founder of Netscape, Board Member of Facebook


Cloud hardware is cost effective, primarily due to the on-demand nature of the pricing model. A $250B asset manager uses RS Edge to run millions of scenarios during a 45-minute period every day. The analysis is performed across a thousand servers at a cost of $500 per month. The same hardware, if deployed for 24 hours a day, would cost $27,000 per month.

Cloud is not free and can be a two-edged sword. The same on-demand aspect that enables end users to spin up servers as needed can, if not monitored, cause the cost of those servers to accumulate to undesirable levels. One of the benefits of a cloud-native platform is built-in procedures to drop unused servers, which minimizes the risk of paying for unused bandwidth.

And yes, Mr. Andreessen’s basic application can be hosted today for less than $100 per month.

The same survey from Flexera shows that organizations plan to increase public cloud spending by 47% over the next 12 months. 


Alternate data analysis

“The temptation to form premature theories upon insufficient data is the bane of our profession.” ~ Sir Arthur Conan Doyle, Sherlock Holmes.


Alternate data sources are not always easily accessible and available within analytic applications. The effort and time required to integrate them can be wasted if the usefulness of the information cannot be determined upfront. Timing of analyzing and applying the data is key. 

Machine learning techniques offer quick and robust ways of analyzing data. Tools to run these algorithms are not readily available on a desktop computer.  

Every major cloud platform provides a wealth of tools, algorithms and pre-trained models to integrate and analyze large and messy alternate datasets. 

Join fintova’s Gary Maier and me at 1 p.m. EDT on June 24th as we discuss other important factors to consider when performing analytics in the cloud. Register now.


Chart of the Month: Tracking Mortgage Delinquency Against Non-traditional Economic Indicators by MSA


Traditional economic indicators lack the timeliness and regional granularity necessary to track the impact of the COVID-19 pandemic on communities across the country. Unemployment reports published by the Bureau of Labor Statistics, for example, tend to have latency issues and don’t cover all workers. As regional economies attempt to get back to a new “normal,” RiskSpan has begun compiling non-traditional “alternative” data that can provide a more granular, real-time view of issues and trends. In past crises, traditional macro indicators such as home price indices and unemployment rates were sufficient to explain the trajectory of consumer credit. In the current crisis, however, mortgage delinquencies are deteriorating more rapidly, with significant regional dispersion. Serious mortgage delinquencies in the New York metro region stood at around 1.1% in April 2009, versus 30-day delinquencies at 9.9% of UPB in April 2020.

STACR loan-level data shows that 30-day delinquencies increased from 0.8% to 4.2% nationwide. In this chart we track the performance and state of employment of five large metros (MSAs).

May Chart of the Month


Indicators included in our Chart of the Month: 

Change in unemployment is the BLS measure computed from unemployment claims. Traditionally, this indicator has been used to measure the economic health of a region. BLS reporting typically lags by weeks or months.

Air quality index is a measure we calculate using PM2.5 levels reported daily by the EPA’s AirNow database. This metric is a proxy for increased vehicular traffic in different regions. Using a nationwide network of monitoring sites, the EPA has developed ambient air quality trends for particle pollution, also called particulate matter (PM). We compute the index as the daily level of PM2.5 versus the average of the last 5 years. For regions that are still under a shutdown, the air quality index should be less than 100 (e.g., New York at 75% vs. Houston at 105%).
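
For readers who want to reproduce the idea, the sketch below computes such an index from hypothetical PM2.5 readings. The numbers are invented, and the calculation is our simplified reading of the description above.

```python
import pandas as pd

# Hypothetical daily PM2.5 readings (micrograms per cubic meter) for one MSA.
daily_pm25 = pd.Series(
    [8.2, 9.1, 10.5],
    index=pd.to_datetime(["2020-05-01", "2020-05-02", "2020-05-03"]),
)

# Illustrative same-period average over the prior five years.
five_year_avg_pm25 = 11.0

# Daily PM2.5 relative to the five-year average, expressed as a percentage.
# Values below 100 suggest vehicular traffic still below normal for the region.
air_quality_index = 100 * daily_pm25 / five_year_avg_pm25
print(air_quality_index.round(1))
```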

Air pollution from traffic has increased in regions where businesses have opened in May ’20 (e.g. LA went up from 69% in April to 98% in May).  However, consumer spending has not always increased at the same level.  We look to proxies for hourly employment levels. 

New Daily COVID-19 Cases: This is a health crisis, and managing the rate of new COVID-19 cases will drive decisions to open or close businesses. The chart reports the monthly peak in new cases using daily data from Opportunity Insights.

Hourly Employment and Hours Worked at small businesses is provided by Opportunity Insights using data from Homebase. Homebase is a company that provides virtual scheduling and time-tracking tools, focused on small businesses in sectors such as retail, restaurant, and leisure/accommodation. The chart shows the change in the level of hourly employment compared to January 2020. We expect this to be a leading indicator of employment levels for this sector of consumers.


Sources of data: 

Freddie Mac’s (STACR) transaction database

Opportunity Insights’ Recovery Tracker

Bureau of Labor Statistics (BLS) MSA-level economic reports

Environmental Protection Agency (EPA) AirNow database


Webinar: MGIC Perspectives on CECL – Presented by MGIC with RiskSpan

Thank you for accessing our webinar.



Webinar: Managing Your Whole Loan Portfolio with Machine Learning


Whole Loan Data Meets Predictive Analytics

  • Ingest whole loan data
  • Normalize data sets
  • Improve data quality
  • Analyze your historical data
  • Improve your predictive analytics 

Learn the Power of Machine Learning

DATA INTAKE — How to leverage machine learning to help streamline whole loan data prep

MANAGE DATA — Innovative ways to manage the differences in large data sets

DATA IMPROVEMENT — Easily clean your data to drive better predictive analytics


About The Hosts

LC Yarnelle

Director – RiskSpan

LC Yarnelle is a Director with experience in financial modeling, business operations, requirements gathering, and process design. At RiskSpan, LC has worked on model validation and business process improvement/documentation projects. He also led the development of one of RiskSpan’s software offerings and has led multiple development projects for clients, utilizing both Waterfall and Agile frameworks. Prior to RiskSpan, LC was an analyst at NVR Mortgage in the secondary marketing group in Reston, VA, where he was responsible for daily pricing as well as ongoing process improvement activities. Before a career move into finance, LC was the director of operations and a minority owner of a small business in Fort Wayne, IN. He holds a BA from Wittenberg University, as well as an MBA from Ohio State University.

Matt Steele

Senior Analyst – RiskSpan




Changes to Loss Models…and How to Validate Them

So you’re updating all your modeling assumptions. Don’t forget about governance.

Modelers have now been grappling with how COVID-19 should affect assumptions and forecasts for nearly two months. This exercise is raising at least as many questions as it is answering.

No credit model (perhaps no model at all) is immune. Among the latest examples are mortgage servicers having to confront how to bring their forbearance and loss models into alignment with new realities.

These new realities are requiring servicers to model unprecedented macroeconomic conditions in a new and changing regulatory environment. The generous mortgage forbearance provisions ushered in by March’s CARES Act are not tantamount to loan forgiveness. But servicers probably shouldn’t count on reimbursement of their forbearance advances until loan liquidation (irrespective of what form the payoff takes).

The ramifications of these costs and how servicers should model them are a central topic to be addressed in a Mortgage Bankers Association webinar on Wednesday, May 13, “Modeling Forbearance Losses in the COVID-19 World” (free for MBA members). RiskSpan CEO Bernadette Kogler will lead a panel consisting of Faith Schwartz, Suhrud Dagli, and Morgan Snyder in a discussion of forbearance’s regulatory implications, the limitations of existing models, and best practices for modeling forbearance-related advances, losses, and operational costs.

Models, of course, are only as good as their underlying data and assumptions. When it comes to forbearance modeling, those assumptions obviously have a lot to do with unemployment, but also with the forbearance take-up rate layered on top of more conventional assumptions around rates of delinquency, cures, modifications, and bankruptcies.

The unique nature of this crisis requires modelers to expand their horizons in search of applicable data. For example, GSE data showing how delinquencies trend in rising unemployment scenarios might need to be supplemented by data from Greek or other European crises to better simulate extraordinarily high unemployment rates. Expense and liquidation timing assumptions will likely require looking at GSE and private-label data from the 2008 crisis. Having reliable assumptions around these is critically important because liquidity issues associated with servicing advances are often more an issue of timing than of anything else.

Model adjustments of the magnitude necessary to align them with current conditions almost certainly qualify as “material changes” and present a unique set of challenges to model validators. In addition to confronting an expanded workload brought on by having to re-validate models that might have been validated as recently as a few months ago, validators must also effectively challenge the new assumptions themselves. This will likely prove challenging absent historical context.

RiskSpan’s David Andrukonis will address many of these challenges—particularly as they relate to CECL modeling—as he participates in a free webinar, “Model Risk Management and the Impacts of COVID-19,” sponsored by the Risk Management Association. Perhaps fittingly, this webinar will run concurrent with the MBA webinar discussed above.

As is always the case, the smoothness of these model-change validations will depend on the lengths to which modelers are willing to go to thoroughly document their justifications for the new assumptions. This becomes particularly important when introducing assumptions that significantly differ from those that have been used previously. While it will not be difficult to defend the need for changes, justifying the individual changes themselves will prove more challenging. To this end, meticulously documenting every step of feature selection during the modeling process is critical not only in getting to a reliable model but also in ensuring an efficient validation process.

Documenting what they’re doing and why they’re doing it is no modeler’s favorite part of the job—particularly when operating in crisis mode and just trying to stand up a workable solution as quickly as possible. But applying assumptions that have never been used before always attracts increased scrutiny. Modelers will need to get into the habit of memorializing not only the decisions made regarding data and assumptions, but also the other options considered and why those options were ultimately passed over.

Documenting this decision-making process is far easier at the time it happens, while the details are fresh in a modeler’s mind, than several months down the road when people inevitably start probing.

Invest in the “ounce of prevention” now. You’ll thank yourself when model validation comes knocking.


Estimating Credit Losses in the COVID-19 Pandemic

In the years of calm economic expansion before CECL adoption, institutions carefully tuned the macroeconomic forecasting approaches and macro-conditioned credit models they must defend under the new standard. Now, seemingly an hour before public entities are to record (and support) their first macro-conditioned credit losses, a disease with no cure and no vaccine sweeps the globe and darkens whole sectors of the economy. Truth is stranger than fiction. 

Institutions Need New Scenarios and Model Adjustments, and Fast 

Institutions must overhaul their projection capabilities to withstand audit scrutiny, and they must do so with only nominal relief in CECL deadlines.

Faced with this unprecedented crisis, many institutions will need to find new sources of macroeconomic scenarios. Business-as-usual scenarios that seemed sensible a few months ago now appear wildly optimistic. 

Credit and prepay models built on data prior to February 2020 – the models that institutions have spent so much time and effort validating – must now be rethought entirely. 

Institutions may not have a great deal of time to make the necessary adjustments. While the Coronavirus Aid, Relief, and Economic Security (CARES) Act (in Section 4014, Optional Temporary Relief from Current Expected Credit Losses) allows banks and credit unions a brief delay in CECL adoption, RiskSpan’s public bank clients are adopting as scheduled. One reason is the short and uncertain length of the delay, which expires either on 12/31/2020 or when the national coronavirus emergency is declared over, whichever comes first. The national emergency could be declared over at any time, and indeed we hope it does end soon. Another reason is that, as Grant Thornton has noted, eligible entities that defer adoption will need to retrospectively restate their year-to-date results when they adopt ASU 2016-13. Ultimately, the “relief” is anti-relief.

The revised CECL approaches that institutions race into production will need to withstand the inspection not only of the normal sets of eyes but of many other senior stakeholders as well. Scrutiny of credit accounting will be more intense than ever in light of COVID-19.

Finally, to converge on a new macroeconomic scenario and model adjustments, institutions will be prompted by auditors and senior management to run their portfolio many times under many different combinations of approaches. As you can imagine, the volume of runs hitting RiskSpan’s Allowance Suite has spiked this month, with institutions running many different scenarios, and institutions with available-for-sale bond portfolios sending more impaired bonds than anticipated. The physics of pulling off so many runs in such a short time are impossible for teams and systems not set up for that scale. 

How RiskSpan is Helping Institutions Overcome These Challenges 

RiskSpan helps clients solve the new credit accounting rules for loans, held-to-maturity debt securities, and available-for-sale debt securities. As we all navigate these unique and evolving times, let us share how we incorporate the impact of COVID-19 into the allowances we generate. The toolbox includes new macroeconomic scenarios that reflect COVID-19, adjustments to credit and prepay models, an ability to selectively bypass models with user-defined scenarios, and even – sparingly – support for the dreaded “Q-factor” or management qualitative adjustment. 

COVID-19-Driven Macroeconomic Scenarios

RiskSpan partners with S&P Global Market Intelligence (“Market Intelligence”), employing their CECL model within our Allowance Suite. Each quarter, we apply a new macroeconomic forecast from the S&P Global Ratings team of economists (“S&P Global Economics”). We feed this forecast to all credit models in our platform to influence projections of default and severity and, ultimately, allowance. S&P Global Economics’ recent research has focused significantly on coronavirus, including the global and US economic impact of containment and mitigation measures and the recovery timeline. When the credit models take in this bearish outlook for the 3/31/2020 runs, they will return higher defaults and severities compared to prior quarters when the macroeconomic forecast was benign, which in turn will drive higher allowances. Auditors, examiners, and investors will rightly expect to see this.

MODEL ENHANCEMENTS AND TUNINGS 

RiskSpan advises clients to apply model enhancements or adjustments as follows:

C&I loans, Corporate Bonds, Bank Loans, CLOs 

Commercial and industrial (C&I) loans often carry internal risk ratings that are ‘through-the-cycle’ evaluations of default risk and highly independent of cyclical changes in creditworthiness. Corporate bonds carry public credit ratings that are designed to represent forward-looking opinions about the creditworthiness of issuers and obligations, known to be relatively stable throughout a cycle. (Note: higher ratings have been consistently more stable than lower ratings.)

During upswings (downturns), an obligor’s point-in-time or short-term default rates will fall (rise) as the economic environment changes, and credit expectations may be better (worse) than implied by stable credit indicators and associated long-term default frequencies.  

To appropriately reflect the impact of COVID-19 on allowances, most of our clients are now applying industry-specific Point-in-Time (PIT) adjustments, based on Market Intelligence’s Probability of Default Model Market Signals (PDMS). These PIT signals, which use recent, industry-specific trading activity, are used as a guide to form limited adjustments to stable (or in some cases lagging) internal risk ratings of commercial loans and the current credit ratings of corporate bonds. (Adjustments are for the purpose of the CECL model only.)  

Because these adjusted risk ratings are key inputs to Market Intelligence’s CECL Model that we apply to these asset classes, Market Intelligence’s PDMS can influence allowances. Since economic conditions impact certain industry sectors (e.g., airlines, oil and gas, retail) in different ways, the industry-specific notches tend to vary by industry – some positive, some neutral, some negative. Consequently, in a diversified portfolio, we would not ordinarily expect a directional bias to the overall allowance, even though the allowance by industry will be refined. But this assumes a normal economic climate. During a major market downturn like we experienced in the runup to March 31, 2020, notches were negative across almost all industries, and we saw higher allowances as a result. Given the environment, this result is to be expected.  

Resi Loans and RMBS 

Even if we forecast the macroeconomy exactly right, the models of how borrowers perform given different macroeconomic patterns were built on prior decades of experience. Some of the macroeconomic twists and turns that this crisis will unleash will take different shapes than the last crisis.  

For example, a model built on the past two decades of data can only extrapolate what borrowers will do if unemployment goes to 20%; the historical data doesn’t contain such a stress. Even if the macroeconomic patterns do resemble prior crises, policy response will be different, and so will borrower behavior. And then after some recovery period, we expect borrower behavior to fall back into its classic grooves. For these reasons, we recommend model tunings that, all else equal, boost or dampen delinquencies, defaults and recoveries during a time-limited recovery window to account for the near-term impacts of COVID-19. We help clients quantify these model tunings by back-testing model projections against experience from recent weeks. 
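
As a purely illustrative sketch of what such a tuning can look like (not RiskSpan’s production methodology), the snippet below boosts a vector of monthly model default rates by a multiplier that linearly phases out over a recovery window. All names and parameter values are placeholders; in practice the multiplier and window would come from the back-testing described above.

```python
import numpy as np

def tuned_default_rates(model_cdr, multiplier=1.5, window_months=12):
    """Apply a time-limited tuning to monthly model default rates.

    The multiplier boosts near-term defaults and linearly phases out
    to 1.0 over `window_months`, after which the untuned model takes
    over. All default values here are illustrative, not calibrated.
    """
    model_cdr = np.asarray(model_cdr, dtype=float)
    taper = np.ones(len(model_cdr))
    w = min(window_months, len(model_cdr))
    taper[:w] = np.linspace(multiplier, 1.0, w)
    return model_cdr * taper

# Example: a flat 0.2% monthly default rate, boosted 1.5x in month 1
# and fading back to the base model by month 12.
print(tuned_default_rates([0.002] * 24)[:4])
```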

In the past month, we have observed slower prepays from housing turnover because social distancing has blocked on-site walkthroughs and therefore home sales. Refinance applications, however, continue to roll in (as expected in this low-rate environment), and the refi market is adopting property inspection waivers and remote notarization to work through the demand. As with the credit model tunings described above, we help clients quantify and apply prepay model tunings that act in the short term and can phase out across the long-term forecast horizon.

ABS 

Conventionally, ABS research departments form expected-case projections for underlying collateral by averaging the historical default, severity, and prepay behavior of the issuer. Because CECL calls for expected-case projections, RiskSpan’s bond research team has applied the same approach to generate ABS collateral projections for clients. 

ABS researchers identify stress scenarios by applying multiples or standard deviations to the historical averages. In the current climate, the expected case is a stress case. Therefore, RiskSpan has refined its methodology to apply our stress scenarios as our expected scenarios in times – like now – when the S&P Global Economics baseline macroeconomic forecasts show stress. 

MODEL OVERRIDES/USER-DEFINED SCENARIOS 

Where clients have their own views of how loans or bonds will perform, we have always empowered them to bypass models and define their own projections of default, severity, and prepayment. 

Resi Loans and RMBS 

For resi loans and RMBS collateral, we have rolled out new “build-a-curve” functionality that allows clients to use our platform to create their own default and severity paths by stipulating drivers such as those below (a toy illustration follows the list):

  • Peak unemployment rate,  
  • How long the peak will last and where unemployment rate will settle, 
  • Share of those unemployed who will roll delinquent, 
  • Length of external forbearance timelines, and 
  • Share of loans that roll delinquent that will eventually default. 
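
To illustrate how such drivers might translate into a path, consider the sketch below. It is emphatically not the platform’s actual build-a-curve logic; every function name, parameter, and value is hypothetical.

```python
import numpy as np

def build_default_path(peak_unemp=0.20, settle_unemp=0.07, peak_months=6,
                       delinq_share=0.5, forbearance_months=12,
                       default_share=0.2, horizon=60):
    """Hypothetical sketch: map user-stipulated drivers to a default path.

    Unemployment holds at its peak, then glides toward a settle level;
    a share of the unemployed roll delinquent; a share of those default
    once forbearance timelines are exhausted.
    """
    months = np.arange(1, horizon + 1)
    unemp = np.where(
        months <= peak_months, peak_unemp,
        np.maximum(settle_unemp, peak_unemp - (months - peak_months) * 0.005),
    )
    delinq = unemp * delinq_share
    defaults = np.where(months > forbearance_months, delinq * default_share, 0.0)
    return defaults

# Defaults switch on only after the 12-month forbearance window lapses.
print(build_default_path()[10:16].round(4))
```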

“Q-Factors” 

At many institutions, we have seen “Q-factors” (“qualitative” management adjustments on top of modeled allowance results) go from all but forbidden before this crisis to all but required during it. This is because the macroeconomic scenarios now being fed into credit models are beyond the data upon which any vendor or proprietary models were built.

For example, the unemployment rate gradually rose to 10% during the Great Recession. Many scenarios institutions are now considering call for the unemployment rate to spike suddenly to 20% or more. Models can only extrapolate (beyond their known sample) to project obligor performance under these scenarios—there is nothing else they can do. But we know that such extrapolations are unlikely to be exactly right. This creates a strong argument for allowing, or even encouraging, management adjustments to model results. We are advising many clients to do just that, drawing on available data from natural disasters.

Throughput 

As important as these quantitative refinements are, performing multiple runs to better understand the range of possible allowance results is equally important to meeting auditor expectations. Whereas before some institutions would use month-end allowances from a month before quarter-end because of tight reporting deadlines, now such institutions are running again at quarter-end, under a very tight timeline, to meet auditor demands for up-to-the-minute analysis. Whereas previously many institutions would run one macroeconomic scenario, now – at the prompting of auditors or their own management – they are running multiple. Institutions that previously did not apply Market Intelligence’s PDMS to their commercial loans and corporate bonds are now running with and without it to evaluate the difference. The dimensionality quickly explodes from one run per quarter to two, ten, or twenty. RiskSpan is happy to offer its platform to clients to support such throughput. 

———————————– 

We will be exploring these topics in greater detail in a webinar on May 28, 2020, at 1:00 p.m. EDT. You can join us by registering here. I can also be reached directly at dandrukonis@riskspan.com.

