
EDGE: GNMA Delinquencies and Non-Bank Servicers

In the past two months, investors have seen outsized buyouts of delinquent loans from GNMA pools, leading to a significant uptick in prepayment speeds. Nearly all of these buyouts were driven by bank servicers, including Wells Fargo, US Bank, Truist, and Chase. GNMA buyout speeds in the July report were the fastest of the two months, with Wells Fargo leading the charge on its seriously delinquent loans. The August report saw lower but still above-normal buyout activity. For September, we expect a further decline in bank buyout speeds, as the 60-day delinquent bucket for banks has declined from 6.6% just prior to the July report to 2.2% today.[1]

During that same time, buyouts from non-banks were nearly non-existent. We note that the roll rate from 60-day delinquent to 90-day delinquent (buyout-eligible) is comparable between banks and non-banks.[2] So buyout-eligible delinquencies for non-banks continue to build. That pipeline, coupled with the fact that non-banks service more than 75% of GNMA’s current balance, presents a substantial risk of future GNMA buyouts.
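
To see how quickly that pipeline can grow, consider a minimal sketch in Python. The roll rate, monthly 60-day delinquent balance, and buyout rate below are hypothetical placeholders, not the cohort figures in this post:

```python
# Hypothetical illustration of how the buyout-eligible (90-day+) pipeline builds
# when 60-day delinquencies keep rolling forward but buyouts remain minimal.
# All inputs are placeholders, not the figures cited in this post.

roll_rate_60_to_90 = 0.75     # share of 60-day DQ UPB that rolls to 90-day+ each month
monthly_60dq_upb = 1_000.0    # $MM of 60-day delinquent UPB feeding the pipeline each month
buyout_rate = 0.02            # share of the eligible pipeline actually bought out each month

pipeline = 0.0
for month in range(1, 7):
    pipeline += monthly_60dq_upb * roll_rate_60_to_90   # newly buyout-eligible UPB
    pipeline *= (1 - buyout_rate)                       # only a trickle of buyouts occurs
    print(f"Month {month}: buyout-eligible pipeline of roughly ${pipeline:,.0f}MM")
```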

As discussed in previous posts, the differential in buyouts between banks and their non-bank counterparts is mainly due to bank servicers being able to warehouse delinquent loans until they reperform, modified or unmodified, or until they can otherwise dispose of the loans. Non-bank servicers typically do not have the balance sheet or funding to perform such buyouts in size. If these large non-bank servicers were to team with entities with access to cheap funding or were to set up funding facilities sponsored by investors, they could start to take advantage of the upside in re-securitization. The profits from securitizing reperforming loans are substantial, so non-bank servicers can afford to share the upside with yield-starved investors in return for access to funding. In this scenario, both parties could engage in a profitable trade.

Where do delinquencies stand for non-bank servicers? In the table below, we summarize the percentage of loans that have missed three or more payments for the top five non-bank servicers, by coupon and vintage.[3] In this table, we show 90-day+ delinquencies, which are already eligible for buyout, as opposed to the 60-day delinquency analysis we performed for banks, where 60-day delinquencies feed the buyout-eligible bucket via a 75% to 80% roll rate from 60-day to 90-day delinquent.

30 yr GN2 Multi-lender pools

In this table, 2017-19 vintage GN2 3.5 through 4.5s show the largest overhang of non-bank delinquencies coupled with the largest percentage of non-bank servicing for the cohort.

We summarize delinquencies for the top five non-bank servicers because they presumably have a better chance at accessing liquidity from capital markets than smaller non-bank servicers. However, we observe significant build-up of 90-day+ delinquency across all non-bank servicers, which currently stands at 7.7% of non-bank UPB, much higher than the 6.6% bank-serviced 60-day delinquency in June.

Within the top five non-bank servicers, Penny Mac tended to have the largest buildup of 90-day+ delinquencies and Quicken tended to have the lowest, but results varied from cohort to cohort.

In the graph below, we show the 90+ delinquency pipeline for all GN2 30yr multi-lender pools.

90+ DQ in GN2 Multi-lender Pools

While we cannot say for certain when (or if) the market will see significant buyout activity from non-bank servicers, seriously delinquent loans continue to build. This overhang of delinquent loans, coupled with the significant profits to be made from securitizing reperforming loans, poses the risk of a significant uptick in involuntary speeds in GN2 multi-lender pools.[4]

If you are interested in seeing variations on this theme, contact us. Using Edge, we can examine any loan characteristic and generate an S-curve, aging curve, or time series.


[1] For this analysis, we focused on the roll rate for loans in 30yr GN2 Multi-lender pools vintage 2010 onward. See RiskSpan for analysis of other GNMA cohorts.

[2] Over the past two months, 77% of bank-serviced loans that were 60-days delinquent rolled to a buyout-eligible delinquency state compared to 75% for non-banks.

[3] This analysis was performed for loans that are securitized in 30yr GN2 multi-lender pools issued 2010 onward. The top five servicers include Lakeview, Penny Mac, Freedom, Quicken, and Nationstar (Mr. Cooper).

[4] Reperforming loans could include modifications or cures without modification. Even with a six-month waiting period for securitizing non-modified reperforming loans, the time-value of borrowing at current rates should prove only a mild hindrance to repurchases given the substantial profits on pooling reperforming loans.


Consistent & Transparent Forbearance Reporting Needed in the PLS Market

There is justified concern within the investor community regarding the residential mortgage loans currently in forbearance and their ultimate resolution. Although most of the 4M loans in forbearance are in securities backed by the Federal Government (Fannie Mae, Freddie Mac or Ginnie Mae), approximately 400,000 loans currently in forbearance represent collateral that backs private-label residential mortgage-backed securities (PLS). The PLS market operates without clear, articulated standards for forbearance programs and lacks the reporting practices that exist in Agency markets. This leads to disparate practices for granting forbearance to borrowers and a broad range of investor reporting by different servicers. COVID-19 has highlighted the need for transparent, consistent reporting of forbearance data to investors to support a more efficient PLS market.

Inconsistent investor reporting leaves too much for interpretation. It creates investor angst while making it harder to understand the credit risk associated with underlying mortgage loans. RiskSpan performed an analysis of 2,542 PLS deals (U.S. only) for which loan-level foreclosure metrics are available. The data shows that approximately 78% of loans reported to be in forbearance were backing deals originated between 2005-2008 (“Legacy Bonds”).  As you would expect, new issue PLS has a smaller percentage of loans reported to be in forbearance.

% total forbearance UPB

Not all loans in forbearance will perform the same, and it is critical for investors to receive transparent reporting on the underlying collateral in forbearance within their PLS portfolios. These are uncharted times and, unlike historic observations of borrowers requesting forbearance, many loans presently in forbearance are still current on their mortgage payments. In these cases, borrowers have elected to join a forbearance program in case they need it at some future point. Improved forbearance reporting will help investors better understand whether borrowers will eventually need to defer payments, modify loan terms, or default, leading to foreclosure or sale of the property.

In practice, servicers have followed GSE guidance when conducting forbearance reviews and approvals. However, without specific guidance, servicers are working with inconsistent policies and procedures developed on a company-by-company basis to support the COVID forbearance process. For example, borrowers can be forborne for 12 months according to FHFA guidance. Some servicers have elected to take a more conservative approach and are providing forbearance in 3-month increments, with extensions possible once a borrower confirms they remain financially impacted by the COVID pandemic.

Servicers have the data that investors want to analyze. Inconsistent practices in the reporting of COVID forbearances by servicers and trustees have resulted in forbearance data being unavailable on certain transactions. This means investors are not able to get a clear picture of the financial health of borrowers in these transactions. In some cases, trustees are not reporting forbearance information to investors, which makes it nearly impossible to obtain a reliable credit assessment of the underlying collateral.

The PLS market has attempted to identify best practices for monthly loan-level reporting to properly assess the risk of loans where forbearance has been granted. Unfortunately, the current market crisis has highlighted that not all market participants have adopted these best practices and that issuers and servicers have little clear incentive to provide transparent forbearance reporting. At a minimum, RiskSpan recommends that the following forbearance data elements be reported by servicers for PLS transactions:

  • Last Payment Date: The last contractual payment date for a loan (i.e., the loan's "paid-through date").
  • Loss Mitigation Type: A code indicating the type of loss mitigation the servicer is pursuing with the borrower, loan, or property.
  • Forbearance Plan Start Date: The start date when either a) no payment or b) a payment amount less than the contractual obligation has been granted to the borrower.
  • Forbearance Plan Scheduled End Date: The date on which a Forbearance Plan is scheduled to end.
  • Forbearance Exit – Reason Code: The reason provided by the borrower for exiting a forbearance plan.
  • Forbearance Extension Requested: Flag indicating the borrower has requested one or more forbearance extensions.
  • Repayment Plan Start Date: The start date for when a borrower has agreed to make monthly mortgage payments greater than the contractual installment in an effort to repay amounts due during a Forbearance Plan.
  • Repayment Plan Scheduled End Date: The date at which a Repayment Plan is scheduled to end.
  • Repayment Plan Violation Date: The date when the borrower ceased complying with the terms of a defined repayment plan.
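
As a rough illustration of what loan-level delivery of these fields might look like, here is a sketch using a simple Python structure. The field names mirror the list above; the types, codes, and sample values are hypothetical:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ForbearanceRecord:
    """Hypothetical loan-level forbearance record mirroring the fields recommended above."""
    loan_id: str
    last_payment_date: date                      # the loan's paid-through date
    loss_mitigation_type: Optional[str] = None   # code for the loss-mitigation path being pursued
    forbearance_plan_start_date: Optional[date] = None
    forbearance_plan_scheduled_end_date: Optional[date] = None
    forbearance_exit_reason_code: Optional[str] = None
    forbearance_extension_requested: bool = False
    repayment_plan_start_date: Optional[date] = None
    repayment_plan_scheduled_end_date: Optional[date] = None
    repayment_plan_violation_date: Optional[date] = None

# Example monthly record a servicer might report for one loan (values are made up)
record = ForbearanceRecord(
    loan_id="0000123",
    last_payment_date=date(2020, 3, 1),
    loss_mitigation_type="FB",                   # hypothetical code for "forbearance plan"
    forbearance_plan_start_date=date(2020, 4, 1),
    forbearance_plan_scheduled_end_date=date(2020, 7, 1),
    forbearance_extension_requested=True,
)
print(record)
```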

The COVID pandemic has highlighted monthly reporting weaknesses by servicers in PLS transactions. Based on investor discussions, additional information is needed to accurately assess the financial health of the underlying collateral. Market participants should take the lessons learned from the current crisis to re-examine prior attempts to define monthly reporting best practices. This includes working with industry groups and regulators to implement consistent, transparent reporting policies and procedures that provide investors with improved forbearance data.


Machine Learning Models: Benefits and Challenges

Having good Prepayment and Credit Models is critical in the analysis of Residential Mortgage-Backed Securities. Prepays and Defaults are the two biggest risk factors that traders, portfolio managers and originators have to deal with. Traditionally, regression-based Behavioral Models have been used to accurately predict human behavior. Since prepayments and defaults are not just complex human decisions but also competing risks, accurately modeling them has been challenging. With the exponential growth in computing power (GPUs, parallel processing), storage (Cloud), “Big Data” (tremendous amount of detailed historical data) and connectivity (high speed internet), Artificial Intelligence (AI) has gained significant importance over the last few years. Machine Learning (ML) is a subset of AI and Deep Learning (DL) is a further subset of ML. The diagram below illustrates this relationship:

AI

Due to the technological advancements mentioned above, ML-based prepayment and credit models are now a reality. They can achieve better predictive power than traditional models and can deal effectively with high dimensionality (more input variables) and non-linear relationships. The major drawback that has kept them from being universally adopted is their "black box" nature, which leads to validation and interpretation issues. Let's do a quick comparison between traditional and ML models:

behavioral models versus machine learning models

There are two ways to train ML models:

  • Supervised Learning  (used for ML Prepay and Credit Models)
    • Regression based
    • Classification based
  • Unsupervised Learning
    • Clustering
    • Association
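
As a toy illustration of the two approaches, the sketch below fits a supervised classifier and an unsupervised clustering model on synthetic data using scikit-learn. The features and labels are made up and stand in loosely for loan attributes and prepay outcomes; this is not an actual prepayment model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # supervised, classification-based
from sklearn.cluster import KMeans                    # unsupervised, clustering

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                         # stand-ins for loan features (e.g., incentive, age, LTV)
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)  # stand-in for a prepay / no-prepay label

# Supervised: learn the mapping from features to the labeled outcome
clf = LogisticRegression().fit(X, y)
print("predicted prepay probability:", clf.predict_proba(X[:1])[0, 1])

# Unsupervised: no labels, just group similar loans together
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```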

Let’s compare the major differences between Supervised and Unsupervised Learning:

Supervised learning versus unsupervised learning

The large amount of loan-level time series data available for RMBS (agency and non-agency) lends itself well to the construction of ML models, and early adopters have reported higher accuracy. Besides the objections mentioned above (black box, lack of control, interpretation), ML models are also susceptible to overfitting (like all other models). Overfitting occurs when a model does very well on the training data but less well on unseen data (a validation set). The model ends up "memorizing" the noise and outliers in the input data and is not able to generalize accurately. The non-parametric and non-linear nature of ML models accentuates this problem. Several techniques have been developed to address it: reducing the complexity of decision trees, expanding the training dataset, adding weak learners, dropouts, regularization, reducing the training time, cross-validation, etc.

The interpretation problem is more challenging because users demand both predictive accuracy and some form of interpretability. Several interpretation methods are currently used, such as PDP (partial dependence plots), ALE (accumulated local effects), PFI (permutation feature importance), and ICE (individual conditional expectation), but each has its shortcomings. Some of the challenges with these interpretability methods are:

  • Isolating Cause and Effect – This is not often possible with supervised ML models since they only exploit associations and do not explicitly model cause/effect relationships.
  • Mistaking Correlation for Dependence – Independent variables have a correlation coefficient of zero, but a zero correlation coefficient may not imply independence. The correlation coefficient only tracks linear correlations, and the non-linear nature of the models makes this difficult.
  • Feature interaction and dependence – An incorrect conclusion can be drawn about a feature's influence on the target when there are interactions and dependencies between features.
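
To make one of the methods above concrete, here is a minimal sketch of permutation feature importance (PFI) using scikit-learn on synthetic data. The feature names are placeholders rather than the inputs to any production prepay or credit model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2_000
X = rng.normal(size=(n, 4))   # placeholder features: incentive, loan age, FICO, LTV
y = (1.5 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# PFI: shuffle one feature at a time on held-out data and measure the drop in score
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(["incentive", "loan_age", "fico", "ltv"], result.importances_mean):
    print(f"{name}: importance of about {imp:.3f}")
```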

While ML-based prepay and credit models offer better predictive accuracy and automatically capture feature interactions and non-linear effects, they are still a few years away from gaining widespread acceptance. A good use for such models at this stage is to run them in conjunction with traditional models, as a benchmark against which to test the traditional models.


Note: Some of the information on this post was obtained from publicly available sources on the internet. The author wishes to thank  Lei Zhao and Du Tang of the modeling group for proofreading this post.


The Why and How of a Successful SAS-to-Python Model Migration

A growing number of financial institutions are migrating their modeling codebases from SAS to Python. There are many reasons for this, some of which may be unique to the organization in question, but many apply universally. Because of our familiarity not only with both coding languages but with the financial models they power, my colleagues and I have had occasion to help several clients with this transition.

Here are some things we’ve learned from this experience and what we believe is driving this change.

Python Popularity

The popularity of Python has skyrocketed in recent years. Its intuitive syntax and a wide array of packages available to aid in development make it one of the most user-friendly programming languages in use today. This accessibility allows users who may not have a coding background to use Python as a gateway into the world of software development and expand their toolbox of professional qualifications.

Companies appreciate this as well. As an open-source language with tons of resources and low overhead costs, Python is also attractive from an expense perspective. A cost-conscious option that resonates with developers and analysts is a win-win when deciding on a codebase.

Note: R is another popular and powerful open-source language for data analytics. Unlike R, however, which is specifically used for statistical analysis, Python can be used for a wider range of uses, including UI design, web development, business applications, and others. This flexibility makes Python attractive to companies seeking synchronicity — the ability for developers to transition seamlessly among teams. R remains popular in academic circles where a powerful, easy-to-understand tool is needed to perform statistical analysis, but additional flexibility is not necessarily required. Hence, we are limiting our discussion here to Python.

Python is not without its drawbacks. As an open-source language, less oversight governs newly added features and packages. Consequently, while updates may be quicker, they are also more prone to error than SAS’s, which are always thoroughly tested prior to release.


Visualization Capabilities

While both codebases support data visualization, Python’s packages are generally viewed more favorably than SAS’s, which tend to be on the more basic side. More advanced visuals are available from SAS, but they require the SAS Visual Analytics platform, which comes at an added cost.

Python’s popular visualization packages — matplotlib, plotly, and seaborn, among others — can be leveraged to create powerful and detailed visualizations by simply importing the libraries into the existing codebase.
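
For instance, a handful of matplotlib calls is enough to chart a prepayment-style time series; the aging-curve data below is invented purely for illustration:

```python
import matplotlib.pyplot as plt
import numpy as np

months = np.arange(1, 25)                            # factor months
cpr = 10 + 15 * np.exp(-((months - 14) ** 2) / 40)   # made-up CPR ramp for illustration

plt.figure(figsize=(8, 4))
plt.plot(months, cpr, marker="o")
plt.xlabel("Loan age (months)")
plt.ylabel("CPR (%)")
plt.title("Illustrative aging curve")
plt.tight_layout()
plt.show()
```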

Accessibility

SAS is a command-driven software package used for statistical analysis and data visualization. Though available only for Windows operating systems, it remains one of the most widely used statistical software packages in both industry and academia.

It’s not hard to see why. For financial institutions with large amounts of data, SAS has been an extremely valuable tool. It is a well-documented language, with many online resources and is relatively intuitive to pick up and understand – especially when users have prior experience with SQL. SAS is also one of the few tools with a customer support line.

SAS, however, is a paid service, and at a standalone level, the costs can be quite prohibitive, particularly for smaller companies and start-ups. Complete access to the full breadth of SAS and its supporting tools tends to be available only to larger and more established organizations. These costs are likely fueling its recent drop-off in popularity. New users simply cannot access it as easily as they can Python. While an academic/university version of the software is available free of charge for individual use, its feature set is limited. Therefore, for new users and start-up companies, SAS may not be the best choice, despite being a powerful tool. Additionally, with the expansion and maturity of the variety of packages that Python offers, many of the analytical abilities of Python now rival those of SAS, making it an attractive, cost-effective option even for very large firms.

Future of tech

Many of the expected advances in data analytics and tech in general are clearly pointing toward deep learning, machine learning, and artificial intelligence in general. These are especially attractive to companies dealing with large amounts of data.

While the technology to analyze data with complete independence is still emerging, Python is better situated to support companies that have begun laying the groundwork for these developments. Python’s rapidly expanding libraries for artificial intelligence and machine learning will likely make future transitions to deep learning algorithms more seamless.

While SAS has made some strides toward adding machine learning and deep learning functionalities to its repertoire, Python remains ahead and consistently ranks as the best language for deep learning and machine learning projects. This creates a symbiotic relationship between the language and its users. Developers use Python to develop ML projects since it is currently best suited for the job, which in turn expands Python’s ML capabilities — a cycle which practically cements Python’s position as the best language for future development in the AI sphere.

Overcoming the Challenges of a SAS-to-Python Migration

SAS-to-Python migrations bring a unique set of challenges that need to be considered. These include the following.

Memory overhead

Server space is getting cheaper but it’s not free. Although Python’s data analytics capabilities rival SAS’s, Python requires more memory overhead. Companies working with extremely large datasets will likely need to factor in the cost of extra server space. These costs are not likely to alter the decision to migrate, but they also should not be overlooked.

The SAS server

All SAS commands are run on SAS’s own server. This tightly controlled ecosystem makes SAS much faster than Python, which does not have the same infrastructure out of the box. Therefore, optimizing Python code can be a significant challenge during SAS-to-Python migrations, particularly when tackling it for the first time.

SAS packages vs Python packages

Calculations performed using SAS packages vs. Python packages can result in differences, which, while generally minuscule, cannot always be ignored. Depending on the type of data, this can pose an issue. And getting an exact match between values calculated in SAS and values calculated in Python may be difficult.

For example, the true value of "0" as a float datatype in SAS is approximated to 3.552714E-15, while in Python float "0" is approximated to 3602879701896397/2^55. These values do not create noticeable differences in most calculations. But some financial models demand more precision than others. And over the course of multiple calculations which build upon each other, they can create differences in fractional values. These differences must be reconciled and accounted for.
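
In practice, these reconciliations are usually handled with explicit tolerances rather than exact equality. A minimal numpy sketch is below; the sample values and tolerance levels are arbitrary and should be set to the precision the model actually requires:

```python
import numpy as np

sas_values = np.array([0.1, 125.33, 0.0])                 # e.g., values exported from SAS
python_values = np.array([0.1, 125.33 + 1e-13, 3.5e-15])  # same quantities recomputed in Python

# Exact comparison fails on tiny representation/rounding differences...
print(np.array_equal(sas_values, python_values))                       # False

# ...so compare within an agreed relative/absolute tolerance instead
print(np.allclose(sas_values, python_values, rtol=1e-9, atol=1e-12))   # True
```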

Comparing large datasets

One of the most common tasks when working with large datasets involves evaluating how they change over time. SAS has a built-in procedure (PROC COMPARE) which compares datasets swiftly and easily as required. Python has packages for this as well; however, these packages are not as robust as their SAS counterparts.
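
For migrating teams, a rough Python analogue of PROC COMPARE is the comparison tooling in pandas. A minimal sketch, assuming pandas 1.1 or later for DataFrame.compare and using hypothetical sample data:

```python
import pandas as pd
from pandas.testing import assert_frame_equal

old = pd.DataFrame({"loan_id": [1, 2, 3], "upb": [100_000.0, 250_000.0, 80_000.0]})
new = pd.DataFrame({"loan_id": [1, 2, 3], "upb": [100_000.0, 249_500.0, 80_000.0]})

# Cell-by-cell differences between two snapshots (rows and columns must align)
print(old.set_index("loan_id").compare(new.set_index("loan_id")))

# Or assert equality within a tolerance, raising a descriptive error on any mismatch
try:
    assert_frame_equal(old, new, check_exact=False, rtol=1e-5)
except AssertionError as err:
    print("Datasets differ:", err)
```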

Conclusion

In most cases, the benefits of migrating from SAS to Python outweigh the challenges associated with going through the process. The envisioned savings can sometimes be attractive enough to cause firms to trivialize the transition costs. This should be avoided. A successful migration requires taking full account of the obstacles and making plans to mitigate them. Involving the right people from the outset — analysts well versed in both languages who have encountered and worked through the pitfalls — is key.


Edge: Potential for August Buyouts in Ginnie Mae

In the July prepayment report, many cohorts of GN2 multi-lender pools saw a substantial jump in speeds. These speeds were driven by large delinquency buyouts from banks, mostly Wells Fargo, which we summarized in our most recent analysis. Speeds on moderately seasoned GN2 3% through 4% were especially hard-hit, with increases in involuntary prepayments as high as 25 CBR.

The upcoming August prepayment report, due out August 7th, should be substantially better. Delinquencies for banks with the highest buyout efficiency are significantly lower than they were last month, which will contribute to a decrease in involuntary speeds of 5 to 15 CBR, depending on the cohort. In the table below, we show potential bank buyout speeds for some large GN2 multi-lender cohorts. These speeds assume an 80% roll-rate from 60DQ to 90DQ and 100% buyouts from the banks mentioned above. The analysis does not include buyouts from non-banks, whose delinquencies continue to build.

We have details on other coupon and vintage cohorts as well as buyout analysis at an individual pool level. Please ask for details.

——————————————————

If you are interested in seeing variations on this theme, contact us. Using Edge, we can examine any loan characteristic and generate an S-curve, aging curve, or time series.


Where Would We Be Without the Mortgage Market?

It’s bleak out there. Can you imagine how much bleaker it would be if the U.S. mortgage market weren’t doing its thing to prop up the economy?

The mortgage market is helping healthy borrowers take advantage of lower interest rates to improve their personal balance sheets. And it is helping struggling borrowers by offering generous loss mitigation options. 

The mortgage market plays a unique role in the U.S. economy. It is a hybrid consortium of originators, guarantors, investors, and policymakers intent on offering competitive rates in a transparent market structure—a structure that is the beneficiary of both good government policy and a robust, competitive private marketplace. 

The mortgage market’s pro-cyclical role in the U.S. economy allocates credit and interest rate risk among borrowers, investors and the federal government. When the government’s interest rates go down, so do mortgage rates.
 

March 2020

COVID-19 turned the world's economies on their heads. Once-strong, growing economies ground to a halt. By mid-March, the negative effect of the pandemic in the U.S. was clear, with sharply rising unemployment claims and declining Q1 GDP. COVID-19 did not spare the mortgage market. Fear of borrower defaults led to a freezing up of the credit market, which in turn fueled anxiety among mortgage servicers, guarantors, investors, and originators.

The U.S. government and Federal Reserve responded quickly. Applying lessons learned from 2008, they initiated housing relief programs early. Congress immediately passed legislation enabling forbearance and eviction protection programs for borrowers and renters. The Federal Reserve promptly cut interest rates to near zero while using its balance sheet to quell market concerns and ensure liquidity.

The FHFA’s Credit Risk Transfer program worked as intended, sharing with willing investors the credit risk uncertainty and, in due course, the resulting credit losses. By April, the mortgage market’s guarantors—Ginnie Mae, Fannie Mae and Freddie Mac—imposed P&I advance programs on servicers and investors, thus ensuring the continuation of the mortgage servicing market.


Rallying the Troops

Boy, it was a tough spring for the industry. But now all the pieces were in place:

  1. New legislation to aid borrowers
  2. Lower rates and market liquidity from the Fed
  3. P&I advance solutions and underwriting guidance from the Agencies

The U.S. mortgage market was finally in a position to play its role in steadying the economy. Mortgages help the economy by lowering debt burden ratios and increasing available spendable income and investible assets. Both of these conditions contribute to the stabilization and recovery of the economy. 

This relief is provided through:

  • Rate-and-term refinances, which lower borrowers’ monthly mortgage payments,
  • Purchase loans, which help borrowers capitalize on low interest rates to buy new houses, and
  • Cash-out refinances, which enable borrowers to convert home equity into spendable and investable cash.

Mortgage origination volume in 2020 is now projected to reach $2.8 trillion—a 30% increase over 2019—despite 11% unemployment and more than 4 million loans in forbearance.

But near-term issues remain

It would be a misstatement to say all things are great for the U.S. mortgage market. While mortgage rates are at 50-year lows, they are not as low as they could be. The dramatic increase in volume has forced originators to raise rates in order to manage their production surges. Mortgage servicing rights values have plunged on new originations, which also leads to higher borrower rates. In other words, a good portion of the pro-cyclical benefit of lower interest rates is not actually making its way into the hands of mortgage borrowers.

In addition, the current high rate of unemployment and forbearance will ultimately come home to roost in the form of elevated default rates as the economy’s recovery from COVID-19 continues to look more U-shaped than the originally hoped for V-shape. Any increases in default rates will certainly be met with new rounds of government intervention. This almost always results in higher costs to servicers.  

Long-term uncertainties

The pandemic continues to wreak havoc on people and economies. Its duration and cumulative impacts are still unknown but are certain to reshape the U.S. mortgage market. Still unanswered are the growing questions around how the following will affect local real estate values, defaults, and future business volumes:

  • The emerging work-from-home economy
  • Permanent employment dislocations from the loss of travel, entertainment, and retail jobs
  • Loss of future rate-and-term refinance business because of today’s low rates
  • Muted future purchase volumes due to high unemployment

Notwithstanding these uncertainties, the U.S. mortgage market will play a vital role in the economy’s rebuilding. Its resiliency and willingness to learn from past mistakes, combined with an activist role of government and its guarantors, not only ensure the market’s long-term viability and success. These qualities also position it as a mooring point for an economy otherwise tossed about in a turbulent storm of uncertainty. 


RiskSpan Vintage Quality Index (VQI): Q2 2020

The RiskSpan Vintage Quality Index (“VQI”) is a monthly index designed to quantify the underwriting environment of a monthly vintage of mortgage originations and help credit modelers control for prevailing underwriting conditions at various times. Published quarterly by RiskSpan, the VQI generally trends slowly, with interesting monthly changes found primarily in the individual risk layers. (Assumptions used to construct the VQI can be found at the end of this post.) The VQI has reacted dramatically to the economic tumult caused by COVID-19, however, and in this post we explore how the VQI’s reaction to the current crisis compares to the start of the Great Recession. We examine the periods leading up to the start of each crisis and dive deep into the differences between individual risk layers.

Reacting to a Crisis

In contrast with its typically more gradual movements, the VQI’s reaction to a crisis is often swift. Because the VQI captures the average riskiness of loans issued in a given month, crises that lower lender (and MBS investor) confidence can quickly drive the VQI down as lending standards are tightened. For this comparison, we will define the start of the COVID-19 crisis as February 2020 (the end of the most recent economic expansion, according to the National Bureau of Economic Research), and the start of the Great Recession as December 2007 (the first official month of that recession). As you might expect, the VQI reacted by moving sharply down immediately after the start of each crisis.[1]


Though the reaction appears similar, with each four-month period shedding roughly 15% of the index, the charts show two key differences. The first difference is the absolute level of the VQI at the start of each crisis. The vertical axis on the graphs above spans the same range (so that the slope of the changes displays consistently), but the range is shifted by a full 40 points. The VQI maxed out at 139.0 in December 2007, while at the start of the COVID-19 crisis, the VQI stood at just 90.4.

A second difference surrounds the general trend of the VQI in the months leading up to the start of each crisis. The VQI was trending up in the 18 months leading up to the Great Recession, signaling increasing riskiness in the loans being originated and issued. (As we discuss later, this "last push" in the second half of 2007 was driven by an increase in loans with high loan-to-value ratios.) Conversely, 2019 saw the VQI trend downward, signaling a tightening of lending standards.

Different Layers of Risk

Because the VQI simply indexes the average number of risk layers associated with the loans issued by the Agencies in a given month, a closer look at the individual risk layers provides insights that can be masked when analyzing the VQI as a whole.
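
To make that construction concrete, here is a toy sketch of how such an index could be computed from loan-level issuance data. The risk-layer flags echo those discussed in this post and the baseline indexing follows the footnote below, but the thresholds, sample loans, and baseline value are simplified placeholders, not RiskSpan's actual methodology:

```python
import pandas as pd

# Placeholder loan-level issuance data for one monthly vintage
loans = pd.DataFrame({
    "fico": [640, 720, 780, 655],
    "ltv": [95, 80, 75, 85],
    "dti": [48, 36, 44, 50],
    "purpose": ["cashout", "purchase", "rate_term", "cashout"],
    "occupancy": ["primary", "second_home", "primary", "primary"],
})

# Flag each risk layer, count layers per loan, and average across the vintage
risk_layers = pd.DataFrame({
    "low_fico": loans["fico"] < 660,
    "high_ltv": loans["ltv"] > 80,
    "high_dti": loans["dti"] > 45,
    "cash_out": loans["purpose"] == "cashout",
    "second_home": loans["occupancy"] == "second_home",
})
avg_layers = risk_layers.sum(axis=1).mean()

# Index the month against an assumed January 2003 baseline value of 100
baseline_avg_layers = 1.10   # hypothetical average layers per loan in Jan 2003
vqi = 100 * avg_layers / baseline_avg_layers
print(f"Toy VQI for this vintage: {vqi:.1f}")
```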

The risk layer that most clearly depicts the difference between the two crises is the share of loans with low FICO scores (below 660).


The absolute difference is striking: 27.9% of loans issued in December 2007 had a low FICO score, compared with just 7.1% of loans in February 2020. That 20.8% difference perfectly captures the underwriting philosophies of the two periods and pretty much sums up the differing quality of the two loan cohorts.

FICO trends before the crisis are also clearly different. In the 12 months leading up to the Great Recession, the share of low-FICO loans rose from 24.4% to 27.9% (+3.5%). In contrast, the 12 months before the COVID-19 crisis saw the share of low-FICO loans fall from 11.5% to 7.2% (-4.3%).

The low-FICO risk layer's reaction to the crisis also differs dramatically. Falling from 27.9% to 15.4% in four months (on its way to 3.3% in May 2009), the share of low-FICO loans cratered following the start of the recession. In contrast, this risk layer has been largely unimpacted by the current crisis, simply continuing its downward trend mostly uninterrupted.

Three other large drivers of the difference between the VQI in December 2007 and in February 2020 are the share of cash-out refinances, the share of loans for second homes, and the share of loans with debt-to-income (DTI) ratios above 45%. What makes these risk layers different from FICO is how they reacted (or, rather, failed to react) to the crisis itself. While their absolute levels in the months leading up to the Great Recession were well above those seen at the beginning of 2020 (similar to low-FICO), none of these three risk layers appears to react to either crisis; each simply continues along the general trajectory it was on in the months leading up to each crisis. Cash-out refinances, which follow a seasonal cycle, are mostly unimpacted by the start of the crises, holding a steady spread between the two time periods:


Loans for second homes were already becoming more rare in the runup to December 2007 (the only risk layer to show a reaction to the tumult of the fall of 2007) and mostly held in the low teens immediately following the start of the recession:


Finally, loans with high DTIs (over 45%) have simply followed their slow trend down since the start of the COVID-19 crisis, while they actually became slightly more common following the start of the Great Recession:


The outlier, both pre- and post-crisis, is the high loan-to-value risk layer. For most of the 24 months leading up to the start of the Great Recession the share of loans with LTVs above 80% was well below the same period leading up to the COVID-19 crisis. The pre-Great Recession max of 33.2% is below the 24-month average of 33.3% at the start of the COVID-19 crisis. The share of high-LTV loans also reacted to the crisis in 2008, falling sharply after the start of the recession. In contrast, the current downward trend in high-LTV loans started well before the COVID-19 crisis and was seemingly unimpacted by the start of the crisis.


Though the current downward trend is likely due to increased refinance activity as mortgage rates continue to crater, the chart seems upside down relative to what you might have predicted.

The COVID-19 Crisis is Different

What can the VQI tell us about the similarities and differences between December 2007 and February 2020? When you look closely, quite a bit.

  1. The loans experiencing the crisis in 2020 are less risky.

By almost all measures, the loans that entered the downturn beginning in December 2007 were riskier than the loans outstanding in February 2020. There are fewer low-FICO loans, fewer loans with high debt-to-income ratios, fewer loans for second homes, and fewer cash-out refinances. Trends aside, the absolute level of these risky characteristics—characteristics that are classically considered in mortgage credit and loss models—is significantly lower. While that is no guarantee the loans will fare better through this current crisis and recovery, we can reasonably expect better outcomes this time around.

  2. The 2020 crisis did not immediately change underwriting / lending.

One of the more surprising VQI trends is the non-reaction of many of the risk layers to the start of the COVID-19 crisis. FICO, LTV, and DTI all seem to be continuing a downward trend that began well before the first coronavirus diagnosis. The VQI is merely continuing a trend started back in January 2019. (The current “drop” has brought the VQI back to the trendline.) Because the crisis was not born of the mortgage sector and has not yet stifled demand for mortgage-backed assets, we have yet to see any dramatic shifts in lending practices (a stark contrast with 2007-2008). Dramatic tightening of lending standards can lead to reduced home buying demand, which can put downward pressure on home prices. The already-tight lending standards in place before the COVID-19 crisis, coupled with the apparent non-reaction by lenders, may help to stabilize the housing market.

The VQI was not designed to gauge the unknowns of a public health crisis. It does not directly address the lessons learned from the Great Recession, including the value of modification and forbearance in maintaining stability in the market. It does not account for the role of government and the willingness of policy makers to intervene in the economy (and in the housing markets specifically). Despite not being a crystal ball, the VQI nevertheless remains a valuable tool for credit modelers seeking to view mortgage originations from different times in their proper perspective.

—————

Analytical and Data Assumptions

Population assumptions:

  • Issuance Data for Fannie Mae and Freddie Mac.
  • Loans originated more than three months prior to issuance are excluded because the index is meant to reflect current market conditions.
  • Loans likely to have been originated through the HARP program, as identified by LTV, MI coverage percentage, and loan purpose are also excluded. These loans do not represent credit availability in the market, as they likely would not have been originated today if not for the existence of HARP.

Data Assumptions:

  • Freddie Mac data goes back to December 2005. Fannie Mae data only goes back to December 2014.
  • Certain Freddie Mac data fields were missing prior to June 2008.

The GSE historical loan performance data released in support of GSE Risk Transfer activities was used to help back-fill data where it was missing.

[1] Note that the VQI’s baseline of 100 reflects underwriting standards as of January 2003.


Edge: Bank Buyouts in Ginnie Mae Pools

Ginnie Mae prepayment speeds saw a substantial uptick in July, with speeds in some cohorts more than doubling. Much of this uptick was due to repurchases of delinquent loans. In this short post, we examine those buyouts for bank and non-bank servicers. We also offer a method for quantifying buyout risk going forward.

For background, GNMA servicers have the right (but not the obligation) to buy delinquent loans out of a pool if they have missed three or more payments. The servicer buys these loans at par and can later re-securitize them if they start reperforming. Re-securitization rules vary based on whether the loan is naturally delinquent or in a forbearance program. But the reperforming loan will be delivered into a pool with its original coupon, which almost always results in a premium-priced pool. This delivery option provides a substantial profit for the servicer that purchased the loan at par.

To purchase the loan out of the pool, the servicer must have both cash and sufficient balance sheet liquidity. Differences in access to funding can drive substantial differences in buyout behavior between well-capitalized bank servicers and more thinly capitalized non-bank servicers. Below, we compare recent buyout speeds between banks and non-banks and highlight some entities whose behavior differs substantially from that of their peer group.[1]

In July, Wells Fargo's GNMA buyouts had an outsized impact on total CPR in GNMA securities. Wells, the largest GNMA bank servicer, exhibits extraordinary buyout efficiency relative to other servicers, buying out 99 percent of eligible loans. Wells' size and efficiency, coupled with a large 60-day delinquency in June (8.6%), caused a large increase in "involuntary prepayments" and drove overall CPR substantially higher in July. This effect was especially apparent in some moderately seasoned multi-lender pools. For example, speeds on 2012-13 GN2 3.5 multi-lender pools accelerated from low 20s CPR in June to mid-40s in July, nearly converging to the cheapest-to-deliver 2018-19 production GN2 3.5 and wiping out any carry advantage in the sector.

Figure 1: Prepayment speeds in GN2 3.5 multi-lender pools: 2012-13 vintage in blue, 2018-19 vintage in black.

This CPR acceleration in 2012-13 GN2 3.5s was due entirely to buyouts, with the sector's buyouts rising from 5 CBR to 29 CBR.[2] In turn, this increase was driven almost entirely by Wells, which accounted for 25% of the servicing in some pools.

Figure 2: Buyout speeds in GN2 3.5 multi-lender pools. 2012-13 vintage in blue, 2018-19 vintage in black.

In the next table, we summarize performance for the top ten GNMA bank servicers. The table shows loan-level roll rates from June to July for loans that started June 60-days delinquent. Loans that rolled to the DQ90+ bucket were not bought out of the pool by the servicer, despite being eligible for buyout. We use this 90+ delinquency bucket to calculate each servicer's buyout efficiency, defined as the percentage of delinquent loans eligible for buyout that a servicer actually repurchases.

Roll Rates for Bank Servicers, for July 2020 Reporting Date


Surprisingly, many banks exhibit very low buyout efficiencies, including Flagstar, Citizens, and Fifth Third. Navy Federal and USAA (next table) show muted buyout performance due to their high VA concentration.

Next, we summarize roll rates and buyout efficiency for the top ten GNMA non-bank servicers.

Roll Rates for Ginnie Mae Non-bank Servicers, for July 2020 Reporting Date


Not surprisingly, non-banks as a group are much less efficient at buying out eligible loans, but Carrington stands out.

Looking forward, how can investors quantify the potential CBR exposure in a sector? Investors can use Edge to estimate the upcoming August buyouts within a sector by running a servicer query to separate a set of pools or cohort into its servicer-specific delinquencies.[3] Investors can then apply that servicer’s 60DQ->90DQ roll rate plus the servicer’s buyout efficiency to estimate a CBR.[4] This CBR will contribute to the overall CBR for a pool or set of pools.
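
For readers who prefer to see the arithmetic, the sketch below walks through that estimate. The servicer shares, delinquency rates, roll rates, and efficiencies are hypothetical placeholders rather than Edge output, and the CBR annualization simply mirrors the standard SMM-to-CPR convention:

```python
# Estimate a cohort's involuntary speed (CBR) from servicer-level inputs.
# All numbers are hypothetical placeholders; in practice they would come from
# an Edge servicer query plus roll-rate and buyout-efficiency tables like those above.

servicers = {
    #  name:       (share of cohort UPB, 60-day DQ %, 60->90 roll rate, buyout efficiency)
    "Bank A":      (0.25, 0.060, 0.80, 0.99),
    "Bank B":      (0.15, 0.045, 0.80, 0.60),
    "Non-bank C":  (0.40, 0.075, 0.75, 0.05),
    "Non-bank D":  (0.20, 0.070, 0.75, 0.10),
}

monthly_buyout_rate = sum(
    share * dq60 * roll * efficiency
    for share, dq60, roll, efficiency in servicers.values()
)

# Annualize the single-month buyout rate the same way SMM is annualized to CPR
cbr = 1 - (1 - monthly_buyout_rate) ** 12
print(f"Estimated one-month buyout rate: {monthly_buyout_rate:.2%}")
print(f"Estimated CBR: {cbr:.1%}")
```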

Given the significant premium at which GNMA passthroughs are trading, the profits from repurchase and re-securitization are substantial. While we expect repurchases will continue to play an outsized role in GNMA speeds, this analysis illustrates the extent to which this behavior can vary from servicer to servicer, even within the bank and non-bank sectors. Investors can mitigate this risk by quantifying the servicer-specific 60-day delinquency within their portfolio to get a clearer view of the potential impact from buyouts.

If you are interested in seeing variations on this theme, contact us. Using Edge, we can examine any loan characteristic and generate an S-curve, aging curve, or time series.


[1] This post builds on our March 24 write-up on bank versus non-bank delinquencies. For this analysis, we limited the data to loans in 3% pools and higher, issued 2010-2020. Please see RiskSpan for other data cohorts.

[2] CBR is the Conditional Buyout Rate, the buyout analogue of CPR.

[3] In Edge, select the “Expanded Output” to generate servicer-by-servicer delinquencies.

[4] RiskSpan now offers loan-level delinquency transition matrices. Please email techsupport@riskspan.com for details.


Is Free Public Data Worth the Cost?

No such thing as a free lunch.

The world is full of free (and semi-free) datasets ripe for the picking. If it’s not going to cost you anything, why not supercharge your data and achieve clarity where once there was only darkness?

But is it really not going to cost you anything? What is the total cost of ownership for a public dataset, and what does it take to distill truly valuable insights from publicly available data? Setting aside the reliability of the public source (a topic for another blog post), free data is anything but free. Let us discuss both the power and the cost of working with public data.

To illustrate the point, we borrow from a classic RiskSpan example: anticipating losses to a portfolio of mortgage loans due to a hurricane—a salient example as we are in the early days of the 2020 hurricane season (and the National Oceanic and Atmospheric Administration (NOAA) predicts a busy one). In this example, you own a portfolio of loans and would like to understand the possible impacts to that portfolio (in terms of delinquencies, defaults, and losses) of a recent hurricane. You know this will likely require an external data source because you do not work for NOAA, your firm is new to owning loans in coastal areas, and you currently have no internal data for loans impacted by hurricanes.

Know the Data.

The first step in using external data is understanding your own data. This may seem like a simple task. But data, its source, its lineage, and its nuanced meaning can be difficult to communicate inside an organization. Unless you work with a dataset regularly (i.e., often), you should approach your own data as if it were provided by an external source. The goal is a full understanding of the data, the data’s meaning, and the data’s limitations, all of which should have a direct impact on the types of analysis you attempt.

Understanding the structure of your data and the limitations it puts on your analysis involves questions like:

  • What objects does your data track?
  • Do you have time series records for these objects?
  • Do you only have the most recent record? The most recent 12 records?
  • Do you have one record that tries to capture life-to-date information?

Understanding the meaning of each attribute captured in your data involves questions like:

  • What attributes are we tracking?
  • Which attributes are updated (monthly or quarterly) and which remain static?
  • What are the nuances in our categorical variables? How exactly did we assign the zero-balance code?
  • Is original balance the loan’s balance at mortgage origination, or the balance when we purchased the loan/pool?
  • Do our loss numbers include forgone interest?

These same types of questions also apply to understanding external data sources, but the answers are not always as readily available. Depending on the quality and availability of the documentation for a public dataset, this exercise may be as simple as just reading the data dictionary, or as labor intensive as generating analytics for individual attributes, such as mean, standard deviation, mode, or even histograms, to attempt to derive an attribute’s meaning directly from the delivered data. This is the not-free part of “free” data, and skipping this step can have negative consequences for the quality of analysis you can perform later.
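
A few lines of pandas go a long way toward this kind of attribute-level interrogation. A minimal sketch, with hypothetical column names and values:

```python
import pandas as pd

# Stand-in for a slice of an external performance dataset (columns are hypothetical)
df = pd.DataFrame({
    "current_upb": [210_000, 0, 185_500, 143_250],
    "zero_balance_code": ["", "01", "", ""],
    "dq_status": ["0", "0", "3", "1"],
})

# Numeric attributes: distribution summary (mean, std, quartiles)
print(df["current_upb"].describe())

# Categorical attributes: frequency counts to help infer the meaning of codes
print(df["zero_balance_code"].value_counts(dropna=False))
print(df["dq_status"].value_counts(normalize=True))
```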

Returning to our example, we require at least two external data sets:  

  1. where and when hurricanes have struck, and
  2. loan performance data for mortgages active in those areas at those times.

The obvious choice for loan performance data is the historical performance datasets from the GSEs (Fannie Mae and Freddie Mac). Providing monthly performance information and loss information for defaulted loans for a huge sample of mortgage loans over a 20-year period, these two datasets are perfect for our analysis. For hurricanes, some manual effort is required to extract date, severity, and location from NOAA maps like these (you could get really fancy and gather zip codes covered in the landfall area—which, by leaving out homes hundreds of miles away from expected landfall, would likely give you a much better view of what happens to loans actually impacted by a hurricane—but we will stick to state-level in this simple example).

Make new data your own.

So you’ve downloaded the historical datasets, you’ve read the data dictionaries cover-to-cover, you’ve studied historical NOAA maps, and you’ve interrogated your own data teams for the meaning of internal loan data. Now what? This is yet another cost of “free” data: after all your effort to understand and ingest the new data, all you have is another dataset. A clean, well-understood, well-documented (you’ve thoroughly documented it, haven’t you?) dataset, but a dataset nonetheless. Getting the insights you seek requires a separate effort to merge the old with the new. Let us look at a simplified flow for our hurricane example:

  • Subset the GSE data for active loans in hurricane-related states in the month prior to landfall. Extract information for these loans for 12 months after landfall.
  • Bucket the historical loans by the characteristics you use to bucket your own loans (LTV, FICO, delinquency status before landfall, etc.).
  • Derive delinquency and loss information for the buckets for the 12 months after the hurricane.
  • Apply the observed delinquency and loss information to your loan portfolio (bucketed using the same scheme you used for the historical loans).
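
A rough pandas sketch of that flow appears below. The dataset contents, column names, and bucket edges are all hypothetical; the point is the shape of the bucketing and merge, not the specifics:

```python
import pandas as pd

# Hypothetical stand-ins for (1) GSE performance records for loans active in the
# affected states pre-landfall and (2) your own portfolio. Column names are assumed.
gse = pd.DataFrame({
    "fico": [630, 700, 750, 640, 710],
    "ltv": [95, 85, 70, 90, 80],
    "went_60dq_within_12m": [1, 0, 0, 1, 0],
    "loss_severity": [0.35, 0.0, 0.0, 0.40, 0.0],
})
portfolio = pd.DataFrame({
    "loan_id": ["A1", "A2", "A3"],
    "fico": [645, 705, 760],
    "ltv": [92, 82, 68],
})

def bucket(df):
    """Assign each loan to a FICO x LTV bucket (same scheme for both datasets)."""
    df = df.copy()
    df["fico_bucket"] = pd.cut(df["fico"], bins=[0, 660, 720, 850], labels=["<660", "660-720", "720+"])
    df["ltv_bucket"] = pd.cut(df["ltv"], bins=[0, 80, 95, 200], labels=["<=80", "80-95", "95+"])
    return df

gse, portfolio = bucket(gse), bucket(portfolio)

# Observed 12-month post-landfall outcomes by bucket...
outcomes = gse.groupby(["fico_bucket", "ltv_bucket"], observed=True).agg(
    dq_rate=("went_60dq_within_12m", "mean"),
    loss_sev=("loss_severity", "mean"),
).reset_index()

# ...applied to your own loans, bucketed the same way
expected = portfolio.merge(outcomes, on=["fico_bucket", "ltv_bucket"], how="left")
print(expected[["loan_id", "dq_rate", "loss_sev"]])
```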

And there you have it—not a model, but a grounded expectation of loan performance following a hurricane. You have stepped out of the darkness and into the data-driven light. And all using free (or “free”) data!

Hyperbole aside, nothing about our example analysis is easy, but it plainly illustrates the power and cost of publicly available data. The power is obvious in our example: without the external data, we have no basis for generating an expectation of losses after a hurricane. While we should be wary of the impacts of factors not captured by our datasets (like the amount and effectiveness of government intervention after each storm – which does vary widely), the historical precedent we find by averaging many storms can form the basis for a robust and defensible expectation. Even if your firm has had experience with loans in hurricane-impacted areas, expanding the sample size through this exercise bolsters confidence in the outcomes. Generally speaking, the use of public data can provide grounded expectations where there had been only anecdotes.

But this power does come at a price—a price that should be appreciated and factored into the decision whether to use external data in the first place. What is worse than not knowing what to expect after a hurricane? Having an expectation based on bad or misunderstood data. Failing to account for the effort required to ingest and use free data can lead to bad analysis and the temptation to cut corners. The effort required in our example is significant: the GSE data is huge, complicated, and will melt your laptop’s RAM if you are not careful. Turning NOAA PDF maps into usable data is not a trivial task, especially if you want to go deeper than the state level. Understanding your own data can be a challenge. Applying an appropriate bucketing to the loans can make or break the analysis. Not all public datasets present these same challenges, but all public datasets present costs. There simply is no such thing as a free lunch. The returns on free data frequently justify these costs. But they should be understood before unwittingly incurring them.


Chart of the Month: Not Just the Economy — Asset Demand Drives Prices

Within weeks of the March 11th declaration of COVID-19 as a global pandemic by the World Health Organization, rating agencies were downgrading businesses across virtually every sector of the economy. Not surprisingly, these downgrades were felt most acutely by businesses that one would reasonably expect to be directly harmed by the ensuing shutdowns, including travel and hospitality firms and retail stores. But the downgrades also hit food companies and other areas of the economy that tend to be more recession resistant. 

An accompanying spike in credit spreads was even quicker to materialize. Royal Caribbean’s and Marriott’s credit spreads tripled essentially overnight, while those of other large companies increased by twofold or more. 

But then something interesting happened. Almost as quickly as they had risen, most of these spreads began retreating to more normal levels. By mid-June, most spreads were at or lower than where they were prior to the pandemic declaration. 

What business reason could plausibly explain this? The pandemic is ongoing and aggregate demand for these companies’ products does not appear to have rebounded in any material way. People are not suddenly flocking back to Marriott’s hotels or Wynn’s resorts.    

The story is indeed one of increased demand. But rather than demand for the companies' products, we're seeing an upswing in demand for these companies' debt. What could be driving this demand?

Enter the Federal Reserve. On March 23rd, The Fed announced that its Secondary Market Corporate Credit Facility (SMCCF) would begin purchasing investment-grade corporate bonds in the secondary market, first through ETFs and directly in a later phase. 

And poof! Instant demand. And instant price stabilization. All the Fed had to do was announce that it would begin buying bonds (it hasn’t actually started buying yet) for demand to rush back in, push prices up and drive credit spreads down.  

To illustrate how quickly spreads reacted to the Fed’s announcement, we tracked seven of the top 20 companies listed by S&P across different industries from early March through mid-June. The chart below plots swap spreads for a single bond (with approximately five years to maturity) from each of the following companies: 

  • Royal Caribbean Cruises (RCL)
  • BMW 
  • The TJX Companies (which includes discount retailers TJ Maxx, Marshalls, and HomeGoods, among others) 
  • Marriott 
  • Wynn Resorts 
  • Kraft Foods 
  • Ford Motor Company

Credit Spreads React to Fed More than Downgrades

We sourced the underlying data for these charts from two RiskSpan partners: S&P, which provided the timing of the downgrades, and Refinitiv, which provided time-series spread data.  

The companies we selected don’t cover every industry, of course, but they cover a decent breadth. Incredibly, with the lone exception of Royal Caribbean, swap spreads for every one of these companies are either better than or at the same level as where they were pre-pandemic. 

As alluded to above, this recovery cannot be attributed to some miraculous improvement in the underlying economic environment. Literally the only thing that changed was the Fed’s announcement that it would start buying bonds. The fact that Royal Caribbean’s spreads have not fully recovered seems to suggest that the perceived weakness in demand for cruises in the foreseeable future remains strong enough to overwhelm any buoying effect of the impending SMCCF investment. For all the remaining companies, the Fed’s announcement appears to be doing the trick. 

We view this as clear and compelling evidence that the Federal Reserve is achieving its intended result of stabilizing asset prices, which in turn should help ease corporate credit.

