
Are Recast Loans Skewing Agency Speeds?

In a previous blog, we highlighted large curtailments on loans, behavior that was driving a prepayment spike on some new-issue pools. Any large curtailment should also result in shortening the remaining term of the loan because the mortgage payment is nearly always “level-pay” for loans in a conventional pool. And we see that behavior for all mortgages experiencing large curtailments.

However, we noted that nearly half of these loans showed a subsequent extension of their remaining term back to where it would have been without the curtailment.1 This extension occurred anywhere between zero and sixteen months after the curtailment, with a median of one month after the large payment. We presume these maturity extensions are a loan “recast,” which is explained well in a recent FAQ from Rocket Mortgage. In summary, a recast allows the borrower to lower their monthly payment after making a curtailment above some threshold, typically at least $10,000 extra principal.

Some investors may not be aware that a recast loan may remain in the trust, especially since the terms of the loan are being changed without a buyout.2 Further, since the extension lowers the monthly payment, the trust will receive principal more slowly ex curtailment than under the original terms of the loan. This could possibly affect buyers of the pool after the curtailment and before the recast.
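To make the mechanics concrete, the sketch below recomputes the level payment after a hypothetical large curtailment, once leaving the shortened schedule in place and once re-extending the remaining term (the recast). All figures and the level_payment helper are illustrative assumptions, not data from the pools discussed here; the point is simply that the recast lowers the scheduled payment and therefore slows the scheduled principal received by the trust.

```python
# A minimal sketch (hypothetical loan terms) of how a recast changes the
# scheduled payment on a level-pay mortgage.

def level_payment(balance, annual_rate, months_remaining):
    """Standard level-pay (annuity) payment for a fixed-rate mortgage."""
    r = annual_rate / 12.0
    return balance * r / (1.0 - (1.0 + r) ** -months_remaining)

balance_before = 400_000   # hypothetical unpaid balance
note_rate = 0.03           # hypothetical note rate
term_left = 352            # months remaining on the original schedule
curtailment = 100_000      # large extra principal payment

payment_original = level_payment(balance_before, note_rate, term_left)
balance_after = balance_before - curtailment

# Without a recast: same payment, shorter effective term.
# With a recast: payment is recomputed over the re-extended remaining term,
# so the scheduled principal the trust receives each month is smaller.
payment_recast = level_payment(balance_after, note_rate, term_left)

print(f"payment before curtailment: {payment_original:,.2f}")
print(f"payment after recast:       {payment_recast:,.2f}")

# First-month scheduled principal under each case (ex any further curtailment)
interest = balance_after * note_rate / 12.0
print(f"scheduled principal, no recast: {payment_original - interest:,.2f}")
print(f"scheduled principal, recast:    {payment_recast - interest:,.2f}")
```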

While the number of recast loans is small, we found it interesting that the loan terms are changed without removing the loans from the pool. We identified nearly 7,800 loans that were issued between 2021 Q4 and 2022 Q1 and had both a curtailment greater than $10,000 and a subsequent re-extension of loan term.

Of these loans, the typical time to term-recast is zero to two months, with 1% of the loans recasting a year or more after the curtailment.

Some of these loans reported multiple curtailments and recasts, with loan 9991188863 in FR QD1252 extending on three separate occasions after three large curtailments. It seems the door is always open to extension.

For loans that recast their maturities after a curtailment, 85% had extensions between 10 and 25 years.

Large curtailments are uncommon, and term recasts comprise roughly half of the loans in our sample with large curtailments, so term recasts will typically have only a small effect on pool cash flows, extending the time of principal receipt ex curtailment and possibly changing borrower behavior.3 For large pools, any effect will typically be exceeded by prepayments due to turnover.

However, for some smaller pools the WAM extension due to recast is noticeable. We identified dozens of pools whose WAM extended after a recast of underlying loan(s). The table below shows just a few examples. All of these pools are comparatively small, which is to be expected since just one or two individual loan recasts can have an outsized effect on a small pool’s statistics.

Pool ID | Factor Date | Current Face | Extension (months)
FR QD7617 | 7/2022 | 20,070,737 | 6
FR QD0006 | 1/2022 | 15,682,775 | 5
FN CB3367 | 11/2022 | 14,839,919 | 5
FR QD5736 | 7/2022 | 10,916,959 | 6
FN BU0581 | 4/2022 | 10,164,000 | 6
FR QD4492 | 6/2022 | 3,113,532 | 16
FN BV2076 | 5/2022 | 3,165,509 | 18
FR QD6013 | 7/2022 | 3,079,250 | 22



Takeaways from SFVegas 2023

The most highly attended conference in recent years brought together leaders from government, capital markets, and tech institutions to discuss the current state and future of the securitization markets.

SFVegas remains the optimal environment for fostering healthy dialogue aimed at making markets more efficient and transparent through innovative new solutions. RiskSpan is delighted to be engaged in this dialogue.

Here are our key takeaways from the conference.

Loan Innovation

Sticky inflation and high interest rates are creating a macroeconomic environment that is particularly conducive to bringing new residential mortgage products to market. Market demand for HELOCs and other second-lien products is driving innovation around these offerings and accelerating their acceptance. ARM production is growing rapidly and is at some of the highest levels in over a decade.

Product innovation is moving forward with both consumers and investors in mind. Consumers are in search of access to better financing, while investors seek new ways to participate in these markets.

Technology-Accelerated (R)evolution

Data is driving the dialogue. New scoring tools (FICO 10T and VantageScore 4.0), new ESG-related data, and better disclosures are creating a much more transparent investment process.

Cloud-native applications continue to make analytics processing cheaper and differentiate how investors and their counterparties seek relative value. Efficiency in data management and analytics separates winners from losers.

Accelerated adoption of AI-driven solutions will drive market operational efficiency in the coming years. Use cases are only beginning to be uncovered.

New Investors, New Ideas

New investors are bringing fresh capital to the market with new ideas on how to maximize risk-adjusted returns. Investors backed by private equity are seeking new returns in virtually every category of structured markets: MSRs, BPLs and CLOs. Interest in these classes will only grow in the coming years as more investors seek to maximize returns in private assets.

The international investor community remains strong as global asset allocation shifts toward the U.S. and fewer opportunities exist in overseas markets.


RiskSpan sits at the intersection of all of these trends, helping structured finance investors of every type leverage technology and data solutions that uncover market opportunities, mitigate risks, and deliver new products.

Great conference! Get in touch with us to learn more about how RiskSpan helps clients simplify, scale, and transform their structured finance analytics!


The Curious Case of Curtailments

With more than 90% of mortgages out-of-the-money from a refinancing standpoint, the MBS market has rightly focused on activities that affect discounts, including turnover and, to a much lesser extent, cash-out refinancings. In this analysis we examine the source of fast speeds on new-issue loans and pools.

As we dig deeper into turnover, we notice a curious behavior related to curtailments that has existed for several years but gone largely ignored in the refi-dominated environments of recent years. Curtailment activity, especially higher-than-expected curtailments on new-production mortgages, has steadily increased in more recent vintages.

For this analysis we define a curtailment as any principal payment that is larger than the contractual monthly payment but smaller than the remaining balance of the loan (a payment of the full remaining balance is more typically classified as a payoff due to either a refinancing or a house sale). In the first graph, we show curtailment speeds for new loans with note rates that were not refinanceable on a rate/term basis.1 As you can see, the 2022 vintage shows a significant uptick in curtailments in the second month. Other recent vintages show lower but still significant early-month curtailments, whereas pre-2018 vintages show very little early curtailment activity.
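A minimal sketch of this classification rule, with hypothetical field names and amounts, might look like the following; the CPR conversion uses the standard 1 − (1 − SMM)^12 annualization.

```python
# Classify one month's principal payment per the definition used in this
# analysis, and annualize a curtailment into a CPR. All inputs hypothetical.

def classify_principal_event(principal_paid, scheduled_payment, prior_balance):
    """Label a month's principal payment for a single loan."""
    if principal_paid >= prior_balance:
        return "payoff"        # full payoff: refinancing or house sale
    if principal_paid > scheduled_payment:
        return "curtailment"   # partial prepayment above the contractual payment
    return "scheduled"

def curtailment_cpr(curtailed_principal, balance_after_scheduled_principal):
    """Single-month SMM from curtailment, annualized to CPR."""
    smm = curtailed_principal / balance_after_scheduled_principal
    return 1.0 - (1.0 - smm) ** 12

print(classify_principal_event(25_000, 1_900, 380_000))   # -> "curtailment"
print(f"{curtailment_cpr(25_000, 380_000):.2%} CPR")
```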

Digging deeper, we separate the loans by purpose: purchase vs. refi. Curtailment speeds are significantly higher among purchase loans than among refis in the first six months, with a noticeable spike at months two and three.

Focusing on purchase loans, we notice that the behavior is most pronounced for non-first-time homebuyers (non-FTHBs) and relatively absent among FTHBs. The 2022-vintage non-FTHBs paid nearly 6 CPR in their second month of borrowing.

What drives this behavior? While it’s impossible to say for certain, we believe that homeowners purchasing new homes are using proceeds from the sale of the previous home to partially pay down the new loan, with the sale of the previous home closing a month or so after the close of the new loan.

How pervasive is this behavior? We looked at purchase loans originated in 2022 where the borrower was not a first-time home buyer and noted that 0.5% of the loans account for nearly 75% of the total curtailment activity on a dollar basis. That means these comparatively high early speeds (6 CPR and higher on some pools) are driven by a small number of loans, with the vast majority of loans showing no significant curtailments in the early months.
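For readers who want to reproduce this kind of concentration statistic on their own loan-level data, here is a minimal sketch; the DataFrame layout and column name are hypothetical.

```python
# Share of total curtailment dollars contributed by the most active loans.
import pandas as pd

def curtailment_concentration(loans: pd.DataFrame, loan_share: float = 0.005) -> float:
    """Fraction of curtailment dollars from the top `loan_share` of loans,
    ranked by curtailment amount. Assumes a 'curtailment_amt' column."""
    ranked = loans["curtailment_amt"].sort_values(ascending=False)
    top_n = max(1, int(len(ranked) * loan_share))
    return ranked.iloc[:top_n].sum() / ranked.sum()

# Usage (hypothetical loan-level table of 2022 non-FTHB purchase loans):
# share = curtailment_concentration(loans_2022_nonfthb)
# print(f"{share:.0%} of curtailment dollars come from the top 0.5% of loans")
```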

High-curtailment loans show large payments relative to their original balances, ranging from 5% to 85% of the unpaid balance with a median value of 25%. We found no pattern with regard to either geography or seller/servicer. Looking at mortgage note rates, 80% of these high-curtailment loans were at 3.5% or lower and only 10% of these borrowers had a positive refinancing incentive at all. Only 1.5% had incentives above 25bp, with a maximum incentive of just 47bp. These curtailments are clearly not explained by rate incentive.

The relative rarity of these curtailments means that, while in aggregate non-FTHBs are paying nearly 6 CPR in the early months, actual results within pools may vary greatly. In the chart below, we show pool speeds for 2022-vintage majors/multi-lender pools, plotted against the percentage of the pool’s balance associated with non-FTHB purchases. We controlled for refi incentive by looking at pools that were out of the money by 0bp to 125bp. As the percentage of non-FTHBs in a pool increases, so does early prepayment speed, albeit with noise around the trend.

We observe that a very small percentage of non-FTHB borrowers are making large curtailment payments in the first few months after closing and that these large payments translate into a short-term pop in speeds on new-production at- or out-of-the-money pools. Investors looking to take advantage of this behavior on discount MBS should focus on pools with a high share of non-FTHB purchase borrowers.


A Practical Approach to Climate Risk Assessment for Mortgage Finance

Note: The following is the introduction from RiskSpan’s contribution to a series of essays on Climate Risk and the Housing Market published this month by the Mortgage Bankers Association’s Research Institute for Housing America.

Significant uncertainty exists about how climate change will occur, how all levels of government will intervene or react to chronic risks like sea level rise, and how households, companies, and financial markets will respond to various signals that will create movements in prices, demographics, and economic activity even before climate risk manifests. This paper lays out a pragmatic framework for assessing these risks from the perspective of a mortgage company. We evaluate available public and proprietary data sources and address data limitations, such as different sources providing a different view of risk for a particular property. We propose a sensitivity analysis approach to quantify risk and mitigate the uncertainties in measuring and responding to climate change.

Global temperatures will continue to increase over the next 50 years regardless of the actions people and governments take. The impacts of that warming are expected to accumulate and become more severe and frequent over time, causing stress throughout our economy. Regulators are clearly signaling that climate risk analysis will need to become a regular part of risk management activities. But detailed, industry-specific guidance has not been defined. FHFA and the regulated entities have yet to release a climate risk framework. They clearly recognize the threat to the housing finance system, however, and are actively working towards accounting for these risks.

Most executives and boards have become conceptually familiar with the physical and transition risks of climate change. But significant questions remain around how these concepts translate into specific, quantifiable business, asset, regulatory, legal, and reputation risks in the housing finance industry. Further complicating matters, climate science continues to evolve and there is limited historical data to understand how the effects of climate change will trickle into the housing market.

Sean Becketti1 describes the myriad ways climate change and natural hazard risk can permeate the housing and housing finance industries as well as some of the ways to mitigate its effects. However, quantifying these risks and inserting them into mortgage credit and prepayment models comes with significant challenges. No “best practices” have emerged for incorporating these into traditional model frameworks.

This paper puts forth a practical framework for incorporating climate risk into existing enterprise risk management practices in the housing finance industry. The framework includes suggestions for preparing for coming regulatory requirements on climate risk and, more importantly, for proactively managing and mitigating this risk. Our approach is based on more than two years of research and field work RiskSpan has conducted with its clients and on the resulting models RiskSpan has developed to deliver insights into these risks.

The paper is organized into two main sections:

  1. Prescribed Climate Scenarios and Emerging Regulatory Requirements
  2. A Practical Approach to Climate Risk Assessment for Mortgage Finance

Layering climate risk into enterprise risk management is likely to be a multiyear process. This paper focuses on steps to take in the initial one to two years after climate risk has been prioritized for investment of time and resources by corporate leadership. As explained in an MBA white paper from June 2022,2 “Existing risk management practices, structures, and relationships are already capturing potential risks from climate change.” The aim of this paper is to investigate specific ways in which existing credit, operational, and market risk frameworks can be leveraged to address this challenge, rather than seeking to reinvent the wheel.


Video: Mortgage Market Evolution

As any mortgage market veteran will attest, the distribution and structure of the mortgage market is constantly in flux. When rates fall, at-the-money coupons become premiums, staffing at originators rises, the volume of refis increases, and the distribution and seasoning of coupons change.

And then the cycle turns. Rates rise. Premiums become discounts. Originators cut staff and prepay speeds plummet. But this too changes, and longtime participants will recognize echoes of 1994 or 1999-2000 in today’s washout.

The brief video animation below tracks the evolution of the mortgage market since 2006, with an eye on distribution and seasoning of borrowers.


Agency Social Indices & Prepay Speeds

Do borrowers in “socially rich” pools respond to refinance incentives differently than other borrowers? 

The decision by Fannie and Freddie to release social index disclosure data in November 2022 makes it possible for investors to direct their capital in support of first-time homebuyers, historically underserved borrowers, and people who purchase homes in traditionally underserved areas. Because socially conscious investors likely also have an interest in understanding how these social pools are likely to perform, we were curious whether mortgage pools with higher social ratings behave differently than pools with lower social ratings (and, if a difference exists, how significant it is). To the extent that pools rich in social factors perform better (i.e., prepay more slowly) than pools generally, we expect investors to put an even higher premium on them. This in turn should result in lower rates for the borrowers whose loans contribute to pools with higher social scores. 

The data is new and we are still learning things, but we are beginning to discern some differences in prepay speeds.

Definitions 

First, a quick refresher on Fannie’s and Freddie’s social index terminology: 

  • Social Criteria Share (SCS): The percentage of loans in a given pool that meet at least one of the “social” criteria. The criteria include low-income, minority, and first-time homebuyers; homes in low-income areas, minority tracts, or high-needs rural areas; homes in designated disaster areas; and manufactured housing. As of December 2022, 42.12 percent of loans in the average pool satisfy at least one of these criteria. 
  • Social Density Score (SDS): A measure of how many criteria the average loan in a given pool satisfies. For simplicity, the index consolidates the criteria into three categories – those pertaining to income, those pertaining to the borrower, and those pertaining to the property. Each loan’s score can be 0, 1, 2, or 3 depending on the number of categories within which it satisfies at least one criterion, and the pool’s SDS is the average of these loan-level scores. The average SDS as of December 2022 is 0.62 (out of 3). A sketch of how both measures roll up to the pool level appears below. 
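Here is a minimal sketch of how those two measures roll up from loan-level flags to the pool level; the flag names and sample values are hypothetical rather than the agencies’ actual disclosure layout.

```python
# Pool-level Social Criteria Share (SCS) and Social Density Score (SDS)
# from hypothetical loan-level category flags.
import pandas as pd

# Each row is a loan; each boolean column flags whether the loan meets at
# least one criterion in that category (income-, borrower-, property-related).
loans = pd.DataFrame({
    "income_flag":   [True, False, True,  False],
    "borrower_flag": [True, False, False, True],
    "property_flag": [False, False, True, False],
})

category_count = loans[["income_flag", "borrower_flag", "property_flag"]].sum(axis=1)

# SCS: share of loans meeting at least one criterion
scs = (category_count > 0).mean()

# SDS: average number of categories satisfied per loan (0 to 3)
sds = category_count.mean()

print(f"SCS = {scs:.2%}, SDS = {sds:.2f}")
```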

Do social index scores impact prepay speeds? 

While it remains too early to answer this question with a great deal of certainty, historical performance data appears to show that pools with below-average social index scores prepay faster than more “social” bonds. 

We first looked at a high-level, simplistic relationship between prepayments and Social Density Score. In Figure 1, below, pools with below-average Social Density Scores (blue line) prepay faster than both pools with above-average SDS (black line) and pools with the very highest SDS (green line) when they are incentivized by interest rates to do so. (Note that very little difference exists among the curves when borrowers are out of the money to refi.)  


Fig. 1: Speeds by Prepay Incentive and Social Density Score 

See how easy RiskSpan’s Edge Platform makes it for you to do these analyses yourself.


We note a similar trend when it comes to Social Criteria Share (see Fig. 2, below).  


Fig. 2: Speeds by Prepay Incentive and Social Criteria Share 

Social Pool Performance Relative to Spec Pools 

Investors pay up for mortgage pools with specified characteristics. We thought it worthwhile to compare how certain types of spec pools perform relative to socially rich pools with no other specified characteristics. 

Figure 3, below, compares the performance of non-spec pools with above-average Social Criteria Share (orange line) vs. spec pools for low-FICO (blue line), high-LTV (black line) and max $250k (green line) loans. 

Note that, notwithstanding a lack of any other specific characteristics that investors pay up for, the high-SCS pools exhibit a somewhat better convexity profile than the max-700 FICO and min-95 LTV pools and slightly worse convexity (in most refi incentive buckets) than max-250k pools. 


Fig. 3: Speeds by Prepay Incentive and Social Criteria Share: Socially Rich (Non-Spec) Pools vs. Selected Spec Pools

We observe a similar effect when we compare non-spec pools with an above-average Social Density Score to the same spec pools (Fig. 4, below).   


Fig. 4: Speeds by Prepay Incentive and Social Density Score: Socially Rich (Non-Spec) Pools vs. Selected Spec Pools 

See how social index scores affect speeds relative to other spec pools.



5 foundational steps for investors to move towards loan-level analyses

Are you curious about how your organization can improve the accuracy of its MSR cost forecasting? The answer lies in leveraging the full spectrum of your data and running analyses at the loan level rather than by cohort. But what does it take to make the switch to loan-level analytics? Our team has put together a short set of recommendations and considerations for tackling an otherwise daunting project…

It begins with having the data. Most investors have access to loan-level data, but it’s not always clean. This is especially true of origination data. If you’re acquiring a pool – be it a seasoned pool or a pool right after origination – you don’t have the best origination data to drive your model. You also need a data store, like Snowflake, that can generate loan-level output to drive your analytics and models.  

The second factor is having models that work at the loan level – models that have been calibrated using loan-level performance and that are capable of generating loan-level output. One of the constraints of several existing modeling frameworks developed by vendors is that they were created to run at a rep-line level and don’t necessarily work very well for loan-level projections.

The third requirement is a compute farm. It is virtually impossible to run loan-level analytics if you’re not on the cloud because you need to distribute the computational load. And your computational distribution requirements will change from portfolio to portfolio based on the type of analytics you are running, the scenarios you are running, and the models you are using. The cloud is needed not just for CPU power but also for storage. This is because once you go to the loan level, every loan’s data must be made available to every processor that’s performing the calculation. This is where having the kind of shared databases that are native to a cloud infrastructure becomes vital. You simply can’t replicate it using an on-premises setup of computers in your office or in your own data center. Adding to this, it’s imperative for mortgage investors to remember the significance of integration and fluidity. When dealing with loan-level analytics, your systems—the data, the models, the compute power—should be interlinked to ensure seamless data flow. This will minimize errors, improve efficiency, and enable faster decision-making.

Fourth—and an often-underestimated component—is having intuitive user interfaces and visualization tools. Analyzing loan-level data is complex, and being able to visualize this data in a comprehensible manner can make all the difference. Dashboards that present loan performance, risk metrics, and other key indicators in an easily digestible format are invaluable. These tools help in quickly identifying patterns, making predictions, and determining the next strategic steps.

Fifth and finally, constant monitoring and optimization are crucial. The mortgage market, like any other financial market, evolves continually. Borrower behaviors change, regulatory environments shift, and economic factors fluctuate. It’s essential to keep your models and analytics tools updated and in sync with these changes. Regular back-testing of your models using historical data will ensure that they remain accurate and predictive. Only by staying ahead of these variables can you ensure that your loan-level analysis remains robust and actionable in the ever-changing landscape of mortgage investment.


Temporary Buydowns are Back. What Does This Mean for Speeds?

Mortgage buydowns are having a déjà vu moment. Some folks may recall mortgages with teaser rates in the pre-crisis period. Temporary buydowns are similar in concept. Recent declines notwithstanding, mortgage rates are still higher than they have been in years. Housing remains pricey. Would-be home buyers are looking for any help they can get. Meanwhile, with an almost non-existent refi market, mortgage originators are trying to find innovative ways to keep the production machine going. Conditions are ripe for lender and/or builder concessions that will help close the deal.

Enter the humble “temporary” mortgage interest rate buydown. A HousingWire article last month addressed the growing trend. It’s hard to turn on the TV without being bombarded with ads for Rocket Mortgage’s “Inflation Buster” program. Rocket Mortgage doesn’t use the term temporary buydown in its TV spots, but that is what it is.

Buydowns, in general, refer to when a borrower pays “points” upfront to reduce the mortgage rate to a level where they can afford the monthly payment. The mortgage rate has been “bought down” from its original rate for the entire life of the mortgage by paying a lump sum upfront. Temporary buydowns, on the other hand, come in various shapes and sizes, but the most common ones are a “2 – 1” (a 2-percent interest rate reduction in the first year and a 1-percent reduction in year two) and a “1 – 0” (a 1-percent interest rate reduction in the first year only). In these situations, the seller, the builder, the lender, or a combination thereof put up money to cover the difference in interest payments between the original mortgage rate and the reduced mortgage rate. In the 2-1 example above, the mortgage rate is reduced by 2% for the first year, steps up by 1% in the second year, and then steps up by another 1% in the third year to reach the actual mortgage rate at origination. So, the interest portion of the monthly mortgage payment is “subsidized” for the first two years and then reverts to the full monthly payment.

Given the inflated rental market, these programs can make purchasing more advantageous than renting (for home seekers trying to decide between the two options). They can also make purchasing a home more affordable (temporarily, at least) for would-be buyers who can’t afford the monthly payment at the prevailing mortgage rate. It essentially buys them time to refinance into a lower rate should interest rates fall over the subsidized time frame, or they may be expecting increased income (raises, business revenue) that will allow them to afford the unsubsidized monthly payment.
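As a rough illustration, the sketch below computes the subsidized payments and the monthly subsidy for a hypothetical 2-1 buydown; the loan terms are assumptions, and the calculation ignores the small decline in balance from one year to the next that an actual buydown schedule would reflect.

```python
# A rough sketch (hypothetical loan terms, not any specific program) of a
# "2-1" temporary buydown: the borrower pays as if the rate were 2% lower in
# year one and 1% lower in year two; the escrowed subsidy covers the gap
# versus the full payment at the note rate.

def level_payment(balance, annual_rate, months):
    """Standard level-pay payment for a fixed-rate mortgage."""
    r = annual_rate / 12.0
    return balance * r / (1.0 - (1.0 + r) ** -months)

balance, note_rate, term = 400_000, 0.07, 360   # hypothetical loan
full_payment = level_payment(balance, note_rate, term)

for year, reduction in [(1, 0.02), (2, 0.01), (3, 0.00)]:
    pmt = level_payment(balance, note_rate - reduction, term)
    print(f"year {year}: borrower pays {pmt:,.2f}, "
          f"monthly subsidy {full_payment - pmt:,.2f}")
```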

Temporary buydowns present an interesting situation for prepayment and default modelers. Most borrowers with good credit respond similarly to refinance incentives, barring loan-size and refi-cost issues. Permanent buydowns, by contrast, tend to exhibit slower speeds when they come into the money by a small amount, since the borrower needs to make a cost/benefit decision about recouping both the upfront money they put down and the refi costs associated with the new loan. Their breakeven point will be 25bp to 50bp below their existing mortgage rate, so their response to mortgage rates dropping will be slower than that of borrowers with similar mortgage rates who didn’t pay points upfront. Borrowers with temporary buydowns, however, will be very sensitive to any mortgage rate drops and will refinance at the first opportunity to lock in a lower rate before the “subsidy” expires. Hence, such mortgages are expected to prepay at higher speeds than other counterparts with similar rates. In essence, they behave like ARMs as they approach their reset dates.

When rates stay static or increase, temporary buydowns will behave like their counterparts except as they get close to their reset dates, when they will see faster speeds. Two factors contribute to this phenomenon. The most obvious is that temporary buydown borrowers will want to refinance into the lowest rate available at the time of reset (perhaps an ARM). The other possibility is that some of these borrowers may not be able to refi because of DTI issues and may default. Such borrowers may also be deemed “weaker credits” because of the subsidy they received. This increase in defaults would elevate their speeds (increased CBRs) relative to their counterparts.

So, for the reasons mentioned above, temporary buydown mortgages are expected to be the fastest within a given mortgage rate group. In the table below we separate borrowers with the same mortgage rate into three groups: 1) those who got a normal mortgage at the prevailing rate and paid no points, 2) those who paid points upfront to get a permanently lower rate, and 3) those who got temporarily lower rates subsidized by the seller/builder/lender. Obviously, the buydowns occurred in higher-rate environments, but we are considering three borrower groups with the same mortgage rate regardless of how they got that rate. We assume that all three groups of borrowers currently have a 6% mortgage and present the expected prepay behavior of each group in different mortgage rate environments:

Rate | Rate Shift | 6% (no points) | Buydown to 6% (borrower-paid) | Buydown to 6% (lender-paid)
7.00% | +100 | Turnover | Turnover | Turnover++*
6.00% | Flat | Turnover | Turnover | Faster (at reset)
5.75% | -25 | Refi | Turnover | Refi
5.00% | -100 | Refi (Faster) | Refi (Fast) | Refi (Fastest)

*Turnover++ means faster due to defaults or at reset

Overall, temporary buydowns are likely to exhibit the most rate sensitivity. As their mortgage rates reset higher, they will behave like ARMs and refi into any other lower rate option (5/1 ARM) or possibly default. In the money, they will be the quickest to refi.


Bumpy Road Ahead for GNMA MBS?

In a recent webinar, RiskSpan’s Fowad Sheikh engaged in a robust discussion with two of his fellow industry experts, Mahesh Swaminathan of Hilltop Securities and Mike Ortiz of DoubleLine Group, to address the likely road ahead for Ginnie Mae securities performance.


The panel sought to address the following questions:

  • How will the forthcoming, more stringent originator/servicer financial eligibility requirements affect origination volumes, buyouts, and performance?
  • Who will fill the vacuum left by Wells Fargo’s exiting the market?
  • What role will falling prices play in delinquency and buyout rates?
  • What will be the impact of potential Fed MBS sales?

This post summarizes some of the group’s key conclusions. A recording of the webinar in its entirety is available here.


Wells Fargo’s Departure

To understand the likely impact of Wells Fargo’s exit, it is first instructive to understand the declining market share of banks overall in the Ginnie Mae universe. As the following chart illustrates, banks as a whole account for just 11 percent of Ginnie Mae originations, down from 39 percent as recently as 2015.

Drilling down further, the chart below plots Wells Fargo’s Ginnie Mae share (the green line) relative to the rest of the market. As the chart shows, Wells Fargo accounts for just 3 percent of Ginnie Mae originations today, compared to 15 percent in 2015. This trend of Wells Fargo’s declining market share extends all the way back to 2010, when it accounted for some 30 percent of Ginnie originations.

As the second chart below indicates, Wells Fargo’s market share, even among banks, has also been on a steady decline.


Three percent of the overall market is meaningful but not likely to be a game changer either in terms of origination trends or impact on spreads. Wells Fargo, however, continues to have an outsize influence in the spec pool market. The panel hypothesized that Wells’s departure from this market could open the door to other entities claiming that market share. This could potentially affect prepayment speeds – especially if Wells is replaced by non-bank servicers, which the panel felt was likely given the current non-bank dominance of the top 20 (see below) – since Wells prepays have traditionally been slightly better than the broader market.

The panel raised the question of whether the continuing bank retreat from Ginnie Mae originations would adversely affect loan quality. As a basis for this concern, they cited the generally lower FICO scores and higher LTVs that characterize non-bank-originated Ginnie Mae mortgages (see below). 

These data notwithstanding, the panel asserted that any changes to credit quality would be restricted to the margins. Non-bank servicers originate a higher percentage of lower-credit-quality loans (relative to banks) not because non-banks are actively seeking those borrowers out and eschewing higher-credit-quality borrowers. Rather, banks tend to restrict themselves to borrowers with higher credit profiles. Non-banks will be more than happy to lend to these borrowers as banks continue to exit the market.

Effect of New Eligibility Requirements

The new capital requirements, which take effect a year from now, are likely to be less punitive than they appear at first glance. With the exception of certain monoline entities – say, those with almost all of their assets concentrated in MSRs – the overwhelming majority of Ginnie Mae issuers (banks and non-banks alike) will be able to meet them with little if any difficulty.

Ginnie Mae has stated that, even if the new requirements went into effect tomorrow, 95 percent of its non-bank issuers would qualify. Consequently, the one-year compliance period should open the door for a fairly smooth transition.

To the extent Ginnie Mae issuers are unable to meet the requirements, a consolidation of non-bank entities is likely in the offing. Given that these institutions will likely be significant MSR investors, the potential increase in MSR sales could impact MSR multiples and potentially disrupt the MSR market, at least marginally.

Potential Impacts of Negative HPA

Ginnie Mae borrowers tend to be more highly leveraged than conventional borrowers. FHA borrowers can start with LTVs as high as 97.5 percent. VA borrowers, once the VA guarantee fee is rolled in, often have LTVs in excess of 100 percent. Similar characteristics apply to USDA loans. Consequently, borrowers who originated in the past two years are more likely to default as they watch their properties go underwater. This is potentially good news for investors in discount coupons (i.e., investors who benefit from faster prepay speeds) because these delinquent loans will be bought out quite early in their expected lives.

More seasoned borrowers, in contrast, have experienced considerable positive HPA in recent years. The forecasted decline should not materially impact these borrowers’ performance. Similarly, if home price depreciation (HPD) in 2023 proves to be mild, then a sharp uptick in delinquencies is unlikely, regardless of loan vintage or LTV. Most homeowners make mortgage payments because they wish to continue living in their house and do not seriously consider strategic defaults. During the financial crisis, most borrowers continued making good on their mortgage obligations even as their LTVs went as high as the 150s.

Further, the HPD we are likely to encounter next year likely will not have the same devastating effect as the HPD wave that accompanied the financial crisis. Loans on the books today are markedly different from loans then. Ginnie Mae loans that went bad during the crisis disproportionately included seller-financed, down-payment-assistance loans and other programs lacking in robust checks and balances. Ginnie Mae has instituted more stringent guidelines in the years since to minimize the impact of bad actors in these sorts of programs.

This all assumes, however, that the job market remains robust. Should the looming recession lead to widespread unemployment, that would have a far more profound impact on delinquencies and buyouts than would HPD.

Fed Sales

The Fed’s holdings (as of 9/21, see chart below) are concentrated around 2 percent and 2.5 percent coupons. This raises the question of what the Fed’s strategy is likely to be for unwinding its Ginnie Mae position.

Word on the street is that Fed sales are highly unlikely to happen in 2022. Any sales in 2023, if they happen at all, are not likely before the second half of the year. The panel opined that the composition of these sales is likely to resemble the composition of the Fed’s existing book – i.e., mostly 2s, 2.5s, and some 3s. They have the capacity to take a more sophisticated approach than a simple pro-rata unwinding. Whether they choose to pursue that is an open question.

The Fed was a largely non-economic buyer of mortgage securities. There is every reason to believe that it will be a non-economic seller, as well, when the time comes. The Fed’s trading desk will likely reach out to the Street, ask for inquiry, and seek to pursue an approach that is least disruptive to the mortgage market.

Conclusion

On closer consideration, many of the macro conditions (Wells’s exit, HPD, enhanced eligibility requirements, and pending Fed sales) that would seem to portend an uncertain and bumpy road for Ginnie Mae investors may turn out to be more benign than feared.

Conditions remain unsettled, however, and these and other factors certainly bear watching as Ginnie Mae market participants seek to plot a prudent course forward.


Optimizing Analytics Computational Processing 

We met with RiskSpan’s Head of Engineering and Development, Praveen Vairavan, to understand how his team set about optimizing analytics computational processing for a portfolio of 4 million mortgage loans using a cloud-based compute farm.

This interview dives deeper into a case study we discussed in a recent interview with RiskSpan’s co-founder, Suhrud Dagli.

Here is what we learned from Praveen. 



Could you begin by summarizing for us the technical challenge this optimization was seeking to overcome? 

PV: The main challenge related to an investor’s MSR portfolio, specifically the volume of loans we were trying to run. The client has close to 4 million loans spread across nine different servicers. This presented two related but separate sets of challenges. 

The first set of challenges stemmed from needing to consume data from different servicers whose file formats not only differed from one another but also often lacked internal consistency. By that, I mean even the file formats from a single given servicer tended to change from time to time. This required us to continuously update our data mapping and (because the servicer reporting data is not always clean) modify our QC rules to keep up with evolving file formats.  

The second challenge relates to the sheer volume of compute power necessary to run stochastic paths of Monte Carlo rate simulations on 4 million individual loans and then discount the resulting cash flows based on option adjusted yield across multiple scenarios. 

And so you have 4 million loans times multiple paths times one basic cash flow, one basic option-adjusted case, one up case, and one down case, and you can see how quickly the workload adds up. And all this needed to happen on a daily basis. 

To help minimize the computing workload, our client had been running all these daily analytics at a rep-line level—stratifying and condensing everything down to between 70,000 and 75,000 rep lines. This alleviated the computing burden but at the cost of decreased accuracy because they couldn’t look at the loans individually. 

What technology enabled you to optimize the computational process of running 50 paths and 4 scenarios for 4 million individual loans?

PV: With the cloud, you have the advantage of spawning a bunch of servers on the fly (just long enough to run all the necessary analytics) and then shutting them down once the analytics are done. 

This sounds simple enough. But in order to use that level of compute servers, we needed to figure out how to distribute the 4 million loans across all these different servers so they could run in parallel (and then get the results back so we could aggregate them). We did this using what is known as a MapReduce approach. 

Say we want to run a particular cohort of this dataset with 50,000 loans in it. If we were using a single server, it would run them one after the other – generate all the cash flows for loan 1, then for loan 2, and so on. As you would expect, that is very time-consuming. So, we decided to break down the loans into smaller chunks. We experimented with various chunk sizes. We started with 1,000 – we ran 50 chunks of 1,000 loans each in parallel across the AWS cloud and then aggregated all those results.  

That was an improvement, but the 50 parallel jobs were still taking longer than we wanted. And so, we experimented further before ultimately determining that the “sweet spot” was something closer to 5,000 parallel jobs of 100 loans each. 

Only in the cloud is it practical to run 5,000 servers in parallel. But this of course raises the question: Why not just go all the way and run 50,000 parallel jobs of one loan each? Well, as it happens, running an excessively large number of jobs carries overhead burdens of its own. And we found that the extra time needed to manage that many jobs more than offset the compute time savings. And so, using a fair bit of trial and error, we determined that 100-loan jobs maximized the runtime savings without creating an overly burdensome number of jobs running in parallel.  
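The sketch below illustrates the chunk-and-aggregate pattern described above, scaled down to a single machine; the run_cashflows stub is a stand-in for the real pricing engine, and in production each chunk would be dispatched to its own cloud worker rather than a local process.

```python
# A local, scaled-down sketch (not the production AWS implementation) of the
# "MapReduce" pattern: split the loans into fixed-size chunks, price each
# chunk in parallel, then flatten the per-chunk results.
from concurrent.futures import ProcessPoolExecutor

CHUNK_SIZE = 100   # the "sweet spot" chunk size discussed above


def chunks(items, size):
    """Yield consecutive fixed-size slices of a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def run_cashflows(loan_chunk):
    # Placeholder "map" step: in production each chunk is priced by its own
    # worker across all rate paths and scenarios.
    return [{"loan_id": loan_id, "pv": 0.0} for loan_id in loan_chunk]


def price_portfolio(loan_ids):
    with ProcessPoolExecutor() as pool:
        per_chunk = list(pool.map(run_cashflows, chunks(loan_ids, CHUNK_SIZE)))
    # "Reduce" step: flatten per-chunk results into one portfolio-level list
    return [row for chunk_result in per_chunk for row in chunk_result]


if __name__ == "__main__":
    portfolio = [f"loan_{i}" for i in range(50_000)]
    print(len(price_portfolio(portfolio)), "loans priced")
```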


You mentioned the challenge of having to manage a large number of parallel processes. What tools do you employ to work around these and other bottlenecks? 

PV: The most significant bottleneck associated with this process is finding the “sweet spot” number of parallel processes I mentioned above. As I said, we could theoretically break it down into 4 million single-loan processes all running in parallel. But managing this amount of distributed computation, even in the cloud, invariably creates a degree of overhead which ultimately degrades performance. 

And so how do we find that sweet spot – how do we optimize the number of servers on the distributed computation engine? 

As I alluded to earlier, the process involved an element of trial and error. But we also developed some home-grown tools (and leveraged some tools available in AWS) to help us. These tools enable us to visualize computation server performance – how much of a load they can take, how much memory they use, etc. These helped eliminate some of the optimization guesswork.   

Is this optimization primarily hardware based?

PV: AWS provides essentially two “flavors” of machines. One “flavor” enables you to take in a lot of memory. This enables you to keep a whole lot of loans in memory so it will be faster to run. The other flavor of hardware is more processor based (compute intensive). These machines provide a lot of CPU power so that you can run a lot of processes in parallel on a single machine and still get the required performance. 

We have done a lot of R&D on this hardware. We experimented with many different instance types to determine which works best for us and optimizes our output: lots of memory but smaller CPUs vs. CPU-intensive machines with less (but still a reasonable amount of) memory. 

We ultimately landed on a machine with 96 cores and about 240 GB of memory. This was the balance that enabled us to run portfolios at speeds consistent with our SLAs. For us, this translated to a server farm of 50 machines running 70 processes each, which works out to 3,500 workers helping us to process the entire 4-million-loan portfolio (across 50 Monte Carlo simulation paths and 4 different scenarios) within the established SLA.  

What software-based optimization made this possible? 

PV: Even optimized in the cloud, hardware can get pricey – on the order of $4.50 per hour in this example. And so, we supplemented our hardware optimization with some software-based optimization as well. 

We were able to optimize our software to a point where we could use a machine with just 30 cores (rather than 96) and 64 GB of RAM (rather than 240). Using 80 of these machines running 40 processes each gives us 2,400 workers (rather than 3,500). Software optimization enabled us to run the same number of loans in roughly the same amount of time (slightly faster, actually) but using fewer hardware resources. And our cost to use these machines was just one-third what we were paying for the more resource-intensive hardware. 

All this, and our compute time actually declined by 10 percent.  
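The back-of-envelope arithmetic behind those two configurations looks like this; the per-machine rate for the smaller instances is inferred from the “one-third the cost” figure above and is illustrative only, not an actual cloud price.

```python
# Worker counts and fleet cost for the two configurations discussed above.
configs = {
    "hardware-optimized": {"machines": 50, "procs_per_machine": 70, "hourly_rate": 4.50},
    "software-optimized": {"machines": 80, "procs_per_machine": 40, "hourly_rate": 1.50},
}

for name, c in configs.items():
    workers = c["machines"] * c["procs_per_machine"]
    fleet_cost_per_hour = c["machines"] * c["hourly_rate"]
    print(f"{name}: {workers:,} workers, ~${fleet_cost_per_hour:,.2f}/hour for the fleet")

# -> 3,500 workers vs. 2,400 workers, with each of the smaller machines
#    costing roughly one-third as much per hour.
```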

The software optimization that made this possible has two parts: 

The first part (as we discussed earlier) is using the MapReduce methodology to break down jobs into optimally sized chunks. 

The second part involved optimizing how we read loan-level information into the analytical engine. Reading in loan-level data (especially for 4 million loans) is a huge bottleneck. We got around this by implementing a “pre-processing” procedure. For each individual servicer, we created a set of optimized loan files that can be read and rendered “analytics ready” very quickly. This enables the loan-level data to be quickly consumed and immediately used for analytics without having to read all the loan tapes and convert them into a format that the analytics engine can understand. Because we have “pre-processed” all this loan information, it is immediately available in a format that the engine can easily digest and run analytics on.  
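A minimal sketch of that pre-processing step, with hypothetical servicer layouts and field names, might look like this: normalize each tape to a common schema, apply simple QC, and write a columnar, analytics-ready file.

```python
# Normalize per-servicer loan tapes into analytics-ready columnar files.
# Column mappings, field names, and QC rules here are hypothetical.
import pandas as pd

# Per-servicer column mappings, maintained as servicer formats change
FIELD_MAPS = {
    "servicer_a": {"LOAN_NO": "loan_id", "CURR_UPB": "balance", "NOTE_RT": "note_rate"},
    "servicer_b": {"LoanIdentifier": "loan_id", "UnpaidBalance": "balance", "Rate": "note_rate"},
}

def preprocess_tape(path: str, servicer: str) -> pd.DataFrame:
    raw = pd.read_csv(path, dtype=str)
    df = raw.rename(columns=FIELD_MAPS[servicer])[["loan_id", "balance", "note_rate"]]
    df["balance"] = pd.to_numeric(df["balance"], errors="coerce")
    df["note_rate"] = pd.to_numeric(df["note_rate"], errors="coerce")
    # Simple QC rule: drop records the engine cannot price
    return df.dropna(subset=["balance", "note_rate"])

def write_analytics_ready(df: pd.DataFrame, out_path: str) -> None:
    # Columnar output (Parquet) reads back far faster than raw tapes
    df.to_parquet(out_path, index=False)

# Usage:
# write_analytics_ready(preprocess_tape("tape_a.csv", "servicer_a"), "servicer_a.parquet")
```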

This software-based optimization is what ultimately enabled us to optimize our hardware usage (and save time and cost in the process).  

Contact us to learn more about how we can help you optimize your mortgage analytics computational processing.

