Articles Tagged with: RS Edge

Incorporating Covid-Era Mortgage Data Without Skewing Your Models


The process of modeling mortgage defaults and prepayments typically begins with identifying long-term trends and reference values. These aid in creating the baseline forecasts that undergird the model in its most simplistic form. Modelers then begin looking for deviations from this baseline created by specific loan, borrower, and property characteristics, as well as by key macroeconomic variables.

Identifying these relationships enables modelers to begin quantifying the extent to which micro factors like income, credit score, and loan-to-value ratios interact with macro indicators like the unemployment rate to cause prepayments and defaults to depart from their baseline. Data observations aggregated over extended periods give the most comprehensive picture possible of these relationships.

In practice, the human behavior underlying these and virtually all economic models tends to change over time. Modelers account for this by making short-term corrections based on observations from the most recent time periods. This approach of tweaking long-term trends based on recent performance works reasonably well under most circumstances. One could reasonably argue, however, that tweaking existing models using performance data collected during the Covid-19 era presents a unique set of challenges.

What was observed during Covid represents a radical departure from what was observed pre-Covid. To what extent do these observations impact long-term trends and reference values? Should these data fundamentally impact the way in which we think about the effects that borrower, loan, and macroeconomic characteristics have on mortgage performance? Or do we need to simply account for them as a short-term blip?


How Covid-era mortgage data differs

When it comes to modeling mortgage performance, we generally think of three sets of factors: 1) macroeconomic conditions, 2) loan and borrower characteristics, and 3) property characteristics. In determining how to account for Covid-era data in our modeling, we first must attempt to evaluate its impact on these factors. Three macroeconomic factors have played an especially significant role recently. First, as reflected in the chart below, we experienced a significant home-price decline during the 2008 financial crisis but a steady increase since then.

[Chart: Home prices over time]

Second, mortgage rates continued to decline for the most part during the crisis and beyond. There were brief periods when they increased, but they remained low by and large.

[Chart: Mortgage rates over time]

The third piece is the unemployment rate. Unemployment spiked to around 10 percent during the financial crisis and then slowly declined.

[Chart: Unemployment rate over time]

When home prices declined in the past, we typically saw the government respond by reducing interest rates. This created something of a correlation between home prices and mortgage rates. From a purely statistical viewpoint, all the historical data shows is that falling home prices bring about a decline in mortgage rates. (And rising home prices bring about higher interest rates, though to a far lesser degree.) We see something similar with unemployment: falling unemployment is correlated with rising home prices.

But then Covid arrived, and with it some things we had not observed previously. All the “known” correlations among these macroeconomic variables broke down. The unemployment rate, for example, spiked to 15 percent within just a couple of months and yet had no negative impact at all on home prices. Home prices, in fact, continued to rise, supported by the very generous unemployment benefits provided during the pandemic.

This greatly complicates the modeling. Relationships among these variables had appeared steady for decades, all of our modeling relied on those correlations (knowingly or unknowingly), and suddenly they were breaking down.
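
To make this concrete, the breakdown described above is something a modeler can test for directly. The sketch below is a minimal illustration, assuming hypothetical monthly series for home-price appreciation, mortgage rate changes, and unemployment changes (the column names are placeholders, not any particular vendor’s schema); it tracks rolling correlations among the macro drivers so a regime break like 2020 shows up as an abrupt shift.

```python
# Minimal sketch (not RiskSpan's model) of how assumed macro correlations can be
# monitored. Column names and the data source are hypothetical placeholders; any
# monthly history of home-price appreciation, mortgage rates, and unemployment works.
import pandas as pd

def rolling_macro_correlations(macro: pd.DataFrame, window: int = 36) -> pd.DataFrame:
    """macro: monthly DataFrame with columns 'hpa' (year-over-year home-price
    appreciation), 'rate_chg' (monthly change in the 30-year mortgage rate),
    and 'unemp_chg' (monthly change in the unemployment rate)."""
    out = pd.DataFrame(index=macro.index)
    out["hpa_vs_rate"] = macro["hpa"].rolling(window).corr(macro["rate_chg"])
    out["hpa_vs_unemp"] = macro["hpa"].rolling(window).corr(macro["unemp_chg"])
    return out

# A sharp swing in these rolling correlations around 2020 flags that the
# pre-Covid relationships should not be hard-wired into the model.
```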

What does this mean for forecasting prepayments? The following chart shows prepayments over time by vintage. We see extremely high prepayment rates between early 2020 (the start of the pandemic) and early 2022 (when rates started rising). This makes sense.

[Chart: Prepayment rates over time by vintage]

Look at what happens to our forecasts, however, when rates begin to increase. The following chart shows the models predicting a much steeper drop-off in prepayments than what was actually observed for a Fannie Mae major with a 2.0 coupon issued in July 2021. These mortgage loans, despite having no refinance incentive, are prepaying faster than the historical data would lead us to expect.

[Chart: Actual vs. forecast prepayments, Fannie Mae 2.0 coupon major issued July 2021]

What is causing this departure?

The most plausible explanation relates to an observed increase in cash-out refinances: the recent run-up in home prices has left many homeowners suddenly finding themselves with a lot of home equity to tap into. Pre-Covid, cash-outs accounted for between a quarter and a third of refinances. Now, with virtually no one in the money for a rate-and-term refinance, cash-outs account for over 80 percent of them.

We learn from this that we need to incorporate the amount of home equity gained by borrowers into our prepayment modeling.
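
One hedged illustration of what that might look like in practice follows. The field names and the home-price index source are hypothetical, and this is not RiskSpan’s production specification; the sketch simply marks each loan to an updated property value and derives a current LTV and dollar equity gain that a prepayment model could use alongside rate incentive.

```python
# Illustrative sketch only -- not RiskSpan's production model. Derive an updated
# LTV and a dollar equity-gain feature so cash-out-driven prepayments can be
# captured even when a borrower has no rate incentive. Field names are hypothetical.
import pandas as pd

def add_equity_features(loans: pd.DataFrame, hpi: pd.Series) -> pd.DataFrame:
    """loans: one row per loan with 'orig_upb', 'current_upb', 'orig_ltv' (percent),
    and 'orig_month'; hpi: cumulative home-price index, indexed by month."""
    loans = loans.copy()
    orig_value = loans["orig_upb"] / (loans["orig_ltv"] / 100.0)
    hpa_factor = hpi.iloc[-1] / hpi.loc[loans["orig_month"]].to_numpy()
    current_value = orig_value * hpa_factor
    loans["current_ltv"] = 100.0 * loans["current_upb"] / current_value
    loans["equity_gained"] = current_value - orig_value  # dollars of new equity to tap
    return loans
```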

 Modeling Credit Performance

Of course, Covid’s impacts were felt even more acutely in delinquency rates than in prepays. As the following chart shows, a borrower who was one month delinquent during Covid had a 75 percent probability of becoming two months delinquent the following month.

[Chart: Delinquency transition rates during Covid]

This is clearly way outside the norm of what was observed historically and compels us to ask some hard questions when attempting to fit a model to this data.

The long-term average of “two to worse” transitions (the percentage of 60-day delinquencies that become 90 days or more delinquent the following month) is around 40 percent. But we are now observing something closer to 50 percent. Do we expect this to continue in the future, or do we expect it to revert to the longer-term average? We observe a similar issue in other transitions, as illustrated below. The rates appear to be stabilizing at higher levels now relative to where they were pre-Covid. This is especially true of more serious delinquencies.

[Chart: Delinquency transition rates, pre-Covid vs. post-Covid]
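
For readers who want to reproduce this kind of view, transition rates like these are straightforward to compute from a loan-level performance panel. The sketch below is illustrative only; the status codes and column names are hypothetical.

```python
# Illustrative sketch: estimate monthly delinquency transition ("roll") rates,
# such as the "two to worse" share discussed above, from a loan-level panel.
# Status codes and column names are hypothetical.
import pandas as pd

def monthly_roll_rates(perf: pd.DataFrame) -> pd.DataFrame:
    """perf: columns 'loan_id', 'month', 'dq_status'
    (0 = current, 1 = 30 days, 2 = 60 days, 3 = 90+ days or worse)."""
    perf = perf.sort_values(["loan_id", "month"])
    perf = perf.assign(next_status=perf.groupby("loan_id")["dq_status"].shift(-1))
    obs = perf.dropna(subset=["next_status"])
    # Row-normalized: share of loans in each status that roll to each status next month
    return pd.crosstab(obs["dq_status"], obs["next_status"], normalize="index")

# rolls = monthly_roll_rates(perf)
# rolls.loc[2, 3] approximates the "two to worse" rate (about 40% pre-Covid,
# closer to 50% in recent data, per the discussion above).
```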

How do we respond to this? What is the best way to go about combining this pre-Covid and post-Covid data?

Principles for handling Covid-era mortgage data

One approach would be to think about Covid data as outliers that should be ignored. At the other extreme, we could simply accept the observed data and incorporate it without any special considerations. A split-the-difference third approach would have us incorporate the new data with some sort of weighting factor for use in future stress scenarios without completely casting aside the long-term reference values that had stood the test of time prior to the pandemic.

This third approach requires us to apply the following guiding principles:

  1. Assess assumed correlations between driving macro variables: For example, don’t allow the model to assume that increasing unemployment will lead to higher home prices just because it happened once during a pandemic.
  2. Choose short-term calibrations carefully. Do not allow models to be unduly influenced by blindly giving too much weight to what has happened in the past two years (one way of doing this is sketched after this list).
  3. Determine whether the new data in fact reflects a regime shift. How long will the new regime last?
  4. Avoid creating a model that will break down during future unusual periods.
  5. Prepare for other extremes. Incorporate what was learned into future stress testing.
  6. Build models that allow sensitivity analyses and are easy to change/tune. Models need to be sufficiently flexible that they can be tuned in response to macroeconomic events in a matter of weeks, rather than taking months or years to design and build an entirely new model.
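
As a concrete illustration of principle 2, the sketch below keeps Covid-era observations in the estimation sample but assigns them a reduced weight. The 0.25 weight, the date window, and the use of a simple logistic regression are arbitrary choices for illustration only, not a recommended calibration.

```python
# Illustrative sketch of principle 2: retain Covid-era observations but
# down-weight them so they inform the model without dominating the long-term
# baseline. The 0.25 weight, date window, and model choice are arbitrary.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def fit_with_covid_weighting(X: pd.DataFrame, y: pd.Series, obs_month: pd.Series):
    """obs_month: 'YYYY-MM' strings aligned row-for-row with X."""
    covid = (obs_month >= "2020-03") & (obs_month <= "2021-12")
    weights = pd.Series(1.0, index=X.index).mask(covid.to_numpy(), 0.25)
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y, sample_weight=weights)
    return model
```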

Covid-era mortgage data presents modelers with a unique challenge: how to appropriately consider it without overweighting it. These general guidelines are a good place to start. For ideas specific to your portfolio, contact a RiskSpan representative.


RiskSpan Unveils New “Reverse ETL” Mortgage Data Mapping and Extract Functionality

ARLINGTON, Va., October 19, 2022 – Subscribers to RiskSpan’s Mortgage Data Management product can now not only leverage machine learning to streamline the intake of loan data from any format, but also define any target format for data extraction and sharing.

A recent enhancement to RiskSpan’s award-winning Edge Platform enables users to take in unformatted datasets from mortgage servicers, sellers and other counterparties and convert them into their preferred data format on the fly for sharing with accounting, client, and other downstream systems.

Analysts, traders, and portfolio managers have long used Edge to take in and store datasets, enabling them to analyze the historical performance of custom cohorts using limitless combinations of mortgage loan characteristics and run predictive analytics on segments defined on the fly. With Edge’s novel “Reverse ETL” data extract functionality, these Platform users can now also easily design an export format for their data, creating the functional equivalent of a full integration node for sharing data with virtually any system on or off the Edge Platform.
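
For readers unfamiliar with the “Reverse ETL” idea, the sketch below is a purely conceptual illustration and is not the Edge Platform’s API: the export boils down to a declarative mapping from an internal, canonical loan schema to whatever layout a downstream accounting or client system expects, applied at extract time. All field names are hypothetical.

```python
# Conceptual illustration only -- not the Edge Platform API. A "reverse ETL"
# export reduces to a declarative mapping from an internal, canonical loan
# schema to the layout a downstream system expects. Field names are hypothetical.
import pandas as pd

# Hypothetical target layout for an accounting system: output column -> internal column
ACCOUNTING_LAYOUT = {
    "LoanNumber": "loan_id",
    "CurrentBalance": "current_upb",
    "NoteRate": "gross_rate",
    "NextDueDate": "next_payment_date",
}

def export_to_target(loans: pd.DataFrame, layout: dict, path: str) -> None:
    out = loans[list(layout.values())].rename(columns={v: k for k, v in layout.items()})
    out.to_csv(path, index=False)

# export_to_target(portfolio_df, ACCOUNTING_LAYOUT, "accounting_extract.csv")
```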

Market participants tout the revolutionary technology as the end of having to share cumbersome and unformatted CSV files with counterparties. Now, the same smart mapping technology that for years has facilitated the ingestion of mortgage data onto the Edge Platform makes extracting and sharing mortgage data with downstream users just as easy.   

Comprehensive details of this and other new capabilities using RiskSpan’s Edge Platform are available by requesting a no-obligation live demo at riskspan.com.


This new functionality is the latest in a series of enhancements that is making the Edge Platform’s Data as a Service increasingly indispensable for mortgage loan and MSR traders and investors.

### 

About RiskSpan, Inc. 

RiskSpan is a leading technology company and the most comprehensive source for data management and analytics for residential mortgage and structured products. The company offers cloud-native SaaS analytics for on-demand market risk, credit risk, pricing and trading. With our data science experts and technologists, we are the leader in data as a service and end-to-end solutions for loan-level data management and analytics.

Our mission is to be the most trusted and comprehensive source of data and analytics for loans and structured finance investments.

Rethink loan and structured finance data. Rethink your analytics. Learn more at www.riskspan.com.

Media contact: Timothy Willis


New Refinance Lag Functionality Affords RiskSpan Users Flexibility in Higher Rate Environments 

ARLINGTON, Va., September 29, 2022 — RiskSpan, a leading technology company and the most comprehensive source for data management and analytics for residential mortgage and structured products, has announced that users of its award-winning Edge Platform can now fine-tune the assumed time lag between a rate-incentivized borrower’s decision to refinance and ultimate payoff. Getting this time lag right yields a more accurate understanding of the rate incentive borrowers actually responded to and thus better predictions of coming prepayments.

The recent run-up in interest rates has caused the number of rate-incentivized mortgage refinancings to fall precipitously. Newfound operational capacity at many lenders, created by this drop in volume, means that new mortgages can now be closed in fewer days than were necessary at the height of the refi boom. This “lag time” between when a mortgage borrower becomes in-the-money to refinance and when the loan actually closes is an important consideration for MBS traders and analysts seeking to model and predict prepayment performance. 

Rather than confining MBS traders to a single, pre-set lag time assumption of 42 days, users of the Edge Platform’s Historical Performance module can now adjust the lag assumption when building their S-curves to better reflect their view of current market conditions. Using the module’s new Input section for Agency datasets, traders and analysts can further refine their approach to computing refi incentive by selecting the prevailing mortgage rate measure for any given sector (e.g., FH 30Y PMMS, MBA FH 30Y, FH 15Y PMMS and FH 5/1 PMMS) and adjusting the lag time to anywhere from zero to 99 days.   
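
As a hedged illustration of the underlying concept (this is not the Edge module itself, and the column names and rate series are placeholders), the sketch below computes each paid-off loan’s refinance incentive using a configurable lag between the rate the borrower responded to and the eventual payoff.

```python
# Conceptual sketch (not the Edge Platform itself): compute refinance incentive
# using a configurable lag between the rate a borrower responded to and the
# loan's ultimate payoff. Column names and the rate series are hypothetical.
import pandas as pd

def refi_incentive(loans: pd.DataFrame, prevailing: pd.Series, lag_days: int = 42) -> pd.Series:
    """loans: 'note_rate' and 'payoff_date' per loan; prevailing: daily prevailing
    mortgage rate series (e.g., FH 30Y PMMS), indexed by date in ascending order."""
    decision_date = loans["payoff_date"] - pd.Timedelta(days=lag_days)
    rate_then = prevailing.asof(decision_date).to_numpy()
    return loans["note_rate"] - rate_then  # positive = borrower was in the money
```

Rebuilding an S-curve with the lag set to, say, 30 versus 60 days shows how sensitive the empirical prepayment response is to this assumption.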

Comprehensive details of this and other new capabilities are available by requesting a no-obligation live demo at riskspan.com.


This new functionality is the latest in a series of enhancements that is making the Edge Platform increasingly indispensable for Agency MBS traders and investors.  

###

About RiskSpan, Inc. 

RiskSpan offers cloud-native SaaS analytics for on-demand market risk, credit risk, pricing and trading. With our data science experts and technologists, we are the leader in data as a service and end-to-end solutions for loan-level data management and analytics. 

Our mission is to be the most trusted and comprehensive source of data and analytics for loans and structured finance investments. 

Rethink loan and structured finance data. Rethink your analytics. Learn more at www.riskspan.com. 

Media contact: Timothy Willis


How Do You Rate on Fannie Mae’s New Social Index?

Quick take-aways

  • HMDA data contains nearly every factor needed to replicate Fannie Mae’s Single Family Social Index. We use this data to explore how the methodology would look if the Fannie Mae Social Index were applied to other market participants.
  • The Agencies and Ginnie Mae are not the only game in town when it comes to socially responsible lending. Non-agency loans would also perform reasonably well under Fannie Mae’s proposed Social Index.
  • Not surprisingly, Ginnie Mae outperforms all other “purchaser types” under the framework, buoyed by its focus on low-income borrowers and underserved communities. The gap between Ginnie and the rest of the market can be expected to expand in low-refi environments.
  • With a few refinements to account for socially responsible lending beyond low-income borrowers, Fannie Mae’s framework can work as a universally applicable social measure across the industry.

Fannie Mae’s new “Single Family Social Index”

Last week, Fannie Mae released a proposed methodology for its Single Family Social Index. The index is designed to provide “socially conscious investors” a means of “allocat[ing] capital in support of affordable housing and to provide access to credit for underserved individuals.”

The underlying methodology is simple enough. Each pool of mortgages receives a score based on how many of its loans meet one or more specified “social criteria” across three dimensions: borrower income, borrower characteristics and property location/type. Fannie Mae succinctly illustrates the defined criteria and framework in the following overview deck slide.


Figure 1: Source: Designing for Impact — A Proposed Methodology for Single-Family Social Disclosure


Each of the criteria is binary (yes/no), which facilitates the scoring. Individual loans are simply rated based on the number of boxes they check. Pools are measured in two ways: 1) a “Social Criteria Share,” which identifies the percentage of loans that meet any of the criteria, and 2) a “Social Density Score,” which assigns a “Social Score” of 0 through 3 to each individual loan based on how many of the three dimensions (borrower income, borrower characteristics, and property characteristics) it covers and then averages that score across all the loans in the pool.
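
Because the scoring is simply a matter of counting, both pool-level measures translate into a few lines of code. The sketch below assumes each loan record carries boolean flags for the criteria, grouped into the three dimensions; the flag names are illustrative rather than Fannie Mae’s official disclosure fields.

```python
# Minimal sketch of the two pool-level measures described above, assuming each
# loan carries boolean flags for the criteria. Flag names are illustrative, not
# Fannie Mae's official disclosure fields.
import pandas as pd

DIMENSIONS = {
    "income":   ["low_income"],
    "borrower": ["minority_borrower", "first_time_homebuyer"],
    "property": ["low_income_area", "minority_tract", "high_needs_rural",
                 "designated_disaster_area", "manufactured_housing"],
}

def social_scores(pool: pd.DataFrame) -> tuple[float, float]:
    # One point per dimension in which the loan meets at least one criterion (0-3)
    dim_hits = pd.concat(
        {dim: pool[cols].any(axis=1) for dim, cols in DIMENSIONS.items()}, axis=1
    )
    loan_score = dim_hits.sum(axis=1)
    social_criteria_share = float((loan_score > 0).mean())  # share meeting any criterion
    social_density_score = float(loan_score.mean())         # average 0-3 score
    return social_criteria_share, social_density_score
```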

If other issuers adopt this methodology, what would it look like?

The figure below is one of many charts and tables provided by Fannie Mae that illustrate how the Index works. This figure shows the share of acquisitions meeting one or more of the Social Index criteria (i.e., the overall “Social Criteria Share”). We have drawn a box approximately around the 2020 vintage,[1] which appears to have a Social Criteria Share of about 52% by loan count. We will refer back to this value later as we seek to triangulate in on a Social Criteria Share for other market participants.


Figure 2: Source: Designing for Impact — A Proposed Methodology for Single-Family Social Disclosure


We can get a sense of other issuers’ Social Criteria Share by looking at HMDA data. This dataset provides everything we need to re-create the Index at a high level, with the exception of a flag for first-time homebuyers. The process involves some data manipulation, as several Index criteria require us to connect to two census-tract-level data sources published by FHFA.

HMDA allows us to break down the loan population by purchaser type, which gives us an idea of each loan’s ultimate destination (Fannie, Freddie, Ginnie, etc.). The purchaser type does not capture this for every loan, however, because originators are only obligated to report loans that are closed and sold during the same calendar year.

The two tables below reflect two different approaches to approximating the population of Fannie, Freddie, and Ginnie loans. The left-hand table compares the 2020 origination loan count based on HMDA’s Purchaser Type field with loan counts based on MBS disclosure data pulled from RiskSpan’s Edge Platform.

The right-hand table enhances this definition by first re-categorizing as Ginnie Mae all FHA/VA/USDA loans with non-agency purchaser types. It also looks at the Automated Underwriting System field and re-maps all owner-occupied loans previously classified as “Other or NA” to Fannie (DU AUS) or Freddie (LP/LPA AUS).


[Tables: 2020 loan counts by purchaser type, HMDA vs. MBS disclosure data]



The adjusted purchaser type approach used in the right-hand table reallocates a considerable number of “Other or NA” loans from the left-hand table. The approach clearly overshoots the Fannie Mae population, as some loans underwritten using Fannie’s automated underwriting system likely wind up at Freddie and other segments of the market. This limitation notwithstanding, we believe this approximation provides a more accurate view of the market landscape than does the unadjusted purchaser type approach. We consequently rely primarily on the adjusted approach in this analysis.
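
For transparency, the re-categorization behind the adjusted purchaser type reduces to a handful of rules. The sketch below applies them to simplified HMDA-style records; the column names loosely follow public HMDA fields but are illustrative, and the rules reflect our approximation rather than any official mapping.

```python
# Illustrative sketch of the "adjusted purchaser type" logic described above.
# Column names loosely follow public HMDA fields but are simplified; the rules
# are an approximation, not an official mapping.
import pandas as pd

AGENCIES = {"Fannie Mae", "Freddie Mac", "Ginnie Mae"}

def adjusted_purchaser(row: pd.Series) -> str:
    # Rule 1: government-insured loans sold outside the agencies end up in Ginnie pools
    if row["purchaser"] not in AGENCIES and row["loan_type"] in {"FHA", "VA", "USDA"}:
        return "Ginnie Mae"
    # Rule 2: re-map owner-occupied "Other or NA" loans by automated underwriting system
    if row["purchaser"] == "Other or NA" and row["occupancy"] == "Owner-occupied":
        if row["aus"] == "DU":
            return "Fannie Mae"
        if row["aus"] in {"LP", "LPA"}:
            return "Freddie Mac"
    return row["purchaser"]

# hmda["adj_purchaser"] = hmda.apply(adjusted_purchaser, axis=1)
```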

Given the shortcomings in aligning the exact population, the idea here is not to get an exact calculation of the Social Index metrics via HMDA, but to use HMDA to give us a rough indication of how the landscape would look if other issuers adopted Fannie’s methodology. We expect this to provide a rough rank-order understanding of where the richest pools of ‘Social’ loans (according to Fannie’s methodology) ultimately wind up. Because the ultimate success of a social scoring methodology can truly be measured only to the extent it is adopted by other issuers, having a universally useful framework is crucial.

The table below estimates the Social Criteria Share by adjusted purchaser using seven of Fannie Mae’s eight social index criteria.[2] Not surprisingly, Ginnie, Fannie, and Freddie boast the highest overall shares. It is encouraging to note, however, that other purchaser types also originate significant percentages of socially responsible loans. This suggests that Fannie’s methodology could indeed be applied more universally. The table looks at each factor separately, and dissecting it could warrant an entire blog post of its own, so take a closer look at the dynamics.[3]


[Table: Estimated Social Criteria Share by adjusted purchaser type and criterion]


Ginnie Mae’s strong performance on the Index comes as no surprise. Ginnie pools, after all, consist primarily of FHA loans, which skew toward the lower end of the income spectrum, first-time borrowers, and traditionally underserved communities. Indeed, more than 56 percent of Ginnie Mae loans tick at least one box on the Index. And this does not include first-time homebuyers, which would likely push that percentage even higher.

Income’s Outsized Impact

Household income contributes directly or indirectly to most components of Fannie’s Index. Beyond the “Low-income” criterion (borrowers below 80 percent of adjusted median income), nearly every other factor favors income levels below 120 percent of AMI. Measuring income is tricky, especially outside of the Agency/Ginnie space. The non-Agency segment serves many self-employed borrowers, borrowers who qualify based on asset (rather than income) levels, and foreign national borrowers. Nailing down precise income has historically proven challenging with these groups.

Given these dynamics, one could reasonably posit that the 18 percent of PLS classified as “low-income” is actually inflated by self-employed or wealthier borrowers whose mortgage applications do not necessarily reflect all of their income. Further refinements may be needed to fairly apply the Index framework to this and other market segments that pursue social goals beyond expanding credit opportunities for low-income borrowers. These refinements could include further definitions of how to calculate income (or alternatives to the income metric when it is not available) and certain exclusions from the framework altogether (foreign national borrowers, for example, although these may be excluded already based on the screen for second homes).

Positive effects of a purchase market

The Social Criteria Share is positively correlated with purchase loans as a percentage of total origination volume (even before accounting for the FTHB factor). This relationship is apparent in Fannie Mae’s time series chart near the top of this post. Shares clearly drop during refi waves.

Our analysis focuses on 2020 only. We made this choice because of HMDA reporting lags and the relative simplicity of dealing with a single year of data. The table below breaks down the HMDA analysis (referenced earlier) by loan purpose to give us a sense of what our current low-refi environment could look like. (Rate/term refis are grouped together with cash-out refis.) As the table below indicates, Ginnie Mae’s SCS for refi loans is about the same as it is for GSE refi loans; it is really on purchase loans where Ginnie shines. This implies that Ginnie’s SCS will improve even further in a purchase market.


[Table: Social Criteria Share by adjusted purchaser type and loan purpose]


Accounting for First-time Homebuyers

As described above, our methodology for estimating the Social Criteria Share omits the first-time homebuyer criterion (because the HMDA data does not capture it). This likely accounts for the roughly 6 percentage point difference between our estimate of Fannie’s overall Social Criteria Share for 2020 (approximately 46 percent) and Fannie Mae’s own calculation (approximately 52 percent).

To back into the impact of the FTHB factor, we can pull in data about the share of FTHBs from RiskSpan’s Edge platform. The purchase vs. refi breakdown above tells us the SCS without the FTHB factor for purchase loans. Using MBS data sources, we can obtain the share of 2020 originations that were FTHBs. If we assume that FTHB loans look the same as purchase loans overall in terms of how many other Social Index boxes they check, then we can back into the overall SCS incorporating all factors in Fannie’s methodology.

Applying this approach to Ginnie Mae, we conclude that, because 29 percent of Ginnie’s purchase loans (one minus 71 percent) do not tick any of the Index’s boxes, 29 percent of FTHB loans (which account for 33 percent of Ginnie’s overall population) also do not tick any other Index boxes and are therefore newly captured by the FTHB criterion. Taking 29 percent of this 33 percent results in an additional 9.6 percent that should be tacked on to Ginnie Mae’s pre-FTHB share, bringing it up to 66 percent.
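
The arithmetic behind that adjustment fits in a few lines, using the rounded figures quoted above:

```python
# Back-of-envelope version of the FTHB adjustment described above, using the
# rounded Ginnie Mae figures quoted in the text.
purchase_scs_ex_fthb = 0.71  # purchase loans ticking at least one non-FTHB box
fthb_share = 0.33            # FTHB share of Ginnie's overall 2020 population
pre_fthb_scs = 0.56          # overall Social Criteria Share before the FTHB criterion

# FTHB loans are assumed to look like purchase loans overall, so the loans newly
# captured by the FTHB criterion are the ones that tick no other box.
newly_captured = (1 - purchase_scs_ex_fthb) * fthb_share  # ~0.096
overall_scs = pre_fthb_scs + newly_captured               # ~0.66
print(f"Ginnie Mae SCS with FTHB: {overall_scs:.0%}")
```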


[Table: Estimated Social Criteria Share including the first-time homebuyer criterion]


Validating this estimation approach is the fact that it increases Fannie Mae’s share from 46 percent (pre-FTHB) to 52 percent, which is consistent with the historical graph supplied by Fannie Mae (see Figure 2, above). Our FTHB approach implies that 92 percent of Ginnie Mae purchase loans meet one or more of the Index criteria. One could reasonably contend that Ginnie Mae FTHB loans might be more likely than Ginnie purchase loans overall to satisfy other social criteria (i.e., that 92 percent is a bit rich), in which case the 66 percent share for Ginnie Mae in 2020 might be overstated. Even if we mute this FTHB impact on Ginnie, however, layering FTHB loans on top of a rising purchase-loan environment would likely put today’s Ginnie Mae SCS in the low 80s.




[1] The chart is organized by acquisition month, while our analysis of HMDA looks at 2020 originations, so we have tried to push the box slightly to the right to reflect the 1–3-month lag between origination and acquisition. Additionally, we think the chart and numbers throughout Fannie’s document reflect only fixed-rate 30-year loans, whereas our analysis includes all loans. We did investigate what our numbers would look like if filtered to fixed-rate 30-year loans, and doing so would only increase the SCS slightly across the board.

[2] As noted above, we are unable to discern first-time homebuyer information from the HMDA data.

[3] We can compare the Fannie numbers for each factor to published rates in their documentation covering the period from 2017 forward. The only metric where we stand out as being meaningfully off is the percentage of loans in minority census tracts. We took this flag from FHFA’s Low-Income Area File for 2020, which defines a minority census tract as having a “…minority population of at least 30 percent and a median income of less than 100 percent of the AMI.” It is not 100% clear that this is what Fannie Mae is using in its definition.


Live Demo of RiskSpan’s Award-Winning Edge Platform

Wednesday, August 24th | 1:00 p.m. EDT

Register for the next live demo of RiskSpan’s award-winning Edge Platform. Learn more and ask questions at our bi-weekly, 45-minute demo.

Historical Performance Tool: Slice and dice historical loan performance in the Agency and PLRMBS universe to find outperforming cohorts.

Predictive Loan-Level Pricing and Risk Analytics: Produce loan-level pricing and risk on loans, MSRs, and structured products in minutes – with behavioral models applied at the loan-level, and other assumptions applied conveniently to inform bids and hedging.

Loan Data Management: Let RiskSpan’s data scientists consolidate and enhance your data across origination and servicing platforms, make it analytics-ready, and maintain it for ongoing trend analysis.


About RiskSpan:

RiskSpan offers cloud-native SaaS analytics for on-demand market risk, credit risk, pricing and trading. With our data science experts and technologists, we are the leader in data as a service and end-to-end solutions for loan-level data management and analytics.

Our mission is to be the most trusted and comprehensive source of data and analytics for loans and structured finance investments.

Rethink loan and structured finance data. Rethink your analytics. Learn more at www.riskspan.com.

Presenters

Joe Makepeace

Director, RiskSpan

Jordan Parker

Sales Executive, RiskSpan


RiskSpan Introduces Multi-Scenario Yield Table 

ARLINGTON, Va., August 4, 2022

RiskSpan, a leading provider of residential mortgage and structured product data and analytics, has announced a new Multi-Scenario Yield Table feature within its award-winning Edge Platform.  

REITs and other mortgage loan and MSR investors leverage the Multi-Scenario Yield Table to instantaneously run and compare multiple scenario analyses on any individual asset in their portfolio. 

An interactive, self-guided demo of this new functionality can be viewed here. 

Comprehensive details of this and other new capabilities are available by requesting a no-obligation live demo at riskspan.com. 


With a single click from the portfolio screen, Edge users can now simultaneously view the impact of as many as 20 different scenarios on outputs including price, yield, WAL, dv01, OAS, discount margin, modified duration, weighted average CRR and CDR, severity and projected losses. The ability to view these and other model outputs across multiple scenarios in a single table eliminates the tedious and time-consuming process of running scenarios individually and having to manually juxtapose the resulting analytics.  

Entering scenarios is easy. Users can make changes to scenarios right on the screen to facilitate quick, ad hoc analyses. Once these scenarios are loaded and assumptions are set, the impacts of each scenario on price and other risk metrics are lined up in a single, easily analyzed data table. 

Analysts who determine that one of the scenarios is producing more reasonable results than the defined base case can overwrite and replace the base case with the preferred scenario in just two clicks.   

The Multi-Scenario Yield Table is the latest in a series of enhancements that is making the Edge Platform increasingly indispensable for mortgage loan and MSR portfolio managers. 


 About RiskSpan, Inc.  

RiskSpan offers cloud-native SaaS analytics for on-demand market risk, credit risk, pricing and trading. With our data science experts and technologists, we are the leader in data as a service and end-to-end solutions for loan-level data management and analytics. 

Our mission is to be the most trusted and comprehensive source of data and analytics for loans and structured finance investments. 

Rethink loan and structured finance data. Rethink your analytics. Learn more at www.riskspan.com.

Media contact: Timothy Willis 


It’s time to move to DaaS — Why it matters for loan and MSR investors

Data as a service, or DaaS, for loans and MSR investors is fast becoming the difference between profitable trades and near misses.

Granularity of data is creating differentiation among investors. Winning at investing in loans and mortgage servicing rights requires effectively managing a veritable ocean of loan-level data. Buried within every detailed tape of borrower, property, loan, and performance characteristics lies the key to identifying hidden exposures and camouflaged investment opportunities. Understanding these exposures and opportunities is essential to proper bidding during the acquisition process and effective risk management once the portfolio is onboarded.

Investors know this. But knowing that loan data conceals important answers is not enough. Even knowing which specific fields and relationships are most important is not enough. Investors also must be able to get at that data. And because mortgage data is inherently messy, investors often run into trouble extracting the answers they need from it.

For investors, it boils down to two options. They can compel analysts to spend 75 percent of their time wrangling unwieldy data – plugging holes, fixing outliers, making sure everything is mapped right. Or they can just let somebody else worry about all that so they can focus on more analytical matters.

Don’t get left behind — DaaS for loan and MSR investors

It should go without saying that the “let somebody else worry about all that” approach only works if “somebody else” possesses the requisite expertise with mortgage data. Self-proclaimed data experts abound. But handing the process over to an outside data team lacking the right domain experience risks creating more problems than it solves.

Ideally, DaaS for loan and MSR investors consists of a data owner handing off these responsibilities to a third party that can deliver value in ways that go beyond simply maintaining, aggregating, storing and quality controlling loan data. All these functions are critically important. But a truly comprehensive DaaS provider is one whose data expertise is complemented by an ability to help loan and MSR investors understand whether portfolios are well conceived. A comprehensive DaaS provider helps investors ensure that they are not taking on hidden risks (for which they are not being adequately compensated in pricing or servicing fee structure).

True DaaS frees up loan and MSR investors to spend more time on higher-level tasks consistent with their expertise. The more “blocking and tackling” aspects of data management that every institution that owns these assets needs to deal with can be handled in a more scalable and organized way. Cloud-native DaaS platforms are what make this scalability possible.

Scalability — stop reinventing the wheel with each new servicer

One of the most challenging aspects of managing a portfolio of loans or MSRs is the need to manage different types of investor reporting data pipelines from different servicers. What if, instead of having to “reinvent the wheel” to figure out data intake every time a new servicer comes on board, “somebody else” could take care of that for you?

An effective DaaS provider is one that is not only well versed in building and maintaining loan data pipes from servicers to investors but also has already established a library of existing servicer linkages. An ideal provider is one already set up to onboard servicer data directly onto its own DaaS platform. Investors achieve enormous economies of scale by having to integrate with a single platform as opposed to a dozen or more individual servicer integrations. Ultimately, as more investors adopt DaaS, the number of centralized servicer integrations will increase, and greater economies will be realized across the industry.

Connectivity is only half the benefit. The DaaS provider not only intakes, translates, maps, and hosts the loan-level static and dynamic data coming over from servicers. The DaaS provider also takes care of QC, cleaning, and managing it. DaaS providers see more loan data than any one investor or servicer. Consequently, the AI tools an experienced DaaS provider uses to map and clean incoming loan data have had more opportunities to learn. Loan data that has been run through a DaaS provider’s algorithms will almost always be more analytically valuable than the same loan data processed by the investor alone.  

Investors seeking to increase their footprint in the loan and MSR space obviously do not wish to see their data management costs rise in proportion to the size of their portfolios. Outsourcing to a DaaS provider that specializes in mortgages, like RiskSpan, helps investors build their book while keeping data costs contained.

Save time and money – Make better bids

For all these reasons, DaaS is unquestionably the future (and, increasingly, the present) of loan and MSR data management. Investors are finding that a decision to delay DaaS migration comes with very real costs, particularly as data science labor becomes increasingly (and often prohibitively) expensive.

The sooner an investor opts to outsource these functions to a DaaS provider, the sooner that investor will begin to reap the benefits of an optimally cost-effective portfolio structure. One RiskSpan DaaS client reported a 50 percent reduction in data management costs alone.

Investors continuing to make do with in-house data management solutions will quickly find themselves at a distinct bidding disadvantage. DaaS-aided bidders have the advantage of being able to bid more competitively based on their more profitable cost structure. Not only that, but they are able to confidently hone and refine their bids based on having a better, cleaner view of the portfolio itself.

Rethink your mortgage data. Contact RiskSpan to talk about how DaaS can simultaneously boost your profitability and make your life easier.


Live Demo of RiskSpan’s Award-Winning Edge Platform

Wednesday, July 27th | 1:00 p.m. EDT

Register for the next Live Demo of RiskSpan’s award-winning Edge Platform. Learn more and ask questions at our bi-weekly, 45-minute demo.

Historical Performance Tool: Slice and dice historical loan performance in the Agency and PLRMBS universe to find outperforming cohorts.

Predictive Loan-Level Pricing and Risk Analytics: Produce loan-level pricing and risk on loans, MSRs, and structured products in minutes – with behavioral models applied at the loan-level, and other assumptions applied conveniently to inform bids and hedging.

Loan Data Management: Let RiskSpan’s data scientists consolidate and enhance your data across origination and servicing platforms, make it analytics-ready, and maintain it for ongoing trend analysis.


About RiskSpan:

RiskSpan offers cloud-native SaaS analytics for on-demand market risk, credit risk, pricing and trading. With our data science experts and technologists, we are the leader in data as a service and end-to-end solutions for loan-level data management and analytics.

Our mission is to be the most trusted and comprehensive source of data and analytics for loans and structured finance investments.

Rethink loan and structured finance data. Rethink your analytics. Learn more at www.riskspan.com.

Presenters

Joe Makepeace

Director, RiskSpan

Jordan Parker

Sales Executive, RiskSpan


RiskSpan Introduces Media Effect Measure for Prepayment Analysis, Predictive Analytics for Managed Data 

ARLINGTON, Va., July 14, 2022

RiskSpan, a leading provider of residential mortgage  and structured product data and analytics, has announced a series of new enhancements in the latest release of its award-winning Edge Platform.

Comprehensive details of these new capabilities are available by requesting a no-obligation demo at riskspan.com.


Media Effect – It has long been accepted that prepayment speeds see an extra boost as media coverage alerts borrowers to refinancing opportunities. Now, Edge lets traders and modelers measure the media effect present in any active pool of Agency loans (highlighting borrowers most prone to refinance in response to news coverage) and plot the empirical impact on any cohort of loans. Developed in collaboration with practitioners, it measures rate novelty by comparing the rate environment at a given time to rates over the trailing five years. Mortgage portfolio managers and traders who subscribe to Edge have always been able to easily stratify mortgage portfolios by refinance incentive. With the new Media Effect filter/bucket, market participants can fine-tune expectations by analyzing cohorts with like media effects.
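
RiskSpan has not published the precise formula, but one simple way to think about rate novelty is how far the current prevailing rate sits below anything observed over the trailing five years. The sketch below is an illustrative proxy only, using a hypothetical monthly rate series.

```python
# Illustrative only -- RiskSpan has not published the exact Media Effect formula.
# One simple proxy for "rate novelty": how far the current prevailing rate sits
# below the lowest rate of the trailing five years.
import pandas as pd

def rate_novelty(rates: pd.Series, window_months: int = 60) -> pd.Series:
    """rates: monthly prevailing mortgage rate series, in ascending date order."""
    prior_low = rates.shift(1).rolling(window_months).min()
    # Positive values mean rates have just set a multi-year low, the environment
    # most likely to attract media coverage and an extra refinancing response.
    return prior_low - rates
```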

Predictive Analytics for Managed Data – Edge subscribers who leverage RiskSpan’s Data Management service to aggregate and prep monthly loan and MSR data can now kick off predictive analytics for any filtered snapshot of that data. Leveraging RiskSpan’s universe of forward-looking analytics, subscribers can generate valuations, market risk metrics to inform hedging, credit loss accounting estimates and credit stress test outputs, and more. Sharing portfolio snapshots and analytics results across teams has never been easier.

These capabilities and other recently released Edge Platform functionality will be on display at next week’s SFVegas 2022 conference, where RiskSpan is a sponsor. RiskSpan will be featured at Booth 38 in the main exhibition hall. RiskSpan professionals will also be available to respond to questions on July 19th following their panels, “Market Beat: Mortgage Servicing Rights” and “Technology Trends in Securitization.”


About RiskSpan, Inc. 

RiskSpan offers cloud-native SaaS analytics for on-demand market risk, credit risk, pricing and trading. With our data science experts and technologists, we are the leader in data as a service and end-to-end solutions for loan-level data management and analytics.

Our mission is to be the most trusted and comprehensive source of data and analytics for loans and structured finance investments.

Rethink loan and structured finance data. Rethink your analytics. Learn more at www.riskspan.com.


Why Accurate Loan Pool and MSR Cost Forecasting Requires Loan-by-Loan Analytics

When it comes to forecasting loan pool and MSR cash flows, the practice of creating “rep lines,” or cohorts, of loans with similar characteristics for analytical purposes has its roots in the Agency MBS market. One of the most attractive and efficient features of Agencies is the TBA market. This market allows originators and issuers to sell large pools of mortgages that have not even been originated yet. This is possible because all parties understand what these future loans will look like: the loans will have enough in common to be effectively interchangeable with one another.

Institutions that perform the servicing on such loans may reasonably feel they can extend the TBA logic to their own analytics. Instead of analyzing a hundred similar loans individually, why not just lump them into one giant meta-loan? Sum the balances, weight-average the rates, terms, and other features, and you’re good to go. 
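
To make the contrast with loan-level analytics concrete, the sketch below shows what that “meta-loan” construction typically amounts to: summing balances and balance-weighting characteristics within each cohort. The cohort keys and column names are illustrative.

```python
# Illustrative sketch of rep-line construction: collapse each cohort of similar
# loans into a single "meta-loan" with a summed balance and balance-weighted
# characteristics. Cohort keys and column names are hypothetical.
import numpy as np
import pandas as pd

COHORT_KEYS = ["coupon_bucket", "vintage", "loan_purpose"]

def build_rep_lines(loans: pd.DataFrame) -> pd.DataFrame:
    def collapse(group: pd.DataFrame) -> pd.Series:
        w = group["current_upb"]
        return pd.Series({
            "current_upb": w.sum(),
            "note_rate": np.average(group["note_rate"], weights=w),
            "remaining_term": np.average(group["remaining_term"], weights=w),
            "fico": np.average(group["fico"], weights=w),
            "loan_count": len(group),
        })
    return loans.groupby(COHORT_KEYS).apply(collapse).reset_index()
```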

Why the industry still resorts to loan cohorting when forecasting loan pool and MSR cash flows

The simplest explanation for cohort-level analytics is its simplicity. Rep lines amount to giant simplifying assumptions. They generate fewer technological constraints than a loan-by-loan approach does. Condensing an entire loan portfolio down to a manageable number of rows requires less computational capacity. This takes on added importance when dealing with on-premise software and servers. It also facilitates the process of assigning performance and cost assumptions.

What is more, as OAS modeling has evolved to dominate the loans and MSR landscape, the stratification approach necessary to run Monte Carlo and other simulations lends itself to cohorting. Lumping loans into like groups also greatly simplifies the process of computing hedging requirements. 

Advantages of loan-level over cohorting when forecasting cash flows

Treating loan and MSR portfolios like TBA pools, however, has become increasingly problematic as these portfolios have grown more heterogeneous. Every individual loan has a story. Even loans that resemble each other in terms of rate, credit score, LTV, DTI, and documentation level have unique characteristics. Some of these characteristics – climate risk, for example – are not easy to bucket. Lumping similar loans into cohorts also runs the risk of underestimating tail risk. Extraordinarily high servicing/claims costs on just one or two outlier loans on a bid tape can be enough to adversely affect the yield of an entire deal. 

Conversely, looking at each loan individually facilitates the analysis of portfolios with expanded credit boxes. Non-banks, which do not usually have the benefit of “knowing” their servicing customers through depository or other transactional relationships, are particularly reliant on loan-level data to understand individual borrower risks, particularly credit risks. Knowing the rate, LTV, and credit score of a bundled group of loans may be sufficient for estimating prepayment risk. But only a more granular, loan-level analysis can produce the credit analytics necessary to forecast reliably and granularly what a servicing portfolio is really going to cost in terms of collections, loss mitigation, and claims expenses.  

Loan-level analysis also eliminates the limitations imposed by stratification. It facilitates portfolio composition analysis. Slicing and dicing techniques are much more simply applied to loans individually than to cohorts. Looking at individual loans also reduces the risk of overrides and lost visibility into convexity pockets.


Potential challenges and other considerations 

So why hasn’t everyone jumped onto the loan-level bandwagon when forecasting loan pool and MSR cash flows? In short, it’s harder. Resistance to any new process can be expected when existing aggregation regimes appear to be working fine. Loan-level data management requires more diligence in automated processes. It also requires the data related to each individual loan to be subjected to QC and monitoring. Daily hedging and scenario runs tend to focus more on speed than on accuracy at the macro level. Some may question whether the benefit of identifying the most significant loan-level pickups justifies the cost of such a granular, case-by-case analysis.

Rethink. Why now? 

Notwithstanding these challenges, there has never been a better time for loan and MSR investors to abandon cohorting and fully embrace loan-level analytics when forecasting cash flows. The emergence of cloud-native technology and enhanced database and warehouse infrastructure, along with the ability to outsource hosting and computational requirements to third parties, creates practically limitless scalability.

The barriers between loan and MSR experts and IT professionals have never been lower. This, combined with the emergence of a big data culture in an increasing number of organizations, has brought the granular daily analysis promised by loan-level analytics tantalizingly within reach.  

 

For a deeper dive into loan and MSR cost forecasting, view our webinar, “How Much Will That MSR Portfolio Really Cost You?”

 

