
Improving MSR Pricing Using Cloud-Based Loan-Level Analytics — Part II: Addressing Climate Risk

Modeling Climate Risk and Property Valuation Stability

Part I of this white paper series introduced the case for why loan-level (as opposed to rep-line level) analytics are increasingly indispensable when it comes to effectively pricing an MSR portfolio. Rep-lines are an effective means for classifying loans across many important categories. But certain loan, borrower, and property characteristics simply cannot be “rolled up” to the rep-line level as easily as UPB, loan age, interest rate, LTV, credit score, and other factors. This is especially true when it comes to modeling based on available information about a mortgage’s subject property.

Assume for the sake of simplicity that human and automated appraisers do a perfect job of assigning property values for the purpose of computing origination and updated LTVs (they do not, of course, but let’s assume they do). Prudent MSR investors should be less interested in a property’s current value than in what is likely to happen to that value over the expected life of their investment. In other words, how stable is the valuation? How likely are property values within a given zip code, or neighborhood, or street to hold?

The stability of any given property's value is tied to the macroeconomic prospects of its surrounding community. Historical and forecast trends of the local unemployment rate can be used as a rough proxy for this and are already built into existing credit and prepayment models. But increasingly, a second category of factors is emerging as an important predictor of home price stability: the property's exposure to climate risk and natural hazard events.

Climate exposure is becoming increasingly difficult to ignore when it comes to property valuation. And accounting for it is more complicated than simply applying a premium to coastal properties. Climate risk is not just about hurricanes and storm surges anymore. A growing number of inland properties are being identified as at risk not just to wind and water hazards, but to wildfire and other perils as well. The diversity of climate risks means that the problem of quantifying and understanding them will not be solved simply by fixing out-of-date flood plain maps.

MSR investors are exposed to climate risk in ways that whole loan or securities investors are not. When climate events force borrowers into forbearance or other repayment plans, MSR investors not only forgo the cash flows associated with interest payments that will never be made, but also incur the additional costs of administering loss mitigation programs and making the necessary P&I and escrow advances.

Overlaying climate scenario analysis on top of traditional credit modeling is unquestionably the future of quantifying mortgage asset exposure. And in many respects, the future is already here. Regulatory guidance is forthcoming requiring public companies to quantify their exposure to climate risk across three categories: acute physical risk, chronic physical risk, and economic transition risk.

Acute Risk

Acute climate risk describes a property's exposure to individual catastrophic events. As a result of climate change, these events are expected to increase in frequency and severity. The property insurance space already has analytical tools in place to quantify property damage from hazards such as:

  • Hurricane, including wind, storm surge, and precipitation-induced flooding
  • Flooding, both “fluvial” (river) and “pluvial” (rainfall-driven) – that is, on and off the designated flood plain
  • Wildfire
  • Severe thunderstorm, including exposure to tornadoes, hail, and straight-line wind, and
  • Earthquake – though not tied to climate change, earthquakes remain a massively underinsured risk that can impact MSR holders

Acute risks are of particular concern for MSR holders, as disaster events have proven to increase both mortgage delinquency and prepayment. The chart below illustrates these impacts after Hurricane Katrina.

Chronic Risk

Chronic risk characterizes a property’s exposure to adverse conditions brought on by longer-term concerns. These include frequent flooding, sea level rise, drought hazards, heat stress, and water shortages. These effects could erode home values or put entire communities at risk over a longer period. Models currently in use forecast these risks over 20- and 25-year periods.

Transition Risk

Transition risk describes exposure to changing policies, practices or technologies that arise from a broader societal move to reduce its carbon footprint. These include increases in the direct cost of homeownership (e.g., taxes, insurance, code compliance, etc.), increased energy and other utility costs, and localized employment shocks as businesses and industry leave high-risk areas. Changing property insurance requirements (by the GSEs, for example) could further impact property valuations in affected neighborhoods.

———–

Converting acute, chronic and transition risks into mortgage modeling scenarios can only be done effectively at the loan level. Rep-lines cannot adequately capture them. As with most prepayment and credit modeling, accounting for climate risk is an exercise in scenario analysis. Building realistic scenarios involves taking several factors into account.

Scenario Analysis

Quantifying physical risks (whether acute or chronic) entails identifying:

  • Which physical hazard types the property is exposed to
  • How each hazard type threatens the property[1]
  • The materiality of each hazard, and
  • The most likely timeframes over which these hazards could manifest

Factoring climate risk into MSR pricing requires translating the answers to the questions above into mortgage modeling scenarios that function as credit and prepayment model inputs. The following table is an example of how RiskSpan overlays the impact of an acute event – specifically a category 5 hurricane in South Florida — on home price, delinquency, turnover and macroeconomic conditions.
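To make the mechanics concrete, here is a minimal sketch of what such a scenario overlay might look like in code. The field names and shock values are entirely illustrative (they are not RiskSpan's actual scenario set): the idea is simply that an acute event shifts the home price path, multiplies baseline delinquency and turnover, and shocks local macro assumptions.

```python
from dataclasses import dataclass, replace

@dataclass
class LoanInputs:
    hpi_growth: float      # annual home price growth assumption
    delinq_mult: float     # multiplier on baseline delinquency
    turnover_mult: float   # multiplier on baseline turnover
    unemployment: float    # local unemployment assumption

# Illustrative shocks for a Category 5 South Florida hurricane scenario
CAT5_SHOCKS = {"hpi_growth": -0.15, "delinq_mult": 3.0,
               "turnover_mult": 1.5, "unemployment": 0.03}

def apply_acute_scenario(base: LoanInputs, shocks: dict) -> LoanInputs:
    """Overlay an acute climate event on a loan's baseline model inputs."""
    return replace(
        base,
        hpi_growth=base.hpi_growth + shocks["hpi_growth"],
        delinq_mult=base.delinq_mult * shocks["delinq_mult"],
        turnover_mult=base.turnover_mult * shocks["turnover_mult"],
        unemployment=base.unemployment + shocks["unemployment"],
    )

stressed = apply_acute_scenario(LoanInputs(0.03, 1.0, 1.0, 0.04), CAT5_SHOCKS)
```

Because the overlay is applied loan by loan, only the loans actually exposed to the peril receive the shock, which is precisely what a rep-line cannot represent.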


Applying this framework to an MSR portfolio requires integration with an MSR cash flow engine. MSR cash flows and the resulting valuation are driven by the manner in which the underlying delinquency and prepayment models are affected. However, at least two other factors affect servicing cash flows beyond simply the probability of the asset remaining on the books. Both of these are likely impacted by climate risk.

  • Servicing Costs: Rising delinquency rates are always accompanied by corresponding increases in the cost of servicing. An example of the extent to which delinquencies can affect servicing costs was presented in our previous paper. MSR pricing models take this into account by applying a different cost of servicing to delinquent loans. Some believe, however, that servicing loans that enter delinquency in response to a natural disaster can be even more expensive (all else equal) than servicing a loan that enters delinquency for other reasons. Reasons for this range from the inherent difficulty of reaching displaced persons to the layering impact of multiple hardships such events tend to bring upon households at once.[2]
  • Recapture Rate: The data show that prepayment rates consistently spike in the wake of natural disasters. What is less clear is whether there is a meaningful difference in the recapture rate for these prepayments. Anecdotally, recapture appears lower in the case of natural disaster, but we do not have concrete data on which to base assumptions. This is clearly only relevant to MSR investors that also have an origination arm with which to capture loans that refinance.
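The servicing-cost channel described above can be sketched as a simplified annual cash flow calculation. The fee, per-loan cost, and disaster multiplier figures below are placeholders for illustration only, not market data: the point is that fee income is fixed while costs scale with the delinquency rate, amplified after a disaster.

```python
def annual_servicing_cashflow(upb, service_fee_bps, n_loans, dq_rate,
                              cost_perf=75.0, cost_dq=2000.0,
                              disaster_mult=1.0):
    """Net servicing cash flow: fee income on UPB less per-loan costs,
    with delinquent loans costing more to service (and an extra
    multiplier applied after a natural disaster)."""
    fee_income = upb * service_fee_bps / 10_000
    n_dq = n_loans * dq_rate
    n_perf = n_loans - n_dq
    costs = n_perf * cost_perf + n_dq * cost_dq * disaster_mult
    return fee_income - costs

base = annual_servicing_cashflow(1e9, 25, 4000, dq_rate=0.02)
stressed = annual_servicing_cashflow(1e9, 25, 4000, dq_rate=0.10,
                                     disaster_mult=1.5)
```

Even before any change in prepayment behavior, the jump in delinquency and per-loan cost compresses the servicing margin in the stressed case.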

Climate risk encompasses a wide range of perils, each of which affects MSR values in a unique way. Hurricanes, wildfires, and droughts differ not only in their geography but in the specific type of risk they pose to individual properties. Even if there were a way of assigning every property in an MSR portfolio a one-size-fits-all quantitative score, computing a “weighted average climate risk” value and applying it to a rep-line would be problematic. Such an average would be denuded of any nuance specific to individual perils. Peril-specific data is critical to being able to make the LTV, delinquency, turnover and macroeconomic assumption adjustments outlined above.

And there is no way around it. Doing all this requires a loan-by-loan analysis. RiskSpan’s Edge Platform was purpose built to analyze mortgage portfolios at the loan level and is becoming the industry’s go-to solution for measuring and managing exposures to market, credit and climate events.

Contact us to learn more.


[1] Insurability of hazards varies widely, even before insurance requirements are considered.

[2] In addition, because servicers normally staff for business-as-usual levels of delinquencies, a large acute event will create a significant spike in the demand for servicer personnel. If a servicer’s book is heavily concentrated in the Southeast, for example, a devastating storm could result in having to triple the number of people actively servicing the portfolio.


Improving MSR Pricing Using Cloud-Native Loan-Level Analytics (Part II)


  1. MSR investors are more exposed to acute climate risk than whole loan or securities investors are. MSR investors are not in a favorable position to recoup cash flows lost to climate disruptions.
  2. Climate risk can be acute, chronic, or transitional. Each affects MSR values in a different way.
  3. Integrating climate scenario analysis into traditional credit and prepayment modeling – both of which are critical to modeling MSR cash flows and pricing — requires a loan-by-loan approach.
  4. Climate risk cannot be adequately expressed or modeled using a traditional rep-line approach.


An Emerging Climate Risk Consensus for Mortgages?

That climate change poses a growing—and largely unmeasured—risk to housing and mortgage investors is not news. As is often the case with looming threats whose timing and magnitude are only vaguely understood, increased natural hazard risks have most often been discussed anecdotally and in broad generalities. This, however, is beginning to change as the reality of these risks becomes clear to a growing number of market participants and as industry-sponsored research begins to emerge.

This past week’s special report by the Mortgage Bankers Association’s Research Institute for Housing America, The Impact of Climate Change on Housing and Housing Finance, raises a number of red flags about our industry’s general lack of preparedness and the need for the mortgage industry to take climate risk seriously as a part of a holistic risk management framework. Clearly this cannot happen until appropriate risk scenarios are generated and introduced into credit and prepayment models.

One of the puzzles we are focusing on here at RiskSpan is an approach to creating climate risk stress testing that can be easily incorporated into existing mortgage modeling frameworks—at the loan level—using home price projections and other stress model inputs already in use. We are also partnering with firms that have been developing climate stress scenarios for insurance companies and other related industries to help ensure that the climate risk scenarios we create are consistent with the best and most recent scientific research available.

Also on the short-term horizon is the implementation of FEMA's new NFIP premiums under Risk Rating 2.0. Phase I of this new framework applies to all new policies issued on or after October 1, 2021. (Phase II kicks in next April.) We wrote about this change back in February, when it was slated to take effect in the spring. Political pressure, which delayed the original implementation, may affect the October date as well, of course. We'll be keeping a close eye on this and are preparing to help our clients estimate the likely impact of FEMA's new framework on the mortgages (and the properties securing them) in their portfolios.

Finally, this past week’s SEC statement detailing the commission’s expectations for climate-related 10-K disclosures is also garnering significant (and warranted) attention. By reiterating existing guidelines around disclosing material risks and applying them specifically to climate change, the SEC is issuing an unmistakable warning shot at filing companies who fail to take climate risk seriously in their disclosures.

Contact us (or just email me directly if you prefer) to talk about how we are incorporating climate risk scenarios into our in-house credit and prepayment models and how we can help incorporate this into your existing risk management framework.  



Prepayment Spikes in Ida’s Wake – What to Expect

It is, of course, impossible to view the human suffering wrought by Hurricane Ida without being reminded of Hurricane Katrina’s impact 16 years ago. Fortunately, the levees are holding and Ida’s toll appears likely to be less severe. It is nevertheless worth taking a look at what happened to mortgages in the wake of New Orleans’s last major catastrophic weather event as it is reasonable to assume that prepayments could follow a similar pattern (though likely in a more muted way).

Following Katrina, prepayment speeds for pools of mortgages located entirely in Louisiana spiked between November 2005 and June 2006. As the following graph shows, prepayment speeds on Louisiana properties (the black curve) remained elevated relative to properties nationally (the blue curve) until the end of 2006. 

Comparing S-curves of Louisiana loans (the black curve in the chart below) versus all loans (the green curve) during the spike period (Nov. 2005 to Jun. 2006) reveals speeds ranging from 10 to 20 CPR faster across all refinance incentives. The figure below depicts an S-curve for non-spec 100% Louisiana pools and all non-spec pools with a weighted average loan age of 7 to 60 months during the period indicated.
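An S-curve of this kind can be tabulated directly from monthly performance records: bucket observations by refinance incentive, compute the single-month mortality (SMM) in each bucket, and annualize to CPR. A sketch, assuming a dataset with hypothetical column names:

```python
import pandas as pd

def s_curve(df: pd.DataFrame, bucket_bps: int = 25) -> pd.Series:
    """Aggregate loan-month records into an S-curve: CPR by refi incentive.
    Expects columns: 'incentive' (note rate minus prevailing rate, in bps),
    'upb' (beginning-of-month balance), 'prepaid_upb' (balance prepaid
    during the month)."""
    df = df.copy()
    # Floor each observation into an incentive bucket (e.g., 25 bps wide)
    df["bucket"] = (df["incentive"] // bucket_bps) * bucket_bps
    g = df.groupby("bucket")
    smm = g["prepaid_upb"].sum() / g["upb"].sum()  # single-month mortality
    return 1 - (1 - smm) ** 12                     # annualize to CPR
```

Running this separately over the Louisiana-only and national cohorts for the November 2005 to June 2006 window would reproduce the comparison described above.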

The impact of Katrina on Louisiana prepayments becomes even more apparent when we consider speeds prior to the storm. As the S-curves below show, non-specified 100% Louisiana pools (the black curve) actually paid slightly slower than all non-spec pools between November 2003 and October 2005.

As we pointed out in June, a significant majority of prepayments caused by natural disaster events are likely to be voluntary, rather than the result of default as one might expect. This is because mortgages on homes that are fully indemnified against these perils are likely to be prepaid using insurance proceeds. This dynamic is reflected in the charts below, which show voluntary prepayment rates running considerably higher than the delinquency spike in the wake of Katrina. We isolate voluntary prepayment activity using the GSE Loan-Level Historical Performance datasets, which include detailed credit information, confirming that the prepay spike is largely driven by voluntary prepayments. Consequently, recent COVID-era policy changes that may reduce the incidence of delinquent loan buyouts from MBS are unlikely to affect the dynamics underlying the prepayment behavior described above.
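The voluntary/involuntary split can be recovered from the termination (zero-balance) codes in the GSE credit datasets. A sketch follows; the code values shown are illustrative of the convention used in those files, and the authoritative code list should be taken from the dataset documentation:

```python
import pandas as pd

# Illustrative zero-balance code groupings (verify against the
# GSE loan-level performance dataset documentation)
VOLUNTARY = {"01"}                        # prepaid or matured
INVOLUNTARY = {"02", "03", "09"}          # default-related terminations

def voluntary_share(df: pd.DataFrame) -> pd.Series:
    """Share of terminated UPB that was voluntary, by month.
    Expects columns: 'month', 'zb_code' (None if loan is still active),
    'upb'."""
    term = df[df["zb_code"].notna()]
    vol = term[term["zb_code"].isin(VOLUNTARY)]
    return vol.groupby("month")["upb"].sum() / term.groupby("month")["upb"].sum()
```

Plotting this share around a disaster month makes it straightforward to confirm whether a prepayment spike was insurance-proceeds-driven rather than default-driven.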

RiskSpan’s Edge Platform enables users to identify Louisiana-based loans and pools by drilling down into cohort details. The example below returns over $1 billion in Louisiana-only pools and $70 billion in Louisiana loans as of the August 2021 factor month.


Edge also allows users to structure more specified queries to identify the exposure of any portfolio or portfolio subset. Edge, in fact, can be used to examine any loan characteristic to generate S-curves, aging curves, and time series.  Contact us to learn more.



Is the housing market overheated? It depends where you are.

Mortgage credit risk modeling has evolved slowly in the last few decades. While enhancements leveraging conventional and alternative data have improved underwriter insights into borrower income and assets, advances in data supporting underlying property valuations have been slow. With loan-to-value ratios being such a key driver of loan performance, the stability of a subject property’s value is arguably as important as the stability of a borrower’s income.

Most investors rely on current transaction prices to value comparable properties, largely ignoring the risks to the sustainability of those prices. Lacking the data necessary to identify crucial factors related to a property value’s long-term sustainability, investors generally have little choice but to rely on current snapshots. To address this problem, credit modelers at RiskSpan are embarking on an analytics journey to evaluate the long-term sustainability of a property’s value.

To this end, we are working to pull together a deep dataset of factors related to long-term home price resiliency. We plan to distill these factors into a framework that will enable homebuyers, underwriters, and investors to quickly assess the risk inherent to the property’s physical location. The data we are collecting falls into three broad categories:

  • Regional Economic Trends
  • Climate and Natural Hazard Risk
  • Community Factors

Although regional home price outlook sometimes factors into mortgage underwriting, the long-term sustainability of an individual home price is seldom, if ever, taken into account. The future value of a secured property is arguably of greater importance to mortgage investors than its value at origination. Shouldn’t they be taking an interest in regional economic condition, exposure to climate risk, and other contributors to a property valuation’s stability?

We plan to introduce analytics across all three of these dimensions in the coming months. We are particularly excited about the approach we’re developing to analyze climate and natural hazard risk. We will kick things off, however, with basic economic factors. We are tracking the long-term sustainability of house prices through time by tracking economic fundamentals at the regional level, starting with the ratio of home prices to median household income.

Economic Factors

Housing is hot. Home prices jumped 12.7% nationally in 2020, according to FHFA's house price index[1]. Few economists are worried about a new housing bubble, and most attribute this rise to supply and demand dynamics. Housing supply is low, and rising housing demand is a function of demography – millennials are hitting 40 and want homes of their own.

But even if the current dynamic is largely driven by low supply, there comes a certain point at which house prices deviate too much from area median household income to be sustainable. Those who bear the most significant exposure to mortgage credit risk, such as GSEs and mortgage insurers, track regional house price dynamics to monitor regions that might be pulling away from fundamentals.

Regional home-price-to-income ratio is a tried-and-true metric for judging whether a regional market is overheating or under-valued. We have scored each MSA by comparing its current home-price-to-income ratio to its long-term average. As the chart below illustrating this ratio’s trend shows, certain MSAs, such as New York, consistently have higher ratios than other, more affordable MSAs, such as Chicago.

Because comparing one MSA to another in this context is not particularly revealing, we instead compare each MSA’s current ratio to the long-term ratio for itself. MSAs where that ratio exceeds its long-term average are potentially over-heated, while MSAs under that ratio potentially have more room to grow. In the table below highlighting the top 25 MSAs based on population, we look at how the home-price-to-household-income ratio deviates from its MSA long-term average. The metric currently suggests that Dallas, Denver, Phoenix, and Portland are experiencing potential market dislocation.
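Computing this deviation score is straightforward once a panel of MSA-level ratios is assembled. A minimal sketch, with hypothetical column names (a production version would use a trailing long-term window rather than the full-sample mean):

```python
import pandas as pd

def overheat_score(panel: pd.DataFrame) -> pd.Series:
    """Deviation of each MSA's current home-price-to-income ratio from
    its own long-term average. Positive values flag potentially
    overheated markets; negative values suggest room to grow.
    Expects columns: 'msa', 'period', 'ratio'."""
    long_term = panel.groupby("msa")["ratio"].mean()
    latest = (panel.sort_values("period")
                   .groupby("msa")["ratio"].last())
    return (latest - long_term) / long_term
```

Because each MSA is benchmarked against itself, a perennially expensive market like New York is not automatically flagged; only departures from its own history are.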

Loans originated during periods of over-heating have a higher probability of default, as illustrated in the scatterplot below. This plot shows the correlation between the extent of the house-price-to-income ratio’s deviation from its long-term average and mortgage default rates. Each dot represents all loan originations in a given MSA for a given year[1]. Only regions with large deviations in house price to income ratio saw explosive default rates during the housing crisis. This metric can be a valuable tool for loan and SFR investors to flag metros to be wary of (or conversely, which metros might be a good buy).

Although admittedly a simple view of the regional economic dynamics driving house prices (fundamentals such as employment, housing starts per capita, and population trends also play important roles), median income is an appropriate place to start. Median income has historically proven itself a valuable tool for spotting regional price dislocations, and we expect it will continue to be. Watch this space as we continue to add these and other elements to further refine how we measure property value stability and its likely impact on mortgage credit.


[1] FHFA Purchase Only USA NSA % Change over last 4 quarters

Contact us to learn more.



Climate Terms the Housing Market Needs to Understand

The impacts of climate change on housing and holders of mortgage risk are very real and growing. As the frequency and severity of perils increase, so does the associated cost – estimated to have grown from $100B in 2000 to $450B in 2020 (see chart below). Many of these costs are not covered by property insurance, leaving homeowners and potential mortgage investors holding the bag. Even after adjusting for inflation and appreciation, the loss to both investors and consumers is staggering.

Properly understanding this data might require adding some new terms to your personal lexicon. As the housing market begins to get its arms around the impact of climate change to housing, here are a few terms you will want to incorporate into your vocabulary.

  1. Natural Hazard

In partnership with climate modeling experts, RiskSpan has identified 21 different natural hazards that impact housing in the U.S. These include familiar hazards such as floods and earthquakes, along with lesser-known perils, such as drought, extreme temperatures, and other hydrological perils including mudslides and coastal erosion. The housing industry is beginning to work through how best to identify and quantify exposure and incorporate the impact of perils into risk management practices more broadly. Legacy thinking and risk management would classify these risks as covered by property insurance with little to no downstream risk to investors. However, as the frequency and severity increase, it is becoming more evident that risks are not completely covered by property & casualty insurance.

We will address some of these “hidden risks” of climate to housing in a forthcoming post.

  2. Wildland Urban Interface

The U.S. Fire Administration defines Wildland Urban Interface as “the zone of transition between unoccupied land and human development. It is the line, area, or zone where structures and other human development meet or intermingle with undeveloped wildland or vegetative fuels.” An estimated 46 million residences in 70,000 communities in the United States are at risk for WUI fires. Wildfires in California garner most of the press attention. But fire risk to WUIs is not just a west coast problem — Florida, North Carolina and Pennsylvania are among the top five states at risk. Communities adjacent to and surrounded by wildland are at varying degrees of risk from wildfires and it is important to assess these risks properly. Many of these exposed homes do not have sufficient insurance coverage to cover for losses due to wildfire.

  3. National Flood Insurance Program (NFIP) and Special Flood Hazard Area (SFHA)

The National Flood Insurance Program provides flood insurance to property owners and is managed by the Federal Emergency Management Agency (FEMA). Anyone living in a participating NFIP community may purchase flood insurance. But those in specifically designated high-risk SFHAs must obtain flood insurance to obtain a government-backed mortgage. SFHAs as currently defined, however, are widely believed to be outdated and not fully inclusive of areas that face significant flood risk. Changes are coming to the NFIP (see our recent blog post on the topic) but these may not be sufficient to cover future flood losses.

  4. Transition Risk

Transition risk refers to risks resulting from changing policies, practices or technologies that arise from a societal move to reduce its carbon footprint. While the physical risks from climate change have been discussed for many years, transition risks are a relatively new category. In the housing space, policy changes could increase the direct cost of homeownership (e.g., taxes, insurance, code compliance, etc.), increase energy and other utility costs, or cause localized employment shocks (i.e., the energy industry in Houston). Policy changes by the GSEs related to property insurance requirements could have big impacts on affected neighborhoods.

  5. Physical Risk

In housing, physical risks include the risk of loss to physical property or loss of land or land use. The risk of property loss can be the result of a discrete catastrophic event (hurricane) or of sustained negative climate trends in a given area, such as rising temperatures that could make certain areas uninhabitable or undesirable for human housing. Both pose risks to investors and homeowners with the latter posing systemic risk to home values across entire communities.

  6. Livability Risk

We define livability risk as the risk of declining home prices due to the desirability of a neighborhood. Although no standard definition of “livability” exists, it is generally understood to be the extent to which a community provides safe and affordable access to quality education, healthcare, and transportation options. In addition to these measures, homeowners also take temperature and weather into account when choosing where to live. Finding a direct correlation between livability and home prices is challenging; however, an increased frequency of extreme weather events clearly poses a risk to long-term livability and home prices.

Data and toolsets designed explicitly to measure and monitor climate related risk and its impact on the housing market are developing rapidly. RiskSpan is at the forefront of developing these tools and is working to help mortgage credit investors better understand their exposure and assess the value at risk within their businesses.

Contact us to learn more.



Why Mortgage Climate Risk is Not Just for Coastal Investors

When it comes to climate concerns for the housing market, sea level rise and its impacts on coastal communities often get top billing. But this article in yesterday’s New York Times highlights one example of far-reaching impacts in places you might not suspect.

Chicago, built on a swamp and virtually surrounded by Lake Michigan, can tie its whole existence as a city to its control and management of water. But as the Times article explains, management of that water is becoming increasingly difficult as various dynamics related to climate change create increasingly large and unpredictable fluctuations in the level of the lake (higher highs and lower lows). These dynamics are threatening the city with more frequent and severe flooding.

The Times article connects water management issues to housing issues in two ways: the increasing frequency of basement flooding caused by sewer overflow and the battering buildings are taking from increased storm surge off the lake. Residents face increasing costs to mitigate their exposure and fear the potentially negative impact on home prices. As one resident puts it, “If you report [basement flooding] to the city, and word gets out, people fear it’s going to devalue their home.”

These concerns — increasing peril exposure and decreasing valuations — echo fears expressed in a growing number of seaside communities and offer further evidence that mortgage investors cannot bank on escaping climate risk merely by avoiding the coasts. Portfolios everywhere are going to need to begin incorporating climate risk into their analytics.



Hurricane Season a Double-Whammy for Mortgage Prepayments

As hurricane (and wildfire) season ramps up, don't sleep on the increase in prepayment speeds after a natural disaster event. The increase in delinquencies might get top billing, but prepays also increase after events—especially for homes that were fully insured against the risk they experienced. For a mortgage servicer with concentrated geographic exposure to the event area, this can be a double whammy for the balance sheet: delinquencies increase servicing advances, while prepays roll loans off the book. Hurricane Katrina loan performance is a classic example of this dynamic.



Non-Agency Delinquencies Fall Again – Still Room for Improvement

Serious delinquencies among non-Agency residential mortgages continue marching downward during the first half of 2021 but remain elevated relative to their pre-pandemic levels.

Our analysis of more than two million loans held in private-label mortgage-backed securities found that the percentage of loans at least 60 days past due fell again in May across vintages and FICO bands. While performance differences across FICO bands were largely as expected, comparing pre-crisis vintages with mortgages originated after 2009 revealed some interesting distinctions.

The chart below plots serious delinquency rates (60+ DPD) by FICO band for post-2009 vintages. Not surprisingly, these rates begin trending upward in May and June of 2020 (two months after the economic effects of the pandemic began to be felt) with the most significant spikes coming in July and August – approaching 20 percent at the low end of the credit box and less than 5 percent among prime borrowers.
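A serious-delinquency surface like the one just described can be tabulated from loan-level records in a few lines. A sketch, assuming hypothetical column names and the FICO bands used in this analysis:

```python
import pandas as pd

FICO_BANDS = [0, 620, 680, 740, 850]
LABELS = ["<620", "620-680", "680-740", ">740"]

def dq60_by_fico(df: pd.DataFrame) -> pd.DataFrame:
    """Share of loans 60+ days past due, by month and FICO band.
    Expects columns: 'month', 'fico', 'days_past_due'."""
    df = df.assign(
        band=pd.cut(df["fico"], FICO_BANDS, labels=LABELS, right=False),
        dq60=df["days_past_due"] >= 60,
    )
    return (df.groupby(["month", "band"], observed=True)["dq60"]
              .mean()
              .unstack())
```

The resulting month-by-band table is exactly what gets plotted in the charts in this section.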

Since last August’s peak, serious delinquency rates have fallen most precipitously (nearly 8 percentage points) in the 620 – 680 FICO bucket, compared with a 5-percentage point decline in the 680 – 740 bucket and a 4 percentage point drop in the sub-620 bucket. Delinquency rates have come down the least among prime (FICO > 740) mortgages (just over 2 percentage points) but, having never cracked 5 percent, these loans also had the shortest distance to go.

Serious delinquency rates remain above January 2020 levels across all four credit buckets – approximately 7 percentage points higher in the two sub-680 FICO buckets, compared with the 680 – 740 bucket (5 percentage points higher than in January 2020) and over-740 bucket (2 percentage points higher).

So-called “legacy” vintages (consisting of mortgages originated before the 2008-2009 crisis) reflect a somewhat different performance profile, though they follow a similar pattern.

The following chart plots serious delinquency rates by FICO band for these older vintages. Probably because these rates were starting from a relatively elevated point in January 2020, their pandemic-related spikes were somewhat less pronounced, particularly in the low-FICO buckets. These vintages also appear to have felt the spike about a month earlier than did the newer-issue loans.

Serious delinquency rates among these “legacy” loans are considerably closer to their pre-pandemic levels than are their new-issue counterparts. This is especially true in the sub-prime buckets. Serious delinquencies in the sub-620 FICO bucket actually were 3 percentage points lower last month than they were in January 2020 (and nearly 5 percentage points lower than their peak in July 2020). These differences are less pronounced in the higher-FICO buckets but are still there.

Comparing the two graphs reveals that the pandemic had the effect of causing new-issue low-FICO loans to perform similarly to legacy low-FICO loans, while a significant gap remains between the new-issue prime buckets and their high-FICO pre-2009 counterparts. This is not surprising given the tightening that underwriting standards (beyond credit score) underwent after 2009.

Interested in cutting non-Agency performance across any of several dozen loan-level characteristics? Contact us for a quick, no-pressure demo.


Leveraging ML to Enhance the Model Calibration Process

Last month, we outlined an approach to continuous model monitoring and discussed how practitioners can leverage the results of that monitoring for advanced analytics and enhanced end-user reporting. In this post, we apply this idea to enhanced model calibration.

Continuous model monitoring is a key part of a modern model governance regime. But testing performance as part of the continuous monitoring process has value that extends beyond immediate governance needs. Using machine learning and other advanced analytics, testing results can also be further explored to gain a deeper understanding of model error lurking within sub-spaces of the population.

Below we describe how we leverage automated model back-testing results (using our machine learning platform, Edge Studio) to streamline the calibration process for our own residential mortgage prepayment model.

The Problem:

MBS prepayment models, RiskSpan’s included, often provide a number of tuning knobs to tweak model results. These knobs impact the various components of the S-curve function, including refi sensitivity, turnover lever, elbow shift, and burnout factor.

The knob tuning and calibration process is typically messy and iterative. It usually involves somewhat subjectively selecting certain sub-populations to calibrate, running back-testing to see where and how the model is off, and then tweaking knobs and rerunning the back-test to see the impacts. The modeler may need to iterate through a series of different knob selections and groupings to figure out which combination best fits the data. This is manually intensive work and can take a lot of time.

As part of our continuous model monitoring process, we had already automated the process of generating back-test results and merging them with actual performance history. But we wanted to explore ways of taking this one step further to help automate the tuning process — rerunning the automated back-testing using all the various permutations of potential knobs, but without all the manual labor.

The solution applies machine learning techniques to run a series of back-tests on MBS pools and automatically solve for the set of tuners that best aligns model outputs with actual results.

We break the problem into two parts, followed by a validation step:

  1. Find Cohorts: Cluster pools into groups that exhibit similar key pool characteristics and model error (so they would need the same tuners).

TRAINING DATA: Back-testing results for our universe of pools with no model tuning knobs applied

  2. Solve for Tuners: Minimize back-testing error by optimizing knob settings.

TRAINING DATA: Back-testing results for our universe of pools under a variety of permutations of potential tuning knobs (Refi x Turnover)

  3. Validate Tuning Knobs: Take the optimized tuning knobs for each cluster and rerun the pools to confirm that the selected permutation does in fact return the lowest model errors.

Part 1: Find Cohorts

We define model error as the ratio of the average modeled SMM to the average actual SMM. We compute this using back-testing results and then use a hierarchical clustering algorithm to cluster the data based on model error across various key pool characteristics.
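As a concrete illustration of this error metric, here is a small worked example with made-up SMM values (the numbers are illustrative only):

```python
import numpy as np

# Hypothetical monthly SMM observations for one pool over the back-test window.
modeled_smm = np.array([0.010, 0.012, 0.008])
actual_smm = np.array([0.009, 0.015, 0.010])

# Model error: ratio of average modeled SMM to average actual SMM.
# A ratio above 1 means the model overpredicts prepayments; below 1, underpredicts.
model_error = modeled_smm.mean() / actual_smm.mean()
print(round(model_error, 4))  # 0.8824
```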

Hierarchical clustering is a general family of clustering algorithms that build nested clusters by either merging or splitting observations successively. The hierarchy of clusters is represented as a tree (or dendrogram). The root of the tree is the root cluster that contains all samples, while the leaves represent clusters with only one sample. [1]

Agglomerative clustering is an implementation of hierarchical clustering that takes the bottom-up (merging) approach: each observation starts in its own cluster, and clusters are then successively merged together. Several linkage criteria are available; we used the Ward criterion.

Ward linkage strategy minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach.[2]
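A minimal sketch of this clustering step using scikit-learn's agglomerative implementation; the pool features, sample size, and cluster count below are placeholders, not our actual specification:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)

# Hypothetical pool-level inputs: a few key characteristics (e.g. WAC, WALA,
# average FICO) plus each pool's back-test model-error ratio.
pool_features = rng.normal(size=(500, 3))
model_error = rng.lognormal(mean=0.0, sigma=0.2, size=(500, 1))

# Standardize so no single feature dominates the distance calculation.
X = StandardScaler().fit_transform(np.hstack([pool_features, model_error]))

# Ward linkage merges clusters bottom-up, minimizing within-cluster variance.
clusterer = AgglomerativeClustering(n_clusters=6, linkage="ward")
labels = clusterer.fit_predict(X)
# Each label identifies a cohort of pools that should share one set of tuners.
```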

Part 2: Solving for Tuners

Here the training data is expanded to include multiple back-test results for each pool, one for each permutation of tuning knobs.

Process to Optimize the Tuners for Each Cluster

Training Data: Rerun the back-test with permutations of REFI and TURNOVER tunings, covering all reasonably possible combinations of tuners.
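The permutation grid itself can be generated straightforwardly; the candidate values below are illustrative stand-ins, since the real knob scales are model-specific:

```python
from itertools import product

# Hypothetical candidate settings for the two knobs being tuned.
refi_tuners = [0.8, 0.9, 1.0, 1.1, 1.2]
turnover_tuners = [0.8, 0.9, 1.0, 1.1, 1.2]

# Every pool is re-backtested once under each (refi, turnover) pair.
tuner_grid = list(product(refi_tuners, turnover_tuners))
print(len(tuner_grid))  # 25 permutations
```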

  1. These permutations of tuning results are fed to a multi-output regressor, which, as a fitting step, trains the machine learning model to learn the interaction between each tuning parameter and model error.
    • Model Error and Pool Features are used as independent variables
    • Gradient Tree Boosting/Gradient Boosted Decision Trees (GBDT)* methods are used to find the optimized tuning parameters for each cluster of pools derived from the clustering step
    • Two dependent variables (Refi Tuner and Turnover Tuner) are used
    • Separate models are estimated for each cluster
  2. We solve for the optimal tuning parameters by running the resulting model with a model error ratio of 1 (no error) and the weighted average cluster features.

* Gradient Tree Boosting/Gradient Boosted Decision Trees (GBDT) is a machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. When a decision tree is the weak learner, the resulting algorithm is called gradient boosted trees, which usually outperforms random forest. It builds the model in a stage-wise fashion like other boosting methods do, and it generalizes them by allowing optimization of an arbitrary differentiable loss function. [3]

* We used scikit-learn's GBDT implementation to optimize and solve for the best Refi and Turnover tuners. [4]
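The fitting-and-inversion steps above can be sketched as follows. The feature names, synthetic training data, and tuner values are all stand-ins for the proprietary inputs; the structure (a multi-output GBDT per cluster, queried at an error ratio of 1) is the point:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(1)
n = 2000  # one training row per (pool, tuner-permutation) back-test result

# Independent variables: the model-error ratio plus pool characteristics.
model_error = rng.lognormal(0.0, 0.3, size=n)
pool_features = rng.normal(size=(n, 3))  # e.g. WAC, WALA, average FICO
X = np.column_stack([model_error, pool_features])

# Dependent variables: the Refi and Turnover tuners applied in each back-test.
y = np.column_stack([rng.uniform(0.5, 1.5, n), rng.uniform(0.5, 1.5, n)])

# One multi-output GBDT per cluster; fitting learns how tuner choices relate
# to model error given the cluster's characteristics.
regressor = MultiOutputRegressor(GradientBoostingRegressor(random_state=0))
regressor.fit(X, y)

# Invert: query the fitted model at an error ratio of 1 (no error) and the
# cluster's weighted-average features to read off the suggested tuners.
query = np.concatenate([[1.0], pool_features.mean(axis=0)]).reshape(1, -1)
refi_tuner, turnover_tuner = regressor.predict(query)[0]
```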

Results

The resulting suggested knobs show promise in improving model fit over our back-test period. Below are the results for two of the clusters using the knobs suggested by the process. To further validate these results, we plan to cross-validate on out-of-time sample data as it comes in.

Conclusion

These advanced analytics show promise in their ability to help streamline the model calibration and tuning process by removing many of the time-consuming and subjective components from the process altogether. Once a process like this is established for one model, applying it to new populations and time periods becomes more straightforward. This analysis can be further extended in a number of ways. One in particular we're excited about is the use of ensemble models, or a 'model of models' approach. We will continue to tinker with this approach as we calibrate our own models and keep you apprised of what we learn.

