
Incorporating Climate Risk into ERM: A Mortgage Risk Manager’s Guide

Climate risk is becoming impossible to ignore in the mortgage space.

President Biden's May 2021 Executive Order makes clear that quantifying and mitigating climate risk will be a priority for the federal government's housing finance agencies (HUD, FHFA, FHA, VA). It's just a matter of time before this increased emphasis makes its way to others in the ecosystem (Government-Sponsored Enterprises, Servicers, Lenders, Investors). The SEC is also preparing climate-related requirements for the securities markets. In early 2021, a proposed rule amendment "to enhance registrant disclosures regarding issuers' climate-related risks and opportunities" was added to its regulatory agenda, with an expected release in 2022. Other agencies, including the OCC, are issuing draft guidance or requesting feedback on climate-related risks. Boards are taking notice, and if you haven't heard from yours on the topic, you will soon.

But where can you start?

Bear in mind a few critical questions as you shape your organizational response to climate risk. Most executives and boards are now familiar with the physical and transition risks of climate change, but how will these risks manifest in your organization as business, asset, regulatory, legal, and reputation risk? How will they affect residential housing prices, the attractiveness of communities, building codes, insurance costs, and zoning laws, and the valuation of mortgages and other financial instruments whose value derives from residential properties and the economic strength of communities? How will homeowners, insurers, builders, investors, and the public policy of local, state, and federal governments respond in ways that could affect asset valuation? It's not an easy problem to solve!

A growing body of academic literature has developed around home price dynamics, mortgage performance, and the general perception of climate risk as a market influencer. Published findings focus primarily on the effect of physical risks on mortgage performance and home prices. A recurring theme in the literature is that while individual climate events can be highly disruptive to local real estate and mortgage markets, values tend to rebound quickly (Bin and Landry, 2013), with the specter of another such event not appearing to weigh significantly on prices. On top of that, short-run supply effects and competitive effects, such as attractive housing features and locations, complicate housing price dynamics. People still want to live on coasts and rivers, in hot and dry desert locations, and in earthquake- and wildfire-exposed areas that are prone to natural catastrophes and increasing impacts from climate change. So attractive are these areas that the marginal effect of a home being in an area projected to be underwater may actually increase home prices, without controlling for distance to the shore. This may be a consequence of the premium value associated with waterfront views (Baldauf et al., 2020). But just because impacts so far have been minimal does not mean future impacts will follow the same trend.

While prices have rebounded quickly after events in the past and housing prices still command a premium for waterfront views, there is evidence that buyers are starting to discount values for coastal properties exposed to sea level rise (Bernstein et al., 2018). In the future, as the chances increase that climate change will permanently alter usable land through any number of hazards, absent effective resilience improvements, prices may rebound less, or not at all, leaving the holders of exposed real and financial assets with a loss. Or the value of waterfront homes may even begin a rapid decline if mortgage holders begin to suspect that the value (and usability) of their properties could fall substantially over the life of their mortgages.

Further discussion of the academic literature and a bibliography can be found in the note at the end of this article.

Significant uncertainty exists about how climate change will occur, over what timeframe these changes will occur, how all levels of government will intervene or react to chronic risks like sea level rise, and how households, companies, and financial markets will respond to various signals that will create movements in prices, demographics, and economic activity even before climate risk manifests. What is known is that global temperatures will continue to warm over the next 50 years regardless of the actions people and governments take, and the impacts of that warming will accumulate and become more severe and frequent over time, requiring a definitive action plan for dealing with this issue.


[Figure: Global surface temperature change relative to 1850-1900. There is little differentiation among scenarios over the next 20 years; risks will manifest differently over different timeframes.]

The standards by which organizations will be expected to deal with climate risk will evolve as the climate continues to change and more capabilities are developed to address these issues. An important first step is to contextualize these risks relative to the other risks your business faces. One immediate need is to address near-term board and regulatory reporting requirements, as well as voluntary public disclosure, as stakeholder pressure builds to understand what actions companies are taking to address climate change.

There is no easy answer, but we offer a way to bring the issue into focus and plan for a thoughtful response as the risks and standards evolve. We tackle the problem by understanding the risks the organization faces and evaluating them through scenario and sensitivity analysis. We recommend against over-engineering a solution; instead, design a framework that allows you to monitor and track risk over time. We propose a practical approach, one that's incrementally phased and integrates risk management through time, enabling pause, adjustment, assessment, and changes in course as needed.


[Figure: Suggested Approach for Incorporating Climate Risk into ERM]


We present five key components to consider when incorporating a climate and natural hazard risk dimension into an existing ERM framework.

Evaluate the Risk Landscape

As a starting point, evaluating the risk landscape entails identifying which climate-related risks have the potential to affect investment return. Climate-related financial risks can be categorized into physical and transition risks.

Physical risks can be acute or chronic. Acute physical risks include extreme events like hurricanes, floods, and wildfires. Chronic physical risks refer to a property's exposure to sea level rise, excessive heat, or drought, for example. Investors who understand these terms and scenarios, including how uncertainty is modeled (emphasizing the directional relationship and order of magnitude of changes rather than exact quantification), are at a competitive advantage.

Transition risks, and the secondary effects of physical risks, can arise from policy, legal, technology, or market changes that accompany a movement to reduce carbon emissions.

Some important and guiding questions for both physical and transition risk include:

What are the acute and chronic physical hazard types that pose a financial risk?

How will these risks manifest as potential financial loss to mortgage investments?

How material are the possible losses?

How might these risks evolve over time?

Note that climate science continues to evolve, especially as it relates to longer-term impacts, and there is limited historical data for understanding how the effects of climate change will flow through to the housing market. Risk assessments must be based on a range of scenarios and include plausible narratives that are not bound by historical observations. The scenario approach applies to both acute and chronic physical risks, though the scenarios used to assess each may be conceptualized differently.

Select Climate-Related Risks that Impact Mortgage Finance

Visualizing the exposure of various mortgage stakeholders to different forms of climate risk can be accomplished using a table like the following.


[Table: Exposure of various mortgage stakeholders to physical and transition climate risks]


Establish Risk Measurement Approach

Quantifying the financial impact of physical and transition risk is critical to evaluating a portfolio’s potential exposure. From a mortgage loan perspective, loan-level and portfolio-level analyses provide both standalone and marginal views of risk.

Translating hazard risk into a view of financial loss on a mortgage instrument can be accomplished within traditional mortgage model estimations using 1) a combination of property-specific damage estimates from natural hazard and climate risk models, and 2) formulated macroeconomic scenarios guided by academic research and regulatory impacts. And because chronic effects can affect how acute risks manifest, a more nuanced view of how acute risks and chronic risks relate to one another is necessary to answer questions about financial risk.

Mortgage investors can better understand natural hazard risk measures by taking a page from how property insurers account for them. For example, the worst-case "tail loss" potential of a given portfolio is often put in the context of the types of events that sit at the tail of risk for the industry as a whole – in other words, a 1-in-100-year loss to the portfolio versus the loss to the portfolio from a 1-in-100-year industry event. Extending this view to mortgages entails considering the types of events that could occur over the average life of a loan.
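To make the distinction concrete, the sketch below contrasts these two tail measures using a simulated event-loss table. All numbers, distributions, and the industry-portfolio linkage are hypothetical, purely to illustrate the calculation:

```python
import numpy as np
import pandas as pd

# Hypothetical event-loss table from a catastrophe model: each row is one
# simulated year, with an industry-wide loss and this portfolio's share.
rng = np.random.default_rng(42)
n_years = 100_000
industry = rng.lognormal(mean=4.0, sigma=1.2, size=n_years)
portfolio = industry * rng.beta(2, 8, size=n_years)  # toy linkage

losses = pd.DataFrame({"industry": industry, "portfolio": portfolio})

# 1-in-100-year loss to the portfolio itself
portfolio_pml = losses["portfolio"].quantile(0.99)

# Portfolio loss in a 1-in-100-year *industry* event: condition on years in
# which the industry loss reaches its own 99th percentile, then average.
threshold = losses["industry"].quantile(0.99)
loss_in_industry_event = losses.loc[losses["industry"] >= threshold, "portfolio"].mean()

print(f"1-in-100 portfolio loss:                   {portfolio_pml:,.1f}")
print(f"Portfolio loss in 1-in-100 industry event: {loss_in_industry_event:,.1f}")
```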

To address chronic and transition risk, selecting appropriate macroeconomic scenarios also provides a financial view of the possible impact on a mortgage portfolio. These scenarios may be grounded in published climate projections, asset-specific data collection, or different scenario narratives outlining how these risks could manifest locally.

Define a Risk Appetite Framework

Inventorying the complete range of potential climate-related risks provides structure and organization around which risks have the largest or most severe impact and creates a framework for ranking them by appropriate criteria. A risk appetite and limit framework defines the type and quantity of natural catastrophe and climate change risk that an enterprise is willing to hold in relation to equity, assets, and other financial exposure measures at a selected probability of occurrence. The operational usefulness of these frameworks is enhanced when the appetite and limits are defined in reference to the company's selected risk measures in addition to straight notional values.

The loss exposure for a particular risk will drive operations differently across business lines based on risk preferences. From the viewpoint of mortgage activities, these operations include origination, servicing, structuring, and pricing. For instance, an enterprise may be unwilling to have more than $100 million of asset valuation at risk, apportioning that limit to business units based on the return each generates relative to the risk created by its activity. In this way, the organization has a quantitative means of balancing business goals with risk management goals.
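As an illustration of how such an apportionment might work, the short sketch below allocates a hypothetical $100 million enterprise limit across business units in proportion to return generated per unit of risk. Unit names and figures are invented for the example:

```python
# Apportion a hypothetical $100MM enterprise limit across business units in
# proportion to return generated per unit of risk (all figures invented).
ENTERPRISE_LIMIT = 100_000_000

units = {  # unit: (expected return $MM, risk generated $MM)
    "origination": (40, 25),
    "servicing":   (25, 10),
    "structuring": (15, 15),
}

scores = {u: ret / risk for u, (ret, risk) in units.items()}
total = sum(scores.values())
for unit, score in scores.items():
    print(f"{unit:12s} limit: ${ENTERPRISE_LIMIT * score / total:,.0f}")
```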

The framework can also target appropriate remediation and hedging strategies in light of the risk priorities. Selecting a remediation strategy requires risk reporting and monitoring across different lines of business and a knowledge of the cost and benefits attributed to physical and transition risks.

Incorporate Findings into Risk Governance

Entities can adapt policies, processes, and responsibilities in the existing ERM framework based on their quantified, prioritized, and articulated risk. This could come in the form of changes to stakeholder reporting, from internal management committees, the board, and board committees to external financial, investor, public, and regulatory reporting.

Because regulatory requirements and industry best practices are still being formed, it is important to continuously monitor these and ensure that policies align with evolving guidance.

Monitor and Manage Risk Within Risk Appetite and Limits

Implementation of an ERM framework with considerations for natural catastrophe and climate risk may look different across lines of business and risk management processes. For this reason, it is important that dashboards, reporting frameworks, and exposure control processes be designed to fit with current reporting within individual lines of business.

A practical first step is to establish monitoring specifically to detect adverse selection issues—i.e., ensuring that you are not acquiring a book of business with disproportionately high levels of climate risk or one that adds risk to areas of existing exposure within your portfolio. The objective is to manage the portfolio so that risk remains within the agreed appetite and limit framework. This type of monitoring will become increasingly critical as other market participants start to incorporate climate risk into their own asset screening and pricing decisions. Firms that fail to monitor for climate risk will ultimately be the firms that bear it.

All of this ultimately comes down to identifying natural catastrophe and climate risks, quantifying them through property- and loan-specific modeling and scenarios, ranking the risks along different criteria, and tailoring reporting to different operations in the enterprise with an eye for changing regulatory requirements and risk governance policies. An enterprise view is needed given that climate risks correlate across multiple asset classes, and where differences in risk tolerance are desired, the framework described provides a coherent, quantitative basis for them. Successfully negotiating these elements is more easily described than done, particularly in large financial institutions comprising businesses with widely divergent risk tolerances. But we appear to be reaching a point where further deferral is no longer an option. The time to begin planning and implementing these frameworks is now.


Note on academic research and works referenced

Some empirical research has been conducted examining outcomes following natural hazard events, specifically their impact on mortgage loan performance. Kousky et al. (2020) show evidence that property damage from an extreme event increases short-term mortgage delinquencies and forbearance rates. This effect is mitigated by the presence of flood insurance, which enables borrowers to use insurance proceeds to pay off loans, or to sell damaged homes and move away from the impacted area once they have received compensation. A rebound effect, observed in home prices, occurs in loan performance as well. Delinquencies, while elevated just after a disaster, tend to quickly revert to pre-disaster levels (Fannie Mae, 2017). Extending beyond single-event analysis, delinquencies in hurricane-prone areas have been shown to be higher than delinquency rates in other areas, controlling for other risk factors (Rossi, 2020). The projected rise in hurricane intensity and incidence can therefore lead to higher default risk, which in turn leads to higher losses to investors in mortgage credit risk.

Studies on chronic risks like sea level rise reveal the risk to have a moderate effect on housing prices, stratified by climate "denier" and climate "believer" borrowers (Baldauf et al., 2020). All else equal, areas with owners who perceive a climate threat to their properties may demand a discount on prices. Similarly, Bernstein et al. (2018) show housing price discounts of up to 7% in counties more worried about sea level rise relative to unworried counties. Risk perception for climate change is subject to a number of biases (Kousky et al., 2020), and the distortion created by these biases can contribute to inaccurate home pricing. Evidence suggests that regulatory floodplain properties are overvalued, but pricing is inconsistent. Borrowers who are well-informed and sophisticated may fully reflect flood risk information in their pricing (Hino and Burke, 2021). These effects can also vary with consumer disclosure requirements, which leads to discussion about information gaps on climate risk.

Yet, there is notable research on the salience of events, showing that extreme events can have persistent effects on home prices. Ortega and Taspinar (2018) show a permanent price decline over the 5 years following Hurricane Sandy for properties in flood zones, regardless of the damage experienced. While properties damaged by the hurricane showed a rebound in home prices right after the event, all properties affected by the storm converged to the same home price penalty. Eichholtz et al. (2019) primarily study commercial real estate properties in New York, with corroborating studies in Boston and Chicago, and find negative price effects from flood-risk exposure post-Hurricane Sandy as sophisticated investors adjusted their valuations downward. Increased attention to climate change from the occurrence of extreme events may cause long-term price effects as communities begin evaluating the possible risks they face after weathering a catastrophic event.


For further reading, see:

Markus Baldauf, Lorenzo Garlappi, Constantine Yannelis, Does Climate Change Affect Real Estate Prices? Only If You Believe In It, The Review of Financial Studies, Volume 33, Issue 3, March 2020, Pages 1256–1295, https://doi.org/10.1093/rfs/hhz073

Eichholtz, Piet M. A.; Steiner, Eva; Yönder, Erkan. "Where, When, and How Do Sophisticated Investors Respond to Flood Risk?" June 2019.

Bernstein, Asaf and Gustafson, Matthew and Lewis, Ryan, Disaster on the Horizon: The Price Effect of Sea Level Rise (May 4, 2018). Journal of Financial Economics (JFE), Forthcoming, Available at SSRN: https://ssrn.com/abstract=3073842 

Bin, O., & Landry, C. E. (2013). Changes in implicit flood risk premiums: Empirical evidence from the housing market. Journal of Environmental Economics and Management, 65(3), 361–376.

Hino and Burke, The effect of information about climate risk on property values (March 18, 2021).

Ortega, Francesc and Taspinar, Suleyman, Rising Sea Levels and Sinking Property Values: The Effects of Hurricane Sandy on New York’s Housing Market (March 29, 2018). Available at SSRN: https://ssrn.com/abstract=3074762 or http://dx.doi.org/10.2139/ssrn.3074762

Clifford Rossi. "Assessing the impact of hurricane frequency and intensity on mortgage default risk," June 2020.


Carolyn Kousky, Howard Kunreuther, Michael LaCour-Little & Susan Wachter (2020) Flood Risk and the U.S. Housing Market, Journal of Housing Research, 29:sup1, S3-S24, DOI: 10.1080/10527001.2020.1836915

Carolyn Kousky, Mark Palim & Ying Pan (2020) Flood Damage and Mortgage Credit Risk: A Case Study of Hurricane Harvey, Journal of Housing Research, 29:sup1, S86-S120, DOI: 10.1080/10527001.2020.1840131

Verisk 2021: How Current Market Conditions Could Impact U.S. Hurricane Season 2021

RiskSpan 2018: Houston Strong: Communities Recover from Hurricanes. Do Mortgages?


Improving MSR Pricing Using Cloud-Based Loan-Level Analytics — Part II: Addressing Climate Risk

Modeling Climate Risk and Property Valuation Stability

Part I of this white paper series introduced the case for why loan-level (as opposed to rep-line-level) analytics are increasingly indispensable when it comes to effectively pricing an MSR portfolio. Rep-lines are an effective means of classifying loans across many important categories. But certain loan, borrower, and property characteristics simply cannot be "rolled up" to the rep-line level as easily as UPB, loan age, interest rate, LTV, credit score, and other factors. This is especially true when it comes to modeling based on available information about a mortgage's subject property.

Assume for the sake of simplicity that human and automated appraisers do a perfect job of assigning property values for the purpose of computing origination and updated LTVs (they do not, of course, but let’s assume they do). Prudent MSR investors should be less interested in a property’s current value than in what is likely to happen to that value over the expected life of their investment. In other words, how stable is the valuation? How likely are property values within a given zip code, or neighborhood, or street to hold?

The stability of any given property's value is tied to the macroeconomic prospects of its surrounding community. Historical and forecast trends of the local unemployment rate can be used as a rough proxy for this and are already built into existing credit and prepayment models. But increasingly, a second category of factors is emerging as an important predictor of home price stability: the property's exposure to climate risk and natural hazard events.

Climate exposure is becoming increasingly difficult to ignore when it comes to property valuation. And accounting for it is more complicated than simply applying a premium to coastal properties. Climate risk is not just about hurricanes and storm surges anymore. A growing number of inland properties are being identified as at risk not just to wind and water hazards, but to wildfire and other perils as well. The diversity of climate risks means that the problem of quantifying and understanding them will not be solved simply by fixing out-of-date flood plain maps.

MSR investors are exposed to climate risk in ways that whole loan or securities investors are not. When climate events force borrowers into forbearance or other repayment plans, MSR investors not only forego the cash flows associated with missed interest payments that will never be made, but also incur the additional costs of administering the loss mitigation programs and making necessary P&I and escrow advances.

Overlaying climate scenario analysis on top of traditional credit modeling is unquestionably the future of quantifying mortgage asset exposure. And in many respects, the future is already here. Regulatory guidance is forthcoming requiring public companies to quantify their exposure to climate risk across three categories: acute physical risk, chronic physical risk, and economic transition risk.

Acute Risk

Acute climate risk describes a property’s exposure to individual catastrophic events. As a result of climate change, these events are expected to increase in frequency and severity. The property insurance space already has analytical tools in place to quantify property damage to hazard risks such as:

  • Hurricane, including wind, storm surge, and precipitation-induced flooding
  • Flooding, including "fluvial" (river) and "pluvial" (rainfall-driven, often off-floodplain) flooding
  • Wildfire
  • Severe thunderstorm, including exposure to tornadoes, hail, and straight-line wind, and
  • Earthquake – though not tied to climate change, earthquakes remain a massively underinsured risk that can impact MSR holders

Acute risks are of particular concern for MSR holders, as disaster events have proven to increase both mortgage delinquency and prepayment. The chart below illustrates these impacts after Hurricane Katrina.

Chronic Risk

Chronic risk characterizes a property’s exposure to adverse conditions brought on by longer-term concerns. These include frequent flooding, sea level rise, drought hazards, heat stress, and water shortages. These effects could erode home values or put entire communities at risk over a longer period. Models currently in use forecast these risks over 20- and 25-year periods.

Transition Risk

Transition risk describes exposure to changing policies, practices or technologies that arise from a broader societal move to reduce its carbon footprint. These include increases in the direct cost of homeownership (e.g., taxes, insurance, code compliance, etc.), increased energy and other utility costs, and localized employment shocks as businesses and industry leave high-risk areas. Changing property insurance requirements (by the GSEs, for example) could further impact property valuations in affected neighborhoods.

———–

Converting acute, chronic and transition risks into mortgage modeling scenarios can only be done effectively at the loan level. Rep-lines cannot adequately capture them. As with most prepayment and credit modeling, accounting for climate risk is an exercise in scenario analysis. Building realistic scenarios involves taking several factors into account.

Scenario Analysis

Quantifying physical risks (whether acute or chronic) entails identifying:

  • Which physical hazard types the property is exposed to
  • How each hazard type threatens the property[1]
  • The materiality of each hazard; and
  • The most likely timeframes over which these hazards could manifest

Factoring climate risk into MSR pricing requires translating the answers to the questions above into mortgage modeling scenarios that function as credit and prepayment model inputs. The following table is an example of how RiskSpan overlays the impact of an acute event (specifically, a Category 5 hurricane in South Florida) on home price, delinquency, turnover, and macroeconomic conditions.

 

[Table: Example scenario overlay for a Category 5 hurricane in South Florida, showing assumed impacts on home prices, delinquency, turnover, and macroeconomic conditions]
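A minimal sketch of how such an overlay might be wired into loan-level model inputs appears below. The shock values and field names are hypothetical illustrations, not RiskSpan's calibration:

```python
import pandas as pd

# Hypothetical scenario shocks for a Category 5 hurricane in South Florida;
# values are illustrative only.
scenario = {
    "hpi_shock":          -0.15,  # 15% decline in local home prices
    "delinq_multiplier":   4.0,   # baseline delinquency rates scaled up
    "turnover_multiplier": 1.5,   # prepayments spike as borrowers relocate
}

loans = pd.DataFrame({
    "loan_id":    [1, 2],
    "home_value": [350_000, 500_000],
    "base_cdr":   [0.002, 0.003],  # baseline monthly default rate
    "base_crr":   [0.010, 0.012],  # baseline monthly voluntary prepay rate
})

loans["stressed_value"] = loans["home_value"] * (1 + scenario["hpi_shock"])
loans["stressed_cdr"]   = loans["base_cdr"] * scenario["delinq_multiplier"]
loans["stressed_crr"]   = loans["base_crr"] * scenario["turnover_multiplier"]
print(loans)
```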

Applying this framework to an MSR portfolio requires integration with an MSR cash flow engine. MSR cash flows and the resulting valuation are driven by the manner in which the underlying delinquency and prepayment models are affected. However, at least two other factors affect servicing cash flows beyond simply the probability of the asset remaining on the books. Both of these are likely impacted by climate risk.

  • Servicing Costs: Rising delinquency rates are always accompanied by corresponding increases in the cost of servicing. An example of the extent to which delinquencies can affect servicing costs was presented in our previous paper. MSR pricing models take this into account by applying a different cost of servicing to delinquent loans. Some believe, however, that servicing loans that enter delinquency in response to a natural disaster can be even more expensive (all else equal) than servicing a loan that enters delinquency for other reasons. Reasons for this range from the inherent difficulty of reaching displaced persons to the layering impact of multiple hardships such events tend to bring upon households at once.[2]
  • Recapture Rate: The data show that prepayment rates consistently spike in the wake of natural disasters. What is less clear is whether there is a meaningful difference in the recapture rate for these prepayments. Anecdotally, recapture appears lower in the case of natural disaster, but we do not have concrete data on which to base assumptions. This is clearly only relevant to MSR investors that also have an origination arm with which to capture loans that refinance.

Climate risk encompasses a wide range of perils, each of which affects MSR values in a unique way. Hurricanes, wildfires, and droughts differ not only in their geography but in the specific type of risk they pose to individual properties. Even if there were a way of assigning every property in an MSR portfolio a one-size-fits-all quantitative score, computing a “weighted average climate risk” value and applying it to a rep-line would be problematic. Such an average would be denuded of any nuance specific to individual perils. Peril-specific data is critical to being able to make the LTV, delinquency, turnover and macroeconomic assumption adjustments outlined above.

And there is no way around it. Doing all this requires a loan-by-loan analysis. RiskSpan’s Edge Platform was purpose built to analyze mortgage portfolios at the loan level and is becoming the industry’s go-to solution for measuring and managing exposures to market, credit and climate events.

Contact us to learn more.


[1] Insurability of hazards varies widely, even before insurance requirements are considered.

[2] In addition, because servicers normally staff for business-as-usual levels of delinquencies, a large acute event will create a significant spike in the demand for servicer personnel. If a servicer’s book is heavily concentrated in the Southeast, for example, a devastating storm could result in having to triple the number of people actively servicing the portfolio.


Improving the Precision of MSR Pricing Using Cloud-Native Loan-Level Analytics (Part I)

Traditional MSR valuation approaches based on rep lines and loan characteristics important primarily to prepayment models fail to adequately account for the significant impact of credit performance on servicing cash flows, even on Agency loans. Incorporating both credit and prepayment modeling into an MSR valuation regime requires a loan-by-loan approach; rep lines are simply insufficient to capture the necessary level of granularity. Performing such an analysis while evaluating an MSR portfolio containing hundreds of thousands of loans for potential purchase has historically been viewed as impractical. But thanks to today's cloud-native technology, loan-level MSR portfolio pricing is not just practical but cost-effective.

Introduction

Mortgage Servicing Rights (MSRs) entitle the asset owner to receive a monthly fee in return for providing billing, collection, collateral management and recovery services with respect to a pool of mortgages on behalf of the beneficial owner(s) of those mortgages. This servicing fee consists primarily of two components based on the current balance of each loan: a base servicing fee (commonly 25bps of the loan balance) and an excess servicing fee. The latter is simply the difference between each loan rate and the sum of the pass-through rate of interest and the base servicing fee. The value of a portfolio of MSRs is determined by modeling the projected net cash flows to the owner and discounting them to the present using one of two methodologies:

  1. Static or Single-Path Pricing: A single series of net servicing cash flows are generated using current interest and mortgage rates which are discounted to a present value using a discount rate reflecting current market conditions.
  2. Stochastic or Option-Adjusted Spread (OAS) Pricing: Recognizing that interest rates will vary over time, a statistical simulation of interest rates is used to generate many time series (typically 250 to 1,000) of net servicing cash flows.  Each time series of cash flows is discounted at a specified spread over a simulated base curve (generally the LIBOR or Treasury curve) and the resulting present value is averaged across all of the paths.

While these two pricing methodologies have different characteristics and are based on very different conceptual frameworks, both depend strongly on the analyst's ability to generate reliable forecasts of net servicing cash flows. As the focus of this white paper is the key factors that determine those net cash flows, we are indifferent here as to the ultimate methodology used to convert them into a present value and, for simplicity, will project a single path of net cash flows. RiskSpan's Edge platform supports both static and OAS pricing, and RiskSpan's clients use each, and sometimes both, to value their mortgage instruments.
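To make the two methodologies concrete, here is a minimal sketch of both calculations on toy cash flows. The path generation is stubbed out with random variation; a real implementation would drive each path's cash flows with prepayment and credit models run along a simulated rate curve:

```python
import numpy as np

def present_value(cash_flows, monthly_rate):
    """Discount a monthly net-servicing-cash-flow vector to a present value."""
    periods = np.arange(1, len(cash_flows) + 1)
    return np.sum(np.asarray(cash_flows) / (1 + monthly_rate) ** periods)

# Static pricing: one path of net cash flows, one discount rate.
single_path = np.full(360, 100.0)  # hypothetical $100/month for 30 years
static_px = present_value(single_path, 0.06 / 12)

# OAS-style pricing: average the PV across many simulated paths, each with
# its own cash flows and its own discount rate.
rng = np.random.default_rng(0)
n_paths = 250
paths = single_path * rng.uniform(0.6, 1.2, size=(n_paths, 1))  # toy variation
path_rates = 0.06 / 12 + rng.normal(0.0, 0.002, size=n_paths)
stochastic_px = np.mean([present_value(cf, r) for cf, r in zip(paths, path_rates)])

print(f"static PV: {static_px:,.0f}   stochastic PV: {stochastic_px:,.0f}")
```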

Modeling Mortgage Cash Flows

Residential mortgages are complex financial instruments. While they are, at heart, fixed income instruments with a face amount and a fixed or floating rate of interest, the ability of borrowers to voluntarily prepay at any time adds significant complexity. This prepayment option can be triggered by an economic incentive to refinance into a lower interest rate, by a decision to sell the underlying property, or by a change in life circumstances leading the borrower to pay off the mortgage but retain the property.

The borrower also has a non-performance option. Though not usually exercised voluntarily, forbearance options made available to borrowers in response to Covid permitted widespread voluntary exercise of this option without meaningful negative consequences to borrowers. This non-performance option ranges from something as simple as a single late payment up to cessation of payments entirely and forfeiture of the underlying property. Forbearance (a payment deferral on a mortgage loan permitted by the servicer or by regulation, such as the COVID-19 CARES Act) became a major factor in understanding the behavior of mortgage cash flows in 2020. Should a loan default, ultimate recovery depends on a variety of factors, including the loan-to-value ratio, external credit support such as primary mortgage insurance, and costs and servicer advances paid from liquidation proceeds.

Both the prepayment and credit performance of mortgage loans are estimated with statistical models whose structure and parameters are drawn from an extremely large dataset of historical performance. Because these models are estimated from backward-looking experience, analysts often adjust them to reflect future expectations.

Investors in GSE-guaranteed mortgage pass-through certificates are exposed to voluntary and, to a far lesser extent, involuntary (default) prepayments of the underlying mortgages. If the certificates were purchased at a premium and prepayments exceed expectations, the investor's yield will be reduced. Conversely, if the certificates were purchased at a discount and prepayments accelerate, the investor's yield will increase. Guaranteed pass-through certificate investors are not exposed to the credit performance of the underlying loans except to the extent that delinquencies may suppress voluntary prepayments. Involuntary prepayments and early buyouts of delinquent loans from MBS pools are analogous to prepayments from a cash flow perspective when it comes to guaranteed Agency securities.

Investors in non-Agency securities and whole loans are exposed to the same prepayment risk as guaranteed pass-through investors, but they are also exposed to the credit performance of each loan. And MSR investors are exposed to credit risk irrespective of whether the loans they service are guaranteed. Here is why. The mortgage servicing fee can be simplistically represented by an interest-only (IO) strip carved off of the interest payments on a mortgage. Net MSR cash flows are obtained by subtracting a fixed servicing cost. Securitized IOs are exposed to the same factors as pass-through certificates, but their sensitivity to those factors is magnitudes greater because a prepayment constitutes the termination of all further cash flows; no principal is received. Consequently, returns on IO strips are very volatile and sensitive to interest rates via the borrower's prepayment incentive.
While subtracting fixed costs from the servicing fee is still a common method of generating net MSR cash flows, it is a very imprecise methodology, subject to significant error. The largest component of this error arises from the fact that servicing cost is highly sensitive to the credit state of a mortgage loan. Is the loan current, requiring no intervention on the part of the servicer to obtain payment, or is it delinquent, triggering additional, and potentially costly, servicer processes that attempt to restore the loan to current? Is it seriously delinquent, requiring a still higher level of intervention, or in default, necessitating a foreclosure and liquidation effort?

According to the Mortgage Bankers Association, the cost of servicing a non-performing loan ranged from eight to twenty times the cost of servicing a performing loan during the ten-year period from 2009 to 1H2019 (Source: Servicing Operations Study and Forum; PGR 1H2019). Using 2014 as the mid-point of both this ratio and the time period under consideration, the direct cost of servicing a performing loan was $156, compared to $2,000 for a non-performing loan. Averaged across both performing and non-performing loans, direct servicing costs were $171 per loan, with an additional cost of $31 per loan arising from unreimbursed expenditures related to foreclosure, REO and other costs, plus an estimated $58 per loan of corporate administration expense, totaling $261 per loan. The average loan balance of FHLMC and FNMA loans in 2014 was approximately $176,000, translating to an annual base servicing fee of $440.

The margins illustrated by these figures demonstrate the extreme sensitivity of net servicing cash flows to the credit performance of the MSR portfolio. After prepayments, credit performance is the most important factor determining the economic return from investing in MSRs. A 1% increase in non-performing loans from the 10-year average of 3.8% results in a roughly $20 per-loan net cash flow decline across the entire portfolio (we sanity-check this arithmetic in the short sketch following the chart below). Consequently, for servicers who purchase MSR portfolios, careful integration of credit forecasting models into the MSR valuation process, particularly for portfolio acquisitions, is critical.

RiskSpan's MSR engine integrates both prepayment and credit models, permitting the precise estimation of net cash flows to MSR owners. The primary process affecting the cash inflow to the servicer is prepayment; when a loan prepays, the servicing fee is terminated. The cash outflow side of the equation depends on a number of factors:

  1. First and foremost, direct servicing cost is extremely sensitive to loan performance. The direct cost of servicing rises rapidly as delinquency status becomes increasingly severe. The direct servicing cost of a 30-day delinquent loan varies by servicer but can be as high as 350% of the cost of servicing a performing loan. These costs rise to 600% of a performing loan's cost at 60 days delinquent.
  2. Increasing delinquency causes other costs to escalate, including the cost of principal and interest as well as tax and escrow advances, non-reimbursable collateral protection, foreclosure and liquidation expenses. Float decreases, reducing interest earnings on cash balances.

[Figure: Average servicing cost by delinquency state. Source: several leading servicers of Agency and non-Agency mortgages.]
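As a quick sanity check of the sensitivity quoted above, the arithmetic can be reproduced directly from the 2014 MBA cost figures:

```python
# Back-of-envelope check using the 2014 MBA direct-cost figures quoted above:
# $156 per performing loan vs $2,000 per non-performing loan.
COST_PERFORMING, COST_NONPERFORMING = 156, 2_000

# A 1-point rise in the non-performing share moves 1% of loans from the
# cheap bucket to the expensive one:
delta_per_loan = 0.01 * (COST_NONPERFORMING - COST_PERFORMING)
print(f"~${delta_per_loan:.0f} per loan")  # about $18, i.e. roughly $20
```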


RiskSpan’s MSR platform incorporates the full range of input parameters necessary to fully characterize the positive and negative cash flows arising from servicing. Positive cash flows include the servicing and other fees collected directly from borrowers as well as various types of ancillary and float income. Major contributors to negative cash flows include direct labor costs associated with performing servicing activities as well as unreimbursed foreclosure and liquidation costs, compensating interest and costs associated with financing principal, interest and escrow advances on delinquent loans. The net cash flows determined at the loan level are aggregated across the entire MSR portfolio and the client’s preferred pricing methodology is applied to calculate a portfolio value.




Aggregation of MSR Portfolio Cash Flows – Loan-by-Loan vs “Rep Lines”

Historically, servicer net cash flows were determined using a simple methodology in which the base servicing fee was reduced by the servicing cost, and forecast prepayments were projected using a prepayment model. The impact of credit performance on net cash flows was explicitly considered by only a minority of practitioners.

Because servicing portfolios can contain hundreds of thousands or millions of loans, the computational challenge of generating net servicing cash flows was quite high. As the industry moved increasingly towards using OAS pricing and risk methodologies to evaluate MSRs, this challenge was multiplied by 250 to 1,000, depending on the number of paths used in the stochastic simulation.

In order to make the computational challenge more tractable, loans in large portfolios have historically been allocated to buckets according to the values of the characteristics of each loan that most explained its performance. In a framework that considered prepayment risk to be the major factor affecting MSR value, the superset of characteristics that mattered were those that were inputs to the prepayment model. This superset was then winnowed down to a handful of characteristics that were considered most explanatory. Each bucket would be converted to a “rep line” that represented the average of the values for each loan that were input into the prepayment models.




Medium-sized servicers historically might have created 500 to 1,500 rep lines to represent their portfolio. Large servicers today may use tens of thousands.

The core premise supporting the distillation of a large servicing portfolio into a manageable number of rep lines is that each bucket represents a homogenous group of loans that will perform similarly, so that the aggregated net cash flows derived from the rep lines will approximate the performance of the sum of all the individual loans to a desired degree of precision.
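The basic mechanics of that distillation can be sketched as follows: bucket a toy loan tape, then collapse each bucket into a rep line carrying UPB-weighted averages of model inputs. Field names and bucketing rules are illustrative only:

```python
import numpy as np
import pandas as pd

# Hypothetical loan tape: bucket on vintage and a 50bp coupon grid.
loans = pd.DataFrame({
    "upb":     [200_000, 250_000, 150_000, 300_000],
    "rate":    [3.250, 3.375, 4.500, 4.625],
    "fico":    [760, 720, 680, 700],
    "vintage": [2020, 2020, 2018, 2018],
})
loans["coupon_bucket"] = (loans["rate"] * 2).round() / 2

def to_repline(bucket: pd.DataFrame) -> pd.Series:
    """Collapse one bucket into a rep line of UPB-weighted averages."""
    w = bucket["upb"] / bucket["upb"].sum()
    return pd.Series({
        "upb":     bucket["upb"].sum(),
        "rate":    np.dot(w, bucket["rate"]),
        "fico":    np.dot(w, bucket["fico"]),
        "n_loans": len(bucket),
    })

replines = loans.groupby(["vintage", "coupon_bucket"]).apply(to_repline)
print(replines)
```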

The degree of precision obtained from using rep lines was acceptable for valuing going-concern portfolios, particularly if variations in the credit of individual loans and the impact of credit on net cash flows were not explicitly considered.  Over time, movement in MSR portfolio values would be driven mostly by prepayments, which themselves were driven by interest rate volatility. If the modeled value diverged sufficiently from “fair value” or a mark provided by an external provider, a valuation adjustment might be made and reported, but this was almost always a result of actual prepayments deviating from forecast.

Once an analyst looks to incorporate credit performance into MSR valuation, the number of meaningful explanatory loan characteristics grows sharply. Not only must one consider all the variables used to project a mortgage's cash flows according to its terms (including prepayments), but it also becomes necessary to incorporate all the factors that help project exercise of the "default option." Suddenly, the number of loans that can be bucketed together and considered homogenous with respect to prepayment and credit performance drops sharply; the number of required buckets increases dramatically, to the point where the number of rep lines begins to rival the number of loans. The sheer computational power needed for such complex processing has only recently become available to most practitioners and requires a scalable, cloud-native solution to be cost-effective.

Two significant developments have forced mortgage servicers to more precisely project net mortgage cash flows:

  1. As the accumulation of MSRs by large market participants through outright purchase, rather than through loan origination, has grown dramatically, imprecision in valuation has become less tolerable because it could result in the servicer bidding too low or too high for a servicing package.
  2. FASB Accounting Standards Update 2016-13 obligated entities holding "financial assets and net investment in leases that are not accounted for at fair value through net income" to estimate expected credit losses over the life of the asset. While the Standard does not necessarily apply to MSRs, because most MSR investors account for the asset at fair value and flow fair-value marks through income, it did lead to a statement from the major regulators:

“If a financial asset does not share risk characteristics with other financial assets, the new accounting standard requires expected credit losses to be measured on an individual asset basis.” 

(Source: Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, National Credit Union Administration, and Office of the Comptroller of the Currency. “Joint Statement on the New Accounting Standard on Financial Instruments – Credit Losses.” June 17, 2016.).

The result of these developments is that a number of large servicers are revisiting their bucketing methodologies and considering using loan-level analyses to better incorporate the impact of credit on MSR value, particularly when purchasing new packages of MSRs. By enabling MSR investors to re-combine and re-aggregate cash flow results on the fly, loan-level projections open the door to a host of additional, scenario-based analytics. RiskSpan’s cloud-native Edge Platform is uniquely positioned to support these emerging methodologies because it was envisioned and built from the ground up as a loan-level analytical engine. The flexibility afforded by its parallel computing framework allows for complex net-cash-flow calculations on hundreds of thousands of individual mortgage loans simultaneously. The speed and scalability this affords makes the Edge Platform ideally suited for pricing even the largest portfolios of MSR assets and making timely trading decisions with confidence.


In Part II of this series, we will delve into how property-level risk characteristics (factors that are not easily rolled up into portfolio rep lines and must be evaluated at the loan level) impact credit risk and servicing cash flows. We will also quantify the impact on an MSR valuation of a loan-level analysis incorporating these factors.

Contact us to learn more.


Managing Market Risk for Crypto Currencies

 

Contents

Overview

Asset Volatility vs Asset Sensitivity to Benchmark (Beta)

Portfolio Asset Covariance

Value at Risk (VaR)

Bitcoin Futures: Basis and Proxies

Intraday Value at Risk (VaR)

Risk-Based Limits

VaR Validation (Bayesian Approach)

Scenario Analysis

Conclusion


Overview

Crypto currencies have now become part of institutional investment strategies. According to CoinShares, assets held under management by crypto managers reached $57B at the end of Q1 2021.  

Like any other financial asset, crypto investments are subject to market risk monitoring, with several approaches evolving. Crypto currencies exhibit no obvious correlation to other asset classes, risk factors, or economic variables. However, crypto currencies have exhibited high price volatility and have enough historical data to implement a robust market risk process.

In this paper we discuss approaches to implementing market risk analytics for a portfolio of crypto assets. We will look at betas to benchmarks, correlations, Value at Risk (VaR) and historical event scenarios. 

Value at Risk allows risk managers to implement risk-based limit structures, instead of relying on traditional notional measures. The methodology we propose enables consolidation of risk for crypto assets with the rest of the portfolio. We will also discuss the use of granular time horizons for intraday limit monitoring.

Asset Volatility vs Asset Sensitivity to Benchmark (Beta)

For exchange-traded instruments, beta measures the sensitivity of asset price returns relative to a benchmark. For US-listed large cap stocks, beta is generally computed relative to the S&P 500 index. For crypto currencies, several eligible benchmark indices have emerged that represent the performance of the overall crypto currency market.

We analyzed several currencies against S&P’s Bitcoin Index (SPBTC). SPBTC is designed to track the performance of the original crypto asset, Bitcoin. As market capitalization for other currencies grows, it would be more appropriate to switch to a dynamic multi-currency index such as Nasdaq’s NCI. At the time of this paper, Bitcoin constituted 62.4% of NCI.

Traditionally, beta is calculated over a variable time frame using a least squares fit on a linear regression of benchmark return and asset return. One of the issues with calculating betas is the variability of the beta itself. To overcome this, especially given the volatility of crypto currencies, we recommend using a rolling beta.

Due to the varying levels of volatility and liquidity of various crypto currencies, a regression model may not always be a good fit. In addition to tracking fit through R-squared, it is important to track confidence level for the computed betas.
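A minimal sketch of the rolling-beta calculation with an approximate confidence band is shown below, using plain pandas arithmetic rather than any particular regression library. The 90-observation window is an assumption for illustration:

```python
import numpy as np
import pandas as pd

def rolling_beta(asset_ret: pd.Series, bench_ret: pd.Series, window: int = 90):
    """Rolling OLS beta of asset returns on benchmark returns, with an
    approximate 95% confidence band from the regression standard error."""
    var_b = bench_ret.rolling(window).var()
    beta = asset_ret.rolling(window).cov(bench_ret) / var_b
    # Residual variance of the fit: Var(a) - beta^2 * Var(b), floored at zero
    resid_var = (asset_ret.rolling(window).var() - beta**2 * var_b).clip(lower=0)
    se = np.sqrt(resid_var / (var_b * (window - 2)))
    return pd.DataFrame({"beta": beta,
                         "lower": beta - 1.96 * se,
                         "upper": beta + 1.96 * se})

# Usage (hypothetical daily return series indexed by date):
# bands = rolling_beta(eth_returns, spbtc_returns, window=90)
```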

Figure 1: History of Beta to S&P Bitcoin Index with Confidence Intervals

The chart above shows rolling betas and confidence intervals for four crypto currencies between January 2019 and July 2021. Beta and confidence interval both vary over time and periods of high volatility (stress) cause a larger dislocation in the value of beta.

Rolling betas can be used to generate a hierarchical distribution of expected asset values.

Portfolio Asset Covariance

Beta is a useful measure to track an asset’s volatility relative to a single benchmark. In order to numerically analyze the risk exposure (variance) of a portfolio with multiple crypto assets, we need to compute a covariance matrix. Portfolio risk is a function not only of each asset’s volatility but also of the cross-correlation among them.

Figure 2: Correlations for 11 currencies (calculated using observations from 2021)

The table above shows a correlation matrix across 11 crypto assets, including Bitcoin.

Like betas, correlations among assets change over time. But correlation matrices are more unwieldy to track over time than betas are. For this reason, hierarchical models provide a good, practical framework for time-varying covariance matrices.

Value at Risk (VaR)

The VaR for a position or portfolio can be defined as some threshold Τ (in dollars) where the existing position, when faced with market conditions resembling some given historical period, will have P/L greater than Τ with probability k. Typically, k is chosen to be 99% or 95%.

To compute this threshold Τ, we need to:

  1. Set a significance percentile k, a market observation period, and holding period n.
  2. Generate a set of future market conditions (scenarios) from today to period n.
  3. Compute a P/L on the position for each scenario

After computing each position's P/L, we sum the P/L for each scenario and then rank the scenarios' P/Ls to find the kth percentile (worst) loss. This loss defines our VaR Τ at the kth percentile for observation-period length n.

Determining what significance percentile k and observation length n to use is straightforward and often dictated by regulatory rules. For example, 99th percentile 10-day VaR is used for risk-based capital under the Market Risk Rule. Generating the scenarios and computing P/L under these scenarios is open to interpretation. We cover each of these, along with the advantages and drawbacks of each, in the next two sections.

To compute VaR, we first need to generate forward-looking scenarios of market conditions. Broadly speaking, there are two ways to derive this set of scenarios:

  1. Project future market conditions using historical (actual) changes in market conditions
  2. Project future market conditions using a Monte Carlo simulation framework

In this paper, we consider a historical simulation approach.

RiskSpan projects future market conditions using actual (observed) n-period changes in market conditions over the lookback period. For example, if we are computing 1-day VaR for regulatory capital usage under the Market Risk Rule, RiskSpan takes actual daily changes in risk factors. This approach allows our VaR scenarios to account for natural changes in correlation under extreme market moves, capturing changing correlations without an arbitrary overlay of how correlations should change in extreme markets. This, in turn, more accurately captures VaR. Please note that newer crypto currencies may not have enough data to generate a meaningful set of historical scenarios; in these cases, a benchmark adjusted by a short-term beta may be used as an alternative.
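The historical simulation approach described above reduces to a few lines of code. The sketch below assumes a simple spot position; the parameter defaults are illustrative:

```python
import numpy as np

def historical_var(prices, position_value, horizon_days=1, pctile=99, lookback=252):
    """Historical-simulation VaR: apply observed n-day price changes over the
    lookback window to today's position and read off the loss percentile."""
    px = np.asarray(prices, dtype=float)[-(lookback + horizon_days):]
    n_day_returns = px[horizon_days:] / px[:-horizon_days] - 1.0
    pnl = position_value * n_day_returns      # one P/L per scenario
    return -np.percentile(pnl, 100 - pctile)  # loss at the kth percentile

# Usage (hypothetical price series): historical_var(btc_prices, 1_000_000)
```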

One key consideration for the historical simulation approach is the selection of the observation window or lookback period. Most regulatory guidelines require at least a one-year window. However, practitioners also recommend a shorter lookback period for highly volatile assets. In the chart below we illustrate how VaR for our portfolio of crypto currencies changes for a range of lookback periods and confidence intervals. Please note that VaR is expressed as a percentage of portfolio market value.

An exponentially weighted moving average (EWMA) methodology can be used to overcome the challenges associated with a shorter lookback period. This approach emphasizes recent observations by using exponentially weighted moving averages of squared deviations. In contrast to equally weighted approaches, it attaches different weights to the past observations contained in the observation period. Because the weights decline exponentially, the most recent observations receive much more weight than earlier observations.

Figure 3: Daily VaR as % of Market Value calculated using various historical observation periods
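For reference, the EWMA recursion described above can be sketched as follows (the decay factor lambda = 0.94 is the classic RiskMetrics daily value, an assumption rather than a recommendation):

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """Exponentially weighted moving average of squared deviations: the most
    recent observations receive much more weight than earlier ones."""
    returns = np.asarray(returns, dtype=float)
    var = returns[0] ** 2                 # seed the recursion
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return np.sqrt(var)
```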

VaR as a single number does not represent the distribution of P/L outcomes. In addition to computing VaR under various confidence intervals, we also compute expected shortfall, worst loss, and standard deviation of simulated P/L vectors. Worst loss and standard deviation are self-explanatory while the calculation of expected shortfall is described below.

Expected shortfall is the average of all the P/L figures to the left of the VaR figure. If we have 1,000 simulated P/L observations ranked from best to worst, and the VaR is the 950th observation, the expected shortfall is the average of observations 951 through 1,000.
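In code, both measures fall out of the same sorted P/L vector, following the ranking convention just described:

```python
import numpy as np

def var_and_shortfall(pnl, pctile=95):
    """VaR and expected shortfall from a vector of simulated P/Ls."""
    pnl = np.sort(np.asarray(pnl, dtype=float))           # worst outcomes first
    n_tail = int(round(len(pnl) * (100 - pctile) / 100))  # e.g. 50 of 1,000
    var = -pnl[n_tail]                    # e.g. the 950th-ranked observation
    shortfall = -pnl[:n_tail].mean()      # average of the P/Ls beyond VaR
    return var, shortfall
```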


The table below presents VaR-related metrics as a percentage of portfolio market value under various lookback periods.

Figure 4: VaR for a portfolio of crypto assets computed for various lookback periods and confidence intervals

Bitcoin Futures: Basis and Proxies

One of the most popular trades in commodity futures is the basis trade, in which traders build a strategy around the difference between the spot price and the futures contract price of a commodity. This exists in corn, soybeans, oil and, of course, Bitcoin. For the purpose of calculating VaR, specific contracts may not provide enough history, so risk systems use continuous contracts. Continuous contracts introduce additional basis, as seen in the chart below. Risk managers need to work with the front office to align risk factor selection with trading strategies, without compromising the independence of the risk process.

Figure 5: BTC/Futures basis difference between generic and active contracts

Intraday Value at Risk (VaR)

The highly volatile nature of crypto currencies requires another consideration for VaR calculations. A typical risk process is run at the end of the day, and VaR is calculated for a one-day or longer forecasting period. But crypto currencies, especially Bitcoin, can also show significant intraday price movements.

We obtained intraday prices for Bitcoin (BTC) from Gemini, which is ranked third by volume. This data was normalized to create time series to generate historical simulations. The chart below shows VaR as a percentage of market value for Bitcoin (BTC) for one-minute, one-hour and one-day forecasting periods. Our analysis shows that a Bitcoin position can lose as much as 3.5% of its value in one hour (99th percentile VaR).

[Figure: Intraday VaR as a percentage of market value for Bitcoin (BTC) over one-minute, one-hour, and one-day horizons]
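A sketch of the intraday calculation: resample a minute-level price series to the desired horizon and apply the same historical-simulation percentile logic. The series name and frequency are assumptions:

```python
import numpy as np
import pandas as pd

def intraday_var(minute_prices: pd.Series, freq="1H", pctile=99):
    """Sub-daily historical-simulation VaR from a minute-level price series
    (a pandas DatetimeIndex is assumed)."""
    bars = minute_prices.resample(freq).last().dropna()  # e.g. hourly closes
    returns = bars.pct_change().dropna()
    return -np.percentile(returns, 100 - pctile)  # loss as a fraction of MV

# Usage (hypothetical series): intraday_var(btc_minute_px, freq="1H")
```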

 

Risk-Based Limits 

Since its inception as a concept, Value at Risk has been used by companies to manage limits for trading units. VaR serves as a single risk-based limit metric with several advantages and a few challenges:

Pros of using VaR for risk-based limits:

  • VaR can be applied across all levels of portfolio aggregation.
  • Aggregations can be applied across varying exposures and strategies.
  • Today’s cloud scale makes it easy to calculate VaR using granular risk factor data.

The main challenge is that VaR can be subject to model risk and manipulation. Transparency and the use of market risk factors can avoid this pitfall.

The ability to calculate intraday VaR is key to a risk-based limit implementation for crypto assets. Risk managers should consider at least an hourly VaR limit in addition to traditional daily limits.

VaR Validation (Bayesian Approach)

Standard approaches for back-testing VaR are applicable to portfolios of crypto assets as well.

Given the volatile nature of this asset class, we also explored an approach to validating the confidence interval and percentiles implied from historical simulations. Although this is a topic that deserves its own document, we present a high-level explanation and results of our analysis.

Building on an approach first proposed in the Pyfolio library, we used the Pymc3 package to generate a posterior distribution from our historical simulation data.

Sampling routines from Pymc3 were used to generate 10,000 simulations of the 3-year lookback case. A distribution of percentiles (VaR) was then computed across these simulations.
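The following is a minimal sketch of this kind of posterior simulation, assuming a Student-t model for daily returns (the distributional form popularized by Pyfolio's Bayesian analysis). The placeholder return series and priors are illustrative, not the actual data or calibration behind the results reported below.

```python
import numpy as np
import pymc3 as pm

# Placeholder: roughly three years of daily returns (substitute observed data)
returns = np.random.standard_t(df=4, size=756) * 0.04

# Fit a heavy-tailed return distribution and sample from its
# posterior predictive distribution
with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=0.1)
    sigma = pm.HalfNormal("sigma", sigma=0.1)
    nu = pm.Exponential("nu", lam=0.1)  # degrees of freedom
    pm.StudentT("obs", nu=nu, mu=mu, sigma=sigma, observed=returns)
    trace = pm.sample(2_000, tune=1_000, progressbar=False)
    ppc = pm.sample_posterior_predictive(trace, samples=10_000)

# Each posterior draw yields its own return distribution; compute the 95th
# percentile loss (VaR) for each to obtain a distribution of VaR estimates
var_dist = -np.percentile(ppc["obs"], 5, axis=1)
print(f"Mean posterior 95% VaR: {var_dist.mean():.2%}")
print(f"99th percentile of the VaR distribution: {np.percentile(var_dist, 99):.2%}")
```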

The distribution shows a mean 95th percentile VaR of 7.3%, versus 8.9% calculated using the historical simulation approach. However, the tail of the distribution indicates a VaR closer to the historical simulation result. One could conclude that the original calculation still represents the extreme case, which is the motivation behind VaR.

Figure 6: Distribution of percentiles generated from posterior simulations

Scenario Analysis

In addition to standard shock scenarios, we recommend using the volatility of Bitcoin to construct a simulation of outcomes. The chart below shows the change in Bitcoin (BTC) volatility around select events over the last two years. Beyond standard macro events, crypto assets respond to cybersecurity events and media effects, including social media.

Figure 7: Weekly observed volatility for Bitcoin

Conclusion

Given the volatility of crypto assets, we recommend, to the extent possible, a probability-distribution approach. At the very least, risk managers should monitor changes in the relationships (betas) among assets.

For most financial institutions, crypto assets are part of portfolios that include other traditional asset classes. A standard approach must be used across all asset classes, which may make it challenging to apply shorter lookback windows for computing VaR. The exponentially weighted moving average approach (described above) may be considered instead.

Intraday VaR for this asset class can be significant, and risk managers should set appropriate limits to manage downside risk.

Idiosyncratic risks associated with this asset class have created a need to monitor scenarios not historically applicable to other asset classes. Indeed, scenarios pertaining to cyber risk are now beginning to be applied across other asset classes as well.




Validating Structured Finance Models

Introduction: Structured Finance Models

Models used to govern the valuation and risk management of structured finance instruments take a variety of forms. Unlike conventional equity investments, structured finance instruments are often customized to meet the unique needs of specific investors. They are tailored to mitigate various types of risk, including interest rate risk, credit risk, market risk, and counterparty risk. Structured finance instruments may therefore be derived from a compilation of loans, stocks, indices, or derivatives. Mortgage-backed securities (MBS) are the most ubiquitous example, but structured finance instruments also include:

  • Derivatives
  • Collateralized Mortgage Obligations (CMO)
  • Collateralized Bond Obligations (CBO)
  • Collateralized Debt Obligations (CDO)
  • Credit Default Swaps (CDS)
  • Hybrid Securities

Pricing and measuring the risk of these instruments is typically carried out using an integrated web of models. One set of models might be used to derive a price based on discounted cash flows. Once cash flows and corresponding discounting factors have been established, other models might be used to compute risk metrics (duration and convexity) and financial metrics (NII, etc.).

These models can be grouped into three major categories:

  • Curve Builder and Rate Models: Market rates are fundamental to valuing most structured finance instruments. Curve builders calibrate market curves (treasury yield curve, Libor/Swap Rate curve, or SOFR curve) using the market prices of the underlying bond, future, or swap. Interest rate models take the market curve as an input and generate simulated rate paths as the future evolution of the selected type of the market curve.

  • Projection Models: Using the market curve (or the single simulated rate path), a current coupon projection model projects forward 30-year and 15-year fixed mortgage rates. Macroeconomic models project future home values using a housing-price index (HPI). Prepayment models estimate how quickly loans are likely to pay down based on mortgage rate projections and other macroeconomic projections. And roll-rate models forecast the probability of a loan’s transitioning from one current/default state to another.

  • Cash Flow Models and Risk Metrics: Cash flow models combine the deal information of the underlying structured instrument with related rate projections to derive an interest-rate-path-dependent cash flow.

The following illustrates how the standard discounted cash flow approach works for a mortgage-related structured finance instrument:

Figure: Standard discounted cash flow approach for a mortgage-related structured finance instrument

Most well-known analytic solutions apply this discounted cash flow approach, or some adaptation of it, in analyzing structured finance instruments.

Derivatives introduce an additional layer of complexity that often calls for approaches and models beyond the standard discounted cash flow approach. Swaptions and interest rate caps and floors, for example, require a deterministic approach, such as the Black model. For bond option pricing, lattice models or tree structures are commonly used. The specifics of these models are beyond the scope of this presentation, but many of the general model validation principles applied to discounted cash flow models are equally applicable to derivative pricing models.

Validating Curve Builder and Rate Models

Curve Builders

Let’s begin with the example of a curve builder designed for calibrating the on-the-run U.S. Treasury yield curve. To do this, the model takes a list of eligible on-the-run Treasury bonds as its key inputs, which serve as the fitting knots[1]. A proper interpolator that connects all the fitting knots is then used to smooth the curve and generate monthly or quarterly rates for all maturities out to 30 years. If abnormal increments or decrements are observed in the calibrated yield curve, adjustments are made to alleviate deviations between the fitting knots until the fitted yield curve is stable and smooth. A model validation report should include a thorough conceptual review of how the model carries out this task.
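As a simple illustration of the interpolation step, the sketch below fits a shape-preserving cubic through a set of hypothetical par-yield knots. The maturities and yields are invented for illustration; production curve builders use more sophisticated fitting (for example, splines on discount factors with smoothing penalties).

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical on-the-run Treasury par yields used as fitting knots
knot_maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])      # years
knot_yields = np.array([1.50, 1.55, 1.60, 1.70, 1.80, 2.00,
                        2.15, 2.30, 2.55, 2.60]) / 100                  # decimals

# A shape-preserving (monotone cubic) interpolator avoids the spurious
# oscillations a naive cubic spline can introduce between knots
curve = PchipInterpolator(knot_maturities, knot_yields)

# Generate monthly par rates for all maturities out to 30 years
monthly_tenors = np.arange(1, 361) / 12.0
monthly_rates = curve(monthly_tenors)
```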

Based on the market-traded securities selected, the curve builder is able to generate an on-the-run or off-the-run Treasury yield curve, a LIBOR swap curve, a SOFR curve, or whatever else is needed. The curve builder serves as the basis for measuring nominal and option-adjusted spreads for many types of securities and for applying spreads whenever a spread is used to determine model price.

A curve builder’s inputs are therefore a set of market-traded securities. To validate the inputs, we take the market price of the fitting knots for three month-end trading dates and compare them against the market price inputs used in the curve builder. We then calibrate the par rate and spot rate based on the retrieved market price and compare it with the fitted curve generated from the curve builder.

To validate the curve builder's model structure and development, we check the internal transitions among the model-provided par, spot, and forward rates on three month-end trading dates. Different compounding frequencies can significantly impact these transitions. We also review the model's assumptions, limitations, and the governance activities established by the model owner.
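As an example of one such transition check, the implied forward rate between two tenors can be recomputed directly from the spot rates and compared with the model's output; note how the compounding convention enters the formula.

```python
def implied_forward(s1, t1, s2, t2, comp=2):
    """Annualized forward rate between times t1 and t2 (in years) implied by
    spot rates s1 and s2, with `comp` compounding periods per year."""
    growth = (1 + s2 / comp) ** (comp * t2) / (1 + s1 / comp) ** (comp * t1)
    return comp * (growth ** (1 / (comp * (t2 - t1))) - 1)

# e.g., 1-year and 2-year spot rates of 1.60% and 1.70% (semiannual compounding)
# imply a one-year forward one-year rate of roughly 1.80%
fwd = implied_forward(0.016, 1.0, 0.017, 2.0)
```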

Validating model outputs usually begins by benchmarking the outputs against a similar curve provided by Bloomberg or another reputable challenger system. Next, we perform a sensitivity analysis to check the locality and stability of the forward curve by shocking the input fitting knots and analyzing its impact on the model-provided forward curve. For large shocks (i.e., 300 bp or more) we test boundary conditions, paying particular attention to the forward curve. Normally, we expect to see forwards not becoming negative, as this would breach no-arbitrage conditions.

For the scenario analysis, we test the performance of the curve builder during periods of stress and other significant events, including major bond market movement dates, Federal Open Market Committee (FOMC) dates, and Treasury auction dates. The selected dates cover significant events for Treasury/bond markets and provide meaningful analysis for the validation.

Interest Rate Models

An interest rate model is a mathematical model that is mainly used to describe the future evolution of interest rates. Its principal output is a simulated term structure, which is the fundamental component of a Monte Carlo simulation. Interest rate models typically fall into one of two broad categories:

  • Short-rate models: A short-rate model describes the future evolution of the short rate (the instantaneous spot rate, usually written r(t)).
  • LIBOR Market Model (LMM): An LMM describes the future evolution of forward rates, usually written F(t; T1, T2). Unlike the instantaneous spot rate, forward rates can be observed directly from the market, as can their implied volatility.

This blog post provides additional commentary around interest rate model validations.

Conceptual soundness and model theory reviews are conducted based on the specific interest rate model’s dynamics. The model inputs, regardless of the model structure selected, include the selected underlying curve and its corresponding volatility surface as of the testing date. We normally benchmark model inputs against market data from a challenger system and discuss any observed differences.

We then examine the model's output, which is the set of stochastic paths comprising the required spot rates or forward LIBOR and swap rates, along with the discount factors consistent with the simulated rates. To check the no-arbitrage condition in the simulated paths, we compare the mean and median paths with the underlying curve and comment on the differences. We also measure the randomness of the simulated paths and compare it against the interest rate model's volatility parameter inputs.
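A minimal sketch of one such check appears below. It assumes the simulation outputs a matrix of per-path discount factors and compares their average at each tenor with the discount factors implied by the input curve; under the risk-neutral measure the two should agree within Monte Carlo error.

```python
import numpy as np

def no_arbitrage_check(path_discount_factors, curve_discount_factors):
    """Relative error, per tenor, between the average simulated discount factor
    and the input curve's discount factor.
    path_discount_factors: array of shape (n_paths, n_tenors)
    curve_discount_factors: array of shape (n_tenors,)"""
    mean_df = path_discount_factors.mean(axis=0)
    return (mean_df - curve_discount_factors) / curve_discount_factors

def flag_breaches(rel_errors, tol=0.001):
    """Indices of tenors where the relative error exceeds a tolerance (10 bps here)."""
    return np.where(np.abs(rel_errors) > tol)[0]
```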

Based on the simulated paths, an LMM also provides calibrated at-the-money (ATM) swaption volatility. We compare the LMM's implied ATM swaption volatility with its inputs and with market rates from the challenger system as a review of the model calibration. For the LMM, we also compare the model against history on the correlation of forward swap rates and the serial correlation of a forward LIBOR rate. A good LMM specification generates realistic swap rates whose correlations are consistent with historical values.

Validating Projection Models

Projection models come in various shapes and sizes.

“Current Coupon” Models

Current coupon models generate mortgage rate projections based on a market curve or a single simulated interest rate path. These projections are a key driver of prepayment projection models and mortgage valuation models. A number of model structures can explain the current coupon projection, ranging from the simple constant-spread method to the recursive forward-simulation method. Since it has traditionally been assumed that the ten-year part of the interest rate curve drives mortgage rates, a common assumption involves holding the spread between the current coupon and the ten-year swap or Treasury rate constant. However, this simple and intuitive approach has a basic problem: primary-market mortgage rates nowadays depend on secondary-market MBS current-coupon yields. Hence, the current coupon depends not just on the ten-year part of the curve, but also on other factors that affect MBS current-coupon yields. Such factors include:

  • The shape of the yield curve
  • Tenors on the yield curve
  • Volatilities

A conceptual review of current coupon models includes a discussion around the selected method and comparisons with alternative approaches. To validate model inputs, we focus on the data transition procedures between the curve builder and current coupon model or between the interest rate model and the current coupon model. To validate model outputs, we perform a benchmarking analysis against projections from a challenger approach. We also perform back-testing to measure the differences between model projections and actual data over a testing period, normally 12 months. We use mean absolute error (MAE) to measure the back-testing results. If the MAE is less than 0.5%, we conclude that the model projection falls inside the acceptable range. For the sensitivity analysis, we examine the movements of the current coupon projection under various shock scenarios (including key-rate shocks and parallel shifting) on the rate inputs.
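The back-testing metric itself is straightforward; a sketch with hypothetical monthly projections follows.

```python
import numpy as np

def mean_absolute_error(projected, actual):
    """MAE between projected and actual current coupon rates (in percent)."""
    return float(np.mean(np.abs(np.asarray(projected) - np.asarray(actual))))

# Hypothetical 12-month back-test: projected vs. realized current coupon (%)
projected = [3.10, 3.15, 3.20, 3.25, 3.30, 3.28, 3.22, 3.18, 3.15, 3.12, 3.10, 3.08]
actual    = [3.05, 3.20, 3.28, 3.22, 3.35, 3.30, 3.15, 3.20, 3.10, 3.18, 3.05, 3.12]
mae = mean_absolute_error(projected, actual)
within_tolerance = mae < 0.5   # the 0.5% acceptance threshold described above
```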

Prepayment Models

Prepayment models are behavioral models that help investors understand and forecast a loan portfolio's likely prepayment behavior and identify its major drivers.

The prepayment model’s modeling structure is usually econometric in nature. It assumes that the same set of drivers that affected prepayment and default behavior in the past will drive them in the future under all scenarios, even though the period in the past that is most applicable may vary by scenario in the future.

Major drivers are identified and modeled separately as a function of collateral characteristics and macroeconomic variables. Each type of prepayment effect is then scaled based on the past prepayment and default experience of similar collateral. The assumption is that if the resulting model can explain and reasonably fit historical prepayments, then it may be a good model for projecting the future, subject to a careful review of its projections.

Prepayment effects normally include housing turnover, refinancing and burnout[2]. Each prepayment effect is modeled separately and then combined together. A good conceptual review of prepayment modeling methodology will discuss the mathematical fundamentals of the model, including an assessment of the development procedure for each prepayment effect and comparisons with alternative statistical approaches.
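To give a flavor of how these effects combine, below is a deliberately stylized (and entirely hypothetical) prepayment function: a housing-turnover floor plus a logistic refinancing S-curve, with a burnout multiplier damping the refinancing response for seasoned pools. Real prepayment models are far richer; this only makes the "model each effect separately, then combine" structure concrete.

```python
import numpy as np

def stylized_cpr(rate_incentive, burnout_factor=1.0):
    """Hypothetical annualized CPR as a function of the borrower's rate
    incentive (note rate minus prevailing mortgage rate, in percentage points).
    - housing turnover: a constant baseline
    - refinancing: a logistic S-curve in the incentive
    - burnout: a multiplier in (0, 1] damping refinancing for seasoned pools"""
    turnover = 0.06
    refi = 0.25 / (1.0 + np.exp(-4.0 * (rate_incentive - 1.0)))
    return turnover + burnout_factor * refi

# A pool with a 1.5-point incentive and moderate burnout
cpr = stylized_cpr(1.5, burnout_factor=0.6)   # roughly 19% CPR
```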

Take, for example, a model that projects prepayment rates on tradable Agency mortgage collateral (or whole-loan collateral comparable to Agencies) from settlement date to maturity. Its development data includes loan-level or pool-level transition data originally from Fannie Mae, Freddie Mac, Ginnie Mae, and third-party servicers. Data obtained from third parties is marked as raw data. We review the data processing procedures used to get from the raw data to the development data. These procedures include reviewing data characteristics, data cleaning, data preparation, and data transformation processes.

After the development data preparation, variable selection and loan segmentation become key to explaining each prepayment effect. Model developers seek to select a set of collateral attributes with clear and constant evidence of impact to the given prepayment effect. We validate the loan segmentation process by checking whether the historical prepayment rate from different loan segments demonstrates level differences based on the set of collateral attributes selected.

A prepayment model’s implementation process is normally a black box. This increases the importance of the model output review, which includes performance testing, stress testing, sensitivity analysis, benchmarking and back-testing. An appropriate set of validation tests will capture:

  • Sensitivity to collateral and borrower characteristics (loan-to-value, loan size, etc.)
  • Sensitivity to significant assumptions
  • Benchmarking of prepayment projections
  • Performance during various historical events
  • Back-testing
  • Scenario stability
  • Model projections compared with projections from dealers
  • Performance by different types of mortgages, including CMOs and TBAs

A prepayment model sensitivity analysis might take a TBA security and gradually change the value of input variables, one at a time, to isolate the impact of each variable. This procedure provides an empirical understanding of how the model performs with respect to parameter changes. If the prepayment model has customized tuning functionality, we can apply the sensitivity analysis independently to each prepayment effect by setting the other tuning parameters at zero.
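A sketch of this one-at-a-time procedure follows. The project_cpr function here is a hypothetical stand-in for the model under validation, and the inputs and bump sizes are purely illustrative.

```python
import numpy as np

def project_cpr(ltv, incentive, loan_size):
    """Hypothetical stand-in for the prepayment model under validation;
    returns an annualized CPR that rises with incentive and loan size
    and falls with LTV."""
    refi = 0.18 / (1.0 + np.exp(-3.0 * (incentive - 1.0)))
    size_effect = 0.01 * np.log(loan_size / 200_000.0)
    return 0.06 + refi + size_effect - 0.0005 * (ltv - 80.0)

base_inputs = {"ltv": 80.0, "incentive": 0.5, "loan_size": 200_000.0}
base_cpr = project_cpr(**base_inputs)

# Bump one input at a time, holding the others at their base values,
# to isolate each variable's impact on the projection
for name, bump in [("ltv", 10.0), ("incentive", 0.5), ("loan_size", 100_000.0)]:
    shocked = dict(base_inputs, **{name: base_inputs[name] + bump})
    delta = project_cpr(**shocked) - base_cpr
    print(f"{name} +{bump}: CPR change {delta:+.4f}")
```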

For the benchmarking analysis, we compare the model's cohort-level, short-term conditional prepayment rate (CPR) projection against other dealer publications, including Barclays and J.P. Morgan (as applicable and available). We also compare the monthly CPR projections against those of a challenger model, such as the Bloomberg Agency Model (BAM), for the full Agency TBA stack and discuss the differences. Discrepancies identified during the course of a benchmarking analysis may trigger further investigation into the model's development. However, a discrepancy does not necessarily mean that the underlying model is in error, since the challenger model itself is simply an alternative projection. Differences might be caused by any number of factors, including different development data or modeling methodologies.

Prepayment model back-testing involves selecting a set of market-traded MBS and a set of hypothetical loan cohorts and comparing the actual monthly CPR against the projected CPR over a prescribed time window (normally one year). Thresholds should be established prior to testing and differences that exceed these thresholds should be investigated and discussed in the model validation report.

Validating Cash Flow Models and Risk Metrics

A cash flow model combines the simulated paths from interest rate, prepayment, default, and delinquency models to compute projected cash flows associated with monthly principal and interest payments.

Cash flow model inputs include the underlying instrument's characteristics (e.g., outstanding balance, coupon rate, maturity date, day count convention, etc.) and the projected vectors associated with the CPR, default rate, delinquency, and severity (if applicable). A conceptual review of a cash flow model involves a verification of the data loading procedure to ensure that the instrument's characteristics are captured correctly within the model. It should also review the underlying mathematical formulas to verify that the projected vectors are correctly applied.
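As a simplified illustration of the mechanics being verified, the sketch below projects monthly cash flows for a level-pay mortgage pool given a CPR vector (defaults and severities are omitted for brevity, and the numbers are illustrative).

```python
import numpy as np

def project_cash_flows(balance, annual_rate, term_months, cpr_vector):
    """Monthly principal + interest cash flows for a level-pay mortgage pool,
    with prepayments driven by an annualized CPR vector (one value per month)."""
    monthly_rate = annual_rate / 12.0
    flows = []
    for month in range(term_months):
        if balance <= 1e-8:
            break
        remaining = term_months - month
        # Level payment on the current balance over the remaining term
        payment = balance * monthly_rate / (1 - (1 + monthly_rate) ** -remaining)
        interest = balance * monthly_rate
        sched_prin = payment - interest
        # Convert the annualized CPR to a single-month mortality (SMM) rate
        smm = 1 - (1 - cpr_vector[month]) ** (1 / 12)
        prepay = (balance - sched_prin) * smm
        balance -= sched_prin + prepay
        flows.append(interest + sched_prin + prepay)
    return np.array(flows)

# Example: $1mm pool, 4.5% coupon, 30-year term, flat 8% CPR
cash_flows = project_cash_flows(1_000_000, 0.045, 360, np.full(360, 0.08))
```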

Model outputs can be validated via sensitivity analysis. This often involves shocking each input variable, one at a time, and examining the resulting impact on the monthly remaining balance. Benchmarking can be accomplished by developing a challenger model and comparing the resulting cash flows.

Combining the outputs of all the sub-models, a price of the underlying structured finance instrument can be generated (and tested) along with its related risk metrics (duration, convexity, option adjusted spread, etc.).

Using MBS as an example, an option-adjusted spread (OAS) analysis is commonly used. Theoretically, the OAS is calibrated by matching the model price to the market price; it can be viewed as a constant spread applied to the discounting curve when computing the model price. Because it captures the difference between model price and market price, OAS is particularly useful in MBS valuation, notably in measuring prepayment risk and market risk. A comprehensive analysis reviews the following:

  • Impact of interest rate shocks on a TBA stack in terms of price, OAS, effective duration, and effective convexity.
  • Impact of projected prepayment rate shock on a TBA stack in terms of price, OAS, effective duration, and effective convexity.
  • Impact of projected prepayment rate shock on the option cost (measured in basis points as the zero-volatility spread minus the OAS).

Beyond OAS, the validation should include independent benchmarking of the model price. Given a sample portfolio containing the deal information for a list of structured finance instruments, validators derive a model price using the same market rates as the subject model as a basis for comparison. Analyzing the shock profiles enables validators to conclude whether the given discounted cash flow method is generating satisfactory model performance.
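To make the OAS mechanics concrete, here is a minimal sketch of the calibration: solve for the constant spread that, added to each simulated path's rates, equates the average discounted cash flow to the market price. The toy inputs are illustrative and assume the path-dependent cash flows have already been generated by the models described above.

```python
import numpy as np
from scipy.optimize import brentq

def model_price(cash_flows, rate_paths, spread):
    """Average discounted value of path-dependent cash flows across simulated
    rate paths, discounting at each path's annualized rate plus a constant spread.
    cash_flows, rate_paths: arrays of shape (n_paths, n_months)."""
    monthly = (rate_paths + spread) / 12.0
    discount = np.cumprod(1.0 / (1.0 + monthly), axis=1)
    return np.mean(np.sum(cash_flows * discount, axis=1))

def calibrate_oas(cash_flows, rate_paths, market_price):
    """Solve for the spread that matches model price to market price;
    the bracket must straddle the root."""
    return brentq(lambda s: model_price(cash_flows, rate_paths, s) - market_price,
                  -0.05, 0.20)

# Toy example: flat 5% rate paths, level monthly cash flows, $100 market price
paths = np.full((500, 360), 0.05)
flows = np.full((500, 360), 0.55)
oas = calibrate_oas(flows, paths, market_price=100.0)
```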

Conclusion

Structured finance model validations are complex because they invariably involve testing a complicated array of models, sub-models, and related models. The list of potential sub-models (across all three categories discussed above) significantly exceeds the examples cited.

Validators must design validation tasks specific to each model type in order to adequately assess the risks posed by potential shortcomings associated with model inputs, structure, theory, development, outputs and governance practices.

When it comes to models governing structured finance instruments, validators must identify any model risk not only at the independent sub-model level but at the broader system level, for which the final outputs include model price and risk metrics. This requires a disciplined and integrated approach.


 

 

[1] Knots represent a set of predefined points on the curve.

[2] Burnout effect describes highly seasoned mortgage pools in which loans likely to repay have already done so, resulting in relatively slow prepayment speeds despite falling interest rates.

 


Advanced Technologies Offer an Escape Route for Structured Products When Crises Hit

A Chartis Whitepaper in Collaboration with RiskSpan

COVID-19 has highlighted how financial firms’ technology infrastructures and capabilities are often poorly designed for unexpected events – but lessons are being learned. The ongoing revolution in risk-management technology can help firms address their immediate issues in times of crisis.

By taking the steps we outline here, firms can start to position themselves at the leading edge of portfolio and risk management when such events do occur.


