Articles Tagged with: Prepayment Analytics

RiskSpan Named to Inaugural STORM50 Ranking by Chartis Research – Winner of “A.I. Innovation in Capital Markets”

Chartis Research has named RiskSpan to its Inaugural “STORM50” Ranking of leading risk and analytics providers. The STORM report “focuses on the computational infrastructure and algorithmic efficiency of the vast array of technology tools used across the financial services industry” and identifies industry-leading vendors that excel in the delivery of Statistical Techniques, Optimization frameworks, and Risk Models of all types.


RiskSpan’s flagship Edge Platform was a natural fit for the designation because of its positioning squarely at the nexus of statistical behavioral modeling (specifically around mortgage credit and prepayment risk) and functionality enabling users to optimize trading and asset management strategies.  Being named the winner of the “A.I. Innovation in Capital Markets” solutions category reflects the work of RiskSpan’s vibrant innovation lab, which includes researching and developing machine learning solutions to structured finance challenges. These solutions include mining a growing trove of alternative/unstructured data sources, anomaly detection in loan-level and other datasets, and natural language processing for constructing deal cash flow models from legal documents.

Learn more about the Edge Platform or contact us to discuss ways we might help you address your mortgage and structured finance data and analytics challenges.


Climate Terms the Housing Market Needs to Understand

The impacts of climate change on housing and holders of mortgage risk are very real and growing. As the frequency and severity of perils increases, so does the associated cost – estimated to have grown from $100B in 2000 to $450B in 2020 (see chart below). Many of these costs are not covered by property insurance, leaving homeowners and potential mortgage investors holding the bag. Even after adjusting for inflation and appreciation, the loss to both investors and consumers is staggering.

Properly understanding this data might require adding some new terms to your personal lexicon. As the housing market begins to get its arms around the impact of climate change to housing, here are a few terms you will want to incorporate into your vocabulary.

  1. Natural Hazard

In partnership with climate modeling experts, RiskSpan has identified 21 different natural hazards that impact housing in the U.S. These include familiar hazards such as floods and earthquakes, along with lesser-known perils, such as drought, extreme temperatures, and other hydrological perils including mudslides and coastal erosion. The housing industry is beginning to work through how best to identify and quantify exposure and incorporate the impact of perils into risk management practices more broadly. Legacy thinking and risk management would classify these risks as covered by property insurance with little to no downstream risk to investors. However, as the frequency and severity increase, it is becoming more evident that risks are not completely covered by property & casualty insurance.

We will address some of these “hidden risks” of climate to housing in a forthcoming post.

  2. Wildland Urban Interface

The U.S. Fire Administration defines Wildland Urban Interface as “the zone of transition between unoccupied land and human development. It is the line, area, or zone where structures and other human development meet or intermingle with undeveloped wildland or vegetative fuels.” An estimated 46 million residences in 70,000 communities in the United States are at risk for WUI fires. Wildfires in California garner most of the press attention. But fire risk to WUIs is not just a west coast problem — Florida, North Carolina and Pennsylvania are among the top five states at risk. Communities adjacent to and surrounded by wildland are at varying degrees of risk from wildfires, and it is important to assess these risks properly. Many of these exposed homes do not have sufficient insurance coverage for losses due to wildfire.

  3. National Flood Insurance Program (NFIP) and Special Flood Hazard Area (SFHA)

The National Flood Insurance Program provides flood insurance to property owners and is managed by the Federal Emergency Management Agency (FEMA). Anyone living in a participating NFIP community may purchase flood insurance. But those in specifically designated high-risk SFHAs must obtain flood insurance to qualify for a government-backed mortgage. SFHAs as currently defined, however, are widely believed to be outdated and not fully inclusive of areas that face significant flood risk. Changes are coming to the NFIP (see our recent blog post on the topic) but these may not be sufficient to cover future flood losses.

  4. Transition Risk

Transition risk refers to risks resulting from changing policies, practices or technologies that arise from a societal move to reduce its carbon footprint. While the physical risks from climate change have been discussed for many years, transition risks are a relatively new category. In the housing space, policy changes could increase the direct cost of homeownership (e.g., taxes, insurance, code compliance, etc.), increase energy and other utility costs, or cause localized employment shocks (e.g., the energy industry in Houston). Policy changes by the GSEs related to property insurance requirements could have big impacts on affected neighborhoods.

  5. Physical Risk

In housing, physical risks include the risk of loss to physical property or loss of land or land use. The risk of property loss can be the result of a discrete catastrophic event (hurricane) or of sustained negative climate trends in a given area, such as rising temperatures that could make certain areas uninhabitable or undesirable for human housing. Both pose risks to investors and homeowners with the latter posing systemic risk to home values across entire communities.

  6. Livability Risk

We define livability risk as the risk of declining home prices due to a decline in the desirability of a neighborhood. Although no standard definition of “livability” exists, it is generally understood to be the extent to which a community provides safe and affordable access to quality education, healthcare, and transportation options. In addition to these measures, homeowners also take temperature and weather into account when choosing where to live. Finding a direct correlation between livability and home prices is challenging; however, an increased frequency of extreme weather events clearly poses a risk to long-term livability and home prices.

Data and toolsets designed explicitly to measure and monitor climate related risk and its impact on the housing market are developing rapidly. RiskSpan is at the forefront of developing these tools and is working to help mortgage credit investors better understand their exposure and assess the value at risk within their businesses.

Contact us to learn more.



Why Mortgage Climate Risk is Not Just for Coastal Investors

When it comes to climate concerns for the housing market, sea level rise and its impacts on coastal communities often get top billing. But this article in yesterday’s New York Times highlights one example of far-reaching impacts in places you might not suspect.

Chicago, built on a swamp and virtually surrounded by Lake Michigan, can tie its whole existence as a city to its control and management of water. But as the Times article explains, management of that water is becoming increasingly difficult as various dynamics related to climate change are creating increasingly large and unpredictable fluctuations in the level of the lake (higher highs and lower lows). These dynamics are threatening the city with more frequent and severe flooding.

The Times article connects water management issues to housing issues in two ways: the increasing frequency of basement flooding caused by sewer overflow and the battering buildings are taking from increased storm surge off the lake. Residents face increasing costs to mitigate their exposure and fear the potentially negative impact on home prices. As one resident puts it, “If you report [basement flooding] to the city, and word gets out, people fear it’s going to devalue their home.”

These concerns — increasing peril exposure and decreasing valuations — echo fears expressed in a growing number of seaside communities and offer further evidence that mortgage investors cannot bank on escaping climate risk merely by avoiding the coasts. Portfolios everywhere are going to need to begin incorporating climate risk into their analytics.



Hurricane Season a Double-Whammy for Mortgage Prepayments

As hurricane (and wildfire) season ramps up, don’t sleep on the increase in prepayment speeds after a natural disaster event. The increase in delinquencies might get top billing, but prepays also increase after events—especially for homes that were fully insured against the risk they experienced. For a mortgage servicer with concentrated geographic exposure to the event area, this can be a double-whammy impacting their balance sheet—delinquencies increase servicing advances, while prepays roll loans off the book. Hurricane Katrina loan performance is a classic example of this dynamic.




RiskSpan’s Edge Platform Wins 2021 Buy-Side Market Risk Management Product of the Year

RiskSpan, a leading SaaS provider of risk management, data and analytics has been awarded Buy-Side Market Risk Management Product of the Year for its Edge Platform at Risk.net’s 2021 Risk Markets Technology Awards. The honor marks Edge’s second major industry award in 2021, having also been named the winner of Chartis Research’s Risk-as-a-Service category.

Licensed by some of the largest asset managers and insurance companies in the U.S., a significant component of the Edge Platform’s value is derived from its ability to serve as a one-stop shop for research, pre-trade analytics, pricing and risk quantification, and reporting. Edge’s cloud-native infrastructure allows RiskSpan clients to scale as needs change and is supported by RiskSpan’s unparalleled team of domain experts — seasoned practitioners who know the needs and pain points of the industry firsthand.

Adjudicators cited the platform’s “strong data management and overall technology” and “best-practice quant design for MBS, structured products and loans” as key factors in the designation.


Edge’s flexible configurability enables users to create custom views of their portfolio or potential trades at any level of granularity and down to the loan level. The platform enables researchers and analysts to integrate conventional and alternative data from an impressive array of sources to identify impacts that might otherwise go overlooked.

For clients requiring a fully supported risk-analytics-as-a-service offering, the Edge Platform provides a comprehensive data analysis, predictive modeling, portfolio benchmarking and reporting solution tailored to individual client needs.

An optional studio-level tier incorporates machine learning and data scientist support in order to leverage unstructured and alternative datasets in the analysis.


Contact us to learn how Edge’s capabilities can transform your mortgage and structured product analytics. 

Learn more about Edge at https://riskspan.com/edge-platform/ 


The Why and How of a Successful SAS-to-Python Model Migration

A growing number of financial institutions are migrating their modeling codebases from SAS to Python. There are many reasons for this, some of which may be unique to the organization in question, but many apply universally. Because of our familiarity not only with both coding languages but with the financial models they power, my colleagues and I have had occasion to help several clients with this transition.

Here are some things we’ve learned from this experience and what we believe is driving this change.

Python Popularity

The popularity of Python has skyrocketed in recent years. Its intuitive syntax and a wide array of packages available to aid in development make it one of the most user-friendly programming languages in use today. This accessibility allows users who may not have a coding background to use Python as a gateway into the world of software development and expand their toolbox of professional qualifications.

Companies appreciate this as well. As an open-source language with tons of resources and low overhead costs, Python is also attractive from an expense perspective. A cost-conscious option that resonates with developers and analysts is a win-win when deciding on a codebase.

Note: R is another popular and powerful open-source language for data analytics. Unlike R, however, which is specifically used for statistical analysis, Python can be used for a wider range of uses, including UI design, web development, business applications, and others. This flexibility makes Python attractive to companies seeking synchronicity — the ability for developers to transition seamlessly among teams. R remains popular in academic circles where a powerful, easy-to-understand tool is needed to perform statistical analysis, but additional flexibility is not necessarily required. Hence, we are limiting our discussion here to Python.

Python is not without its drawbacks. As an open-source language, less oversight governs newly added features and packages. Consequently, while updates may be quicker, they are also more prone to error than SAS’s, which are always thoroughly tested prior to release.


Visualization Capabilities

While both codebases support data visualization, Python’s packages are generally viewed more favorably than SAS’s, which tend to be on the more basic side. More advanced visuals are available from SAS, but they require the SAS Visual Analytics platform, which comes at an added cost.

Python’s popular visualization packages — matplotlib, plotly, and seaborn, among others — can be leveraged to create powerful and detailed visualizations by simply importing the libraries into the existing codebase.
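For instance, a chart that might previously have required SAS/GRAPH or PROC SGPLOT can typically be reproduced with a few lines of open-source code. The sketch below is illustrative only; the dataset and column names are hypothetical and not drawn from any particular project.

```python
# A minimal sketch of recreating a simple SAS-style chart with Python's
# open-source visualization stack. The data and column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Hypothetical dataset: prepayment speed (CPR) observations by vintage
df = pd.DataFrame({
    "vintage": [2018, 2018, 2019, 2019, 2020, 2020],
    "cpr": [0.08, 0.10, 0.12, 0.15, 0.22, 0.25],
})

sns.boxplot(data=df, x="vintage", y="cpr")   # distribution of CPR by vintage
plt.title("CPR by Vintage")
plt.ylabel("CPR")
plt.savefig("cpr_by_vintage.png", dpi=150)   # or plt.show() when working interactively
```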

Accessibility

SAS is a command-driven software package used for statistical analysis and data visualization. Though available only for Windows operating systems, it remains one of the most widely used statistical software packages in both industry and academia.

It’s not hard to see why. For financial institutions with large amounts of data, SAS has been an extremely valuable tool. It is a well-documented language, with many online resources and is relatively intuitive to pick up and understand – especially when users have prior experience with SQL. SAS is also one of the few tools with a customer support line.

SAS, however, is a paid service, and at a standalone level, the costs can be quite prohibitive, particularly for smaller companies and start-ups. Complete access to the full breadth of SAS and its supporting tools tends to be available only to larger and more established organizations. These costs are likely fueling its recent drop-off in popularity. New users simply cannot access it as easily as they can Python. While an academic/university version of the software is available free of charge for individual use, its feature set is limited. Therefore, for new users and start-up companies, SAS may not be the best choice, despite being a powerful tool. Additionally, with the expansion and maturity of the variety of packages that Python offers, many of the analytical abilities of Python now rival those of SAS, making it an attractive, cost-effective option even for very large firms.

Future of tech

Many of the expected advances in data analytics and tech in general are clearly pointing toward deep learning, machine learning, and artificial intelligence in general. These are especially attractive to companies dealing with large amounts of data.

While the technology to analyze data with complete independence is still emerging, Python is better situated to support companies that have begun laying the groundwork for these developments. Python’s rapidly expanding libraries for artificial intelligence and machine learning will likely make future transitions to deep learning algorithms more seamless.

While SAS has made some strides toward adding machine learning and deep learning functionalities to its repertoire, Python remains ahead and consistently ranks as the best language for deep learning and machine learning projects. This creates a symbiotic relationship between the language and its users. Developers use Python to develop ML projects since it is currently best suited for the job, which in turn expands Python’s ML capabilities — a cycle which practically cements Python’s position as the best language for future development in the AI sphere.

Overcoming the Challenges of a SAS-to-Python Migration

SAS-to-Python migrations bring a unique set of challenges that need to be considered. These include the following.

Memory overhead

Server space is getting cheaper but it’s not free. Although Python’s data analytics capabilities rival SAS’s, Python requires more memory overhead. Companies working with extremely large datasets will likely need to factor in the cost of extra server space. These costs are not likely to alter the decision to migrate, but they also should not be overlooked.

The SAS server

All SAS commands are run on SAS’s own server. This tightly controlled ecosystem makes SAS much faster than Python, which does not have the same infrastructure out of the box. Therefore, optimizing Python code can be a significant challenge during SAS-to-Python migrations, particularly when tackling it for the first time.
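As a simple illustration of what this optimization work often looks like (the dataset and calculation below are hypothetical), row-by-row logic that translates naturally from a SAS DATA step usually needs to be rewritten as vectorized pandas/NumPy operations to perform acceptably:

```python
# A minimal sketch, with made-up data, of a common SAS-to-Python optimization:
# replacing a Python-level loop (a literal DATA step translation) with a
# vectorized array operation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "balance": rng.uniform(50_000, 500_000, 1_000_000),
    "rate": 0.04,
})

# Slow, row-by-row translation of DATA step logic:
# df["interest"] = [b * r / 12 for b, r in zip(df["balance"], df["rate"])]

# Vectorized equivalent: one array operation instead of a million Python iterations
df["interest"] = df["balance"] * df["rate"] / 12
```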

SAS packages vs Python packages

Calculations performed using SAS packages vs. Python packages can result in differences, which, while generally minuscule, cannot always be ignored. Depending on the type of data, this can pose an issue. And getting an exact match between values calculated in SAS and values calculated in Python may be difficult.

For example, the value of “0” as a float datatype in SAS may come back as a residual on the order of 3.552714E-15, while in Python the float “0.1” is stored as the binary fraction 3602879701896397/2^55. These values do not create noticeable differences in most calculations. But some financial models demand more precision than others. And over the course of multiple calculations which build upon each other, they can create differences in fractional values. These differences must be reconciled and accounted for.
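The short sketch below illustrates the Python side of this issue; the SAS behavior cited above is from the article, and the reconciliation tolerance used here is purely an assumption for illustration.

```python
# A minimal sketch of floating-point representation and error accumulation in
# Python. The tolerance used for reconciliation is an illustrative assumption.
import math

num, den = (0.1).as_integer_ratio()
print(num, den)        # 3602879701896397 36028797018963968  (denominator is 2**55)

# Small representation errors compound across chained calculations
x = 0.1 + 0.2 - 0.3
print(x)               # 5.551115123125783e-17, not exactly zero

# When reconciling SAS output against Python output, compare within a
# tolerance rather than testing exact equality
assert math.isclose(x, 0.0, abs_tol=1e-12)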

Comparing large datasets

One of the most common functions when working with large datasets involves evaluating how they change over time. SAS has a built-in procedure (PROC COMPARE) which compares datasets swiftly and easily as required. Python has packages for this as well; however, these packages are not as robust as their SAS counterparts.
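A minimal sketch of what such a comparison might look like using pandas built-ins appears below; the column names are hypothetical, and this is not a full substitute for PROC COMPARE’s reporting.

```python
# A minimal sketch (assumed column names) of comparing two versions of a
# dataset in pandas, loosely analogous to the SAS PROC COMPARE workflow.
import pandas as pd

old = pd.DataFrame({"loan_id": [1, 2, 3], "balance": [100.0, 250.0, 80.0]})
new = pd.DataFrame({"loan_id": [1, 2, 3], "balance": [100.0, 240.0, 80.0]})

# Cell-by-cell differences (requires identical shape and labels)
diffs = old.set_index("loan_id").compare(new.set_index("loan_id"))
print(diffs)   # shows 'self' vs. 'other' for the changed balance on loan 2

# Or fail loudly in a pipeline when datasets drift beyond a tolerance
try:
    pd.testing.assert_frame_equal(old, new, check_exact=False, atol=1e-6)
except AssertionError as err:
    print("Datasets differ:", err)
```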

Conclusion

In most cases, the benefits of migrating from SAS to Python outweigh the challenges associated with going through the process. The envisioned savings can sometimes be attractive enough to cause firms to trivialize the transition costs. This should be avoided. A successful migration requires taking full account of the obstacles and making plans to mitigate them. Involving the right people from the outset — analysts well versed in both languages who have encountered and worked through the pitfalls — is key.


What The FHFA’s Forbearance Announcement Means for Agency Prepayments

On Tuesday, the market received a modicum of clarity around Agency prepayments amid the uncertainty of COVID-19, when the FHFA released new guidelines for mortgage borrowers currently in forbearance or on repayment plans who wish to refinance or buy a new home.

Borrowers that use forbearance will most likely opt for a forbearance deferment, which delays the missed P&I until the loan matures. The FHFA announcement temporarily declares that borrowers are eligible to refinance three months after their forbearance ends and they have made three consecutive payments under their repayment plan, payment deferral option, or loan modification.

With the share of mortgage loans in forbearance accelerating to over 8 percent, according to the MBA, and retail mortgage interest rates remaining at historically low levels, the FHFA’s announcement potentially expands the universe of mortgages in Agency securities eligible for refi. However, mortgage rates must be sufficiently low to make economic sense to refinance both the unpaid principal balance of the loan and the deferred payments, which accrue at 0%. We estimate that a 6-month forbearance means that rates must be an additional 25bp lower to match the same payment savings as a borrower who doesn’t need to refinance the deferred payments. In turn, this will slow refinancing on loans with a forbearance deferment versus loans without forbearance, when faced with the same refinancing incentive. This attenuated refi activity is on top of the three-payment delay after forbearance is over, which pushes the exercise of the call option out three months and lowers the probability of exercise. In total, loans in forbearance will both be slower and have better convexity than loans not in forbearance.
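The toy calculation below sketches the intuition behind that 25bp estimate. The loan terms are hypothetical and the exact figure depends on rate levels and the length of the forbearance, so treat it as illustrative rather than a restatement of our model.

```python
# A minimal sketch (hypothetical loan terms) of why a borrower refinancing both
# the UPB and six months of deferred P&I needs a lower rate to achieve the same
# payment as a borrower refinancing the UPB alone.
def monthly_payment(balance, annual_rate, months=360):
    r = annual_rate / 12.0
    return balance * r / (1.0 - (1.0 + r) ** -months)

upb = 300_000
orig_rate, refi_rate = 0.04, 0.03
deferred = 6 * monthly_payment(upb, orig_rate)       # six missed P&I payments, accruing at 0%

pmt_no_forbearance = monthly_payment(upb, refi_rate)

# Search for the rate at which the larger balance produces the same payment
rate = refi_rate
while monthly_payment(upb + deferred, rate) > pmt_no_forbearance:
    rate -= 0.0001
print(f"extra rate reduction needed: {(refi_rate - rate) * 1e4:.0f} bp")  # on the order of 25bp for these terms
```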

Today’s FHFA release also extends Fannie’s and Freddie’s ability to purchase single-family mortgages currently in forbearance until at least August 31, 2020. 


Webinar: Machine Learning in Building a Prepayment Model


Join RiskSpan financial model experts Janet Jozwik, Fan Zhang, and Lei Zhao to discuss how machine learning can help simplify prepayment models. They will discuss:

  • Data:  Preprocessing the data and determining which variables are important to include in prepayment models
  • Modeling Approach:  Evaluating machine learning approaches
  • Model Performance: Opening the black box and tuning the model to improve performance


About The Hosts

Janet Jozwik

Managing Director – RiskSpan

Janet Jozwik helps manage quantitative modeling and data analysis groups at RiskSpan. Janet has a background in mortgage credit modeling, loss forecasting, and data analysis. Since joining RiskSpan, Janet has focused on loss forecasting and mortgage portfolio analytics for a key client as well as building a credit model using GSE loan-level data. Prior to joining RiskSpan, Janet was a financial economist at Fannie Mae where she specialized in single family credit pricing. Her work directly impacted the national guarantee fee pricing scheme and government programs to support the housing market during and after the financial crisis. Janet has extensive experience in analyzing massive datasets, a deep understanding of the drivers of credit risk, and an expertise in modeling mortgage cash flows. Janet holds an MBA from the University of Chicago Booth School of Business and a BA in Economics from Johns Hopkins University.

Fan Zhang

Director of Model Development

Fan Zhang has 12 years of quantitative finance experience specializing in behavioral modeling, fixed income analysis, and machine learning. At RiskSpan, Fan leads the quantitative modeling team where he is currently driving improvements to prepay modeling and the application of cutting-edge machine learning methods. Fan was a senior quantitative manager at Capital One where he worked on prepayment, deposit, MSR, auto, interest rate term structure, and economic capital modeling. He was also a senior financial engineer at Fannie Mae managing a team to validate model implementation and risk analytics. Fan holds an MBA from the University of Maryland and a BA in Economics from the University of Michigan.

Lei Zhao

Quantitative Modeling Analyst

Lei Zhao is a key member of the quantitative modeling team at RiskSpan. Lei has done extensive research on clustering methodologies and his postdoctoral research paper has been cited over a hundred times in scholarly publications. Lei holds a Master of Science degree in Financial Engineering from the University of California, Los Angeles, and a PhD in Mechanical Engineering from Zhejiang University, China.


Calculating Value at Risk — A Review of Methods


Our white paper explains why a full revaluation method of calculating value at risk (VaR) is the preferred approach for both banks reporting VaR under Market Risk Rule and hedge funds using VaR to report a unified risk measure to clients.



Calculating VaR: A Review of Methods


CONTRIBUTOR

Don Brown
Co-Head of Quantitative Analytics


Chapter 1
Introduction

Many firms now use Value-at-Risk (“VaR”) for risk reporting. Banks need VaR to report regulatory capital usage under the Market Risk Rule, as outlined in the Fed and OCC regulations. Additionally, hedge funds now use VaR to report a unified risk measure across multiple asset classes. There are multiple approaches to VaR, so which method should we choose? In this brief paper, we outline a case for full revaluation VaR in contrast to a simulated VaR using a “delta-gamma” approach to value assets.

The VaR for a position or book of business can be defined as some threshold T (in dollars) where the existing position, when faced with market conditions similar to some given historical period, will have P/L greater than T with probability k. Typically, k is chosen to be 99% or 95%. To compute this threshold T, we need to:

  1. Set a significance percentile k, a market observation period, and a holding period n.1
  2. Generate a set of future market conditions (“scenarios”) from today to period n.
  3. Compute a P/L on the position for each scenario.

After computing each position’s P/L, we sum the P/L for each scenario and then rank the scenarios’ P/L to find the kth percentile (worst) loss.2 This loss defines our VaR T at the kth percentile for observation-period length n. Determining what significance percentile k and observation length n to use is straightforward and is often dictated by regulatory rules, for example 99th percentile 10-day VaR is used for risk-based capital under the Market Risk Rule. Generating the scenarios and computing P/L under these scenarios is open to interpretation. We cover each of these in the next two sections, with their advantages and drawbacks.
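The snippet below sketches this aggregation-and-ranking step in miniature, using randomly generated scenario P/L purely as a placeholder for actual position-level results.

```python
# A minimal sketch of the VaR aggregation step: sum position-level P/L by
# scenario, then read off the k-th percentile (worst) loss as the threshold T.
# The scenario P/L here is randomly generated for illustration only.
import numpy as np

np.random.seed(0)
n_scenarios, n_positions = 250, 3                 # e.g., a 250-day look-back
scenario_pnl = np.random.normal(0.0, 1_000.0, size=(n_scenarios, n_positions))

book_pnl = scenario_pnl.sum(axis=1)               # total book P/L per scenario
k = 99
var_99 = -np.percentile(book_pnl, 100 - k)        # 99th-percentile loss, reported as a positive number
print(f"99% VaR: {var_99:,.0f}")
```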

Chapter 2
Generating Scenarios

To compute VaR, we first need to generate projective scenarios of market conditions. Broadly speaking, there are two ways to derive this set of scenarios3

  1. Project future market conditions using a Monte Carlo simulation framework
  2. Project future market conditions using historical (actual) changes in market conditions

MONTE CARLO SIMULATION

Many commercial providers simulate future market conditions using Monte Carlo simulation. To do this, they must first estimate the distributions of risk factors, including correlations between risk factors. Using correlations that are derived from historical data makes the general assumption that correlations are constant within the period. As shown in the academic literature, correlations tend to change, especially in extreme market moves – exactly the kind of moves that tend to define the VaR threshold.4 By constraining correlations, VaR may be either overstated or understated depending on the structure of the position. To account for this, some providers allow users to “stress” correlations by increasing or decreasing them. Such a stress scenario is either arbitrary, or is informed by using correlations from yet another time-period (for example, using correlations from a time of market stress), mixing and matching market data in an ad hoc way.

Further, many market risk factors are highly correlated, which is especially true on the interest rate curve. To account for this, some providers use a single factor for rate-level and then a second or third factor for slope and curvature of the curve. While this may be broadly representative, this approach may not capture subtle changes on other parts of the curve. This limited approach is acceptable for non-callable fixed income securities, but proves problematic when applying curve changes to complex securities such as MBS, where the security value is a function of forward mortgage rates, which in turn is a multivariate function of points on the curve and often implied volatility.


HISTORICAL SIMULATION

RiskSpan projects future market conditions by using actual (observed) n-day changes in market conditions over the look-back period. For example, if we are computing 10-day VaR for regulatory capital usage under the Market Risk Rule, RiskSpan takes actual 10-day changes in market variables. This approach allows our VaR scenarios to account for natural changes in correlation under extreme market moves, such as occurs during a flight-to-quality where risky assets tend to underperform risk-free assets, and risky assets tend to move in a highly correlated manner. RiskSpan believes this is a more natural way to capture changing correlations, without the arbitrary overlay of how to change correlations in extreme market moves. This, in turn, will more correctly capture VaR.5
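A simplified sketch of this scenario-generation step appears below; the rate series is simulated purely as a stand-in for actual market history, and a production implementation would apply the observed shocks to the full set of risk factors rather than a single series.

```python
# A minimal sketch of building historical scenarios from actual overlapping
# n-day changes in one market variable. The "history" here is simulated data,
# used only as a placeholder for a real look-back window.
import numpy as np
import pandas as pd

n = 10                                             # holding period, e.g., 10-day VaR
rng = np.random.default_rng(1)
rates = pd.Series(0.03 + rng.normal(0, 0.0005, 500).cumsum())  # stand-in for a daily rate history

shocks = rates.diff(n).dropna()                    # actual observed n-day changes over the look-back
scenarios = rates.iloc[-1] + shocks                # apply each historical change to today's level
print(scenarios.describe())
```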

 


Chapter 3
Calculating Simulated P/L


With the VaR scenarios defined, we move on to computing P/L under these scenarios. Generally, there are two methods employed

  1. A Taylor approximation of P/L for each instrument, sometimes called “delta-gamma”
  2. A full revaluation of each instrument using its market-accepted technique for valuation

Market practitioners sometimes blend these two techniques, employing full revaluation where the valuation technique is simple (e.g. yield + spread) and using delta-gamma where revaluation is more complicated (e.g. OAS simulation on MBS).

 

DELTA-GAMMA P/L APPROXIMATION

Many market practitioners use a Taylor approximation or “delta-gamma” approach to valuing an instrument under each VaR scenario. For instruments whose price function is approximately linear across each of the m risk factors, users tend to use the first order Taylor approximation, where the instrument price under the kth VaR scenario is given by

P(k) ≈ P(0) + Σi (∂P/∂xi) Δxi(k)

Making the price change in the kth scenario

ΔP(k) = P(k) − P(0) ≈ Σi (∂P/∂xi) Δxi(k)

Where ΔP is the simulated price change, Δxi is the change in the ith risk factor, and ∂P/∂xi is the price delta with respect to the ith risk factor evaluated at the base case. In many cases, these partial derivatives are approximated by bumping the risk factors up/down.6 If the instrument is slightly non-linear, but not non-linear enough to use a higher order approximation, then approximating a first derivative can be a source of error in generating simulated prices. For instruments that are approximately linear, using first order approximation is typically as good as full revaluation. From a computation standpoint, it is marginally faster but not significantly so. Instruments whose price function is approximately linear also tend to have analytic solutions to their initial price functions, for example yield-to-price, and these analytic solutions tend to be as fast as a first-order Taylor approximation. If the instrument is non-linear, practitioners must use a higher order approximation which introduces second-order partial derivatives. For an instrument with m risk-factors, we can approximate the price change in the kth scenario by using the multivariate second order Taylor approximation

ΔP(k) ≈ Σi (∂P/∂xi) Δxi(k) + ½ Σi Σj (∂²P/∂xi∂xj) Δxi(k) Δxj(k)
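The toy example below contrasts these approximations with full revaluation for a deliberately simple two-factor quadratic price function (an illustrative assumption, not an MBS pricing model); it also shows the finite-difference “bumping” used to approximate the partials.

```python
# A minimal sketch contrasting first- and second-order Taylor P/L approximations
# with full revaluation, using a toy two-factor quadratic price function.
import numpy as np

def price(x):                                   # toy non-linear price function of two risk factors
    return 100.0 - 4.0 * x[0] - 1.5 * x[1] + 8.0 * x[0] ** 2 + 2.0 * x[0] * x[1]

x0 = np.array([0.0, 0.0])
eps = 1e-4

# Finite-difference deltas and gammas ("bumping" the risk factors up/down)
grad = np.array([(price(x0 + eps * e) - price(x0 - eps * e)) / (2 * eps) for e in np.eye(2)])
hess = np.empty((2, 2))
for i, ei in enumerate(np.eye(2)):
    for j, ej in enumerate(np.eye(2)):
        hess[i, j] = (price(x0 + eps * (ei + ej)) - price(x0 + eps * ei)
                      - price(x0 + eps * ej) + price(x0)) / eps ** 2

dx = np.array([0.25, -0.10])                    # a large scenario move
full   = price(x0 + dx) - price(x0)             # full revaluation P/L
first  = grad @ dx                              # first-order ("delta") approximation
second = first + 0.5 * dx @ hess @ dx           # second-order ("delta-gamma") approximation
print(full, first, second)  # for this quadratic, second-order matches full revaluation; first-order does not
```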

To simplify the application of the second-order Taylor approximation, practitioners tend to ignore many of the cross-partial terms. For example, in valuing MBS under delta-gamma, practitioners tend to simplify the approximation by using the first derivatives and a single “convexity” term, which is the second derivative of price with respect to overall rates. Using this short-cut raises a number of issues:

  1. It assumes that the cross-partials have little impact. For many structured products, this is not true.7
  2. Since structured products calculate deltas using finite shifts, how exactly does one calculate second-order mixed partials?8
  3. For structured products, using a single, second-order “convexity” term assumes that the second order term with respect to rates is uniform across the curve and does not vary by where you are on the curve. For complex mortgage products such as mortgage servicing rights, IOs and Inverse IOs, convexity can vary greatly depending on where you look at the curve.

Using a second-order approximation assumes that the second order derivatives are constant as rates change. For MBS, this is not true in general. For example, in the graphs below we show a constant-OAS price curve for TBA FNMA 30yr 3.5%, as well as a graph of its “DV01”, or first derivative with respect to rates. As you can see, the DV01 graph is non-linear, implying that the convexity term (second derivative of the price function) is non-constant, rendering a second-order Taylor approximation a weak assumption. This is especially true for large moves in rate, the kind of moves that dominate the computation of the VaR.9

In addition to the assumptions above, we occasionally observe that commercial VaR providers compute 1-day VaR and, in the interest of computational savings, scale this 1-day VaR by √10 to generate 10-day VaR. This approximation only works if

  1. Changes in risk factors are all independently, identically distributed (no autocorrelation or heteroscedasticity)
  2. The asset price function is linear in all risk factors

In general, neither of these conditions holds, and using a scaling factor of √10 will likely yield an incorrect value for 10-day VaR.10
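The simulation below (using AR(1) daily changes as an assumed illustration of autocorrelation) shows how the √10 scaling can understate the empirical 10-day VaR when the first condition fails.

```python
# A minimal sketch showing how sqrt(10) scaling of 1-day VaR can misstate
# 10-day VaR when daily changes are autocorrelated. Daily changes are simulated
# as an AR(1) process purely for illustration.
import numpy as np

rng = np.random.default_rng(42)
phi, n_days = 0.3, 100_000
eps = rng.normal(0, 1.0, n_days)
r = np.empty(n_days)
r[0] = eps[0]
for t in range(1, n_days):                      # AR(1): today's change depends on yesterday's
    r[t] = phi * r[t - 1] + eps[t]

var_1d = -np.percentile(r, 1)                   # 99% 1-day VaR
r_10d = np.add.reduceat(r, np.arange(0, n_days, 10))   # non-overlapping 10-day changes
var_10d = -np.percentile(r_10d, 1)              # empirical 99% 10-day VaR

print(f"sqrt(10) x 1-day VaR: {np.sqrt(10) * var_1d:.2f}")
print(f"empirical 10-day VaR: {var_10d:.2f}")   # larger here, because positive autocorrelation compounds moves
```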

 

RATIONALIZING WEAKNESS IN THE APPROXIMATION

With the weaknesses in the Taylor approximation cited above, why do some providers still use delta-gamma VaR? Most practitioners will cite that the Taylor approximation is much faster than full revaluation for complex, non-linear instruments. While this seems true at first glance, you still need to:

  1. Compute or approximate all the first partial derivatives
  2. Compute or approximate some of the second partial derivatives and decide which are relevant or irrelevant. This choice may vary from security type to security type.

Neither of these tasks is computationally simple for the complex, path-dependent securities found in many portfolios. Further, the choice of which second-order terms to ignore has to be supported by documentation to satisfy regulators under the Market Risk Rule.

Even after approximating partials and making multiple, qualitative assessments of which second-order terms to include/exclude, we are still left with error from the Taylor approximation. This error grows with the size of the market move, which also tends to be the scenarios that dominate the VaR calculation. With today’s flexible cloud computation and ultra-fast, cheap processing, the Taylor approximation and its computation of partials end up being only marginally faster than a full revaluation for complex instruments.11

With the weaknesses in Taylor approximation, especially with non-linear instruments, and the speed and cheapness of full revaluation, we believe that fully revaluing each instrument in each scenario is both more accurate and more straightforward than having to defend a raft of assumptions around the Taylor approximation.

Chapter 4
Conclusion

Talk Scope

With these points in mind, what is the best method for computing VaR? Considering the complexity of many instruments, and considering the comparatively cheap and fast computation available through today’s cloud computing, we believe that calculating VaR using a historical-scenario, full revaluation approach provides the most accurate and robust VaR framework.

From a scenario generation standpoint, using historical scenarios allows risk factors to evolve in a natural way. This in turn captures actual changes in risk factor correlations, changes which can be especially acute in large market moves. In contrast, a Monte Carlo simulation of scenarios typically allows users to “stress” correlations, but these stress scenarios are arbitrary, which may ultimately lead to misstated risk.

From a valuation standpoint, we feel that full revaluation of assets provides the most accurate representation of risk, especially for complex instruments such as complex ABS and MBS securities. The assumptions and errors introduced in the Taylor approximation may overwhelm any minor savings in run-time, given today’s powerful and cheap cloud analytics. Further, the Taylor approximation forces users to make and defend qualitative judgements of which partial derivatives to include and which to ignore. This greatly increases the management burden around VaR as well as the regulatory scrutiny around justifying these assumptions.

In short, we believe that a historical scenario, full-revaluation VaR provides the most accurate representation of VaR, and that today’s cheap and powerful computing make this approach feasible for most books and trading positions. For VaR, it’s no longer necessary to settle for second-best.


ENDNOTES

1 The holding period n is typically one day, ten days, or 21 days (a business-month) although in theory it can be any length period.
 
2 We can also partition the book into different sub-books or “equivalence classes” and compute VaR on each class in the partition. The entire book is the trivial partition.
 
3 There is a third approach to VaR: parametric VaR, where the distributions of asset prices are described by the well-known distributions such as Gaussian. Given the often-observed heavy-tail distributions, combined with difficulties in valuing complex assets with non-linear payoffs, we will ignore parametric VaR in this review.
 
4 The academic literature contains many papers on increased correlation during extreme market moves, for example [5]

5 For example, a bank may have positions in two FX pairs that are poorly correlated in normal times and highly negatively correlated in times of stress. If a 99%ile worst-move coincides with a stress period, then the aggregate P/L from the two positions may offset each other. If we used the overall correlation to drive a Monte Carlo simulated VaR, the calculated VaR could be much higher.

6 This is especially common in MBS, where the first and second derivatives are computed using a secant-line approximation after shifting risk factors, such as shifting rates ± 25bp

7 For example, as rates fall and a mortgage becomes more refinancible, the mortgage’s exposure to implied volatility also increases, implying that the cross-partial for price with respect to rates and vol is non-zero.

8 Further, since we are using finite shifts, the typical assumption that ƒxy = ƒyx which is based on the smoothness of ƒ(x,y) does not necessarily hold. Therefore, we need to compute two sets of cross partials, further increasing the initial setup time.

9 Why is the second derivative non-constant? As rates move significantly, prepayments stop rising or falling. At these “endpoints,” cash flows on the mortgage change little, making the instrument positively convex like a fixed-amortization schedule bond. In between, changes in prepayments cause the mortgage to extend or shorten as rates rise or fall, respectively, which in turn make the MBS negatively convex.

10 Much has been written on the weakness of this scaling, see for example [7]

11 For example, using a flexible computation grid RiskSpan can perform a full OAS revaluation on 20,000 MBS passthroughs using a 250-day lookback period in under one hour. Lattice-solved options are an order of magnitude faster, and analytic instruments such as forwards, European options, futures and FX are even faster.

