Articles Tagged with: General

EDGE: QM vs Non-QM Prepayments

Prepayment speeds for qualified mortgages (QM loans) have anecdotally been faster than those for non-QM loans. For various reasons, the data necessary to analyze interest rate incentive response has not been readily available for these categories of mortgages.

Over the last year, in order to facilitate the generation of traditional refinancing curves (S-curves), we have normalized data to improve the differentiation of QM versus non-QM loans within non-agency securities.

Additionally, we isolated the population to remove the prepay impact of loan balance and seasoning.

The analysis below was performed on securitized loans with 9 to 36 months of seasoning and an original balance between $200k and $500k. S-curves were generated for observation periods from January 2016 through July 2021.

Results are shown in the table and chart below.

[Table and chart: QM vs. non-QM S-curves (Edge-QM-vs-Non-QM-Refi-Incentive)]

For this analysis, refinance incentive was calculated as the difference between the mortgage note rate and the 6-week-lagged Freddie Mac primary mortgage market survey (PMMS) rate. Non-QM borrowers would not be able to easily refi into a conventional mortgage. We further analyzed the data by examining prepayment speeds for QM and non-QM loans at different levels of SATO. SATO, the spread at origination, is calculated as the difference between the mortgage note rate and the prevailing PMMS rate at the time of the loan’s origination.
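Both definitions reduce to simple date-aligned rate differences. Below is a minimal Python sketch with toy PMMS values and hypothetical column names; it illustrates the two formulas only and is not RiskSpan production code:

    import pandas as pd

    # Toy PMMS series: weekly survey rates indexed by date (hypothetical values)
    pmms = pd.Series(
        [4.20, 3.85, 4.40, 3.04],
        index=pd.to_datetime(["2017-03-02", "2017-09-07", "2018-02-01", "2021-04-15"]),
    ).sort_index()

    def pmms_asof(dates):
        """Most recent PMMS print on or before each date."""
        idx = pd.DatetimeIndex(dates)
        filled = pmms.reindex(pmms.index.union(idx)).ffill()
        return filled.loc[idx].to_numpy()

    loans = pd.DataFrame({
        "note_rate": [3.875, 4.500, 5.125],  # mortgage note rates (%)
        "orig_date": pd.to_datetime(["2017-03-15", "2017-09-15", "2018-02-15"]),
        "obs_date": pd.to_datetime(["2021-06-01"] * 3),
    })

    # SATO: note rate minus the PMMS rate prevailing at origination
    loans["sato"] = loans["note_rate"] - pmms_asof(loans["orig_date"])

    # Refi incentive: note rate minus the 6-week-lagged PMMS rate
    lagged = loans["obs_date"] - pd.Timedelta(weeks=6)
    loans["incentive"] = loans["note_rate"] - pmms_asof(lagged)
    print(loans[["sato", "incentive"]])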

[Chart: QM vs. non-QM prepayment speeds by SATO (Edge-QM-vs-Non-QM-Refi-Incentive)]

The empirical data maintained by RiskSpan shows that the refinance response for QM loans remains significantly faster than that for non-QM loans.

Using Edge, RiskSpan’s data analytics platform, we can examine any loan characteristic and generate S-curves, aging curves, and time series. If you are interested in performing historical analysis on securitized loan data, please contact us for a free demonstration.


Managing Market Risk for Crypto Currencies

 

Contents

Overview

Asset Volatility vs Asset Sensitivity to Benchmark (Beta)

Portfolio Asset Covariance

Value at Risk (VaR)

Bitcoin Futures: Basis and Proxies

Intraday Value at Risk (VaR)

Risk-Based Limits

VaR Validation (Bayesian Approach)

Scenario Analysis

Conclusion


Overview

Crypto currencies have now become part of institutional investment strategies. According to CoinShares, assets held under management by crypto managers reached $57B at the end of Q1 2021.  

Like any other financial asset, crypto investments are subject to market risk monitoring, with several approaches evolving. Crypto currencies exhibit no obvious correlation to other asset classes, risk factors, or economic variables. However, crypto currencies have exhibited high price volatility and have enough historical data to implement a robust market risk process.

In this paper we discuss approaches to implementing market risk analytics for a portfolio of crypto assets. We will look at betas to benchmarks, correlations, Value at Risk (VaR) and historical event scenarios. 

Value at Risk allows risk managers to implement risk-based limits structures, instead of relying on traditional notional measures. The methodology we propose enables consolidation of risk for crypto assets with the rest of the portfolio. We will also discuss the use of granular time horizons for intraday limit monitoring. 

Asset Volatility vs Asset Sensitivity to Benchmark (Beta)

For exchange-traded instruments, beta measures the sensitivity of asset price returns relative to a benchmark. For US-listed large cap stocks, beta is generally computed relative to the S&P 500 index. For crypto currencies, several eligible benchmark indices have emerged that represent the performance of the overall crypto currency market.

We analyzed several currencies against S&P’s Bitcoin Index (SPBTC). SPBTC is designed to track the performance of the original crypto asset, Bitcoin. As market capitalization for other currencies grows, it would be more appropriate to switch to a dynamic multi-currency index such as Nasdaq’s NCI. At the time of this paper, Bitcoin constituted 62.4% of NCI.

Traditionally, beta is calculated over a variable time frame using least squares fit on a linear regression of benchmark return and asset return. One of the issues with calculating betas is the variability of the beta itself. In order to overcome that, especially given the volatility of crypto currencies, we recommend using a rolling beta.

Due to the varying levels of volatility and liquidity of various crypto currencies, a regression model may not always be a good fit. In addition to tracking fit through R-squared, it is important to track confidence level for the computed betas.
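The sketch below illustrates one way to compute a rolling beta with confidence bands, assuming two aligned daily return series and a normal approximation for the interval; the 90-day window and 95% z-value are illustrative choices, not a prescription:

    import numpy as np
    import pandas as pd

    def rolling_beta(asset_ret: pd.Series, bench_ret: pd.Series,
                     window: int = 90, z: float = 1.96) -> pd.DataFrame:
        """OLS beta of asset returns on benchmark returns over a rolling
        window, with an approximate 95% confidence interval per estimate."""
        rows = []
        for end in range(window, len(asset_ret) + 1):
            y = asset_ret.iloc[end - window:end].to_numpy()
            x = bench_ret.iloc[end - window:end].to_numpy()
            x_c = x - x.mean()
            beta = (x_c @ y) / (x_c @ x_c)                 # least-squares slope
            resid = y - (y.mean() - beta * x.mean()) - beta * x
            se = np.sqrt(resid.var(ddof=2) / (x_c @ x_c))  # slope standard error
            rows.append((asset_ret.index[end - 1], beta, beta - z * se, beta + z * se))
        return pd.DataFrame(rows, columns=["date", "beta", "lo", "hi"]).set_index("date")

    # Example on simulated data: asset built with a true beta of 1.2
    rng = np.random.default_rng(0)
    bench = pd.Series(rng.normal(0, 0.04, 500))
    asset = 1.2 * bench + pd.Series(rng.normal(0, 0.03, 500))
    print(rolling_beta(asset, bench).tail())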

Figure 1: History of Beta to S&P Bitcoin Index with Confidence Intervals

The chart above shows rolling betas and confidence intervals for four crypto currencies between January 2019 and July 2021. Beta and confidence interval both vary over time, and periods of high volatility (stress) cause larger dislocations in the value of beta.

Rolling betas can be used to generate a hierarchical distribution of expected asset values.

Portfolio Asset Covariance

Beta is a useful measure to track an asset’s volatility relative to a single benchmark. In order to numerically analyze the risk exposure (variance) of a portfolio with multiple crypto assets, we need to compute a covariance matrix. Portfolio risk is a function not only of each asset’s volatility but also of the cross-correlation among them.
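A minimal sketch of the mechanics follows, using simulated daily returns in place of real data; the equally weighted sample covariance shown here is the simplest estimator, not the hierarchical time-varying version discussed below:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    rets = pd.DataFrame(rng.normal(0.0, 0.05, size=(250, 3)),
                        columns=["BTC", "ETH", "LTC"])  # toy daily returns

    cov = rets.cov()    # covariance matrix (drives portfolio variance)
    corr = rets.corr()  # cross-correlation matrix, as in Figure 2

    # Portfolio volatility: sqrt(w' * Cov * w)
    w = np.array([0.5, 0.3, 0.2])
    port_vol = float(np.sqrt(w @ cov.to_numpy() @ w))
    print(corr.round(2))
    print(port_vol)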

Figure 2: Correlations for 11 currencies (calculated using observations from 2021)

The table above shows a correlation matrix across 11 crypto assets, including Bitcoin.

Like betas, correlations among assets change over time. But correlation matrices are more unwieldy to track over time than betas are. For this reason, hierarchical models provide a good, practical framework for time-varying covariance matrices.

Value at Risk (VaR)

The VaR for a position or portfolio can be defined as some threshold Τ (in dollars) where the existing position, when faced with market conditions resembling some given historical period, will have P/L greater than Τ with probability k. Typically, k  is chosen to be 99% or 95%.

To compute this threshold Τ, we need to:

  1. Set a significance percentile k, a market observation period, and holding period n.
  2. Generate a set of future market conditions (scenarios) from today to period n.
  3. Compute a P/L on the position for each scenario.

After computing each position’s P/L, we sum the P/L for each scenario and then rank the scenarios’ P/Ls to find the kth percentile (worst) loss. This loss defines our VaR Τ at the kth percentile for observation-period length n.
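In code, the ranking step is a percentile of the summed scenario P/L vector. A minimal Python sketch, assuming P/L is expressed in dollars with losses negative:

    import numpy as np

    def historical_var(scenario_pnl: np.ndarray, k: float = 0.99) -> float:
        """VaR threshold T at confidence k: the (1 - k) quantile of simulated
        P/L, i.e. the kth percentile worst loss across scenarios."""
        return float(np.quantile(scenario_pnl, 1.0 - k))

    pnl = np.random.default_rng(1).normal(0.0, 1_000.0, size=1_000)  # toy P/L vector
    print(historical_var(pnl, k=0.95))  # 95th percentile loss (a negative number)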

Determining what significance percentile k and observation length n to use is straightforward and often dictated by regulatory rules. For example, 99th percentile 10-day VaR is used for risk-based capital under the Market Risk Rule. Generating the scenarios and computing P/L under these scenarios is open to interpretation. We cover each of these, along with the advantages and drawbacks of each, in the next two sections.

To compute VaR, we first need to generate projective scenarios of market conditions. Broadly speaking, there are two ways to derive this set of scenarios:

  1. Project future market conditions using historical (actual) changes in market conditions
  2. Project future market conditions using a Monte Carlo simulation framework

In this paper, we consider a historical simulation approach.

RiskSpan projects future market conditions using actual (observed) n-period changes in market conditions over the lookback period. For example, if we are computing 1-day VaR for regulatory capital usage under the Market Risk Rule, RiskSpan takes actual daily changes in risk factors. This approach allows our VaR scenarios to account for natural changes in correlation under extreme market moves. RiskSpan finds this to be a more natural way of capturing changing correlations without the arbitrary overlay of how to change correlations in extreme market moves. This, in turn, will more accurately capture VaR. Please note that newer crypto currencies may not have enough data to generate a meaningful set of historical scenarios. In these cases, a benchmark adjusted by a short-term beta may be used as an alternative.

One key consideration for the historical simulation approach is the selection of the observation window or lookback period. Most regulatory guidelines require at least a one-year window. However, practitioners also recommend a shorter lookback period for highly volatile assets. In the chart below we illustrate how VaR for our portfolio of crypto currencies changes for a range of lookback periods and confidence intervals. Please note that VaR is expressed as a percentage of portfolio market value.

An exponentially weighted moving average methodology can be used to overcome the challenges associated with a shorter lookback period. This approach emphasizes recent observations by using exponentially weighted moving averages of squared deviations. In contrast to equally weighted approaches, it attaches different weights to the past observations contained in the observation period. Because the weights decline exponentially, the most recent observations receive much more weight than earlier observations.
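A minimal Python sketch of the weighting scheme, assuming a RiskMetrics-style decay factor of 0.94 (an illustrative choice, not RiskSpan's calibration) and a scenario P/L vector ordered newest first:

    import numpy as np

    def ewma_weighted_var(pnl_newest_first: np.ndarray, k: float = 0.99,
                          lam: float = 0.94) -> float:
        """VaR from an exponentially weighted empirical P/L distribution."""
        w = lam ** np.arange(len(pnl_newest_first))  # age 0 = most recent scenario
        w /= w.sum()                                 # normalize weights
        order = np.argsort(pnl_newest_first)         # sort worst loss first
        cum = np.cumsum(w[order])                    # accumulated tail mass
        idx = int(np.searchsorted(cum, 1.0 - k))     # first point past (1 - k) mass
        return float(pnl_newest_first[order][idx])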

Figure 3: Daily VaR as % of Market Value calculated using various historical observation periods

VaR as a single number does not represent the distribution of P/L outcomes. In addition to computing VaR under various confidence intervals, we also compute expected shortfall, worst loss, and standard deviation of simulated P/L vectors. Worst loss and standard deviation are self-explanatory while the calculation of expected shortfall is described below.

Expected shortfall is the average of all the P/L figures to the left of the VaR figure. If we have 1,000 simulated P/L vectors ranked from best to worst, and the VaR is the 950th observation, the expected shortfall is the average of P/Ls from 951 to 1,000.
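A companion sketch under the same conventions as the VaR example above (losses negative, k = 0.95 averages the worst 5% of scenarios):

    import numpy as np

    def expected_shortfall(scenario_pnl: np.ndarray, k: float = 0.95) -> float:
        """Average P/L of the scenarios at or beyond the VaR threshold."""
        var = np.quantile(scenario_pnl, 1.0 - k)
        return float(scenario_pnl[scenario_pnl <= var].mean())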


The table below presents VaR-related metrics as a percentage of portfolio market value under various lookback periods.

Figure 4: VaR for a portfolio of crypto assets computed for various lookback periods and confidence intervals

Bitcoin Futures: Basis and Proxies

One of the most popular trades for commodity futures is the basis trade, in which traders build a strategy around the difference between the spot price and the futures contract price of a commodity. Basis trades exist in corn, soybeans, oil and, of course, Bitcoin. For the purpose of calculating VaR, specific contracts may not provide enough history, so risk systems use continuous contracts. Continuous contracts introduce additional basis, as seen in the chart below. Risk managers need to work with the front office to align risk factor selection with trading strategies, without compromising the independence of the risk process.

Figure 5: BTC/Futures basis difference between generic and active contracts

Intraday Value at Risk (VaR)

The highly volatile nature of crypto currencies requires another consideration for VaR calculations. A typical risk process is run at the end of the day, and VaR is calculated for a one-day or longer forecasting period. But crypto currencies, especially Bitcoin, can also show significant intraday price movements.

We obtained intraday prices for Bitcoin (BTC) from Gemini, which is ranked third by volume. This data was normalized to create time series to generate historical simulations. The chart below shows VaR as a percentage of market value for Bitcoin (BTC) for one-minute, one-hour and one-day forecasting periods. Our analysis shows that a Bitcoin position can lose as much as 3.5% of its value in one hour (99th percentile VaR).
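A sketch of the horizon mechanics, assuming px is a pandas Series of one-minute BTC prices on a timestamp index (the resample aliases are standard pandas; the function is illustrative, not our production pipeline):

    import numpy as np
    import pandas as pd

    def intraday_var(px: pd.Series, horizon: str, k: float = 0.99) -> float:
        """Historical-simulation VaR, as a % of value, for a given horizon."""
        bars = px.resample(horizon).last().dropna()  # normalize to the horizon grid
        rets = bars.pct_change().dropna()            # horizon-length returns
        return float(-np.quantile(rets, 1.0 - k) * 100)

    # Usage for one-minute, one-hour, and one-day horizons:
    # var_1m, var_1h, var_1d = (intraday_var(px, h) for h in ("1min", "1h", "1D"))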

[Chart: Bitcoin (BTC) VaR as % of market value for one-minute, one-hour, and one-day horizons]

Risk-Based Limits 

Since its inception as a concept, Value at Risk has been used by companies to manage limits for trading units. VaR serves as a single risk-based limit metric with several advantages and a few challenges:

Pros of using VaR for risk-based limits:

  • VaR can be applied across all levels of portfolio aggregation.
  • Aggregations can be applied across varying exposures and strategies.
  • Today’s cloud scale makes it easy to calculate VaR using granular risk factor data.

VaR can be subject to model risk and manipulation. Transparency and the use of market risk factors can help avoid this pitfall.

The ability to calculate intraday VaR is key to a risk-based limit implementation for crypto assets. Risk managers should consider at least an hourly VaR limit in addition to the traditional daily limits.

VaR Validation (Bayesian Approach)

Standard approaches for back-testing VaR are applicable to portfolios of crypto assets as well.

Given the volatile nature of this asset class, we also explored an approach to validating the confidence interval and percentiles implied from historical simulations. Although this is a topic that deserves its own document, we present a high-level explanation and results of our analysis.

Building on an approach first proposed in the Pyfolio library, we generated a posterior distribution from our historically observed VaR simulations using the Pymc3 package.

Sampling routines from Pymc3 were used to generate 10,000 simulations of the 3-year lookback case. A distribution of percentiles (VaR) was then computed across these simulations.
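The sketch below illustrates the idea in Pymc3, fitting a fat-tailed Student-t model to a toy return series and reading a distribution of 95th percentile outcomes off the posterior predictive draws. The priors and model form are our illustrative assumptions, not necessarily the specification used for the results above:

    import numpy as np
    import pymc3 as pm

    rets = np.random.default_rng(2).standard_t(df=4, size=750) * 0.03  # toy daily returns

    with pm.Model() as model:
        mu = pm.Normal("mu", mu=0.0, sigma=0.1)
        sigma = pm.HalfNormal("sigma", sigma=0.1)
        nu = pm.Exponential("nu_minus_2", 1.0 / 10.0) + 2.0  # fat-tailed degrees of freedom
        pm.StudentT("obs", nu=nu, mu=mu, sigma=sigma, observed=rets)
        trace = pm.sample(2000, tune=1000, cores=1, progressbar=False)
        ppc = pm.sample_posterior_predictive(trace, samples=10_000)

    # One simulated 95% VaR per posterior draw -> a distribution of percentiles
    var_dist = np.percentile(ppc["obs"], 5, axis=1)
    print(var_dist.mean(), np.percentile(var_dist, 1))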

The distribution shows that the mean 95th percentile VaR would be 7.3% vs 8.9% calculated using the historical simulation approach. However, the tail of the distribution indicates a VaR closer to the historical simulation approach. One could conclude that the test indicates that the original calculation still represents the extreme case, which is the motivation behind VaR.

Figure 6: Distribution of percentiles generated from posterior simulations

Scenario Analysis

In addition to standard shock scenarios, we recommend using the volatility of Bitcoin to construct a simulation of outcomes. The chart below shows the change in Bitcoin (BTC) volatility for select events in the last two years. Outside of standard macro events, crypto assets respond to cyber security events and media effects, including social media.

Figure 7: Weekly observed volatility for Bitcoin

Conclusion

Given the volatility of crypto assets, we recommend, to the extent possible, a probability distribution approach. At the very least, risk managers should monitor changes in the relationships (betas) of assets.

For most financial institutions, crypto assets are part of portfolios that include other traditional asset classes. A standard approach must be used across all asset classes, which may make it challenging to apply shorter lookback windows for computing VaR. Use of the exponentially weighted moving average approach (described above) may be considered.

Intraday VaR for this asset class can be significant, and risk managers should set appropriate limits to manage downside risk.

Idiosyncratic risks associated with this asset class have created a need for monitoring scenarios not necessarily applicable to other asset classes. For this reason, more scenarios pertaining to cyber risk are beginning to be applied across other asset classes.  

CONTACT US TO LEARN MORE!

Related Article

Calculating VaR: A Review of Methods


RiskSpan Named to Inaugural STORM50 Ranking by Chartis Research – Winner of “A.I. Innovation in Capital Markets”

Chartis Research has named RiskSpan to its Inaugural “STORM50” Ranking of leading risk and analytics providers. The STORM report “focuses on the computational infrastructure and algorithmic efficiency of the vast array of technology tools used across the financial services industry” and identifies industry-leading vendors that excel in the delivery of Statistical Techniques, Optimization frameworks, and Risk Models of all types.


RiskSpan’s flagship Edge Platform was a natural fit for the designation because of its positioning squarely at the nexus of statistical behavioral modeling (specifically around mortgage credit and prepayment risk) and functionality enabling users to optimize trading and asset management strategies.  Being named the winner of the “A.I. Innovation in Capital Markets” solutions category reflects the work of RiskSpan’s vibrant innovation lab, which includes researching and developing machine learning solutions to structured finance challenges. These solutions include mining a growing trove of alternative/unstructured data sources, anomaly detection in loan-level and other datasets, and natural language processing for constructing deal cash flow models from legal documents.

Learn more about the Edge Platform or contact us to discuss how we might help you modernize and improve your mortgage and structured finance data and analytics.


Is the housing market overheated? It depends where you are.

Mortgage credit risk modeling has evolved slowly in the last few decades. While enhancements leveraging conventional and alternative data have improved underwriter insights into borrower income and assets, advances in data supporting underlying property valuations have been slow. With loan-to-value ratios being such a key driver of loan performance, the stability of a subject property’s value is arguably as important as the stability of a borrower’s income.

Most investors rely on current transaction prices to value comparable properties, largely ignoring the risks to the sustainability of those prices. Lacking the data necessary to identify crucial factors related to a property value’s long-term sustainability, investors generally have little choice but to rely on current snapshots. To address this problem, credit modelers at RiskSpan are embarking on an analytics journey to evaluate the long-term sustainability of a property’s value.

To this end, we are working to pull together a deep dataset of factors related to long-term home price resiliency. We plan to distill these factors into a framework that will enable homebuyers, underwriters, and investors to quickly assess the risk inherent to the property’s physical location. The data we are collecting falls into three broad categories:

  • Regional Economic Trends
  • Climate and Natural Hazard Risk
  • Community Factors

Although regional home price outlook sometimes factors into mortgage underwriting, the long-term sustainability of an individual home price is seldom, if ever, taken into account. The future value of a secured property is arguably of greater importance to mortgage investors than its value at origination. Shouldn’t they be taking an interest in regional economic condition, exposure to climate risk, and other contributors to a property valuation’s stability?

We plan to introduce analytics across all three of these dimensions in the coming months. We are particularly excited about the approach we’re developing to analyze climate and natural hazard risk. We will kick things off, however, with basic economic factors. We are tracking the long-term sustainability of house prices through time by tracking economic fundamentals at the regional level, starting with the ratio of home prices to median household income.

Economic Factors

Housing is hot. Home prices jumped 12.7% nationally in 2020, according to FHFA’s house price index[1]. Few economists are worried about a new housing bubble, and most attribute this rise to supply and demand dynamics. Housing supply is low, and rising housing demand is a function of demography: millennials are hitting 40 and want a home of their own.

But even if the current dynamic is largely driven by low supply, there comes a certain point at which house prices deviate too much from area median household income to be sustainable. Those who bear the most significant exposure to mortgage credit risk, such as GSEs and mortgage insurers, track regional house price dynamics to monitor regions that might be pulling away from fundamentals.

Regional home-price-to-income ratio is a tried-and-true metric for judging whether a regional market is overheating or under-valued. We have scored each MSA by comparing its current home-price-to-income ratio to its long-term average. As the chart below illustrating this ratio’s trend shows, certain MSAs, such as New York, consistently have higher ratios than other, more affordable MSAs, such as Chicago.

Because comparing one MSA to another in this context is not particularly revealing, we instead compare each MSA’s current ratio to its own long-term ratio. MSAs where that ratio exceeds its long-term average are potentially over-heated, while MSAs below that ratio potentially have more room to grow. In the table below highlighting the top 25 MSAs by population, we look at how the home-price-to-household-income ratio deviates from its MSA long-term average. The metric currently suggests that Dallas, Denver, Phoenix, and Portland are experiencing potential market dislocation.
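A simplified sketch of this scoring, assuming a long-format table with hypothetical columns msa, year, median_price, and median_income; RiskSpan's actual index construction is richer than this:

    import pandas as pd

    def score_msas(df: pd.DataFrame) -> pd.DataFrame:
        """Deviation of each MSA's current price-to-income ratio from its
        own long-term average (positive = potentially over-heated)."""
        df = df.assign(ratio=df["median_price"] / df["median_income"])
        lt_avg = df.groupby("msa")["ratio"].mean().rename("lt_avg")
        latest = (df.sort_values("year")
                    .groupby("msa").tail(1)       # most recent year per MSA
                    .set_index("msa")[["ratio"]])
        out = latest.join(lt_avg)
        out["deviation_pct"] = 100 * (out["ratio"] / out["lt_avg"] - 1)
        return out.sort_values("deviation_pct", ascending=False)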

Loans originated during periods of over-heating have a higher probability of default, as illustrated in the scatterplot below. This plot shows the correlation between the extent of the house-price-to-income ratio’s deviation from its long-term average and mortgage default rates. Each dot represents all loan originations in a given MSA for a given year[1]. Only regions with large deviations in house price to income ratio saw explosive default rates during the housing crisis. This metric can be a valuable tool for loan and SFR investors to flag metros to be wary of (or conversely, which metros might be a good buy).

Although admittedly a simple view of the regional economic dynamics driving house prices (fundamentals such as employment, housing starts per capita, and population trends also play important roles), median income is an appropriate place to start. Median income has historically proven itself a valuable tool for spotting regional price dislocations and we expect it will continue to be. Watch this space as we continue to add these and other elements to further refine how we measure property value stability and its likely impact on mortgage credit.


[1] FHFA Purchase Only USA NSA % Change over last 4 quarters

Contact us to learn more.



Is Your Enterprise Risk Management Keeping Up with Recent Regulatory Changes?​

For enterprise risk managers, ensuring that all the firm’s various risk management structures and frameworks are keeping up with ever-changing regulatory guidance can be a daunting task. Regulatory updates take on particular importance for model risk managers. MRM is required not only to understand and comply with the regulatory guidance specific to model risk management itself, but also to understand the regulatory ramifications of the risk models they validate.

This post focuses on recent updates to eight ERM areas that can sometimes seem like a moving target when it comes to risk compliance.

The timeline below illustrates the extensive variability that can exist from regulator to regulator when it comes to which ERM components are of most concern and the nature and speed of adoption. To take one example, model risk management guidance was issued in 2011 and all Fed- and OCC-regulated institutions were in general compliance with it by 2014. The FDIC, however, did not issue the same guidance until 2017 and enforcement varies considerably. Although every FDIC-regulated institution is technically required to be in compliance with the MRM guidance, several have yet to undergo even their first MRM exam. Things get even cloudier for credit unions as the NCUA has not issued any guidance or regulation pertaining to MRM. The NCUA requires MRM practices to be observed during Capital Planning and Stress Testing (per its 2019 capital planning guide). But this narrow definition allows most credit unions to skirt regulator-required MRM entirely.

Because it can be difficult to stay on top of which regulator is requiring what and when, here is a quick summary of recent updates, organized by risk area.

Bank Secrecy Act (BSA)/Anti-Money Laundering (AML)

The past year has seen five guidance updates pertaining to BSA/AML. Most of these seek to increase the effectiveness, predictability, and transparency of BSA/AML regulatory exams. Other updates clarify specific aspects of BSA/AML risk.

  1. Updated Sections of the FFIEC BSA/AML Examination Manual (OCC 2021-10/SR 21-9 & OCC 2021-28). The updated sections:
    • Reinforce the risk-focused approach to BSA/AML examinations, and
    • Clarify regulatory requirements and include updated information for examiners regarding transaction testing, including examples.
  2. Interagency Statement on Model Risk Management for Bank Systems Supporting BSA/AML Compliance and Request for Information (OCC 2021-19/SR 21-8) as of April 12, 2021. This guidance:
    • Outlines the importance of MRM governance to AML exams,
    • Is designed to be flexible when applying MRM principles to BSA/AML models,
    • Updates MRM principles and validation to be more responsive,
    • Seeks not to apply a single industry-wide approach, and
    • Directs validators to consider third-party documentation when reviewing AML models.
  3. Answers to Frequently Asked Questions Regarding Suspicious Activity Reporting and Other AML Considerations (OCC 2021-4) as of January 21, 2021. These include instructions around:
    • Requests by law enforcement for financial institutions to maintain accounts,
    • Receipt of grand jury subpoenas/law enforcement inquiries and suspicious activity report (SAR) filing,
    • Maintaining a customer relationship following the filing of a SAR or multiple SARs,
    • SAR filing on negative news identified in media searches,
    • SAR monitoring on multiple negative media alerts,
    • Information in data fields and narrative, and
    • SAR character limits.
  4. Joint Statement on Bank Secrecy Act Due Diligence Requirements for Customers Who May Be Considered Politically Exposed Persons (OCC 2020-77/SR 20-19) as of August 21, 2020. This statement:
    • Explains that the BSA/AML regulations do not define what constitutes a politically exposed person (PEP),
    • Clarifies that the customer due diligence rule does not create a regulatory requirement and that there is no supervisory expectation for banks to have unique, additional due diligence steps for PEPs,
    • Clarifies how banks can apply a risk-based approach to customer due diligence in developing risk profiles for their customers, and
    • Discusses potential risk factors, levels and types of due diligence.
  5. OCC-Proposed Rule Regarding Exemptions to Suspicious Activity Report Requirements as of December 17, 2020:
    • The proposed rule would amend the agency’s SAR regulations to allow the OCC to issue exemptions from the requirements of those regulations on when and how to file suspicious activity reports (SARs).
Allowance for Loan and Lease Losses (ALLL)/Current Expected Credit Losses (CECL)

Current Expected Credit Losses: Final Rule (OCC 2020-85/SR 19-8/FIL-7-2021) as of October 1, 2020. The rule:

    • Applies to all community banks that adopted CECL in 2020 per GAAP requirements,
    • Exempts all other institutions until 2023,
    • Adopts all of the 2020 CECL IFR, and
    • Clarifies that a banking organization is not required to use the transition during fiscal quarters in which it would not generate a regulatory capital benefit.
Asset Liability Management (ALM) and Liquidity Risk Management 

Four important updates to ALM and liquidity risk guidance were issued in the past year.

  1. Net Stable Funding Ratio: Final Rule (OCC 2021-9) as of February 24, 2021. The rule:
    • Implements a minimum stable funding requirement designed to reduce the likelihood that disruptions to a covered company’s regular sources of funding will compromise its liquidity position,
    • Requires the maintenance of a ratio of “available stable funding” to “required stable funding” of at least 1.0 on an ongoing basis,
    • Defines “available stable funding” as the stability of a banking organization’s funding sources,
    • Defines “required stable funding” as the liquidity characteristics of a banking organization’s assets, derivatives, and off-balance-sheet exposures,
    • Requires notification of a shortfall, realized or potential, within 10 business days, and
    • Provides public disclosure rules for a consolidated NSFR.
  2. Volcker Rule Covered Funds: Final Rule (OCC 2020-71) as of July 31, 2020. The rule:
    • Permits the activities of qualifying foreign excluded funds,
    • Revises the exclusions from the definition of “covered fund,”
    • Creates new exclusions from the definition of covered fund for credit funds, qualifying venture capital funds, family wealth management vehicles, and customer facilitation vehicles, and
    • Modifies the definition of “ownership interest.”
  3. Interest Rate Risk: Revised Comptroller’s Handbook Booklet (OCC 2020-26) as of March 26, 2020. The updated Handbook:
    • Expands discussions on MRM expectations for reviewing and testing model assumptions,
    • Addresses funds transfer pricing (FTP), and
    • Adds guidelines for advanced approaches to interest rate risk management consistent with the Pillar 2 supervisory approach.
  4. Capital and Liquidity Treatment for Money Market Liquidity Facility and Paycheck Protection Program: Final Rule (OCC 2020-96) as of November 3, 2020. This rule:
    • Permits a zero-percent risk weight for PPP loans,
    • Eliminates the regulatory capital impact and liquidity rule provisions for participating in the PPP and Money Market Liquidity Facility.
Artificial Intelligence (AI)/Machine Learning (ML)

The only recent regulatory update pertaining to AI/machine learning has been a request for comment related to usage, controls, governance, and risk. At present, there is no formal guidance specifically related to AI or ML models. The OCC’s Semiannual Risk Perspective includes just a couple of sentences stating that users of ML models should be able to defend and explain their risks. The Fed’s feedback has been similarly broad. Movement seems afoot to issue more detailed guidance on how ML models should be governed and monitored. But this is likely to be limited to specific applications and not to the ML models themselves.

The Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence, Including Machine Learning (OCC 2021-17) as of March 31, 2021, seeks respondents’ views on appropriate governance, risk management, and controls over artificial intelligence, and any challenges in developing, adopting, and managing artificial intelligence approaches.

Capital Risk 

We focus on two items of capital risk guidance issued in the past year. The first applies to community banks with total assets of less than $10 billion as of December 31, 2019.

  1. Temporary Asset Thresholds: Interim Final Rule (OCC 2020-107) as of December 2, 2020:
    • The rule allows these institutions to use asset data as of December 31, 2019, to determine the applicability of various regulatory asset thresholds during calendar years 2020 and 2021.
  2. Regulatory Capital Rule: Eligible Retained Income: Final Rule (OCC 2020-87) as of October 8, 2020:

The final rule revises the definition of eligible retained income to the greater of:

    • Net income for the four preceding calendar quarters, net of any distributions and associated tax effects not already reflected in net income, and
    • The average of a Bank’s net income over the preceding four quarters.
Fair Lending 
  1. Community Reinvestment Act: Key Provisions of the June 2020 CRA Rule and Frequently Asked Questions (OCC 2020-99) as of November 9, 2020:

The rule establishes new criteria for designating bank assessment areas, including:

    • Facility-based assessment areas based on the location of a bank’s main office and branches and, at a bank’s discretion, on the location of the bank’s deposit-taking automated teller machines, and
    • Deposit-based assessment areas, which apply to a bank with 50 percent or more of its retail domestic deposits outside its facility-based assessment areas.
  2. Community Reinvestment Act: Implementation of the June 2020 Final Rule (OCC 2021-24) as of May 18, 2021. The OCC has determined that it will reconsider its June 2020 rule. While this reconsideration is ongoing, the OCC will not implement or rely on the evaluation criteria in the June 2020 rule pertaining to:
    • Quantification of qualifying activities
    • Assessment areas
    • General performance standards
    • Data collection
    • Recordkeeping
    • Reporting
Market Risk 
  1. Libor Transition: Self-Assessment Tool for Banks (OCC 2021-7) as of February 10, 2021. The self-assessment tool can be used to assess the following:
    • Five primary topics: Assets and contracts; LIBOR risk exposure; Fallback language; Consumer impact; Third-party service provider
    • The appropriateness of a bank’s Libor transition plan
    • Bank management’s execution of the bank’s transition plan
    • Related oversight and reporting
  2. Standardized Approach for Counterparty Credit Risk; Correction: Final Rule (OCC 2020-82) as of September 21, 2020. The issuance corrects errors in the standardized approach for counterparty credit risk (SA-CCR) rule:
    • Clarifying that a Bank that uses SA-CCR will be permitted to exclude the future exposure of all credit derivatives
    • Revising the number of outstanding margin disputes
    • Correcting the calculation of the hypothetical capital requirement of a qualifying central counterparty (QCCP)
  3. Agencies Finalize Amendments to Swap Margin Rule (News Release 2020-83) as of June 25, 2020:
    • Swap entities that are part of the same banking organization will no longer be required to hold a specific amount of initial margin for uncleared swaps with each other, known as inter-affiliate swaps.
    • Final rule allows swap entities to amend legacy swaps to replace the reference to LIBOR or other reference rates that are expected to end without triggering margin exchange requirements.
Operations Risk
  1. Corporate and Risk Governance: Revised and New Publications in the Director’s Toolkit (OCC 2020-97) as of November 5, 2020. The updated toolkit:
    • Focuses on key areas of planning, operations, and risk management,
    • Outlines directors’ responsibilities as well as management’s role,
    • Explains basic concepts and standards for safe and sound operation of banks, and
    • Delineates laws and regulations that apply to banks.
  2. Activities and Operations of National Banks and Federal Savings Associations: Final Rule (OCC 2020-111) as of December 23, 2020. The final rule:
    • Defines permissible derivatives activities,
    • Allows engagement in certain tax equity finance transactions,
    • Expands the ability to choose corporate governance provisions under state law,
    • Includes OCC interpretations relating to capital stock issuances and repurchases, and
    • Applies rules relating to finder activities, indemnification, equity kickers, postal services, independent undertakings, and hours and closings to FSAs.
  3. Operational Risk: Sound Practices to Strengthen Operational Resilience (OCC 2020-94) as of October 10, 2020:
    • Outlines standards for operational resilience set forth in the agencies’ rules and guidance,
    • Promotes a principles-based approach for effective governance, robust scenario analysis, secure and resilient information systems, and thorough surveillance and reporting,
    • Introduces sound practices for managing cyber risk.
Contact us to learn more.


RiskSpan VQI: Agency Mortgage Risk Layers for Q2 2021

RiskSpan’s Vintage Quality Index computes and aggregates the percentage of Agency originations each month with one or more “risk factors” (low-FICO, high DTI, high LTV, cash-out refi, investment properties, etc.). Months with relatively few originations characterized by these risk factors are associated with lower VQI ratings. As the historical chart above shows, the index maxed out (i.e., had an unusually high number of loans with risk factors) leading up to the 2008 crisis.
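Conceptually, the construction resembles the sketch below, which assumes loan-level boolean risk flags and equal weighting across factors; the actual VQI flag definitions and weighting are RiskSpan's own:

    import pandas as pd

    RISK_FLAGS = ["low_fico", "high_dti", "high_ltv", "cash_out",
                  "investor", "multi_unit"]  # hypothetical flag columns

    def vintage_quality_index(loans: pd.DataFrame) -> pd.Series:
        """Per-month share of originations carrying each risk layer,
        summed across layers into a single monthly index level."""
        shares = loans.groupby("orig_month")[RISK_FLAGS].mean()  # fraction flagged
        return (shares * 100).sum(axis=1)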

RiskSpan uses the index principally to fine-tune its in-house credit and prepayment models by accounting for shifts in loan composition by monthly cohort.

Rising Home Prices Contribute to More High-DTI Loans and Cash-out Refis

The Vintage Quality Index rose noticeably during the second quarter of 2021 — up to a value of 83.40, compared to 76.68 in the first quarter.

Unlike last quarter, when a precipitous drop in high-LTV loans effectively masked and counterbalanced more modest increases in the remaining risk metrics, this quarter’s sizeable VQI jump is attributable to a more across-the-board increase in risk layers.

A sharp rebound in the percentage of high-LTV loans, a metric that had been in steady decline since the middle of 2019, was accompanied by modest increases in borrowers with low credit scores (FICO below 660) and high debt-to-income ratios (greater than 45%).

The spike in home prices across the country that likely accounts for the rise in high-LTV mortgages also appears to be prompting an increasing number of borrowers to seek cash-out refinancings. More than 22 percent of originations had LTVs in excess of 80 percent at the end of Q2, compared to just 17 percent at the end of Q1. Similarly, nearly 25 percent of mortgages were cash-out refis in June, compared to 22 percent in March.

Modest declines were observed in the percentages of loans on investment and multi-unit properties. All other risk metrics were up for the quarter, as the plots below illustrate.

Population assumptions:

  • Monthly data for Fannie Mae and Freddie Mac.
  • Loans originated more than three months prior to issuance are excluded because the index is meant to reflect current market conditions.
  • Loans likely to have been originated through the HARP program, as identified by LTV, MI coverage percentage, and loan purpose, are also excluded. These loans do not represent credit availability in the market, as they likely would not have been originated today but for the existence of HARP.

Data assumptions:

  • Freddie Mac data goes back to 12/2005; Fannie Mae data goes back only to 12/2014.
  • Certain fields for Freddie Mac data were missing prior to 6/2008.

The GSE historical loan performance data released in support of GSE risk transfer activities was used to help back-fill missing data.

An outline of our approach to data imputation can be found in our VQI Blog Post from October 28, 2015.


EDGE: Extended Delinquencies in Loan Balance Stories

In June, we highlighted Fannie Mae’s and Freddie Mac’s new “expanded delinquency” states. The Enterprises are now reporting delinquency states from 1 to 24 months to better account for loans that are seriously delinquent and not repurchased under the extended timeframe for repurchase of delinquent loans announced in 2020.

This new data reveals a strong correlation between loan balance and “chronically delinquent” loans. In the graph below, we chart loan balance on the x-axis and 180+Day delinquency on the y-axis for 2017-18 production 30yr 3.5s through 4.5s for “generic” borrowers.[1]

As the graph shows, within a given coupon, loans with larger original balances also tended to have higher “chronic delinquencies.”

[Chart: 180+Day delinquency by original loan balance (EDGE-Orig-Loan-Size)]

The graph above also illustrates a clear correlation between higher chronic delinquencies and higher coupons. This phenomenon is most likely due to SATO. While each of these queries excluded low-FICO, high-LTV, and NY loans, the 2017-18 30yr 3.5 cohort was mostly at-the-money origination, whereas 4.0s and 4.5s had an average SATO of 30bp and 67bp respectively. The higher SATO indicates a residual credit quality issue. As one would expect, and we demonstrated in our June analysis, lower-credit-quality loans tend also to have higher chronic delinquencies.

The first effect, higher chronic delinquencies among larger loans within a coupon, is more challenging to understand. We posit that this effect is likely due to survivor bias. The large refi wave over the last 18 months has factored down higher-balance cohorts significantly more than lower-balance cohorts.

[Chart: factors for loan-balance cohorts (EDGE-Factors)]

Higher-credit-quality borrowers tend to refinance more readily than lower-credit-quality borrowers, and because the larger-loan-balance cohorts have seen higher total prepayments, these same cohorts are left with a larger residue of lower-quality credits. The impact of natural credit migration (which is observed in all cohorts) tends to leave behind a larger proportion of credit-impaired borrowers in faster-paying cohorts versus the slower-paying, lower-loan-balance cohorts.

The higher chronic delinquencies in larger-loan-balance cohorts may ultimately lead to higher buyouts, depending on the resolution path taken. Lower-balance cohorts will have less exposure to these potential buyouts, leaving them better protected against any uptick in involuntary speeds.


Contact us if you are interested in seeing variations on this theme. Using Edge, we can examine any loan characteristic and generate an S-curve, aging curve, or time series.


[1] We filtered for borrowers with LTV<=80, FICO>=700, and ex-NY. We chose 2017-18 production to analyze in order to give sufficient time for loans to go chronically delinquent. We see a similar relationship in 2019 production; contact RiskSpan for details.


Climate Terms the Housing Market Needs to Understand

The impacts of climate change on housing and on holders of mortgage risk are very real and growing. As the frequency and severity of perils increase, so does the associated cost, estimated to have grown from $100B in 2000 to $450B in 2020. Many of these costs are not covered by property insurance, leaving homeowners and mortgage investors holding the bag. Even after adjusting for inflation and appreciation, the loss to both investors and consumers is staggering.

Properly understanding this data might require adding some new terms to your personal lexicon. As the housing market begins to get its arms around the impact of climate change to housing, here are a few terms you will want to incorporate into your vocabulary.

  1. Natural Hazard

In partnership with climate modeling experts, RiskSpan has identified 21 different natural hazards that impact housing in the U.S. These include familiar hazards such as floods and earthquakes, along with lesser-known perils, such as drought, extreme temperatures, and other hydrological perils including mudslides and coastal erosion. The housing industry is beginning to work through how best to identify and quantify exposure and incorporate the impact of perils into risk management practices more broadly. Legacy thinking and risk management would classify these risks as covered by property insurance with little to no downstream risk to investors. However, as the frequency and severity increase, it is becoming more evident that risks are not completely covered by property & casualty insurance.

We will address some of these “hidden risks” of climate to housing in a forthcoming post.

  2. Wildland Urban Interface

The U.S. Fire Administration defines Wildland Urban Interface as “the zone of transition between unoccupied land and human development. It is the line, area, or zone where structures and other human development meet or intermingle with undeveloped wildland or vegetative fuels.” An estimated 46 million residences in 70,000 communities in the United States are at risk for WUI fires. Wildfires in California garner most of the press attention. But fire risk to WUIs is not just a west coast problem — Florida, North Carolina and Pennsylvania are among the top five states at risk. Communities adjacent to and surrounded by wildland are at varying degrees of risk from wildfires and it is important to assess these risks properly. Many of these exposed homes do not have sufficient insurance coverage to cover for losses due to wildfire.

  3. National Flood Insurance Program (NFIP) and Special Flood Hazard Area (SFHA)

The National Flood Insurance Program provides flood insurance to property owners and is managed by the Federal Emergency Management Agency (FEMA). Anyone living in a participating NFIP community may purchase flood insurance. But those in specifically designated high-risk SFHAs must obtain flood insurance to obtain a government-backed mortgage. SFHAs as currently defined, however, are widely believed to be outdated and not fully inclusive of areas that face significant flood risk. Changes are coming to the NFIP (see our recent blog post on the topic), but these may not be sufficient to cover future flood losses.

  4. Transition Risk

Transition risk refers to risks resulting from changing policies, practices or technologies that arise from a societal move to reduce its carbon footprint. While the physical risks from climate change have been discussed for many years, transition risks are a relatively new category. In the housing space, policy changes could increase the direct cost of homeownership (e.g., taxes, insurance, code compliance, etc.), increase energy and other utility costs, or cause localized employment shocks (i.e., the energy industry in Houston). Policy changes by the GSEs related to property insurance requirements could have big impacts on affected neighborhoods.

  5. Physical Risk

In housing, physical risks include the risk of loss to physical property or loss of land or land use. The risk of property loss can be the result of a discrete catastrophic event (hurricane) or of sustained negative climate trends in a given area, such as rising temperatures that could make certain areas uninhabitable or undesirable for human housing. Both pose risks to investors and homeowners with the latter posing systemic risk to home values across entire communities.

  6. Livability Risk

We define livability risk as the risk of declining home prices due to the declining desirability of a neighborhood. Although no standard definition of “livability” exists, it is generally understood to be the extent to which a community provides safe and affordable access to quality education, healthcare, and transportation options. In addition to these measures, homeowners also take temperature and weather into account when choosing where to live. Finding a direct correlation between livability and home prices is challenging; however, an increased frequency of extreme weather events clearly poses a risk to long-term livability and home prices.

Data and toolsets designed explicitly to measure and monitor climate related risk and its impact on the housing market are developing rapidly. RiskSpan is at the forefront of developing these tools and is working to help mortgage credit investors better understand their exposure and assess the value at risk within their businesses.

Contact us to learn more.



Why Mortgage Climate Risk is Not Just for Coastal Investors

When it comes to climate concerns for the housing market, sea level rise and its impacts on coastal communities often get top billing. But this article in yesterday’s New York Times highlights one example of far-reaching impacts in places you might not suspect.

Chicago, built on a swamp and virtually surrounded by Lake Michigan, can tie its whole existence as a city to its control and management of water. But as the Times article explains, management of that water is becoming increasingly difficult as various dynamics related to climate change create increasingly large and unpredictable fluctuations in the level of the lake (higher highs and lower lows). These dynamics are threatening the city with more frequent and severe flooding.

The Times article connects water management issues to housing issues in two ways: the increasing frequency of basement flooding caused by sewer overflow and the battering buildings are taking from increased storm surge off the lake. Residents face increasing costs to mitigate their exposure and fear the potentially negative impact on home prices. As one resident puts it, “If you report [basement flooding] to the city, and word gets out, people fear it’s going to devalue their home.”

These concerns — increasing peril exposure and decreasing valuations — echo fears expressed in a growing number of seaside communities and offer further evidence that mortgage investors cannot bank on escaping climate risk merely by avoiding the coasts. Portfolios everywhere are going to need to begin incorporating climate risk into their analytics.



Hurricane Season a Double-Whammy for Mortgage Prepayments

As hurricane (and wildfire) season ramps up, don't sleep on the increase in prepayment speeds after a natural disaster event. The increase in delinquencies might get top billing, but prepays also increase after these events, especially for homes that were fully insured against the risk they experienced. For a mortgage servicer with concentrated geographic exposure to the event area, this can be a double whammy for the balance sheet: delinquencies increase servicing advances, while prepays roll loans off the book. Hurricane Katrina loan performance is a classic example of this dynamic.

[Chart: Hurricane Katrina loan performance]


