Blog Archives

Is Your Enterprise Risk Management Keeping Up with Recent Regulatory Changes?

For enterprise risk managers, ensuring that all the firm’s various risk management structures and frameworks are keeping up with ever-changing regulatory guidance can be a daunting task. Regulatory updates take on particular importance for model risk managers. MRM is required not only to understand and comply with the regulatory guidance specific to model risk management itself, but also to understand the regulatory ramifications of the risk models they validate.

This post focuses on recent updates to eight ERM areas that can sometimes seem like a moving target when it comes to risk compliance.

The timeline below illustrates the extensive variability that can exist from regulator to regulator when it comes to which ERM components are of most concern and the nature and speed of adoption. To take one example, model risk management guidance was issued in 2011, and all Fed- and OCC-regulated institutions were in general compliance with it by 2014. The FDIC, however, did not issue the same guidance until 2017, and enforcement varies considerably. Although every FDIC-regulated institution is technically required to be in compliance with the MRM guidance, several have yet to undergo even their first MRM exam. Things get even cloudier for credit unions, as the NCUA has not issued any standalone guidance or regulation pertaining to MRM. It requires MRM practices to be observed only during capital planning and stress testing (per its 2019 capital planning guide), and this narrow scope allows most credit unions to skirt regulator-required MRM entirely.

Because it can be difficult to stay on top of which regulator is requiring what and when, here is a quick summary of recent updates, organized by risk area.

Bank Secrecy Act (BSA) / Anti-Money Laundering (AML)

The past year has seen five guidance updates pertaining to BSA/AML. Most of these seek to increase the effectiveness, predictability, and transparency of BSA/AML regulatory exams. Other updates clarify specific aspects of BSA/AML risk.

  1. Updated Sections of the FFIEC BSA/AML Examination Manual (OCC 2021-10/SR 21-9 & OCC 2021-28). The updated sections:
    • Reinforce the risk-focused approach to BSA/AML examinations, and
    • Clarify regulatory requirements and include updated information for examiners regarding transaction testing, including examples.
  2. Interagency Statement on Model Risk Management for Bank Systems Supporting BSA/AML Compliance and Request for Information (OCC 2021-19/SR 21-8) as of April 12, 2021. This guidance:
    • Outlines the importance of MRM governance to AML exams,
    • Is designed to be flexible when applying MRM principles to BSA/AML models,
    • Updates MRM principles and validation to be more responsive,
    • Seeks not to apply a single industry-wide approach, and
    • Directs validators to consider third-party documentation when reviewing AML models.
  3. Answers to Frequently Asked Questions Regarding Suspicious Activity Reporting and Other AML Considerations (OCC 2021-4) as of January 21, 2021. These include instructions around:
    • Requests by law enforcement for financial institutions to maintain accounts,
    • Receipt of grand jury subpoenas/law enforcement inquiries and suspicious activity report (SAR) filing,
    • Maintaining a customer relationship following the filing of a SAR or multiple SARs,
    • SAR filing on negative news identified in media searches,
    • SAR monitoring on multiple negative media alerts,
    • Information in data fields and narrative, and
    • SAR character limits.
  4. Joint Statement on Bank Secrecy Act Due Diligence Requirements for Customers Who May Be Considered Politically Exposed Persons (OCC 2020-77/SR 20-19) as of August 21, 2020. This statement:
    • Explains that the BSA/AML regulations do not define what constitutes a politically exposed person (PEP),
    • Clarifies that the customer due diligence rule does not create a regulatory requirement and that there is no supervisory expectation for banks to have unique, additional due diligence steps for PEPs,
    • Clarifies how banks can apply a risk-based approach to customer due diligence in developing risk profiles for their customers, and
    • Discusses potential risk factors, levels and types of due diligence.
  5. OCC-Proposed Rule Regarding Exemptions to Suspicious Activity Report Requirements as of December 17, 2020:
    • The proposed rule would amend the agency’s SAR regulations to allow the OCC to issue exemptions from the requirements of those regulations on when and how to file suspicious activity reports (SARs).
Allowance for Loan and Lease Losses (ALLL) / Current Expected Credit Losses (CECL)

Current Expected Credit Losses: Final Rule (OCC 2020-85/SR 19-8/FIL-7-2021) as of October 1, 2020. The rule:

    • Applies to all community banks that adopted CECL in 2020 per GAAP requirements,
    • Exempts all other institutions until 2023,
    • Adopts all of the 2020 CECL IFR, and
    • Clarifies that a banking organization is not required to use the transition during fiscal quarters in which it would not generate a regulatory capital benefit.
Asset Liability Management (ALM) and Liquidity Risk Management 

Four important updates to ALM and liquidity risk guidance were issued in the past year.

  1. Net Stable Funding Ratio: Final Rule (OCC 2021-9) as of February 24, 2021. The rule:
    • Implements a minimum stable funding requirement designed to reduce the likelihood that disruptions to a covered company’s regular sources of funding will compromise its liquidity position,
    • Requires the maintenance of a ratio of “available stable funding” to “required stable funding” of at least 1.0 on an ongoing basis,
    • Defines “available stable funding” as the stability of a banking organization’s funding sources,
    • Defines “required stable funding” as the liquidity characteristics of a banking organization’s assets, derivatives, and off-balance-sheet exposures,
    • Requires notification of a shortfall, realized or potential within 10 business days, and
    • Provides public disclosure rules for a consolidated NSFR.
  2. Volcker Rule Covered Funds: Final Rule (OCC 2020-71) as of July 31, 2020. The rule:
    • Permits the activities of qualifying foreign excluded funds,
    • Revises the exclusions from the definition of “covered fund,”
    • Creates new exclusions from the definition of covered fund for credit funds, qualifying venture capital funds, family wealth management vehicles, and customer facilitation vehicles, and
    • Modifies the definition of “ownership interest.”
  3. Interest Rate Risk: Revised Comptroller’s Handbook Booklet (OCC 2020-26) as of March 26, 2020. The updated Handbook:
    • Expands discussions on MRM expectations for reviewing and testing model assumptions,
    • Addresses funds transfer pricing (FTP), and
    • Adds guidelines for advanced approaches to interest rate risk management consistent with the Pillar 2 supervisory approach.
  4. Capital and Liquidity Treatment for Money Market Liquidity Facility and Paycheck Protection Program: Final Rule (OCC 2020-96) as of November 3, 2020. This rule:
    • Permits a zero-percent risk weight for PPP loans,
    • Eliminates the regulatory capital impact and liquidity rule provisions for participating in the PPP and Money Market Liquidity Facility.
Artificial Intelligence (AI) / Machine Learning (ML)

The only recent regulatory update pertaining to AI/machine learning has been a request for comment related to usage, controls, governance, and risk. At present, there is no formal guidance specifically related to AI or ML models. The OCC’s Semiannual Risk Perspective includes just a couple of sentences stating that users of ML models should be able to defend and explain their risks. The Fed’s feedback has been similarly broad. Movement seems afoot to issue more detailed guidance on how ML models should be governed and monitored, but this is likely to be limited to specific applications and not to the ML models themselves.

The Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence, Including Machine Learning (OCC 2021-17) as of March 31, 2021, seeks respondents’ views on appropriate governance, risk management, and controls over artificial intelligence, and any challenges in developing, adopting, and managing artificial intelligence approaches.

Capital Risk 

We focus on the two items of capital risk guidance issued in the past year. The first applies to community banks with total assets of less than $10 billion as of December 31, 2019.

  1. Temporary Asset Thresholds: Interim Final Rule (OCC 2020-107) as of December 2, 2020:
    • The rule allows these institutions to use asset data as of December 31, 2019, to determine the applicability of various regulatory asset thresholds during calendar years 2020 and 2021.
  2. Regulatory Capital Rule: Eligible Retained Income: Final Rule (OCC 2020-87) as of October 8, 2020:

The final rule revises the definition of eligible retained income to the greater of:

    • Net income for the four preceding calendar quarters, net of any distributions and associated tax effects not already reflected in net income, and
    • The average of a Bank’s net income over the preceding four quarters.
Fair Lending 
  1. Community Reinvestment Act: Key Provisions of the June 2020 CRA Rule and Frequently Asked Questions (OCC 2020-99) as of November 9, 2020:

The rule establishes new criteria for designating bank assessment areas, including:

    • Facility-based assessment areas based on the location of a bank’s main office and branches and, at a bank’s discretion, on the location of the bank’s deposit-taking automated teller machines, and
    • Deposit-based assessment areas, which apply to a bank with 50 percent or more of its retail domestic deposits outside its facility-based assessment areas.
  2. Community Reinvestment Act: Implementation of the June 2020 Final Rule (OCC 2021-24) as of May 18, 2021. The OCC has determined that it will reconsider its June 2020 rule. While this reconsideration is ongoing, the OCC will not implement or rely on the evaluation criteria in the June 2020 rule pertaining to:
    • Quantification of qualifying activities
    • Assessment areas
    • General performance standards
    • Data collection
    • Recordkeeping
    • Reporting
Market Risk 
  1. Libor Transition: Self-Assessment Tool for Banks (OCC 2021-7) as of February 10, 2021. The self-assessment tool can be used to assess the following:
    • Five primary topics: Assets and contracts; LIBOR risk exposure; Fallback language; Consumer impact; Third-party service provider
    • The appropriateness of a bank’s Libor transition plan
    • Bank management’s execution of the bank’s transition plan
    • Related oversight and reporting
  2. Standardized Approach for Counterparty Credit Risk; Correction: Final Rule (OCC 2020-82) as of September 21, 2020. The issuance corrects errors in the standardized approach for counterparty credit risk (SA-CCR) rule:
    • Clarifying that a Bank that uses SA-CCR will be permitted to exclude the future exposure of all credit derivatives
    • Revising the number of outstanding margin disputes
    • Correcting the calculation of the hypothetical capital requirement of a qualifying central counterparty (QCCP)
  3. Agencies Finalize Amendments to Swap Margin Rule (News Release 2020-83) as of June 25, 2020:
    • Swap entities that are part of the same banking organization will no longer be required to hold a specific amount of initial margin for uncleared swaps with each other, known as inter-affiliate swaps.
    • Final rule allows swap entities to amend legacy swaps to replace the reference to LIBOR or other reference rates that are expected to end without triggering margin exchange requirements.
Operations Risk
  1. Corporate and Risk Governance: Revised and New Publications in the Director’s Toolkit (OCC 2020-97) as of November 5, 2020:
    • Focuses on key areas of planning, operations, and risk management,
    • Outlines directors’ responsibilities as well as management’s role,
    • Explains basic concepts and standards for safe and sound operation of banks, and
    • Delineates laws and regulations that apply to banks.
  2. Activities and Operations of National Banks and Federal Savings Associations: Final Rule (OCC 2020-111) as of December 23, 2020:
    • Defines permissible derivatives activities,
    • Allows engagement in certain tax equity finance transactions,
    • Expands the ability to choose corporate governance provisions under state law,
    • Includes OCC interpretations relating to capital stock issuances and repurchases, and
    • Applies rules relating to finder activities, indemnification, equity kickers, postal services, independent undertakings, and hours and closings to FSAs.
  3. Operational Risk: Sound Practices to Strengthen Operational Resilience (OCC 2020-94) as of October 10, 2020:
    • Outlines standards for operational resilience set forth in the agencies’ rules and guidance,
    • Promotes a principles-based approach for effective governance, robust scenario analysis, secure and resilient information systems, and thorough surveillance and reporting,
    • Introduces sound practices for managing cyber risk.
Contact us to learn more.


Three Principles for Effectively Monitoring Machine Learning Models

The recent proliferation of machine learning models in banking and structured finance is becoming impossible to ignore. Rarely does a week pass without a client approaching us to discuss the development or validation (or both) of a model that leverages at least one machine learning technique. RiskSpan’s own model development team has also been swept up in the trend – deep learning techniques have featured prominently in developing the past several versions of our in-house residential mortgage prepayment model.

Machine learning’s rise in popularity is attributable to multiple underlying trends: 

  1. Quantity and complexity of data. Nowadays, firms store every conceivable type of data relating to their activities and clients – and frequently supplement this with data from any number of third-party providers. The increasing dimensionality of data available to modelers makes traditional statistical variable selection more difficult. The tradeoff between a model’s complexity and the rules adopted in variable selection can be hard to balance. An advantage of ML approaches is that they can handle multi-dimensional data more efficiently. ML frameworks are good at identifying trends and patterns – without the need for human intervention. 
  2. Better learning algorithms. Because ML algorithms learn to make more accurate projections as new data is introduced to the framework (assuming there is no bias in the new data), model features based on newly introduced data are more likely to resemble features created using the model training data. 
  3. Cheap computation costs. New techniques, such as XGBoost, are designed to be memory efficient and introduce innovative system designs that help reduce computation costs. 
  4. Proliferation breeds proliferation. As the number of machine learning packages in various programming tools increases, it facilitates implementation and promotes further ML model development. 

Addressing Monitoring Challenges 

Notwithstanding these advances, machine learning models are by no means easy to build and maintain. Feature engineering and parameter tuning procedures are time consuming. And once an ML model has been put into production, monitoring activities must be implemented to detect anomalies and make sure the model works as expected (just like with any other model). According to the OCC 2011-12 supervisory guidance on model risk management, ongoing monitoring is essential to evaluate whether changes in products, exposures, activities, clients, or market conditions necessitate adjustment, redevelopment, or replacement of the model and to verify that any extension of the model beyond its original scope is valid. While monitoring ML models resembles monitoring conventional statistical models in many respects, the following activities take on particular importance with ML model monitoring: 

  1. Review the underlying business problem. Defining the business problem is the first step in developing any ML model. This should be carefully articulated in the list of business requirements that the ML model is supposed to follow. Any shift in the underlying business problem will likely create drift in the training data and, as a result, new data coming to the model may no longer be relevant to the original business problem. The ML model degrades, and a new round of feature engineering and parameter tuning needs to be considered to remediate the impact. This review should be conducted whenever the underlying problem or requirements change. 
  2. Review of data stability (model input). In the real world, even if the underlying business problem is unchanged, there might be shifts in the predicting data caused by changing borrower behaviors, changes in product offerings, or other unexpected market drift. Any of these things could result in the ML model receiving data that it has not been trained on. Model developers should measure the data population stability between the training dataset and the predicting dataset. If there is evidence that the data have shifted, model recalibration should be considered. This assessment should be done when the model user identifies a significant shift in the model’s performance or when a new testing dataset is introduced to the ML model. Where data segmentation has been used in the model development process, this assessment should be performed at the individual segment level as well. 
  3. Review of performance metrics (model output). Performance metrics quantify how well an ML model is trained to explain the data. Performance metrics should fit the model’s type. For instance, the developer of a binary classification model could use a Kolmogorov-Smirnov (KS) table, a receiver operating characteristic (ROC) curve, and the area under the curve (AUC) to measure the model’s overall rank-ordering ability and its performance at different cutoffs. Any shift (upward or downward) in performance metrics between a new dataset and the training dataset should raise a flag in monitoring activity. All material shifts need to be reviewed by the model developer to determine their cause. Such assessments should be conducted on an annual basis or whenever new data become available. A brief illustrative sketch of these stability and performance checks follows this list. 
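
The sketch below illustrates the stability and performance checks described in items 2 and 3. It is a minimal example rather than a prescribed implementation: the population stability index (PSI) is one common way to quantify the shift between training and predicting data (the text above does not mandate a specific statistic), and the data, bin count, and flagging thresholds are all illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def population_stability_index(expected, actual, n_bins=10):
    """PSI between the training-era distribution and the current distribution."""
    cuts = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf                      # catch values outside the training range
    e_pct = np.histogram(expected, bins=cuts)[0] / len(expected)
    a_pct = np.histogram(actual, bins=cuts)[0] / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Synthetic stand-ins for a training sample and a new scoring sample
rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 10_000)
new_feature = rng.normal(0.2, 1.1, 5_000)
psi = population_stability_index(train_feature, new_feature)
print(f"PSI = {psi:.3f}")                                    # e.g., flag for recalibration if PSI > 0.25

# Performance-shift check: compare AUC on training data against newly labeled data
y_train, p_train = rng.integers(0, 2, 10_000), rng.random(10_000)
y_new, p_new = rng.integers(0, 2, 5_000), rng.random(5_000)
auc_shift = roc_auc_score(y_new, p_new) - roc_auc_score(y_train, p_train)
print(f"AUC shift vs. training = {auc_shift:+.3f}")          # material shifts go back to the developer
```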

Like all models, ML models are only as good as the data they are fed. But ML models are particularly susceptible to data shifts because their processing components are less transparent. Taking these steps to ensure they are learning from valid and consistent data is essential to managing a functional inventory of ML models. 


Validating Structured Finance Models

Introduction: Structured Finance Models

Models used to govern the valuation and risk management of structured finance instruments take a variety of forms. Unlike conventional equity investments, structured finance instruments are often customized to meet the unique needs of specific investors. They are tailored to mitigate various types of risks, including interest rate risk, credit risk, market risk and counterparty risks. Therefore, structured finance instruments may be derived from a compilation of loans, stocks, indices, or derivatives. Mortgage-backed securities (MBS) are the most ubiquitous example of this, but structured finance instruments also include:

  • Derivatives
  • Collateralized Mortgage Obligations (CMO)
  • Collateralized Bond Obligations (CBO)
  • Collateralized Debt Obligations (CDO)
  • Credit Default Swaps (CDS)
  • Hybrid Securities

Pricing and measuring the risk of these instruments is typically carried out using an integrated web of models. One set of models might be used to derive a price based on discounted cash flows. Once cash flows and corresponding discounting factors have been established, other models might be used to compute risk metrics (duration and convexity) and financial metrics (NII, etc.).

These models can be grouped into three major categories:

  • Curve Builder and Rate Models: Market rates are fundamental to valuing most structured finance instruments. Curve builders calibrate market curves (treasury yield curve, Libor/Swap Rate curve, or SOFR curve) using the market prices of the underlying bond, future, or swap. Interest rate models take the market curve as an input and generate simulated rate paths as the future evolution of the selected type of the market curve.

  • Projection Models: Using the market curve (or the single simulated rate path), a current coupon projection model projects forward 30-year and 15-year fixed mortgage rates. Macroeconomic models project future home values using a housing-price index (HPI). Prepayment models estimate how quickly loans are likely to pay down based on mortgage rate projections and other macroeconomic projections. And roll-rate models forecast the probability of a loan’s transitioning from one current/default state to another.

  • Cash Flow Models and Risk Metrics: Cash flow models combine the deal information of the underlying structured instrument with related rate projections to derive an interest-rate-path-dependent cash flow.

The following illustrates how the standard discounted cash flow approach works for a mortgage-related structured finance instrument:

[Figure: the standard discounted cash flow approach for a mortgage-related structured finance instrument, combining cash flow models and risk metrics]
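
As a complement to the figure, the following is a minimal sketch of the path-by-path discounted cash flow calculation it depicts: cash flows are discounted along each simulated rate path and the resulting path prices are averaged. The cash flows, rate paths, and spread are synthetic placeholders, not outputs of any particular model.

```python
import numpy as np

def dcf_price(cashflows, short_rate_paths, dt=1/12, spread=0.0):
    """cashflows and short_rate_paths: arrays of shape (n_paths, n_periods)."""
    # Path-wise discount factors from simulated short rates plus any constant spread
    discount_factors = np.exp(-np.cumsum((short_rate_paths + spread) * dt, axis=1))
    path_prices = np.sum(cashflows * discount_factors, axis=1)
    return path_prices.mean()                                # average across simulated paths

# Toy example: 500 paths, 360 monthly periods, level cash flows summing to 100
rng = np.random.default_rng(1)
rate_paths = 0.03 + 0.002 * rng.standard_normal((500, 360))
cashflows = np.full((500, 360), 100 / 360)
print(round(dcf_price(cashflows, rate_paths, spread=0.005), 4))
```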

Most well-known analytic solutions apply this discounted cash flow approach, or some adaptation of it, in analyzing structured finance instruments.

Derivatives introduce an additional layer of complexity that often calls for approaches and models beyond the standard discounted cash flow approach. Swaptions and interest rate caps and floors, for example, require an analytical approach, such as the Black model. For bond option pricing, lattice models or tree structures are commonly used. The specifics of these models are beyond the scope of this post, but many of the general model validation principles applied to discounted cash flow models are equally applicable to derivative pricing models.

Validating Curve Builder and Rate Models

Curve Builders

Let’s begin with the example of a curve builder designed for calibrating the on-the-run U.S. Treasury yield curve. To do this, the model takes a list of eligible on-the-run Treasury bonds as the key model inputs, which serve as the fitting knots[1]. A proper interpolator that connects all the fitting knots is then used to smooth the curve and generate monthly or quarterly rates for all maturities up to 30 years. If abnormal increments or decrements are observed in the calibrated yield curve, adjustments are made to alleviate deviations between the fitting knots until the fitted yield curve is stable and smooth. A model validation report should include a thorough conceptual review of how the model carries out this task.

Based on the market-traded securities selected, the curve builder is able to generate an on-the-run or off-the-run Treasury yield curve, a LIBOR swap curve, a SOFR curve, or whatever else is needed. The curve builder serves as the basis for measuring nominal and option‐adjusted spreads for many types of securities and for applying spreads whenever a spread is used to determine model price.

A curve builder’s inputs are therefore a set of market-traded securities. To validate the inputs, we take the market price of the fitting knots for three month-end trading dates and compare them against the market price inputs used in the curve builder. We then calibrate the par rate and spot rate based on the retrieved market price and compare it with the fitted curve generated from the curve builder.

To validate the curve builder’s model structure and development, we check the internal transition between the model-provided par rate, spot rate, and forward rate on three month-end trading dates. Different compounding frequencies can significantly impact these transitions. We also review the model’s assumptions, limitations, and governance activities established by the model owner.
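
A hedged sketch of that internal-transition check follows: starting from spot (zero) rates, re-derive discount factors, one-year forward rates, and par rates, and compare them with the model-provided values. Annual compounding, annual node spacing, and the "model output" values are simplifying assumptions for illustration.

```python
import numpy as np

def transitions_from_spots(spot_rates):
    """spot_rates: zero rates for maturities 1..N years, annual compounding."""
    t = np.arange(1, len(spot_rates) + 1)
    df = (1 + spot_rates) ** -t                              # discount factors
    fwd = df[:-1] / df[1:] - 1                               # one-year forward rates between nodes
    par = (1 - df) / np.cumsum(df)                           # par coupon for each maturity
    return df, fwd, par

# Usage: compare re-derived par rates against the curve builder's reported values
spots = np.array([0.010, 0.012, 0.015, 0.018, 0.020])
df, fwd, par = transitions_from_spots(spots)
model_par = np.array([0.0100, 0.0120, 0.0149, 0.0179, 0.0198])   # hypothetical model output
print(np.max(np.abs(par - model_par)))                           # flag if beyond an agreed tolerance
```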

Validating model outputs usually begins by benchmarking the outputs against a similar curve provided by Bloomberg or another reputable challenger system. Next, we perform a sensitivity analysis to check the locality and stability of the forward curve by shocking the input fitting knots and analyzing its impact on the model-provided forward curve. For large shocks (i.e., 300 bp or more) we test boundary conditions, paying particular attention to the forward curve. Normally, we expect to see forwards not becoming negative, as this would breach no-arbitrage conditions.

For the scenario analysis, we test the performance of the curve builder during periods of stress and other significant events, including bond market movement dates, Federal Open Market Committee (FOMC) dates and treasury auction dates. The selected dates cover significant events for Treasury/bond markets and provide meaningful analysis for the validation.

Interest Rate Models

An interest rate model is a mathematical model that is mainly used to describe the future evolution of interest rates. Its principal output is a simulated term structure, which is the fundamental component of a Monte Carlo simulation. Interest rate models typically fall into one of two broad categories:

  • Short-rate models: A short-rate model describes the future evolution of the short rate (the instantaneous spot rate).
  • LIBOR Market Model (LMM): An LMM describes the future evolution of forward rates. Unlike the instantaneous spot rate, forward rates can be observed directly in the market, as can their implied volatilities.

This blog post provides additional commentary around interest rate model validations.

Conceptual soundness and model theory reviews are conducted based on the specific interest rate model’s dynamics. The model inputs, regardless of the model structure selected, include the selected underlying curve and its corresponding volatility surface as of the testing date. We normally benchmark model inputs against market data from a challenger system and discuss any observed differences.

We then examine the model’s output, which is the set of stochastic paths comprising a variety of required spot rates or forward LIBOR and swap rates, as well as the discount factors consistent with the simulated rates. To check the non-arbitrage condition in the simulated paths, we compare the mean and median path with the underlying curve and comment on the differences. We measure the randomness from the simulated paths and compare it against the interest rate model’s volatility parameter inputs.
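
The following minimal sketch illustrates one way to implement the non-arbitrage comparison described above: pathwise discount factors, averaged across simulated paths, should reproduce the discount factors implied by the underlying input curve within Monte Carlo error. The simulated paths and flat input curve here are synthetic assumptions.

```python
import numpy as np

def martingale_check(short_rate_paths, input_discount_factors, dt=1/12):
    """E[exp(-sum(r*dt))] across paths should match the calibration curve tenor by tenor."""
    pathwise_df = np.exp(-np.cumsum(short_rate_paths * dt, axis=1))
    simulated_df = pathwise_df.mean(axis=0)
    return simulated_df - input_discount_factors             # should be near zero within Monte Carlo error

# Synthetic paths around a flat 3% curve, 120 monthly steps
rng = np.random.default_rng(7)
paths = 0.03 + 0.002 * rng.standard_normal((10_000, 120))
input_df = np.exp(-0.03 * np.arange(1, 121) / 12)
errors = martingale_check(paths, input_df)
print(f"max abs error: {np.max(np.abs(errors)):.5f}")
```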

Based on the simulated paths, an LMM also provides calibrated ATM swaption volatility. We compare the LMM’s implied ATM swaption volatility with its inputs and the market rates from the challenger system as a review of the model calibration. For the LMM, we also compare the model against history on the correlation of forward swap rates and the serial correlation of a forward LIBOR rate. An LMM permits a choice of structures that generate realistic swap rates whose correlations are consistent with historical values.

Validating Projection Models

Projection models come in various shapes and sizes.

“Current Coupon” Models

Current coupon models generate mortgage rate projections based on a market curve or a single simulated interest rate path. These projections are a key driver of prepayment projection models and mortgage valuation models. There are a number of model structures that can explain the current coupon projection, ranging from the simple constant-spread method to the recursive forward-simulation method. Since it has been traditionally assumed that the ten-year part of the interest rate curve drives mortgage rates, a common assumption involves holding the spread between the current coupon and the ten-year swap or Treasury rate constant. However, this simple and intuitive approach has a basic problem: primary-market mortgage rates nowadays depend on secondary-market MBS current-coupon yields. Hence, the current coupon depends not just on the ten-year part of the curve, but also on other factors that affect MBS current-coupon yields. Such factors include:

  • The shape of the yield curve
  • Tenors on the yield curve
  • Volatilities

A conceptual review of current coupon models includes a discussion around the selected method and comparisons with alternative approaches. To validate model inputs, we focus on the data transition procedures between the curve builder and current coupon model or between the interest rate model and the current coupon model. To validate model outputs, we perform a benchmarking analysis against projections from a challenger approach. We also perform back-testing to measure the differences between model projections and actual data over a testing period, normally 12 months. We use mean absolute error (MAE) to measure the back-testing results. If the MAE is less than 0.5%, we conclude that the model projection falls inside the acceptable range. For the sensitivity analysis, we examine the movements of the current coupon projection under various shock scenarios (including key-rate shocks and parallel shifting) on the rate inputs.
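
A small illustrative back-test in this spirit is sketched below: compute the mean absolute error between projected and realized current coupon rates over a 12-month window and compare it with the 0.5% threshold mentioned above. The rate series are made-up placeholders.

```python
import numpy as np

projected = np.array([3.10, 3.15, 3.20, 3.18, 3.25, 3.30,
                      3.28, 3.35, 3.40, 3.38, 3.45, 3.50])   # model-projected current coupon (%)
realized  = np.array([3.05, 3.20, 3.22, 3.25, 3.21, 3.35,
                      3.30, 3.35, 3.52, 3.41, 3.48, 3.60])   # observed current coupon (%)

mae = np.mean(np.abs(projected - realized))
verdict = "inside" if mae < 0.5 else "outside"
print(f"MAE = {mae:.3f}% -> {verdict} the acceptable range")
```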

Prepayment Models

Prepayment models are behavioral models that help investors understand and forecast a loan portfolio’s likely prepayment behavior and identify the corresponding major drivers.

The prepayment model’s modeling structure is usually econometric in nature. It assumes that the same set of drivers that affected prepayment and default behavior in the past will drive them in the future under all scenarios, even though the period in the past that is most applicable may vary by scenario in the future.

Major drivers are identified and modeled separately as a function of collateral characteristics and macroeconomic variables. Each type of prepayment effect is then scaled based on the past prepayment and default experience of similar collateral. The assumption is that if the resulting model can explain and reasonably fit historical prepayments, then it may be a good model for projecting the future, subject to a careful review of its forward-looking projections.

Prepayment effects normally include housing turnover, refinancing and burnout[2]. Each prepayment effect is modeled separately and then combined together. A good conceptual review of prepayment modeling methodology will discuss the mathematical fundamentals of the model, including an assessment of the development procedure for each prepayment effect and comparisons with alternative statistical approaches.

Taking for example a model that projects prepayment rates on tradable Agency mortgage collateral (or whole-loan collateral comparable to Agencies) from settlement date to maturity, development data includes the loan-level or pool-level transition data originally from Fannie Mae, Freddie Mac, Ginnie Mae and third-party servicers. Data obtained from third parties is marked as raw data. We review the data processing procedures used to get from the raw data to the development data. These procedures include reviewing data characteristics, data cleaning, data preparation and data transformation processes.

After the development data preparation, variable selection and loan segmentation become key to explaining each prepayment effect. Model developers seek to select a set of collateral attributes with clear and constant evidence of impact to the given prepayment effect. We validate the loan segmentation process by checking whether the historical prepayment rate from different loan segments demonstrates level differences based on the set of collateral attributes selected.

A prepayment model’s implementation process is normally a black box. This increases the importance of the model output review, which includes performance testing, stress testing, sensitivity analysis, benchmarking and back-testing. An appropriate set of validation tests will capture:

  • Sensitivity to collateral and borrower characteristics (loan-to-value, loan size, etc.)
  • Sensitivity to significant assumptions
  • Benchmarking of prepayment projections
  • Performance during various historical events
  • Back-testing
  • Scenario stability
  • Model projections compared with projections from dealers
  • Performance by different types of mortgages, including CMOs and TBAs

A prepayment model sensitivity analysis might take a TBA security and gradually change the value of input variables, one at a time, to isolate the impact of each variable. This procedure provides an empirical understanding of how the model performs with respect to parameter changes. If the prepayment model has customized tuning functionality, we can apply the sensitivity analysis independently to each prepayment effect by setting the other tuning parameters at zero.

For the benchmarking analysis, we compare the model’s cohort-level, short-term conditional prepayment rate (CPR) projection against other dealer publications, including Barclays and J.P. Morgan (as applicable and available). We also compare the monthly CPR projections against those of the challenger model, such as Bloomberg Agency Model (BAM), for the full stack Agency TBA and discuss the difference. Discrepancies identified during the course of a benchmarking analysis may trigger further investigation into the model’s development. However, it doesn’t necessarily mean that the underlying model is in error since the challenger model itself is simply an alternative projection. Differences might be caused by any number of factors, including different development data or modeling methodologies.

Prepayment model back-testing involves selecting a set of market-traded MBS and a set of hypothetical loan cohorts and comparing the actual monthly CPR against the projected CPR over a prescribed time window (normally one year). Thresholds should be established prior to testing and differences that exceed these thresholds should be investigated and discussed in the model validation report.
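
The sketch below illustrates the mechanics of such a back-test under simplifying assumptions: observed single monthly mortality (SMM) is annualized to CPR and compared with projected CPR against a pre-set tolerance. The data and the 2-CPR tolerance are illustrative, not prescribed values.

```python
import numpy as np

def smm_to_cpr(smm):
    """Annualize a monthly prepayment rate: CPR = 1 - (1 - SMM)^12."""
    return 1 - (1 - smm) ** 12

observed_smm  = np.array([0.006, 0.007, 0.009, 0.011, 0.010, 0.012])   # observed monthly SMM
projected_cpr = np.array([0.070, 0.085, 0.100, 0.120, 0.115, 0.130])   # model-projected CPR
actual_cpr = smm_to_cpr(observed_smm)
breaches = np.abs(actual_cpr - projected_cpr) > 0.02                   # e.g., a 2-CPR tolerance
print(np.round(actual_cpr, 3), breaches)                               # investigate any True entries
```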

Validating Cash Flow Models and Risk Metrics

A cash flow model combines the simulated paths from interest rate, prepayment, default, and delinquency models to compute projected cash flows associated with monthly principal and interest payments.

Cash flow model inputs include the underlying instrument’s characteristics (e.g., outstanding balance, coupon rate, maturity date, day count convention, etc.) and the projected vectors associated with the CPR, default rate, delinquency, and severity (if applicable). A conceptual review of a cash flow model involves a verification of the data loading procedure to ensure that the instrument’s characteristics are captured correctly within the model. It should also review the underlying mathematical formulas to verify that the projected vectors are correctly applied.
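
For intuition, here is a minimal sketch of the kind of cash flow engine being described: scheduled amortization on a level-pay mortgage pool plus prepayments drawn from a CPR vector. It deliberately ignores defaults, delinquencies, and servicing fees, and all inputs are toy values.

```python
import numpy as np

def pool_cash_flows(balance, annual_rate, term_months, cpr_vector):
    """Monthly cash flows for a level-pay pool with prepayments from a CPR vector."""
    r = annual_rate / 12
    smm = 1 - (1 - np.asarray(cpr_vector)) ** (1 / 12)         # monthly prepayment rate from CPR
    flows = []
    for m in range(term_months):
        months_left = term_months - m
        payment = balance * r / (1 - (1 + r) ** -months_left)  # level payment on remaining term
        interest = balance * r
        scheduled_principal = payment - interest
        prepayment = (balance - scheduled_principal) * smm[m]
        flows.append(interest + scheduled_principal + prepayment)
        balance -= scheduled_principal + prepayment
    return np.array(flows), balance

cf, ending_balance = pool_cash_flows(1_000_000, 0.04, 360, np.full(360, 0.08))
print(round(cf[:3].sum(), 2), round(ending_balance, 6))        # first three months; balance amortizes to ~0
```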

Model outputs can be validated via sensitivity analysis. This often involves shocking each input variable, one at a time, and examining its resulting impacts on the monthly remaining balance. Benchmarking can be accomplished by developing a challenger model and comparing the resulting cash flows.

Combining the outputs of all the sub-models, a price of the underlying structured finance instrument can be generated (and tested) along with its related risk metrics (duration, convexity, option adjusted spread, etc.).

Using MBS as an example, an option-adjusted spread (OAS) analysis is commonly used. Theoretically, OAS is calibrated by matching the model price with the market price (a brief calibration sketch follows the list below). The OAS can be viewed as a constant spread that is applied to the discounting curve when computing the model price. Because it deals with the differences between model price and market price, OAS is particularly useful in MBS valuation, especially for measuring prepayment risk and market risk. A comprehensive analysis reviews the following:

  • Impact of interest rate shocks on a TBA stack in terms of price, OAS, effective duration, and effective convexity.
  • Impact of projected prepayment rate shock on a TBA stack in terms of price, OAS, effective duration, and effective convexity.
  • Impact of projected prepayment rate shock on the option cost (measured in basis points as the zero-volatility spread minus the OAS).
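
The calibration sketch referenced above is shown here: solve for the constant spread that, when added to the discounting rates, reconciles the path-averaged model price with the observed market price. The cash flows, rate paths, and market price are synthetic, and the root-finding bracket is an arbitrary assumption.

```python
import numpy as np
from scipy.optimize import brentq

def model_price(oas, cashflows, rate_paths, dt=1/12):
    """Average, across paths, of cash flows discounted at the simulated rate plus OAS."""
    df = np.exp(-np.cumsum((rate_paths + oas) * dt, axis=1))
    return np.sum(cashflows * df, axis=1).mean()

def solve_oas(market_price, cashflows, rate_paths):
    # Root of model_price(oas) - market_price within a wide, assumed spread bracket
    return brentq(lambda s: model_price(s, cashflows, rate_paths) - market_price, -0.05, 0.05)

# Synthetic cash flows and rate paths; the "market" price is a toy value
rng = np.random.default_rng(3)
rate_paths = 0.03 + 0.001 * rng.standard_normal((200, 360))
cashflows = np.full((200, 360), 100 / 360)
oas = solve_oas(64.00, cashflows, rate_paths)
print(f"OAS = {oas * 1e4:.1f} bp")
```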

Beyond OAS, the validation should include independent benchmarking of the model price. Given a sample portfolio that contains the deal information for a list of structured finance instruments, validators derive a model price using the same market rate as the subject model as a basis for comparison. Analyzing the shock profiles enables validators to conclude whether the given discounting cash flow method is generating satisfactory model performance.

Conclusion

Structured finance model validations are complex because they invariably involve testing a complicated array of models, sub-models, and related models. The list of potential sub-models (across all three categories discussed above) significantly exceeds the examples cited.

Validators must design validation tasks specific to each model type in order to adequately assess the risks posed by potential shortcomings associated with model inputs, structure, theory, development, outputs and governance practices.

When it comes to models governing structured finance instruments, validators must identify any model risk not only at the independent sub-model level but at the broader system level, for which the final outputs include model price and risk metrics. This requires a disciplined and integrated approach.


 

 

[1] Knots represent a set of predefined points on the curve

[2] Burnout effect describes highly seasoned mortgage pools in which loans likely to repay have already done so, resulting in relatively slow prepayment speeds despite falling interest rates.

 


Validating Interest Rate Models

Many model validations—particularly validations of market risk models, ALM models, and mortgage servicing rights valuation models—require validators to evaluate an array of sub-models. These almost always include at least one interest rate model, which is designed to predict the movement of interest rates.

Validating interest rate models (i.e., short-rate models) can be challenging because many different ways of modeling how interest rates change over time (“interest rate dynamics”) have been created over the years. Each approach has advantages and shortcomings, and it is critical to understand the limitations and advantages of each in order to determine whether the short-rate model being used is appropriate to the task. This can be accomplished via the basic tenets of model validation—evaluation of conceptual soundness, replication, benchmarking, and outcomes analysis. Applying these concepts to interest rate models, however, poses some unique complications.

A Brief Introduction to the Short-Rate Model

In general, a short-rate model solves the short-rate evolution as a stochastic differential equation. Short-rate models can be categorized based on their interest rate dynamics.

A one-factor short-rate model has only one diffusion term. The biggest limitation of one-factor models is that the correlation between two continuously compounded spot rates at two different dates is equal to one, which means a shock at a certain maturity is transmitted uniformly across the curve, which is not realistic in the market.

A multi-factor short-rate model, as its name implies, contains more than one diffusion term. Unlike one-factor models, multi-factor models consider the correlation between forward rates, which makes a multi-factor model more realistic and consistent with actual multi-dimension yield curve movements.

Validating Conceptual Soundness

Validating an interest rate model’s conceptual soundness includes reviewing its data inputs, mean-reversion feature, short-rate distribution, and model selection. Reviewing these items sufficiently requires a validator to possess a basic knowledge of stochastic calculus and stochastic differential equations.

Data Inputs

The fundamental data inputs to the interest rate model could be the zero-coupon curve (also known as the term structure of interest rates) or historical spot rates. Let’s take the Hull-White (H-W) one-factor model (H-W: dr_t = k(θ – r_t)dt + σ_t dW_t) as an example. H-W is an affine term structure model, and its analytical tractability is one of its most favorable properties. Analytical tractability is a valuable feature to model validators because it enables calculations to be replicated. We can calibrate the level parameter (θ) and the rate parameter (k) from the input curve. Commonly, the volatility parameter (σ_t) can be calibrated from historical data or swaption volatilities. In addition, analytical formulas are also available for zero-coupon bonds, caps/floors, and European swaptions.
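
For illustration, the following is a minimal Euler-discretization sketch of the dynamics quoted above. For simplicity it holds the level parameter constant (which reduces H-W to the Vasicek special case rather than fitting an initial curve), and the parameter values are purely illustrative.

```python
import numpy as np

def simulate_short_rate(r0, k, theta, sigma, n_paths=10_000, n_steps=120, dt=1/12, seed=42):
    """Euler scheme for dr = k*(theta - r)*dt + sigma*dW with constant parameters."""
    rng = np.random.default_rng(seed)
    rates = np.empty((n_paths, n_steps + 1))
    rates[:, 0] = r0
    for t in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        rates[:, t + 1] = rates[:, t] + k * (theta - rates[:, t]) * dt + sigma * dw
    return rates

paths = simulate_short_rate(r0=0.02, k=0.10, theta=0.03, sigma=0.01)
# Terminal rates drift toward theta; their dispersion is governed by sigma and k
print(round(paths[:, -1].mean(), 4), round(paths[:, -1].std(), 4))
```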

Mean Reversion

Given the nature of mean reversion, both the level parameter and the rate parameter should be positive. Therefore, an appropriate calibration method should be selected accordingly. Note that the common approaches for the one-factor model—least squares estimation and maximum likelihood estimation—can generate negative results, which are inconsistent with the mean-reversion feature. The model validator should compare the calibration results from different methods to see which best addresses the model assumption.
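
A hedged sketch of one such calibration follows: an AR(1) least-squares fit of r_{t+1} on r_t, mapped back to a reversion speed, level, and volatility. A fitted slope of one or more, or a negative implied level, signals parameters inconsistent with mean reversion and should prompt trying an alternative method. The rate history is synthetic.

```python
import numpy as np

def calibrate_ou_least_squares(rates, dt=1/12):
    """AR(1) fit of r_{t+1} on r_t, mapped to mean-reversion speed, level, and volatility."""
    x, y = rates[:-1], rates[1:]
    slope, intercept = np.polyfit(x, y, 1)
    k = (1 - slope) / dt                        # discrete-time approximation of the reversion speed
    theta = intercept / (1 - slope)             # implied long-run level
    residuals = y - (intercept + slope * x)
    sigma = residuals.std(ddof=2) / np.sqrt(dt)
    return k, theta, sigma

# Synthetic monthly rate history standing in for real data
rng = np.random.default_rng(11)
history = 0.02 + 0.005 * np.sin(np.linspace(0, 6, 240)) + 0.001 * rng.standard_normal(240)
k, theta, sigma = calibrate_ou_least_squares(history)
print(f"k = {k:.3f}, theta = {theta:.4f}, sigma = {sigma:.4f}")   # check that k and theta are positive
```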

Short-Rate Distribution and Model Selection

The distribution of the short rate is another feature to consider when validating the short-rate model assumptions. The original short-rate models—Vasicek and H-W, for example—presume the short rate to be normally distributed, allowing for the possibility of negative rates. Because negative rates were not expected to appear in the simulated term structures, the Cox-Ingersoll-Ross model (CIR, non-central chi-squared distributed) and the Black-Karasinski model (BK, lognormally distributed) were invented to preclude the existence of negative rates. Compared to the normally distributed models, the non-normally distributed models forfeit a certain degree of analytical tractability, which makes validating them less straightforward. In recent years, as negative rates became a reality in the market, the shifted lognormal model was introduced. This model depends on the shift size, which determines a lower limit in the simulation process. Note that there is no analytical formula for the shift size. Ideally, the shift size should be equal to the absolute value of the minimum negative rate in the historical data. However, not every country has experienced negative interest rates, so the shift size is generally determined by the user’s experience by means of fundamental analysis.

The model validator should develop a method to quantify the risk from any analytical judgement. Because the interest rate model often serves as a sub-model in a larger module, the model selection should also be commensurate with the module’s ultimate objectives.

Replication

Effective model validation frequently relies on a replication exercise to determine whether a model follows the building procedures stated in its documentation. In general, the model documentation provides the estimation method and assorted data inputs. The model validator could consider recalibrating the parameters from the provided interest rate curve and volatility structures. This process helps the model validator better understand the model, its limitations, and potential problems.

Ongoing Monitoring & Benchmarking

Interest rate models are generally used to simulate term structures in order to price caps/floors and swaptions and measure the hedge cost. Let’s again take the H-W model as an example. Two standard simulation methods are available for the H-W model: 1) Monte Carlo simulation and 2) trinomial lattice method. The model validator could use these two methods to perform benchmarking analysis against one another.

The Monte Carlo simulation works ideally for path-dependent interest rate derivatives. The Monte Carlo method is mathematically easy to understand and convenient to implement. At each time step, a random variable is simulated and added into the interest rate dynamics. A Monte Carlo simulation is usually considered for products that can only be exercised at maturity. Since the Monte Carlo method simulates future rates, we cannot be sure at which time the rate or the value of an option becomes optimal. Hence, a standard Monte Carlo approach cannot be used for derivatives with early-exercise capability.

On the other hand, we can price early-exercise products by means of the trinomial lattice method. The trinomial lattice method constructs a trinomial tree under the risk-neutral measure, in which the value at each node can be computed. Given the tree’s backward-induction feature, at each node we can compare the intrinsic value (current value) with the backward-inducted value (continuation value) to determine whether to exercise at that node. The comparison step keeps running backward until it reaches the initial node and returns the final estimated value. The trinomial lattice method therefore works ideally for non-path-dependent interest rate derivatives. Nevertheless, the lattice method can also be implemented for path-dependent derivatives for the purpose of benchmarking.

Normally, we would expect to see that the simulated result from the lattice method is less accurate and more volatile than the result from the Monte Carlo simulation method, because a larger number of simulated paths can be selected in the Monte Carlo method. This will make the simulated result more stable, assuming the same computing cost and the same time step.

Outcomes Analysis

The most straightforward method for outcomes analysis is to perform sensitivity tests on the model’s key drivers. A standardized one-factor short-rate model usually contains three parameters. For the level parameter (θ), we can calibrate the equilibrium rate level from the simulated term structure and compare it with θ. For the mean-reversion speed parameter (k), we can examine the half-life, which equals ln(2)/k, and compare it with the realized half-life from the simulated term structure. For the volatility parameter (σ_t), we would expect to see that a larger volatility yields a larger spread in the simulated term structure. We can also recalibrate the volatility surface from the simulated term structure to examine whether the number of simulated paths is sufficient to capture the volatility assumption.
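
As a simple illustration of the half-life check, the sketch below simulates a mean-reverting short rate with toy parameters and verifies that, after ln(2)/k years, the expected distance to the level parameter has roughly halved.

```python
import numpy as np

k, theta, sigma, r0, dt = 0.25, 0.03, 0.004, 0.01, 1 / 12
half_life_years = np.log(2) / k
n_steps = int(round(half_life_years / dt))

# Euler-simulate many paths out to the half-life and compare the mean with theory
rng = np.random.default_rng(5)
rates = np.full(20_000, r0)
for _ in range(n_steps):
    rates = rates + k * (theta - rates) * dt + sigma * np.sqrt(dt) * rng.standard_normal(rates.size)

theoretical_mean = theta + (r0 - theta) * 0.5                # half the initial gap to theta remains
print(f"theory: {theoretical_mean:.4f}, simulated: {rates.mean():.4f}")
```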

As mentioned above, an affine term structure model is analytically tractable, which means we can use the analytical formula to price zero-coupon bonds and other interest rate derivatives. We can compare the model results with the market prices, which can also verify the functionality of the given short-rate model.

Conclusion

The popularity of certain types of interest rate models changes as fast as the economy. In order to keep up, it is important to build a wide range of knowledge and continue learning new perspectives. Validation processes that follow the guidelines set forth in the OCC’s and FRB’s Supervisory Guidance on Model Risk Management (OCC 2011-12 and SR 11-7) seek to answer questions about the model’s conceptual soundness, development, process, implementation, and outcomes.  While the details of the actual validation process vary from bank to bank and from model to model, an interest rate model validation should seek to address these matters by asking the following questions:

  • Are the data inputs consistent with the assumptions of the given short-rate model?
  • What distribution does the interest rate dynamics imply for the short-rate model?
  • What kind of estimation method is applied in the model?
  • Is the model analytically tractable? Are there explicit analytical formulas for zero-coupon bonds or bond options from the model?
  • Is the model suitable for the Monte Carlo simulation or the lattice method?
  • Can we recalibrate the model parameters from the simulated term structures?
  • Does the model address the needs of its users?

These are the fundamental questions that we need to think about when we are trying to validate any interest rate model. Combining these with additional questions specific to the individual rate dynamics in use will yield a robust validation analysis that will satisfy both internal and regulatory demands.

