


Celebrating Women’s Contributions by the Numbers

Because we’re a data company after all. RiskSpan commemorates International Women’s Day by taking note of the remarkable people behind these numbers.

Martha Stewart
Votes for Women
Serena Williams
Women's March in DC
Girls Who Code
Title IX
Sally Ride
Women's Rights
Taylor Swift
Sandra Day O'Connor
Kathryn Bigelow
Betty White


Enriching Pre-Issue Intex CDI Files with [Actual, Good] Loan-Level Data

The way RMBS dealers communicate loan-level details to prospective investors today leaves a lot to be desired.

Any investor who has ever had to work with pre-issue Intex CDI files can attest to the problematic nature of the loan data they contain. Some are better than others, but virtually all of them lack information about any number of important loan features.

Investors can typically glean enough basic information about balances and average note rates from preliminary CDI files to run simple, static CPR/CDR scenarios. But information needed to run complex models — FICO scores, property characteristics and geography, and LTV ratios, to name a few — is typically lacking. MBS investors who want to run more sophisticated prepayment and credit models – models that rely on more comprehensive loan-level datasets to run deeper analytics and scenarios – can be left holding the bag when these details are missing from the CDI file.

The loan-level detail exists – it’s just not in the CDI file. Loan-level detail often accompanies the CDI file in a separate spreadsheet (still quaintly referred to in the 21st Century as a “loan tape”). Having this data separate from the CDI file requires investors to run the loan tape through their various credit and prepayment models and then manually feed those results back into the Intex CDI file to fully visualize the deal structure and expected cash flows.

This convoluted, multi-step workaround adds both time and the potential for error to the pre-trade analytics process.

A Better Way

Investors using RiskSpan’s Edge Platform can streamline the process of evaluating a deal’s structure alongside the expected performance of its underlying mortgage loans into a single step.

[Diagram: the Edge Platform workflow]

Here is how it works.

As illustrated above, when investors set up their analytical runs on Edge, RiskSpan’s proprietary credit and prepayment models automatically extract all the required loan-level data from the tape and then connect the modeling results to the appropriate corresponding deal tranche in the CDI file. This seamlessness reduces all the elements of the pre-trade analytics process down to a matter of just a few clicks.

Making all this possible is the Edge Platform’s Smart Mapper ETL solution, which allows it to read and process loan tapes in virtually any format. Using AI, the Platform recognizes every data element it needs to run the underlying analytics regardless of the order in which the data elements are arranged and irrespective of how (or even whether) column headers are used.
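To illustrate the idea (though not RiskSpan's proprietary logic), a mapper can infer what a column holds from the values themselves rather than from its header. A toy sketch, with made-up ranges and field names:

```python
# Toy value-based column classifier: infer a column's meaning from the
# range of its values. Ranges and field names are illustrative assumptions.
def guess_column(values):
    nums = [float(v) for v in values]
    lo, hi = min(nums), max(nums)
    if 300 <= lo and hi <= 850 and all(v == int(v) for v in nums):
        return "fico_score"            # whole numbers in the FICO range
    if 0 < lo and hi < 20:
        return "note_rate"             # small percentages
    if 0 < lo and hi <= 100:
        return "ltv"                   # percentage up to 100
    if hi > 10_000:
        return "current_balance"       # dollar amounts
    return "unknown"                   # fall back to asking the user

print(guess_column([742, 680, 801]))      # -> fico_score
print(guess_column([6.125, 7.0, 5.875]))  # -> note_rate
print(guess_column([385000, 512500]))     # -> current_balance
```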

Contact us to learn more about how RMBS investors are reaping the benefits of consolidating all of their data analytics on a single cloud-native platform.


What is the Draw of Whole Loan Investing?

Mortgage whole loans are having something of a moment as an asset class, particularly among insurance companies and other nonbank institutional investors. With insurance companies increasing their holdings of whole loans by 35 percent annually over the past three years, many people are curious what it is about these assets that makes them so appealing in the current environment.

We sat down with Peter Simon, founder and CEO of Dominium Advisors, a tech-enabled asset manager specializing in the acquisition and management of residential mortgage loans for insurance companies and other institutional investors. As an asset manager, Dominium focuses on performing the “heavy lifting” related to loan investing for clients. 

How has the whole loan asset class evolved since the 2008 crisis? How have the risks changed?

Peter Simon: Since 2008, laws and regulations like the Dodd-Frank Act and the formation of the Consumer Financial Protection Bureau have created important risk guardrails related to the origination of mortgage products. Many loan and mortgage product attributes that contributed to high levels of mortgage defaults in 2008, such as underwriting without proper documentation of income or assets and loan structures with negative amortization, are no longer permissible. In fact, more than half of the types of mortgages that were originated pre-crisis are no longer permitted under the current “qualified mortgage” regulations. In addition, there have been substantial changes to underwriting, appraisal, and servicing practices which have reduced fraud and conflicts of interest throughout the mortgage lifecycle.

How does whole loan investing fit into the overall macro environment?

Peter Simon: Currently, the macro environment is favorable for whole loan investing. There is a substantial supply-demand imbalance, meaning there are more buyers looking for places to live than there are homes for them to live in. At current rates of new home construction, mobility trends, and household formation, this imbalance is expected to persist for the next several years. Demographic trends are also widening the current supply-demand imbalance as more millennial buyers enter their early 30s, the first-time homebuyer sweet spot. And work-from-home trends created by the pandemic are creating a desire for additional living space.

Who is investing in whole loans currently?

Peter Simon: Banks have traditionally been the largest whole loan investors due to their historical familiarity with the asset class, their affiliated mortgage origination channels, their funding advantage and favorable capital rules for holding mortgages on balance sheet.  Lately, however, banks have pulled back from investing in loans due to concerns about the stickiness of deposits, which have been used traditionally to fund a portion of mortgage purchases, and proposed bank capital regulations that would make it more costly for banks to hold whole loans.  Stepping in to fill this void are other institutional investors — insurance companies, for example — which have seen their holdings of whole loans increase by 35% annually over the past 3 years. Credit and hedge funds and pension funds are also taking larger positions in the asset class. 

What is the specific appeal of whole loans to insurance companies and these other firms that invest in them?

Peter Simon: Spreads and yields on whole loans produce favorable relative value (risk versus yield) when compared to other fixed income asset classes like corporate bonds.  Losses since the Financial Crisis have been exceptionally low due to the product, process and regulatory improvements enacted after the Financial Crisis.  Whole loans also produce risks in a portfolio that tend to increase overall portfolio diversification.  Borrower prepayment risk, for example, is a risk that whole loan investors receive a spread premium for but is uncorrelated with many other fixed income risks.  And for investors looking for real estate exposure, residential mortgage risk has a much different profile than commercial mortgage risk.

Why don’t they just invest in non-Agency securities?

Peter Simon: Many insurance companies do in fact buy RMBS securities backed by non-QM loans. In fact, most insurance companies who have residential exposure will have it via securities. The thesis around investing in loans is that the yields are significantly higher (200 to 300 bps) than securities because loans are less liquid, are not evaluated by the rating agencies, and expose the insurer to first loss on a defaulted loan. So insurance investors who believe the extra yield more than compensates them for these extra risks (as it has, historically, over the last 15 years) will likely be interested in investing in loans.

What specific risk metrics do you evaluate when considering/optimizing a whole loan portfolio – which metrics have the highest diagnostic value?

Peter Simon: Institutional whole loan investors are primarily focused on three risks: credit risk, prepayment risk, and liquidity risk. Credit risk, or the risk that an investor will incur a loss if the borrower defaults on the mortgage, is typically evaluated using many different scenarios of home price appreciation and unemployment to evaluate both expected losses and “tail event” losses. This risk is typically expressed as projected lifetime credit losses. Prepayment risk is commonly evaluated using cash-flow-based measures like option-adjusted duration and convexity under various scenarios related to the potential direction of future interest rates (interest rate shocks).
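For illustration only, shock-based duration and convexity measures of this kind can be approximated with finite differences of a pricing function under parallel rate shocks. A minimal sketch with a toy pricing function standing in for a real loan-level model (this is not Dominium's or RiskSpan's implementation):

```python
# Effective duration/convexity via +/- rate shocks on a pricing function.
# The toy pricing function is a placeholder for a real loan-level model.
def effective_duration_convexity(price_fn, rate, shock=0.0025):
    p0 = price_fn(rate)            # base price
    p_up = price_fn(rate + shock)  # price after a +25bp shock
    p_dn = price_fn(rate - shock)  # price after a -25bp shock
    duration = (p_dn - p_up) / (2 * shock * p0)
    convexity = (p_dn + p_up - 2 * p0) / (shock ** 2 * p0)
    return duration, convexity

toy_price = lambda r: 100 / (1 + r) ** 7   # stand-in for projected cash flows
print(effective_duration_convexity(toy_price, 0.06))
```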

How would you characterize the importance of market color and how it figures into the overall assessment/optimization process?

Peter Simon: Newly originated whole loans like any other “new issue” fixed income product are traded in the market every day.  Whole loans are generally priced at the loan level based on their specific borrower, loan and property attributes.  Collecting and tabulating loan level prices every day is the most effective way to construct an investment strategy that optimizes the relative differences between loans with different yield characteristics and minimizes credit and prepayment risks in many various economic and market scenarios.


The future of analytics pricing is RiskSpan’s Usage-based delivery model

Usage-based pricing model brings big benefits to clients of RiskSpan’s Edge Platform

Analytic solutions for loans, MSRs and structured products are typically offered as software-as-a-service (SaaS) or “on-prem” products, where clients pay a monthly or annual fee to access the software and its features. The compute needed to run analytic workloads is typically purchased in advance and is fixed regardless of the need or use case.  

However, this traditional pricing model is not always the best fit for the dynamic and diverse needs of analytics users. It is technologically outdated and does not meet users where they are, with varying data volumes, usage patterns, and analytical complexity requirements that fluctuate with the markets. It is simply wasteful for companies to pay for unused, fixed-fee compute capacity year after year under long-term, set-price contracts when their needs don't require it.

Usage-based pricing is a trend that reflects the evolving nature of analytics and the increasing demand for more flexible, transparent, and value-driven pricing models.

RiskSpan has just announced the release of industry-innovating usage-based pricing that allows clients to scale up or down, based on their needs. Further, clients of the RiskSpan platform will now benefit from access to the full Edge Platform, including data, models and analytics – eliminating the need to license individual product modules. The Platform supports loans, MSRs and securities, with growing capabilities around private credit. Analyzing these assets can be compute- and data-intensive because of the need for collateral (loan-level) data and models to price, value, and calculate risk metrics.

[Diagram: a single platform for integrated data, trade analytics, and risk management, built on a core engine]

Usage-based pricing is an innovative alternative approach based on user-configured workloads. It enables RiskSpan to invoice its clients according to how much compute they actually need and use, rather than a fixed fee based on the modules they purchased during the last budget cycle.  
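A purely illustrative comparison of the two approaches; every number below is an assumption for the sake of the example, not RiskSpan's pricing:

```python
# Compare a flat annual license with usage-based billing for a workload
# concentrated in a few busy months. All figures are illustrative.
flat_annual_fee = 240_000
rate_per_compute_hour = 25.0

monthly_compute_hours = [900, 850, 200, 150, 120, 100,
                         110, 140, 400, 950, 1000, 300]

usage_bill = sum(h * rate_per_compute_hour for h in monthly_compute_hours)
print(f"usage-based: ${usage_bill:,.0f} vs. flat fee: ${flat_annual_fee:,.0f}")
# The usage-based bill tracks actual need instead of paying for idle capacity.
```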

Usage-based pricing benefits RiskSpan clients in several ways, including: 

    • Lower Costs: Clients pay only for what they need, rather than being locked into an expensive contract that may not suit their current or future situation.

    • Cost-Sharing Across the Enterprise: Clients can share costs across the enterprise and better manage expense based on usage by internal functions and business units.

    • Transparency: Clients can monitor their usage and directly link their analytics configuration and usage to their results and goals. They can also better control their spending, as they can track their usage and see how it affects their bill.

    • Flexibility: Clients can experiment with different features and options of the Platform, as they are not restricted by a predefined package or plan.

Usage-based pricing is not a one-size-fits-all solution, and it may not be suitable for every organization. Based on need, large enterprise workloads will require specific, customized licensing and may benefit from locked-in compute capacity that comes with volume discounts.

Bottom Line on RiskSpan’s Usage-based Pricing Model

CONS of Traditional Fixed-Fee Pricing | PROS of Usage-Based Pricing
Flat-fee pricing models force customers to pay for unused capacity. | Lower Costs — Pay only for what you use, not the wasted capacity of a dedicated cluster.
Unused capacity cannot be shared across the enterprise, which translates into wasted resources and higher costs. | Cost Sharing — Costs can be shared across the enterprise to better manage expense based on usage by your internal functions and business units.
Fixed pricing models make it difficult for customers to scale up or down as needed. | Transparency — Transparent pricing that fits your specific analytics workload (size, complexity, performance).
Traditional “product module-based” purchasing runs the risk of over-buying on features that will not be used. | Flexibility — Scale your use up and down as new and in-place features become useful to you under different market conditions.

With the introduction of usage-based pricing, RiskSpan is adding core value to its Edge Platform and a low-cost entry point to bring its solution to a wider base of clients. Its industry-leading capabilities solve challenges facing various users in the loans, MSR, and structured portfolio domains. For example:

    1. Loan/MSR Trader seeks analytics to support bidding on pools of loans and/or MSRs. Their usage is ad hoc and will benefit from usage-based pricing. Traders and investors can analyze prepay and credit performance trends by leveraging RiskSpan's 20+ years of historical performance datasets.

    2. Securities Trader (Agency or Non-Agency) wants more flexibility to set their prepay or credit model assumptions to run ad hoc scenario analysis not easily handled by their current vendor.

    3. Risk Manager wants another source of valuation for periodic MSR and loan portfolios to enhance decision making and compare against the marks from their third-party valuation firm.

    4. Private Credit Risk Manager needs a built-for-purpose private credit analytics system to properly run risk metrics. Users can hold these assets separately and run ad hoc analysis on them.

For more specific information about how RiskSpan will structure pricing with various commitment levels, click below to tell us about your needs, and a representative will be in touch with you shortly. 


RiskSpan’s Top 3 GenAI Applications for 2024

In the dynamic landscape of fixed-income securities, the role of generative artificial intelligence (GenAI) has become increasingly prominent. This transformative force is shaping the future of data, analytics, and predictive modeling, presenting both challenges and opportunities for industry leaders.

First, the challenges:

Managing GenAI applications in a responsible and ethical manner requires developers to be mindful of data security, data integrity, respecting intellectual property, and compliance standards, among other considerations. To this end, RiskSpan:

  • Maintains control over its data within its AWS instance and shares data with AI models solely for processing requests.
  • Employs data encryption during transit and at rest to ensure confidentiality and access controls to restrict unauthorized data access within the AWS environment.
  • Affirms client ownership of inputs and outputs generated by the AI model’s API, ensuring data integrity and compliance with regulatory requirements.
  • Supports common compliance standards, including GDPR and HIPAA.

Standing at the forefront of this evolution within the loans and structured products space, RiskSpan is actively furthering the advancement of three specific GenAI applications aimed at transforming how market participants work and maximizing their efficiency and performance.

1. Modeling Private Credit Transactions

Many financial institutions and legal advisors still spend an extraordinary amount of time reading and extracting relevant information from legal documents that accompany structured private credit transactions.

RiskSpan has partnered with clients to develop a solution to extract key terms from private credit and funding transactions. Trained multimodal AI models are further extended to generate executable code for valuations. This code will be fully integrated into RiskSpan’s risk and pricing platform.

The application solves a heretofore intractable problem in which the information necessary to generate accurate cash flows for private credit transactions is spread across multiple documents (a frequent occurrence when terms for individual classes can only be obtained from deal amendments).

Execution code for cash flow generation and valuation utilizes RiskSpan’s validated analytics routines, such as day count handling, payment calculations, discounting, etc.
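To give a flavor of the validated building blocks such generated code would call into, here is a minimal sketch of a simplified 30/360 day-count fraction and a discounting routine. These helpers are illustrative stand-ins, not RiskSpan's actual library:

```python
from datetime import date

def day_count_30_360(start: date, end: date) -> float:
    """Simplified 30/360 year fraction between two dates."""
    d1, d2 = min(start.day, 30), min(end.day, 30)
    days = 360 * (end.year - start.year) + 30 * (end.month - start.month) + (d2 - d1)
    return days / 360.0

def present_value(cash_flows, pay_dates, valuation_date, annual_rate):
    """Discount a schedule of cash flows back to the valuation date."""
    return sum(cf / (1 + annual_rate) ** day_count_30_360(valuation_date, dt)
               for cf, dt in zip(cash_flows, pay_dates))

# e.g. a coupon of 50 mid-year and 1,050 (coupon plus principal) at maturity
flows = [50.0, 1050.0]
dates = [date(2024, 7, 1), date(2025, 1, 1)]
print(present_value(flows, dates, date(2024, 1, 1), 0.06))
```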

2. Tape-Cracking 3.0: Making RiskSpan’s Smart Mapper Even Smarter

RiskSpan’s Edge Platform currently uses machine learning techniques as part of its Smart Mapper ETL Tool. When a new portfolio is loaded in a new format, the fuzzy logic that powers the Platform’s recommended mappings gets continually refined based on user activity.
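A stripped-down illustration of the fuzzy-matching step (the canonical field list and cutoff below are assumptions; the Platform's actual logic is proprietary and also learns from user corrections):

```python
import difflib

# Assumed canonical schema; a real mapping layer would carry many more fields.
CANONICAL_FIELDS = ["loan_id", "original_balance", "note_rate",
                    "fico_score", "ltv", "origination_date"]

def suggest_mapping(tape_headers, cutoff=0.6):
    """Suggest a canonical field for each incoming tape header."""
    mapping = {}
    for header in tape_headers:
        key = header.strip().lower().replace(" ", "_")
        match = difflib.get_close_matches(key, CANONICAL_FIELDS, n=1, cutoff=cutoff)
        mapping[header] = match[0] if match else None   # None -> ask the user
    return mapping

print(suggest_mapping(["Loan ID", "Orig Balance", "NoteRate", "FICO Score"]))
```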

In the coming months, the Platform’s existing ML-driven ETL process will be further refined to leverage the latest GenAI technology.

GenAI lends additional context to the automated mapping process by incorporating an understanding not only of the data in an individual column, but also of surrounding data as well as learned characteristics of the asset class in question. The resulting evolution, from simply trying to ensure the headers match up to a more holistic understanding of what the data actually is and the meaning it seeks to convey, will be a game changer for downstream analysts seeking to make reliable data-driven investment decisions.

RiskSpan made several updates in 2023 to help users automate the end-to-end workflow for loan valuation and surveillance. AI-based data loading combined with the Platform’s loan risk assumptions and flexible data model will enable users to obtain valuation and risk metrics simply by dragging and dropping a loan file into the application.

3. “Insight Support”

Tech support is one of today’s most widely known (and widely experienced) GenAI use cases. Seemingly all-knowing chatbots immediately answer users’ questions, sparing them the inconvenience of having to wait for the next available human agent. Like every other company, RiskSpan is enhancing its traditional tech support processes with GenAI to answer questions faster and to embed user-facing AI help within the Platform itself. But RiskSpan is taking things a step further by also exploring how GenAI can upend and augment its clients’ workflows.

RiskSpan refers to this workflow augmentation as “Insight Support.”

With Insight Support, GenAI evaluates an individual user’s data, dynamically serves up key insights, and automatically completes routine analysis steps without prompting. The resulting application can understand an individual user’s data and recognize what is most important to identify and highlight as part of a loan data analysis workflow.

Insight Support, for example, can leverage insights obtained by the AI-driven “Smarter Mapping” process to identify what specific type of collateral reporting is necessary. It can produce reports that highlight outliers, recognize the typical analytical/valuation run settings a user would want to apply, and then execute the analytical run and summarize the results in management-ready reporting. All in the name of shortening the analysis time needed to evaluate new investment opportunities.
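As a toy version of the outlier-highlighting step (the field names and z-score cutoff are assumptions, not the Platform's logic):

```python
import statistics

def flag_outliers(loans, field, z_cutoff=3.0):
    """Return loans whose `field` sits more than z_cutoff std devs from the mean."""
    values = [loan[field] for loan in loans]
    mean, stdev = statistics.mean(values), statistics.stdev(values)
    if stdev == 0:
        return []
    return [loan for loan in loans if abs(loan[field] - mean) / stdev > z_cutoff]

tape = [{"loan_id": i, "note_rate": 6.5} for i in range(50)]
tape.append({"loan_id": 99, "note_rate": 12.75})   # a fat-fingered rate
print(flag_outliers(tape, "note_rate"))            # -> flags loan 99
```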

Conclusion

Considered collectively, these three applications are building toward having RiskSpan’s SaaS platform function as a “virtual junior analyst” capable of handling much of the tedious work involved in analyzing loan and structured product investments and freeing up human analysts for higher-order tasks and decision making.

GenAI is the future of data and analytics and is therefore the future of RiskSpan’s Edge Platform. AI-created and -validated models, dashboards, and sorted data are already revolutionizing the way data is analyzed, allowing experts to redirect their attention away from time-consuming data wrangling tasks and toward more strategic critical thinking. The more complete adoption of fully optimized AI solutions throughout the industry, made possible by a rising generation of “AI-native” data scientists, will only accelerate this phenomenon.

RiskSpan’s commitment to pushing the boundaries of innovation in the loan and structured product space is underscored by its strategic approach to GenAI. While acknowledging the challenges posed by GenAI, RiskSpan remains poised for the future, leveraging its expertise to navigate the evolving landscape. As the industry anticipates the promised benefits of GenAI, RiskSpan’s vision and applications stand as a testament to its role as a thought leader in shaping the future of data analytics.

Stay tuned for more updates on RiskSpan’s innovative solutions, as we continue to lead the way in harnessing the power of GenAI for the benefit of our clients and the industry at large.


Impact of Mr. Cooper’s Cyber Security Incident on Agency Prepayment Reporting

Amid the fallout of the cyberattack against Mr. Cooper on October 31st was an inability on the large servicer’s part to report prepayment activity to investors.

According to Freddie Mac, the incident “resulted in [Mr. Cooper’s] shutting down certain systems as a precautionary measure. As a result, Freddie Mac did not receive loan activity reporting, which includes loan payoffs and payment corrections, from Mr. Cooper during the last few days of the reporting period related to October loan activity.”

Owing to Mr. Cooper’s size, we were curious to measure what impact (if any) its missing days of reporting might have on overall agency speeds.

Not a whole lot, it turns out.

This came as little surprise given the very low prepayment environment in which we find ourselves, but we wanted to run the numbers to be sure. Here is what we found.

We do not know precisely how much reporting was missed and assumed “the last few days of the reporting period” to mean 3 days.

Assuming 3 days means that Mr. Cooper’s reported speeds of 4.5 CPR to Freddie and 4.6 CPR to Fannie likely should have been 5.2 CPR and 5.4 CPR, respectively. While these differences are relatively small for Mr. Cooper’s portfolio (less than 1 CPR), the impact on overall Agency speeds is downright trivial — less than 0.05 CPR.

 | Fannie MBS | Freddie MBS
Sch. Bal. | 195,221,550,383 | 168,711,346,228
CPR (reported) | 4.6 | 4.5
CPR (estimated*) | 5.4 | 5.2
*assumes three days of unreported loan activity and constant daily prepayments for the month
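The gross-up itself is simple arithmetic. A sketch, assuming constant daily prepayments and a reporting window of roughly 22 business days (the window length is our assumption; agency reporting cutoffs differ):

```python
# Gross a reported CPR up for unreported days, assuming constant daily
# prepayments. The ~22-business-day window is an assumption on our part.
def adjusted_cpr(reported_cpr, days_in_period=22, days_missing=3):
    smm_observed = 1 - (1 - reported_cpr) ** (1 / 12)          # monthly rate actually captured
    frac_reported = (days_in_period - days_missing) / days_in_period
    smm_full = 1 - (1 - smm_observed) ** (1 / frac_reported)   # scale to the full period
    return 1 - (1 - smm_full) ** 12                            # back to an annualized CPR

print(round(adjusted_cpr(0.045) * 100, 1))   # ~5.2, in line with the Freddie estimate
```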

Fannie Mae and Freddie Mac will distribute scheduled principal and interest when servicers do not report the loan activity. Prepayments that were not reported “will be distributed to MBS certificateholders on the first distribution date that follows our receipt and reconciliation of the required prepayment information from Mr. Cooper.”


Validating Vendor Models

RiskSpan validates a diverse range of models, including many that have been developed by third-party vendors. Vendor models present unique implications when it comes to model risk management (MRM). In this article, we seek to describe how we align our approach to validating these models with existing regulatory guidance and provide an explanation of what financial institutions should expect when it comes time to validate their vendor models.  

Our clients use third-party vendor models that touch on virtually every risk function. The most common ones include: 

  • Anti-money laundering (AML) solutions for Suspicious Activity Monitoring (SAM) and Customer Due Diligence (CDD). 
  • Asset-Liability Management models that simulate the whole balance sheet under different interest rate scenarios to provide analytics for interest rate risk monitoring. 
  • Structured assets and mortgage loan analytics platforms (similar to RiskSpan’s Edge Platform). 
  • Mortgage pipeline management platforms, including loan pricing, best execution determination, analytics, and trade tracking. 
  • Climate risk models that quantify the risk associated with the future effects of climate change on assets at different locations. 
  • Artificial intelligence (AI) platforms that help model developers optimize the machine learning (ML) algorithm, feature selection, and hyperparameter tuning automatically with the target performance metric. 

Vendor Models and MRM Considerations

Regardless of whether a model is fully homegrown or a “black box” purchased from a third-party vendor, the same basic MRM principles apply. Banks are expected to validate their own use of vendor products [OCC 2011-12, p.15], and institutions should therefore understand the specifics of vendor models that pose model risk and require consideration during validation. The following outlines specific risks that vendor models pose, along with mitigating considerations and strategies model risk managers should consider.

  • Complexity: Some vendor models offer many functionalities and sub-models dedicated to different tasks, and these models are often highly integrated into the client’s internal systems and databases. Validation implications: Well-crafted model documentation is important to make the validation efficient, and validation requires more time since all model functionalities and components must be mapped.

  • Specialized expertise: Vendor models are often developed based on accumulated know-how in a specific field of study. Validation implications: Validation requires professionals with experience in that field who understand the model in relation to industry standards.

  • Regulatory requirements and compliance: Many models need to comply with existing regulations (e.g., fair lending in credit scoring) or are implemented to ensure compliance (BSA/AML and the PATRIOT Act). Validation implications: Validation requires expertise in the relevant regulatory compliance.

  • Opaque design, assumptions, and limitations: Vendors usually do not provide code for review, and some aspects of the model may be based on proprietary research or data. Validation implications: Banks should require the vendor to provide developmental evidence explaining the product components, design, and intended use, to determine whether the model is appropriate for the bank’s products, exposures, and risks. Vendors should also clearly indicate the model’s limitations and assumptions and where the product’s use may be problematic [OCC 2011-12, pp. 15-16].

  • Vague or incomplete documentation from the vendor: Often in the name of protecting IP, model documentation provided by the vendor may be vague or incomplete. Validation implications: Banks should ensure that appropriate documentation of the third-party approach is available so that the model can be appropriately validated [OCC 2011-12, p.21]. Institutions must also develop their own internal documentation that describes the intended use of the model, lists all inputs and outputs, lists model assumptions and limitations, and summarizes all relevant information about the model provided by the vendor, such as model design and methodology.

  • Limited model testing: Model testing is critical in assessing whether a model is performing as intended. However, vendors may not provide detailed results of their testing of model performance, outcomes, sensitivity, or assumptions appropriateness, or the results of ongoing monitoring. Moreover, the client or validator usually has limited ability to perform testing since many parts of the model are proprietary. Validation implications: Vendors should provide appropriate testing results demonstrating that the model works as expected. Banks should expect vendors to conduct ongoing performance monitoring and outcomes analysis, and a bank also should conduct ongoing monitoring and outcomes analysis of vendor model performance using the bank’s own outcomes [OCC 2011-12, pp. 15-16]. Validation should consist of a review of the testing results provided by the vendor and of any additional testing that is feasible and practical. This usually includes analysis of outcomes and benchmarking, and sometimes manual replication, sensitivity analysis, or stress testing. Benchmarking may, however, be limited due to the uniqueness or complexity of the model, or because proprietary data were used for development.

  • Customization: Out-of-the-box solutions often need to be customized to fit the internal systems, policies, and specific intended use of a particular institution. Validation implications: A bank’s customization choices should be documented and justified as part of the validation [OCC 2011-12, p.15].

  • External data: Vendor models often rely on external input data or on external data used for their development. Validation implications: An important part of any validation is to determine all input data sources and assess the quality, completeness, and appropriateness of the data. OCC 2011-12, p.16, states that banks should obtain information regarding the data used to develop the model and assess the extent to which that data is representative of the bank’s situation. OCC 2011-12, p.6, stresses that a rigorous review is particularly important for external data and information (from a vendor or outside party), especially as they relate to new products, instruments, or activities. Moreover, AB 2022-03, p.3, states that regulated entities should map their external dependencies to significant internal systems and processes to determine their systemic dependencies and interconnections. In particular, the regulated entities should have an inventory of key dependencies on externally sourced models, data, software, and cloud providers; this inventory should be regularly updated, reviewed by senior management, and presented to the board of directors as deemed appropriate.

  • Reliance on vendor support: Since access to the code and implementation details is limited for vendor models, ongoing servicing and support are necessary. Validation implications: Roles and responsibilities around the model should be defined, and the bank’s point of contact with the vendor should not rely solely on one person. It is also critical that the bank retain in-house knowledge in case the vendor or the bank terminates the contract for any reason, or the vendor goes out of business or otherwise ceases to support the model [OCC 2011-12, p.16].


Validation Approach

Validation of vendor models follows the same general principles as validation of any other model. These principles are laid out in regulatory guidance which, in addition to general MRM principles, specifically addresses vendor and other third-party products. Based on these guidelines and our experience validating numerous vendor models, RiskSpan’s approach includes the following:

  • Request documents and access to:
    • internal model documentation,
    • vendor documentation and user manual,
    • implementation documentation with a description of any customizations to the model (see Customization point in the section above), 
    • performance testing conducted by the model owner or vendor,
    • vendor certifications,
    • the model interface, if applicable, to conduct independent testing. 
  • Documentation review: We review both the internal documentation and vendor documentation and assess their thoroughness and completeness. According to OCC 2011-12, p.21, documentation should be sufficiently detailed so that parties unfamiliar with a model can understand how the model operates, its limitations, and its key assumptions. For internal documentation, we focus on the statement of model purpose, the list of inputs and their sources, documentation of assumptions and limitations, the description of outputs and their use, controls and governance, and any testing conducted internally. We also review the documentation of any customizations made to the vendor model. 
  • Conceptual soundness review: Combining information from both the internal and vendor documentation, information from the model owner, and the industry expertise of our SMEs, we assess whether the model meets the stated model purpose, as well as whether the design, underlying theory, and logic are justifiable and supportable by existing research and industry standards. We also critically assess all known model assumptions and limitations and possibly identify additional assumptions that might be hidden or limitations that were not documented.  
  • Data review: We aim to identify all data inputs, their sources, and controls related to gathering, loading, and quality of data. We also assess the quality of data by performing exploratory data analysis. Assessing development data is often not possible as the data are proprietary to the vendor. 
  • Independent testing: To supplement, update, or verify the testing performed by the vendor, we perform internal testing where applicable. Different models typically allow different testing methods, but validators often need permission to access model interfaces. This is also acknowledged in OCC 2011-12, p.15: “External models may not allow full access to computer coding and implementation details, so the bank may have to rely more on sensitivity analysis and benchmarking.” The following are the testing methods we often use to devise effective challenges for specific models in our practice (a bare-bones sensitivity-testing sketch follows this list):
    • AML systems for transaction monitoring and customer due diligence: manual replication for a sample of customers/alerts, exploratory data analysis, outcomes analysis  
    • Asset-Liability Management models: outcomes analysis and review of reporting, sensitivity analysis and stress testing 
    • Loan pricing models: manual replication, outcomes analysis, sensitivity analysis, stress testing, benchmarking to RS Edge 
    • Climate risk models that quantify the risk associated with the future effects of climate change on assets at different locations: Outcomes analysis, benchmarking to online services with open access such as National Risk Index, ClimateCheck, and Risk Factor. 
    • AI/ML systems: outcomes analysis based on performance metrics, manual replication of the final model in Python, and benchmarking against alternative algorithms. 
  • Ongoing monitoring review: As explained in the previous section, vendors are expected to conduct ongoing monitoring of their models, but banks should monitor their own outcomes as well. Our review thus consists of an assessment of the client’s ongoing monitoring plan as well as the results of both the client’s and the vendor’s monitoring. When the validated model does not produce predictions or estimates (AML models, for example), ongoing monitoring typically consists of periodic revalidations and data quality monitoring. 
  • Governance review: We review the client’s policies, roles, and responsibilities defined for the model. We also investigate whether a contingency plan is in place for instances when the vendor is no longer supporting the model. We also typically investigate and assess controls around the model’s access and use. 
  • Compliance review: If a model is implemented to make the institution compliant with certain regulations (BSA/AML, PATRIOT Act) or the model itself must comply with regulations, we conduct a compliance review with the assistance of subject matter experts (SMEs) who possess industry experience. This review verifies that the model and its implementation align with the regulatory requirements and standards set forth by the relevant authorities. The expertise of the SMEs helps ensure that the model effectively addresses compliance concerns and operates within the legal and ethical boundaries of the industry. 
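As referenced in the independent-testing bullet above, here is a bare-bones one-factor-at-a-time sensitivity test that can be run against any model exposed through an interface. The model function and shock size are placeholders, not any particular vendor's API:

```python
# Minimal one-factor-at-a-time sensitivity test for a black-box model.
# `model` is any callable taking a dict of named inputs.
def sensitivity_report(model, base_inputs, rel_shock=0.10):
    base_output = model(base_inputs)
    report = {}
    for name, value in base_inputs.items():
        shocked = dict(base_inputs)
        shocked[name] = value * (1 + rel_shock)      # shock one input at a time
        report[name] = model(shocked) - base_output  # output delta per input
    return report

# Toy stand-in for a vendor pricing model, for demonstration only.
toy_model = lambda x: x["balance"] * x["rate"] / (1 + x["spread"])
base = {"balance": 1_000_000, "rate": 0.065, "spread": 0.02}
print(sensitivity_report(toy_model, base))
```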

Project Management Considerations

For validation projects to be successful, a strong project management discipline must be followed to ensure they are completed on schedule and within budget and meet all key stakeholder objectives. In addition to adapting our validation approach, we thus also take our project management approach into consideration. For vendor model validation projects, we specifically follow these principles: 

  • Schedule a periodic status meeting: We typically hold weekly meetings with the client’s MRM team to communicate the status of the validation, align the client’s expectations, discuss observations, and address any concerns. Since vendor models are often complex, these meetings also serve as a place to discuss any roadblocks, such as access to the model’s UI, shared folders, databases, etc. 
  • Schedule a model walkthrough session with the model owner: Vendor models are often complex, and the client may use only specific components and functionalities. The most efficient way to understand the big picture, and the particular way the model is used, has proved to be a live (typically remote) session with the model owner. Asking targeted questions right at the beginning of the engagement helps us quickly get a grasp of the critical areas to focus on during the validation. 
  • Establish a communication channel with the model owner: Be it direct messages or emails sent to and forwarded by the client’s MRM team, it is important to be in touch with the model owner, as not every detail may be documented. 

Conclusion

Vendor models pose unique risks and challenges for MRM and validation. Taking additional steps to mitigate these risks is vital to ensuring a well-functioning MRM program. An effective model validation approach takes these unique considerations into account and directly applies the guidelines related specifically to validation of vendor models outlined in OCC 2011-12 (SR 11-7). Effectively carrying out this type of testing often requires adjustments to the management of vendor model validation projects. 

References

OCC 2011-12, p.6: The data and other information used to develop a model are of critical importance; there should be rigorous assessment of data quality and relevance, and appropriate documentation. Developers should be able to demonstrate that such data and information are suitable for the model and that they are consistent with the theory behind the approach and with the chosen methodology. If data proxies are used, they should be carefully identified, justified, and documented. If data and information are not representative of the bank’s portfolio or other characteristics, or if assumptions are made to adjust the data and information, these factors should be properly tracked and analyzed so that users are aware of potential limitations. This is particularly important for external data and information (from a vendor or outside party), especially as they relate to new products, instruments, or activities. 

OCC 2011-12, p.9: All model components, including input, processing, and reporting, should be subject to validation; this applies equally to models developed in-house and to those purchased from or developed by vendors or consultants. The rigor and sophistication of validation should be commensurate with the bank’s overall use of models, the complexity and materiality of its models, and the size and complexity of the bank’s operations. 

OCC 2011-12, p.12: Many of the tests employed as part of model development should be included in ongoing monitoring and be conducted on a regular basis to incorporate additional information as it becomes available. New empirical evidence or theoretical research may suggest the need to modify or even replace original methods. Analysis of the integrity and applicability of internal and external information sources, including information provided by third-party vendors, should be performed regularly. 

OCC 2011-12 dedicates a whole section (pp. 15-16) to the validation of vendor models: 

Validation of Vendor and Other Third-Party Products  

The widespread use of vendor and other third-party products—including data, parameter values, and complete models—poses unique challenges for validation and other model risk management activities because the modeling expertise is external to the user and because some components are considered proprietary. Vendor products should nevertheless be incorporated into a bank’s broader model risk management framework following the same principles as applied to in-house models, although the process may be somewhat modified. 

As a first step, banks should ensure that there are appropriate processes in place for selecting vendor models. Banks should require the vendor to provide developmental evidence explaining the product components, design, and intended use, to determine whether the model is appropriate for the bank’s products, exposures, and risks. Vendors should provide appropriate testing results that show their product works as expected. They should also clearly indicate the model’s limitations and assumptions and where use of the product may be problematic. Banks should expect vendors to conduct ongoing performance monitoring and outcomes analysis, with disclosure to their clients, and to make appropriate modifications and updates over time. Banks are expected to validate their own use of vendor products. External models may not allow full access to computer coding and implementation details, so the bank may have to rely more on sensitivity analysis and benchmarking. Vendor models are often designed to provide a range of capabilities and so may need to be customized by a bank for its particular circumstances. A bank’s customization choices should be documented and justified as part of validation. If vendors provide input data or assumptions, or use them to build models, their relevance to the bank’s situation should be investigated. Banks should obtain information regarding the data used to develop the model and assess the extent to which that data is representative of the bank’s situation. The bank also should conduct ongoing monitoring and outcomes analysis of vendor model performance using the bank’s own outcomes. Systematic procedures for validation help the bank to understand the vendor product and its capabilities, applicability, and limitations. Such detailed knowledge is necessary for basic controls of bank operations. It is also very important for the bank to have as much knowledge in-house as possible, in case the vendor or the bank terminates the contract for any reason, or the vendor is no longer in business. Banks should have contingency plans for instances when the vendor model is no longer available or cannot be supported by the vendor. 

OCC 2011-12, p.17: Policies should emphasize testing and analysis, and promote the development of targets for model accuracy, standards for acceptable levels of discrepancies, and procedures for review of and response to unacceptable discrepancies. They should include a description of the processes used to select and retain vendor models, including the people who should be involved in such decisions. 

OCC 2011-12, p.21, Documentation: For cases in which a bank uses models from a vendor or other third party, it should ensure that appropriate documentation of the third-party approach is available so that the model can be appropriately validated. 

AB 2022-03, p.3: Since the publication of AB 2013-07, FHFA has observed a wider adoption of technologies in the mortgage industry. Many of these technologies reside externally to the regulated entities and are largely outside of the regulated entities’ control. Examples of such technologies are cloud servers, vendor models, and external data used by the regulated entities as inputs for their models. Although FHFA has published guidance related to externally sourced technologies, such as AB 2018-04: Cloud Computing Risk Management (Aug. 14, 2018) and AB 2018-08: Oversight of Third-Party Provider Relationships (Sept. 28, 2018), FHFA expects the regulated entities to take a macro-prudential view of the risks posed by externally sourced data and technologies. The regulated entities should map their external dependencies to significant internal systems and processes to determine their systemic dependencies and interconnections. In particular, the regulated entities should have an inventory of key dependencies on externally sourced models, data, software, and cloud providers. This inventory should be regularly updated and reviewed by senior management and presented to the board of directors, as deemed appropriate. 


AB 2022-03, p.5: When using an external vendor to complete an independent model validation, the regulated entity’s model validation group is accountable for the quality, recommendations, and opinions of any third-party review. When evaluating a third-party model validation, a regulated entity should implement model risk management policies and practices that align the vendor-completed specific standards for an independent validation with the specific standards included in AB 2013-07. 


What Do 2023 Origination Trends Mean for MSRs?

When it comes to forecasting MSR performance and valuations, much is made of the interest rate environment, and rightly so. But other loan characteristics also play a role, particularly when it comes to predicting involuntary prepayments.

So let’s take a look at what 2023 mortgage originations might be telling us.

Average credit scores, which were markedly higher than normal during the pandemic years, have returned during the first part of 2023 to averages observed during the latter half of the 2010s.

[Chart: average FICO scores]

The most credible explanation for this reversion to the mean is that the Covid years were accompanied by a historically strong refinance market. Refis traditionally have higher FICO scores than purchase mortgages, and this is apparent in the recent trend.

Purchase markets are also associated with higher average LTV ratios than refi markets, which accounts for the sharp rise in LTVs during the same period.

[Chart: average LTV ratios]

Consequently, in 2023, with high home prices persisting despite extremely high interest rates, new first-time homebuyers with good credit continue to be approved for loans, but with higher LTV and DTI ratios.

[Chart: average DTI ratios]

Between rates and home prices, borrowers simply need to borrow more now than they would have just a few years ago to buy a comparable house. This is reflected not just in the average DTI and LTV but also in the average loan size (below), which, unsurprisingly, is trending higher as well.

Recent large increases to the conforming loan limit are clearly also contributing to the higher average loan size.

[Chart: average original loan size (WOLS)]

What, then, do these origination trends mean for the MSR market?

The very high rates associated with newer originations clearly translate to higher risk of prepayments. We have seen significant spikes in actual speeds when rates have taken a leg down — even though the loans are still very new. FICO/LTV/DTI trends also potentially portend higher delinquencies down the line, which would negatively impact MSR valuations.

Nevertheless, today’s MSR trading market remains healthy, and demand is starting to catch up with the high supply as more money is being raised and put to work by investors in this space. Supply remains high due to the need for mortgage originators to monetize the value of MSRs to balance out the impact of declining originations.

However, the nature of the MSR trade has evolved from the investor’s perspective. When rates were at historic lows for an extended period, the MSR trade was relatively straightforward as there was a broader secular rate play in motion. Now, however, bidders are scrutinizing available deals more closely — evaluating how speeds may differ from historical trends or from what the models would typically forecast.

These more granular reviews are necessarily beginning to focus on how much lower today’s already very low turnover speeds can actually go and the extent of lock-in effects for out-of-the-money loans at differing levels of negative refi incentive. Investors’ differing views on prepays across various pools in the market will often be the determining factor on who wins the bid.

Investor preference may also be driven by the diversity of an investor’s other holdings. Some investors are looking for steady yield on low-WAC MSRs that have very small prepayment risk while other investors are seeking the higher negative convexity risk of higher-WAC MSRs — for example, if their broader portfolio has very limited negative convexity risk.

In sum, investors have remained patient and selective — seeking opportunities that best fit their needs and preferences.

So what else do MSR holders need to focus on that may impact MSR valuations going forward?

The impact from changes in HPI is one key area of focus.

While year-over-year HPI remains positive nationally, servicers and other investors really need to look at housing values region by region. The real risk comes in the tails of local home price moves that are often divorced from national trends. 

For example, HPIs in Phoenix, Austin, and Boise (to name three particularly volatile MSAs) behaved quite differently from the nation as a whole as HPIs in these three areas in particular first got a boost from mass in-migration during the pandemic and have since come down to earth.

Geographic concentrations within MSR books will be a key driver of credit events. To that end, we are seeing clients begin to examine their portfolio concentrations as granularly as the ZIP-code level.

Declining home values will impact most MSR valuation models in two offsetting ways: slower refi speeds will result in higher MSR values, while the increase in defaults will push MSRs back downward. Of these two factors, the slower speeds typically take precedence. In today’s environment of slow speeds driven primarily by turnover, however, lower home prices are going to blunt the impact of speeds, leaving MSR values more exposed to the impact of higher defaults.
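The offsetting effects can be seen in a deliberately oversimplified servicing-strip valuation, where value is the discounted servicing fee earned on the surviving balance. Every parameter below is an illustrative assumption, not a production MSR model:

```python
# Oversimplified MSR value: PV of a 25bp servicing strip on the surviving
# balance. Ignores costs, escrow float, recapture, and scheduled amortization.
def msr_value(cpr, cdr, fee=0.0025, months=360, disc_rate=0.08):
    smm = 1 - (1 - cpr) ** (1 / 12)    # monthly voluntary prepayment rate
    mdr = 1 - (1 - cdr) ** (1 / 12)    # monthly default rate
    surviving, value = 1.0, 0.0
    for m in range(1, months + 1):
        surviving *= 1 - smm - mdr
        value += (fee / 12) * surviving / (1 + disc_rate / 12) ** m
    return value                        # value per $1 of current balance

print(msr_value(cpr=0.07, cdr=0.005))   # base case
print(msr_value(cpr=0.05, cdr=0.005))   # slower speeds lift the value...
print(msr_value(cpr=0.05, cdr=0.020))   # ...higher defaults claw it back
```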


Edge: Zombie Banks

At the market highs, banks gorged themselves on assets, lending and loading their balance sheets in an era of cheap money and robust valuations. As asset prices drop, these same companies find their balance sheets functionally impaired and in some cases insolvent. They are able to stay alive with substantial help from the central bank but require ongoing support. This support and an unhealthy balance sheet preclude them from fulfilling their role in the economy.

We are describing, of course, the situation in Japan in the late 1980s and early 1990s, when banks lent freely, and companies purchased both real estate and equity at the market highs. When the central bank tightened monetary policy and the stock market tanked, many firms became distressed and had to rely on support from the central bank to stay afloat. But with sclerotic balance sheets, they were unable to thrive, leading to the “lost decade” (or two or three) of anemic growth.

While there are substantial parallels between the U.S. today and Japan of three decades ago, there are differences as well. First, the U.S. has a dynamic non-bank sector that can fill the typical roles of lending and financial intermediation. Second, much of the bank impairment comes from Agency MBS, which slowly but surely will prepay and relieve pressure on banks' HTM assets.

[Chart. Source: The Wall Street Journal]

How fast will these passthroughs pay off? It will vary greatly from bank to bank and depends on the mix of passthroughs and their loan rates relative to current market rates, what MBS traders call “refi incentive” or “moneyness.” It is helpful to remember that incentive also matters to housing turnover, which is a form of mortgage prepayment. For example, a borrower with a note rate that is 100bp below prevailing rates is much more likely to move to a new house than a borrower with a note rate that is 200bp out of the money, a trait that mortgage practitioners call “lock-in”.
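In sketch form, with illustrative thresholds:

```python
# Illustrative refi-incentive calculation and lock-in bucketing; the
# thresholds are assumptions, not market conventions.
def refi_incentive_bp(note_rate, market_rate):
    """Positive = in the money (refinanceable); negative = locked in."""
    return (note_rate - market_rate) * 100   # percentage points -> bp

def lock_in_bucket(incentive_bp):
    if incentive_bp >= 25:
        return "in the money"
    if incentive_bp > -100:
        return "mildly out of the money"
    return "deeply locked in"                # turnover slows sharply here

for note_rate in (3.0, 5.5, 7.25):           # market rate assumed at 6.5%
    bp = refi_incentive_bp(note_rate, 6.5)
    print(note_rate, bp, lock_in_bucket(bp))
```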

[Chart. Source: RiskSpan’s Edge Platform]

As a proxy for the aggregate bank’s balance sheet, we look at the universe of conventional and GNMA passthroughs and remove the MBS held by the Federal Reserve.


The Fed’s most substantial purchases flowed from its balance sheet expansion during COVID, when mortgage rates were at all-time lows. Consequently, the Fed’s holdings skew heavily toward low note rates: two-thirds of the Fed’s position in 30yr MBS has a note rate of 3.25% or lower. In contrast, just under 50% of the market ex-Fed carries those note rates.

[Chart. Source: RiskSpan’s Edge Platform]

From here, we can estimate prepayments on the remaining universe. Prepay estimates from dealers and analytics providers like RiskSpan vary but generally fall in the 4 to 6 CPR range for out-of-the-money coupons. This, coupled with scheduled principal amortization of roughly 2-3% per annum, means that at this level of rates, runoff in HTM MBS should occur at around 8% per annum — slow, but not zero. After five years, approximately one-third of the MBS should pay off. Naturally, the pace of runoff can change as both mortgage rates and home sales change.
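A quick check of that runoff arithmetic, using the rates assumed above:

```python
# ~4-6 CPR prepayments plus ~2-3% scheduled amortization -> ~8% annual runoff.
annual_runoff = 0.08
balance = 1.0
for year in range(5):
    balance *= 1 - annual_runoff
print(f"remaining after 5 years: {balance:.0%}")   # ~66%, i.e. ~1/3 paid off
```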

While the current crisis contains echoes of the Japanese zombie bank crisis of the 1990s, there are notable differences. U.S. banks may be hamstrung, with reduced capacity to make new loans, as MBS in their HTM portfolios run off over the next few years. But they will run off — slowly but surely.

