Articles Tagged with: Data Management

Optimizing Analytics Computational Processing 

We met with RiskSpan’s Head of Engineering and Development, Praveen Vairavan, to understand how his team set about optimizing analytics computational processing for a portfolio of 4 million mortgage loans using a cloud-based compute farm.

This interview dives deeper into a case study we discussed in a recent interview with RiskSpan’s co-founder, Suhrud Dagli.

Here is what we learned from Praveen. 



Could you begin by summarizing for us the technical challenge this optimization was seeking to overcome? 

PV: The main challenge related to an investor’s MSR portfolio, specifically the volume of loans we were trying to run. The client has close to 4 million loans spread across nine different servicers. This presented two related but separate sets of challenges. 

The first set of challenges stemmed from needing to consume data from different servicers whose file formats not only differed from one another but also often lacked internal consistency. By that, I mean even the file formats from a single given servicer tended to change from time to time. This required us to continuously update our data mapping and (because the servicer reporting data is not always clean) modify our QC rules to keep up with evolving file formats.  

The second challenge relates to the sheer volume of compute power necessary to run stochastic paths of Monte Carlo rate simulations on 4 million individual loans and then discount the resulting cash flows based on option adjusted yield across multiple scenarios. 

And so you have 4 million loans, times multiple paths, times four scenarios (a base cash flow case, an option-adjusted case, an up case, and a down case), and you can see how quickly the workload adds up. And all of this needed to happen on a daily basis. 

To help minimize the computing workload, our client had been running all these daily analytics at a rep-line level—stratifying and condensing everything down to between 70,000 and 75,000 rep lines. This alleviated the computing burden but at the cost of decreased accuracy because they couldn’t look at the loans individually. 
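
To put that scale in perspective, here is a back-of-the-envelope comparison using the figures cited in this interview (4 million loans, 50 Monte Carlo paths, 4 scenarios, and roughly 75,000 rep lines). The calculation is purely illustrative.

```python
# Back-of-the-envelope scale check using the figures quoted in the interview.
loans, paths, scenarios, rep_lines = 4_000_000, 50, 4, 75_000

loan_level_runs = loans * paths * scenarios      # 800,000,000 cash-flow projections per day
rep_line_runs = rep_lines * paths * scenarios    # 15,000,000 with rep-line stratification

print(f"{loan_level_runs:,} loan-level runs vs. {rep_line_runs:,} rep-line runs "
      f"({loan_level_runs // rep_line_runs}x the workload)")
```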

What technology enabled you to optimize the computational process of running 50 paths and 4 scenarios for 4 million individual loans?

PV: With the cloud, you have the advantage of spawning a bunch of servers on the fly (just long enough to run all the necessary analytics) and then shutting them down once the analytics are done. 

This sounds simple enough. But in order to use that many compute servers, we needed to figure out how to distribute the 4 million loans across all of them so they could run in parallel (and then get the results back so we could aggregate them). We did this using what is known as a MapReduce approach. 

Say we want to run a particular cohort of this dataset with 50,000 loans in it. If we were using a single server, it would run them one after the other – generate all the cash flows for loan 1, then for loan 2, and so on. As you would expect, that is very time-consuming. So, we decided to break down the loans into smaller chunks. We experimented with various chunk sizes. We started with 1,000 – we ran 50 chunks of 1,000 loans each in parallel across the AWS cloud and then aggregated all those results.  

That was an improvement, but the 50 parallel jobs were still taking longer than we wanted. And so, we experimented further before ultimately determining that the “sweet spot” was something closer to 5,000 parallel jobs of 100 loans each. 

Only in the cloud is it practical to run 5,000 servers in parallel. But this of course raises the question: Why not just go all the way and run 50,000 parallel jobs of one loan each? Well, as it happens, running an excessively large number of jobs carries overhead burdens of its own. And we found that the extra time needed to manage that many jobs more than offset the compute time savings. And so, using a fair bit of trial and error, we determined that 100-loan jobs maximized the runtime savings without creating an overly burdensome number of jobs running in parallel.  
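
The chunk-and-aggregate pattern Praveen describes can be sketched in a few lines of Python. This is a minimal illustration only: it uses local processes as a stand-in for the AWS compute farm, and run_cashflows is a hypothetical placeholder for the actual analytics engine.

```python
from concurrent.futures import ProcessPoolExecutor

CHUNK_SIZE = 100  # the ~100-loan "sweet spot" job size discussed above

def run_cashflows(loan_chunk):
    """Stand-in for the analytics engine: project cash flows for one chunk of loans."""
    # In practice this would run the Monte Carlo paths and scenarios for each loan.
    return [{"loan_id": loan["loan_id"], "result": 0.0} for loan in loan_chunk]

def map_reduce(loans, max_workers=None):
    # "Map" step: split the portfolio into fixed-size jobs that can run in parallel
    # (on AWS these jobs would be distributed across many servers, not local processes).
    chunks = [loans[i:i + CHUNK_SIZE] for i in range(0, len(loans), CHUNK_SIZE)]
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        chunk_results = pool.map(run_cashflows, chunks)
    # "Reduce" step: aggregate the per-chunk results back into a single portfolio view.
    return [row for chunk in chunk_results for row in chunk]
```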


You mentioned the challenge of having to manage a large number of parallel processes. What tools do you employ to work around these and other bottlenecks? 

PV: The most significant bottleneck associated with this process is finding the “sweet spot” number of parallel processes I mentioned above. As I said, we could theoretically break it down into 4 million single-loan processes all running in parallel. But managing this amount of distributed computation, even in the cloud, invariably creates a degree of overhead which ultimately degrades performance. 

And so how do we find that sweet spot – how do we optimize the number of servers on the distributed computation engine? 

As I alluded to earlier, the process involved an element of trial and error. But we also developed some home-grown tools (and leveraged some tools available in AWS) to help us. These tools enable us to visualize computation server performance – how much of a load they can take, how much memory they use, etc. These helped eliminate some of the optimization guesswork.   

Is this optimization primarily hardware based?

PV: AWS provides essentially two “flavors” of machines. One “flavor” offers a lot of memory, which enables you to keep a whole lot of loans in memory so they run faster. The other flavor of hardware is more processor based (compute intensive). These machines provide a lot of CPU power so that you can run a lot of processes in parallel on a single machine and still get the required performance. 

We have done a lot of R&D on this hardware. We experimented with many different instance types to determine which works best for us and optimizes our output: lots of memory but fewer CPUs vs. CPU-intensive machines with less (but still a reasonable amount of) memory. 

We ultimately landed on a machine with 96 cores and about 240 GB of memory. This was the balance that enabled us to run portfolios at speeds consistent with our SLAs. For us, this translated to a server farm of 50 machines running 70 processes each, which works out to 3,500 workers helping us to process the entire 4-million-loan portfolio (across 50 Monte Carlo simulation paths and 4 different scenarios) within the established SLA.  

What software-based optimization made this possible? 

PV: Even optimized in the cloud, hardware can get pricey – on the order of $4.50 per hour in this example. And so, we supplemented our hardware optimization with some software-based optimization as well. 

We were able to optimize our software to a point where we could use a machine with just 30 cores (rather than 96) and 64 GB of RAM (rather than 240). Using 80 of these machines running 40 processes each gives us 2,400 workers (rather than 3,500). Software optimization enabled us to run the same number of loans in roughly the same amount of time (slightly faster, actually) but using fewer hardware resources. And our cost to use these machines was just one-third what we were paying for the more resource-intensive hardware. 

All this, and our compute time actually declined by 10 percent.  
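
For reference, the configuration arithmetic quoted above works out as follows. Note that the hourly cost of the lighter instances is not stated directly in the interview, so it is expressed here only as the quoted one-third ratio.

```python
# The two server-farm configurations described above, side by side.
heavy = {"machines": 50, "processes": 70, "hourly_cost": 4.50}  # 96-core / 240 GB instances
light = {"machines": 80, "processes": 40}                       # 30-core / 64 GB instances

workers_heavy = heavy["machines"] * heavy["processes"]      # 3,500 workers
workers_light = light["machines"] * light["processes"]      # 2,400 workers

farm_cost_heavy = heavy["machines"] * heavy["hourly_cost"]  # $225 per hour
farm_cost_light = farm_cost_heavy / 3                       # "one-third what we were paying"

print(workers_heavy, workers_light, farm_cost_heavy, round(farm_cost_light, 2))
```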

The software optimization that made this possible has two parts: 

The first part (as we discussed earlier) is using the MapReduce methodology to break down jobs into optimally sized chunks. 

The second part involved optimizing how we read loan-level information into the analytical engine.  Reading in loan-level data (especially for 4 million loans) is a huge bottleneck. We got around this by implementing a “pre-processing” procedure. For each individual servicer, we created a set of optimized loan files that can be read and rendered “analytics ready” very quickly. This enables the loan-level data to be quickly consumed and immediately used for analytics without having to read all the loan tapes and convert them into a format that the analytics engine can understand. Because we have “pre-processed” all this loan information, it is immediately available in a format that the engine can easily digest and run analytics on.  
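
A minimal sketch of what such a pre-processing step might look like, assuming pandas and a parquet engine (such as pyarrow) are available. The servicer names, column maps, and QC rule below are hypothetical illustrations, not RiskSpan’s actual mappings.

```python
import pandas as pd

# Hypothetical per-servicer column maps; real mappings are maintained per servicer
# and updated as file formats change.
SERVICER_COLUMN_MAPS = {
    "servicer_a": {"LOAN_NBR": "loan_id", "CURR_UPB": "current_balance", "NOTE_RT": "note_rate"},
    "servicer_b": {"LoanID": "loan_id", "UPB": "current_balance", "Rate": "note_rate"},
}

def preprocess_tape(path, servicer):
    """Normalize one servicer tape into an 'analytics ready' columnar file."""
    raw = pd.read_csv(path)
    tape = raw.rename(columns=SERVICER_COLUMN_MAPS[servicer])
    tape["note_rate"] = pd.to_numeric(tape["note_rate"], errors="coerce")  # simple QC example
    tape = tape.dropna(subset=["loan_id", "current_balance"])
    out_path = f"{servicer}_analytics_ready.parquet"
    tape.to_parquet(out_path, index=False)  # columnar format that reads back quickly
    return out_path
```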

This software-based optimization is what ultimately enabled us to optimize our hardware usage (and save time and cost in the process).  

Contact us to learn more about how we can help you optimize your mortgage analytics computational processing.


Asset Managers Improving Yields With Resi Whole Loans

An unmistakable transformation is underway among asset managers and insurance companies with respect to whole loan investments. Whereas residential mortgage loan investing has historically been the exclusive province of commercial banks, a growing number of other institutional investors – notably life insurance companies and third-party asset managers – have shifted their attention toward this often-overlooked asset class.

Life companies and other asset managers with primarily long-term, risk-sensitive objectives are no strangers to residential mortgages. Their exposure, however, has traditionally been in the form of mortgage-backed securities, generally taking refuge in the highest-rated bonds. Investors accustomed to the AAA and AA tranches may understandably be leery of whole-loan credit exposure. Infrastructure investments necessary for managing a loan portfolio and the related credit-focused surveillance can also seem burdensome. But a new generation of tech is alleviating more of the burden than ever before and making this less familiar and sometimes misunderstood asset class increasingly accessible to a growing cadre of investors.

Maximizing Yield

Following a period of low interest rates, life companies and other investment managers are increasingly embracing residential whole-loan mortgages as they seek assets with higher returns relative to traditional fixed-income investments. As highlighted in the chart below, residential mortgage portfolios, on a loss-adjusted basis, consistently outperform other investments, such as corporate bonds, and look increasingly attractive relative to private-label residential mortgage-backed securities as well.

Nearly one-third of the $12 trillion in U.S. residential mortgage debt outstanding is currently held in the form of loans.

And while most whole loans continue to be held in commercial bank portfolios, a growing number of third-party asset managers have entered the fray as well, often on behalf of their life insurance company clients.

Investing in loans introduces a dimension of credit risk that investors do need to understand and manage through thoughtful surveillance practices. As the chart below (generated using RiskSpan’s Edge Platform) highlights, when evaluated on a loss-adjusted basis, resi whole loans routinely generate attractive yields.


In addition to higher yields, whole loan investments offer investors other key advantages over securities. Notably:

Data Transparency

Although transparency into private label RMBS has improved dramatically since the 2008 crisis, nothing compares to the degree of loan-level detail afforded whole-loan investors. Loan investors typically have access to complete loan files and therefore complete loan-level datasets. This allows for running analytics based on virtually any borrower, property, or loan characteristic and contributes to a better risk management environment overall. The deeper analysis enabled by loan-level and property-specific information also permits investors to delve into ESG matters and better assess climate risk.

Daily Servicer Updates

Advancements in investor reporting are increasingly granting whole loan investors access to daily updates on their portfolio performance. Daily updating provides investors near real-time updates on prepayments and curtailments as well as details regarding problem loans that are seriously delinquent or in foreclosure and loss mitigation strategies. Eliminating the various “middlemen” between primary servicers and investors is one of the things that makes daily updates possible. Many of the additional costs of securitization outlined below—master servicers, trustees, various deal and data “agents,” etc.—have the added negative effect of adding layers between security investors and the underlying loans.

Lower Transaction Costs

Driven largely by a lack of trust in the system and lack of transparency into the underlying loan collateral, private-label securities investments incur a series of yield-eroding transaction costs that whole-loan investors can largely avoid. Consider the following transaction costs in a typical securitization:

  • Loan Data Agent costs: The concept of a loan data agent is unique to securitization. Data agents function essentially as middlemen responsible for validating the performance of other vendors (such as the Trustee). The fee for this service is avoided entirely by whole loan investors, which generally do not require an intermediary to get regularly updated loan-level data from servicers.
  • Securities Administrator/Custodian/Trustee costs: These roles present yet another layer of intermediary costs between the borrower/servicer and securities investors that are not incurred in whole loan investing.
  • Deal Agent costs: Deal agents are third party vendors typically charged with enhancing transparency in a mortgage security and ensuring that all parties’ interests are protected. The deal agent typically performs a surveillance role and charges investors ongoing annual fees plus additional fees for individual loan file reviews. These costs are not borne by whole loan investors.
  • Due diligence costs: While due diligence costs factor into loan and security investments alike, the additional layers of review required for agency ratings tend to drive these costs higher for securities. While individual file reviews are required for both types of investments, purchasing loans only from trusted originators allows investors to get comfortable with reviewing a smaller sample of new loans. This can push due diligence costs on loan portfolios to much lower levels when compared to securities.
  • Servicing costs: Mortgage servicing costs are largely unavoidable regardless of how the asset is held. Loan investors, however, tend to have more options at their disposal. Servicing fees for securities vary from transaction to transaction with little negotiating power by the security investors. Further, securities investors incur master servicing fees, a function that is generally not required for managing whole loan investments.

Emerging technology is streamlining the process of data cleansing, normalization and aggregation, greatly reducing the operational burden of these processes, particularly for whole loan investors, who can cut out many of these intermediary parties entirely.

Overcoming Operational Hurdles

Much of investor reluctance to delve into loans has historically stemmed from the operational challenges (real and perceived) associated with having to manage and make sense of the underlying mountain of loan, borrower, and property data tied to each individual loan. But forward-thinking asset managers are increasingly finding it possible to offload and outsource much of this burden to cloud-native solutions purpose built to store, manage, and provide analytics on loan-level mortgage data, such as RiskSpan’s Edge Platform. RiskSpan solutions make it easy to mine available loan portfolios for profitable sub-cohorts, spot risky loans for exclusion, apply a host of credit and prepay scenario analyses, and parse static and performance data in any way imaginable.

At an increasing number of institutions, demonstrating the power of analytical tools and the feasibility of applying them to the operational and risk management challenges at hand will clear many if not most of the hurdles standing in the way of obtaining asset class approval for mortgage loans. The barriers to access are coming down, and the future is brighter than ever for this fascinating, dynamic and profitable asset class.


RiskSpan a Winner of 2022 HousingWire’s Tech100 Mortgage Award

RiskSpan has been named to HousingWire’s Tech100 for a fourth consecutive year — recognition of the firm’s continuous commitment to advancing mortgage technology, data, and analytics.

Our cloud-native data and predictive modeling analytical platform uncovers insights and mitigates risks for loans and structured products.



HousingWire is the most influential source of news and information for the U.S. mortgage and housing markets. Built on a foundation of independent and original journalism, HousingWire reaches over 60,000 newsletter subscribers daily and over 1.0 million unique visitors each month. Our audience of mortgage, real estate and fintech professionals rely on us to Move Markets Forward. 


Mortgage Data and the Cloud – Now is the Time

As the trend toward cloud computing continues its march across an ever-expanding set of industries, it is worth pausing briefly to contemplate how it can benefit those of us who work with mortgage data for a living.  

The inherent flexibility, efficiency and scalability afforded by cloud-native systems driving this trend are clearly of value to users of financial services data. Mortgages in particular, each accompanied by a dizzying array of static and dynamic data about borrower incomes, employment, assets, property valuations, payment histories, and detailed loan terms, stand to reap the benefits of cloud and the shift to this new form of computing.  

And yet, many of my colleagues still catch themselves referring to mortgage data files as “tapes.” 

Migrating to cloud evokes some of the shiniest words in the world of computing – cost reduction, security, reliability, agility – and that undoubtedly creates a stir. Cloud’s ability to provide on-demand access to servers, storage locations, databases, software and applications via the internet, along with the promise to ‘only pay for what you use’ further contributes to its popularity. 

These benefits are especially well suited to mortgage data. They include:  

  • On-demand self-service and the ability to provision resources without human intervention – of particular use for mortgage portfolios that are constantly changing in both size and composition. 
  • Broad network access, with diverse platforms able to access shared resources over the network – valuable when origination, secondary marketing, structuring, servicing, and modeling tools all need to access the same evolving datasets simultaneously for different purposes. 
  • Multi-tenancy and resource pooling, allowing resource sharing while maintaining privacy and security. 
  • Rapid elasticity and scalability, quickly acquiring and releasing resources to allow fast but measured scaling based on demand. 

Cloud-native systems reduce ownership and operational expenses, increase speed and agility, facilitate innovation, improve client experience, and even enhance security controls. 

There is nothing quite like mortgage portfolios when it comes to massive quantities of financial data, often PII-laden, with high security requirements. The responsibility for protecting borrower privacy is the most frequently cited reason for financial institution reluctance when it comes to cloud adoption. But perhaps counterintuitively, migrating on-premises applications to cloud actually results in a more controlled environment as it provides for backup and access protocols that are not as easily implemented with on-premise solutions. 

The cloud affords a sophisticated and more efficient way of securing mortgage data. In addition to eliminating costs associated with running and maintaining data centers, the cloud enables easy and fast access to data and applications anywhere and at any time. As remote work takes hold as a more long-term norm, cloud-native platforms help ensure employees can work effectively regardless of their location. Furthermore, the scalability of cloud-native data centers allows holders of mortgage assets to expand storage capacity as the portfolio grows and reduce it when the portfolio contracts. The cloud protects mortgage data from security breaches or disaster events, because the loan files are (by definition) backed up in a secure, remote location and easily restored without having to invest in expensive data retrieval methods.  

This is not to say that migrating to the cloud is without its challenges. Entrusting sensitive data to a new third-party partner and relying on its tech to remain online will always carry some measure of risk. Cloud computing, like any other innovation, comes with its own advantages and disadvantages, but well-designed redundancies mitigate virtually all of these uncertainties. Ultimately, the upside of being able to work with mortgage data on cloud-native solutions far outweighs the drawbacks. The cloud makes it possible for processes to become more efficient in real time, without having to undergo expensive hardware enhancements. This in turn creates a more productive environment for data analysts and modelers seeking to give portfolio managers, servicers, securitizers, and others who routinely deal with mortgage assets the edge they are looking for.

Kriti Asrani is an associate data analyst at RiskSpan.


Want to read more on this topic? Check out COVID-19 and the Cloud.


Anomaly Detection and Quality Control

In our most recent workshop on Anomaly Detection and Quality Control (Part I), we discussed how clean market data is an integral part of producing accurate market risk results. As incorrect and inconsistent market data is so prevalent in the industry, it is not surprising that the U.S. spends over $3 trillion on processes to identify and correct market data.

Taking a step back, it is worth noting what drives accurate market risk analytics. Clearly, having accurate portfolio holdings with correct terms and conditions for over-the-counter trades is central to calculating consistent risk measures that are scaled to the market value of the portfolio. The use of well-tested and integrated industry-standard pricing models is another key factor in producing reliable analytics. But compared to these two categories, clean and consistent market data is the largest contributor to the quality of the analytics, and the most common cause of poor results when it is lacking. The key driving factor behind detecting and correcting/transforming market data is risk and portfolio managers’ expectation that risk results be accurate at the start of the business day, with no need to perform time-consuming re-runs during the day to correct issues found. 

Broadly defined, market data is any data used as an input to the re-valuation models. This includes equity prices, interest rates, credit spreads, FX rates, volatility surfaces, etc.

Market data needs to be:

  • Complete – no true gaps when looking back historically.
  • Accurate
  • Consistent – data must be viewed across other data points to determine its accuracy (e.g., interest rates across tenor buckets, volatilities across volatility surface)

Anomaly types can be broken down into four major categories:

  • Spikes
  • Stale data
  • Missing data
  • Inconsistencies

Here are three examples of “bad” market data:

Credit Spreads

The following chart depicts day-over-day changes in credit spreads for the 10-year consumer cyclical time series, returned from an external vendor. The changes indicate a significant spike on 12/3 that caused big swings, up and down, across multiple rating buckets. Without an adjustment to this data, key risk measures would show significant jumps, up and down, depending on the dollar value of positions on two consecutive days.


Swaption Volatilities

Market data also includes volatilities, which drive delta and possible hedging. The following chart shows implied swaption volatilities for different maturities of swaptions and their underlying swaps. Note the spikes in 7×10 and 10×10 swaptions. The chart also highlights inconsistencies between different tenors and maturities.


Equity Implied Volatilities

The 146 and 148 strikes in the table below reflect inconsistent vol data, as often occurs around expiration.


The detection of market data inconsistencies needs to be an automated process with multiple approaches targeted at specific types of market data. The detection models need to evolve over time as added information is gathered, with the goal of reducing false negatives to a manageable level. Once the models detect the anomalies, the next step is to automate the transformation of the market data (e.g., backfill, interpolate, use prior day value). Together with the transformation, transparency must be maintained by recording which values were changed or populated when not available. This record should be shared with clients, which could lead to alternative transformations or detection routines.
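
As an illustration of the “transform with transparency” step described above, here is a minimal pandas sketch. The function and column names are ours, chosen for illustration; they do not describe any particular production pipeline.

```python
import numpy as np
import pandas as pd

def transform_series(series, anomaly_dates, method="prior_day"):
    """Replace flagged points and keep a record of what changed (for transparency)."""
    cleaned = series.copy()
    cleaned.loc[anomaly_dates] = np.nan          # blank out the detected anomalies
    if method == "prior_day":
        cleaned = cleaned.ffill()                # carry the prior day's value forward
    elif method == "interpolate":
        cleaned = cleaned.interpolate()          # fill from neighboring observations
    audit_log = pd.DataFrame({
        "date": list(anomaly_dates),
        "original": series.loc[anomaly_dates].to_numpy(),
        "replaced_with": cleaned.loc[anomaly_dates].to_numpy(),
        "method": method,
    })
    return cleaned, audit_log
```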

Detector types typically fall into the following categories (a minimal code sketch follows the list):

  • Extreme Studentized Deviate (ESD): finds outliers in a single data series (helpful for extreme cases.)
  • Level Shift: detects change in level by comparing means of two sliding time windows (useful for local outliers.)
  • Local Outliers: detects spikes relative to neighboring values.
  • Seasonal Detector: detects seasonal patterns and anomalies (used for contract expirations and other events.)
  • Volatility Shift: detects shift of volatility by tracking changes in standard deviation.
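
The detectors above can be approximated with simple rolling statistics. The sketch below, assuming a pandas time series as input, shows the general idea behind three of them; the thresholds and window lengths are illustrative defaults, not calibrated values.

```python
import pandas as pd

def spike_flags(series, threshold=4.0):
    """Flag points whose day-over-day change is an extreme outlier (ESD-style idea)."""
    changes = series.diff()
    z = (changes - changes.mean()) / changes.std()
    return z.abs() > threshold

def level_shift_flags(series, window=20, threshold=3.0):
    """Compare means of two adjacent sliding windows to flag level shifts."""
    trailing_mean = series.rolling(window).mean()
    forward_mean = series.shift(-window).rolling(window).mean()
    gap = (forward_mean - trailing_mean).abs() / series.rolling(window).std()
    return gap > threshold

def volatility_shift_flags(series, window=20, ratio=2.0):
    """Flag dates where rolling standard deviation jumps relative to the prior window."""
    vol = series.rolling(window).std()
    return vol / vol.shift(window) > ratio
```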

On Wednesday, May 19th, we will present a follow-up workshop focusing on:

  • Coding examples
    • Application of outlier detection and pipelines
    • PCA
  • Specific loan use cases
    • Loan performance
    • Entity correction
  • Novelty Detection
    • Anomalies are not always “bad”
    • Market monitoring models

You can register for this complimentary workshop here.


Leveraging ML to Enhance the Model Calibration Process

Last month, we outlined an approach to continuous model monitoring and discussed how practitioners can leverage the results of that monitoring for advanced analytics and enhanced end-user reporting. In this post, we apply this idea to enhanced model calibration.

Continuous model monitoring is a key part of a modern model governance regime. But testing performance as part of the continuous monitoring process has value that extends beyond immediate governance needs. Using machine learning and other advanced analytics, testing results can also be further explored to gain a deeper understanding of model error lurking within sub-spaces of the population.

Below we describe how we leverage automated model back-testing results (using our machine learning platform, Edge Studio) to streamline the calibration process for our own residential mortgage prepayment model.

The Problem:

MBS prepayment models, RiskSpan’s included, often provide a number of tuning knobs to tweak model results. These knobs impact the various components of the S-curve function, including refi sensitivity, turnover lever, elbow shift, and burnout factor.

The knob tuning and calibration process is typically messy and iterative. It usually involves somewhat-subjectively selecting certain sub-populations to calibrate, running back-testing to see where and how the model is off, and then tweaking knobs and rerunning the back-test to see the impacts. The modeler may need to iterate through a series of different knob selections and groupings to figure out which combination best fits the data. This is manually intensive work and can take a lot of time.

As part of our continuous model monitoring process, we had already automated the process of generating back-test results and merging them with actual performance history. But we wanted to explore ways of taking this one step further to help automate the tuning process — rerunning the automated back-testing using all the various permutations of potential knobs, but without all the manual labor.

The solution applies machine learning techniques to run a series of back-tests on MBS pools and automatically solve for the set of tuners that best aligns model outputs with actual results.

We break the problem into two parts:

  1. Find Cohorts: Cluster pools into groups that exhibit similar key pool characteristics and model error (so they would need the same tuners).

TRAINING DATA: Back-testing results for our universe of pools with no model tuning knobs applied

  2. Solve for Tuners: Minimize back-testing error by optimizing knob settings.

TRAINING DATA: Back-testing results for our universe of pools under a variety of permutations of potential tuning knobs (Refi x Turnover)

  3. Tuning knobs validation: Take optimized tuning knobs for each cluster and rerun pools to confirm that the selected permutation in fact returns the lowest model errors.

Part 1: Find Cohorts

We define model error as the ratio of the average modeled SMM to the average actual SMM. We compute this using back-testing results and then use a hierarchical clustering algorithm to cluster the data based on model error across various key pool characteristics.

Hierarchical clustering is a general family of clustering algorithms that build nested clusters by either merging or splitting observations successively. The hierarchy of clusters is represented as a tree (or dendrogram). The root of the tree is the root cluster that contains all samples, while the leaves represent clusters with only one sample. [1]

Agglomerative clustering is an implementation of hierarchical clustering that takes the bottom-up (merging) approach. Each observation starts in its own cluster, and clusters are then successively merged together. There are multiple linkage criteria to choose from; we used the Ward criterion.

Ward linkage strategy minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach.[2]
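
A minimal scikit-learn sketch of this clustering step follows. The pool characteristics used as features (and their column names) are illustrative assumptions; the actual feature set is chosen by the modeler.

```python
import pandas as pd
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

def find_cohorts(pools: pd.DataFrame, n_clusters: int = 8) -> pd.DataFrame:
    """Cluster pools on model error plus key pool characteristics (Ward linkage)."""
    features = pools[["model_error", "wac", "wala", "avg_loan_size", "fico"]]
    X = StandardScaler().fit_transform(features)  # put features on a common scale
    model = AgglomerativeClustering(n_clusters=n_clusters, linkage="ward")
    pools = pools.copy()
    pools["cluster"] = model.fit_predict(X)       # Ward linkage minimizes within-cluster variance
    return pools
```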

Part 2: Solving for Tuners

Here our training data is expanded to include multiple back-test results for each pool under different permutations of tuning knobs.  

Process to Optimize the Tuners for Each Cluster

Training Data: Rerun the back-test with permutations of REFI and TURNOVER tunings, covering all reasonably possible combinations of tuners.

  1. These permutations of tuning results are fed to a multi-output regressor, which trains the machine learning model to understand the interaction between each tuning parameter and the model as a fitting step.
    • Model Error and Pool Features are used as Independent Variables
    • Gradient Tree Boosting/Gradient Boosted Decision Trees (GBDT)* methods are used to find the optimized tuning parameters for each cluster of pools derived from the clustering step
    • Two dependent variables — Refi Tuner and Turnover Tuner – are used
    • Separate models are estimated for each cluster
  2. We solve for the optimal tuning parameters by running the resulting model with a model error ratio of 1 (no error) and the weighted average cluster features (see the sketch following the notes below).

* Gradient Tree Boosting/Gradient Boosted Decision Trees (GBDT) is a machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. When a decision tree is the weak learner, the resulting algorithm is called gradient boosted trees, which usually outperforms random forest. It builds the model in a stage-wise fashion like other boosting methods do, and it generalizes them by allowing optimization of an arbitrary differentiable loss function. [3]

*We used scikit-learn’s GBDT implementation to optimize and solve for the best Refi and Turnover tuners. [4]
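
Putting the two steps above together, here is a minimal scikit-learn sketch of the fit-and-query approach for one cluster. The feature and tuner column names are illustrative assumptions, not the production schema.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

FEATURES = ["model_error", "wac", "wala", "avg_loan_size", "fico"]
TUNERS = ["refi_tuner", "turnover_tuner"]

def solve_tuners_for_cluster(cluster_df: pd.DataFrame, avg_features: dict):
    """Fit (model error, pool features) -> (refi tuner, turnover tuner) for one cluster,
    then query the fitted model at a model-error ratio of 1 (i.e., no error)."""
    model = MultiOutputRegressor(GradientBoostingRegressor())
    model.fit(cluster_df[FEATURES], cluster_df[TUNERS])
    query = pd.DataFrame([[1.0, avg_features["wac"], avg_features["wala"],
                           avg_features["avg_loan_size"], avg_features["fico"]]],
                         columns=FEATURES)
    refi_tuner, turnover_tuner = model.predict(query)[0]
    return refi_tuner, turnover_tuner
```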

Results

The resultant suggested knobs show promise in improving model fit over our back-test period. Below are the results for two of the clusters using the knobs suggested by the process. To further expand the results, we plan to cross-validate on out-of-time sample data as it comes in.

Conclusion

These advanced analytics show promise in their ability to help streamline the model calibration and tuning process by removing many of the time-consuming and subjective components from the process altogether. Once a process like this is established for one model, applying it to new populations and time periods becomes more straightforward. This analysis can be further extended in a number of ways. One in particular we’re excited about is the use of ensemble models—or a ‘model of models’ approach. We will continue to tinker with this approach as we calibrate our own models and keep you apprised on what we learn.


Too Many Documentation Types? A Data-Driven Approach to Consolidating Them

The sheer volume of different names assigned to various documentation types in the non-agency space has really gotten out of hand, especially in the last few years. As of February 2021, an active loan in the CoreLogic RMBS universe could have any of over 250 unique documentation type names, with little or no standardization from issuer to issuer. Even within a single issuer, things get complicated when every possible permutation of the same basic documentation level gets assigned its own type. One issuer in the database has 63 unique documentation names!

In order for investors to be able to understand and quantify their exposure, we need a way of consolidating and mapping all these different documentation types to a simpler, standard nomenclature. Various industry reports attempt to group all the different documentation levels into meaningful categories. But these classifications often fail to capture important distinctions in delinquency performance among different documentation levels.

There is a better way. Taking some of the consolidated group names from the various industry papers and rating agency papers as a starting point, we took another pass focusing on two main elements:

  • The delinquency performance of the group. We focused on the 60-DPD rate while also considering other drivers of loan performance (e.g., DTI, FICO, and LTV) and their correlation to the various doc type groups.
  • The size of the sub-segment. We ensured our resulting groupings were large enough to be meaningful.

What follows is how we thought about it and ultimately landed where we did. These mappings are not set in stone and will likely need to undergo revisions as 1) new documentation types are generated, and 2) additional performance data and feedback from clients on what they consider most important become available. Releasing these mappings into RiskSpan’s Edge Platform will then make it easier for users to track performance.

Data Used

We take a snapshot of all loans outstanding in non-agency RMBS issued after 2013, as of the February 2021 activity period. The data comes from CoreLogic and we exclude loans in seasoned or reperforming deals. We also exclude loans whose documentation type is not reported, some 14 percent of the population.

Approach

We are seeking to create sub-groups that generally conform to the high-level groups on which the industry seems to be converging while also identifying subdivisions with meaningfully different delinquency performance. We will rely on these designations as we re-estimate our credit model.

Steps in the process (a simplified code sketch of the resulting mapping follows the list):

  1. Start with high-level groupings based on how the documentation type is currently named.
    • Full Documentation: Any name referencing ‘Agency,’ ‘Agency AUS,’ or similar.
    • Bank Statements: Any name including the term “Bank Statement[s].”
    • Investor/DSCR: Any name indicating that the underwriting relied on net cash flows to the secured property.
    • Alternative Documentation: A wide-ranging group consolidating many different types, including: asset qualifier, SISA/SIVA/NINA, CPA letters, etc.
    • Other: Any name that does not easily classify into one of the groups above, such as Foreign National Income, and any indecipherable names.


  2. We subdivided the Alternative Documentation group by some of the meaningfully sized natural groupings of the names:
    • Asset Depletion or Asset Qualifier
    • CPA and P&L statements
    • Salaried/Wage Earner: Includes anything with W2 tax return
    • Tax Returns or 1099s: Includes anything with ‘1099’ or ‘Tax Return,’ but not ‘W2.’
    • Alt Doc: Anything that remained, including items like ‘SIVA,’ ‘SISA,’ ‘NINA,’ ‘Streamlined,’ ‘WVOE,’ and ‘Alt Doc.’
  3. From there we sought to identify any sub-groups that perform differently (as measured by 60-DPD%).
    • Bank Statement: We evaluated a subdivision by the number of statements provided (less than 12 months, 12 months, and greater than 12 months). However, these distinctions did not significantly impact delinquency performance. (Also, very few loans fell into the under 12 months group.) Distinguishing ‘Business Bank Statement’ loans from the general ‘Bank Statements’ category, however, did yield meaningful performance differences.


    • Alternative Documentation: This group required the most iteration. We initially focused our attention on documentation types that included terms like ‘streamlined’ or ‘fast.’ This, however, did not reveal any meaningful performance differences relative to other low doc loans. We also looked at this group by issuer, hypothesizing that some programs might perform better than others. The jury is still out on this analysis and we continue to track it. The following subdivisions yielded meaningful differences:
      • Limited Documentation: This group includes any names including the terms ‘reduced,’ ‘limited,’ ‘streamlined,’ and ‘alt doc.’ This group performed substantially better than the next group.
      • No Doc/Stated: Not surprisingly, these were the worst performers in the ‘Alt Doc’ universe. The types included here are a throwback to the run-up to the housing crisis. ‘NINA,’ ‘SISA,’ ‘No Doc,’ and ‘Stated’ all make a reappearance in this group.
      • Loans with some variation of ‘WVOE’ (written verification of employment) showed very strong performance, so much so that we created an entirely separate group for them.
    • Full Documentation: Within the variations of ‘Full Documentation’ was a whole sub-group with qualifying terms attached. Examples include ‘Full Doc 12 Months’ or ‘Full w/ Asset Assist.’ These full-doc-with-qualification loans were associated with higher delinquency rates. The sub-groupings reflect this reality:
      • Full Documentation: Most of the straightforward types indicating full documentation, including anything with ‘Agency/AUS.’
      • Full with Qualifications (‘Full w/ Qual’): Everything including the term ‘Full’ followed by some sort of qualifier.
    • Investor/DSCR: The sub-groups here either were not big enough or did not demonstrate sufficient performance difference.
    • Other: Even though it’s a small group, we broke out all the ‘Foreign National’ documentation types into a separate group to conform with other industry reporting.
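
To make the approach concrete, here is a simplified pandas sketch of a keyword-based mapping and of comparing group size and 60-DPD rates. The rules and column names are illustrative and far less granular than the actual mapping described above.

```python
import pandas as pd

# A simplified, ordered set of keyword rules paraphrasing the grouping logic above.
# The real mapping is more granular (e.g., 'Full w/ Qual', 'Limited Documentation').
DOC_TYPE_RULES = [
    ("business bank statement", "Business Bank Statements"),
    ("bank statement", "Bank Statements"),
    ("dscr", "Investor/DSCR"),
    ("wvoe", "WVOE"),
    ("foreign national", "Foreign National"),
    ("agency", "Full Documentation"),
    ("full", "Full Documentation"),
    ("no doc", "No Doc/Stated"),
    ("stated", "No Doc/Stated"),
]

def map_doc_type(raw_name):
    name = str(raw_name).lower()
    for keyword, group in DOC_TYPE_RULES:    # first matching rule wins
        if keyword in name:
            return group
    return "Other"

def doc_group_performance(loans):
    """Assign each loan a consolidated group, then compare size and 60-DPD rate."""
    loans = loans.assign(doc_group=loans["doc_type"].map(map_doc_type))
    return loans.groupby("doc_group").agg(
        loan_count=("doc_type", "size"),
        dpd60_rate=("is_60dpd", "mean"),     # assumes a 0/1 delinquency flag column
    )
```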


Among the challenges of this sort of analysis is that the combinations to explore are virtually limitless. Perhaps not surprisingly, most of the potential groupings we considered did not make it into our final mapping. Some of the cuts we are still looking at include loan purpose with respect to some of the alternative documentation types.

We continue to evaluate these and other options. We can all agree that 250 documentation types is way too many. But in order to be meaningful, the process of consolidation cannot be haphazard. Fortunately, the tools for turning sub-grouping into a truly data-driven process are available. We just need to use them.


RiskSpan a Winner of HousingWire’s Tech100 Award

For the third consecutive year, RiskSpan is a winner of HousingWire’s prestigious annual HW Tech100 Mortgage award, recognizing the most innovative technology companies in the housing economy.

The recognition is the latest in a parade of 2021 wins for the data and analytics firm whose unique blend of tech and talent enables traders and portfolio managers to transact quickly and intelligently to find opportunities. RiskSpan’s comprehensive solution also provides risk managers access to modeling capabilities and seamless access to the timely data they need to do their jobs effectively.

“I’ve been involved in choosing Tech100 winners since we started the program in 2014, and every year it manages to get more competitive,” HousingWire Editor in Chief Sarah Wheeler said. “These companies are truly leading the way to a more innovative housing market!”

Other major awards collected by RiskSpan and its flagship Edge Platform in 2021 include winning Chartis Research’s “Risk as a Service” category and being named “Buy-side Market Risk Management Product of the Year” by Risk.net.

RiskSpan’s cloud-native Edge platform is valued by users seeking to run structured products analytics fast and granularly. It provides a one-stop shop for models and analytics that previously had to be purchased from multiple vendors. The platform is supported by a first-rate team, most of whom come from industry and have walked in the shoes of our clients.

“After the uncertainty and unpredictability of last year, we expected a greater adoption of technology. However, these 100 real estate and mortgage companies took digital disruption to a whole new level and propelled a complete digital revolution, leaving a digital legacy that will impact borrowers, clients and companies for years to come,” said Brena Nath, HousingWire’s HW+ Managing Editor. “Knowing what these companies were able to navigate and overcome, we’re excited to announce this year’s list of the most innovative technology companies serving the mortgage and real estate industries.”


Get in touch with us to explore why RiskSpan is a best-in-class partner for data and analytics in mortgage and structured finance. 

HousingWire is the most influential source of news and information for the U.S. mortgage and housing markets. Built on a foundation of independent and original journalism, HousingWire reaches over 60,000 newsletter subscribers daily and over 1.0 million unique visitors each month. Our audience of mortgage, real estate and fintech professionals rely on us to Move Markets Forward. Visit www.housingwire.com or www.solutions.housingwire.com to learn more


Is Free Public Data Worth the Cost?

No such thing as a free lunch.

The world is full of free (and semi-free) datasets ripe for the picking. If it’s not going to cost you anything, why not supercharge your data and achieve clarity where once there was only darkness?

But is it really not going to cost you anything? What is the total cost of ownership for a public dataset, and what does it take to distill truly valuable insights from publicly available data? Setting aside the reliability of the public source (a topic for another blog post), free data is anything but free. Let us discuss both the power and the cost of working with public data.

To illustrate the point, we borrow from a classic RiskSpan example: anticipating losses to a portfolio of mortgage loans due to a hurricane—a salient example as we are in the early days of the 2020 hurricane season (and the National Oceanic and Atmospheric Administration (NOAA) predicts a busy one). In this example, you own a portfolio of loans and would like to understand the possible impacts to that portfolio (in terms of delinquencies, defaults, and losses) of a recent hurricane. You know this will likely require an external data source because you do not work for NOAA, your firm is new to owning loans in coastal areas, and you currently have no internal data for loans impacted by hurricanes.

Know the Data.

The first step in using external data is understanding your own data. This may seem like a simple task. But data, its source, its lineage, and its nuanced meaning can be difficult to communicate inside an organization. Unless you work with a dataset regularly (i.e., often), you should approach your own data as if it were provided by an external source. The goal is a full understanding of the data, the data’s meaning, and the data’s limitations, all of which should have a direct impact on the types of analysis you attempt.

Understanding the structure of your data and the limitations it puts on your analysis involves questions like:

  • What objects does your data track?
  • Do you have time series records for these objects?
  • Do you only have the most recent record? The most recent 12 records?
  • Do you have one record that tries to capture life-to-date information?

Understanding the meaning of each attribute captured in your data involves questions like:

  • What attributes are we tracking?
  • Which attributes are updated (monthly or quarterly) and which remain static?
  • What are the nuances in our categorical variables? How exactly did we assign the zero-balance code?
  • Is original balance the loan’s balance at mortgage origination, or the balance when we purchased the loan/pool?
  • Do our loss numbers include forgone interest?

These same types of questions also apply to understanding external data sources, but the answers are not always as readily available. Depending on the quality and availability of the documentation for a public dataset, this exercise may be as simple as just reading the data dictionary, or as labor intensive as generating analytics for individual attributes, such as mean, standard deviation, mode, or even histograms, to attempt to derive an attribute’s meaning directly from the delivered data. This is the not-free part of “free” data, and skipping this step can have negative consequences for the quality of analysis you can perform later.

Returning to our example, we require at least two external data sets:  

  1. where and when hurricanes have struck, and
  2. loan performance data for mortgages active in those areas at those times.

The obvious choice for loan performance data is the historical performance datasets from the GSEs (Fannie Mae and Freddie Mac). Providing monthly performance information and loss information for defaulted loans for a huge sample of mortgage loans over a 20-year period, these two datasets are perfect for our analysis. For hurricanes, some manual effort is required to extract date, severity, and location from NOAA maps like these (you could get really fancy and gather zip codes covered in the landfall area—which, by leaving out homes hundreds of miles away from expected landfall, would likely give you a much better view of what happens to loans actually impacted by a hurricane—but we will stick to state-level in this simple example).

Make new data your own.

So you’ve downloaded the historical datasets, you’ve read the data dictionaries cover-to-cover, you’ve studied historical NOAA maps, and you’ve interrogated your own data teams for the meaning of internal loan data. Now what? This is yet another cost of “free” data: after all your effort to understand and ingest the new data, all you have is another dataset. A clean, well-understood, well-documented (you’ve thoroughly documented it, haven’t you?) dataset, but a dataset nonetheless. Getting the insights you seek requires a separate effort to merge the old with the new. Let us look at a simplified flow for our hurricane example (sketched in code after the list):

  • Subset the GSE data for active loans in hurricane-related states in the month prior to landfall. Extract information for these loans for 12 months after landfall.
  • Bucket the historical loans by the characteristics you use to bucket your own loans (LTV, FICO, delinquency status before landfall, etc.).
  • Derive delinquency and loss information for the buckets for the 12 months after the hurricane.
  • Apply the observed delinquency and loss information to your loan portfolio (bucketed using the same scheme you used for the historical loans).
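
A compact pandas sketch of this flow, under several simplifying assumptions: the GSE performance data has already been loaded into a DataFrame with an integer month index, buckets are defined by FICO bands only (rather than FICO, LTV, and delinquency status), and all column names are illustrative.

```python
import pandas as pd

def hurricane_dq_expectation(gse_perf, portfolio, hit_states, landfall):
    """Derive post-hurricane delinquency rates from GSE history and apply them to a
    current portfolio bucketed the same way. Column names, the 'is_dq' flag, and the
    integer month index ('period') are illustrative assumptions."""
    # 1. Loans active in the affected states in the month before landfall.
    pre = gse_perf[(gse_perf["state"].isin(hit_states))
                   & (gse_perf["period"] == landfall - 1)
                   & (gse_perf["current_upb"] > 0)].copy()
    # 2. Bucket by the same scheme used internally (here: 20-point FICO bands).
    pre["bucket"] = pre["fico"].floordiv(20) * 20
    # 3. For each of those loans, was it ever delinquent in the 12 months after landfall?
    post = gse_perf[gse_perf["period"].between(landfall, landfall + 11)]
    ever_dq = post.groupby("loan_id")["is_dq"].max()
    pre["ever_dq"] = pre["loan_id"].map(ever_dq).fillna(0)
    dq_rates = pre.groupby("bucket")["ever_dq"].mean()
    # 4. Apply the observed bucket-level rates to the current portfolio.
    portfolio = portfolio.copy()
    portfolio["bucket"] = portfolio["fico"].floordiv(20) * 20
    return portfolio["bucket"].map(dq_rates)
```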

And there you have it—not a model, but a grounded expectation of loan performance following a hurricane. You have stepped out of the darkness and into the data-driven light. And all using free (or “free”) data!

Hyperbole aside, nothing about our example analysis is easy, but it plainly illustrates the power and cost of publicly available data. The power is obvious in our example: without the external data, we have no basis for generating an expectation of losses after a hurricane. While we should be wary of the impacts of factors not captured by our datasets (like the amount and effectiveness of government intervention after each storm – which does vary widely), the historical precedent we find by averaging many storms can form the basis for a robust and defensible expectation. Even if your firm has had experience with loans in hurricane-impacted areas, expanding the sample size through this exercise bolsters confidence in the outcomes. Generally speaking, the use of public data can provide grounded expectations where there had been only anecdotes.

But this power does come at a price—a price that should be appreciated and factored into the decision whether to use external data in the first place. What is worse than not knowing what to expect after a hurricane? Having an expectation based on bad or misunderstood data. Failing to account for the effort required to ingest and use free data can lead to bad analysis and the temptation to cut corners. The effort required in our example is significant: the GSE data is huge, complicated, and will melt your laptop’s RAM if you are not careful. Turning NOAA PDF maps into usable data is not a trivial task, especially if you want to go deeper than the state level. Understanding your own data can be a challenge. Applying an appropriate bucketing to the loans can make or break the analysis. Not all public datasets present these same challenges, but all public datasets present costs. There simply is no such thing as a free lunch. The returns on free data frequently justify these costs. But they should be understood before unwittingly incurring them.


Webinar: Data Analytics and Modeling in the Cloud – June 24th

On Wednesday, June 24th, at 1:00 PM EDT, join Suhrud Dagli, RiskSpan’s co-founder and chief innovator, and Gary Maier, managing principal of Fintova, for a free RiskSpan webinar.

Suhrud and Gary will contrast the pros and cons of analytic solutions native to leading cloud platforms and share tips for ensuring data security and managing costs.

Click here to register for the webinar.

