Articles Tagged with: Credit Analytics

How Rithm Capital leverages RiskSpan’s expertise and Edge Platform to enhance data management and achieve economies of scale

 

BACKGROUND

 

One of the nation’s largest mortgage loan and MSR investors was hampered by a complex data ingestion process as well as slow and cumbersome on-prem software for pricing and market risk.

A complicated data wrangling process was taking up significant time and led to delays in data processing. Further, month-end risk and financial reporting processes were manual and time-pressured. The data and risk teams were consumed with maintaining the day-to-day with little time available to address longer-term data strategies and enhance risk and modeling processes.

 

OBJECTIVES

  1. Modernize Rithm’s mortgage loan and MSR data intake from servicers — improve overall quality of data through automated processes and development of a data QC framework that would bring more confidence in the data and associated use cases, such as for calculating historical performance.

  2. Streamline portfolio valuation and risk analytics while enhancing granularity and flexibility through loan-level valuation/risk.

  3. Ensure data availability for accounting, finance and other downstream processes.

  4. Bring scalability and internal consistency to all of the processes above.

THE SOLUTION



THE EDGE WE PROVIDED

By adopting RiskSpan’s cloud-native data management, managed risk, and SaaS solutions, Rithm Capital saved time and money by streamlining its processes.

Adopting Edge has enabled Rithm to access enhanced and timely data for better performance tracking and risk management by:

  • Managing data on 5.5 million loans, including source information and monthly updates from loan servicers (with ability in the future to move to daily updates)
  • Ingesting, validating and normalizing all data for consistency across servicers and assets
  • Implementing automated data QC processes
  • Performing granular, loan-level analysis

 


With more than 5 million mortgage loans spread across nine servicers, Rithm needed a way to consume data from different sources whose file formats varied from one another and also often lacked internal consistency. Data mapping and QC rules constantly had to be modified to keep up with evolving file formats. 

Once the data was onboarded, Rithm required an extraordinary amount of compute power to run stochastic paths of Monte Carlo rate simulations on each of those loans individually and then discount the resulting cash flows based on option-adjusted yield across multiple scenarios.

To help minimize the computing workload, Rithm had been running all these daily analytics at a rep-line level—stratifying and condensing everything down to between 70,000 and 75,000 rep lines. This alleviated the computing burden but at the cost of decreased accuracy and limited reporting flexibility because results were not at the loan-level.

Enter RiskSpan’s Edge Platform.

Combining the strength of RiskSpan’s subject matter experts, quantitative analysts, and technologists with the power of the Edge Platform, RiskSpan has helped Rithm achieve its objectives across the following areas:

Data management and performance reporting

  • Data intake and quality control for 9 servicers across loan and MSR portfolios
  • Servicer data enrichment
  • Automated data loads leading to reduced processing time for rolling tapes
  • Ongoing data management support and resolution
  • Historical performance review and analysis (portfolio and universe)

Valuation and risk

  • Daily reporting of MSR, mortgage loan and security valuation and risk analytics based on customized Tableau reports
  • MSR and whole loan valuation/risk calculated at the loan level, leveraging the scalability of the cloud-native infrastructure
  • Additional scenario analysis and other requirements needed for official accounting and valuation purposes

Interactive tools for portfolio management

  • Fast and accurate tape cracking for purchase/sale decision support
  • Ad-hoc scenario analyses based on customized dials and user settings

The implementation of these enhanced data and analytics processes, and the increased ability to scale them, has allowed Rithm to spend less time on day-to-day data wrangling and more time on higher-level data analysis and portfolio management. Data quality has also improved, which has led to more confidence in the data used across many parts of the organization.


LET US BUILD YOUR SOLUTION

Models + Data management = End-to-end Managed Process

The economies of scale we have achieved by being able to consolidate all of our portfolio risk, interactive analytics, and data warehousing onto a single platform are substantial. RiskSpan’s experience with servicer data and MSR analytics has been particularly valuable to us.

          — Head of Analytics


Optimizing Analytics Computational Processing 

We met with RiskSpan’s Head of Engineering and Development, Praveen Vairavan, to understand how his team set about optimizing analytics computational processing for a portfolio of 4 million mortgage loans using a cloud-based compute farm.

This interview dives deeper into a case study we discussed in a recent interview with RiskSpan’s co-founder, Suhrud Dagli.

Here is what we learned from Praveen. 



Could you begin by summarizing for us the technical challenge this optimization was seeking to overcome? 

PV: The main challenge related to an investor’s MSR portfolio, specifically the volume of loans we were trying to run. The client has close to 4 million loans spread across nine different servicers. This presented two related but separate sets of challenges. 

The first set of challenges stemmed from needing to consume data from different servicers whose file formats not only differed from one another but also often lacked internal consistency. By that, I mean even the file formats from a single given servicer tended to change from time to time. This required us to continuously update our data mapping and (because the servicer reporting data is not always clean) modify our QC rules to keep up with evolving file formats.  

The second challenge relates to the sheer volume of compute power necessary to run stochastic paths of Monte Carlo rate simulations on 4 million individual loans and then discount the resulting cash flows based on option adjusted yield across multiple scenarios. 

And so you have 4 million loans times multiple paths times one basic cash flow, one basic option-adjusted case, one up case, and one down case, and you can see how quickly the workload adds up. And all this needed to happen on a daily basis. 
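For illustration, here is a back-of-the-envelope sizing of that daily workload in Python, using the 50 paths and 4 scenarios cited later in the interview (the figures are the interview’s; the arithmetic sketch is ours):

```python
# Back-of-the-envelope sizing of the daily analytics workload.
# Assumes 4 million loans, 50 Monte Carlo rate paths, and 4 scenarios
# (base cash flow, option-adjusted case, up case, down case), per the interview.
loans = 4_000_000
paths = 50
scenarios = 4

cash_flow_runs = loans * paths * scenarios
print(f"Loan-path-scenario projections per day: {cash_flow_runs:,}")  # 800,000,000
```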

To help minimize the computing workload, our client had been running all these daily analytics at a rep-line level—stratifying and condensing everything down to between 70,000 and 75,000 rep lines. This alleviated the computing burden but at the cost of decreased accuracy because they couldn’t look at the loans individually. 

What technology enabled you to optimize the computational process of running 50 paths and 4 scenarios for 4 million individual loans?

PV: With the cloud, you have the advantage of spawning a bunch of servers on the fly (just long enough to run all the necessary analytics) and then shutting them down once the analytics are done. 

This sounds simple enough. But in order to use that level of compute, we needed to figure out how to distribute the 4 million loans across all these different servers so they could run in parallel (and then get the results back so we could aggregate them). We did this using what is known as a MapReduce approach. 

Say we want to run a particular cohort of this dataset with 50,000 loans in it. If we were using a single server, it would run them one after the other – generate all the cash flows for loan 1, then for loan 2, and so on. As you would expect, that is very time-consuming. So, we decided to break down the loans into smaller chunks. We experimented with various chunk sizes. We started with 1,000 – we ran 50 chunks of 1,000 loans each in parallel across the AWS cloud and then aggregated all those results.  

That was an improvement, but the 50 parallel jobs were still taking longer than we wanted. And so, we experimented further before ultimately determining that the “sweet spot” was something closer to 5,000 parallel jobs of 100 loans each. 

Only in the cloud is it practical to run 5,000 servers in parallel. But this of course raises the question: Why not just go all the way and run 50,000 parallel jobs of one loan each? Well, as it happens, running an excessively large number of jobs carries overhead burdens of its own. And we found that the extra time needed to manage that many jobs more than offset the compute time savings. And so, using a fair bit of trial and error, we determined that 100-loan jobs maximized the runtime savings without creating an overly burdensome number of jobs running in parallel.  
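A minimal sketch of this MapReduce-style distribution, using Python’s standard library in place of the AWS orchestration described above. The 100-loan chunk size comes from the interview; the cash-flow function is a placeholder, not RiskSpan’s engine:

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import islice

CHUNK_SIZE = 100  # the "sweet spot" job size described above

def project_cash_flows(loan):
    """Placeholder for the analytics engine's per-loan cash flow projection."""
    return {"loan_id": loan, "pv": 0.0}

def chunked(iterable, size):
    """Yield successive fixed-size chunks of loans (the 'map' inputs)."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

def run_chunk(loan_chunk):
    """Map step: run cash flows for one chunk of loans on one worker."""
    return [project_cash_flows(loan) for loan in loan_chunk]

def price_portfolio(loans):
    """Distribute chunks across workers, then reduce (aggregate) the results."""
    with ProcessPoolExecutor() as pool:
        chunk_results = pool.map(run_chunk, chunked(loans, CHUNK_SIZE))
    return [result for chunk in chunk_results for result in chunk]

if __name__ == "__main__":
    results = price_portfolio(range(50_000))  # the 50,000-loan cohort example above
    print(len(results))
```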


You mentioned the challenge of having to manage a large number of parallel processes. What tools do you employ to work around these and other bottlenecks? 

PV: The most significant bottleneck associated with this process is finding the “sweet spot” number of parallel processes I mentioned above. As I said, we could theoretically break it down into 4 million single-loan processes all running in parallel. But managing this amount of distributed computation, even in the cloud, invariably creates a degree of overhead which ultimately degrades performance. 

And so how do we find that sweet spot – how do we optimize the number of servers on the distributed computation engine? 

As I alluded to earlier, the process involved an element of trial and error. But we also developed some home-grown tools (and leveraged some tools available in AWS) to help us. These tools enable us to visualize computation server performance – how much of a load they can take, how much memory they use, etc. These helped eliminate some of the optimization guesswork.   

Is this optimization primarily hardware based?

PV: AWS provides essentially two “flavors” of machines. One “flavor” enables you to take in a lot of memory. This enables you to keep a whole lot of loans in memory so it will be faster to run. The other flavor of hardware is more processor based (compute intensive). These machines provide a lot of CPU power so that you can run a lot of processes in parallel on a single machine and still get the required performance. 

We have done a lot of R&D on this hardware. We experimented with many different instance types to determine which works best for us and optimizes our output: lots of memory but fewer cores vs. CPU-intensive machines with less (but still a reasonable amount of) memory. 

We ultimately landed on a machine with 96 cores and about 240 GB of memory. This was the balance that enabled us to run portfolios at speeds consistent with our SLAs. For us, this translated to a server farm of 50 machines running 70 processes each, which works out to 3,500 workers helping us to process the entire 4-million-loan portfolio (across 50 Monte Carlo simulation paths and 4 different scenarios) within the established SLA.  
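The sizing arithmetic behind that configuration, assuming the 100-loan job size discussed earlier, works out roughly as follows:

```python
# Rough sizing of the server farm described above.
machines = 50
processes_per_machine = 70
workers = machines * processes_per_machine   # 3,500 workers
loans = 4_000_000
loans_per_job = 100                          # the "sweet spot" chunk size
jobs = loans // loans_per_job                # 40,000 jobs
jobs_per_worker = jobs / workers             # ~11.4 jobs queued per worker
print(workers, jobs, round(jobs_per_worker, 1))
```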

What software-based optimization made this possible? 

PV: Even optimized in the cloud, hardware can get pricey – on the order of $4.50 per hour in this example. And so, we supplemented our hardware optimization with some software-based optimization as well. 

We were able to optimize our software to a point where we could use a machine with just 30 cores (rather than 96) and 64 GB of RAM (rather than 240). Using 80 of these machines running 40 processes each gives us 2,400 workers (rather than 3,500). Software optimization enabled us to run the same number of loans in roughly the same amount of time (slightly faster, actually) but using fewer hardware resources. And our cost to use these machines was just one-third what we were paying for the more resource-intensive hardware. 

All this, and our compute time actually declined by 10 percent.  

The software optimization that made this possible has two parts: 

The first part (as we discussed earlier) is using the MapReduce methodology to break down jobs into optimally sized chunks. 

The second part involved optimizing how we read loan-level information into the analytical engine. Reading in loan-level data (especially for 4 million loans) is a huge bottleneck. We got around this by implementing a “pre-processing” procedure. For each individual servicer, we created a set of optimized loan files that can be read and rendered “analytics ready” very quickly. This enables the loan-level data to be quickly consumed and immediately used for analytics without having to read all the loan tapes and convert them into a format that the analytics engine can understand. Because we have “pre-processed” all this loan information, it is immediately available in a format that the engine can easily digest and run analytics on.  
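A hedged sketch of the pre-processing idea: convert each servicer’s raw tape into a normalized, analytics-ready columnar file once, so the engine never has to re-parse raw formats at run time. The column mapping and the Parquet format are illustrative assumptions, not RiskSpan’s actual implementation:

```python
import pandas as pd

# Illustrative mapping from one servicer's raw column names to a common schema.
SERVICER_COLUMN_MAP = {
    "LOAN_NO": "loan_id",
    "CURR_UPB": "current_balance",
    "NOTE_RT": "note_rate",
    "DLQ_STATUS": "delinquency_status",
}

def preprocess_tape(raw_tape_path: str, output_path: str) -> None:
    """Normalize one servicer tape into an analytics-ready columnar file."""
    raw = pd.read_csv(raw_tape_path)
    normalized = raw.rename(columns=SERVICER_COLUMN_MAP)[list(SERVICER_COLUMN_MAP.values())]
    normalized["note_rate"] = normalized["note_rate"].astype(float)
    normalized.to_parquet(output_path, index=False)  # fast to read back at analytics time
```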

This software-based optimization is what ultimately enabled us to optimize our hardware usage (and save time and cost in the process).  

Contact us to learn more about how we can help you optimize your mortgage analytics computational processing.


Rethink Analytics Computational Processing – Solving Yesterday’s Problems with Today’s Technology and Access 

We sat down with RiskSpan’s co-founder and chief technology officer, Suhrud Dagli, to learn more about how one mortgage investor successfully overhauled its analytics computational processing. The investor migrated from a daily pricing and risk process that relied on tens of thousands of rep lines to one capable of evaluating each of the portfolio’s more than three-and-a-half million loans individually (and actually saved money in the process).  

Here is what we learned. 


Could you start by talking a little about this portfolio — what asset class and what kind of analytics the investor was running? 

SD: Our client was managing a large investment portfolio of mortgage servicing rights (MSR) assets, residential loans and securities.  

The investor runs a battery of sophisticated risk management analytics that rely on stochastic modeling. Option-adjusted spread, duration, convexity, and key rate durations are calculated based on more than 200 interest rate simulations. 
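For readers unfamiliar with how such stochastic risk measures are computed, the sketch below shows the general shape of the calculation: average discounted cash flows across simulated rate paths, then re-price under shifted rates to back out effective duration and convexity. The path construction, shift size, and fixed cash flows are illustrative assumptions (in practice, cash flows would be regenerated under each rate shift), not the investor’s actual models.

```python
import numpy as np

def pv_across_paths(cash_flows, short_rate_paths, shift=0.0):
    """Average present value across simulated rate paths.

    cash_flows: (n_paths, n_months) array of path-dependent cash flows
    short_rate_paths: (n_paths, n_months) array of monthly short rates
    shift: parallel shock (decimal) applied to every path
    """
    shifted = short_rate_paths + shift
    discount = np.exp(-np.cumsum(shifted / 12.0, axis=1))  # monthly discounting
    return float(np.mean(np.sum(cash_flows * discount, axis=1)))

def effective_duration_convexity(cash_flows, paths, dy=0.0025):
    """Effective duration and convexity from up/down re-pricing."""
    p0 = pv_across_paths(cash_flows, paths)
    p_up = pv_across_paths(cash_flows, paths, +dy)
    p_down = pv_across_paths(cash_flows, paths, -dy)
    duration = (p_down - p_up) / (2.0 * p0 * dy)
    convexity = (p_down + p_up - 2.0 * p0) / (p0 * dy ** 2)
    return duration, convexity

# Toy example: 200 simulated paths, 360 monthly cash flows of a level-pay asset.
rng = np.random.default_rng(0)
paths = 0.03 + 0.01 * rng.standard_normal((200, 360)).cumsum(axis=1) / 60
cfs = np.full((200, 360), 100.0)
print(effective_duration_convexity(cfs, paths, dy=0.0025))
```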


Why was the investor running their analytics computational processing using a rep line approach? 

SD: They used rep lines for one main reason: They needed a way to manage computational loads on the server and improve calculation speeds. Secondarily, organizing the loans in this way simplified their reporting and accounting requirements to a degree (loans financed by the same facility were grouped into the same rep line).  

This approach had some downsides. Pooling loans by finance facility was sometimes causing loans with different balances, LTVs, credit scores, etc., to get grouped into the same rep line. This resulted in prepayment and default assumptions getting applied to every loan in a rep line that differed from the assumptions that likely would have been applied if the loans were being evaluated individually.  

The most obvious solution to this would seem to be one that disassembles the finance facility groups into their individual loans, runs all those analytics at the loan level, and then re-aggregates the results into the original rep lines. Is this sort of analytics computational processing possible without taking all day and blowing up the server? 

SD: That is effectively what we are doing. The process is not as speedy as we’d like it to be (and we are working on that). But we have worked out a solution that does not overly tax computational resources.  

The analytics computational processing we are implementing ignores the rep line concept entirely and just runs the loans. The scalability of our cloud-native infrastructure enables us to take the three-and-a-half million loans and bucket them equally for computation purposes. We run a hundred loans on each processor and get back loan-level cash flows and then generate the output separately, which brings the processing time down considerably. 
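A minimal illustration of that last step: loan-level results coming back from the distributed run can be rolled up into whatever reporting groups (for example, the original rep lines or finance facilities) downstream processes require. The column names here are placeholders:

```python
import pandas as pd

# Loan-level results returned from the distributed run (illustrative columns).
loan_results = pd.DataFrame({
    "loan_id": [1, 2, 3, 4],
    "finance_facility": ["A", "A", "B", "B"],
    "market_value": [210_000.0, 195_000.0, 330_000.0, 280_000.0],
    "dv01": [310.0, 295.0, 480.0, 410.0],
})

# Reduce step: roll loan-level values up to the reporting group.
facility_report = loan_results.groupby("finance_facility", as_index=False).agg(
    loans=("loan_id", "count"),
    market_value=("market_value", "sum"),
    dv01=("dv01", "sum"),
)
print(facility_report)
```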


So we have a proof of concept that this approach to analytics computational processing works in practice for running pricing and risk on MSR portfolios. Is it applicable to any other asset classes?

SD: The underlying principles that make analytics computational processing possible at the loan level for MSR portfolios apply equally well to whole loan investors and MBS investors. In fact, the investor in this example has a large whole-loan portfolio alongside its MSR portfolio. And it is successfully applying these same tactics on that portfolio.   

An investor in any mortgage asset benefits from the ability to look at and evaluate loan characteristics individually. The results may need to be rolled up and grouped for reporting purposes. But being able to run the cash flows at the loan level ultimately makes the aggregated results vastly more meaningful and reliable. 

A loan-level framework also affords whole-loan and securities investors the ability to be sure they are capturing the most important loan characteristics and are staying on top of how the composition of the portfolio evolves with each day’s payoffs. 

ESG factors are an important consideration for a growing number of investors. Only a loan-level approach makes it possible for these investors to conduct the kind of property- and borrower-level analyses needed to know whether they are working toward meeting their ESG goals. It also makes it easier to spot areas of geographic concentration risk, which simplifies climate risk management to some degree.  

Say I am a mortgage investor who is interested in moving to loan-level pricing and risk analytics. How do I begin? 

SD: Three things: 

  1. It begins with having the data. Most investors have access to loan-level data. But it’s not always clean. This is especially true of origination data. If you’re acquiring a pool – be it a seasoned pool or a pool right after origination – you don’t have the best origination data to drive your model. You also need a data store that can generate loan-level output to drive your analytics and models.
  2. The second factor is having models that work at the loan level – models that have been calibrated using loan-level performance and that are capable of generating loan-level output. One of the constraints of several existing modeling frameworks developed by vendors is they were created to run at a rep line level and don’t necessarily work very well for loan-level projections.  
  3. The third thing you need is a compute farm. It is virtually impossible to run loan-level analytics if you’re not on the cloud because you need to distribute the computational load. And your computational distribution requirements will change from portfolio to portfolio based on the type of analytics that you are running, based on the types of scenarios that you are running, and based on the models you are using. 

The cloud is needed not just for CPU power but also for storage. This is because once you go to the loan level, every loan’s data must be made available to every processor that’s performing the calculation. This is where having the kind of shared databases, which are native to a cloud infrastructure, becomes vital. You simply can’t replicate it using an on-premises setup of computers in your office or in your own data center. 

So, 1) get your data squared away, 2) make sure you’re using models that are optimized for loan-level analytics, and 3) max out your analytics computational processing power by migrating to cloud-native infrastructure.

Thank you, Suhrud, for taking the time to speak with us.


“Reject Inference” Methods in Credit Modeling: What are the Challenges?

Reject inference is a popular concept that has been used in credit modeling for decades. Yet, we observe in our work validating credit models that the concept is still dynamically evolving. The appeal of reject inference, whose aim is to develop a credit scoring model utilizing all available data, including that of rejected applicants, is easy enough to grasp. But the technique also introduces a number of fairly vexing challenges.

The technique seeks to rectify a fundamental shortcoming in traditional credit modeling: models predicting the probability that a loan applicant will repay the loan can be trained on historical loan application data with a binary variable representing whether a loan was repaid or charged off. This information, however, is only available for accepted applications. And many of these applications are not particularly recent. This limitation results in a training dataset that may not be representative of the broader loan application universe.
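To make that setup concrete, here is a minimal sketch of the baseline approach: a scoring model fit only to accepted applications with an observed repaid/charged-off outcome. The features, synthetic data, and logistic specification are illustrative, not a production scorecard:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Illustrative accepted-application data with an observed outcome.
rng = np.random.default_rng(1)
accepted = pd.DataFrame({
    "fico": rng.normal(720, 40, 5_000),
    "dti": rng.normal(32, 8, 5_000),
})
# Observed outcome: 1 = charged off, 0 = repaid (synthetic for illustration).
logit = -3.5 - 0.01 * (accepted["fico"] - 700) + 0.05 * accepted["dti"]
accepted["charged_off"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Baseline scorecard trained on accepted applications only.
model = LogisticRegression(max_iter=1_000)
model.fit(accepted[["fico", "dti"]], accepted["charged_off"])
```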

Credit modelers have devised several techniques for getting around this data representativeness problem and increasing the number of observations by inferring the repayment status of rejected loan applications. These techniques, while well intentioned, are often treated empirically and lack a deeper theoretical basis. They often result in “hidden” modeling assumptions, the reasonableness of which is not fully investigated. Additionally, no theoretical properties of the coefficient estimates or predictions are guaranteed.

This article summarizes the main challenges of reject inference that we have encountered in our model validation practice.


Selecting the Right Reject Inference Method

Many approaches exist for reject inference, none of which is clearly and universally superior to all the others. Empirical studies have been conducted to compare methods and pick a winner, but the conclusions of these studies are often contradictory. Some authors argue that reject inference cannot improve scorecard models[1] and flatly recommend against its use. Others posit that certain techniques can outperform others[2] based on empirical experiments. The results of these experiments, however, tend to be data dependent. Some of the most popular approaches include the following:

  • Ignoring rejected applications: The simplest approach is to develop a credit scoring model based only on accepted applications. The underlying assumption is that rejected applications can be ignored and that the “missingness” of this data from the training dataset can be classified as missing at random. Supporters of this method point to the simplicity of the implementation, clear assumptions, and good empirical results. Others argue that the rejected applications cannot be dismissed simply as random missing data and thus should not be ignored.
  • Hard cut-off method: In this method, a model is first trained using only accepted application data. This trained model is then used to predict the probabilities of charge-off for the rejected applications. A cut-off value is then chosen. Hypothetical loans from rejected applications with probabilities higher than this cut-off value are considered charged off. Hypothetical loans from the remaining applications are assumed to be repaid. The specified model is then re-trained using a dataset including both accepted and rejected applications. (A sketch of this method appears after this list.)
  • Fuzzy augmentation: Similar to the hard cut-off method, fuzzy augmentation begins by training the model on accepted applications only. The resulting model with estimated coefficients is then used to predict charge-off probabilities for rejected applications. Data from rejected applications is then duplicated and a repaid or charged-off status is assigned to each. The specified model is then retrained on the augmented dataset—including accepted applications and the duplicated rejects. Each rejected application is weighted by either a) the predicted probability of charge-off if its assigned status is “charged-off,” or b) the predicted probability of it being repaid if its assigned status is “repaid.”
  • Parceling: The parceling method resembles the hard cut-off method. However, rather than classifying all rejects above a certain threshold as charged-off, this method classifies the repayment status in proportion to the expected “bad” rate (charge-off frequency) at that score. The predicted charge-off probabilities are partitioned into k intervals. Then, for each interval, an assumption is made about the bad rate, and loan applications in each interval are assigned a repayment status randomly according to the bad rate. Bad rates are assumed to be higher in the reject dataset than among the accepted loans. This method considers the missingness to be not at random (MNAR), which requires the modeler to supply additional information about the distribution of charge-offs among rejects.
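A minimal sketch of the hard cut-off method under stated assumptions: synthetic data, an arbitrary 20% cut-off, and an sklearn logistic model stand in for a production scorecard.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def synthetic_apps(n, mean_fico):
    """Synthetic applications; rejects skew toward weaker credit profiles."""
    return pd.DataFrame({
        "fico": rng.normal(mean_fico, 40, n),
        "dti": rng.normal(34, 8, n),
    })

accepted = synthetic_apps(5_000, 720)
rejected = synthetic_apps(2_000, 650)
# Observed outcomes exist only for accepted applications (1 = charged off).
p_true = 1 / (1 + np.exp(3.5 + 0.01 * (accepted["fico"] - 700) - 0.05 * accepted["dti"]))
accepted["charged_off"] = rng.binomial(1, p_true)

features = ["fico", "dti"]

# Step 1: train on accepted applications only.
base = LogisticRegression(max_iter=1_000).fit(accepted[features], accepted["charged_off"])

# Step 2: score rejects and apply a hard cut-off to assign inferred outcomes.
CUTOFF = 0.20  # illustrative threshold
rejected["charged_off"] = (base.predict_proba(rejected[features])[:, 1] > CUTOFF).astype(int)

# Step 3: retrain the same specification on the augmented dataset.
augmented = pd.concat([accepted, rejected], ignore_index=True)
final = LogisticRegression(max_iter=1_000).fit(augmented[features], augmented["charged_off"])
```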

Proportion of Accepted Applications to Rejects

An institution with a relatively high percentage of rejected applications will necessarily end up with an augmented training dataset whose quality is heavily dependent on the quality of the selected reject inference method and its implementation. One might argue it is best to limit the proportion of rejected applications to acceptances. The level at which such a cap is established should reflect the “confidence” in the method used. Estimating such a confidence level, however, is a highly subjective endeavor.

The Proportion of Bad Rates for Accepts and Rejects

It is reasonable to assume that the “bad rate,” i.e., proportion of charged-off loans to repaid loans, will be higher among rejected applications. Some modelers set a threshold based on their a priori belief that the bad rate among rejects is at least p-times the bad rate among acceptances. If the selected reject inference method produces a dataset with a bad rate that is perceived to be artificially low, actions are taken to increase the bad rate above some threshold. Identifying where to establish this threshold is notoriously difficult to justify.

Variable Selection

As outlined above, most approaches begin by estimating a preliminary model based on accepted applications only. This model is then used to infer how rejected loans would have performed. The preliminary model is then retrained on a dataset consisting both of actual data from accepted applications and of the inferred data from rejects. This means that the underlying variables themselves are selected based only on the actual loan performance data from accepted applications. The statistical significance of the selected variables might change, however, when moving to the complete dataset. Variable selection is sometimes redone using the complete data. This, however, can lead to overfitting.

Measuring Model Performance

From a model validator’s perspective, an ideal solution would involve creating a control group in which applications would not be scored and filtered and every application would be accepted. Then the discriminating power of a credit model could be assessed by comparing the charge-off rate of the control group with the charge-off rate of the loans accepted by the model. This approach of extending credit indiscriminately is impractical, however, as it would require the lender to engage in some degree of irresponsible lending.

Another approach is to create a test set. The dilemma here is whether to include only accepted applications. A test set that includes only accepted applications will not necessarily reflect the population for which the model will be used. Including rejected applications, however, obviously necessitates the use of reject inference. For all the reasons laid out above, this approach risks overstating the model’s performance due to the fact that a similar model (trained only on the accepted cases) was used for reject inference.

A third approach that avoids both of these problems involves using information criteria such as AIC and BIC. This, however, is useful only when comparing different models (for model or variable selection). The values of information criteria cannot be interpreted as an absolute measure of performance.

A final option is to consider utilizing several models in production (the main model and challenger models). Under this scenario, each application would be evaluated by a model selected at random. The models can then be compared retroactively by calculating their bad rates on accepted applications after the financed loans mature. Provided that the accept rates are similar, the model with the lowest bad rate is the best.
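A sketch of that champion/challenger comparison, assuming applications were randomly routed between two models at decision time and outcomes are now observed on the funded loans. All column names are illustrative:

```python
import pandas as pd

def compare_models(decisions: pd.DataFrame) -> pd.DataFrame:
    """Retrospective comparison of randomly assigned scoring models.

    decisions columns (illustrative):
      model    - which model scored the application ("champion" / "challenger")
      accepted - 1 if the application was approved and funded
      bad      - 1 if the funded loan subsequently charged off (missing if not funded)
    """
    funded = decisions[decisions["accepted"] == 1]
    return pd.DataFrame({
        "accept_rate": decisions.groupby("model")["accepted"].mean(),
        "bad_rate": funded.groupby("model")["bad"].mean(),
    })

# Example: with similar accept rates, the model with the lower bad rate wins.
sample = pd.DataFrame({
    "model": ["champion"] * 4 + ["challenger"] * 4,
    "accepted": [1, 1, 1, 0, 1, 1, 1, 0],
    "bad": [0, 1, 0, None, 0, 0, 0, None],
})
print(compare_models(sample))
```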

Conclusion

Reject inference remains an evolving field in credit modeling. Its ability to improve model performance is still the subject of intense debate. Current results suggest that while reject inference can improve model performance, its application can also lead to overfitting, thus worsening the ability to generalize. The lack of a strong theoretical basis for reject inference methods means that applications of reject inference need to rely on empirical results. Thus, if reject inference is used, key model stakeholders need to possess a deep understanding of the modeled population, have strong domain knowledge, emphasize conducting experiments to justify the applied modeling techniques, and, above all, adopt and follow a solid ongoing monitoring plan.

Doing this will result in a modeling methodology that is most likely to produce reliable outputs for the institutions while also satisfying MRM and validator requirements.


[1] https://www.sciencedirect.com/science/article/abs/pii/S0378426603002036

[2] https://economix.fr/pdf/dt/2016/WP_EcoX_2016-10.pdf


Rising Rates; Rising Temperatures: What Higher Interest Rates Portend for Mortgage Climate Risk — An interview with Janet Jozwik  

Janet Jozwik leads RiskSpan’s sustainability analytics (climate risk and ESG) team. She is also an expert in mortgage credit risk and a recognized industry thought leader on incorporating climate risk into credit modeling. We sat down with Janet to get her views on whether the current macroeconomic environment should impact how mortgage investors prioritize their climate risk mitigation strategies.


You contend that higher interest rates are exposing mortgage lenders and investors to increased climate risk. Why is that?

JJ: My concern is primarily around the impact of higher rates on credit risk overall, of which climate risk is merely a subset – a largely overlooked and underappreciated subset, to be sure, and one with potentially devastating consequences, but ultimately one of many. The simple reason is that, because interest rates are up, loans are going to remain on your books longer. The MBA’s recent announcement of refinance applications (and mortgage originations overall) hitting their lowest levels since 2000 is stark evidence of this.

And because these loans are going to be lasting longer, borrowers will have more opportunities to get into trouble (be it a loss of income or a natural disaster) and everybody should be taking credit risk more seriously. One of the biggest challenges posed by a high-rate environment is that borrowers don’t have a lot of the “outs” available to them that they do when they encounter stress during more favorable macroeconomic environments. They can no longer simply refi into a lower rate. Modification options become more complicated. They might have no option other than to sell the home – and even that isn’t going to be as easy as it was, say, a year ago. So, we’ve entered this phase where credit risk analytics, both at origination and life of loan, really need to be taken seriously. And credit risk includes climate risk.

So longer durations mean more exposure to credit risk – more time for borrowers to run into trouble and experience credit events. What does climate have to do with it? Doesn’t homeowners’ insurance mitigate most of this risk anyway?

JJ: Each additional month or year that a mortgage loan remains outstanding is another month or year that the underlying property is exposed to some form of natural disaster risk (hurricane, flood, wildfire, earthquake, etc.). When you look at a portfolio in aggregate – one whose weighted average life has suddenly ballooned from four years to, say, eight years – it is going to experience more events, more things happening to it. Credit risk is the risk of a borrower failing to make contractual payments. And having a home get blown down or flooded by a hurricane tends to have a dampening effect on timely payment of principal and interest.

As for insurance, yes, insurance mitigates portfolio exposure to catastrophic loss to some degree. But remember that not everyone has flood insurance, and many loans don’t require it. Hurricane-specific policies often come with very high deductibles and don’t always cover all the damage. Many properties lack wildfire insurance or the coverage may not be adequate. Insurance is important and valuable but should not be viewed as a panacea or a substitute for good credit-risk management or taking climate into account when making credit decisions.

But the disaster is going to hit when the disaster is going to hit, isn’t it? How should I be thinking about this if I am a lender who recaptures a considerable portion of my refis? Haven’t I just effectively replaced three shorter-lived assets with a single longer-lived one? Either way, my portfolio’s going to take a hit, right?

JJ: That is true as far as it goes. And if you’re in the steady state you are envisioning, one where you’re just churning through your portfolio, prepaying existing loans with refis that look exactly like the loans they’re replacing, then, yes, the risk will be similar, irrespective of expected duration.

But do not forget that each time a loan turns over, a lender is afforded an opportunity to reassess pricing (or even reassess the whole credit box). Every refi is an opportunity to take climate and other credit risks into account and price them in. But in a high-rate environment, you’re essentially stuck with your credit decisions for the long haul.

Do home prices play any role in this?

JJ: Near-zero interest rates fueled a run-up in home prices like nothing we’ve ever seen before. This arguably made disciplined credit-risk management less important because, worst case, all the new equity in a property served as a buffer against loss.

But at some level, we all had to know that these home prices were not universally sustainable. And now that interest rates are back up, existing home prices are suddenly starting to look a little iffy. Now, with cash-out refis off the table and virtually no one in the money for rate and term refis, weighted average lives have nowhere to go but up. This is great, of course, if your only exposure is prepayment risk. But credit risk is a different story.

And so, extremely low interest rates over an extended period played a significant role in unsustainably high home values. But the pandemic had a lot to do with it, as well. It’s well documented that the mass influx of home buyers into cities like Boise from larger, traditionally more expensive markets drove prices in those smaller cities to astronomical levels. Some of these markets (like Boise) have not only reached an equilibrium point but are starting to see property values decline. Lenders with excessive exposure to these traditionally smaller markets that experienced the sharpest home price increases during the pandemic will need to take a hard look at their credit models’ HPI assumptions (in addition to those properties’ climate risk exposure).

What actions should lenders and investors be considering today?

JJ: If you are looking for a silver lining in the fact that origination volumes have fallen off a cliff, it is that the slowdown has afforded the market an opportunity to catch its breath and reassess where it stands risk-wise. Resources that had been fully deployed in an effort simply to keep up with the volume can now be reallocated to taking a hard look at where the portfolio stands in terms of credit risk generally and climate risk in particular.

This includes assessing where the risks and concentrations are in mortgage portfolios and, first, making sure not to further exacerbate existing concentration risks by continuing to acquire new assets in overly exposed geographies. Investors may even be wise to go so far as to think about selling certain assets if they feel they have too much risk in problematic areas.

Above all, this is a time when lenders need to be taking a hard look at the fundamentals underpinning their underwriting standards. We are coming up on 15 years since the start of the “Great Recession” – the last time mortgage underwriting was really “tight.” For the past decade, the industry has had nothing but calm waters – rising home values and historically low interest rates. It’s been like tech stocks in the ‘90s. Lenders couldn’t help but make money.

I am concerned that this has allowed complacency to take hold. We’re in a new world now. One with shaky home prices and more realistic interest rates. The temptation will be to loosen underwriting standards in order to wring whatever volume might be available out of the economy. But in reality, lenders need to be doing precisely the opposite. Underwriting standards are going to have to tighten a bit in order to effectively manage the increased credit (and climate) risks inherent to longer-duration lending.

It’s okay for lenders and investors to be taking these new risks on. They just need to be doing it with their eyes wide open and they need to be pricing for it.


Surge in Cash-Out Refis Pushes VQI Sharply Higher

A sharp uptick in cash-out refinancing pushed RiskSpan’s Vintage Quality Index (VQI) to its highest level since the first quarter of 2019.

RiskSpan’s Vintage Quality Index computes and aggregates the percentage of Agency originations each month with one or more “risk factors” (low-FICO, high DTI, high LTV, cash-out refi, investment properties, etc.). Months with relatively few originations characterized by these risk factors are associated with lower VQI ratings. As the historical chart above shows, the index maxed out (i.e., had an unusually high number of loans with risk factors) leading up to the 2008 crisis.
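A simplified sketch of how such an index can be computed from loan-level origination data. The risk-factor thresholds and the summing of the component shares are one simplified reading of the description above, not RiskSpan’s production methodology:

```python
import pandas as pd

# Illustrative risk-factor definitions; thresholds are assumptions for this sketch.
RISK_FACTOR_FLAGS = {
    "low_fico": lambda df: df["fico"] < 660,
    "high_dti": lambda df: df["dti"] > 45,
    "high_ltv": lambda df: df["ltv"] > 95,
    "cash_out_refi": lambda df: df["purpose"] == "cash_out_refi",
    "investor": lambda df: df["occupancy"] == "investment",
}

def simplified_vqi(originations: pd.DataFrame) -> pd.DataFrame:
    """Per-month share (%) of originations carrying each risk factor, plus a summed index.

    Expects illustrative columns: orig_month, fico, dti, ltv, purpose, occupancy.
    """
    flags = pd.DataFrame(
        {name: rule(originations) for name, rule in RISK_FACTOR_FLAGS.items()}
    )
    monthly = flags.groupby(originations["orig_month"]).mean() * 100
    monthly["index"] = monthly.sum(axis=1)  # aggregate the component shares
    return monthly
```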

RiskSpan uses the index principally to fine-tune its in-house credit and prepayment models by accounting for shifts in loan composition by monthly cohort.

Rising Rates Mean More Cash-Out Refis (and more risk)

As the following charts plotting the individual VQI components illustrate, a spike in cash-out refinance activity (as a percentage of all originations) accounted for more of the rise in overall VQI than did any other risk factor.

This comes as little surprise given the rising rate environment that has come to define the first quarter of 2022, a trend that is likely to persist for the foreseeable future.

As we demonstrated in this recent post, the quickly vanishing number of borrowers who are in the money for a rate-and-term refinance means that the action will increasingly turn to so-called “serial cash-out refinancers” who repeatedly tap into their home equity even when doing so means refinancing into a mortgage with a higher rate. The VQI can be expected to push ever higher to the extent this trend continues.

An increase in the percentage of loans with high debt-to-income ratios (over 45) and low credit scores (under 660) also contributed to the rising VQI, as did continued upticks in loans on investment and multi-unit properties as well as mortgages with only one borrower.

Population assumptions:

  • Monthly data for Fannie Mae and Freddie Mac.
  • Loans originated more than three months prior to issuance are excluded because the index is meant to reflect current market conditions.
  • Loans likely to have been originated through the HARP program, as identified by LTV, MI coverage percentage, and loan purpose, are also excluded. These loans do not represent credit availability in the market as they likely would not have been originated today but for the existence of HARP.

Data assumptions:

  • Freddie Mac data goes back to 12/2005. Fannie Mae only back to 12/2014.
  • Certain fields for Freddie Mac data were missing prior to 6/2008.

GSE historical loan performance data released in support of GSE Risk Transfer activities was used to help back-fill data where it was missing.

An outline of our approach to data imputation can be found in our VQI Blog Post from October 28, 2015.

Data Source: Fannie Mae PoolTalk®-Loan Level Disclosure


Asset Managers Improving Yields With Resi Whole Loans

An unmistakable transformation is underway among asset managers and insurance companies with respect to whole loan investments. Whereas residential mortgage loan investing has historically been the exclusive province of commercial banks, a growing number of other institutional investors – notably life insurance companies and third-party asset managers – have shifted their attention toward this often-overlooked asset class.

Life companies and other asset managers with primarily long-term, risk-sensitive objectives are no strangers to residential mortgages. Their exposure, however, has traditionally been in the form of mortgage-backed securities, generally taking refuge in the highest-rated bonds. Investors accustomed to the AAA and AA tranches may understandably be leery of whole-loan credit exposure. Infrastructure investments necessary for managing a loan portfolio and the related credit-focused surveillance can also seem burdensome. But a new generation of tech is alleviating more of the burden than ever before and making this less familiar and sometimes misunderstood asset class increasingly accessible to a growing cadre of investors.

Maximizing Yield

Following a period of low interest rates, life companies and other investment managers are increasingly embracing residential whole-loan mortgages as they seek assets with higher returns relative to traditional fixed-income investments. As the chart below highlights, residential mortgage portfolios, on a loss-adjusted basis, consistently outperform other investments, such as corporate bonds, and look increasingly attractive relative to private-label residential mortgage-backed securities as well.

Nearly one-third of the $12 trillion in U.S. residential mortgage debt outstanding is currently held in the form of loans.

And while most whole loans continue to be held in commercial bank portfolios, a growing number of third-party asset managers have entered the fray as well, often on behalf of their life insurance company clients.

Investing in loans introduces a dimension of credit risk that investors do need to understand and manage through thoughtful surveillance practices. As the chart below (generated using RiskSpan’s Edge Platform) highlights, when evaluated on a loss-adjusted basis, resi whole loans routinely generate superior yields.
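One simple way to frame a loss-adjusted yield comparison, offered as an illustrative definition rather than the methodology behind the chart: net servicing costs and expected annual credit losses out of the gross coupon.

```python
def loss_adjusted_yield(gross_coupon, servicing_fee, annual_default_rate, loss_severity):
    """Illustrative loss-adjusted yield: coupon net of servicing and expected loss.

    All inputs are annualized decimals, e.g. 0.055 for 5.5%.
    """
    expected_annual_loss = annual_default_rate * loss_severity
    return gross_coupon - servicing_fee - expected_annual_loss

# Example with assumed (not sourced) inputs:
print(f"{loss_adjusted_yield(0.055, 0.0025, 0.005, 0.30):.4%}")  # ~5.10%
```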


In addition to higher yields, whole loan investments offer investors other key advantages over securities. Notably:

Data Transparency

Although transparency into private label RMBS has improved dramatically since the 2008 crisis, nothing compares to the degree of loan-level detail afforded whole-loan investors. Loan investors typically have access to complete loan files and therefore complete loan-level datasets. This allows for running analytics based on virtually any borrower, property, or loan characteristic and contributes to a better risk management environment overall. The deeper analysis enabled by loan-level and property-specific information also permits investors to delve into ESG matters and better assess climate risk.

Daily Servicer Updates

Advancements in investor reporting are increasingly granting whole loan investors access to daily updates on their portfolio performance. Daily updating provides investors near real-time information on prepayments and curtailments, as well as details on problem loans that are seriously delinquent or in foreclosure and on loss mitigation strategies. Eliminating the various “middlemen” between primary servicers and investors is one of the things that makes daily updates possible. (Many of the additional costs of securitization outlined below, such as master servicers, trustees, and various deal and data “agents,” have the added negative effect of adding layers between security investors and the underlying loans.)

Lower Transaction Costs

Driven largely by a lack of trust in the system and lack of transparency into the underlying loan collateral, private-label securities investments incur a series of yield-eroding transactions costs that whole-loan investors can largely avoid. Consider the following transaction costs in a typical securitization:

  • Loan Data Agent costs: The concept of a loan data agent is unique to securitization. Data agents function essentially as middlemen responsible for validating the performance of other vendors (such as the Trustee). The fee for this service is avoided entirely by whole loan investors, which generally do not require an intermediary to get regularly updated loan-level data from servicers.
  • Securities Administrator/Custodian/Trustee costs: These roles present yet another layer of intermediary costs between the borrower/servicer and securities investors that are not incurred in whole loan investing.
  • Deal Agent costs: Deal agents are third party vendors typically charged with enhancing transparency in a mortgage security and ensuring that all parties’ interests are protected. The deal agent typically performs a surveillance role and charges investors ongoing annual fees plus additional fees for individual loan file reviews. These costs are not borne by whole loan investors.
  • Due diligence costs: While due diligence costs factor into loan and security investments alike, the additional layers of review required for agency ratings tend to drive these costs higher for securities. While individual file reviews are also required for both types of investments, purchasing loans only from trusted originators allows investors to get comfortable reviewing a smaller sample of new loans. This can push due diligence costs on loan portfolios much lower when compared to securities.
  • Servicing costs: Mortgage servicing costs are largely unavoidable regardless of how the asset is held. Loan investors, however, tend to have more options at their disposal. Servicing fees for securities vary from transaction to transaction, with little negotiating power for the security investors. Further, securities investors incur master servicing fees, a function generally not required for managing whole loan investments.

Emerging technology is streamlining the process of data cleansing, normalization and aggregation, greatly reducing the operational burden of these processes, particularly for whole loan investors, who can cut out many of these intermediary parties entirely.

Overcoming Operational Hurdles

Much of investor reluctance to delve into loans has historically stemmed from the operational challenges (real and perceived) associated with having to manage and make sense of the underlying mountain of loan, borrower, and property data tied to each individual loan. But forward-thinking asset managers are increasingly finding it possible to offload and outsource much of this burden to cloud-native solutions purpose built to store, manage, and provide analytics on loan-level mortgage data, such as RiskSpan’s Edge Platform. These solutions make it easy to mine available loan portfolios for profitable sub-cohorts, spot risky loans for exclusion, apply a host of credit and prepay scenario analyses, and parse static and performance data in any way imaginable.

At an increasing number of institutions, demonstrating the power of analytical tools and the feasibility of applying them to the operational and risk management challenges at hand will solve many if not most of the hurdles standing in the way of obtaining asset class approval for mortgage loans. The barriers to access are coming down, and the future is brighter than ever for this fascinating, dynamic and profitable asset class.


Will a Rising VQI Materially Impact Servicing Costs and MSR Valuations?

RiskSpan’s Vintage Quality Index computes and aggregates the percentage of Agency originations each month with one or more “risk factors” (low-FICO, high DTI, high LTV, cash-out refi, investment properties, etc.). Months with relatively few originations characterized by these risk factors are associated with lower VQI ratings. As the historical chart above shows, the index maxed out (i.e., had an unusually high number of loans with risk factors) leading up to the 2008 crisis.

RiskSpan uses the index principally to fine-tune its in-house credit and prepayment models by accounting for shifts in loan composition by monthly cohort.

Will a rising VQI translate into higher servicing costs?

The Vintage Quality Index continued to climb during the third quarter of 2021, reaching a value of 85.10, compared to 83.40 in the second quarter. The higher index value means that a higher percentage of loans were originated with one or more defined risk factors.

The rise in the index during Q3 was less dramatic than Q2’s increase but nevertheless continues a trend going back to the start of the pandemic. The increase continues to be driven by a subset of risk factors, notably the share of cash-out refinances and investor properties (both up significantly) and high-DTI loans (up modestly). On balance, fewer loans were characterized by the remaining risk metrics.

What might this mean for servicing costs?

Servicing costs are highly sensitive to loan performance. Performing Agency loans are comparatively inexpensive to service, while non-performing loans can cost thousands of dollars per year more — usually several times the amount a servicer can expect to earn in servicing fees and other ancillary servicing revenue.

For this reason, understanding the “vintage quality” of newly originated mortgage pools is an element to consider when forecasting servicing cash flows (and, by extension, MSR pricing).

Each of the risk layers that compose the VQI contributes to marginally higher default risk (and, therefore, a theoretically lower servicing valuation). But not all risk layers affect expected cash flows equally. It is also important to consider the VQI in relationship to its history. While the index has been rising since the pandemic, it remains relatively low by historical standards — still below a local high in early 2018 and certainly nowhere near the heights reached leading up to the 2008 financial crisis.

A look at the individual risk metrics driving the increase would also seem to reduce any cause for alarm. While the ever-increasing number of loans with high debt-to-income ratios could be a matter of some concern, the other two principal contributors to the overall VQI rise — loans on investment properties and cash-out refinances — do not appear to jeopardize servicing cash flows to the same degree as low credit scores and high DTI ratios do.

Consequently, while the gradual increase in loans with one or more risk factors bears watching, it likely should not have a significant bearing (for now) on how investors price Agency MSR assets.

Population assumptions:

  • Monthly data for Fannie Mae and Freddie Mac.
  • Loans originated more than three months prior to issuance are excluded because the index is meant to reflect current market conditions.
  • Loans likely to have been originated through the HARP program, as identified by LTV, MI coverage percentage, and loan purpose, are also excluded. These loans do not represent credit availability in the market as they likely would not have been originated today but for the existence of HARP.

Data assumptions:

  • Freddie Mac data goes back to 12/2005. Fannie Mae only back to 12/2014.
  • Certain fields for Freddie Mac data were missing prior to 6/2008.

GSE historical loan performance data released in support of GSE Risk Transfer activities was used to help back-fill data where it was missing.

An outline of our approach to data imputation can be found in our VQI Blog Post from October 28, 2015.


Value Opportunities in Private-Label Investor Loan Deals

The supply of investor loan collateral in private securitizations has surged in 2021 and projects to remain high (more on this below). To gain an informational edge while selecting bonds among this new issuance, traders and investors have asked RiskSpan for data and tools to dissect the performance of investor loans. Below, we first show the performance of investor loans compared to owner-occupied loans, and then offer a glimpse into a few relative value opportunities using our data and analytics platform, Edge.

As background, the increase of investor loan collateral in PLS was spurred by a new FHFA policy, recently suspended, that capped GSE acquisitions of investor and second home loans at 7% of seller volume. This cap forced originators to explore private-label securitization which, while operationally more burdensome than GSE execution, has been more profitable because it bypasses the GSEs’ high loan-level pricing adjustments. Now that this difficult but rewarding PLS path has been more widely traveled, we expect it to become more efficient and to remain popular, even with the GSE channel reopening.

Subsector Performance Comparison: Investor Vs. Owner-Occupied Loans

Investor Loans Promise Longer Collection of Above-Market Rates

Compared to owner-occupants, investors have historically paid above-market mortgage rates for longer periods before refinancing. Figure 1 shows the prepayment rates of investors vs. owner-occupants as a function of refinance incentive (the borrower’s note rate minus the prevailing mortgage rate). As their flatter “s-curve” shows, the rise in investor prepayments as refinance incentive increases is much more subdued than for owner-occupants.

Crucially, this relationship is not fully explained by higher risk-based pricing premiums on investor loans. Figure 2 shows the same comparison as Figure 1 but only for loans with spreads at origination (SATO) between 50 and 75 bps. The categorical difference between owner-occupied and investor prepay speeds is partially reduced but clearly remains. We also tried controlling for property type, but the difference persists. The relative slowness of investors may result from investors spreading their attention across many elements of their P&L besides interest expense, from higher underwriting obstacles for a rental income-driven loan, and/or from lenders limiting allocation of credit to the investor type.

While we plot these graphs over a five-year lookback period to balance desires for recency and sample size, this relationship holds over shorter and longer performance periods as well.
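A sketch of how an s-curve like Figure 1 is built from loan-month performance data: bucket observations by refinance incentive and compute an annualized voluntary prepayment rate (CPR) per bucket. The column names and bucket edges are illustrative:

```python
import numpy as np
import pandas as pd

def s_curve(loan_months: pd.DataFrame) -> pd.DataFrame:
    """Annualized voluntary prepayment rate (CPR) by refinance incentive bucket.

    Expects illustrative loan-month columns:
      note_rate, prevailing_rate  - in percent
      sched_balance               - scheduled balance for the month
      vol_prepaid_amt             - unscheduled voluntary principal paid that month
    """
    df = loan_months.copy()
    df["incentive"] = df["note_rate"] - df["prevailing_rate"]
    df["bucket"] = pd.cut(df["incentive"], bins=np.arange(-1.5, 2.75, 0.25))

    grouped = df.groupby("bucket", observed=True)
    smm = grouped["vol_prepaid_amt"].sum() / grouped["sched_balance"].sum()
    cpr = 1 - (1 - smm) ** 12          # annualize the single-month mortality
    return cpr.rename("CPR").to_frame()
```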


Figure 1: The Investor Loans S-Curve is Significantly Flatter Than the Owner-Occupied Curve
Investor s-curve vs. owner-occupied s-curve. Includes prime credit, no prepayment penalty, original loan size $200K-$400K, ages 6-48 months, for the past 5yr performance period.

Source: CoreLogic’s Private-Label RMBS Collateral Dataset, RiskSpan. Note: because the increase in private-label investor loan volume is coming from Agency cutbacks, the historical performance of investor loans within both Agency and private-label datasets is relevant to the future performance of private-label investor loans. In this analysis we show private-label data because it straightforwardly parses voluntary prepays vs. defaults, which of course is a critical distinction for PL RMBS investors. Nonetheless, where applicable, we have run the analyses in both datasets, each of which corroborates the performance patterns we show.


Figure 2: Even Controlling for SATO, The Investor vs. Owner-Occupied S-Curve Difference Persists

Same as Figure 1, but includes only loans with SATO between 50-75 bps. Source: CoreLogic, RiskSpan


Investor Loans Pose Comparable Baseline Risk, Greater Downside Risk to Credit Investors

Credit performance of investor loans has been worse than owner-occupied loans during crises, which justifies a pricing premium. During benign periods, investor loans have defaulted at similar or lower rates than owner-occupied loans – presumably due to more conservative LTVs, FICOs and DTIs among the investor loan type – and have therefore been profitable for credit investors during these periods. See Figure 3.


Figure 3: Investor Loans Have Defaulted at Greater Rates During Crises and Similar Rates in Other Periods vs. Owner-Occupied Loans
Default rates over time, investor loans vs. owner-occupied. Includes prime credit, ages 12-360 months.

Source: CoreLogic, RiskSpan

Relative Value Opportunities Within Investor Loans

California Quicker to Refinance

California has the largest share of U.S. investor mortgages, as it does of all residential mortgages. California borrowers, both investors and owner-occupants, have exhibited a steeper response to refinance incentives than have borrowers in other states. Figure 4 shows the comparison focusing on investors. While historical home price appreciation has enabled refinances in California, it has done the same in many other states. The speed differences therefore point to a more active refinance market in California. All else equal, then, RMBS investors will prefer less California collateral.


Figure 4: California Prepays Significantly Faster In the Money
Investor s-curves bucketed by geography (California vs. Other). Includes prime credit, no prepayment penalty, original loan size $200K-$400K, ages 6-48 months for the past 3yr performance period.

Source: CoreLogic, RiskSpan


For AAA Investors, Limited-Doc Investor Loans May Offer a Two-Sided Benefit: They Buoy Premium Bonds, and a Small Sample Suggests They Lift Discount Bonds, Too

Limited-doc investor loans offer senior tranche holders the chance to earn above-market rates for longer than full-doc investor loans, a relative edge for premium bonds (Figure 5). This is intuitive; we would expect limited-doc borrowers to face greater obstacles to refinancing. This difference holds even controlling for spread at origination. Based on a smaller sample, limited-doc investor loans have also turned over more (see the greater prepay rates in the negative refinance incentive buckets). This may result from a correlation between limited documentation and more rapid property flipping in the rising HPI environment we have had nationally throughout the past seven years. If so, limited-doc investor loans would also help discount bonds, relative to full-doc investor loans, by accelerating repayments at par.

Because limited-doc investor loans are rare in the RMBS 2.0 era, we widened the performance period to the past seven years to obtain some sample in each of the refinance incentive buckets. Nonetheless, with all the filters we have applied to isolate the effect of documentation type, there are only a few hundred limited-doc investor loans in the negative refinance incentive buckets.
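
Because thin buckets can mislead, it is worth counting the loans behind each point before drawing conclusions. A small extension of the earlier sketch illustrates the check; limited_doc is an assumed pre-filtered frame and loan_id a hypothetical identifier column.

# Count distinct loans behind each incentive bucket before trusting the curve
incentive_bps = (limited_doc["note_rate"] - limited_doc["prevailing_rate"]) * 100
bucket = (incentive_bps // 25) * 25
loan_counts = limited_doc.groupby(bucket)["loan_id"].nunique()
# Sparse negative-incentive buckets (a few hundred loans here) call for a wider
# performance window or coarser buckets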


Figure 5: Limited-Doc Investor Loans Have Prepaid Slower In-The-Money and Faster Out-of-the-Money
Investor s-curves bucketed by doc type. Includes prime credit, no prepayment penalty, original loan size $400K-$800K, ages 6-48 months, SATO 25-125bps for the past 7yr performance period.

Source: CoreLogic, RiskSpan


Size Affects Refi Behavior – But Not How You Think

An assumption carried over from Agency performance is that rate-driven prepays become likelier as loan size increases. This pattern holds across conforming loan sizes, but the refinance response flattens again once balances cross $800K. This is true for investor and owner-occupied loans in both Agency and private-label loan data, though of course the number of loans above $800K in the Agency data is small. Figure 6 shows this pattern for private-label investor loans. As shown, in-the-money prepayments are slowest among loans below $200K, as we would expect. But despite their much greater dollar motivation to refinance, loans above $800K have S-curves similar to loans of just $200K-$400K.
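
One way to see this pattern on loan-level data is to bucket by original balance and compare in-the-money prepayment rates, as sketched below. It reuses the hypothetical column names from the earlier example plus an assumed orig_balance field; the 50 bp in-the-money threshold is illustrative.

import pandas as pd

# Bucket investor loans by original balance and compare in-the-money prepayment rates
size_bucket = pd.cut(
    investor["orig_balance"],
    bins=[0, 200_000, 400_000, 600_000, 800_000, float("inf")],
    labels=["<200K", "200-400K", "400-600K", "600-800K", ">800K"],
)
in_the_money = (investor["note_rate"] - investor["prevailing_rate"]) > 0.50  # 50+ bps incentive
itm_cpr = (
    investor.loc[in_the_money]
            .groupby(size_bucket[in_the_money])["smm"]
            .mean()
            .pipe(lambda smm: 1 - (1 - smm) ** 12)   # SMM to CPR
)
# Under the pattern in Figure 6, itm_cpr peaks in the middle buckets and
# falls back for the >800K bucket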

The SATO is generally a few basis points higher for these largest loans, but this does not explain away the speed differences. Figure 7 shows the same comparison as Figure 6 except only for loans with SATO between 50-75 bps. Except for a slightly choppier graph because of the reduced sample size, the same rank-ordering is evident. Nor does controlling for property type or geography remove the speed differences. The largest loans, we conclude, have fewer credit alternatives and/or face more stringent underwriting hurdles than smaller loans, hampering their refi rates.

Rate refinances are fastest among the mid-sized loans in the $400K-$600K and $600K-$800K buckets. That these two groups have similar S-curves – despite the greater dollar motivation to refinance for the $600K-$800K group – suggests that the countervailing effect of lower ability to find refinancing outlets is already kicking in for the $600K-$800K size range.

All of this means that high-balance collateral should be more attractive to investors than some traditional prepayment models will appreciate.


Figure 6: The Largest Investor Loans Refinance Slower Than Medium-Sized
Investor s-curves bucketed by loan size. Includes prime credit, no prepayment penalty, ages 6-48 months for the past 5yr performance period.

Source: CoreLogic, RiskSpan


Figure 7: Controlling For SATO, Largest Investor Loans Still Refinance Slower Than Medium-Sized
Same as Figure 6 but includes only loans with SATO between 50-75 bps

Source: CoreLogic, RiskSpan


Preliminarily, Chimera Has Lowest Stressed Delinquencies of Top Investor Shelves

For junior-tranche, credit-exposed investors in the COVID era, 60-day-plus delinquencies have been significantly rarer on Chimera’s shelf than on other top investor shelves. The observable credit mixes of the three shelves appear similar. We ran this analysis with only full-doc loans and from only one state (California), and the rank-ordering of delinquency rates by shelf remains the same. Further to this point, note that the spread at origination of Chimera’s shelf is nearly as high as Flagstar’s. All of this suggests there is something not directly observable about Chimera’s shelf that has generated better credit performance during this stressed period. We caution that differences in servicer reporting of COVID forbearances can distort delinquency data, so we will continue to monitor this performance as our data updates each month.
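
A Figure 8-style time series can be assembled by grouping on shelf and reporting month, as in the sketch below. The shelf, as_of_date and dq_status (days past due) columns are assumed names on a hypothetical loan-month panel, not CoreLogic fields.

import pandas as pd

# Monthly DQ60+ share by shelf, mirroring Figure 8
dq_by_shelf = (
    investor.assign(dq60=investor["dq_status"].ge(60))          # 60+ days past due
            .groupby(["shelf", pd.Grouper(key="as_of_date", freq="M")])["dq60"]
            .mean()                                              # share of loans DQ60+
            .unstack("shelf")
)
# Plotting dq_by_shelf shows the rank-ordering of shelves through the COVID period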


Figure 8: Chimera Posts Lowest COVID Delinquencies, with Nearly Highest SATO of Top Investor Shelves
Investor DQ60+ rates over time, bucketed by shelf. Includes prime credit, ages 12-60 months.


Source: CoreLogic, RiskSpan


The Greater Default Risk of Low-Doc Investor Loans Lasts About 10 Years

Low-doc investors default more frequently than full-doc investors, but only during roughly the first 120 months of loan age. Around this age, the default rates converge. For loans seasoned beyond this age, full-doc loans begin to default slightly more frequently than low-doc loans, likely due to a survivorship bias. This suggests that credit investors are wise to require a price discount for new issuance with low-doc collateral. For deals with heavily seasoned collateral, junior-tranche investors may counterintuitively prefer low-doc collateral – certainly if they can earn an extra risk premium for it, as it would seem they are not actually bearing any extra credit risk.
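
An aging-curve view of this convergence can be computed by grouping on documentation type and loan age, as sketched below with assumed doc_type, loan_age_months and defaulted columns on the same hypothetical panel.

# Default rate by loan age, split by documentation type (Figure 9 style)
aging_curves = (
    investor.groupby(["doc_type", "loan_age_months"])["defaulted"]
            .mean()
            .unstack("doc_type")
)
# The low-doc curve sits above full-doc until roughly age 120, then the two converge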


Figure 9: Low-Doc Investor Loans Default More Frequently Than Full-Doc Until Loan Age = 120
Investor default rates by loan age, bucketed by doc type. Includes prime credit, RMBS 2.0 era, for the past 7yr performance period.

Source: CoreLogic, RiskSpan


Summary

  • Investor loans face higher barriers to refinance than owner-occupied loans, offering RMBS investors the opportunity to earn higher coupons for longer periods.
  • For junior tranche investors, the credit performance of investor loans has been similar to owner-occupied loans during benign economic periods and worse during stressed times.
  • California borrowers respond more quickly to refinance incentives than borrowers from other states; investors will prefer less California collateral.
  • Limited-doc investor loans offer AAA investors a double benefit: slower refinances in the money, extending premium bonds; and faster turnover out of the money, limiting extension risk.
  • Low loan balances are attractive for their slow refinance response – as are non-conforming (high) loan balances above $800K. Traditional prepay models may miss this latter dynamic.
  • For credit investors, Chimera’s delinquency rates have been significantly better during the pandemic than those of other investor shelves. We will continue to monitor this, as different ways of reporting COVID forbearances may confound such comparisons.
  • For credit investors, limited-doc investor loans default at higher rates than full-doc loans for about the first ten years of loan age; after this point the two perform very similarly, with limited-doc loans defaulting at slightly lower rates among these seasoned loans, likely due to survivor biases.


Contact Us

Contact us if you are interested in seeing variations on this theme. Using Edge, we can examine any loan characteristic and generate an S-curve, aging curve, or time series.


RiskSpan Named to Inaugural STORM50 Ranking by Chartis Research – Winner of “A.I. Innovation in Capital Markets”

Chartis Research has named RiskSpan to its Inaugural “STORM50” Ranking of leading risk and analytics providers. The STORM report “focuses on the computational infrastructure and algorithmic efficiency of the vast array of technology tools used across the financial services industry” and identifies industry-leading vendors that excel in the delivery of Statistical Techniques, Optimization frameworks, and Risk Models of all types. 

RiskSpan’s flagship Edge Platform was a natural fit for the designation because of its positioning squarely at the nexus of statistical behavioral modeling (specifically around mortgage credit and prepayment risk) and functionality enabling users to optimize trading and asset management strategies. Being named the winner of the “A.I. Innovation in Capital Markets” solutions category reflects the work of RiskSpan’s vibrant innovation lab, whose efforts include researching and developing machine learning solutions to structured finance challenges. These solutions include mining a growing trove of alternative/unstructured data sources, anomaly detection in loan-level and other datasets, and natural language processing for constructing deal cash flow models from legal documents.

Learn more about the Edge Platform or contact us to discuss ways we might help you modernize and improve your mortgage and structured finance data and analytics.

