Articles Tagged with: MSRs

Optimizing Analytics Computational Processing 

We met with RiskSpan’s Head of Engineering and Development, Praveen Vairavan, to understand how his team set about optimizing analytics computational processing for a portfolio of 4 million mortgage loans using a cloud-based compute farm.

This interview dives deeper into a case study we discussed in a recent interview with RiskSpan’s co-founder, Suhrud Dagli.

Here is what we learned from Praveen. 



Could you begin by summarizing for us the technical challenge this optimization was seeking to overcome? 

PV: The main challenge related to an investor’s MSR portfolio, specifically the volume of loans we were trying to run. The client has close to 4 million loans spread across nine different servicers. This presented two related but separate sets of challenges. 

The first set of challenges stemmed from needing to consume data from different servicers whose file formats not only differed from one another but also often lacked internal consistency. By that, I mean even the file formats from a single given servicer tended to change from time to time. This required us to continuously update our data mapping and (because the servicer reporting data is not always clean) modify our QC rules to keep up with evolving file formats.  

The second challenge relates to the sheer volume of compute power necessary to run stochastic paths of Monte Carlo rate simulations on 4 million individual loans and then discount the resulting cash flows based on option-adjusted yield across multiple scenarios. 

And so you have 4 million loans times multiple paths times four cases (a base cash flow, a base option-adjusted case, an up case, and a down case), and you can see how quickly the workload adds up. And all of this needed to happen on a daily basis. 

To help minimize the computing workload, our client had been running all these daily analytics at a rep-line level—stratifying and condensing everything down to between 70,000 and 75,000 rep lines. This alleviated the computing burden but at the cost of decreased accuracy because they couldn’t look at the loans individually. 

What technology enabled you to optimize the computational process of running 50 paths and 4 scenarios for 4 million individual loans?

PV: With the cloud, you have the advantage of spawning a bunch of servers on the fly (just long enough to run all the necessary analytics) and then shutting them down once the analytics are done. 

This sounds simple enough. But in order to use that many compute servers, we needed to figure out how to distribute the 4 million loans across them so the loans could run in parallel (and then get the results back so we could aggregate them). We did this using what is known as a MapReduce approach. 

Say we want to run a particular cohort of this dataset with 50,000 loans in it. If we were using a single server, it would run them one after the other – generate all the cash flows for loan 1, then for loan 2, and so on. As you would expect, that is very time-consuming. So, we decided to break down the loans into smaller chunks. We experimented with various chunk sizes. We started with 1,000 – we ran 50 chunks of 1,000 loans each in parallel across the AWS cloud and then aggregated all those results.  
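To make that pattern concrete, here is a minimal sketch of the map-and-aggregate flow in Python, using a local process pool as a stand-in for the AWS compute farm. The loan records and the run_cash_flows() pricing stub are hypothetical placeholders, not RiskSpan's actual engine.

```python
# A minimal sketch of the chunk-and-aggregate (MapReduce-style) pattern
# described above, with a local process pool standing in for the cloud farm.
from concurrent.futures import ProcessPoolExecutor

def run_cash_flows(loan):
    # Placeholder: a real engine would simulate and discount cash flows here.
    return loan["balance"] * 0.97

def run_chunk(chunk):
    # The "map" step: one worker prices every loan in its chunk.
    return [run_cash_flows(loan) for loan in chunk]

def price_cohort(loans, chunk_size=1000):
    chunks = [loans[i:i + chunk_size] for i in range(0, len(loans), chunk_size)]
    with ProcessPoolExecutor() as pool:
        # The "reduce" step: gather chunk results and flatten them back
        # into one loan-level result set.
        return [pv for result in pool.map(run_chunk, chunks) for pv in result]

if __name__ == "__main__":
    cohort = [{"balance": 250_000.0}] * 50_000
    print(len(price_cohort(cohort)))  # 50,000 loan-level results
```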

That was an improvement, but the 50 parallel jobs were still taking longer than we wanted. And so, we experimented further before ultimately determining that the “sweet spot” was something closer to 500 parallel jobs of 100 loans each. 

Only in the cloud is it practical to run 500 servers in parallel. But this of course raises the question: Why not just go all the way and run 50,000 parallel jobs of one loan each? Well, as it happens, running an excessively large number of jobs carries overhead burdens of its own. And we found that the extra time needed to manage that many jobs more than offset the compute time savings. And so, using a fair bit of trial and error, we determined that 100-loan jobs maximized the runtime savings without creating an overly burdensome number of jobs running in parallel.  
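That trial-and-error search can be framed as a simple benchmark loop. The sketch below assumes the price_cohort() helper from the previous sketch; run locally it only illustrates the shape of the tradeoff (too many small jobs pay management overhead, too few big jobs lose parallelism), not AWS-scale behavior.

```python
# A sketch of the chunk-size "sweet spot" search: time the same cohort at
# several chunk sizes and keep the fastest. Assumes price_cohort() above.
import time

def find_sweet_spot(loans, candidate_sizes=(10, 100, 1_000, 10_000)):
    timings = {}
    for size in candidate_sizes:
        start = time.perf_counter()
        price_cohort(loans, chunk_size=size)  # from the previous sketch
        timings[size] = time.perf_counter() - start
    # The minimum of this curve balances job-management overhead against
    # lost parallelism.
    return min(timings, key=timings.get), timings
```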


You mentioned the challenge of having to manage a large number of parallel processes. What tools do you employ to work around these and other bottlenecks? 

PV: The most significant bottleneck associated with this process is finding the “sweet spot” number of parallel processes I mentioned above. As I said, we could theoretically break it down into 4 million single-loan processes all running in parallel. But managing this amount of distributed computation, even in the cloud, invariably creates a degree of overhead which ultimately degrades performance. 

And so how do we find that sweet spot – how do we optimize the number of servers on the distributed computation engine? 

As I alluded to earlier, the process involved an element of trial and error. But we also developed some home-grown tools (and leveraged some tools available in AWS) to help us. These tools enable us to visualize computation server performance – how much of a load a server can take, how much memory it uses, etc. They helped eliminate some of the optimization guesswork.   
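As one illustration of what such visibility tooling might look like, the sketch below samples CPU and memory on a worker at a fixed interval using the third-party psutil package. The sampling cadence and output format are assumptions; the actual tools described are home-grown and AWS-based.

```python
# A sketch of worker-visibility tooling: sample CPU and memory at a fixed
# interval so capacity decisions stop being guesswork.
import time
import psutil  # third-party package

def sample_worker(interval_s=5.0, samples=12):
    psutil.cpu_percent(interval=None)  # prime the CPU counter
    for _ in range(samples):
        time.sleep(interval_s)
        cpu = psutil.cpu_percent(interval=None)  # % since the last call
        mem = psutil.virtual_memory()
        print(f"cpu={cpu:5.1f}%  mem={mem.used / 2**30:6.2f} GiB ({mem.percent:4.1f}%)")
```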

Is this optimization primarily hardware based?

PV: AWS provides essentially two “flavors” of machines. One flavor is memory-optimized: it lets you keep a whole lot of loans in memory so the run goes faster. The other flavor is more processor-based (compute intensive). These machines provide a lot of CPU power so that you can run a lot of processes in parallel on a single machine and still get the required performance. 

We have done a lot of R&D on this hardware. We experimented with many different instance types to determine which works best for us and optimizes our output: lots of memory but fewer CPUs vs. CPU-intensive machines with less (but still a reasonable amount of) memory. 

We ultimately landed on a machine with 96 cores and about 240 GB of memory. This was the balance that enabled us to run portfolios at speeds consistent with our SLAs. For us, this translated to a server farm of 50 machines running 70 processes each, which works out to 3,500 workers helping us to process the entire 4-million-loan portfolio (across 50 Monte Carlo simulation paths and 4 different scenarios) within the established SLA.  

What software-based optimization made this possible? 

PV: Even optimized in the cloud, hardware can get pricey – on the order of $4.50 per hour in this example. And so, we supplemented our hardware optimization with some software-based optimization as well. 

We were able to optimize our software to a point where we could use a machine with just 30 cores (rather than 96) and 64 GB of RAM (rather than 240). Using 80 of these machines running 30 processes each gives us 2,400 workers (rather than 3,500). Software optimization enabled us to run the same number of loans in roughly the same amount of time (slightly faster, actually) but using fewer hardware resources. And our cost to use these machines was just one-third what we were paying for the more resource-intensive hardware. 

All this, and our compute time actually declined by 10 percent.  
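The arithmetic behind that tradeoff is worth making explicit. In the sketch below, the $4.50-per-machine-hour rate and the machine and worker counts come from the interview; reading “one-third the cost” as a per-machine hourly rate (rather than a total-farm figure) is our assumption.

```python
# Back-of-the-envelope capacity and cost arithmetic for the two farms
# described above. The cheaper hourly rate is derived from "one-third the
# cost," interpreted per machine -- an assumption, not a quoted price.
farms = {
    "96-core / 240 GB": {"machines": 50, "procs": 70, "usd_hr": 4.50},
    "30-core / 64 GB":  {"machines": 80, "procs": 30, "usd_hr": 4.50 / 3},
}
for name, f in farms.items():
    workers = f["machines"] * f["procs"]
    cost = f["machines"] * f["usd_hr"]
    print(f"{name}: {workers:,} workers, ${cost:,.2f}/hour for the farm")
# 96-core / 240 GB: 3,500 workers, $225.00/hour for the farm
# 30-core / 64 GB: 2,400 workers, $120.00/hour for the farm
```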

The software optimization that made this possible has two parts: 

The first part (as we discussed earlier) is using the MapReduce methodology to break down jobs into optimally sized chunks. 

The second part involved optimizing how we read loan-level information into the analytical engine. Reading in loan-level data (especially for 4 million loans) is a huge bottleneck. We got around this by implementing a “pre-processing” procedure. For each individual servicer, we created a set of optimized loan files that can be read and rendered “analytics ready” very quickly. This enables the loan-level data to be quickly consumed and immediately used for analytics without having to read all the loan tapes and convert them into a format that the analytics engine can understand. Because we have “pre-processed” all this loan information, it is immediately available in a format that the engine can easily digest and run analytics on.  
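A hedged sketch of that pre-processing idea: map each servicer's raw tape headers onto one canonical schema, apply basic QC, and write an analytics-ready columnar file. The field names, the per-servicer column maps, and the choice of Parquet (via pandas, which needs pyarrow installed) are illustrative assumptions, not RiskSpan's actual schema or file format.

```python
# A sketch of per-servicer tape normalization into an "analytics ready" file.
import pandas as pd

# Hypothetical per-servicer mappings from raw tape headers to canonical names.
SERVICER_MAPS = {
    "servicer_a": {"LOAN_ID": "loan_id", "CURR_UPB": "balance", "NOTE_RT": "rate"},
    "servicer_b": {"LoanNumber": "loan_id", "UPB": "balance", "IntRate": "rate"},
}

def preprocess_tape(csv_path, servicer, out_path):
    raw = pd.read_csv(csv_path)
    df = raw.rename(columns=SERVICER_MAPS[servicer])[["loan_id", "balance", "rate"]]
    # Basic QC: coerce types and drop records the engine cannot price.
    df["balance"] = pd.to_numeric(df["balance"], errors="coerce")
    df = df.dropna(subset=["balance"])
    df.to_parquet(out_path, index=False)  # fast, columnar, engine-ready
```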

This software-based optimization is what ultimately enabled us to optimize our hardware usage (and save time and cost in the process).  

Contact us to learn more about how we can help you optimize your mortgage analytics computational processing.


Rethink Analytics Computational Processing – Solving Yesterday’s Problems with Today’s Technology and Access 

We sat down with RiskSpan’s co-founder and chief technology officer, Suhrud Dagli, to learn how one mortgage investor successfully overhauled its analytics computational processing, migrating from a daily pricing and risk process that relied on tens of thousands of rep lines to one capable of evaluating each of the portfolio’s more than three-and-a-half million loans individually (and how it actually saved money in the process).  

Here is what we learned. 


Could you start by talking a little about this portfolio — what asset class and what kind of analytics the investor was running? 

SD: Our client was managing a large investment portfolio of mortgage servicing rights (MSR) assets, residential loans and securities.  

The investor runs a battery of sophisticated risk management analytics that rely on stochastic modeling. Option-adjusted spread, duration, convexity, and key rate durations are calculated based on more than 200 interest rate simulations. 
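For readers unfamiliar with the mechanics, the sketch below shows the standard bump-and-reprice formulas that metrics like effective duration and convexity rest on. The prices are hypothetical stand-ins for output from a simulation engine like the one described.

```python
# Standard bump-and-reprice formulas for effective duration and convexity.
# Prices are hypothetical values at -25bp, base, and +25bp rate shifts.
def effective_duration(p_down, p_up, p_base, dy):
    return (p_down - p_up) / (2 * p_base * dy)

def effective_convexity(p_down, p_up, p_base, dy):
    return (p_down + p_up - 2 * p_base) / (p_base * dy ** 2)

p_down, p_base, p_up, dy = 100.95, 100.00, 98.95, 0.0025
print(effective_duration(p_down, p_up, p_base, dy))   # 4.0 (years)
print(effective_convexity(p_down, p_up, p_base, dy))  # -160.0: negative, as is
                                                      # typical for mortgage assets
```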


Why was the investor running their analytics computational processing using a rep line approach? 

SD: They used rep lines for one main reason: They needed a way to manage computational loads on the server and improve calculation speeds. Secondarily, organizing the loans in this way simplified their reporting and accounting requirements to a degree (loans financed by the same facility were grouped into the same rep line).  

This approach had some downsides. Pooling loans by finance facility sometimes caused loans with different balances, LTVs, credit scores, etc., to get grouped into the same rep line. As a result, every loan in a rep line received prepayment and default assumptions that differed from those that likely would have been applied had the loans been evaluated individually.  

The most obvious solution to this would seem to be one that disassembles the finance facility groups into their individual loans, runs all those analytics at the loan level, and then re-aggregates the results into the original rep lines. Is this sort of analytics computational processing possible without taking all day and blowing up the server? 

SD: That is effectively what we are doing. The process is not as speedy as we’d like it to be (and we are working on that). But we have worked out a solution that does not overly tax computational resources.  

The analytics computational processing we are implementing ignores the rep line concept entirely and just runs the loans. The scalability of our cloud-native infrastructure enables us to take the three-and-a-half million loans and bucket them equally for computation purposes. We run a hundred loans on each processor and get back loan-level cash flows and then generate the output separately, which brings the processing time down considerably. 
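A minimal sketch of that “run loan-level, report rep-line” round trip using pandas. The column names and the facility grouping key are illustrative assumptions.

```python
# Loan-level results rolled back up to the original rep lines for reporting.
import pandas as pd

loan_results = pd.DataFrame({
    "facility": ["F1", "F1", "F2", "F2"],        # original rep-line key
    "balance":  [250_000, 300_000, 150_000, 400_000],
    "price":    [101.2, 99.8, 100.4, 98.9],      # loan-level model output
})

# Balance-weighted roll-up back to the original rep lines.
loan_results["pv"] = loan_results["price"] * loan_results["balance"]
rep_lines = loan_results.groupby("facility").agg(
    balance=("balance", "sum"), pv=("pv", "sum")
)
rep_lines["wavg_price"] = rep_lines["pv"] / rep_lines["balance"]
print(rep_lines[["balance", "wavg_price"]])
```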


So we have a proof of concept that this approach to analytics computational processing works in practice for running pricing and risk on MSR portfolios. Is it applicable to any other asset classes?

SD: The underlying principles that make analytics computational processing possible at the loan level for MSR portfolios apply equally well to whole loan investors and MBS investors. In fact, the investor in this example has a large whole-loan portfolio alongside its MSR portfolio. And it is successfully applying these same tactics on that portfolio.   

An investor in any mortgage asset benefits from the ability to look at and evaluate loan characteristics individually. The results may need to be rolled up and grouped for reporting purposes. But being able to run the cash flows at the loan level ultimately makes the aggregated results vastly more meaningful and reliable. 

A loan-level framework also affords whole-loan and securities investors the ability to be sure they are capturing the most important loan characteristics and are staying on top of how the composition of the portfolio evolves with each day’s payoffs. 

ESG factors are an important consideration for a growing number of investors. Only a loan-level approach makes it possible for these investors to conduct the property- and borrower-level analyses necessary to know whether they are working toward meeting their ESG goals. It also makes it easier to spot areas of geographic concentration risk, which simplifies climate risk management to some degree.  

Say I am a mortgage investor who is interested in moving to loan-level pricing and risk analytics. How do I begin? 

 SD: Three things: 

  1.  It begins with having the data. Most investors have access to loan-level data. But it’s not always clean. This is especially true of origination data. If you’re acquiring a pool – be it a seasoned pool or a pool right after origination – you don’t have the best origination data to drive your model. You also need a data store that can generate loan-level output to drive your analytics and models.
  2. The second factor is having models that work at the loan level – models that have been calibrated using loan-level performance and that are capable of generating loan-level output. One of the constraints of several existing vendor modeling frameworks is that they were created to run at the rep-line level and don’t necessarily work very well for loan-level projections.  
  3. The third thing you need is a compute farm. It is virtually impossible to run loan-level analytics if you’re not on the cloud because you need to distribute the computational load. And your computational distribution requirements will change from portfolio to portfolio based on the type of analytics, the scenarios, and the models you are running. 

The cloud is needed not just for CPU power but also for storage. This is because once you go to the loan level, every loan’s data must be made available to every processor that’s performing the calculation. This is where having the kind of shared databases that are native to a cloud infrastructure becomes vital. You simply can’t replicate it using an on-premises setup of computers in your office or in your own data center. 

So, 1) get your data squared away, 2) make sure you’re using models that are optimized for loan-level analysis, and 3) max out your analytics computational processing power by migrating to cloud-native infrastructure. Thank you, Suhrud, for taking the time to speak with us.


Live Demo of RiskSpan’s Award-Winning Edge Platform

Wednesday, August 24th | 1:00 p.m. EDT

Live Demo of RiskSpan’s award-winning Edge Platform. Learn more and ask questions at our bi-weekly, 45-minute demo.

Historical Performance Tool: Slice and dice historical loan performance in the Agency and PLRMBS universe to find outperforming cohorts.

Predictive Loan-Level Pricing and Risk Analytics: Produce loan-level pricing and risk on loans, MSRs, and structured products in minutes – with behavioral models applied at the loan-level, and other assumptions applied conveniently to inform bids and hedging.

Loan Data Management: Let RiskSpan’s data scientists consolidate and enhance your data across origination and servicing platforms, make it analytics-ready, and maintain it for ongoing trend analysis.


About RiskSpan:

RiskSpan offers cloud-native SaaS analytics for on-demand market risk, credit risk, pricing and trading. With our data science experts and technologists, we are the leader in data as a service and end-to-end solutions for loan-level data management and analytics.

Our mission is to be the most trusted and comprehensive source of data and analytics for loans and structured finance investments.

Rethink loan and structured finance data. Rethink your analytics. Learn more at www.riskspan.com.

Presenters

Joe Makepeace

Director, RiskSpan

Jordan Parker

Sales Executive, RiskSpan


RiskSpan Introduces Multi-Scenario Yield Table 

ARLINGTON, Va., August 4, 2022

RiskSpan, a leading provider of residential mortgage and structured product data and analytics, has announced a new Multi-Scenario Yield Table feature within its award-winning Edge Platform.  

REITs and other mortgage loan and MSR investors leverage the Multi-Scenario Yield Table to instantaneously run and compare multiple scenario analyses on any individual asset in their portfolio. 

An interactive, self-guided demo of this new functionality can be viewed here. 

Comprehensive details of this and other new capabilities are available by requesting a no-obligation live demo at riskspan.com. 


With a single click from the portfolio screen, Edge users can now simultaneously view the impact of as many as 20 different scenarios on outputs including price, yield, WAL, dv01, OAS, discount margin, modified duration, weighted average CRR and CDR, severity and projected losses. The ability to view these and other model outputs across multiple scenarios in a single table eliminates the tedious and time-consuming process of running scenarios individually and having to manually juxtapose the resulting analytics.  

Entering scenarios is easy. Users can make changes to scenarios right on the screen to facilitate quick, ad hoc analyses. Once these scenarios are loaded and assumptions are set, the impacts of each scenario on price and other risk metrics are lined up in a single, easily analyzed data table. 

Analysts who determine that one of the scenarios is producing more reasonable results than the defined base case can overwrite and replace the base case with the preferred scenario in just two clicks.   

The Multi-Scenario Yield Table is the latest in a series of enhancements that is making the Edge Platform increasingly indispensable for mortgage loan and MSR portfolio managers. 


 About RiskSpan, Inc.  

RiskSpan offers cloud-native SaaS analytics for on-demand market risk, credit risk, pricing and trading. With our data science experts and technologists, we are the leader in data as a service and end-to-end solutions for loan-level data management and analytics. 

Our mission is to be the most trusted and comprehensive source of data and analytics for loans and structured finance investments. 

Rethink loan and structured finance data. Rethink your analytics. Learn more at www.riskspan.com.

Media contact: Timothy Willis 


It’s time to move to DaaS — Why it matters for loan and MSR investors

Data as a service, or DaaS, for loans and MSR investors is fast becoming the difference between profitable trades and near misses.

Granularity of data is creating differentiation among investors. Winning at investing in loans and mortgage servicing rights requires effectively managing a veritable ocean of loan-level data. Buried within every detailed tape of borrower, property, loan and performance characteristics lies the key to identifying hidden exposures and camouflaged investment opportunities. Understanding these exposures and opportunities is essential to proper bidding during the acquisition process and effective risk management once the portfolio is onboarded.

Investors know this. But knowing that loan data conceals important answers is not enough. Even knowing which specific fields and relationships are most important is not enough. Investors also must be able to get at that data. And because mortgage data is inherently messy, investors often run into trouble extracting the answers they need from it.

For investors, it boils down to two options. They can compel analysts to spend 75 percent of their time wrangling unwieldy data – plugging holes, fixing outliers, making sure everything is mapped right. Or they can just let somebody else worry about all that so they can focus on more analytical matters.

Don’t get left behind — DaaS for loan and MSR investors

It should go without saying that the “let somebody else worry about all that” approach only works if “somebody else” possesses the requisite expertise with mortgage data. Self-proclaimed data experts abound. But handing the process over to an outside data team lacking the right domain experience risks creating more problems than it solves.

Ideally, DaaS for loan and MSR investors consists of a data owner handing off these responsibilities to a third party that can deliver value in ways that go beyond simply maintaining, aggregating, storing and quality controlling loan data. All these functions are critically important. But a truly comprehensive DaaS provider is one whose data expertise is complemented by an ability to help loan and MSR investors understand whether portfolios are well conceived. A comprehensive DaaS provider helps investors ensure that they are not taking on hidden risks (for which they are not being adequately compensated in pricing or servicing fee structure).

True DaaS frees up loan and MSR investors to spend more time on higher-level tasks consistent with their expertise. The “blocking and tackling” aspects of data management, which every institution that owns these assets needs to deal with, can be handled in a more scalable and organized way. Cloud-native DaaS platforms are what make this scalability possible.

Scalability — stop reinventing the wheel with each new servicer

One of the most challenging aspects of managing a portfolio of loans or MSRs is the need to manage different types of investor reporting data pipelines from different servicers. What if, instead of having to “reinvent the wheel” to figure out data intake every time a new servicer comes on board, “somebody else” could take care of that for you?

An effective DaaS provider is one that is not only well versed in building and maintaining loan data pipes from servicers to investors but has also already established a library of existing servicer linkages. An ideal provider is one already set up to onboard servicer data directly onto its own DaaS platform. Investors achieve enormous economies of scale by integrating with a single platform as opposed to a dozen or more individual servicer integrations. Ultimately, as more investors adopt DaaS, the number of centralized servicer integrations will increase, and greater economies will be realized across the industry.

Connectivity is only half the benefit. The DaaS provider not only intakes, translates, maps, and hosts the loan-level static and dynamic data coming over from servicers. It also takes care of QC-ing, cleaning, and managing that data. DaaS providers see more loan data than any one investor or servicer. Consequently, the AI tools an experienced DaaS provider uses to map and clean incoming loan data have had more opportunities to learn. Loan data that has been run through a DaaS provider’s algorithms will almost always be more analytically valuable than the same loan data processed by the investor alone.  

Investors seeking to increase their footprint in the loan and MSR space obviously do not wish to see their data management costs rise in proportion to the size of their portfolios. Outsourcing to a DaaS provider that specializes in mortgages, like RiskSpan, helps investors build their book while keeping data costs contained.

Save time and money – Make better bids

For all these reasons, DaaS is unquestionably the future (and, increasingly, the present) of loan and MSR data management. Investors are finding that a decision to delay DaaS migration comes with very real costs, particularly as data science labor becomes increasingly (and often prohibitively) expensive.

The sooner an investor opts to outsource these functions to a DaaS provider, the sooner that investor will begin to reap the benefits of an optimally cost-effective portfolio structure. One RiskSpan DaaS client reported a 50 percent reduction in data management costs alone.

Investors continuing to make do with in-house data management solutions will quickly find themselves at a distinct bidding disadvantage. DaaS-aided bidders have the advantage of being able to bid more competitively based on their more profitable cost structure. Not only that, but they are able to confidently hone and refine their bids based on having a better, cleaner view of the portfolio itself.

Rethink your mortgage data. Contact RiskSpan to talk about how DaaS can simultaneously boost your profitability and make your life easier.


Live Demo of RiskSpan’s Award-Winning Edge Platform

Wednesday, July 27th | 1:00 p.m. EDT

Register for the next Live Demo of RiskSpan’s award-winning Edge Platform. Learn more and ask questions at our bi-weekly, 45-minute demo.

Historical Performance Tool: Slice and dice historical loan performance in the Agency and PLRMBS universe to find outperforming cohorts.

Predictive Loan-Level Pricing and Risk Analytics: Produce loan-level pricing and risk on loans, MSRs, and structured products in minutes – with behavioral models applied at the loan-level, and other assumptions applied conveniently to inform bids and hedging.

Loan Data Management: Let RiskSpan’s data scientists consolidate and enhance your data across origination and servicing platforms, make it analytics-ready, and maintain it for ongoing trend analysis.


About RiskSpan:

RiskSpan offers cloud-native SaaS analytics for on-demand market risk, credit risk, pricing and trading. With our data science experts and technologists, we are the leader in data as a service and end-to-end solutions for loan-level data management and analytics.

Our mission is to be the most trusted and comprehensive source of data and analytics for loans and structured finance investments.

Rethink loan and structured finance data. Rethink your analytics. Learn more at www.riskspan.com.

Presenters

Joe Makepeace

Director, RiskSpan

Jordan Parker

Sales Executive, RiskSpan


RiskSpan Introduces Media Effect Measure for Prepayment Analysis, Predictive Analytics for Managed Data 

ARLINGTON, Va., July 14, 2022

RiskSpan, a leading provider of residential mortgage and structured product data and analytics, has announced a series of new enhancements in the latest release of its award-winning Edge Platform.

Comprehensive details of these new capabilities are available by requesting a no-obligation demo at riskspan.com.


Media Effect – It has long been accepted that prepayment speeds see an extra boost as media coverage alerts borrowers to refinancing opportunities. Now, Edge lets traders and modelers measure the media effect present in any active pool of Agency loans—highlighting borrowers most prone to refinance in response to news coverage—and plot the empirical impact on any cohort of loans. Developed in collaboration with practitioners, the measure captures rate novelty by comparing the rate environment at a given time to rates over the trailing five years. Mortgage portfolio managers and traders who subscribe to Edge have always been able to easily stratify mortgage portfolios by refinance incentive. With the new Media Effect filter/bucket, market participants can fine-tune expectations by analyzing cohorts with like media effects.
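As an illustration of the trailing-five-year comparison described above, the sketch below computes one plausible “rate novelty” measure: how far today's rate has broken below anything seen over the trailing five years. This is a hypothetical reading of the description, not RiskSpan's actual formula.

```python
# One plausible construction of a "rate novelty" measure -- an assumption
# for illustration, not the Edge Platform's actual calculation.
import pandas as pd

def rate_novelty(rates: pd.Series, window: int = 5 * 252) -> pd.Series:
    # Prior trailing-five-year low (shifted so "today" is excluded).
    prior_low = rates.rolling(window, min_periods=1).min().shift(1)
    # Positive values mean a new five-year low: the environment most
    # likely to draw media coverage and trigger refinancing.
    return prior_low - rates
```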

Predictive Analytics for Managed Data – Edge subscribers who leverage RiskSpan’s Data Management service to aggregate and prep monthly loan and MSR data can now kick off predictive analytics for any filtered snapshot of that data. Leveraging RiskSpan’s universe of forward-looking analytics, subscribers can generate valuations, market risk metrics to inform hedging, credit loss accounting estimates and credit stress test outputs, and more. Sharing portfolio snapshots and analytics results across teams has never been easier.

These capabilities and other recently released Edge Platform functionality will be on display at next week’s SFVegas 2022 conference, where RiskSpan is a sponsor. RiskSpan will be featured at Booth 38 in the main exhibition hall. RiskSpan professionals will also be available to respond to questions on July 19th following their panels, “Market Beat: Mortgage Servicing Rights” and “Technology Trends in Securitization.”


About RiskSpan, Inc. 

RiskSpan offers cloud-native SaaS analytics for on-demand market risk, credit risk, pricing and trading. With our data science experts and technologists, we are the leader in data as a service and end-to-end solutions for loan-level data management and analytics.

Our mission is to be the most trusted and comprehensive source of data and analytics for loans and structured finance investments.

Rethink loan and structured finance data. Rethink your analytics. Learn more at www.riskspan.com.


Why Accurate Loan Pool and MSR Cost Forecasting Requires Loan-by-Loan Analytics

When it comes to forecasting loan pool and MSR cash flows, the practice of creating “rep lines,” or cohorts, of loans with similar characteristics for analytical purposes has its roots in the Agency MBS market. One of the most attractive and efficient features of Agencies is the TBA market. This market allows originators and issuers to sell large pools of mortgages that have not even been originated yet. This is possible because all parties understand what these future loans will look like. These loans will all have enough in common as to be effectively interchangeable with one another.  

Institutions that perform the servicing on such loans may reasonably feel they can extend the TBA logic to their own analytics. Instead of analyzing a hundred similar loans individually, why not just lump them into one giant meta-loan? Sum the balances, weight-average the rates, terms, and other features, and you’re good to go. 

Why the industry still resorts to loan cohorting when forecasting loan pool and MSR cash flows

The simplest explanation for cohort-level analytics lies in its simplicity. Rep lines amount to giant simplifying assumptions. They generate fewer technological constraints than a loan-by-loan approach does. Condensing an entire loan portfolio down to a manageable number of rows requires less computational capacity. This takes on added importance when dealing with on-premise software and servers. It also facilitates the process of assigning performance and cost assumptions. 

What is more, as OAS modeling has evolved to dominate the loans and MSR landscape, the stratification approach necessary to run Monte Carlo and other simulations lends itself to cohorting. Lumping loans into like groups also greatly simplifies the process of computing hedging requirements. 

Advantages of loan-level over cohorting when forecasting cash flows

Treating loan and MSR portfolios like TBA pools, however, has become increasingly problematic as these portfolios have grown more heterogeneous. Every individual loan has a story. Even loans that resemble each other in terms of rate, credit score, LTV, DTI, and documentation level have unique characteristics. Some of these characteristics – climate risk, for example – are not easy to bucket. Lumping similar loans into cohorts also runs the risk of underestimating tail risk. Extraordinarily high servicing/claims costs on just one or two outlier loans on a bid tape can be enough to adversely affect the yield of an entire deal. 
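A small numeric illustration of that tail-risk point: a rep line's average servicing cost can look benign while one or two outlier loans dominate the economics. All figures below are hypothetical.

```python
# Hypothetical 100-loan rep line: 98 performing loans plus two seriously
# delinquent outliers. The cohort average hides where the cost really sits.
costs = [65] * 98 + [4_000, 6_500]        # annual servicing cost per loan, USD
avg = sum(costs) / len(costs)
tail_share = (4_000 + 6_500) / sum(costs)
print(f"rep-line average cost: ${avg:,.0f}/loan")     # ~$169/loan
print(f"cost share of 2 outliers: {tail_share:.0%}")  # ~62%
```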

Conversely, looking at each loan individually facilitates the analysis of portfolios with expanded credit boxes. Non-banks, which do not usually have the benefit of “knowing” their servicing customers through depository or other transactional relationships, are particularly reliant on loan-level data to understand individual borrower risks, particularly credit risks. Knowing the rate, LTV, and credit score of a bundled group of loans may be sufficient for estimating prepayment risk. But only a more granular, loan-level analysis can produce the credit analytics necessary to forecast reliably and granularly what a servicing portfolio is really going to cost in terms of collections, loss mitigation, and claims expenses.  

Loan-level analysis also eliminates the limitations inherent in stratification and facilitates portfolio composition analysis. Slicing and dicing techniques are much more simply applied to loans individually than to cohorts. Looking at individual loans also reduces the risk of overrides and lost visibility into convexity pockets. 


Potential challenges and other considerations 

So why hasn’t everyone jumped onto the loan-level bandwagon when forecasting loan pool and MSR cash flows? In short, it’s harder. Resistance to any new process can be expected when existing aggregation regimes appear to be working fine. Loan-level data management requires more diligence in automated processes. It also requires the data related to each individual loan to be subjected to QC and monitoring. Daily hedging and scenario runs tend to focus more on speed than on accuracy at the macro level. Some may question whether the benefit of identifying the most significant loan-level pickups actually justifies the cost of such a granular, case-by-case regime. 

Rethink. Why now? 

Notwithstanding these challenges, there has never been a better time for loan and MSR investors to abandon cohorting and fully embrace loan-level analytics when forecasting cash flows. The emergence of cloud-native technology and enhanced database and warehouse infrastructure, along with the ability to outsource hosting and computational requirements to third parties, creates practically limitless scalability. 

The barriers between loan and MSR experts and IT professionals have never been lower. This, combined with the emergence of a big data culture in an increasing number of organizations, has brought the granular daily analysis promised by loan-level analytics tantalizingly within reach.  

 

For a deeper dive into loan and MSR cost forecasting, view our webinar, “How Much Will That MSR Portfolio Really Cost You?”

 


Webinar Recording: How Much Will That MSR Portfolio Really Cost You?

Recorded: June 8th | 1:00 p.m. ET

Accurately valuing a mortgage servicing rights portfolio requires accurately projecting MSR cash flows. And accurately projecting MSR cash flows requires a reliable forecast of servicing costs. Trouble is, servicing costs vary extensively from loan to loan. While the marginal cost of servicing a loan that always pays on time is next to nothing, seriously delinquent loans can easily cost hundreds, if not thousands, of dollars per year.

The best way to account for this is to forecast and assign servicing costs at the loan level – a once infeasible concept that cloud-native technology has now brought within reach. Our panelists present a novel, granular approach to servicing cost analytics and how to get to a truly loan-by-loan MSR valuation (without resorting to rep lines).

 

Featured Speakers

Venkat Mullur

SVP, Capital Markets, Ocwen

Paul Gross

Senior Quantitative Analyst, New Residential Investment Corp.

Dan Fleishman

Managing Director, RiskSpan

Joe Makepeace

Director, RiskSpan


Webinar: Tailoring Stress Scenarios to Changing Risk Environments

July 13th | 1:00 p.m. ET

Designing market risk stress scenarios is challenging because of the disparate ways in which various risk factors impact different asset classes. No two events are exactly alike, and the Covid-19 pandemic and the Russian invasion of Ukraine each provide a case study for risk managers seeking to incorporate events without precise precedents into existing risk frameworks.
 
Join RiskSpan’s Suhrud Dagli and Martin Kindler on Wednesday, June 15th at 1 p.m. ET as they illustrate an approach for correlating rates, spreads, commodity prices and other risk factors to analogous historical geopolitical disruptions and other major market events. Market risk managers will receive an easily digestible tutorial on the math behind how to create probability distributions and reliably model how such events are most likely to impact a portfolio.

 

Featured Speakers

Suhrud Dagli

Co-Founder and CIO, RiskSpan


Martin Kindler

Managing Director, RiskSpan

