
Bumpy Road Ahead for GNMA MBS?

In a recent webinar, RiskSpan’s Fowad Sheikh engaged in a robust discussion with two of his fellow industry experts, Mahesh Swaminathan of Hilltop Securities and Mike Ortiz of DoubleLine Group, to address the likely road ahead for Ginnie Mae securities performance.


The panel sought to address the following questions:

  • How will the forthcoming, more stringent originator/servicer financial eligibility requirements affect origination volumes, buyouts, and performance?
  • Who will fill the vacuum left by Wells Fargo’s exiting the market?
  • What role will falling prices play in delinquency and buyout rates?
  • What will be the impact of potential Fed MBS sales?

This post summarizes some of the group’s key conclusions. A recording of the webinar in its entirety is available here.


Wells Fargo’s Departure

To understand the likely impact of Wells Fargo’s exit, it is first instructive to understand the declining market share of banks overall in the Ginnie Mae universe. As the following chart illustrates, banks as a whole account for just 11 percent of Ginnie Mae originations, down from 39 percent as recently as 2015.

Drilling down further, the chart below plots Wells Fargo’s Ginnie Mae share (the green line) relative to the rest of the market. As the chart shows, Wells Fargo accounts for just 3 percent of Ginnie Mae originations today, compared to 15 percent in 2015. This trend of Wells Fargo’s declining market share extends all the way back to 2010, when it accounted for some 30 percent of Ginnie originations.

As the second chart below indicates, Wells Fargo’s market share, even among banks, has also been on a steady decline.


Three percent of the overall market is meaningful but not likely to be a game changer either in terms of origination trends or impact on spreads. Wells Fargo, however, continues to have an outsize influence in the spec pool market. The panel hypothesized that Wells’s departure from this market could open the door to other entities claiming that market share. This could potentially affect prepayment speeds – especially if Wells is replaced by non-bank servicers, which the panel felt was likely given the current non-bank dominance of the top 20 (see below) – since Wells prepays have traditionally been slightly better than the broader market.

The panel raised the question of whether the continuing bank retreat from Ginnie Mae originations would adversely affect loan quality. As a basis for this concern, they cited the generally lower FICO scores and higher LTVs that characterize non-bank-originated Ginnie Mae mortgages (see below).

These data notwithstanding, the panel asserted that any changes to credit quality would be restricted to the margins. Non-bank servicers originate a higher percentage of lower-credit-quality loans (relative to banks) not because non-banks are actively seeking those borrowers out and eschewing higher-credit-quality borrowers. Rather, banks tend to restrict themselves to borrowers with higher credit profiles. Non-banks will be more than happy to lend to these borrowers as banks continue to exit the market.

Effect of New Eligibility Requirements

The new capital requirements, which take effect a year from now, are likely to be less punitive than they appear at first glance. With the exception of certain monoline entities – say, those with almost all of their assets concentrated in MSRs – the overwhelming majority of Ginnie Mae issuers (banks and non-banks alike) are going to be able to meet them with little if any difficulty.

Ginnie Mae has stated that, even if the new requirements went into effect tomorrow, 95 percent of its non-bank issuers would qualify. Consequently, the one-year compliance period should open the door for a fairly smooth transition.

To the extent Ginnie Mae issuers are unable to meet the requirements, a consolidation of non-bank entities is likely in the offing. Given that these institutions will likely be significant MSR investors, the potential increase in MSR sales could impact MSR multiples and potentially disrupt the MSR market, at least marginally.

Potential Impacts of Negative HPA

Ginnie Mae borrowers tend to be more highly leveraged than conventional borrowers. FHA borrowers can start with LTVs as high as 97.5 percent. VA borrowers, once the VA guarantee fee is rolled in, often have LTVs in excess of 100 percent. Similar characteristics apply to USDA loans. Consequently, borrowers whose loans were originated in the past two years are more likely to default as they watch their properties go underwater. This is potentially good news for investors in discount coupons (i.e., investors who benefit from faster prepay speeds) because these delinquent loans will be bought out quite early in their expected lives.

More seasoned borrowers, in contrast, have experienced considerable positive HPA in recent years. The coming forecasted decline should not materially impact these borrowers’ performance. Similarly, if home price depreciation (HPD) in 2023 proves to be mild, then a sharp uptick in delinquencies is unlikely, regardless of loan vintage or LTV. Most homeowners make mortgage payments because they wish to continue living in their house and do not seriously consider strategic defaults. During the financial crisis, most borrowers continued making good on their mortgage obligations even as their LTVs went as high as the 150s.

Further, the HPD we are likely to encounter next year likely will not have the same devastating effect as the HPD wave that accompanied the financial crisis. Loans on the books today are markedly different from loans then. Ginnie Mae loans that went bad during the crisis disproportionately included seller-financed, down-payment-assistance loans and other programs lacking in robust checks and balances. Ginnie Mae has instituted more stringent guidelines in the years since to minimize the impact of bad actors in these sorts of programs.

This all assumes, however, that the job market remains robust. Should the looming recession lead to widespread unemployment, that would have a far more profound impact on delinquencies and buyouts than would HPD.

Fed Sales

The Fed’s holdings (as of 9/21, see chart below) are concentrated around 2 percent and 2.5 percent coupons. This raises the question of what the Fed’s strategy is likely to be for unwinding its Ginnie Mae position.

Word on the street is that Fed sales are highly unlikely to happen in 2022. Any sales in 2023, if they happen at all, are not likely before the second half of the year. The panel opined that the composition of these sales is likely to resemble the composition of the Fed’s existing book – i.e., mostly 2s, 2.5s, and some 3s. They have the capacity to take a more sophisticated approach than a simple pro-rata unwinding. Whether they choose to pursue that is an open question.

The Fed was a largely non-economic buyer of mortgage securities. There is every reason to believe that it will be a non-economic seller, as well, when the time comes. The Fed’s trading desk will likely reach out to the Street, ask for inquiry, and seek to pursue an approach that is least disruptive to the mortgage market.

Conclusion

On closer consideration, many of the macro conditions (Wells’s exit, HPD, enhanced eligibility requirements, and pending Fed sales) that would seem to portend an uncertain and bumpy road for Ginnie Mae investors may turn out to be more benign than feared.

Conditions remain unsettled, however, and these and other factors certainly bear watching as Ginnie Mae market participants seek to plot a prudent course forward.


Optimizing Analytics Computational Processing 

We met with RiskSpan’s Head of Engineering and Development, Praveen Vairavan, to understand how his team set about optimizing analytics computational processing for a portfolio of 4 million mortgage loans using a cloud-based compute farm.

This interview dives deeper into a case study we discussed in a recent interview with RiskSpan’s co-founder, Suhrud Dagli.

Here is what we learned from Praveen. 



Could you begin by summarizing for us the technical challenge this optimization was seeking to overcome? 

PV: The main challenge related to an investor’s MSR portfolio, specifically the volume of loans we were trying to run. The client has close to 4 million loans spread across nine different servicers. This presented two related but separate sets of challenges. 

The first set of challenges stemmed from needing to consume data from different servicers whose file formats not only differed from one another but also often lacked internal consistency. By that, I mean even the file formats from a single given servicer tended to change from time to time. This required us to continuously update our data mapping and (because the servicer reporting data is not always clean) modify our QC rules to keep up with evolving file formats.  
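A minimal sketch of what this kind of per-servicer mapping and QC layer can look like (the servicer names, field names, and thresholds here are all illustrative assumptions, not the actual mappings):

```python
# Sketch of a per-servicer field mapping plus simple QC rules.
# Servicer labels, field names, and thresholds are illustrative.

SERVICER_MAPS = {
    "servicer_a": {"LOAN_ID": "loan_id", "CURR_UPB": "upb", "NOTE_RATE": "rate"},
    "servicer_b": {"LoanNumber": "loan_id", "UnpaidBalance": "upb", "IntRate": "rate"},
}

def normalize_record(raw: dict, servicer: str) -> dict:
    """Rename servicer-specific fields to the internal schema."""
    mapping = SERVICER_MAPS[servicer]
    return {mapping[k]: v for k, v in raw.items() if k in mapping}

def passes_qc(rec: dict) -> bool:
    """Basic sanity checks; failing records go to an exception queue."""
    return rec["upb"] >= 0 and 0.0 <= rec["rate"] <= 15.0
```

When a servicer changes its file layout, only that servicer’s mapping entry changes; the downstream analytics are insulated from the format churn.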

The second challenge relates to the sheer volume of compute power necessary to run stochastic paths of Monte Carlo rate simulations on 4 million individual loans and then discount the resulting cash flows based on option adjusted yield across multiple scenarios. 

And so you have 4 million loans times multiple paths times four cases (one basic cash flow case, one basic option-adjusted case, one up case, and one down case), and you can see how quickly the workload adds up. And all this needed to happen on a daily basis.
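The arithmetic behind that daily workload, using the 50 paths and 4 scenarios cited later in the interview, works out as follows:

```python
# Back-of-envelope for the daily workload described above.
loans = 4_000_000
paths = 50       # Monte Carlo rate paths, per the figures later in the interview
scenarios = 4    # base cash flow, base option-adjusted case, up case, down case
cash_flow_runs = loans * paths * scenarios
print(f"{cash_flow_runs:,} loan-path-scenario runs per day")
# prints: 800,000,000 loan-path-scenario runs per day
# versus 75,000 rep lines * 50 * 4 = 15,000,000 runs under the rep-line approach
```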

To help minimize the computing workload, our client had been running all these daily analytics at a rep-line level—stratifying and condensing everything down to between 70,000 and 75,000 rep lines. This alleviated the computing burden but at the cost of decreased accuracy because they couldn’t look at the loans individually. 

What technology enabled you to optimize the computational process of running 50 paths and 4 scenarios for 4 million individual loans?

PV: With the cloud, you have the advantage of spawning a bunch of servers on the fly (just long enough to run all the necessary analytics) and then shutting them down once the analytics are done.

This sounds simple enough. But in order to use that level of compute servers, we needed to figure out how to distribute the 4 million loans across all these different servers so they could run in parallel (and then get the results back so we could aggregate them). We did this using what is known as a MapReduce approach.

Say we want to run a particular cohort of this dataset with 50,000 loans in it. If we were using a single server, it would run them one after the other – generate all the cash flows for loan 1, then for loan 2, and so on. As you would expect, that is very time-consuming. So, we decided to break down the loans into smaller chunks. We experimented with various chunk sizes. We started with 1,000 – we ran 50 chunks of 1,000 loans each in parallel across the AWS cloud and then aggregated all those results.  

That was an improvement, but the 50 parallel jobs were still taking longer than we wanted. And so, we experimented further before ultimately determining that the “sweet spot” was jobs of 100 loans each.

Only in the cloud is it practical to run that many servers in parallel. But this of course raises the question: Why not just go all the way and run 50,000 parallel jobs of one loan each? Well, as it happens, running an excessively large number of jobs carries overhead burdens of its own. And we found that the extra time needed to manage that many jobs more than offset the compute time savings. And so, using a fair bit of trial and error, we determined that 100-loan jobs maximized the runtime savings without creating an overly burdensome number of jobs running in parallel.
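The chunk-and-aggregate pattern described above can be sketched as follows. Threads stand in here for the fleet of cloud servers, and `run_cash_flows` is a placeholder for the real analytics engine:

```python
# Minimal sketch of the "map" (chunk and dispatch) and "reduce"
# (flatten results) approach described above. Threads stand in for
# cloud servers; run_cash_flows is a placeholder for the real engine.
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 100  # the empirically determined "sweet spot"

def run_cash_flows(chunk):
    """Placeholder per-loan analytics: returns one result per loan."""
    return [loan["upb"] * 0.01 for loan in chunk]

def chunked(seq, size):
    """Yield fixed-size slices of the loan list."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def price_portfolio(loans):
    # Map: dispatch fixed-size chunks to workers in parallel
    with ThreadPoolExecutor(max_workers=8) as pool:
        chunk_results = pool.map(run_cash_flows, chunked(loans, CHUNK_SIZE))
    # Reduce: flatten per-chunk results back into one loan-level list
    return [r for chunk in chunk_results for r in chunk]
```

In production each chunk would go to its own short-lived compute node rather than a local thread, but the dispatch/aggregate structure is the same.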


You mentioned the challenge of having to manage a large number of parallel processes. What tools do you employ to work around these and other bottlenecks? 

PV: The most significant bottleneck associated with this process is finding the “sweet spot” number of parallel processes I mentioned above. As I said, we could theoretically break it down into 4 million single-loan processes all running in parallel. But managing this amount of distributed computation, even in the cloud, invariably creates a degree of overhead which ultimately degrades performance. 

And so how do we find that sweet spot – how do we optimize the number of servers on the distributed computation engine? 

As I alluded to earlier, the process involved an element of trial and error. But we also developed some home-grown tools (and leveraged some tools available in AWS) to help us. These tools enable us to visualize computation server performance – how much of a load they can take, how much memory they use, etc. These helped eliminate some of the optimization guesswork.   

Is this optimization primarily hardware based?

PV: AWS provides essentially two “flavors” of machines. One “flavor” enables you to take in a lot of memory. This enables you to keep a whole lot of loans in memory so it will be faster to run. The other flavor of hardware is more processor based (compute intensive). These machines provide a lot of CPU power so that you can run a lot of processes in parallel on a single machine and still get the required performance. 

We have done a lot of R&D on this hardware. We experimented with many different instance types to determine which works best for us and optimizes our output: lots of memory but smaller CPUs vs. CPU-intensive machines with less (but still a reasonable amount of) memory.

We ultimately landed on a machine with 96 cores and about 240 GB of memory. This was the balance that enabled us to run portfolios at speeds consistent with our SLAs. For us, this translated to a server farm of 50 machines running 70 processes each, which works out to 3,500 workers helping us to process the entire 4-million-loan portfolio (across 50 Monte Carlo simulation paths and 4 different scenarios) within the established SLA.  

What software-based optimization made this possible? 

PV: Even optimized in the cloud, hardware can get pricey – on the order of $4.50 per hour in this example. And so, we supplemented our hardware optimization with some software-based optimization as well. 

We were able to optimize our software to a point where we could use a machine with just 30 cores (rather than 96) and 64 GB of RAM (rather than 240). Using 80 of these machines running 40 processes each gives us 2,400 workers (rather than 3,500). Software optimization enabled us to run the same number of loans in roughly the same amount of time (slightly faster, actually) but using fewer hardware resources. And our cost to use these machines was just one-third what we were paying for the more resource-intensive hardware. 

All this, and our compute time actually declined by 10 percent.  

The software optimization that made this possible has two parts: 

The first part (as we discussed earlier) is using the MapReduce methodology to break down jobs into optimally sized chunks. 

The second part involved optimizing how we read loan-level information into the analytical engine. Reading in loan-level data (especially for 4 million loans) is a huge bottleneck. We got around this by implementing a “pre-processing” procedure. For each individual servicer, we created a set of optimized loan files that can be read and rendered “analytics ready” very quickly. Because the loan information has been “pre-processed” into a format the engine can easily digest, it can be consumed and used for analytics immediately, without having to read all the loan tapes and convert them on the fly.
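A stripped-down illustration of such a pre-processing step; the file format and field layout here are assumptions for the sketch, not the actual implementation:

```python
# One-time pre-processing: convert mapped, QC'd loan records into a
# compact binary file the engine can load without re-parsing raw tapes.
# The pickle format and field layout are assumptions for this sketch.
import pickle

def preprocess_tape(records: list, out_path: str) -> None:
    """Keep only the fields the engine needs, in a fixed order, and dump."""
    slim = [(r["loan_id"], r["upb"], r["rate"], r["age"]) for r in records]
    with open(out_path, "wb") as f:
        pickle.dump(slim, f, protocol=pickle.HIGHEST_PROTOCOL)

def load_analytics_ready(path: str) -> list:
    """Fast load path used by the daily analytics run."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

The expensive parsing and mapping happens once per tape delivery; the daily run only ever touches the fast-loading files.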

This software-based optimization is what ultimately enabled us to optimize our hardware usage (and save time and cost in the process).  

Contact us to learn more about how we can help you optimize your mortgage analytics computational processing.


Rethink Analytics Computational Processing – Solving Yesterday’s Problems with Today’s Technology and Access 

We sat down with RiskSpan’s co-founder and chief technology officer, Suhrud Dagli, to learn more about how one mortgage investor successfully overhauled its analytics computational processing. The investor migrated from a daily pricing and risk process that relied on tens of thousands of rep lines to one capable of evaluating each of the portfolio’s more than three-and-a-half million loans individually (and how they actually saved money in the process).  

Here is what we learned. 


Could you start by talking a little about this portfolio — what asset class and what kind of analytics the investor was running? 

SD: Our client was managing a large investment portfolio of mortgage servicing rights (MSR) assets, residential loans and securities.  

The investor runs a battery of sophisticated risk management analytics that rely on stochastic modeling. Option-adjusted spread, duration, convexity, and key rate durations are calculated based on more than 200 interest rate simulations. 


Why was the investor running their analytics computational processing using a rep line approach? 

SD: They used rep lines for one main reason: They needed a way to manage computational loads on the server and improve calculation speeds. Secondarily, organizing the loans in this way simplified their reporting and accounting requirements to a degree (loans financed by the same facility were grouped into the same rep line).  

This approach had some downsides. Pooling loans by finance facility sometimes caused loans with different balances, LTVs, credit scores, etc., to be grouped into the same rep line. As a result, a single set of prepayment and default assumptions was applied to every loan in a rep line, even when those assumptions differed from the ones that likely would have been applied had the loans been evaluated individually.
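A toy example of the distortion being described: two dissimilar loans collapsed into one rep line receive a single blended assumption. The CPR rule below is purely hypothetical:

```python
# Toy illustration: two dissimilar loans collapsed into one rep line
# receive one blended prepayment assumption. The CPR rule is invented.
def cpr_assumption(ltv: float) -> float:
    """Hypothetical rule: higher-LTV loans assumed to prepay more slowly."""
    return 0.12 if ltv <= 0.80 else 0.06

loans = [{"upb": 100_000, "ltv": 0.60}, {"upb": 100_000, "ltv": 0.95}]

# Loan-level: each loan gets its own assumption -> [0.12, 0.06]
loan_level = [cpr_assumption(l["ltv"]) for l in loans]

# Rep line: one balance-weighted LTV (0.775) drives a single assumption
# (0.12) for both loans, overstating prepays on the 0.95-LTV loan
blended_ltv = sum(l["upb"] * l["ltv"] for l in loans) / sum(l["upb"] for l in loans)
rep_line_assumption = cpr_assumption(blended_ltv)
```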

The most obvious solution to this would seem to be one that disassembles the finance facility groups into their individual loans, runs all those analytics at the loan level, and then re-aggregates the results into the original rep lines. Is this sort of analytics computational processing possible without taking all day and blowing up the server? 

SD: That is effectively what we are doing. The process is not as speedy as we’d like it to be (and we are working on that). But we have worked out a solution that does not overly tax computational resources.

The analytics computational processing we are implementing ignores the rep line concept entirely and just runs the loans. The scalability of our cloud-native infrastructure enables us to take the three-and-a-half million loans and bucket them equally for computation purposes. We run a hundred loans on each processor and get back loan-level cash flows and then generate the output separately, which brings the processing time down considerably. 


So we have a proof of concept that this approach to analytics computational processing works in practice for running pricing and risk on MSR portfolios. Is it applicable to any other asset classes?

SD: The underlying principles that make analytics computational processing possible at the loan level for MSR portfolios apply equally well to whole loan investors and MBS investors. In fact, the investor in this example has a large whole-loan portfolio alongside its MSR portfolio. And it is successfully applying these same tactics on that portfolio.   

An investor in any mortgage asset benefits from the ability to look at and evaluate loan characteristics individually. The results may need to be rolled up and grouped for reporting purposes. But being able to run the cash flows at the loan level ultimately makes the aggregated results vastly more meaningful and reliable. 

A loan-level framework also affords whole-loan and securities investors the ability to be sure they are capturing the most important loan characteristics and are staying on top of how the composition of the portfolio evolves with each day’s payoffs. 

ESG factors are an important consideration for a growing number of investors. Only a loan-level approach makes it possible for these investors to conduct the kind of property- and borrower-level analyses needed to know whether they are working toward meeting their ESG goals. It also makes it easier to spot areas of geographic concentration risk, which simplifies climate risk management to some degree.

Say I am a mortgage investor who is interested in moving to loan-level pricing and risk analytics. How do I begin? 

SD: Three things:

  1. It begins with having the data. Most investors have access to loan-level data. But it’s not always clean. This is especially true of origination data. If you’re acquiring a pool – be it a seasoned pool or a pool right after origination – you don’t have the best origination data to drive your model. You also need a data store that can generate loan-level output to drive your analytics and models.
  2. The second factor is having models that work at the loan level – models that have been calibrated using loan-level performance and that are capable of generating loan-level output. One of the constraints of several existing modeling frameworks developed by vendors is they were created to run at a rep line level and don’t necessarily work very well for loan-level projections.  
  3. The third thing you need is a compute farm. It is virtually impossible to run loan-level analytics if you’re not on the cloud because you need to distribute the computational load. And your computational distribution requirements will change from portfolio to portfolio based on the type of analytics that you are running, based on the types of scenarios that you are running, and based on the models you are using. 

The cloud is needed not just for CPU power but also for storage. This is because once you go to the loan level, every loan’s data must be made available to every processor that’s performing the calculation. This is where having the kind of shared databases that are native to a cloud infrastructure becomes vital. You simply can’t replicate it using an on-premises setup of computers in your office or in your own data center.

So, 1) get your data squared away, 2) make sure you’re using models that are optimized for the loan level, and 3) max out your analytics computational processing power by migrating to cloud-native infrastructure.

Thank you, Suhrud, for taking the time to speak with us.


Quantifying the Impact of Climate Risk on Housing Finance 

When people speak of the risk climate poses to housing, they typically do so in qualitative and relative terms. A Florida home is at greater risk of hurricane damage than an Iowa home. Wildfires generally threaten homes in northern California more than they threaten homes in New Hampshire. And because of climate change, the risks these and other perils pose to any individual geographic area are largely viewed as higher than they were 25 years ago.

People feel comfortable speaking in these general terms. But qualitative estimates are of little practical use to mortgage investors seeking to fine-tune their pricing, prepayment, and default models. These analytical frameworks require not just reliable data but also the means to translate those data into actionable risk metrics.

Physical risks and transition risks

Broadly speaking, climate risk manifests itself as a combination of physical risks and transition risks. Physical risks include “acute” disaster events, such as hurricanes, tornadoes, wildfires, and floods. Chronic risks, such as sea level rise, extreme temperatures, and drought, are experienced over a longer period. Transition risks relate to costs resulting from regulations promulgated to combat climate change and from the need to invest in new technologies designed either to combat climate change directly or mitigate its effects.

Some of the ways in which these risks impact mortgage assets are self-evident. Acute events that damage or destroy homes have an obvious effect on the performance of the underlying mortgages. Other mechanisms are more latent but no less real. Increasing costs of homeownership, caused by required investment in climate-change-mitigating technologies, can be a source of financial stress for some borrowers and affect mortgage performance. Likewise, as flood and other hazard insurance premiums adjust to better reflect the reality of certain geographies’ increasing exposure to natural disaster risk, demand for real estate in these areas could decrease, increasing the pressure on existing homeowners who may not have much cushion in their LTVs to begin with.

Mortgage portfolio risk management

At the individual loan level, these risks translate to higher delinquency risks, probability of default, loss given default, spreads, and advance expenses. At the portfolio level, the impact is felt in asset valuation, concentration risk (what percentage of homes in the portfolio are located in high-risk areas), VaR, and catastrophic tail risk.

VaR can be computed using natural hazard risk models designed to forecast the probability of individual perils for a given geography and using that probability to compute the worst property loss (total physical loss and loss net of insurance proceeds) that can be expected during the portfolio’s expected life at the 99 percent (or 95 percent) confidence level. The following figure illustrates how this works for a portfolio covering multiple geographies with varying types and likelihoods of natural hazard risk.
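A stylized version of that simulation-based VaR calculation might look like the following; all peril probabilities and loss severities here are invented inputs:

```python
# Stylized version of the approach described above: draw many annual
# scenarios, sum peril losses across properties, and read off the
# 99th-percentile loss. All probabilities and severities are invented.
import random

random.seed(7)  # reproducible illustration

# (property value, annual peril probability, loss severity given an event)
portfolio = [
    (400_000, 0.02, 0.60),    # e.g., coastal hurricane exposure
    (250_000, 0.001, 0.40),   # e.g., low-hazard inland property
] * 500                       # 1,000 properties total

def simulate_annual_loss() -> float:
    """One simulated year: each property is hit (or not) independently."""
    return sum(value * severity
               for value, prob, severity in portfolio
               if random.random() < prob)

losses = sorted(simulate_annual_loss() for _ in range(5_000))
var_99 = losses[int(0.99 * len(losses))]   # 99 percent confidence level
```

The same machinery yields the 95 percent figure by changing the percentile, and losses net of insurance can be reflected by netting proceeds out of each property’s severity.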

[Figure: Climate risk dashboard, acute risk]

These analyses can look at the exposure of an entire portfolio to all perils combined:    

[Figure: Climate risk dashboard, U.S.]

Or they can look at the exposure of a single geographic area to one peril in particular:

[Figure: Climate risk dashboard, Florida]

Accounting for climate risk when bidding on whole loans

The risks quantified above pertain to properties that secure mortgages and therefore only indirectly to the mortgage assets themselves. Investors seeking to build whole-loan portfolios that are resilient to climate risk should consider climate risk in the context of other risk factors. Such a “property-level climate risk” approach takes into account factors such as:

  • Whether the property is insured against the peril in question
  • The estimated expected risk (and tail risk) of property damage from the peril in question
  • Loan-to-value ratio

The most prudent course of action involves a screening mechanism that includes pricing and concentration limits tied to LTV ratios. Investors may choose to invest in areas of high climate risk but only in loans with low LTV ratios. Bids should be adjusted to account for climate risk, but the amount of the adjustment can be a function of the LTV. Concentration limits should be adjusted accordingly:
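One possible shape for such a screening grid; every number below is hypothetical:

```python
# Illustrative screening grid tying bid adjustments and concentration
# limits to a 1-5 climate risk score and LTV; all values hypothetical.
def bid_adjustment_bps(climate_score: int, ltv: float) -> float:
    """Price concession (bps) grows with climate risk, scaled by LTV."""
    base = {1: 0, 2: 5, 3: 15, 4: 30, 5: 50}[climate_score]
    ltv_multiplier = 0.5 if ltv <= 0.60 else (1.0 if ltv <= 0.80 else 1.5)
    return base * ltv_multiplier

def concentration_limit(climate_score: int) -> float:
    """Max share of portfolio UPB allowed in each climate-risk bucket."""
    return {1: 1.00, 2: 0.50, 3: 0.25, 4: 0.10, 5: 0.05}[climate_score]
```

Under this grid, a high-risk (score 5), high-LTV loan draws a 75 bps concession and its bucket is capped at 5 percent of the portfolio, while a low-risk loan trades without adjustment.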

[Figure: Climate risk pricing adjustments]

Conclusion

When assessing the impact of climate risk on a mortgage portfolio, investors need to consider and seek to quantify not just how natural hazard events will affect home values but also how they will affect borrower behavior, specifically in terms of prepayments, delinquencies, and defaults.

We are already beginning to see climate factors working their way into the secondary mortgage markets via pricing adjustments and concentration screening. It is only a matter of time before these considerations move further up into the origination process and begin to manifest themselves in pricing and underwriting policy (as flood insurance requirements already have today).

Investors looking for a place to start can begin by incorporating a climate risk score into their existing credit box/pricing grid, as illustrated above. This will help provide at least a modicum of comfort to investors that they are being compensated for these hidden risks and (at least as important) will ensure that portfolios do not become overly concentrated in at-risk areas.


“Reject Inference” Methods in Credit Modeling: What are the Challenges?

Reject inference is a popular concept that has been used in credit modeling for decades. Yet, we observe in our work validating credit models that the concept is still dynamically evolving. The appeal of reject inference, whose aim is to develop a credit scoring model utilizing all available data, including that of rejected applicants, is easy enough to grasp. But the technique also introduces a number of fairly vexing challenges.

The technique seeks to rectify a fundamental shortcoming in traditional credit modeling: Models predicting the probability that a loan applicant will repay the loan can be trained on historical loan application data with a binary variable representing whether a loan was repaid or charged off. This information, however, is only available for accepted applications. And many of these applications are not particularly recent. This limitation results in a training dataset that may not be representative of the broader loan application universe.

Credit modelers have devised several techniques for getting around this data representativeness problem and increasing the number of observations by inferring the repayment status of rejected loan applications. These techniques, while well intentioned, are often applied empirically and lack a deeper theoretical basis. They often rest on “hidden” modeling assumptions whose reasonableness is not fully investigated. Additionally, no theoretical properties of the coefficient estimates or predictions are guaranteed.

This article summarizes the main challenges of reject inference that we have encountered in our model validation practice.


Selecting the Right Reject Inference Method

Many approaches exist for reject inference, none of which is clearly and universally superior to all the others. Empirical studies have been conducted to compare methods and pick a winner, but the conclusions of these studies are often contradictory. Some authors argue that reject inference cannot improve scorecard models[1] and flatly recommend against its use. Others posit that certain techniques can outperform others[2] based on empirical experiments. The results of these experiments, however, tend to be data dependent. Some of the most popular approaches include the following:

  • Ignoring rejected applications: The simplest approach is to develop a credit scoring model based only on accepted applications. The underlying assumption is that rejected applications can be ignored and that the “missingness” of this data from the training dataset can be classified as missing at random. Supporters of this method point to the simplicity of the implementation, clear assumptions, and good empirical results. Others argue that the rejected applications cannot be dismissed simply as random missing data and thus should not be ignored.
  • Hard cut-off method: In this method, a model is first trained using only accepted application data. This trained model is then used to predict the probabilities of charge-off for the rejected applications. A cut-off value is then chosen. Hypothetical loans from rejected applications with probabilities higher than this cut-off value are considered charged off. Hypothetical loans from the remaining applications are assumed to be repaid. The specified model is then re-trained using a dataset including both accepted and rejected applications.
  • Fuzzy augmentation: Similar to the hard cut-off method, fuzzy augmentation begins by training the model on accepted applications only. The resulting model with estimated coefficients is then used to predict charge-off probabilities for rejected applications. Data from rejected applications is then duplicated and a repaid or charged-off status is assigned to each. The specified model is then retrained on the augmented dataset—including accepted applications and the duplicated rejects. Each rejected application is weighted by either a) the predicted probability of charge-off if its assigned status is “charged-off,” or b) the predicted probability of it being repaid if its assigned status is “repaid.”
  • Parceling: The parceling method resembles the hard cut-off method. However, rather than classifying all rejects above a certain threshold as charged-off, this method assigns repayment statuses in proportion to the expected “bad” rate (charge-off frequency) at each score. The predicted charge-off probabilities are partitioned into k intervals. Then, for each interval, an assumption is made about the bad rate, and loan applications in each interval are assigned a repayment status randomly according to that bad rate. Bad rates are assumed to be higher in the reject dataset than among the accepted loans. This method treats the missingness as not at random (MNAR), which requires the modeler to supply additional information about the distribution of charge-offs among rejects.
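As an illustration, the hard cut-off method described above can be sketched in a few lines. This is a minimal sketch assuming a pandas/scikit-learn workflow; the column name `charged_off`, the feature list, and the 0.5 cut-off are illustrative choices, not prescribed by any particular implementation.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def hard_cutoff_reject_inference(accepted, rejected, features, cutoff=0.5):
    """Train on accepts, label rejects by predicted charge-off
    probability, then retrain on the combined dataset."""
    # Step 1: fit a preliminary model on accepted applications only
    prelim = LogisticRegression(max_iter=1000)
    prelim.fit(accepted[features], accepted["charged_off"])

    # Step 2: score the rejects and apply the cut-off
    rejected = rejected.copy()
    p_bad = prelim.predict_proba(rejected[features])[:, 1]
    rejected["charged_off"] = (p_bad > cutoff).astype(int)

    # Step 3: retrain the same specification on accepts plus inferred rejects
    combined = pd.concat([accepted, rejected], ignore_index=True)
    final = LogisticRegression(max_iter=1000)
    final.fit(combined[features], combined["charged_off"])
    return final
```

The fuzzy augmentation variant differs only in step 2: each reject appears twice (once per status) with a sample weight equal to the predicted probability of that status.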

Proportion of Accepted Applications to Rejects

An institution with a relatively high percentage of rejected applications will necessarily end up with an augmented training dataset whose quality is heavily dependent on the quality of the selected reject inference method and its implementation. One might argue it is best to limit the proportion of rejected applications to acceptances. The level at which such a cap is established should reflect the “confidence” in the method used. Estimating such a confidence level, however, is a highly subjective endeavor.

The Proportion of Bad Rates for Accepts and Rejects

It is reasonable to assume that the “bad rate,” i.e., proportion of charged-off loans to repaid loans, will be higher among rejected applications. Some modelers set a threshold based on their a priori belief that the bad rate among rejects is at least p-times the bad rate among acceptances. If the selected reject inference method produces a dataset with a bad rate that is perceived to be artificially low, actions are taken to increase the bad rate above some threshold. Identifying where to establish this threshold is notoriously difficult to justify.
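One simple way to enforce such a bad-rate floor is to relabel the riskiest “repaid” rejects until the inferred bad rate reaches p-times the accepts’ bad rate. The function below is a hypothetical sketch of that idea; the relabeling rule and parameter names are our own, not a standard prescription.

```python
import numpy as np

def enforce_bad_rate_floor(reject_labels, reject_p_bad, accept_bad_rate, p=2.0):
    """reject_labels: 0/1 array of inferred statuses (1 = charged off).
    reject_p_bad: model-predicted charge-off probabilities for rejects.
    Flips the riskiest 'repaid' rejects to 'charged off' until the
    reject bad rate is at least p times the accepts' bad rate."""
    labels = np.asarray(reject_labels).copy()
    target = p * accept_bad_rate  # the a priori floor on the reject bad rate
    # visit rejects in descending order of predicted charge-off risk
    for i in np.argsort(-np.asarray(reject_p_bad)):
        if labels.mean() >= target:
            break
        if labels[i] == 0:
            labels[i] = 1
    return labels
```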

Variable Selection

As outlined above, most approaches begin by estimating a preliminary model based on accepted applications only. This model is then used to infer how rejected loans would have performed. The preliminary model is then retrained on a dataset consisting both of actual data from accepted applications and of the inferred data from rejects. This means that the underlying variables themselves are selected based only on the actual loan performance data from accepted applications. The statistical significance of the selected variables might change, however, when moving to the complete dataset. Variable selection is sometimes redone using the complete data. This, however, can lead to overfitting.

Measuring Model Performance

From a model validator’s perspective, an ideal solution would involve creating a control group in which applications would not be scored and filtered and every application would be accepted. Then the discriminating power of a credit model could be assessed by comparing the charge-off rate of the control group with the charge-off rate of the loans accepted by the model. This approach of extending credit indiscriminately is impractical, however, as it would require the lender to engage in some degree of irresponsible lending.

Another approach is to create a test set. The dilemma here is whether to include only accepted applications. A test set that includes only accepted applications will not necessarily reflect the population for which the model will be used. Including rejected applications, however, obviously necessitates the use of reject inference. For all the reasons laid out above, this approach risks overstating the model’s performance because a similar model (trained only on the accepted cases) was used for reject inference.

A third approach that avoids both of these problems involves using information criteria such as AIC and BIC. This, however, is useful only when comparing different models (for model or variable selection). The values of information criteria cannot be interpreted as an absolute measure of performance.

A final option is to consider utilizing several models in production (the main model and challenger models). Under this scenario, each application would be evaluated by a model selected at random. The models can then be compared retroactively by calculating their bad rates on accepted applications after the financed loans mature. Provided that the accept rates are similar, the model with the lowest bad rate is the best.
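The champion/challenger setup described above might be sketched as follows. This is an illustrative skeleton, not a production routing system; the dictionary structure and field names are assumptions made for the example.

```python
import random

def route_application(application, models, rng=random):
    """Pick a model at random and record which one scored the application.
    models: dict mapping a model name to a callable returning accept/decline."""
    name, model = rng.choice(list(models.items()))
    return {"model": name, "accepted": model(application)}

def bad_rates(outcomes):
    """outcomes: list of dicts with 'model', 'accepted', and (for accepted,
    matured loans) 'charged_off'. Returns the bad rate per model."""
    stats = {}
    for o in outcomes:
        if not o["accepted"]:
            continue  # declined applications never mature into loans
        s = stats.setdefault(o["model"], [0, 0])
        s[0] += o["charged_off"]
        s[1] += 1
    return {m: bad / n for m, (bad, n) in stats.items()}
```

Comparing `bad_rates` across models is only meaningful once accept rates are similar, as the text notes; otherwise the stricter model wins trivially.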

Conclusion

Reject inference remains an evolving field in credit modeling. Its ability to improve model performance is still the subject of intense debate. Current results suggest that while reject inference can improve model performance, its application can also lead to overfitting, thus worsening the ability to generalize. The lack of a strong theoretical basis for reject inference methods means that applications of reject inference need to rely on empirical results. Thus, if reject inference is used, key model stakeholders need to possess a deep understanding of the modeled population, have strong domain knowledge, emphasize conducting experiments to justify the applied modeling techniques, and, above all, adopt and follow a solid ongoing monitoring plan.

Doing this will result in a modeling methodology that is most likely to produce reliable outputs for the institution while also satisfying MRM and validator requirements.


[1] https://www.sciencedirect.com/science/article/abs/pii/S0378426603002036

[2] https://economix.fr/pdf/dt/2016/WP_EcoX_2016-10.pdf


Rising Rates; Rising Temperatures: What Higher Interest Rates Portend for Mortgage Climate Risk — An interview with Janet Jozwik  

Janet Jozwik leads RiskSpan’s sustainability analytics (climate risk and ESG) team. She is also an expert in mortgage credit risk and a recognized industry thought leader on incorporating climate risk into credit modeling. We sat down with Janet to get her views on whether the current macroeconomic environment should impact how mortgage investors prioritize their climate risk mitigation strategies.


You contend that higher interest rates are exposing mortgage lenders and investors to increased climate risk. Why is that?

JJ: My concern is primarily around the impact of higher rates on credit risk overall, of which climate risk is merely a subset – a largely overlooked and underappreciated subset, to be sure, and one with potentially devastating consequences, but ultimately one of many. The simple reason is that, because interest rates are up, loans are going to remain on your books longer. The MBA’s recent announcement of refinance applications (and mortgage originations overall) hitting their lowest levels since 2000 is stark evidence of this.

And because these loans are going to be lasting longer, borrowers will have more opportunities to get into trouble (be it a loss of income or a natural disaster) and everybody should be taking credit risk more seriously. One of the biggest challenges posed by a high-rate environment is that borrowers don’t have many of the “outs” available to them that they do when they encounter stress during more favorable macroeconomic environments. They can no longer simply refi into a lower rate. Modification options become more complicated. They might have no option other than to sell the home – and even that isn’t going to be as easy as it was, say, a year ago. So, we’ve entered this phase where credit risk analytics, both at origination and life of loan, really need to be taken seriously. And credit risk includes climate risk.

So longer durations mean more exposure to credit risk – more time for borrowers to run into trouble and experience credit events. What does climate have to do with it? Doesn’t homeowners’ insurance mitigate most of this risk anyway?

JJ: Each additional month or year that a mortgage loan remains outstanding is another month or year that the underlying property is exposed to some form of natural disaster risk (hurricane, flood, wildfire, earthquake, etc.). When you look at a portfolio in aggregate – one whose weighted average life has suddenly ballooned from four years to, say, eight years – it is going to experience more events, more things happening to it. Credit risk is the risk of a borrower failing to make contractual payments. And having a home get blown down or flooded by a hurricane tends to have a dampening effect on timely payment of principal and interest.

As for insurance, yes, insurance mitigates portfolio exposure to catastrophic loss to some degree. But remember that not everyone has flood insurance, and many loans don’t require it. Hurricane-specific policies often come with very high deductibles and don’t always cover all the damage. Many properties lack wildfire insurance or the coverage may not be adequate. Insurance is important and valuable but should not be viewed as a panacea or a substitute for good credit-risk management or taking climate into account when making credit decisions.

But the disaster is going to hit when the disaster is going to hit, isn’t it? How should I be thinking about this if I am a lender who recaptures a considerable portion of my refis? Haven’t I just effectively replaced three shorter-lived assets with a single longer-lived one? Either way, my portfolio’s going to take a hit, right?

JJ: That is true as far as it goes. And in the steady state you are envisioning, one where you’re just churning through your portfolio, prepaying existing loans with refis that look exactly like the loans they’re replacing, then, yes, the risk will be similar, irrespective of expected duration.

But do not forget that each time a loan turns over, a lender is afforded an opportunity to reassess pricing (or even reassess the whole credit box). Every refi is an opportunity to take climate and other credit risks into account and price them in. But in a high-rate environment, you’re essentially stuck with your credit decisions for the long haul.

Do home prices play any role in this?

JJ: Near-zero interest rates fueled a run-up in home prices like nothing we’ve ever seen before. This arguably made disciplined credit-risk management less important because, worst case, all the new equity in a property served as a buffer against loss.

But at some level, we all had to know that these home prices were not universally sustainable. And now that interest rates are back up, existing home prices are suddenly starting to look a little iffy. Suddenly, with cash-out refis off the table and virtually no one in the money for rate and term refis, weighted average lives have nowhere to go but up. This is great, of course, if your only exposure is prepayment risk. But credit risk is a different story.

And so, extremely low interest rates over an extended period played a significant role in unsustainably high home values. But the pandemic had a lot to do with it, as well. It’s well documented that the mass influx of home buyers into cities like Boise from larger, traditionally more expensive markets drove prices in those smaller cities to astronomical levels. Some of these markets (like Boise) have not only reached an equilibrium point but are starting to see property values decline. Lenders with excessive exposure to these traditionally smaller markets that experienced the sharpest home price increases during the pandemic will need to take a hard look at their credit models’ HPI assumptions (in addition to those properties’ climate risk exposure).

What actions should lenders and investors be considering today?

JJ: If you are looking for a silver lining in the fact that origination volumes have fallen off a cliff, it has afforded the market an opportunity to catch its breath and reassess where it stands risk-wise. Resources that had been fully deployed in an effort simply to keep up with the volume can now be reallocated to taking a hard look at where the portfolio stands in terms of credit risk generally and climate risk in particular.

This includes assessing where the risks and concentrations are in mortgage portfolios and, first, making sure not to further exacerbate existing concentration risks by continuing to acquire new assets in overly exposed geographies. Investors may even be wise to go so far as to think about selling certain assets if they feel they have too much risk in problematic areas.

Above all, this is a time when lenders need to be taking a hard look at the fundamentals underpinning their underwriting standards. We are coming up on 15 years since the start of the “Great Recession” – the last time mortgage underwriting was really “tight.” For the past decade, the industry has had nothing but calm waters – rising home values and historically low interest rates. It’s been like tech stocks in the ‘90s. Lenders couldn’t help but make money.

I am concerned that this has allowed complacency to take hold. We’re in a new world now. One with shaky home prices and more realistic interest rates. The temptation will be to loosen underwriting standards in order to wring whatever volume might be available out of the economy. But in reality, lenders need to be doing precisely the opposite. Underwriting standards are going to have to tighten a bit in order to effectively manage the increased credit (and climate) risks inherent to longer-duration lending.

It’s okay for lenders and investors to be taking these new risks on. They just need to be doing it with their eyes wide open and they need to be pricing for it.


How Do You Rate on Fannie Mae’s New Social Index?

Quick take-aways

  • HMDA data contains nearly every factor needed to replicate Fannie Mae’s Single Family Social Index. We use this data to explore how the methodology would look if the Fannie Mae Social Index were applied to other market participants.
  • The Agencies and Ginnie Mae are not the only game in town when it comes to socially responsible lending. Non-agency loans would also perform reasonably well under Fannie Mae’s proposed Social Index.
  • Not surprisingly, Ginnie Mae outperforms all other “purchaser types” under the framework, buoyed by its focus on low-income borrowers and underserved communities. The gap between Ginnie and the rest of the market can be expected to expand in low-refi environments.
  • With a few refinements to account for socially responsible lending beyond low-income borrowers, Fannie Mae’s framework can work as a universally applicable social measure across the industry.

Fannie Mae’s new “Single Family Social Index

Last week, Fannie Mae released a proposed methodology for its “Single Family Social Index.” The index is designed to provide “socially conscious investors” a means of “allocat[ing] capital in support of affordable housing and to provide access to credit for underserved individuals.”

The underlying methodology is simple enough. Each pool of mortgages receives a score based on how many of its loans meet one or more specified “social criteria” across three dimensions: borrower income, borrower characteristics and property location/type. Fannie Mae succinctly illustrates the defined criteria and framework in the following overview deck slide.


Social Index Figure 1: Source: Designing for Impact — A Proposed Methodology for Single-Family Social Disclosure


Each of the criteria is binary (yes/no), which facilitates the scoring. Individual loans are simply rated based on the number of boxes they check. Pools are measured in two ways: 1) a “Social Criteria Share,” which identifies the percentage of loans that meet any of the criteria, and 2) a “Social Density Score,” which assigns a “Social Score” of 0 through 3 to each individual loan based on how many of the three dimensions (borrower income, borrower characteristics, and property characteristics) it covers and then averages that score across all the loans in the pool.
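To make the two pool-level measures concrete, here is a minimal sketch that assumes each loan has been reduced to three booleans, one per dimension. The field names are our own, not Fannie Mae’s.

```python
def social_scores(loans):
    """loans: list of dicts with boolean keys 'income', 'borrower',
    'property' (one per Index dimension).
    Returns (social_criteria_share, social_density_score)."""
    dims = ("income", "borrower", "property")
    # per-loan Social Score: how many of the three dimensions it covers (0..3)
    per_loan = [sum(loan[d] for d in dims) for loan in loans]
    # share of loans meeting at least one criterion
    criteria_share = sum(s > 0 for s in per_loan) / len(per_loan)
    # average Social Score across the pool
    density_score = sum(per_loan) / len(per_loan)
    return criteria_share, density_score
```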

If other issuers adopt this methodology, what would it look like?

The figure below is one of many charts and tables provided by Fannie Mae that illustrate how the Index works. This figure shows the share of acquisitions meeting one or more of the Social Index criteria (i.e., the overall “Social Criteria Share”). We have drawn a box approximately around the 2020 vintage,[1] which appears to have a Social Criteria Share of about 52% by loan count. We will refer back to this value later as we seek to triangulate in on a Social Criteria Share for other market participants.


Graph Figure 2: Source: Designing for Impact — A Proposed Methodology for Single-Family Social Disclosure


We can get a sense of other issuers’ Social Criteria Share by looking at HMDA data. This dataset provides everything we need to re-create the Index at a high level, with the exception of a flag for first-time homebuyers. The process involves some data manipulation, as several Index criteria require us to connect to two census-tract-level data sources published by FHFA.

HMDA allows us to break down the loan population by purchaser type, which gives us an idea of each loan’s ultimate destination—Fannie, Freddie, Ginnie, etc. The purchaser type does not capture this for every loan, however, because originators are only obligated to report loans that are closed and sold during the same calendar year.

The two tables below reflect two different approaches to approximating the population of Fannie, Freddie, and Ginnie loans. The left-hand table compares the 2020 origination loan count based on HMDA’s Purchaser Type field with loan counts based on MBS disclosure data pulled from RiskSpan’s Edge Platform.

The right-hand table enhances this definition by first re-categorizing as Ginnie Mae all FHA/VA/USDA loans with non-agency purchaser types. It also looks at the Automated Underwriting System field and re-maps all owner-occupied loans previously classified as “Other or NA” to Fannie (DU AUS) or Freddie (LP/LPA AUS).
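The re-categorization logic behind the right-hand table might be sketched roughly as follows. The column and category names loosely follow HMDA’s public schema but are our own simplifications, not an exact replication of the analysis.

```python
def adjust_purchaser(row):
    """Approximate the 'adjusted purchaser type' for one HMDA record.
    row: dict with illustrative keys 'purchaser_type', 'loan_type',
    'occupancy', and 'aus'."""
    purchaser = row["purchaser_type"]
    if purchaser in ("fannie", "freddie", "ginnie"):
        return purchaser
    # FHA/VA/USDA loans with non-agency purchaser types -> Ginnie Mae
    if row["loan_type"] in ("fha", "va", "usda"):
        return "ginnie"
    # Owner-occupied "Other or NA" loans remapped by the AUS used
    if row["occupancy"] == "owner":
        if row["aus"] == "du":
            return "fannie"  # Desktop Underwriter
        if row["aus"] in ("lp", "lpa"):
            return "freddie"  # Loan Prospector / Loan Product Advisor
    return "other_or_na"
```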


Social Index



The adjusted purchaser type approach used in the right-hand table reallocates a considerable number of “Other or NA” loans from the left-hand table. The approach clearly overshoots the Fannie Mae population, as some loans underwritten using Fannie’s automated underwriting system likely wind up at Freddie and other segments of the market. This limitation notwithstanding, we believe this approximation lends a more accurate view of the market landscape than does the unadjusted purchaser type approach. We consequently rely primarily on the adjusted approach in this analysis.

Given the shortcomings in aligning the exact population, the idea here is not to get an exact calculation of the Social Index metrics via HMDA, but to use HMDA to give us a rough indication of how the landscape would look if other issuers adopted Fannie’s methodology. We expect this to provide a rough rank-order understanding of where the richest pools of ‘Social’ loans (according to Fannie’s methodology) ultimately wind up. Because the ultimate success of a social scoring methodology can truly be measured only to the extent it is adopted by other issuers, having a universally useful framework is crucial.

The table below estimates the Social Criteria Share by adjusted purchaser using seven of Fannie Mae’s eight social index criteria.[2] Not surprisingly, Ginnie, Fannie, and Freddie boast the highest overall shares. It is encouraging to note, however, that other purchaser types also originate significant percentages of socially responsible loans. This suggests that Fannie’s methodology could indeed be applied more universally. The table breaks out each factor separately and could warrant an entire blog post of its own to dissect, so take a closer look at the dynamics.[3]


Social Index


Ginnie Mae’s strong performance on the Index comes as no surprise. Ginnie pools, after all, consist primarily of FHA loans, which skew toward the lower end of the income spectrum, first-time borrowers, and traditionally underserved communities. Indeed, more than 56 percent of Ginnie Mae loans tick at least one box on the Index. And this does not include first-time homebuyers, which would likely push that percentage even higher.

Income’s Outsized Impact

Household income contributes directly or indirectly to most components of Fannie’s Index. Beyond the “Low-income” criterion (borrowers below 80 percent of adjusted median income), nearly every other factor favors income levels below 120 percent of AMI. Measuring income is tricky, especially outside of the Agency/Ginnie space. The non-Agency segment serves many self-employed borrowers, borrowers who qualify based on asset (rather than income) levels, and foreign national borrowers. Nailing down precise income has historically proven challenging with these groups.

Given these dynamics, one could reasonably posit that the 18 percent of PLS classified as “low-income” is actually inflated by self-employed or wealthier borrowers whose mortgage applications do not necessarily reflect all of their income. Further refinements may be needed to fairly apply the Index framework to this and other market segments that pursue social goals beyond expanding credit opportunities for low-income borrowers. These refinements could take the form of further guidance on how to calculate income (or alternatives to the income metric when it is unavailable), as well as certain outright exclusions from the framework (foreign national borrowers, for example, though these may already be excluded by the screen for second homes).

Positive effects of a purchase market

The Social Criteria Share is positively correlated with purchase loans as a percentage of total origination volume (even before accounting for the FTHB factor). This relationship is apparent in Fannie Mae’s time series chart near the top of this post. Shares clearly drop during refi waves.

Our analysis focuses on 2020 only. We made this choice because of HMDA reporting lags and the inherent facility of dealing with a single year of data. The table below breaks down the HMDA analysis (referenced earlier) by loan purpose to give us a sense of what our current low-refi environment could look like. (Rate/term refis are grouped together with cash-out refis.) As the table below indicates, Ginnie Mae’s SCS for refi loans is about the same as it is for GSE refi loans — it’s really on purchase loans where Ginnie shines. This implies that Ginnie’s SCS will improve even further in a purchase-dominated environment.


Social Index


Accounting for First-time Homebuyers

As described above, our methodology for estimating the Social Criteria Share omits loans to first-time homebuyers (because the HMDA data does not capture it). This likely accounts for the roughly 6 percentage point difference between our estimate of Fannie’s overall Social Criteria Share for 2020 (approximately 46 percent) and Fannie Mae’s own calculation (approximately 52 percent).

To back into the impact of the FTHB factor, we can pull in data about the share of FTHBs from RiskSpan’s Edge platform. The purchase vs. refi table above tells us the SCS without the FTHB factor for purchase loans. Using MBS data sources, we can obtain the share of 2020 originations that were FTHBs. If we assume that FTHB loans look the same as purchase loans overall in terms of how many other Social Index boxes they check, then we can back into the overall SCS incorporating all factors in Fannie’s methodology.

Applying this approach to Ginnie Mae: because 29 percent of Ginnie’s purchase loans (one minus 71 percent) tick none of the Index’s boxes, we assume that 29 percent of FTHB loans (which account for 33 percent of Ginnie’s overall population) also tick no box other than first-time homebuyer status itself. Taking 29 percent of this 33 percent yields an additional 9.6 percent that should be tacked on to Ginnie Mae’s pre-FTHB share, bringing it up to 66 percent.
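The back-of-envelope adjustment above can be reproduced as simple arithmetic, using the figures quoted in the text:

```python
# Figures quoted in the text (Ginnie Mae, 2020):
pre_fthb_share = 0.56       # loans ticking at least one non-FTHB Index box
purchase_no_box = 1 - 0.71  # purchase loans ticking no box (29 percent)
fthb_share = 0.33           # FTHBs as a share of Ginnie's overall population

# Assume FTHB loans look like purchase loans overall: the ones that
# previously ticked no box now tick the FTHB box.
additional = purchase_no_box * fthb_share     # about 9.6 percent
adjusted_share = pre_fthb_share + additional  # about 66 percent
```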


Social Index


Validating this estimation approach is the fact that it increases Fannie Mae’s share from 46 percent (pre-FTHB) to 52 percent, which is consistent with the historical graph supplied by Fannie Mae (see Figure 2, above). Our FTHB approach implies that 92 percent of Ginnie Mae purchase loans meet one or more of the Index criteria. One could reasonably contend that Ginnie Mae FTHB loans might be more likely than Ginnie purchase loans overall to satisfy other social criteria (i.e., that 92 percent is a bit rich), in which case the 66 percent share for Ginnie Mae in 2020 might be overstated. Even if we mute this FTHB impact on Ginnie, however, layering FTHB loans on top of a rising purchase-loan environment would likely put today’s Ginnie Mae SCS in the low 80s.




[1] The chart is organized by acquisition month, while our analysis of HMDA looks at 2020 originations, so we’ve tried to push the box slightly to the right to reflect the 1–3-month lag between origination and acquisition. Additionally, we believe the chart and numbers throughout Fannie’s document reflect just fixed-rate 30-year loans, whereas our analysis includes all loans. We did investigate what our numbers would look like if filtered to fixed-rate 30-year loans, and it would only increase the SCS slightly across the board.

[2] As noted above, we are unable to discern first-time homebuyer information from the HMDA data.

[3] We can compare the Fannie numbers for each factor to published rates in their documentation representing the time period 2017 forward. The only metric where we stand out as being meaningfully off is the percentage of loans in minority census tracts. We took this flag from FHFA’s Low-Income Area File for 2020, which defines a minority census tract as having a ‘…minority population of at least 30 percent and a median income of less than 100 percent of the AMI.’ It is not 100% clear that this is what Fannie Mae is using in its definition.


It’s time to move to DaaS — Why it matters for loan and MSR investors

Data as a service, or DaaS, for loans and MSR investors is fast becoming the difference between profitable trades and near misses.

Granularity of data is creating differentiation among investors. Winning at investing in loans and mortgage servicing rights requires effectively managing a veritable ocean of loan-level data. Buried within every detailed tape of borrower, property, loan, and performance characteristics lies the key to identifying hidden exposures and camouflaged investment opportunities. Understanding these exposures and opportunities is essential to proper bidding during the acquisition process and effective risk management once the portfolio is onboarded.

Investors know this. But knowing that loan data conceals important answers is not enough. Even knowing which specific fields and relationships are most important is not enough. Investors also must be able to get at that data. And because mortgage data is inherently messy, investors often run into trouble extracting the answers they need from it.

For investors, it boils down to two options. They can compel analysts to spend 75 percent of their time wrangling unwieldy data – plugging holes, fixing outliers, making sure everything is mapped right. Or they can just let somebody else worry about all that so they can focus on more analytical matters.

Don’t get left behind — DaaS for loan and MSR investors

It should go without saying that the “let somebody else worry about all that” approach only works if “somebody else” possesses the requisite expertise with mortgage data. Self-proclaimed data experts abound. But handing the process over to an outside data team lacking the right domain experience risks creating more problems than it solves.

Ideally, DaaS for loan and MSR investors consists of a data owner handing off these responsibilities to a third party that can deliver value in ways that go beyond simply maintaining, aggregating, storing and quality controlling loan data. All these functions are critically important. But a truly comprehensive DaaS provider is one whose data expertise is complemented by an ability to help loan and MSR investors understand whether portfolios are well conceived. A comprehensive DaaS provider helps investors ensure that they are not taking on hidden risks (for which they are not being adequately compensated in pricing or servicing fee structure).

True DaaS frees up loan and MSR investors to spend more time on higher-level tasks consistent with their expertise. The more “blocking and tackling” aspects of data management that every institution that owns these assets needs to deal with can be handled in a more scalable and organized way. Cloud-native DaaS platforms are what make this scalability possible.

Scalability — stop reinventing the wheel with each new servicer

One of the most challenging aspects of managing a portfolio of loans or MSRs is the need to manage different types of investor reporting data pipelines from different servicers. What if, instead of having to “reinvent the wheel” to figure out data intake every time a new servicer comes on board, “somebody else” could take care of that for you?

An effective DaaS provider is one that is not only well versed in building and maintaining loan data pipes from servicers to investors but has also already established a library of existing servicer linkages. An ideal provider is one already set up to onboard servicer data directly onto its own DaaS platform. Investors achieve enormous economies of scale by having to integrate with a single platform as opposed to a dozen or more individual servicer integrations. Ultimately, as more investors adopt DaaS, the number of centralized servicer integrations will increase, and greater economies will be realized across the industry.

Connectivity is only half the benefit. The DaaS provider not only intakes, translates, maps, and hosts the loan-level static and dynamic data coming over from servicers. The DaaS provider also takes care of QC, cleaning, and managing it. DaaS providers see more loan data than any one investor or servicer. Consequently, the AI tools an experienced DaaS provider uses to map and clean incoming loan data have had more opportunities to learn. Loan data that has been run through a DaaS provider’s algorithms will almost always be more analytically valuable than the same loan data processed by the investor alone.  

Investors seeking to increase their footprint in the loan and MSR space obviously do not wish to see their data management costs rise in proportion to the size of their portfolios. Outsourcing to a DaaS provider that specializes in mortgages, like RiskSpan, helps investors build their book while keeping data costs contained.

Save time and money – Make better bids

For all these reasons, DaaS is unquestionably the future (and, increasingly, the present) of loan and MSR data management. Investors are finding that a decision to delay DaaS migration comes with very real costs, particularly as data science labor becomes increasingly (and often prohibitively) expensive.

The sooner an investor opts to outsource these functions to a DaaS provider, the sooner that investor will begin to reap the benefits of an optimally cost-effective portfolio structure. One RiskSpan DaaS client reported a 50 percent reduction in data management costs alone.

Investors continuing to make do with in-house data management solutions will quickly find themselves at a distinct bidding disadvantage. DaaS-aided bidders have the advantage of being able to bid more competitively based on their more profitable cost structure. Not only that, but they are able to confidently hone and refine their bids based on having a better, cleaner view of the portfolio itself.

Rethink your mortgage data. Contact RiskSpan to talk about how DaaS can simultaneously boost your profitability and make your life easier.


Senior Home Equity Rises Again to $11.12 Trillion

Senior home equity rises again. Homeowners 62 and older saw their housing wealth grow by an estimated 4.9 percent ($520 billion) during the first quarter of 2022 to a record $11.1 trillion according to the latest quarterly release of the NRMLA/RiskSpan Reverse Mortgage Market Index.

Historical Changes in Aggregate Senior Home Values Q1 2000 - Q1 2022

The NRMLA/RiskSpan Reverse Mortgage Market Index (RMMI) rose to 388.83, another all-time high since the index was first published in 2000. The increase in older homeowners’ wealth was mainly driven by an estimated $563 billion (4.4 percent) increase in home values, partially offset by a $43 billion (2.1 percent) increase in senior-held mortgage debt.
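The arithmetic behind these figures can be sanity-checked in a few lines (all values in billions of dollars, taken from the estimates quoted above):

```python
# Sanity check of the Q1 2022 RMMI figures quoted above (values in $billions).
# Senior home equity = aggregate senior home values - senior-held mortgage debt.
home_value_increase = 563  # estimated increase in senior home values
debt_increase = 43         # increase in senior-held mortgage debt

equity_increase = home_value_increase - debt_increase
print(equity_increase)  # 520 -> the reported ~$520 billion equity gain

# Growth rate implied by the record $11.12 trillion ending equity level:
ending_equity = 11_120
growth_pct = equity_increase / (ending_equity - equity_increase) * 100
print(round(growth_pct, 1))  # 4.9
```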

For a comprehensive commentary, please see NRMLA’s press release.


How RiskSpan Computes the RMMI

To calculate the RMMI, RiskSpan developed an econometric tool that estimates senior housing values, mortgage balances, and equity using data gathered from various public resources, including the American Community Survey (ACS), the Federal Reserve Flow of Funds (Z.1), and FHFA house price indexes (HPI). The RMMI represents the senior equity level at the time of measurement relative to that of the base quarter in 2000.[1]

A limitation of the RMMI relates to non-consecutive data, such as census population counts, which are observed only periodically. We use a smoothing approach to estimate values between the observable periods and continue to look for ways to improve our methodology and find more robust data to improve the precision of the results. Until then, the RMMI and its component metrics (home values, mortgage debt, home equity) are best analyzed at a trending macro level rather than at more granular levels, such as by MSA.
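As an illustration of the smoothing idea (not RiskSpan’s actual methodology), a simple linear interpolation between sparse census observations might look like the following sketch; the years and values here are invented:

```python
# Hypothetical sketch of smoothing non-consecutive data: estimating
# senior-population values between sparse census observations via
# linear interpolation. Years and values are made up for illustration.

def interpolate_between(observations, year):
    """Linearly interpolate a value for `year` from sparse {year: value} data."""
    points = sorted(observations.items())
    for (y0, v0), (y1, v1) in zip(points, points[1:]):
        if y0 <= year <= y1:
            weight = (year - y0) / (y1 - y0)
            return v0 + weight * (v1 - v0)
    raise ValueError("year outside observed range")

# Senior-homeowner counts (millions) observed only in certain survey years:
observed = {2010: 24.0, 2013: 25.5, 2020: 28.0}
print(interpolate_between(observed, 2012))  # 25.0
```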


[1] The RMMI methodology was revised in Q3 2015, mainly to calibrate the senior homeowner population and senior housing values against those observed in the 2013 American Community Survey (ACS).


Automated Legal Disclosure Generator for Mortgage and Asset-Backed Securities

Issuing a security requires a lot of paperwork, much of it consisting of legal disclosures that inform potential investors about the collateral backing the bonds they are buying. Generating, reviewing, and approving these detailed disclosures is difficult and time consuming – a process that takes hours and sometimes days. RiskSpan has developed an easy-to-use legal disclosure generator application that reduces the process to minutes.

RiskSpan’s Automated Legal Disclosure Generator for Mortgage and Asset-Backed Securities automates the generation of prospectus supplements, pooling and servicing agreements, and other legal disclosure documents. These documents contain a combination of static and dynamic legal language, data, tables, and images.

The Disclosure Generator draws from a collection of data files. These files contain collateral-, bond-, and deal-specific information. The Disclosure Generator dynamically converts the contents of these files into legal disclosure language based on predefined rules and templates. In addition to generating interim and final versions of the legal disclosure documents, the application provides a quick and easy way of making and tracking manual edits to the documents. In short, the Disclosure Generator is an all-inclusive, seamless, end-to-end system for creating, editing and tracking changes to legal documents for mortgage and asset-backed securities.   

The Legal Disclosure Generator’s user interface supports:  

  1. Simultaneous uploading of multiple data files.
  2. Instantaneous production of the first (and subsequent) drafts of legal documents, adhering to the associated template(s).
  3. A user-friendly editor allowing manual, user-level language and data changes. Users apply these edits either directly to a specific document or to the underlying data template itself. Template updates carry forward to the language of all subsequently generated disclosures. 
  4. A version control feature that tracks and retains changes from one document version to the next.
  5. An archiving feature allowing access to previously generated documents without the need for the original data files.
  6. Editing access controls based on pre-defined user level privileges.

Overview

RiskSpan’s Automated Legal Disclosure Generator for Mortgage and Asset-Backed Securities enables issuers of securitized assets to create legal disclosures efficiently and quickly from raw data files.

The Legal Disclosure Generator is easy and intuitive to use. After setting up a deal in the system, the user selects the underlying collateral- and bond-level data files used to create the disclosure document. In addition to the raw data related to the collateral and bonds, these data files contain the relevant waterfall payment rules. The data files can be in any format — Excel, CSV, text, or even custom file extensions. Once the files are uploaded, the system generates the first draft of the disclosure document on the fly, in just a few seconds. The Legal Disclosure Generator also reads custom scripts related to waterfall models and converts them into waterfall payment rules.
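The rules-and-templates approach described above can be illustrated with a toy example using Python’s standard string templating. The template text and field names below are purely hypothetical, not the system’s actual templates:

```python
# Toy illustration of converting bond-level data fields into disclosure
# language via a predefined template. Field names are hypothetical.
from string import Template

deal_template = Template(
    "The Class $class_name Certificates have an initial principal balance of "
    "$$${balance} and bear interest at a fixed rate of ${rate}% per annum."
)

bond_data = {"class_name": "A-1", "balance": "250,000,000", "rate": "4.50"}
print(deal_template.substitute(bond_data))
```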

Here is a sample of a disclosure document created by the system.



Blackline Version(s)

In addition to creating draft disclosure documents, the Legal Disclosure Generator enables users to make edits and changes to the disclosures on the fly through an embedded editor. The Disclosure Generator saves these edits and applies them to the next version. The tool creates blackline versions with a single integrated view for managing multiple drafts.
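Conceptually, a blackline is a diff between two drafts. A minimal sketch of the idea, using Python’s standard difflib rather than the system’s own rendering, with invented draft text:

```python
# Hedged sketch of blackline (redline) generation between two disclosure
# drafts using difflib. The draft language here is invented.
import difflib

draft_1 = ["The Class A-1 Certificates bear interest at 4.25% per annum."]
draft_2 = ["The Class A-1 Certificates bear interest at 4.50% per annum."]

for line in difflib.unified_diff(draft_1, draft_2, "draft_1", "draft_2", lineterm=""):
    print(line)  # removed lines prefixed '-', added lines prefixed '+'
```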

The following screenshot of a sample blackline version illustrates how users can view changes from one version to the next.

Tracking of Drafts

The Legal Disclosure Generator keeps track of a disclosure’s entire version history. The system can email draft versions directly to the working parties and retains timestamps of these emails for future reference.

The screenshot below shows the entire lifecycle of a document, from original creation to print, with all interim drafts along the way. 


Automated QC System

The Legal Disclosure Generator’s automated QC system produces a report comparing the underlying data file(s) to the data contained in the legal disclosure, ensuring that the disclosed data is accurate and fully reconciled.
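The reconciliation concept can be sketched as a field-by-field comparison. The field names and values below are hypothetical, and the real system works against the generated document rather than a simple dictionary:

```python
# Hedged sketch of automated QC: reconcile values in the generated
# disclosure against the underlying data file. Fields are hypothetical.

def qc_report(source_data, disclosure_data):
    """Return (field, source value, disclosure value) tuples for mismatches."""
    mismatches = []
    for field, expected in source_data.items():
        actual = disclosure_data.get(field)
        if actual != expected:
            mismatches.append((field, expected, actual))
    return mismatches

source = {"pool_balance": 250_000_000, "wac": 4.50, "loan_count": 812}
disclosed = {"pool_balance": 250_000_000, "wac": 4.25, "loan_count": 812}
print(qc_report(source, disclosed))  # [('wac', 4.5, 4.25)]
```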

Downstream Consumption

The Legal Disclosure Generator creates a JSON data file. This consolidated file consists of collateral and bond data, including waterfall payment rules. The data files are made available for downstream consumption and can also be sent to Intex, Bloomberg, and other data vendors. One such vendor noted that this JSON data file has enabled them to model deals in one-third the time it took previously.
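To make the description concrete, the consolidated file might take a shape like the following; the actual schema and field names are RiskSpan’s own, so every key here is an illustrative assumption:

```python
# Purely illustrative shape of the consolidated JSON output described
# above; the real schema and field names are RiskSpan's own.
import json

deal = {
    "deal_name": "EXAMPLE 2022-1",  # hypothetical deal
    "collateral": [
        {"loan_id": "0001", "balance": 312500.00, "rate": 0.0425},
    ],
    "bonds": [
        {"class": "A-1", "original_balance": 250_000_000, "coupon": 0.0450},
    ],
    "waterfall": [
        {"step": 1, "rule": "Pay Class A-1 interest from available funds"},
        {"step": 2, "rule": "Pay Class A-1 principal until balance is zero"},
    ],
}

payload = json.dumps(deal, indent=2)  # the file sent downstream to vendors
```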

Self-Serve System

The Legal Disclosure Generator was designed with the end-user in mind. Users can set up the disclosure language by themselves and edit as needed, with little or no outside help.

The ‘System’ Advantage

  • Removes unnecessary, manual, and redundant processes
  • Huge time savings – 24 hours vs. 2 minutes (actual time savings for a current client of the system)
  • Better-managed processes and systems
  • Better resource management – cost-effective solutions
  • Greater flexibility
  • Better data management – built-in QC


