Articles Tagged with: General

RiskSpan Vintage Quality Index (VQI): Q2 2020

The RiskSpan Vintage Quality Index (“VQI”) is a monthly index designed to quantify the underwriting environment of a monthly vintage of mortgage originations and help credit modelers control for prevailing underwriting conditions at various times. Published quarterly by RiskSpan, the VQI generally trends slowly, with interesting monthly changes found primarily in the individual risk layers. (Assumptions used to construct the VQI can be found at the end of this post.) The VQI has reacted dramatically to the economic tumult caused by COVID-19, however, and in this post we explore how the VQI’s reaction to the current crisis compares to the start of the Great Recession. We examine the periods leading up to the start of each crisis and dive deep into the differences between individual risk layers.

Reacting to a Crisis

In contrast with its typically more gradual movements, the VQI’s reaction to a crisis is often swift. Because the VQI captures the average riskiness of loans issued in a given month, crises that lower lender (and MBS investor) confidence can quickly drive the VQI down as lending standards are tightened. For this comparison, we will define the start of the COVID-19 crisis as February 2020 (the end of the most recent economic expansion, according to the National Bureau of Economic Research), and the start of the Great Recession as December 2007 (the first official month of that recession). As you might expect, the VQI reacted by moving sharply down immediately after the start of each crisis.[1]

[Charts: VQI in the months surrounding December 2007 and February 2020]

Though the reaction appears similar, with each four-month period shedding roughly 15% of the index, the charts show two key differences. The first is the absolute level of the VQI at the start of each crisis. The vertical axes on the graphs above span the same number of points (so the slopes of the changes can be compared consistently), but the range is shifted by a full 40 points. The VQI maxed out at 139.0 in December 2007, while at the start of the COVID-19 crisis it stood at just 90.4.

A second difference concerns the general trend of the VQI in the months leading up to each crisis. The VQI was trending up in the 18 months leading up to the Great Recession, signaling increasing riskiness in the loans being originated and issued. (As we discuss later, this “last push” in the second half of 2007 was driven by an increase in loans with high loan-to-value ratios.) Conversely, 2019 saw the VQI trend downward, signaling a tightening of lending standards.

Different Layers of Risk

Because the VQI simply indexes the average number of risk layers associated with the loans issued by the Agencies in a given month, a closer look at the individual risk layers provides insights that can be masked when analyzing the VQI as a whole.
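For readers who want to see the mechanics, below is a minimal sketch of how an index built this way could be computed from loan-level issuance data. The column names, the particular set of risk flags, and the equal weighting are assumptions made for illustration; they are a simplified stand-in for the full VQI methodology (summarized in the assumptions at the end of this post).

```python
import pandas as pd

def vintage_quality_index(loans: pd.DataFrame, baseline_month: str = "2003-01") -> pd.Series:
    """Illustrative VQI-style index: the average count of risk layers per loan,
    by issuance month, rescaled so the baseline month equals 100.
    Column names and flag definitions are assumptions, not the actual VQI spec."""
    flags = pd.DataFrame({
        "low_fico":    loans["fico"] < 660,
        "high_ltv":    loans["ltv"] > 80,
        "high_dti":    loans["dti"] > 45,
        "cash_out":    loans["loan_purpose"].eq("cash_out"),
        "second_home": loans["occupancy"].eq("second_home"),
        "investor":    loans["occupancy"].eq("investor"),
    })
    layers_per_loan = flags.sum(axis=1)                       # risk layers carried by each loan
    monthly_average = layers_per_loan.groupby(loans["issuance_month"]).mean()
    return 100 * monthly_average / monthly_average.loc[baseline_month]
```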

The risk layer that most clearly depicts the difference between the two crises is the share of loans with low FICO scores (below 660).

[Chart: Share of loans with FICO below 660, before and after each crisis]

The absolute difference is striking: 27.9% of loans issued in December 2007 had a low FICO score, compared with just 7.1% of loans in February 2020. That 20.8-percentage-point gap captures the difference in underwriting philosophies between the two periods and largely sums up the differing quality of the two loan cohorts.

FICO trends before the crisis are also clearly different. In the 12 months leading up to the Great Recession, the share of low-FICO loans rose from 24.4% to 27.9% (+3.5%). In contrast, the 12 months before the COVID-19 crisis saw the share of low-FICO loans fall from 11.5% to 7.2% (-4.3%).

The low-FICO risk layer’s reaction to the crisis also differs dramatically. Falling from 27.9% to 15.4% in four months (on its way to 3.3% in May 2009), the share of low-FICO loans cratered following the start of the recession. In contrast, the risk layer has been largely unimpacted by the current crisis, simply continuing its downward trend mostly uninterrupted.

Three other large drivers of the difference between the VQI in December 2007 and in February 2020 are the share of cash-out refinances, the share of loans for second homes, and the share of loans with debt-to-income (DTI) ratios above 45%. What distinguishes these risk layers from FICO is how they responded to the crises themselves. While their absolute levels in the months leading up to the Great Recession were well above those seen at the beginning of 2020 (as with low FICO), none of these three risk layers appears to react to either crisis; each simply continues along the general trajectory it was on in the months leading up to the crisis. Cash-out refinances, which follow a seasonal cycle, are mostly unimpacted by the start of the crises, holding a steady spread between the two time periods:

[Chart: Share of cash-out refinances, before and after each crisis]

Loans for second homes were already becoming more rare in the runup to December 2007 (the only risk layer to show a reaction to the tumult of the fall of 2007) and mostly held in the low teens immediately following the start of the recession:

[Chart: Share of loans for second homes (Great Recession vs. COVID-19 crisis)]

Finally, loans with high DTIs (over 45%) have simply followed their slow trend down since the start of the COVID-19 crisis, while they actually became slightly more common following the start of the Great Recession:

[Chart: Share of loans with DTI above 45%, before and after each crisis]

The outlier, both pre- and post-crisis, is the high loan-to-value risk layer. For most of the 24 months leading up to the start of the Great Recession, the share of loans with LTVs above 80% was well below its level over the corresponding period leading up to the COVID-19 crisis. The pre-Great Recession maximum of 33.2% is below the 24-month average of 33.3% at the start of the COVID-19 crisis. The share of high-LTV loans also reacted to the crisis in 2008, falling sharply after the start of the recession. In contrast, the current downward trend in high-LTV loans started well before the COVID-19 crisis and was seemingly unimpacted by the start of the crisis.

[Chart: Share of loans with LTV above 80%, before and after each crisis]

Though the current downward trend is likely due to increased refinance activity as mortgage rates continue to crater, the chart seems upside down relative to what you might have predicted.

The COVID-19 Crisis is Different

What can the VQI tell us about the similarities and differences between December 2007 and February 2020? When you look closely, quite a bit.

  1. The loans experiencing the crisis in 2020 are less risky.

By almost all measures, the loans that entered the downturn beginning in December 2007 were riskier than the loans outstanding in February 2020. There are fewer low-FICO loans, fewer loans with high debt-to-income ratios, fewer loans for second homes, and fewer cash-out refinances. Trends aside, the absolute level of these risky characteristics—characteristics that are classically considered in mortgage credit and loss models—is significantly lower. While that is no guarantee the loans will fare better through this current crisis and recovery, we can reasonably expect better outcomes this time around.

  2. The 2020 crisis did not immediately change underwriting / lending.

One of the more surprising VQI trends is the non-reaction of many of the risk layers to the start of the COVID-19 crisis. FICO, LTV, and DTI all seem to be continuing a downward trend that began well before the first coronavirus diagnosis. The VQI is merely continuing a trend started back in January 2019. (The current “drop” has brought the VQI back to the trendline.) Because the crisis was not born of the mortgage sector and has not yet stifled demand for mortgage-backed assets, we have yet to see any dramatic shifts in lending practices (a stark contrast with 2007-2008). Dramatic tightening of lending standards can lead to reduced home buying demand, which can put downward pressure on home prices. The already-tight lending standards in place before the COVID-19 crisis, coupled with the apparent non-reaction by lenders, may help to stabilize the housing market.

The VQI was not designed to gauge the unknowns of a public health crisis. It does not directly address the lessons learned from the Great Recession, including the value of modification and forbearance in maintaining stability in the market. It does not account for the role of government and the willingness of policy makers to intervene in the economy (and in the housing markets specifically). Despite not being a crystal ball, the VQI nevertheless remains a valuable tool for credit modelers seeking to view mortgage originations from different times in their proper perspective.

—————

Analytical and Data Assumptions

Population assumptions:

  • Issuance Data for Fannie Mae and Freddie Mac.
  • Loans originated more than three months prior to issuance are excluded because the index is meant to reflect current market conditions.
  • Loans likely to have been originated through the HARP program, as identified by LTV, MI coverage percentage, and loan purpose are also excluded. These loans do not represent credit availability in the market, as they likely would not have been originated today if not for the existence of HARP.

Data Assumptions:

  • Freddie Mac data goes back to December 2005. Fannie Mae data only goes back to December 2014.
  • Certain Freddie Mac data fields were missing prior to June 2008.

GSE historical loan performance data released in support of GSE Risk Transfer activities was used to help back-fill data where it was missing.

 

 

[1] Note that the VQI’s baseline of 100 reflects underwriting standards as of January 2003.

 


Edge: Bank Buyouts in Ginnie Mae Pools

Ginnie Mae prepayment speeds saw a substantial uptick in July, with speeds in some cohorts more than doubling. Much of this uptick was due to repurchases of delinquent loans. In this short post, we examine those buyouts for bank and non-bank servicers. We also offer a method for quantifying buyout risk going forward.

For background, GNMA servicers have the right (but not the obligation) to buy delinquent loans out of a pool if they have missed three or more payments. The servicer buys these loans at par and can later re-securitize them if they start reperforming. Re-securitization rules vary based on whether the loan is naturally delinquent or in a forbearance program. But the reperforming loan will be delivered into a pool with its original coupon, which almost always results in a premium-priced pool. This delivery option provides a substantial profit for the servicer that purchased the loan at par.

To purchase the loan out of the pool, the servicer must have both cash and sufficient balance sheet liquidity. Differences in access to funding can drive substantial differences in buyout behavior between well-capitalized bank servicers and more thinly capitalized non-bank servicers. Below, we compare recent buyout speeds between banks and non-banks and highlight some entities whose behavior differs substantially from that of their peer group.[1]

In July, Wells Fargo’s GNMA buyouts had an outsized impact on total CPR in GNMA securities. Wells, the largest GNMA bank servicer, exhibits extraordinary buyout efficiency relative to other servicers, buying out 99 percent of eligible loans. Wells’ size and efficiency, coupled with a large 60-day delinquency in June (8.6%), caused a large increase in “involuntary prepayments” and drove total overall CPR substantially higher in July. This effect was especially apparent in some moderately seasoned multi-lender pools. For example, speeds on 2012-13 GN2 3.5 multi-lender pools accelerated from low 20s CPR in June to mid-40s in July, nearly converging to the cheapest-to-deliver 2018-19 production GN2 3.5 and wiping out any carry advantage in the sector.

Figure 1: Prepayment speeds in GN2 3.5 multi-lender pools: 2012-13 vintage in blue, 2018-19 vintage in black.

This CPR acceleration in 2012-13 GN2 3.5s was due entirely to buyouts, with the sector’s buyout speed rising from 5 CBR to 29 CBR.[2] In turn, this increase was driven almost entirely by Wells, which accounted for 25% of the servicing in some pools.

Figure 2: Buyout speeds in GN2 3.5 multi-lender pools. 2012-13 vintage in blue, 2018-19 vintage in black

In the next table, we summarize performance for the top ten GNMA bank servicers. The table shows loan-level roll rates from June to July for loans that began June 60 days delinquent. Loans that rolled to the DQ90+ bucket were not bought out of the pool by the servicer, despite being eligible for repurchase. We use this 90+ delinquency bucket to calculate each servicer’s buyout efficiency, defined as the percentage of delinquent loans eligible for buyout that the servicer actually repurchases.
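For readers who want to reproduce this kind of table, a rough sketch of the computation from loan-level data follows. The column names and status labels (“DQ60”, “DQ90+”, “BUYOUT”) are assumptions made for illustration, not actual disclosure field names.

```python
import pandas as pd

def buyout_efficiency(loans: pd.DataFrame) -> pd.DataFrame:
    """For loans that were 60 days delinquent in June, tabulate where they rolled
    in July and compute each servicer's buyout efficiency.
    Column names ('servicer', 'status_jun', 'status_jul') are assumptions."""
    eligible = loans[loans["status_jun"] == "DQ60"]
    rolls = pd.crosstab(eligible["servicer"], eligible["status_jul"], normalize="index")
    # Loans that rolled to DQ90+ stayed in the pool despite being eligible for
    # repurchase; loans actually bought out show up in the 'BUYOUT' column.
    rolls["buyout_efficiency"] = rolls.get("BUYOUT", 0) / (
        rolls.get("BUYOUT", 0) + rolls.get("DQ90+", 0)
    )
    return rolls
```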

Roll Rates for Bank Servicers, for July 2020 Reporting Date

[Table: loan-level roll rates and buyout efficiency for the top ten GNMA bank servicers]

Surprisingly, many banks exhibit very low buyout efficiencies, including Flagstar, Citizens, and Fifth Third. Navy Federal and USAA (next table) show muted buyout performance due to their high VA concentration.

Next, we summarize roll rates and buyout efficiency for the top ten GNMA non-bank servicers.

Roll Rates for Ginnie Mae Non-bank Servicers, for July 2020 Reporting Date

[Table: loan-level roll rates and buyout efficiency for the top ten GNMA non-bank servicers]

Not surprisingly, non-banks as a group are much less efficient at buying out eligible loans, but Carrington stands out.

Looking forward, how can investors quantify the potential CBR exposure in a sector? Investors can use Edge to estimate the upcoming August buyouts within a sector by running a servicer query to separate a set of pools or a cohort into its servicer-specific delinquencies.[3] Investors can then apply that servicer’s 60DQ->90DQ roll rate and buyout efficiency to estimate a CBR.[4] This CBR will contribute to the overall CBR for a pool or set of pools.
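As a back-of-the-envelope illustration of that arithmetic: the inputs below are placeholders to be filled with servicer-level figures from such a query, and the annualization mirrors the standard SMM-to-CPR convention.

```python
def projected_cbr(dq60_share: float, roll_to_eligible: float, buyout_efficiency: float) -> float:
    """Estimate a one-month buyout rate for a servicer's piece of a pool and
    annualize it to a CBR (the buyout analogue of CPR).

    dq60_share        -- fraction of the servicer's balance 60+ days delinquent
    roll_to_eligible  -- probability a 60DQ loan becomes buyout-eligible next month
    buyout_efficiency -- fraction of eligible loans the servicer actually repurchases
    All three inputs are assumptions to be supplied from servicer-level data.
    """
    monthly_buyout_rate = dq60_share * roll_to_eligible * buyout_efficiency   # like an SMM
    return 1 - (1 - monthly_buyout_rate) ** 12

# Illustrative inputs: 8.6% 60-day delinquency and 99% efficiency (as cited above
# for Wells); the 95% roll rate is an assumption. Result is roughly 64% before
# weighting by the servicer's share of the pool.
print(f"{projected_cbr(0.086, 0.95, 0.99):.1%}")
```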

Given the significant premium at which GNMA passthroughs are trading, the profits from repurchase and re-securitization are substantial. While we expect repurchases will continue to play an outsized role in GNMA speeds, this analysis illustrates the extent to which this behavior can vary from servicer to servicer, even within the bank and non-bank sectors. Investors can mitigate this risk by quantifying the servicer-specific 60-day delinquency within their portfolio to get a clearer view of the potential impact from buyouts.

If you are interested in seeing variations on this theme, contact us. Using Edge, we can examine any loan characteristic and generate an S-curve, aging curve, or time series.


 

 

[1] This post builds on our March 24 write-up on bank versus non-bank delinquencies, link here. For this analysis, we limited our analysis to loans in 3% pools and higher, issued 2010-2020. Please see RiskSpan for other data cohorts.

[2] CBR is the Conditional Buyout Rate, the buyout analogue of CPR.

[3] In Edge, select the “Expanded Output” to generate servicer-by-servicer delinquencies.

[4] RiskSpan now offers loan-level delinquency transition matrices. Please email techsupport@riskspan.com for details.

 


Is Free Public Data Worth the Cost?

No such thing as a free lunch.

The world is full of free (and semi-free) datasets ripe for the picking. If it’s not going to cost you anything, why not supercharge your data and achieve clarity where once there was only darkness?

But is it really not going to cost you anything? What is the total cost of ownership for a public dataset, and what does it take to distill truly valuable insights from publicly available data? Setting aside the reliability of the public source (a topic for another blog post), free data is anything but free. Let us discuss both the power and the cost of working with public data.

To illustrate the point, we borrow from a classic RiskSpan example: anticipating losses to a portfolio of mortgage loans due to a hurricane—a salient example as we are in the early days of the 2020 hurricane season (and the National Oceanic and Atmospheric Administration (NOAA) predicts a busy one). In this example, you own a portfolio of loans and would like to understand the possible impacts to that portfolio (in terms of delinquencies, defaults, and losses) of a recent hurricane. You know this will likely require an external data source because you do not work for NOAA, your firm is new to owning loans in coastal areas, and you currently have no internal data for loans impacted by hurricanes.

Know the Data.

The first step in using external data is understanding your own data. This may seem like a simple task. But data, its source, its lineage, and its nuanced meaning can be difficult to communicate inside an organization. Unless you work with a dataset regularly (i.e., often), you should approach your own data as if it were provided by an external source. The goal is a full understanding of the data, the data’s meaning, and the data’s limitations, all of which should have a direct impact on the types of analysis you attempt.

Understanding the structure of your data and the limitations it puts on your analysis involves questions like:

  • What objects does your data track?
  • Do you have time series records for these objects?
  • Do you only have the most recent record? The most recent 12 records?
  • Do you have one record that tries to capture life-to-date information?

Understanding the meaning of each attribute captured in your data involves questions like:

  • What attributes are we tracking?
  • Which attributes are updated (monthly or quarterly) and which remain static?
  • What are the nuances in our categorical variables? How exactly did we assign the zero-balance code?
  • Is original balance the loan’s balance at mortgage origination, or the balance when we purchased the loan/pool?
  • Do our loss numbers include forgone interest?

These same types of questions also apply to understanding external data sources, but the answers are not always as readily available. Depending on the quality and availability of the documentation for a public dataset, this exercise may be as simple as just reading the data dictionary, or as labor intensive as generating analytics for individual attributes, such as mean, standard deviation, mode, or even histograms, to attempt to derive an attribute’s meaning directly from the delivered data. This is the not-free part of “free” data, and skipping this step can have negative consequences for the quality of analysis you can perform later.
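Where documentation is thin, that attribute-level interrogation can be partly automated. Below is a minimal profiling sketch; the column handling is generic, and nothing here is specific to any particular public dataset.

```python
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Quick per-attribute profile to help infer meaning and spot surprises:
    missingness, cardinality, and basic distribution statistics."""
    rows = []
    for col in df.columns:
        s = df[col]
        numeric = pd.api.types.is_numeric_dtype(s)
        rows.append({
            "column": col,
            "dtype": str(s.dtype),
            "pct_missing": s.isna().mean(),
            "n_unique": s.nunique(dropna=True),
            "mean": s.mean() if numeric else None,
            "std": s.std() if numeric else None,
            "mode": s.mode().iloc[0] if not s.mode().empty else None,
        })
    return pd.DataFrame(rows).set_index("column")

# profile(loans).to_csv("loan_attribute_profile.csv")   # hypothetical usage
```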

Returning to our example, we require at least two external data sets:  

  1. where and when hurricanes have struck, and
  2. loan performance data for mortgages active in those areas at those times.

The obvious choice for loan performance data is the historical performance datasets from the GSEs (Fannie Mae and Freddie Mac). Providing monthly performance information and loss information for defaulted loans for a huge sample of mortgage loans over a 20-year period, these two datasets are perfect for our analysis. For hurricanes, some manual effort is required to extract date, severity, and location from NOAA maps like these (you could get really fancy and gather zip codes covered in the landfall area—which, by leaving out homes hundreds of miles away from expected landfall, would likely give you a much better view of what happens to loans actually impacted by a hurricane—but we will stick to state-level in this simple example).

Make new data your own.

So you’ve downloaded the historical datasets, you’ve read the data dictionaries cover-to-cover, you’ve studied historical NOAA maps, and you’ve interrogated your own data teams for the meaning of internal loan data. Now what? This is yet another cost of “free” data: after all your effort to understand and ingest the new data, all you have is another dataset. A clean, well-understood, well-documented (you’ve thoroughly documented it, haven’t you?) dataset, but a dataset nonetheless. Getting the insights you seek requires a separate effort to merge the old with the new. Let us look at a simplified flow for our hurricane example (a code sketch of the flow follows the list):

  • Subset the GSE data for active loans in hurricane-related states in the month prior to landfall. Extract information for these loans for 12 months after landfall.
  • Bucket the historical loans by the characteristics you use to bucket your own loans (LTV, FICO, delinquency status before landfall, etc.).
  • Derive delinquency and loss information for the buckets for the 12 months after the hurricane.
  • Apply the observed delinquency and loss information to your loan portfolio (bucketed using the same scheme you used for the historical loans).
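Here is a minimal sketch of that flow in code, covering the delinquency piece (loss rates would follow the same pattern). File names, column names, bucket boundaries, and the state-level hurricane definition are all illustrative assumptions, and a real analysis would work through the GSE files in chunks rather than in one pass.

```python
import pandas as pd

# Illustrative only: file names, column names, and the state-level hurricane
# window are assumptions, not the GSEs' actual field names.
perf = pd.read_parquet("gse_performance.parquet")            # monthly GSE performance records
landfall, states = pd.Timestamp("2017-09-01"), ["FL", "TX"]

def bucket(df):
    # The same bucketing scheme must be applied to historical loans and your own portfolio.
    return (pd.cut(df["fico"], [0, 680, 740, 850]).astype(str) + "|"
            + pd.cut(df["ltv"], [0, 80, 95, 200]).astype(str))

# 1. Loans active in affected states in the month before landfall.
pre = perf[(perf["month"] == landfall - pd.DateOffset(months=1))
           & perf["state"].isin(states)].copy()
pre["bucket"] = bucket(pre)

# 2-3. Worst delinquency observed in the 12 months after landfall, summarized by bucket.
post = perf[perf["loan_id"].isin(pre["loan_id"])
            & perf["month"].between(landfall, landfall + pd.DateOffset(months=12))]
worst = post.groupby("loan_id")["months_delinquent"].max().ge(3).rename("dq90")
bucket_dq = pre.join(worst, on="loan_id").groupby("bucket")["dq90"].mean()

# 4. Apply the observed bucket-level rates to your own, identically bucketed, portfolio.
port = pd.read_csv("my_portfolio.csv")
port["expected_dq90"] = bucket(port).map(bucket_dq)
expected_rate = (port["upb"] * port["expected_dq90"]).sum() / port["upb"].sum()
```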

And there you have it—not a model, but a grounded expectation of loan performance following a hurricane. You have stepped out of the darkness and into the data-driven light. And all using free (or “free”) data!

Hyperbole aside, nothing about our example analysis is easy, but it plainly illustrates the power and cost of publicly available data. The power is obvious in our example: without the external data, we have no basis for generating an expectation of losses after a hurricane. While we should be wary of the impacts of factors not captured by our datasets (like the amount and effectiveness of government intervention after each storm – which does vary widely), the historical precedent we find by averaging many storms can form the basis for a robust and defensible expectation. Even if your firm has had experience with loans in hurricane-impacted areas, expanding the sample size through this exercise bolsters confidence in the outcomes. Generally speaking, the use of public data can provide grounded expectations where there had been only anecdotes.

But this power does come at a price—a price that should be appreciated and factored into the decision whether to use external data in the first place. What is worse than not knowing what to expect after a hurricane? Having an expectation based on bad or misunderstood data. Failing to account for the effort required to ingest and use free data can lead to bad analysis and the temptation to cut corners. The effort required in our example is significant: the GSE data is huge, complicated, and will melt your laptop’s RAM if you are not careful. Turning NOAA PDF maps into usable data is not a trivial task, especially if you want to go deeper than the state level. Understanding your own data can be a challenge. Applying an appropriate bucketing to the loans can make or break the analysis. Not all public datasets present these same challenges, but all public datasets present costs. There simply is no such thing as a free lunch. The returns on free data frequently justify these costs. But they should be understood before unwittingly incurring them.


Chart of the Month: Not Just the Economy — Asset Demand Drives Prices

Within weeks of the March 11th declaration of COVID-19 as a global pandemic by the World Health Organization, rating agencies were downgrading businesses across virtually every sector of the economy. Not surprisingly, these downgrades were felt most acutely by businesses that one would reasonably expect to be directly harmed by the ensuing shutdowns, including travel and hospitality firms and retail stores. But the downgrades also hit food companies and other areas of the economy that tend to be more recession resistant. 

An accompanying spike in credit spreads was even quicker to materialize. Royal Caribbean’s and Marriott’s credit spreads tripled essentially overnight, while those of other large companies increased by twofold or more. 

But then something interesting happened. Almost as quickly as they had risen, most of these spreads began retreating to more normal levels. By mid-June, most spreads were at or lower than where they were prior to the pandemic declaration. 

What business reason could plausibly explain this? The pandemic is ongoing and aggregate demand for these companies’ products does not appear to have rebounded in any material way. People are not suddenly flocking back to Marriott’s hotels or Wynn’s resorts.    

The story is indeed one of increased demand. But rather than demand for the companies’ products, we’re seeing an upswing in demand for these companies’ debt. What could be driving this demand?

Enter the Federal Reserve. On March 23rd, The Fed announced that its Secondary Market Corporate Credit Facility (SMCCF) would begin purchasing investment-grade corporate bonds in the secondary market, first through ETFs and directly in a later phase. 

And poof! Instant demand. And instant price stabilization. All the Fed had to do was announce that it would begin buying bonds (it hasn’t actually started buying yet) for demand to rush back in, push prices up and drive credit spreads down.  

To illustrate how quickly spreads reacted to the Fed’s announcement, we tracked seven of the top 20 companies listed by S&P across different industries from early March through mid-June. The chart below plots swap spreads for a single bond (with approximately five years to maturity) from each of the following companies: 

  • Royal Caribbean Cruises (RCL)
  • BMW 
  • The TJX Companies (which includes discount retailers TJ Maxx, Marshalls, and HomeGoods, among others) 
  • Marriott 
  • Wynn Resorts 
  • Kraft Foods 
  • Ford Motor Company

[Chart: Credit Spreads React to Fed More than Downgrades]

We sourced the underlying data for these charts from two RiskSpan partners: S&P, which provided the timing of the downgrades, and Refinitiv, which provided time-series spread data.  

The companies we selected don’t cover every industry, of course, but they cover a decent breadth. Incredibly, with the lone exception of Royal Caribbean, swap spreads for every one of these companies are either better than or at the same level as where they were pre-pandemic. 

As alluded to above, this recovery cannot be attributed to some miraculous improvement in the underlying economic environment. Literally the only thing that changed was the Fed’s announcement that it would start buying bonds. The fact that Royal Caribbean’s spreads have not fully recovered suggests that perceived weakness in demand for cruises for the foreseeable future is pronounced enough to overwhelm any buoying effect of the impending SMCCF investment. For all the remaining companies, the Fed’s announcement appears to be doing the trick.

We view this as clear and compelling evidence that the Federal Reserve is achieving its intended result of stabilizing asset prices, which in turn should help ease corporate credit conditions.


COVID-19 and the Cloud

COVID-19 creates a need for analytics in real time

Regarding the COVID-19 pandemic, Warren Buffett has observed that “we haven’t faced anything that quite resembles this problem” and that the fallout is “still hard to evaluate.”

The pandemic has created an unprecedented shock to economies and asset performance. The recent unemployment data, although encouraging, has only added to the uncertainty. Furthermore, impact and recovery are uneven, often varying considerably from county to county and city to city. Consider:

  1. COVID-19 cases and fatalities were initially concentrated in just a few cities and counties resulting in almost a total shutdown of these regions. 
  2. Certain sectors, such as travel and leisure, have been affected worse than others while other sectors such as oil and gas have additional issues. Regions with exposure to these sectors have higher unemployment rates even with fewer COVID-19 cases. 
  3. Timing of reopening and recoveries has also varied due to regional and political factors. 

Regional employment, business activity, consumer spending and several other macro factors are changing in real time. This information is available through several non-traditional data sources. 

Legacy models are not working, and several known correlations are broken. 

Determining value and risk in this environment is requiring unprecedented quantities of analytics and on-demand computational bandwidth. 


Need for on-demand computation and storage across the organization 

“I don’t need a hard disk in my computer if I can get to the server faster… carrying around these non-connected computers is byzantine by comparison.” ~ Steve Jobs


Front office, risk management, quants and model risk management – every aspect of the analytics ecosystem requires the ability to run large numbers of scenarios quickly.

Portfolio managers need to recalibrate asset valuation, manage hedges and answer questions from senior management, all while looking for opportunities to find cheap assets. Risk managers are working closely with quants and portfolio managers to better understand the impact of this unprecedented environment on assets. Quants must not only support existing risk and valuation processes but also be able to run new estimations and explain model behavior as data streams in from a variety of sources.

These activities require several processors and large storage units to be stood up on demand. Even in normal times, infrastructure teams require at least 10 to 12 weeks to procure and deploy additional hardware. With most of the financial services world now working remotely, this time lag is further exacerbated.

No individual firm maintains enough excess capacity to accommodate such a large and urgent need for data and computation. 

The work-from-home model has proven that we have sufficient internet bandwidth to enable the fast access required to host and use data on the cloud. 

Cloud is about how you do computing

“Cloud is about how you do computing, not where you do computing.” ~ Paul Maritz, CEO of VMware 


Cloud computing is now part of everyday vocabulary and powers even the most common consumer devices. However, financial services firms are still in early stages of evaluating and transitioning to a cloud-based computing environment. 

Cloud is the only way to procure the level of surge capacity required today. At RiskSpan we are computing an average of half a million additional scenarios per client on demand. Users don’t have the luxury of waiting for an overnight batch process to react to changing market conditions. End users fire off a new scenario assuming that the hardware will scale up automagically.

When searching Google’s large dataset or using Salesforce to run analytics we expect the hardware scaling to be limitless. Unfortunately, valuation and risk management software are typically built to run on a pre-defined hardware configuration.  

Cloud native applications, in contrast, are designed and built to leverage the on-demand scaling of a cloud platform. Valuation and risk management products offered as SaaS scale on-demand, managing the integration with cloud platforms. 

Financial services firms don’t need to take on the burden of rewriting their software to work on the cloud. Platforms such as RS Edge enable clients to plug their existing data, assumptions and models into a cloud-native platform. This enables them to get all the analytics they’ve always had—just faster and cheaper.

Serverless access can also help companies provide access to their quant groups without incurring additional IT resource expense. 

A recent survey from Flexera shows that 30% of enterprises have increased their cloud usage significantly due to COVID-19.

[Chart: Flexera survey on increased cloud usage due to COVID-19]

Cloud is cost effective 

“In 2000, when my partner Ben Horowitz was CEO of the first cloud computing company, Loudcloud, the cost of a customer running a basic Internet application was approximately $150,000 a month.” ~ Marc Andreessen, Co-founder of Netscape, Board Member of Facebook


Cloud hardware is cost effective, primarily due to the on-demand nature of the pricing model. A $250B asset manager uses RS Edge to run millions of scenarios during a 45-minute window every day. The analysis is performed on over a thousand servers at a cost of $500 per month. The same hardware, if deployed around the clock, would cost $27,000 per month.

Cloud is not free and can be a double-edged sword. The same on-demand aspect that enables end users to spin up servers as needed can, if not monitored, cause the cost of those servers to accumulate to undesirable levels. One of the benefits of a cloud-native platform is built-in procedures to drop unused servers, which minimize the risk of paying for unused bandwidth.

And yes, Mr. Andreessen’s basic application can be hosted today for less than $100 per month.

The same survey from Flexera shows that organizations plan to increase public cloud spending by 47% over the next 12 months. 

[Chart: Flexera survey on planned increases in public cloud spending]

Alternate data analysis

“The temptation to form premature theories upon insufficient data is the bane of our profession.” ~ Sir Arthur Conan Doyle, Sherlock Holmes.


Alternate data sources are not always easily accessible and available within analytic applications. The effort and time required to integrate them can be wasted if the usefulness of the information cannot be determined upfront. Timing of analyzing and applying the data is key. 

Machine learning techniques offer quick and robust ways of analyzing data. Tools to run these algorithms are not readily available on a desktop computer.  

Every major cloud platform provides a wealth of tools, algorithms and pre-trained models to integrate and analyze large and messy alternate datasets. 

Join fintova’s Gary Maier and me at 1 p.m. EDT on June 24th as we discuss other important factors to consider when performing analytics in the cloud. Register now.


Webinar: Data Analytics and Modeling in the Cloud – June 24th

On Wednesday, June 24th, at 1:00 PM EDT, join Suhrud Dagli, RiskSpan’s co-founder and chief innovator, and Gary Maier, managing principal of Fintova for a free RiskSpan webinar.

Suhrud and Gary will contrast the pros and cons of analytic solutions native to leading cloud platforms, as well as tips for ensuring data security and managing costs.

Click here to register for the webinar.


Chart of the Month: Tracking Mortgage Delinquency Against Non-traditional Economic Indicators by MSA


Traditional economic indicators lack the timeliness and regional granularity necessary to track the impact of the COVID-19 pandemic on communities across the country. Unemployment reports published by the Bureau of Labor Statistics, for example, tend to have latency issues and don’t cover all workers. As regional economies attempt to get back to a new “normal,” RiskSpan has begun compiling non-traditional “alternative” data that can provide a more granular and real-time view of issues and trends. In past crises, traditional macro indicators such as home price indices and unemployment rates were sufficient to explain the trajectory of consumer credit. In the current crisis, however, mortgage delinquencies are deteriorating more rapidly and with significant regional dispersion. Serious mortgage delinquencies in the New York metro region stood at around 1.1% in April 2009, whereas 30-day delinquencies reached 9.9% of UPB in April 2020.

STACR loan-level data shows that 30-day delinquencies increased nationwide from 0.8% to 4.2%. In the chart below, we track the performance and state of employment of five large metros (MSAs).
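A sketch of how a delinquency series like this can be built from loan-level performance data. The file layout, column names, and delinquency coding are assumptions; the CBSA codes are shown only as an example of selecting a handful of metros.

```python
import pandas as pd

# Sketch: 30-day delinquency rate (share of UPB) by MSA from loan-level
# performance data. Column names are assumptions, not STACR field names.
perf = pd.read_csv("stacr_monthly_performance.csv",
                   usecols=["period", "msa", "current_upb", "dq_status"])

dq30 = (perf.assign(dq_upb=perf["current_upb"] * (perf["dq_status"] == 1))
            .groupby(["period", "msa"])[["dq_upb", "current_upb"]].sum())
dq30["dq30_pct_of_upb"] = dq30["dq_upb"] / dq30["current_upb"]

# Track five large metros over time (illustrative CBSA codes: NY, LA, Chicago, Dallas, Houston)
metros = [35620, 31080, 16980, 19100, 26420]
print(dq30.loc[(slice(None), metros), "dq30_pct_of_upb"].unstack("msa").tail())
```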

[Chart of the Month: mortgage delinquency and economic indicators for five MSAs]


Indicators included in our Chart of the Month: 

Change in unemployment is the BLS measure computed from unemployment claims. Traditionally, this indicator has been used to measure the economic health of a region, but BLS reporting typically lags by weeks or months.

Air quality index is a measure we calculate from the level of PM2.5 reported daily by EPA’s AirNow database. This metric serves as a proxy for vehicular traffic in different regions. Using a nationwide network of monitoring sites, EPA has developed ambient air quality trends for particle pollution, also called particulate matter (PM). We compute the index as the daily level of PM2.5 relative to the average of the last five years. For regions that are still under a shutdown, the air quality index should be below 100 (e.g., New York at 75% vs. Houston at 105%).
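A simplified sketch of that index calculation, assuming a daily PM2.5 extract by metro area (the file layout and column names are illustrative):

```python
import pandas as pd

# Daily PM2.5 divided by that calendar day's average over the prior five years.
pm = pd.read_csv("airnow_pm25_daily.csv", parse_dates=["date"])     # columns: date, msa, pm25
pm["doy"] = pm["date"].dt.dayofyear

hist = pm[pm["date"].dt.year.between(2015, 2019)]
baseline = hist.groupby(["msa", "doy"])["pm25"].mean().rename("pm25_5yr_avg")

current = pm[pm["date"].dt.year >= 2020].join(baseline, on=["msa", "doy"])
current["air_quality_index"] = 100 * current["pm25"] / current["pm25_5yr_avg"]
# Values below 100 suggest less traffic than in a typical year (e.g., a region still shut down).
```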

Air pollution from traffic has increased in regions where businesses have opened in May ’20 (e.g. LA went up from 69% in April to 98% in May).  However, consumer spending has not always increased at the same level.  We look to proxies for hourly employment levels. 

New Daily COVID-19 Cases: This is a health crisis, and managing the rate of new COVID-19 cases will drive decisions to open or close businesses. The chart reports the monthly peak in new cases using daily data from Opportunity Insight.

Hourly Employment and Hours Worked at small businesses is provided by Opportunity Insight using data from Homebase. Homebase is a company that provides virtual scheduling and time-tracking tools, focused on small businesses in sectors such as retail, restaurant, and leisure/accommodation. The chart shows the change in the level of hourly employment compared to January 2020. We expect this to be a leading indicator of employment levels for this sector of consumers.


Sources of data: 

Freddie Mac’s (STACR) transaction database 

Opportunity Insight’s Recovery Tracker 

Bureau of Labor Statistics (BLS) MSA-level economic reports

Environmental Protection Agency (EPA)’s AirNow database.


Edge: PIW and Prepayments

Inspection waivers have been available on agency-backed mortgages since 2017, but in this era of social distancing, the convenience of forgoing an inspection looks set to become an important feature in mortgage origination. In this post, we compare prepayments on loans with and without inspections.

Broadly, FNMA allows inspection waivers on purchase single-family mortgages with LTV up to 80%, and on no-cash-out refis with LTV up to 90% (75% if the refi is on an investment property). Inspection waivers are available on cash-out refis for primary residences with LTV up to 70%, and for investment properties with LTV up to 60%.
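Encoded as a rule of thumb, those limits look roughly like the function below. This is only an illustration of the broad LTV limits described above; actual GSE eligibility depends on additional conditions (prior appraisal data, property type, and so on).

```python
def piw_eligible(purpose: str, occupancy: str, ltv: float) -> bool:
    """Rough encoding of the LTV limits described above; not a full eligibility check.
    purpose:   'purchase', 'rate_term_refi', or 'cash_out_refi'
    occupancy: 'primary', 'second_home', or 'investment'
    """
    if purpose == "purchase":
        return ltv <= 80
    if purpose == "rate_term_refi":
        return ltv <= (75 if occupancy == "investment" else 90)
    if purpose == "cash_out_refi":
        if occupancy == "primary":
            return ltv <= 70
        if occupancy == "investment":
            return ltv <= 60
    return False   # combinations not described above are treated as ineligible here

# piw_eligible("purchase", "primary", 78.5)  -> True
```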

Inspection waivers were first introduced in mid-2017. In 2018, the proportion of loans with inspection waivers held steady around 6% but started a steady uptick in the middle of 2019, long before the pandemic made social distancing a must.[1]

[Chart: Proportion of New Issuance with Waivers]

[Chart: Cumulative Proportion of Loans with Waivers]

In the current environment, market participants should expect a further uptick in loans with waivers as refis increase and as the GSEs consider relaxing restrictions around qualifying loans. In short, property inspection waivers (PIW) will become a key factor in loan origination. Given this, we examine the differences in behavior between loans with waivers and loans with inspections.

In the chart below, we show prepayment speeds on 30yr borrowers with “generic” mortgages,[2] with and without waivers. When 100bp in the money, “generic” loans with a waiver paid a full 15 CPR faster than loans with an inspection appraisal. Additionally, the waiver S-curve is steeper. Waiver loans that are 50-75bp in the money outpaced appraised houses by 20 CPR.

[Chart: Refi incentive vs. CPR for loans with and without waivers]
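The charts here were generated in Edge, but the underlying S-curve calculation is straightforward. A hedged sketch, assuming a loan-month history with prepaid and scheduled balances and an appraisal-waiver flag (all file and column names are assumptions):

```python
import pandas as pd

df = pd.read_csv("loan_month_history.csv")        # one row per loan per month; columns assumed
df["incentive_bp"] = (df["note_rate"] - df["prevailing_mtg_rate"]) * 100
df["bucket"] = pd.cut(df["incentive_bp"], range(-100, 201, 25))

agg = df.groupby(["bucket", "appraisal_waiver"], observed=True)[["prepaid_upb", "scheduled_upb"]].sum()
smm = agg["prepaid_upb"] / agg["scheduled_upb"]   # single-month mortality per incentive bucket
cpr = 1 - (1 - smm) ** 12                         # annualize to CPR
print(cpr.unstack("appraisal_waiver"))            # waiver vs. no-waiver S-curves
```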

Next, we look at PIW by origination channel. For retail originations, loans with waivers paid only 10-15 CPR faster than loans with inspections (first graph). In contrast, correspondent loans with a waiver paid 15-20 CPR faster than loans with an inspection (second graph).

[Chart: Refi incentive vs. CPR, retail channel]

[Chart: Refi incentive vs. CPR, correspondent channel]

We also looked at loan purpose. Purchase loans with a waiver paid only 10 CPR faster than comparable purchase loans with an inspection (first graph), whereas refi loans paid 25 CPR faster when 50-75bp in the money.

[Chart: Refi incentive vs. CPR by loan purpose]

PIW and Prepayments in RS Edge

We also examined servicer-specific behavior for PIW. We saw both a difference in the proportional volume of waivers, with some originators producing a heavy concentration of waivers, and a difference in speeds. The details are lengthy; please contact us to learn how to run this query in the Edge platform.

In summary, loans with inspection waivers pay faster than loans without waivers, but the differentials vary greatly by channel and loan purpose. With property inspection waivers rising as a percentage of overall origination, these differences will begin to play a larger role in forming overall prepayment expectations.

If you are interested in seeing variations on this theme, contact us. Using RS Edge, we can examine any loan characteristic and generate an S-curve, aging curve, or time series.


 

 

[1] Refi loans almost entirely drove this uptick in waivers; see RiskSpan for a breakdown of refi loans with waivers.

[2] For this query, we searched for loans delivered to 30yr deliverable pools with loan balance greater than $225k, FICO greater than 700, and LTV below 80%.


Webinar: Managing Your Whole Loan Portfolio with Machine Learning


Managing Your Whole Loan Portfolio with Machine Learning

Whole Loan Data Meets Predictive Analytics

  • Ingest whole loan data
  • Normalize data sets
  • Improve data quality
  • Analyze your historical data
  • Improve your predictive analytics 

Learn the Power of Machine Learning

DATA INTAKE — How to leverage machine learning to help streamline whole loan data prep

MANAGE DATA — Innovative ways to manage the differences in large data sets

DATA IMPROVEMENT — Easily clean your data to drive better predictive analytics


LC Yarnelle

Director – RiskSpan

LC Yarnelle is a Director with experience in financial modeling, business operations, requirements gathering and process design. At RiskSpan, LC has worked on model validation and business process improvement/documentation projects. He also led the development of one of RiskSpan’s software offerings and has led multiple development projects for clients, utilizing both Waterfall and Agile frameworks. Prior to RiskSpan, LC was an analyst at NVR Mortgage in the secondary marketing group in Reston, VA, where he was responsible for daily pricing as well as ongoing process improvement activities. Before a career move into finance, LC was the director of operations and a minority owner of a small business in Fort Wayne, IN. He holds a BA from Wittenberg University, as well as an MBA from Ohio State University.

Matt Steele

Senior Analyst – RiskSpan



Open Source Governance: Three Potential Risks

For many companies, the question is no longer whether to use open-source tools, but rather how to implement them with the appropriate governance and controls. Have security concerns been accounted for?  How does one effectively institute controls over bad code?  Are there legal implications for using open-source software?

Open Source Security Risks

Open-source software is not inherently more or less prone to malicious code injections than proprietary software. It is true that anyone can push a code enhancement for a new version, and it may be possible for the senior contributors to miss intentional malware. In these circumstances, however, open source has an advantage over proprietary software, captured in what Eric S. Raymond coined in 1999 as Linus’s Law: “Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.” It is unlikely that a deliberate security error will go unnoticed by the many pairs of eyes on each release. Nevertheless, security issues persist.

Open-Source Security – An Example

Debian, a Unix-like computer operating system, was one of the first to be based on the Linux kernel. Like many systems, it utilizes OpenSSL, a software library that provides an open-source implementation of the Secure Sockets Layer (SSL) protocol, commonly used by applications that require secure communications over a network.

In 2006, a snippet of code was removed from Debian’s OpenSSL package after one of the contributors found that it caused runtime warnings generated by other packages. After the removal, the pseudorandom number generator (PRNG) generated SSL keys using only the process ID (in Linux, a number up to 32,768) to the exclusion of all other random data. Since a relatively small number of values was used, the keys created over a period of almost two years were too predictable to be used securely. Users became aware of the issue 20 months after the bug was introduced, leading to costly security resolutions for companies and individuals who relied on Debian’s OpenSSL implementation.[1]
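To see why a PID-only seed is fatal, consider a toy generator. This is an illustration, not the actual Debian or OpenSSL code; the point is only the size of the resulting key space.

```python
import hashlib
import random

def toy_keygen(pid: int) -> str:
    """Toy stand-in for a key generator whose only entropy is the process ID."""
    rng = random.Random(pid)                      # seeded solely by the PID
    return hashlib.sha256(rng.getrandbits(256).to_bytes(32, "big")).hexdigest()

# PIDs on a default Linux configuration run only up to 32,768, so an attacker
# can enumerate every key such a broken generator could ever have produced.
candidates = {toy_keygen(pid) for pid in range(1, 32769)}
print(len(candidates))                            # 32768 -- trivially brute-forceable
```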

OpenSSL was again the subject of negative attention when a bug dubbed ‘Heartbleed’ was introduced to the code in 2012 and disclosed to the public in 2014. A fixed version of OpenSSL was released on the same day the issue was announced. More than a month after the release, however, 1.5% of the 800,000 most popular affected websites were still vulnerable to the security bug. [2]

The good news is that such vulnerabilities are documented in the Common Vulnerabilities and Exposures (CVE) system, and they are not so common. For Python 2.7, the popular version released in 2010, 15 vulnerabilities were recorded from 2010 to 2016, only one of which is considered ‘High’ severity, with a CVSS score of 7.5.  jQuery, a JavaScript library that simplifies some components of web application development and the most common open-source component identified in the latest Open Source Security and Risk Analysis (OSSRA) report, only has four known vulnerabilities from 2007 to 2017, none of which rank higher than a ‘Medium’. The CVE is just one tool available for improving the security profile of software applications, but technologists must remain vigilant and abreast of known issues. Corporate IT governance frameworks should be continuously updated to keep up with the changing structure of the underlying technology itself.

Bad Code

Serious security vulnerabilities may not be a daily occurrence, but bad code can affect software at any time. pandas, a popular open-source software library used in Python implementations for data manipulation and analysis, was first released in 2009. Since then, its contributors have identified over 10,000 issues, 1,933 of which are currently considered unresolved.[3] A company that relies on accurate output from a codebase that uses pandas needs to be vigilant not only in testing the code written by its in-house developers, but also in verifying that all outstanding known pandas issues are covered by workarounds and the rest of the functionality is sound. Developers and testers who are not intimately familiar with the pandas source code must devise creative testing tools to ensure complete integrity of applications that rely on it.

Bad Code – An Example

The Comma-Separated Values (CSV) file is one of many data formats that pandas can load for data manipulation and analysis, in this case using the built-in read_csv function. read_csv has a number of optional parameters intended to simplify the data import, one of which is parse_dates, which, as the name implies, tells pandas to automatically parse dates in the data, using a recognition algorithm to determine the format in each date-populated column.

However, if a row of data contains a blank value where a date is expected, pandas may populate that field with today’s date — a bug first formally reported against version 0.9 in 2012 [4] (and closed three days after it was opened) and reported again in 2014.[5] The issue was not closed until the end of 2016, when one of the contributors noted that the tests passed for version 0.19, stating that he was “not sure when this was fixed, but it doesn’t seem like it occurred recently.” [6]

In the meantime, pandas versions prior to 0.19 may have produced incorrect date-related parameters if blank fields were fed to the system. For example, a mortgage-backed security may have had an incorrectly calculated weighted average loan age if some of its loans had blank first payment dates, causing those rows to carry a loan age of zero.
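A defensive pattern for this class of problem is to verify date parsing explicitly rather than trusting the import. A minimal sketch, with illustrative file and column names:

```python
import pandas as pd

# Read the dates, then confirm that blank fields came through as NaT rather
# than silently defaulting to today's date (as in the old pandas bug above).
loans = pd.read_csv("pool_loans.csv", parse_dates=["first_payment_date"])

today = pd.Timestamp.today().normalize()
defaulted = loans["first_payment_date"].eq(today).sum()
assert defaulted == 0, f"{defaulted} loans parsed with today's date -- check for blank inputs"

# Compute weighted average loan age only over loans with a known first payment
# date, rather than letting a defaulted date imply a loan age of zero.
known = loans.dropna(subset=["first_payment_date"])
age_months = (today - known["first_payment_date"]).dt.days / 30.44
wala = (age_months * known["current_upb"]).sum() / known["current_upb"].sum()
```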

In addition to implementing security testing, IT controls must include a clear framework for testing both in-house and open-source components of all applications, especially high-impact programs.

Open-Source Licensing

Finally, it is important to be aware of open-source licensing constraints and to maintain active licensing governance activities to avoid legal issues in the future. Similar to the copyright concept, some open source creators have adopted the concept of ‘copyleft’ to ensure that “anyone who redistributes the software, with or without changes, must pass along the freedom to further copy and change it. [7]  This means that, legally, for any software that contains a copylefted open source component, whether it comprises 99% or 0.1% of the application code, the entire source code must be distributed with the software or be made available upon request. This is not an issue when the software is distributed internally among corporate users, but it can become more problematic when the company intends to sell or otherwise provide the software without revealing the internally developed codebase. Not all open-source software is copylefted – in fact, many popular licenses are highly permissive with very few restrictions. Below is a summary of the four most popular open-source licenses. [8]

Of the four, only the GNU General Public License (GPL, all versions) requires that source code be disclosed by those who distribute the software. Between 20% and 25% of all open-source software is covered by the GNU GPL.

OSSRA found that 75% of applications contained at least some components under the GPL family of licenses, and that only 45% of those applications complied with the GPL copyleft obligations. Overall, 89% of the applications maintained by the Financial Services and FinTech industries contained at least one licensing conflict.

Most open-source software, even that which is licensed under the GNU GPL, can be used commercially. For example, a company can use and internally distribute a financial model written in R, an open-source programming language licensed under the GNU GPL 2.0. However, important legal consequences must be considered if the developed code will be later distributed outside of the company as a proprietary application. If the organization were to sell the R-based model, the entire source code would have to be made available to the paying user, who would also be free to distribute the code, for free or at a price. Alternatively, a model implemented in Python, which is licensed under a Berkeley Software Distribution (BSD)-like agreement, could be distributed without exposing the source code.

Open-Source Governance and Controls

Governance risks are specific to how open-source tools are integrated into existing operations. These risks can stem from a lack of formal training, lack of service and support, violations of third-party intellectual property rights, or instability and incompatibility with existing operating environments. Successful users of open-source code and tools devise effective means of identifying and measuring these risks. They ensure that these risks are included in process risk assessments to facilitate identification and mitigation of potential control weaknesses.

Security vulnerabilities, code issues, and software licensing should not deter developers from using the plethora of useful open-source tools. Open-source issues and bugs are viewed and tested by thousands of capable developers, increasing the likelihood of a speedy resolution. In addition, a company’s own development team has full access to the source code, making it possible to fix issues without relying on anyone else. As with any application, effective governance and controls are essential to a successful open-source application. These ensure that software is used securely and appropriately and that a comprehensive testing framework is applied to minimize inaccuracies. The world of open source is changing constantly – we all just need to keep up.
