Articles Tagged with: Agency MBS

RiskSpan Edge & CRT Data

For participants in the credit risk transfer (CRT) market, managing the massive quantity of data needed to produce clear insights into deal performance can be difficult and demanding on legacy systems. A complete analysis of these deals involves bringing together historical data, predictive models, and deal cash flow logic, often resulting in a complex workflow spread across multiple systems. RiskSpan’s Edge platform (RS Edge) solves these challenges by bringing together all aspects of CRT analysis. RiskSpan is the only vendor to offer everything a CRT analyst needs:

  • Normalized, clean, enhanced data across programs (STACR/CAS/ACIS/CIRT),
  • Historical Fannie/Freddie performance data normalized to a single standard,
  • Ability to load loan-level files related to private risk transfer deals,
  • An Agency-specific, loan-level credit model,
  • Seamless Intex integration for deal and portfolio analysis,
  • Scalable scenario analysis at the deal or portfolio level, and
  • Vendor and client model integration capabilities.

Deal Comparison Table

All of these features are built into RS Edge, a cloud-native data and analytics platform for loans and securities. The RS Edge user interface is accessible via any web browser, and the processing engine is accessible via an application programming interface (API). Accessing RS Edge via the API exposes the full functionality of the platform, with direct integration into existing workflows in legacy systems such as Excel, Python, and R (a sketch of this API workflow follows the list below). To tailor RS Edge to the specific needs of a CRT investor, RiskSpan is rolling out a series of Excel tools, built using our APIs, which allow for powerful loan-level analysis from the tool everyone knows and loves. Accessing RS Edge via our new Excel templates, users can:

  • Track deal performance,
  • Compare deal profiles,
  • Research historical performance of the full GSE population,
  • Project deal and portfolio performance with our Agency-specific credit model or with user-defined CPR/CDR/severity vectors, and
  • Analyze various macro scenarios across deals or a full portfolio.
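For readers who want a feel for the API-driven workflow described above, the sketch below shows how a quant team might pull deal-level metrics into pandas with a few lines of Python. The base URL, endpoint path, parameter names, and authentication scheme are illustrative placeholders only, not the documented RS Edge API.

```python
# Hypothetical sketch of pulling CRT deal metrics from a REST API into pandas.
# The base URL, endpoint, parameters, and auth header are placeholders for
# illustration -- consult the RS Edge API documentation for the real interface.
import pandas as pd
import requests

API_BASE = "https://api.example.com/edge"   # placeholder base URL
API_KEY = "YOUR_API_KEY"                    # placeholder credential

def get_deal_performance(deal_ids, metric="cpr"):
    """Request a performance metric for a list of CRT deals and return a DataFrame."""
    response = requests.get(
        f"{API_BASE}/crt/performance",
        params={"deals": ",".join(deal_ids), "metric": metric},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    return pd.DataFrame(response.json())

if __name__ == "__main__":
    print(get_deal_performance(["STACR 2018-DNA1", "CAS 2018-C01"]).head())
```

The same call pattern can sit behind an Excel template or an R script; only the client-side wrapper changes.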

Loan Attribute Distributions

The web-based user interface allows for on-demand analytics, giving users specific insights on deals as the needs arise. The Excel template built with our API allows for a targeted view tailored to the specific needs of a CRT investor.

For teams that prefer to focus their time on outcomes rather than the build, RiskSpan’s data team can build custom templates around specific customer processes. RiskSpan offers support from premier data scientists who work with clients to understand their unique concerns and objectives and to integrate our analytics with their legacy system of choice.

Loan Performance History

The images are examples of a RiskSpan template for CRT deal comparison: profile comparison, loan credit score distribution, and delinquency performance for five Agency credit risk transfer deals, pulled via the RiskSpan Data API and rendered in Excel.


Case Study: RiskSpan Edge Platform Agency MBS Module

The Client

Multiple Agency Traders and the Research & Strategy Division of a Major Investment Bank

The Problem

RiskSpan leverages its extensive expertise to help clients rapidly assess the drivers of prepayment risk and prepayment trends. Our analytical platform provides the flexibility and speed to quickly turn security-level data into information on which to base decisions.

The Solution

The RiskSpan Edge Platform is used by the Agency Trading desk to slice and dice data and look for patterns among various bonds using the graphical interface. The RiskSpan Edge Platform offers users access to current and historical data on Ginnie Mae, Fannie Mae, and Freddie Mac (“Agencies”) pass-throughs as well as other data sets.

The Prepayment Analytics tool provides a flexible user interface that supports intuitive analysis of prepayment data and actionable reporting delivered quickly to decision-makers. The database includes all monthly pool-level data published by the Agencies dating back to 1995, including pool factors, geographic concentrations, and supplemental pool-level collateral disclosures.

The Deliverables

RiskSpan provides the tools for comprehensive Agency MBS analysis, including:

  • Visualizing data with integrated graphing and charting
  • Researching new prepayment trends
  • Creating user-defined data tables
  • Exporting customized charts and graphs for marketing purposes

CRT Deal Monitor: Understanding When Credit Becomes Risky

This analysis tracks several metrics related to deal performance and credit profile, putting them into a historical context by comparing the same metrics for recent-vintage deals against those of ‘similar’ cohorts in the time leading up to the 2008 housing crisis. You’ll see how credit metrics are trending today and understand the significance of today’s shifts in the context of historical data. Some of the charts in this post have interactive features, so click around! We’ll be tweaking the analysis and adding new metrics in subsequent months. Please shoot us an email if you have an idea for other metrics you’d like us to track.

Highlights

  • Performance metrics signal steadily increasing credit risk, but no cause for alarm.
    • We’re starting to see the hurricane-related (2017 Harvey and Irma) delinquency spikes subside in the deal data. Investors should expect a similar trend in 2019 due to Hurricane Florence.
    • The overall percentage of delinquent loans is increasing steadily due to the natural age ramp of delinquency rates and the ramp-up of the program over the last 5 years.
    • Overall delinquency levels are still far lower than historical rates.
    • While the share of delinquency is increasing, loans that go delinquent are ending up in default at a lower rate than before.
  • Deal Profiles are becoming riskier as new GSE acquisitions include higher-DTI business.
    • It’s no secret that both GSEs started acquiring a lot of high-DTI loans (for Fannie this moved from around 16% of MBS issuance in Q2 2017 to 30% of issuance as of Q2 this year). We’re starting to see a shift in CRT deal profiles as these loans are making their way into CRT issuance.
    • The credit profile chart toward the end of this post compares the credit profiles of recently issued deals with those of the most recent three months of MBS issuance data to give you a sense of the deal profiles we’re likely to see over the next 3 to 9 months. We also compare these recently issued deals to a similar cohort from 2006 to give some perspective on how much the credit profile has improved since the housing crisis.
    • RiskSpan’s Vintage Quality Index reflects an overall loosening of credit standards, reminiscent of 2003 levels, driven by this increase in high-DTI originations.
  • Fannie and Freddie have fundamental differences in their data disclosures for CAS and STACR.
    • Delinquency rates and loan performance all appear slightly worse for Fannie Mae in both the deal and historical data.
    • Obvious differences in reporting (e.g., STACR reporting a delinquent status in a terminal month) have been corrected in this analysis, but some less obvious differences in reporting between the GSEs may persist.
    • We suspect there is something fundamentally different about how Freddie Mac reports delinquency status—perhaps related to cleaning servicing reporting errors, cleaning hurricane delinquencies, or the way servicing transfers are handled in the data. We are continuing our research on this front and hope to follow up with another post to explain these anomalies.

The exceptionally low rate of delinquency, default, and loss among CRT deals at the moment makes analyzing their credit-risk characteristics relatively boring. Loans in any newly issued deal have already seen between 6 and 12 months of home price growth, and so if the economy remains steady for the first 6 to 12 months after issuance, then that deal is pretty much in the clear from a risk perspective. The danger comes if home prices drift downward right after deal issuance. Our aim with this analysis is to signal when a shift may be occurring in the credit risk inherent in CRT deals. Many data points related to the overall economy and home prices are available to investors seeking to answer this question. This analysis focuses on what the Agency CRT data—both the deal data and the historical performance datasets—can tell us about the health of the housing market and the potential risks associated with the next deals that are issued.

Current Performance and Credit Metrics

Delinquency Trends

The simplest metric we track is the share of loans across all deals that is 60+ days past due (DPD). The charts below compare STACR (Freddie) vs. CAS (Fannie), with separate charts for high-LTV deals (G2 for CAS and HQA for STACR) vs. low-LTV deals (G1 for CAS and DNA for STACR). Both time series show a steadily increasing share of delinquent loans. This slight upward trend is related to the natural aging curve of delinquency and the ramp-up of the CRT program. Both time series show a significant spike in delinquency around January of this year due to the 2017 hurricane season. Most of these delinquent loans are expected to eventually cure or prepay.

For comparative purposes, we include a historical time series of the share of loans 60+ DPD for each LTV group. These charts are derived from the Fannie Mae and Freddie Mac loan-level performance datasets. Comparatively, today’s deal performance is much better than even the pre-2006 era. You’ll note the systematically higher delinquency rates of CAS deals. We suspect this is due to reporting differences rather than actual differences in deal performance. We’ll continue to investigate and report back on our findings.
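For those who want to reproduce this metric on their own extracts, the snippet below is a minimal sketch of the 60+ DPD share calculation. It assumes a loan-level performance DataFrame with hypothetical columns 'activity_month', 'current_upb', and 'days_past_due'; the actual disclosure files report delinquency as a status code rather than a day count, so treat this as an outline of the logic.

```python
# Minimal sketch of the 60+ DPD share shown in the charts above.
# Column names are assumptions for illustration, not the GSE disclosure layout.
import pandas as pd

def share_60_plus_dpd(perf: pd.DataFrame) -> pd.Series:
    """UPB-weighted share of loans 60+ days past due, by activity month."""
    perf = perf.copy()
    perf["dpd60_upb"] = perf["current_upb"].where(perf["days_past_due"] >= 60, 0.0)
    monthly = perf.groupby("activity_month")[["dpd60_upb", "current_upb"]].sum()
    return monthly["dpd60_upb"] / monthly["current_upb"]
```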

Delinquency Outcome Monitoring

While delinquency rates might be trending up, loans that roll to 60 DPD are ultimately defaulting at lower and lower rates. The charts below track the status of loans that were 60+ DPD. Each bar in the chart represents the population of loans that were 60+ DPD exactly 6 months prior to the x-axis date. Over time, we see growing 60-DPD and 60+ DPD groups, and a shrinking Default group. This indicates that a majority of delinquent loans wind up curing or prepaying, rather than proceeding to default. The choppiness and high default rates in the first few observations of the data are related to the very low counts of delinquent loans as the CRT program ramped up.

The following chart repeats the 60-DPD delinquency analysis for the Freddie Mac loan-level performance dataset leading up to and following the housing crisis. (The Fannie Mae loan-level performance dataset yields a nearly identical chart.) Note how many more loans in these cohorts remained delinquent (rather than curing or defaulting) relative to the more recent CRT loans.
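A rough sketch of this outcome analysis is below. It assumes a loan-level performance DataFrame with hypothetical columns 'loan_id', 'activity_month' (a monthly pandas Period), 'days_past_due', and 'status'; the real CRT disclosure layout differs, so this is an outline of the logic rather than a drop-in script.

```python
# Sketch: for loans that were 60+ DPD in month t, tabulate their status in month t+6.
# Column names and status values are illustrative assumptions.
import pandas as pd

def outcomes_after_six_months(perf: pd.DataFrame) -> pd.DataFrame:
    delinquent = perf.loc[perf["days_past_due"] >= 60, ["loan_id", "activity_month"]]
    delinquent = delinquent.assign(obs_month=delinquent["activity_month"] + 6)
    later = perf[["loan_id", "activity_month", "status"]].rename(
        columns={"activity_month": "obs_month"}
    )
    merged = delinquent.merge(later, on=["loan_id", "obs_month"], how="left")
    # Loans with no record six months later have left the pool (prepaid or liquidated).
    merged["outcome"] = merged["status"].fillna("removed from pool")
    return merged.groupby(["obs_month", "outcome"]).size().unstack(fill_value=0)
```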

Vintage Quality Index

RiskSpan’s Vintage Quality Index (VQI) reflects a reversion to the looser underwriting standards of the early 2000s as a result of the GSEs’ expansion of high-DTI lending. RiskSpan introduced the VQI in 2015 as a way of quantifying the underwriting environment of a particular vintage of mortgage originations. We use the metric as an empirically grounded way to control for vintage differences within our credit model.

VQI History

While both GSEs increased high-DTI lending in 2017, it’s worth noting that Fannie Mae saw a relatively larger surge in loans with DTIs greater than 43%. The chart below shows the share of loans backing MBS with DTI > 43. We use the loan-level MBS issuance data to track what’s being originated and acquired by the GSEs because it is the timeliest data source available. CRT deals are issued with loans that are between 6 and 20 months seasoned, and so tracking MBS issuance provides a preview of what will end up in the next cohort of deals.

High DTI Share

Deal Profile Comparison

The tables below compare the credit profiles of recently issued deals. We focus on the key drivers of credit risk, highlighting the comparatively riskier features of a deal. Each table separates the high-LTV (80%+) deals from the low-LTV deals (60%-80%). We add two additional columns for comparison purposes. The first is the ‘Coming Cohort,’ which is meant to give an indication of what upcoming deal profiles will look like. The data in this column is derived from the most recent three months of MBS issuance loan-level data, controlling for the LTV group. These are newly originated and acquired by the GSEs—considering that CRT deals are generally issued with an average loan age between 6 and 15 months, these are the loans that will most likely wind up in future CRT transactions. The second comparison cohort consists of 2006 originations in the historical performance datasets (Fannie and Freddie combined), controlling for the LTV group. We supply this comparison as context for the level of risk that was associated with one of the worst-performing cohorts.

The latest CAS deals—both high- and low-LTV—show the impact of increased >43% DTI loan acquisitions. Until recently, STACR deals typically had a higher share of high-DTI loans, but the latest CAS deals have surpassed STACR in this measure, with nearly 30% of their loans having DTI ratios in excess of 43%. CAS high-LTV deals carry more risk in LTV metrics, such as the percentage of loans with a CLTV > 90 or CLTV > 95. However, STACR includes a greater share of loans with a less-than-standard level of mortgage insurance, which would provide less loss protection to investors in the event of a default.

Credit Profile

Low-LTV deals generally appear more evenly matched in terms of risk factors when comparing STACR and CAS. STACR does display the same DTI imbalance as seen in the high-LTV deals, but that may change as the high-DTI group makes its way into deals.

Low-LTV Deal Credit Profile (Most Recent Deals)
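The sketch below shows one way to build this kind of profile comparison table with pandas, assuming one loan-level DataFrame per deal or cohort with hypothetical columns 'orig_upb', 'dti', and 'cltv'. The cut-offs mirror the features discussed above; it is an illustration, not the exact logic behind our published tables.

```python
# Sketch of a deal credit-profile comparison table.
# Column names and the choice of risk cut-offs are illustrative assumptions.
import pandas as pd

def profile(loans: pd.DataFrame) -> pd.Series:
    """UPB-weighted share of loans with each risk feature, for one deal or cohort."""
    w = loans["orig_upb"] / loans["orig_upb"].sum()
    return pd.Series({
        "DTI > 43":  (w * (loans["dti"] > 43)).sum(),
        "CLTV > 90": (w * (loans["cltv"] > 90)).sum(),
        "CLTV > 95": (w * (loans["cltv"] > 95)).sum(),
    })

def compare(cohorts: dict) -> pd.DataFrame:
    """One column per deal or comparison cohort, one row per risk feature."""
    return pd.DataFrame({name: profile(df) for name, df in cohorts.items()})
```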

Deal Tracking Reports

Please note that defaults are reported on a delay for both GSEs, so while CPR numbers are available for August, CDR numbers are not provided because they are not yet fully populated. Fannie Mae CAS default data is delayed an additional month relative to STACR. We’ve left loss and severity metrics blank for fixed-loss deals.

STACR Deals over the Past 3 Months

CAS Deals from the Past 3 Months



RiskSpan Adds Home Equity Conversion Mortgage Data to Edge Platform

ARLINGTON, VA, September 12, 2018 — Leading mortgage data analytics provider RiskSpan added Home Equity Conversion Mortgage (HECM) data to the library of datasets available through its RS Edge Platform. The dataset includes over half a billion records from Ginnie Mae that will expand the RS Edge Platform’s critical applications in reverse-mortgage analysis. RS Edge is a SaaS platform that integrates normalized data, predictive models, and complex scenario analytics for customers in the capital markets, commercial banking, and insurance industries. The Edge Platform solves the hardest data management and analytical problem: affordable, off-the-shelf integration of clean data and reliable models.

The HECM dataset is the latest in a series of recent additions to the RS Edge data libraries. The platform now holds over five billion records across decades of collection and is the solution of choice for whole loan and securities analytics. “RiskSpan’s data strategy is simple: provide our customers with normalized, tested, analysis-ready data that their enterprise modeling and analytics teams can leverage for faster, more reliable insight. We do the grunt work so that you don’t have to,” said Patrick Doherty, RiskSpan’s Chief Operating Officer.

The HECM dataset has been subjected to RiskSpan’s comprehensive data normalization process for simpler analysis in RS Edge. Edge users will be able to drill down to snapshot and historical data available through the UI. Users will also be able to benchmark the HECM data against their own portfolio and leverage it to develop and deploy more sophisticated credit models. RiskSpan’s Edge API also makes it easier than ever to access large datasets for analytics, model development, and benchmarking. Major quant teams that prefer APIs now have access to normalized and validated data to run scenario analytics, stress testing, or shock analysis. RiskSpan makes data available through its proprietary instance of RStudio and Python.



What is an “S-Curve” and Does it Matter if it Varies by Servicer?

Mortgage analysts refer to graphs plotting prepayment rates against the interest rate incentive for refinancing as “S-curves” because the resulting curve typically (vaguely) resembles an “S.” The curve takes this shape because prepayment rates vary positively with refinance incentive, but not linearly. Very few borrowers refinance without an interest rate incentive for doing so. Consequently, on the left-hand side of the graph, where the refinance incentive is negative or out of the money, prepayment speeds are both low and fairly flat. This is because a borrower with a rate 1.0% lower than market rates is not very much more likely to refinance than a borrower with a rate 1.5% lower. They are both roughly equally unlikely to do so.

As the refinance incentive crosses over into the money (i.e., when prevailing interest rates fall below rates the borrowers are currently paying), the prepayment rate spikes upward, as a significant number of borrowers take advantage of the opportunity to refinance. But this spike is short-lived. Once the refinance incentive gets above 1.0% or so, prepayment rates begin to flatten out again. This reflects a segment of borrowers that do not refinance even when they have an interest rate incentive to do so. Some of these borrowers have credit or other issues preventing them from refinancing. Others are simply disinclined to go through the trouble. In either case, the growing refinance incentive has little impact and the prepayment rate flattens out.

These two bends—moving from non-incentivized borrowers to incentivized borrowers and then from incentivized borrowers to borrowers who can’t or choose not to refinance—are what gives the S-curve its distinctive shape.

Figure 1: S-Curve Example

An S-Curve Example – Servicer Effects

Interestingly, the shape of a deal’s S-curve tends to vary depending on who is servicing the deal. Many things contribute to this difference, including how actively servicers market refinance opportunities. How important is it to be able to evaluate and analyze the S-curves for the servicers specific to a given deal? It depends, but it could be imperative.

In this example, we’ll analyze a subset of the collateral (“Group 4”) supporting a recently issued Fannie Mae deal, FNR 2017-11. This collateral consists of four Fannie multi-issuer pools of recently originated jumbo-conforming loans with a current weighted average coupon (WAC) of 3.575% and a weighted average maturity (WAM) of 348 months. The table below shows the breakout of the top six servicers in these four pools based on the combined balance.

Figure 2: Breakout of Top Six Servicers

Over half (54%) of the Group 4 collateral is serviced by these six servicers. To begin the analysis, we pulled all jumbo-conforming, 30-year loans originated between 2015 and 2017 for the six servicers and bucketed them based on their refi incentive. A longer timeframe is used to ensure that there are sufficient observations at each point. The graph below shows the prepayment rate relative to the refi incentive for each of the servicers as well as the universe.

Figure 3: S-curve by Servicer
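The sketch below illustrates the bucketing step behind a chart like Figure 3: group loans by refinance incentive and compute an annualized CPR for each bucket. The column names ('note_rate', 'market_rate', 'beg_balance', 'prepaid_balance') are assumptions for illustration, and the incentive here is simply the note rate minus the prevailing market rate.

```python
# Sketch of building an S-curve: CPR by refinance-incentive bucket.
# Column names are illustrative assumptions, not the RS Edge schema.
import pandas as pd

def s_curve(loans: pd.DataFrame, step: float = 0.25) -> pd.Series:
    loans = loans.copy()
    loans["incentive"] = loans["note_rate"] - loans["market_rate"]      # refi incentive in pct points
    loans["bucket"] = (loans["incentive"] / step).round() * step        # snap to 25 bp buckets
    grouped = loans.groupby("bucket")[["prepaid_balance", "beg_balance"]].sum()
    smm = grouped["prepaid_balance"] / grouped["beg_balance"]           # single-month mortality
    return 1 - (1 - smm) ** 12                                          # annualize SMM to CPR
```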

For loans that are at the money—i.e., the point at which the S-curve would be expected to begin spiking upward—only those serviced by IMPAC prepay materially faster than the entire cohort. However, as the refi incentive increases, IMPAC, Seneca Mortgage, and New American Funding all experience a sharp pick-up in speeds, while loans serviced by Pingora, Lakeview, and Wells behave comparably to the market.

The last step is to compute the weighted average S-curve for the top six servicers using the current UPB percentages as the weights, shown in Figure 4 below. On the basis of the individual servicer observations, prepays for out-of-the-money loans should mirror the universe, but as loans become more re-financeable, speeds should accelerate faster than the universe. The difference between the six-servicer average and the universe reaches a peak of approximately 4% CPR between 50 bps and 100 bps in the money. This is valuable information for framing expectations for future prepayment rates. Analysts can calibrate prepayment models (or their outputs) to account for observed differences in CPRs that may be attributable to the servicer, rather than loan characteristics.

Figure 4: Weighted Average vs. Universe
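A minimal sketch of the weighted-average calculation described above follows: each servicer's S-curve (a Series indexed by incentive bucket) is weighted by its share of current UPB. The names and structures are assumptions for illustration.

```python
# Sketch: combine per-servicer S-curves into one curve using UPB shares as weights.
import pandas as pd

def weighted_average_s_curve(curves: dict, upb_shares: dict) -> pd.Series:
    table = pd.DataFrame(curves)                      # one column per servicer, indexed by incentive bucket
    weights = pd.Series(upb_shares).reindex(table.columns)
    weights = weights / weights.sum()                 # normalize in case shares don't sum to one
    return table.mul(weights, axis=1).sum(axis=1)     # weighted CPR at each incentive bucket
```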

This analysis was generated using RiskSpan’s data and analytics platform, RS Edge.


Open Source Governance: Three Potential Risks

For many companies, the question is no longer whether to use open-source tools, but rather how to implement them with the appropriate governance and controls. Have security concerns been accounted for?  How does one effectively institute controls over bad code?  Are there legal implications for using open-source software?

Open Source Security Risks

Open-source software is not inherently more or less prone to malicious code injections than proprietary software. It is true that anyone can push a code enhancement for a new version, and it may be possible for senior contributors to miss intentional malware. In these circumstances, however, open source has an advantage over proprietary software, one that Eric S. Raymond articulated in 1999 as Linus’s Law: “Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.” It is unlikely that a deliberate security error will go unnoticed by the many pairs of eyes on each release. Security issues nevertheless persist.

Open-Source Security – An Example

Debian, a Unix-like computer operating system, was one of the first to be based on the Linux kernel. Like many systems, it utilizes OpenSSL, a software library that provides an open-source implementation of the Secure Sockets Layer (SSL) protocol, commonly used by applications that require secure communications over a network.

In 2006, a snippet of code was removed from Debian’s OpenSSL package after one of the contributors found that it caused runtime warnings generated by other packages. After the removal, the pseudorandom number generator (PRNG) generated SSL keys using only the process ID (in Linux, a number up to 32,768) to the exclusion of all other random data. Since a relatively small number of values was used, the keys created over a period of almost two years were too predictable to be used securely. Users became aware of the issue 20 months after the bug was introduced, leading to costly security resolutions for companies and individuals who relied on Debian’s OpenSSL implementation.[1]

OpenSSL was again the subject of negative attention when a bug dubbed ‘Heartbleed’ was introduced to the code in 2012 and disclosed to the public in 2014. A fixed version of OpenSSL was released on the same day the issue was announced. More than a month after the release, however, 1.5% of the 800,000 most popular affected websites were still vulnerable to the security bug. [2]

The good news is that such vulnerabilities are documented in the Common Vulnerabilities and Exposures (CVE) system, and they are not so common. For Python 2.7, the popular version released in 2010, 15 vulnerabilities were recorded from 2010 to 2016, only one of which is considered ‘High’ severity, with a CVSS score of 7.5.  jQuery, a JavaScript library that simplifies some components of web application development and the most common open-source component identified in the latest Open Source Security and Risk Analysis (OSSRA) report, only has four known vulnerabilities from 2007 to 2017, none of which rank higher than a ‘Medium’. The CVE is just one tool available for improving the security profile of software applications, but technologists must remain vigilant and abreast of known issues. Corporate IT governance frameworks should be continuously updated to keep up with the changing structure of the underlying technology itself.

Bad Code

Serious security vulnerabilities may not be a daily occurrence, but bad code can affect software at any time. pandas, a popular open-source software library used in Python implementations for data manipulation and analysis, was first released in 2009. Since then, its contributors have identified over 10,000 issues, 1,933 of which are currently considered unresolved.[3] A company that relies on accurate output from a codebase that uses pandas needs to be vigilant not only in testing the code written by its in-house developers, but also in verifying that all outstanding known pandas issues are covered by workarounds and the rest of the functionality is sound. Developers and testers who are not intimately familiar with the pandas source code must devise creative testing tools to ensure complete integrity of applications that rely on it.

Bad Code – An Example

The Comma-Separated Values (CSV) file is one of many data formats that can be loaded for data manipulation and analysis by pandas, in this case using the built-in read_csv function. read_csv accepts a number of optional parameters intended to simplify the data import, one of which is parse_dates, which, as the name implies, tells pandas to automatically parse dates in the data, using a recognition algorithm to determine the format in each date-populated column.
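A minimal example of that usage is below; the file name and column names are placeholders.

```python
import pandas as pd

# Placeholder file and column names; parse_dates asks pandas to infer the date format.
loans = pd.read_csv(
    "loans.csv",
    parse_dates=["first_payment_date", "maturity_date"],
)
print(loans.dtypes)  # parsed columns should show as datetime64[ns]
```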

However, if a row of data contains a blank value where a date is expected, pandas may populate that field with today’s date — a bug first reported against version 0.9 in 2012 [4] (closed three days after it was opened) and again in 2014.[5] The issue was not closed until the end of 2016, when one of the contributors noted that the tests passed for version 0.19, stating that he was “not sure when this was fixed, but it doesn’t seem like it occurred recently.” [6]

In the meantime, pandas versions prior to 0.19 may have produced incorrect date-related calculations if blank fields were fed to the system. For example, a mortgage-backed security may have had an incorrectly calculated weighted average loan age if some of its loans had blank first payment dates, causing those rows to be assigned a loan age of zero.
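One defensive pattern, sketched below under assumed column names ('first_payment_date', 'current_upb'), is to check for first payment dates equal to today's date before computing a weighted average loan age, since a silently filled blank would surface in exactly that way.

```python
# Sketch of a defensive WALA calculation guarding against silently filled dates.
# Column names are illustrative assumptions.
import pandas as pd

def weighted_average_loan_age(loans: pd.DataFrame, as_of: pd.Timestamp) -> float:
    valid = loans.dropna(subset=["first_payment_date"])
    fpd = valid["first_payment_date"]
    # A first payment date equal to today is almost certainly a blank that was
    # auto-filled during import, not a real loan attribute.
    if (fpd.dt.normalize() == pd.Timestamp.today().normalize()).any():
        raise ValueError("First payment dates equal to today; check for blank source fields.")
    age_months = (as_of.year - fpd.dt.year) * 12 + (as_of.month - fpd.dt.month)
    weights = valid["current_upb"] / valid["current_upb"].sum()
    return float((age_months * weights).sum())
```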

In addition to implementing security testing, IT controls must include a clear framework for testing both in-house and open-source components of all applications, especially high-impact programs.

Open-Source Licensing

Finally, it is important to be aware of open-source licensing constraints and to maintain active licensing governance activities to avoid legal issues in the future. Similar to the concept of copyright, some open-source creators have adopted the concept of ‘copyleft’ to ensure that “anyone who redistributes the software, with or without changes, must pass along the freedom to further copy and change it.” [7] This means that, legally, for any software that contains a copylefted open-source component, whether it comprises 99% or 0.1% of the application code, the entire source code must be distributed with the software or be made available upon request. This is not an issue when the software is distributed internally among corporate users, but it can become more problematic when the company intends to sell or otherwise provide the software without revealing the internally developed codebase. Not all open-source software is copylefted – in fact, many popular licenses are highly permissive with very few restrictions. Below is a summary of the four most popular open-source licenses. [8]

Of the four, only the GNU General Public License (GPL, all versions) requires the creators to disclose the source code.  Between 20% and 25% of all open-source software is covered by the GNU GPL.

OSSRA found that 75% of applications contained at least some components under the GPL family of licenses, and that only 45% of those applications complied with the GPL copyleft obligations. In the Financial Services and FinTech industries, 89% of applications contained at least one licensing conflict.

Most open-source software, even that which is licensed under the GNU GPL, can be used commercially. For example, a company can use and internally distribute a financial model written in R, an open-source programming language licensed under the GNU GPL 2.0. However, important legal consequences must be considered if the developed code will be later distributed outside of the company as a proprietary application. If the organization were to sell the R-based model, the entire source code would have to be made available to the paying user, who would also be free to distribute the code, for free or at a price. Alternatively, a model implemented in Python, which is licensed under a Berkeley Software Distribution (BSD)-like agreement, could be distributed without exposing the source code.

Open-Source Governance and Controls

Governance risks are specific to how open-source tools are integrated into existing operations. These risks can stem from a lack of formal training, lack of service and support, violations of third-party intellectual property rights, or instability and incompatibility with existing operating environments. Successful users of open-source code and tools devise effective means of identifying and measuring these risks. They ensure that these risks are included in process risk assessments to facilitate identification and mitigation of potential control weaknesses.

Security vulnerabilities, code issues, and software licensing should not deter developers from using the plethora of useful open-source tools. Open-source issues and bugs are viewed and tested by thousands of capable developers, increasing the likelihood of a speedy resolution. In addition, a company’s own development team has full access to the source code, making it possible to fix issues without relying on anyone else. As with any application, effective governance and controls are essential to a successful open-source application. These ensure that software is used securely and appropriately and that a comprehensive testing framework is applied to minimize inaccuracies. The world of open source is changing constantly; we all just need to keep up.
