Articles Tagged with: Credit Analytics

Houston Strong: Communities Recover from Hurricanes. Do Mortgages?

The 2017 hurricane season devastated individual lives, communities, and entire regions. As one would expect, dramatic increases in mortgage delinquencies accompanied these events. But the subsequent recoveries are a testament both to the resilience of the people living in these areas and to relief mechanisms put into place by the mortgage holders.

Now, nearly a year later, we wanted to see what the credit-risk transfer data (as reported by Fannie Mae CAS and Freddie Mac STACR) could tell us about how these borrowers’ mortgage payments are coming along.

The timing of the hurricanes’ impact on mortgage payments can be approximated by identifying when Current-to-30 days past due (DPD) roll rates began to spike. Barring other major macroeconomic events, we can reasonably assume that most of this increase is directly due to hurricane-related complications for the borrowers.
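
As a rough illustration of the mechanics, Current-to-30 DPD roll rates can be tabulated from loan-level performance data along the following lines (a minimal pandas sketch; the file and column names are hypothetical, not actual CAS or STACR field names):

import pandas as pd

# One row per loan per month, with hypothetical columns:
#   loan_id, report_month, dq_status ('C', '30', '60', ...)
perf = pd.read_csv("performance.csv", parse_dates=["report_month"])
perf = perf.sort_values(["loan_id", "report_month"])

# Delinquency status in the prior month for each loan
perf["prev_status"] = perf.groupby("loan_id")["dq_status"].shift(1)

# Current-to-30 roll rate by month: share of previously current loans now 30 DPD
was_current = perf["prev_status"] == "C"
rolled_to_30 = was_current & (perf["dq_status"] == "30")
roll_rate = (rolled_to_30.groupby(perf["report_month"]).sum()
             / was_current.groupby(perf["report_month"]).sum())
print(roll_rate)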

[Chart: Houston Strong analysis by Edge]

The effect of the hurricanes is clear—Puerto Rico, the U.S. Virgin Islands, and Houston all experienced delinquency spikes in September. Puerto Rico and the Virgin Islands then experienced a second wave of delinquencies in October due to Hurricanes Irma and Maria.

But what has been happening to these loans since entering delinquency? Have they been getting further delinquent and eventually defaulting, or are they curing? We focus our attention on loans in Houston (specifically the Houston-The Woodlands-Sugar Land Metropolitan Statistical Area) and Puerto Rico because of the large number of observable mortgages in those areas.

First, we look at Houston. Because the 30-DPD peak was in September, we track that bucket of loans. To help us understand the path these 30-DPD loans might reasonably be expected to take, we compare the Houston delinquencies to 30-DPD loans in the 48 states other than Texas and Florida.

[Charts: Houston Strong analysis by Edge]

Of this group of loans in Houston that were 30 DPD in September, we see that while many go on to be 60+ DPD in October, over time this cohort is decreasing in size.

Recovery is slower than for the non-hurricane-affected U.S. loans, but persistent. The biggest difference is that a significant share of 30-day delinquencies in the rest of the country continue to hover at 30 DPD (rather than curing or progressing to 60 DPD), while the Houston cohort is more evenly split between the growing number of loans that cure and the shrinking number of loans progressing to 60+ DPD.

Puerto Rico (which experienced its 30 DPD peak in October) shows a similar trend:

[Charts: Houston Strong analysis by Edge]

To examine loans even more affected by the hurricanes, we can perform the same analysis on loans that reached 60 DPD status.

[Chart: Houston Strong analysis by Edge]

Here, Houston’s peak is in October while Puerto Rico’s is in November.

Houston vs. the non-hurricane-affected U.S.:

[Charts: Houston Strong analysis by Edge]

Puerto Rico vs. the non-hurricane-affected U.S.:

[Charts: Houston Strong analysis by Edge]

In both Houston and Puerto Rico, we see a relatively small 30-DPD cohort across all months and a growing Current cohort. This indicates that many borrowers are paying their way back to Current from 60+ DPD status. Compare this to the rest of the U.S., where more borrowers pay just enough to reach 30 DPD but not enough to become Current.

The lack of defaults in post-hurricane Houston and Puerto Rico can be explained by several relief mechanisms Fannie Mae and Freddie Mac have in place. Chiefly, disaster forbearance gives borrowers some breathing room with regard to payment. The difference is even more striking among loans that were 90 days delinquent, where eventual default is not uncommon in the non-hurricane-affected U.S. grouping:

[Charts: Houston Strong analysis by Edge]

And so, both 30-DPD and 60-DPD loans in Houston and Puerto Rico proceed to more serious levels of delinquency at a much lower rate than similarly delinquent loans in the rest of the U.S. To see if this is typical for areas affected by hurricanes of a similar scale, we looked at Fannie Mae loan-level performance data for the New Orleans MSA after Hurricane Katrina in August 2005.

As the following chart illustrates, current-to-30 DPD roll rates peaked in New Orleans in the month following the hurricane:

[Chart: Houston Strong analysis by Edge]

What happened to these loans?

[Chart: Houston Strong analysis by Edge]

Here we see a relatively speedy recovery, with large decreases in the number of 60+ DPD loans and a sharp increase in prepayments. Compare this to non-hurricane affected states over the same period, where the number of 60+ DPD loans held relatively constant, and the number of prepayments grew at a noticeably slower rate than in New Orleans.

[Chart: Houston Strong analysis by Edge]

The remarkable number of prepayments in New Orleans was largely due to flood insurance payouts, which effectively prepay delinquent loans. Government assistance lifted many others back to current. As of March, we do not see this behavior in Houston and Puerto Rico, where recovery is moving much more slowly. Flood insurance incidence rates are known to have been low in both areas, which likely explains much of the discrepancy.

While loans are clearly moving out of delinquency in these areas, it is at a much slower rate than the historical precedent of Hurricane Katrina. In the coming months we can expect securitized mortgages in Houston and Puerto Rico to continue to improve, but getting back to normal will likely take longer than what was observed in New Orleans following Katrina. Of course, the impending 2018 hurricane season may complicate this matter.

—————————————————————————————————————-

Note: The analysis in this blog post was developed using RiskSpan’s Edge Platform. The RiskSpan Edge Platform is a module-based data management, modeling, and predictive analytics software platform for loans and fixed-income securities. Click here to learn more.

 


Augmenting Internal Loan Data to Comply with CECL and Boost Profit

The importance of sound internal data gathering practices cannot be overstated. However, in light of the new CECL standard, many lending institutions have found themselves unable to meet its data requirements. This may serve as a wake-up call for organizations at all levels to examine their internal data warehousing systems and to identify and remedy the gaps in their strategies. For some institutions, it may be difficult to consolidate data siloed within various stand-alone systems. Other institutions, even after consolidating all available data, may lack sufficient loan count, timespan, or data elements to meet the CECL standard with internal data alone. This post discusses strategies for making up these shortfalls while data gathering systems and procedures are built and implemented for the future.

Identify Your Data

The first step is to identify the data that is available. As with many tasks, this is easier said than done. Often, organizations without formal data gathering practices and without a centralized data warehouse find themselves looking at multiple data storage systems across various departments and a multitude of ad-hoc processes implemented in times of need and never upgraded to a standardized solution. However, it is important to begin this process now, if it is not already underway. As part of the data identification phase, it is important to keep track of not only the available variables, but also the length of time for which the data exists, and whether any time periods have missing or unreliable information. In most circumstances, to meet the CECL standard, institutions should have loan performance data that will cover a full economic cycle by the time of CECL adoption. Such data enables an institution to form grounded expectations of how assets will perform over their full contractual lives, across a variety of potential economic climates. Some data points are required regardless of the CECL methodology, while others are necessary only for certain approaches. At this stage of the data preparation process, it is more important to understand the big picture than to confirm only some of the required fields—it is wise to see what information is available, even if it may not appear relevant at this time. This will prove very useful for drafting the data warehousing procedures, and will allow for a more transparent understanding of requirements should the bank decide to use a different methodology in the future.

Choose Your CECL Methodology

There are many intricacies involved in choosing a CECL Methodology. Each organization should determine both its capabilities and its needs. For example, the Vintage method has relatively simple calculations and limited data requirements, but provides little insight and control for management, and does not yield early model performance indicators. On the other hand, the Discounted Cash Flow method provides many insights and controls, and identifies model performance indicators preemptively, but requires more complex calculations and a very robust data history. It is acceptable to implement a relatively simple methodology at first and begin utilizing more advanced methodologies in the future. Banks with limited historical data, but with procedures in place to ramp up data gathering and data warehousing capabilities, would be well served to implement a method for which all data needs are met. They can then work toward the goal of implementing a more robust methodology once enough historical data is available. However, if insufficient data exists to effectively implement a satisfactory methodology, it may be necessary to augment existing historical data with proxy data as a bridge solution while your data collections mature.  

Augment Your Internal Data

Choose Proxy Data

Search for cost-effective datasets that give historical loan performance information about portfolios that are reasonably similar to your go-forward portfolio. Note that proxy portfolios do not need to perfectly resemble your portfolio, so long as either a) the data provider offers filtering capability that enables you to find the subset of proxy loans that matches your portfolio’s characteristics, or b) you employ segment- or loan-level modeling techniques that apply the observations from the proxy dataset in the proportions that are relevant to your portfolio. RiskSpan’s Edge platform contains a Data Library that offers historical loan performance datasets from a variety of industry sources covering multiple asset classes:

  • For commercial real estate (CRE) portfolios, we host loan-level data on all CRE loans guaranteed by the Small Business Administration (SBA) dating back to 1990. Data on loans underlying CMBS securitizations dating back to 1998, compiled by Trepp, is also available on the RiskSpan platform.
  • For commercial and industrial (C&I) portfolios, we also host loan-level data on all C&I loans guaranteed by the SBA dating back to 1990.
  • For residential mortgage loan portfolios, we offer large agency datasets (excellent, low-cost options for portfolios that share many characteristics with GSE portfolios) and non-agency datasets (for portfolios with unique characteristics or risks).
  • By Q3 2018, we will also offer data for auto loan portfolios and reverse mortgage portfolios (Home Equity Conversion Mortgages).

Note that for audit purposes, limitations of proxy data and consequent assumptions for a given portfolio need to be clearly outlined, and all severe limitations addressed. In some cases, multiple proxy datasets may be required. At this stage, it is important to ensure that the proxy data contains all the data required by the chosen CECL methodology. If such proxy data is not available, a different CECL model may be best.  

Prepare Your Data

The next step is to prepare internal data for augmentation. This includes standard data-keeping practices, such as accurate and consistent data headers, unique keys such as loan numbers and reporting dates, and confirmation that no duplicates exist. Depending on the quality of internal data, additional analysis may also be required. For example, all data fields need to be displayed in a consistent format according to the data type, and invalid data points, such as FICO scores outside the acceptable range, need to be cleansed. If the data is assembled manually, it is prudent to automate the process to minimize the possibility of user error. If automation is not possible, it is important to implement data quality controls that verify that the dataset is generated according to the metadata rules. This stage provides the final opportunity to identify any data quality issues that may have been missed. For example, if, after cleansing the data for invalid FICO scores, it appears that the dataset has many invalid entries, further analysis may be required, especially if borrower credit score is one of the risk metrics used for CECL modeling. Once internal data preparation is complete, proxy metadata may need to be modified to be consistent with internal standards. This includes data labels and field formats, as well as data quality checks to ensure that consistent criteria are used across all datasets.  
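
A few of these checks might look like the following (a sketch in pandas; the file and field names are illustrative, not a prescribed schema):

import pandas as pd

loans = pd.read_csv("internal_loan_history.csv", parse_dates=["reporting_date"])

# Unique key check: one record per loan number per reporting date
dupes = loans.duplicated(subset=["loan_number", "reporting_date"]).sum()
assert dupes == 0, f"{dupes} duplicate loan/date records found"

# Cleanse invalid data points, e.g. FICO scores outside the 300-850 range
invalid_fico = ~loans["fico"].between(300, 850)
print(f"{invalid_fico.sum()} records with invalid FICO scores")
loans.loc[invalid_fico, "fico"] = pd.NA

# Enforce consistent formats by data type, e.g. balances stored as numeric
loans["current_balance"] = pd.to_numeric(loans["current_balance"], errors="coerce")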

Identify Your Augmentation Strategy

Once the internal data is ready and its limitations identified, analysts need to confirm that the proxy data addresses these gaps. Note that it is assumed at this stage that the proxy data chosen contains information for loans that are consistent with the internal portfolio, and that all proxy metadata items are consistent with internal metadata. For example, if internal data is robust but has a short history, proxy data needs to cover the additional time periods for the life of the asset. In such cases, augmenting internal data is relatively simple: the datasets are joined and tested to ensure that the join was successful. Testing should also cover the known limitations of the proxy data, such as missing non-required fields or other data quality issues deemed acceptable during the research and analysis phase. More often, however, a combination of data shortfalls leads to proxy data needs, which can include time-related gaps, data-element gaps, or both. In such cases, the augmentation strategy is more complex. In the case of optional data elements, a decision to exclude certain data columns is acceptable. However, when incorporating required elements that are inputs for the allowance calculation, the data must be used in a way that complies with regulatory requirements. If internal data has incomplete information for a given variable, statistical methods and machine learning tools are useful for incorporating the proxy data with the internal data and approximating the missing variable fields. Statistical testing is then used to verify that the relationships between actual and approximated figures are consistent with expectations, and the results are reviewed by management or expert analysis. External research on economic or agency data, where applicable, can further be used to justify the estimated data assumptions. While rigorous statistical analysis is integral for the most accurate metrics, the qualitative analysis that follows is imperative for CECL model documentation and review.
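
A simplified sketch of both kinds of augmentation, appending proxy history to a short internal history and approximating a missing field with a simple model (all file names, fields, and model choices here are illustrative):

import pandas as pd
from sklearn.linear_model import LinearRegression

internal = pd.read_csv("internal_history.csv", parse_dates=["reporting_date"])
proxy = pd.read_csv("proxy_history.csv", parse_dates=["reporting_date"])

# Time-related gap: use proxy history for the periods internal data does not cover
cutoff = internal["reporting_date"].min()
combined = pd.concat([proxy[proxy["reporting_date"] < cutoff], internal],
                     ignore_index=True)

# Data-element gap: approximate a field that is incomplete in the internal data
# (here, original LTV) from fields present in both datasets, fit on the proxy data
features = ["fico", "dti", "loan_age"]
imputer = LinearRegression().fit(proxy[features], proxy["original_ltv"])
missing = combined["original_ltv"].isna()
combined.loc[missing, "original_ltv"] = imputer.predict(combined.loc[missing, features])

# The approximated values should then be tested against expectations and reviewed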

Justify Your Proxy Data

Overlaps in time periods between internal loan performance datasets and proxy loan performance datasets are critical in establishing the applicability of the proxy dataset. A variety of similarity metrics can be calculated that compare the performance of the proxy loans with the internal loans during the period of overlap. Such similarity metrics can be put forward to justify the use of the proxy dataset. The proxy dataset can be useful for predictions even if the performance of the proxy loans is not identical to the performance of the institution’s loans. As long as there is a reliable pattern linking the performance of the two datasets, and no reason to think that pattern will discontinue, a risk-adjusting calibration can be justified and applied to the proxy data, or to the results of models built thereon.
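
For example, a simple similarity check and calibration over the overlap window might look like this (a sketch; the metric and calibration choice will vary by institution, and the file names are illustrative):

import pandas as pd

# Yearly default rates over the overlap window, computed separately from the
# internal and proxy loan-level data (columns: year, default_rate)
internal = pd.read_csv("internal_default_rates_by_year.csv")
proxy = pd.read_csv("proxy_default_rates_by_year.csv")
overlap = internal.merge(proxy, on="year", suffixes=("_internal", "_proxy"))

# Similarity metric: correlation of the two series over the overlap period
similarity = overlap["default_rate_internal"].corr(overlap["default_rate_proxy"])

# Risk-adjusting calibration: average ratio of internal to proxy performance,
# which can be applied to proxy-based model output going forward
calibration = (overlap["default_rate_internal"] / overlap["default_rate_proxy"]).mean()
print(similarity, calibration)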

Why Augment Internal Data?

While the task of choosing the augmentation strategy may seem daunting, there are concrete benefits to supplementing internal data with a proxy, rather than simply using the proxy data on its own. Most importantly, for the purpose of calculating the allowance for a given portfolio, incorporating some of the actual values will in most cases produce the most accurate estimate. For example, your institution may underwrite loans conservatively relative to the rest of the industry—incorporating at least some of the actual data associated with those lending practices will make it easier to understand how the proxy data differs from the characteristics unique to your business. More broadly, proxy data is useful beyond CECL reporting and has other applications that can boost bank profits. For example, lending institutions can build better predictive models based on richer datasets to calibrate loan screening and loan pricing decisions. These datasets can also be built into existing models to provide better insight into risk metrics and other asset characteristics, and to allow for more fine-tuned management decisions.


RiskSpan Director David Andrukonis Featured on The Purposeful Banker Podcast

RiskSpan’s CECL Solution Director David Andrukonis was a featured guest on PrecisionLender’s podcast, The Purposeful Banker, in a recent episode titled “Is Your Bank Ready for CECL?”

David summarized the major takeaways from a recent CECL conference, including regulator signals of forthcoming capital relief and emerging practices around reasonable and supportable forecast period length (16:19); outlined how RiskSpan is helping banks prepare for the new accounting standard (3:47); and offered ways that banks can stay current on continuing CECL developments (23:42).

You can listen to the entire episode of the podcast on their SoundCloud account:

 


Choosing a CECL Methodology

CECL presents institutions with a vast array of choices when it comes to loss estimation methodologies. It can seem a daunting challenge to winnow down the list of possible methods. Institutions must weigh competing concerns, including soundness and auditability, cost and feasibility, and the value of model reusability. They must convince not only themselves but also external stakeholders that their methodology choices are reasonable, often on a segment-by-segment basis, as methodology can vary by segment. It benefits banks, however, to narrow the field of CECL methodology choices soon so that they can finalize data preparation and begin parallel testing (generating CECL results alongside incurred-loss allowance estimates). Parallel testing generates advance signals of CECL impact and may itself play a role in the final choice of allowance methodology. In this post, we provide an overview of some of the most common loss estimation methodologies that banks and credit unions are considering for CECL, and outline the requirements, advantages, and challenges of each.

Methods to Estimate Lifetime Losses

The CECL standard explicitly mentions five loss estimation methodologies, and these are the methodologies most commonly considered by practitioners. Different practitioners define them differently. Additionally, many sound approaches combine elements of each method. For this analysis, we will discuss them as separate methods, and use the definitions that most institutions have in mind when referring to them:

  1. Vintage,
  2. Loss Rate,
  3. PDxLGD,
  4. Roll Rate, and
  5. Discounted Cash Flow (DCF).

While CECL allows the use of other methods—for example, for estimating losses on individual collateral-dependent loans—these five methodologies are the most applicable to the largest subset of assets and institutions.  For most loans, the allowance estimation process entails grouping loans into segments, and for each segment, choosing and applying one of the methodologies above. A common theme in FASB’s language regarding CECL methods is flexibility: rather than prescribing a formula, FASB expects that the banks consider historical patterns and the macroeconomic and credit policy drivers thereof, and then extrapolate based on those patterns, as well as each individual institution’s macroeconomic outlook. The discussion that follows demonstrates some of this flexibility within each methodology but focuses on the approach chosen by RiskSpan based on our view of CECL and our industry experience. We will first outline the basics of each methodology, followed by their data requirements, and end with the advantages and challenges of each approach.  

Vintage Method

Using the Vintage method, historical losses are tabulated by vintage and by loan age, as a percentage of origination balances by vintage year. In the example below, known historical values appear in the white cells, and forecasted values appear in shaded cells. We will refer to the entire shaded region as the “forecast triangle” and the cells within the forecast triangle as “forecast cells.”

[Table: losses as a percentage of origination balance, by vintage and loan age]

A simple way to populate the forecast cells is with the simple average of the known values from the same column. In other words, we calculate the average marginal loss rate for loans of each age and extrapolate that forward. The limitation of this approach is that it does not differentiate loss forecasts based on the bank’s macroeconomic outlook, which is a core requirement of CECL, so a bank using this method will need to incorporate its macroeconomic outlook via management adjustments and qualitative factors (Q-factors).
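
In pandas, populating the forecast cells with column averages might look like this (a sketch; vintage_matrix is assumed to be a vintage-by-age table of marginal loss rates with the forecast cells left blank):

import pandas as pd

# Rows = vintage years, columns = loan age; values = marginal loss rate as a
# percentage of origination balance; NaN marks a forecast cell
vintage_matrix = pd.read_csv("vintage_loss_rates.csv", index_col="vintage")

# Fill each forecast cell with the average of the known values in its column,
# i.e. the average marginal loss rate for loans of that age
column_averages = vintage_matrix.mean(skipna=True)
forecast_triangle = vintage_matrix.fillna(column_averages)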

As an alternative, RiskSpan has developed an approach that forecasts the loss triangle using statistical regression: a regression model estimates the historical loss rates in the vintage matrix as a function of loan age, a credit indicator, and a macroeconomic variable, and that equation, together with a forecast for the macroeconomic variable (and a mean-reversion process), is then used to populate the forecast triangle. The forecast cells can still be adjusted by management as desired, and/or Q-factors can be used. We caution, however, that management should take care not to double-count the influence of macroeconomics on allowance estimates (i.e., once via models, and again via Q-factors).

Once the results of the regression are ready and adjustments are applied where needed, the final allowance can be derived as follows:
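
In rough notation, the calculation looks something like this (a sketch in our notation, not the exact formula from the original figure):

\text{Allowance} \;=\; \sum_{v \in \text{vintages}} B_v \times \sum_{a > a_v} \hat{r}_{v,a}

where B_v is the origination balance of vintage v, a_v is the current age of vintage v, and \hat{r}_{v,a} is the forecast-cell marginal loss rate (after any management adjustments) for vintage v at age a.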


Loss Rate Method

Using the Loss Rate method, the average lifetime loss rate is calculated for historical static pools within a segment. This average lifetime loss rate is used as the basis to predict the lifetime loss rate of the current static pool—that is, the loans on the reporting-date balance sheet.

In this context, a static pool refers to a group of loans that were on the balance sheet as of a particular date, regardless of when they were originated. For example, within an institution’s owner-occupied commercial real estate portfolio, the 12/31/06 static pool would refer to all such loans that were on the institution’s balance sheet as of December 31, 2006. We would measure the lifetime losses of such a static pool beginning on the static pool date (December 31, 2006, in this example) and express those losses as a percentage of the balance that existed on the static pool date. This premise is consistent with what CECL asks us to do, i.e., estimate all future credit losses on the loans on the reporting-date balance sheet.

A historical static pool is fully aged if all loans that made up the pool are either paid in full or charged off, where payments in full include renewals that satisfy the original contract. We should be wary of including partially aged static pools in the development of average lifetime loss estimates, because the cumulative loss rates of partially aged pools constitute life-to-date loss rates rather than complete lifetime loss rates, and inherently understate the lifetime loss rate that is required by CECL.

To generate the most complete picture of historical losses, RiskSpan constructs multiple overlapping static pools within the historical dataset of a given segment and calculates the average of the lifetime loss rates of all fully aged static pools.  This provides an average lifetime loss rate over a business cycle as the soundest basis for a long-term forecast. This technique also allows, but does not require, the use of statistical techniques to estimate lifetime loss rate as a function of the credit mix of a static pool.

After the average lifetime loss rate has been determined, we can incorporate management’s view of how the forward-looking environment will differ from the lookback period over which the lifetime loss rates were calculated, via Q-Factors.

The final allowance can be derived as follows:
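
In rough notation (a sketch, not the exact formula from the original figure):

\text{Allowance} \;=\; B_{\text{current pool}} \times \bar{L}

where B_{\text{current pool}} is the amortized cost basis of the loans on the reporting-date balance sheet and \bar{L} is the average lifetime loss rate of fully aged historical static pools, adjusted for Q-factors.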


PDxLGD Method

Methods ranging from very simple to very sophisticated go by the name “PD×LGD.” At the most sophisticated end of the spectrum are models that calculate loan-by-loan, month-by-month, macro-conditioned probabilities of default and corresponding loss given default estimates. Such estimates can be used in a discounted cash flow context. These estimates can also be used outside of a cash flow context; we can summarize these monthly estimates into a cumulative default probability and corresponding exposure-at-default and loss-given-default estimates, which yield a single lifetime loss rate estimate. At the simpler end of the spectrum are calculations of the lifetime default rates and corresponding loss given default rates of static pools (not marginal monthly or annual default rates). This simpler calculation is the method that most institutions have in mind when referring to “PD×LGD methods,” so it is the definition we will use here.

Using this PDxLGD method, the loss rate is calculated based on the same static pool concept as that of the Loss Rate method. As with the Loss Rate method, we can use the default rates and loss given default rates of different static pools to quantify the relationship between those rates and the credit mix of the segment, and to use that relationship going forward based on the credit mix of today’s portfolio. However, under PDxLGD, the loss rate is a function of two components: the lifetime default rate (PD), and the loss given default (LGD).  The final allowance can be derived as follows:
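
In rough notation (a sketch, not the exact formula from the original figure):

\text{Expected Loss Rate} = PD_{\text{lifetime}} \times LGD, \qquad \text{Allowance} = B_{\text{current pool}} \times PD_{\text{lifetime}} \times LGD

where PD_{\text{lifetime}} is the lifetime default rate and LGD is the corresponding loss given default, both measured on fully aged historical static pools and adjusted as appropriate.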


Because the PDxLGD and Loss Rate methods derive the Expected Loss Rate for the segment using different but related approaches, an important quality control is to verify that the final calculated rates are consistent under both methodologies and to investigate the cause of any discrepancies.

Roll Rate Method

Using the Roll Rate method, ultimate losses are predicted based on historical roll rates and the historical loss given default estimate.  Roll rates are either (a) the frequency with which loans transition from one delinquency status to another, or (b) the frequency with which loans “migrate” or “transition” from one risk grade to another.  While the former is preferred due to its transparency and objectivity, for institutions with established risk grades, the latter is an appropriate metric.

Under this method, management can apply adjustments for macroeconomic and other factors at the individual roll rate level, as well as on-top adjustments as needed. Roll rate matrices can include prepayment as a possible transition, thereby incorporating prepayment probabilities. Roll rates can be used in a cash flow engine that incorporates contractual loan features and generates probabilistic (expected) cash flows, or outside of a cash flow engine to generate expected chargeoffs of amortized cost. Finally, it is possible to use statistical regression techniques to express roll rates as a function of macroeconomic variables, and thus to condition future roll rates on macroeconomic expectations.

The final allowance can be derived as follows:
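
One way to carry out the calculation outside of a cash flow engine is sketched below (illustrative placeholder numbers only; an actual implementation would estimate the matrix from the institution's own history and may treat prepayment and timing differently):

import numpy as np

# Monthly roll-rate matrix across delinquency states, estimated from history.
# States: Current, 30 DPD, 60 DPD, 90+ DPD, Paid Off, Default; the last two are
# absorbing states. The values below are placeholders for illustration only.
T = np.array([
    [0.960, 0.020, 0.000, 0.000, 0.020, 0.000],  # Current
    [0.300, 0.400, 0.250, 0.000, 0.040, 0.010],  # 30 DPD
    [0.100, 0.150, 0.350, 0.350, 0.020, 0.030],  # 60 DPD
    [0.050, 0.050, 0.100, 0.600, 0.010, 0.190],  # 90+ DPD
    [0.000, 0.000, 0.000, 0.000, 1.000, 0.000],  # Paid Off
    [0.000, 0.000, 0.000, 0.000, 0.000, 1.000],  # Default
])

# Amortized cost by current delinquency status and a historical loss given default
balances = np.array([90e6, 5e6, 3e6, 2e6, 0.0, 0.0])
lgd = 0.35

# Roll the portfolio forward until essentially all balances are absorbed
state = balances.copy()
for _ in range(360):
    state = state @ T

allowance = state[5] * lgd  # expected defaulted balance times loss given default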


Discounted Cash Flow (DCF) Method

Discounting cash flows is a way of translating expected future cash flows into a present value. DCF is a loan-level method (even for loans grouped into segments), and thus requires loan-by-loan, month-by-month forecasts of prepayment, default, and loss given default to translate contractual cash flows into prepay-, default-, and loss-given-default-adjusted cash flows. Although such loan-level, monthly forecasts could be derived using any method, most institutions have statistical forecasting techniques in mind when thinking about a DCF approach. Thus, even though statistical forecasting techniques and cash flow discounting are not inextricably linked, we will treat them as a pair here.

The most complex, and the most robust, of the five methodologies, DCF (paired with statistical forecasting techniques) is generally used by larger institutions that have the capacity and the need for the greatest amount of insight and control. Critically, DCF capabilities give institutions the ability (when substituting the effective interest rate for a market-observed discount rate) to generate fair value estimates that can serve a host of accounting and strategic purposes.

To estimate future cash flows, RiskSpan uses statistical models, which comprise:

  • Prepayment sub-models
  • Probability-of-default or roll rate sub-models
  • Loss-given-default sub-models

Allowance is then determined based on the expected cash flows, which, similarly to the Roll Rate method, are generated based on the rates predicted by the statistical models, contractual loan terms, and the loan status at the reporting date.
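
A bare-bones sketch of the discounting step for a single loan (placeholder inputs; in practice the expected cash flows would come from the prepayment, default, and loss-given-default sub-models):

import numpy as np

# Illustrative inputs: expected monthly cash flows already adjusted for predicted
# prepayment, default, and loss given default; the loan's effective interest rate;
# and its amortized cost basis at the reporting date
expected_cash_flows = np.full(360, 450.0)
effective_interest_rate = 0.045
amortized_cost = 95_000.0

# Present value of expected cash flows, discounted at the effective interest rate
monthly_rate = effective_interest_rate / 12
periods = np.arange(1, len(expected_cash_flows) + 1)
present_value = np.sum(expected_cash_flows / (1 + monthly_rate) ** periods)

# Allowance under a DCF approach: amortized cost minus the present value of
# expected cash flows (summed across the loans in the segment)
allowance = amortized_cost - present_value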

Some argue that an advantage of the discounted cash flow approach is lower Day 1 losses. Whether DCF or non-DCF methods produce a lower Day 1 allowance, all else equal, depends upon the length of the assumed liquidation timeline, the discount rate, and the recovery rate. This under-discussed topic merits its own treatment, and we will cover it fully in a future post.

The statistical models often used with DCF methods use historical data to express the likelihood of default or prepayment as a mathematical function of loan-level credit factors and macroeconomic variables.

For example, the probability of transitioning from “Current” to “Delinquent” status at month t can be calculated as a function of the loan’s age at month t multiplied by a sensitivity factor β1 (derived from the historical dataset), the loan’s FICO score multiplied by a sensitivity factor β2, and the projected unemployment rate (based on management’s macroeconomic assumptions) at month t multiplied by a sensitivity factor β3. Mathematically,

P(Current → Delinquent at month t) = f( β1 × LoanAge_t + β2 × FICO + β3 × Unemployment_t )

Because macroeconomic and loan-level credit factors are explicitly and transparently incorporated into the forecast, such statistical techniques reduce reliance on Q-Factors. This is one of the reasons why such methods are the most scientific.

Historical Data Requirements

The table below summarizes the historical data requirements for each methodology, including the dataset type, the minimum required data fields, and the timespan.

[Table: historical data requirements by methodology (dataset type, minimum required data fields, and timespan)]

In conclusion, having the most robust data allows the most options; for institutions with moderately complex historical datasets, Loss Rate, PDxLGD, and Vintage are excellent options.  With limited historical data, the Vintage method can produce a sound allowance under CECL.

While the data requirements may be daunting, it is important to keep in mind that proxy data can be used in place of, or alongside, institutional historical data, and RiskSpan can help identify and fill your data needs.  Some of the proxy data options are summarized below:

[Table: proxy data options]

Advantages and Challenges of CECL Methodologies

Each methodology has advantages, and each carries its own set of challenges. While the Vintage method, for example, is forgiving of limited historical data, it also provides limited insight and control for further analysis. On the other hand, the DCF method provides significant insight and control, as well as early model performance indicators, but requires a robust dataset and advanced statistical expertise.

We have summarized some of the advantages and challenges for each method below.

[Table: advantages and challenges of each CECL methodology]

In addition to the considerations summarized in the table, it is important to consider audit and regulatory requirements. Generally, institutions facing higher audit and regulatory scrutiny will be steered toward more complex methods. Also, bankers who intend to leverage the loan forecasting model they use for CECL for strategic decision-making (for example, loan screening and pricing decisions), and who desire granular insight and dials around their allowance numbers, will gravitate toward methodologies that afford more precision. At the other end of the spectrum, the methods that provide less precision and insight generally come with lighter operational burden.


Choosing Your CECL Methodology

Choosing the method that’s right for you depends on many factors, from historical data availability to management objectives and associated operational costs.

In many cases, management can gain a better understanding of the institutional allowance requirements after analyzing the results determined by multiple complementary approaches.

RiskSpan is willing to talk further with individual institutions about their circumstances, as well as generate sample results using a set of various methodologies.


Hands-On Machine Learning–Predicting Loan Delinquency

The ability of machine learning models to predict loan performance makes them particularly interesting to lenders and fixed-income investors. This expanded post provides an example of applying the machine learning process to a loan-level dataset in order to predict delinquency. The process includes variable selection, model selection, model evaluation, and model tuning.

The data used in this example are from the first quarter of 2005 and come from the publicly available Fannie Mae performance dataset. The data are segmented into two different sets: acquisition and performance. The acquisition dataset contains 217,000 loans (rows) and 25 variables (columns) collected at origination (Q1 2005). The performance dataset contains the same set of 217,000 loans coupled with 31 variables that are updated each month over the life of the loan. Because there are multiple records for each loan, the performance dataset contains approximately 16 million rows.

For this exercise, the problem is to build a model capable of predicting which loans will become severely delinquent, defined as falling behind six or more months on payments. This delinquency variable was calculated from the performance dataset for all loans and merged with the acquisition data based on the loan’s unique identifier. This brings the total number of variables to 26. Plenty of other hypotheses can be tested, but this analysis focuses on just this one.

1          Variable Selection

An overview of the dataset can be found below, showing the name of each variable as well as the number of observations available.

                                            Count
LOAN_IDENTIFIER                             217088
CHANNEL                                     217088
SELLER_NAME                                 217088
ORIGINAL_INTEREST_RATE                      217088
ORIGINAL_UNPAID_PRINCIPAL_BALANCE_(UPB)     217088
ORIGINAL_LOAN_TERM                          217088
ORIGINATION_DATE                            217088
FIRST_PAYMENT_DATE                          217088
ORIGINAL_LOAN-TO-VALUE_(LTV)                217088
ORIGINAL_COMBINED_LOAN-TO-VALUE_(CLTV)      217074
NUMBER_OF_BORROWERS                         217082
DEBT-TO-INCOME_RATIO_(DTI)                  201580
BORROWER_CREDIT_SCORE                       215114
FIRST-TIME_HOME_BUYER_INDICATOR             217088
LOAN_PURPOSE                                217088
PROPERTY_TYPE                               217088
NUMBER_OF_UNITS                             217088
OCCUPANCY_STATUS                            217088
PROPERTY_STATE                              217088
ZIP_(3-DIGIT)                               217088
MORTGAGE_INSURANCE_PERCENTAGE                34432
PRODUCT_TYPE                                217088
CO-BORROWER_CREDIT_SCORE                    100734
MORTGAGE_INSURANCE_TYPE                      34432
RELOCATION_MORTGAGE_INDICATOR               217088

Most of the variables in the dataset are fully populated, with the exception of DTI, MI Percentage, MI Type, and Co-Borrower Credit Score. Many options exist for dealing with missing variables, including dropping the rows that are missing, eliminating the variable, substituting with a value such as 0 or the mean, or using a model to fill the most likely value.

The following chart plots the frequency of the 34,000 MI Percentage values.

The distribution suggests a decent amount of variability. Most loans that have mortgage insurance are covered at 25%, but there are sizeable populations both above and below. Mortgage insurance is not required for the majority of borrowers, so it makes sense that this value would be missing for most loans.  In this context, it makes the most sense to substitute the missing values with 0, since 0% mortgage insurance is an accurate representation of the state of the loan. An alternative that could be considered is to turn the variable into a binary yes/no variable indicating if the loan has mortgage insurance, though this would result in a loss of information.

The next variable with a large number of missing values is Mortgage Insurance Type. Querying the dataset reveals that of the 34,400 loans that have mortgage insurance, 33,000 have type 1 borrower paid insurance and the remaining 1,400 have type 2 lender paid insurance. Like the mortgage insurance variable, the blank values can be filled. This will change the variable to indicate if the loan has no insurance, type 1, or type 2.

The remaining variable with a significant number of missing values is Co-Borrower Credit Score, with approximately half of its values missing. Unlike MI Percentage, the context does not allow us to substitute missing values with zeroes. The distribution of both borrower and co-borrower credit score as well as their relationship can be found below.

As the plot demonstrates, borrower and co-borrower credit scores are correlated. Because of this, the removal of co-borrower credit score would only result in a minimal loss of information (especially within the context of this example). Most of the variance captured by co-borrower credit score is also captured in borrower credit score. Turning the co-borrower credit score into a binary yes/no ‘has co-borrower’ variable would not be of much use in this scenario as it would not differ significantly from the Number of Borrowers variable. Alternate strategies such as averaging borrower/co-borrower credit score might work, but for this example we will simply drop the variable.
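
The cleanup described above can be expressed in a few lines of pandas (a sketch; the file name is illustrative, and the column names follow the listing above):

import pandas as pd

acq = pd.read_csv("fannie_mae_acquisition_2005Q1.csv")

# MI Percentage: a blank means the loan has no mortgage insurance, so 0 is accurate
acq["MORTGAGE_INSURANCE_PERCENTAGE"] = acq["MORTGAGE_INSURANCE_PERCENTAGE"].fillna(0)

# MI Type: fill blanks so the variable reads none (0), borrower paid (1), or lender paid (2)
acq["MORTGAGE_INSURANCE_TYPE"] = acq["MORTGAGE_INSURANCE_TYPE"].fillna(0)

# Co-Borrower Credit Score: roughly half missing and highly correlated with
# Borrower Credit Score, so drop the column
acq = acq.drop(columns=["CO-BORROWER_CREDIT_SCORE"])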

In summary, the dataset is now smaller—Co-Borrower Credit Score has been dropped. Additionally, missing values for MI Percentage and MI Type have been filled in. Now that the data have been cleaned up, the values and distributions of the remaining variables can be examined to determine what additional preprocessing steps are required before model building. Scatter matrices of pairs of variables and distribution plots of individual variables along the diagonal can be found below. The scatter plots are helpful for identifying multicollinearity between pairs of variables, and the distributions can show if a variable lacks enough variance that it won’t contribute to model performance.

The third row of scatterplots, above, reflects a lack of variability in the distribution of Original Loan Term. The variance of 3.01 (calculated separately) is very small, and as a result the variable can be removed—it will not contribute to any model as there is very little information to learn from. This process of inspecting scatterplots and distributions is repeated for the remaining pairs of variables. The Number of Units variable suffers from the same issue and can also be removed.

2          Heatmaps and Pairwise Grids

Matrices of scatterplots are useful for looking at the relationships between variables. Another useful plot is a heatmap and pairwise grid of correlation coefficients. In the plot below a very strong correlation between Original LTV and Original CLTV is identified.

This multicollinearity can be problematic for both the interpretation of the relationship between the variables and delinquency as well as the actual performance of some models.  To combat this problem, we remove Original CLTV because Original LTV is a more accurate representation of the loan at origination. Loans in this population that were not refinanced kept their original LTV value as CLTV. If CLTV were included in the model it would introduce information not available at origination to the model. The problem of allowing unexpected additional information in a dataset introduces an issue known as leakage, which will bias the model.
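
A sketch of the correlation heatmap and the resulting drop of Original CLTV (continuing with the acq DataFrame from the sketch above):

import seaborn as sns
import matplotlib.pyplot as plt

# Heatmap and pairwise grid of correlation coefficients for the numeric variables
corr = acq.select_dtypes("number").corr()
sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm")
plt.show()

# Original LTV and Original CLTV are nearly collinear; keep LTV only, since CLTV
# can reflect information not available at origination (leakage)
acq = acq.drop(columns=["ORIGINAL_COMBINED_LOAN-TO-VALUE_(CLTV)"])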

Now that the numeric variables have been inspected, the remaining categorical variables must be analyzed to ensure that the classes are not significantly unbalanced. Count plots and simple descriptive statistics can be used to identify categorical variables that are problematic. Two examples below show the count of loans by state and by seller.

Inspecting the remaining variables uncovers that Relocation Indicator (indicating a mortgage issued when an employer moves an employee) and Product Type (fixed vs. adjustable rate) must be removed as they are extremely unbalanced and do not contain any information that will help the models learn. We also removed first payment date and origination date, which were largely redundant. The final cleanup results in a dataset that contains the following columns:

LOAN_IDENTIFIER 
CHANNEL 
SELLER_NAME
ORIGINAL_INTEREST_RATE
ORIGINAL_UNPAID_PRINCIPAL_BALANCE_(UPB) 
ORIGINAL_LOAN-TO-VALUE_(LTV) 
NUMBER_OF_BORROWERS
DEBT-TO-INCOME_RATIO_(DTI) 
BORROWER_CREDIT_SCORE
FIRST-TIME_HOME_BUYER_INDICATOR 
LOAN_PURPOSE
PROPERTY_TYPE 
OCCUPANCY_STATUS 
PROPERTY_STATE
MORTGAGE_INSURANCE_PERCENTAGE 
MORTGAGE_INSURANCE_TYPE 
ZIP_(3-DIGIT)

The final two steps before model building are to standardize each of the numeric variables and turn each categorical variable into a series of dummy or indicator variables. Numeric variables are scaled to mean 0 and standard deviation 1 so that it is easier to compare variables that have different scales (e.g. interest rate vs. LTV). Standardizing is also a requirement for many algorithms (e.g. principal component analysis).

Categorical variables are transformed by turning each value of the variable into its own yes/no feature. For example, Property State originally has 50 possible values, so it will be turned into 50 variables (e.g. Alabama yes/no, Alaska yes/no).  For categorical variables with many values this transformation will significantly increase the number of variables in the model.
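
A sketch of these two steps, assuming acq is the cleaned DataFrame from the preceding steps (the column lists are abbreviated for space):

import pandas as pd
from sklearn.preprocessing import StandardScaler

numeric_cols = ["ORIGINAL_INTEREST_RATE", "ORIGINAL_LOAN-TO-VALUE_(LTV)",
                "BORROWER_CREDIT_SCORE", "DEBT-TO-INCOME_RATIO_(DTI)"]
categorical_cols = ["CHANNEL", "LOAN_PURPOSE", "PROPERTY_TYPE", "OCCUPANCY_STATUS",
                    "PROPERTY_STATE", "ZIP_(3-DIGIT)"]

# Scale numeric variables to mean 0 and standard deviation 1
acq[numeric_cols] = StandardScaler().fit_transform(acq[numeric_cols])

# Turn each categorical variable into a set of yes/no indicator columns
model_data = pd.get_dummies(acq, columns=categorical_cols)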

After scaling and transforming the dataset, the final shape is 199,716 rows and 106 columns. The target variable—loan delinquency—has 186,094 ‘no’ values and 13,622 ‘yes’ values. The data are now ready to be used to build, evaluate, and tune machine learning models.

3          Model Selection

Because the target variable loan delinquency is binary (yes/no) the methods available will be classification machine learning models. There are many classification models, including but not limited to: neural networks, logistic regression, support vector machines, decision trees and nearest neighbors. It is always beneficial to seek out domain expertise when tackling a problem to learn best practices and reduce the number of model builds. For this example, two approaches will be tried—nearest neighbors and decision tree.

The first step is to split the dataset into two segments: training and testing. For this example, 40% of the data will be partitioned into the test set, and 60% will remain as the training set. The resulting segmentations are as follows:

  1. 60% of the observations (the training set): X_train
  2. The associated targets (loan delinquency) for each observation in X_train: y_train
  3. 40% of the observations (the test set): X_test
  4. The targets associated with the test set: y_test

Data should be randomly shuffled before they are split, as datasets are often in some type of meaningful order. Once the data are segmented the model will first be exposed to the training data to begin learning.
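
A sketch of the split (model_data is assumed from the steps above, and the name of the engineered delinquency flag is illustrative):

from sklearn.model_selection import train_test_split

X = model_data.drop(columns=["LOAN_IDENTIFIER", "SEVERELY_DELINQUENT"])
y = model_data["SEVERELY_DELINQUENT"]

# 60/40 split; shuffling (the default) guards against any ordering in the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.40,
                                                    random_state=42)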

4          K-Nearest Neighbors Classifier

Training a K-neighbors model requires the fitting of the model on X_train (variables) and y_train (target) training observations. Once the model is fit, a summary of the model hyperparameters is returned. Hyperparameters are model parameters not learned automatically but rather are selected by the model creator.
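
A sketch of the fit, and of the neighbor lookup used in the example that follows (the row position used here is illustrative):

from sklearn.neighbors import KNeighborsClassifier

# Fit the classifier with default hyperparameters (5 neighbors, Minkowski distance)
knc = KNeighborsClassifier()
knc.fit(X_train, y_train)

# Distances to, and indices of, the 6 nearest training neighbors of one observation
distances, indices = knc.kneighbors(X_train.iloc[[28919]], n_neighbors=6)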

 

The K-neighbors algorithm searches for the closest (i.e., most similar) training examples for each test observation using a metric that calculates the distance between observations in high-dimensional space.  Once the nearest neighbors are identified, a predicted class label is generated as the class that is most prevalent in the neighbors. The biggest challenge with a K-neighbors classifier is choosing the number of neighbors to use. Another significant consideration is the type of distance metric to use.

To see more clearly how this method works, the 6 nearest neighbors of two random observations from the training set were selected: one delinquent (label 1) and one non-delinquent (label 0).

Random delinquent observation: 28919 
Random non delinquent observation: 59504

The indices and minkowski distances to the 6 nearest neighbors of the two random observations are found below. Unsurprisingly, the first nearest neighbor is always itself and the first distance is 0.

Indices of closest neighbors of obs. 28919 [28919 112677 88645 103919 27218 15512]
Distance of 5 closest neighbor for obs. 28919 [0 0.703 0.842 0.883 0.973 1.011]

Indices of 5 closest neighbors for obs. 59504 [59504 87483 25903 22212 96220 118043]
Distance of 5 closest neighbor for obs. 59504 [0 0.873 1.185 1.186 1.464 1.488]

Recall that in order to make a classification prediction, the kneighbors algorithm finds the nearest neighbors of each observation. Each neighbor is given a ‘vote’ via their class label, and the majority vote wins. Below are the labels (or votes) of either 0 (non-delinquent) or 1 (delinquent) for the 6 nearest neighbors of the random observations. Based on the voting below, the delinquent observation would be classified correctly as 3 of the 5 nearest neighbors (excluding itself) are also delinquent. The non-delinquent observation would also be classified correctly, with 4 of 5 neighbors voting non-delinquent.

Delinquency label of nearest neighbors- non delinquent observation: [0 1 0 0 0 0]
Delinquency label of nearest neighbors- delinquent observation: [1 0 1 1 0 1]

 

5          Tree-Based Classifier

Tree-based classifiers learn by segmenting the variable space into a number of distinct regions, or nodes. This is accomplished via a process called recursive binary splitting, during which observations are continuously split into two groups by selecting the variable and cutoff value that result in the highest node purity, where purity is a measure of how mixed the classes are within the resulting nodes. The two most popular purity metrics are the Gini index and cross-entropy. A low value for these metrics indicates that the resulting node is pure and contains predominantly observations from the same class. Just like the nearest neighbor classifier, the decision tree classifier makes classification decisions by ‘votes’ from observations within each final node (known as the leaf node).

To illustrate how this works, a decision tree was created with the number of splitting rules (max depth) limited to 5. An excerpt of this tree can be found below. All 120,000 training examples start together in the top box. From top to bottom, each box shows the variable and splitting rule applied to the observations, the value of the gini metric, the number of observations the rule was applied to, and the current segmentation of the target variable. The first box indicates that the 6th variable (represented by the 5th index ‘X[5]’), Borrower Credit Score, was used to split the training examples. Observations where the value of Borrower Credit Score was below or equal to -0.4413 follow the line to the box on the left. This box shows that 40,262 samples met the criteria. This box also holds the next splitting rule, also applied to the Borrower Credit Score variable. This process continues with X[2] (Original LTV) and so on until the tree is finished growing to its depth of 5. The final segments at the bottom of the tree are the aforementioned leaf nodes, which are used to make classification decisions. When making a prediction on new observations, the same splitting rules are applied and the observation receives the label of the most commonly occurring class in its leaf node.

A more advanced tree-based classifier is the Random Forest Classifier. The Random Forest works by generating many individual trees, often hundreds or thousands. However, for each tree, the number of variables considered at each split is limited to a random subset. This helps reduce model variance and de-correlate the trees (since each tree will have a different set of available splitting choices). In our example, we fit a random forest classifier on the training data. The resulting hyperparameters and model documentation indicate that by default the model generates 10 trees, considers a random subset of variables the size of the square root of the number of variables (approximately 10 in this case), has no depth limitation, and only requires each leaf node to have 1 observation.

Since the random forest contains many trees and does not have a depth limitation, it is incredibly difficult to visualize. In order to better understand the model, a plot showing which variables were selected and resulted in the largest drop in the purity metric (gini index) can be useful. Below are the top 10 most important variables in the model, ranked by the total (normalized) reduction to the gini index. Intuitively, this plot can be described as showing which variables can best segment the observations into groups that are predominantly one class, either delinquent or non-delinquent.
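
A sketch of the random forest fit and the importance ranking (n_estimators is set to 10 explicitly to mirror the default described above, which differs in newer scikit-learn versions):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rfc = RandomForestClassifier(n_estimators=10, random_state=42)
rfc.fit(X_train, y_train)

# Variables ranked by their total (normalized) reduction of the gini index
importances = pd.Series(rfc.feature_importances_, index=X_train.columns)
print(importances.sort_values(ascending=False).head(10))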

 

6          Model Evaluation

Now that the models have been fitted, their performance must be evaluated. To do this, the fitted model will first be used to generate predictions on the test set (X_test). Next, the predicted class labels are compared to the actual observed class label (y_test). Three of the most popular classification metrics that can be used to compare the predicted and actual values are recall, precision, and the f1-score. These metrics are calculated for each class, delinquent and not-delinquent.

Recall is calculated for each class as the ratio of events that were correctly predicted. More precisely, it is defined as the number of true positive predictions divided by the number of true positive predictions plus false negative predictions. For example, if the data had 10 delinquent observations and 7 were correctly predicted, recall for delinquent observations would be 7/10 or 70%.

Precision is the number of true positives divided by the number of true positives plus false positives. Precision can be thought of as the ratio of events correctly predicted to the total number of events predicted. In the hypothetical example above, assume that the model made a total of 14 predictions for the label delinquent. If so, then the precision for delinquent predictions would be 7/14 or 50%.

The f1 score is calculated as the harmonic mean of recall and precision: 2 × (Precision × Recall) / (Precision + Recall).

The classification reports for the K-neighbors and decision tree below show the precision, recall, and f1 scores for label 0 (non-delinquent) and 1 (delinquent).
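
These reports can be produced directly from the fitted models (a sketch):

from sklearn.metrics import classification_report

# Precision, recall, and f1 for each class (0 = non-delinquent, 1 = delinquent)
print(classification_report(y_test, knc.predict(X_test)))
print(classification_report(y_test, rfc.predict(X_test)))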

 

There is no silver bullet for choosing a model—often it comes down to the goals of implementation. In this situation, the tradeoff between identifying more delinquent loans at the cost of misclassification can be analyzed with a specific tool called an ROC curve. When the model predicts a class label, a probability threshold is used to make the decision. This threshold is set by default at 50%, so that observations with more than a 50% chance of membership belong to one class and vice-versa.

The majority vote (of the neighbor observations or the leaf node observations) determines the predicted label. ROC curves allow us to see the impact of varying this voting threshold by plotting the true positive prediction rate against the false positive prediction rate for each threshold value between 0% and 100%.

The area under the ROC curve (AUC) quantifies the model’s ability to distinguish between delinquent and non-delinquent observations. A model with no predictive power will have an AUC of .5, as it ranks delinquent and non-delinquent observations no better than chance. A perfect model will have an AUC of 1, as it is able to perfectly separate the two classes.

To better illustrate, the ROC curves for the held-out test set, plotting the true positive rate against the false positive rate as the threshold is varied, are shown below.
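
A sketch of how the curves and AUC values are produced:

import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

for name, model in [("K-nearest neighbors", knc), ("Random forest", rfc)]:
    probs = model.predict_proba(X_test)[:, 1]   # probability of the delinquent class
    fpr, tpr, _ = roc_curve(y_test, probs)
    plt.plot(fpr, tpr, label=f"{name} (AUC = {roc_auc_score(y_test, probs):.2f})")

plt.plot([0, 1], [0, 1], linestyle="--", label="No skill (AUC = 0.5)")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()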

7          Model Tuning

Up to this point the models have been built and evaluated using a single train/test split of the data. In practice this is often insufficient because a single split does not always provide the most robust estimate of the error on the test set. Additionally, there are more steps required for model tuning. To solve both of these problems it is common to train multiple instances of a model using cross validation. In K-fold cross validation, the training data are further split into a third dataset called the validation set. The model is trained on the training set and then evaluated on the validation set. This process is repeated K times, each time holding out a different portion of the training set to validate against. Once the model has been tuned using the train/validation splits, it is tested against the held-out test set just as before. As a general rule, once data have been used to make a decision about the model they should never be used for evaluation.

8          K-Nearest Neighbors Tuning

Below, a grid search approach is used to tune the K-nearest neighbors model. The first step is to define all of the possible hyperparameters to try in the model. For the KNN model, the list nk = [10, 50, 100, 150, 200, 250] specifies the number of nearest neighbors to try in each model. The list is used by the function GridSearchCV to build a series of models, each using a different value of nk. By default, GridSearchCV uses 3-fold cross validation, meaning that the model will evaluate 3 train/validate splits of the data for each value of nk. Also specified in GridSearchCV is the scoring parameter used to evaluate each model; here it is set to the metric discussed earlier, the area under the ROC curve. GridSearchCV returns the best performing model by default, which can then be used to generate predictions on the test set as before. Many more values of nk could be specified to search through, and the default Minkowski distance could be swapped for a series of other metrics to try. However, this comes at a cost of computation time that increases significantly with each added hyperparameter.
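
A sketch of that grid search (cv is set to 3 explicitly, since newer scikit-learn versions default to 5 folds):

from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

nk = [10, 50, 100, 150, 200, 250]

# 3-fold cross-validated search over the number of neighbors, scored by AUC
cvknc = GridSearchCV(KNeighborsClassifier(),
                     param_grid={"n_neighbors": nk},
                     scoring="roc_auc",
                     cv=3)
cvknc.fit(X_train, y_train)
print(cvknc.best_params_, cvknc.best_score_)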

 

In the plot below, the mean training and validation scores of the 3 cross-validated splits are plotted for each value of K. The plot indicates that for the lower values of K the model was overfitting the training data, causing lower validation scores. As K increases, the training score falls but the validation score rises because the model gets better at generalizing to unseen data.

9          Random Forest Tuning

There are many hyperparameters that can be adjusted to tune the random forest model. We use three in our example: n_estimators, max_features, and min_samples_leaf. N_estimators refers to the number of trees to be created. This value can be increased substantially, so the search space is set in the list estimators. Random forests are generally robust to overfitting, and it is not uncommon to train a classifier with more than 1,000 trees. Second, the number of variables randomly considered at each split is tuned via max_features. A smaller value helps decorrelate the trees in the forest, which is especially useful when multicollinearity is present. We tried a number of different values for max_features, found in the list features. Finally, the minimum number of observations required in each leaf node is tuned via the min_samples_leaf parameter and the list samples.
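The sketch below mirrors this setup. The parameter names and list names (estimators, features, samples) follow the text, but the list contents are illustrative, since only the best-performing values are reported in the article.

```python
# Sketch: grid search over three random forest hyperparameters, scored by AUC.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

estimators = [100, 250, 500, 1000]   # illustrative grid values
features = [3, 5, 7, 10]
samples = [1, 5, 10, 25]

cvrfc = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={
        "n_estimators": estimators,
        "max_features": features,
        "min_samples_leaf": samples,
    },
    scoring="roc_auc",
    cv=3,
    n_jobs=-1,                 # 4 x 4 x 4 = 64 candidates per fold
)
cvrfc.fit(X_train, y_train)    # training split from the earlier sketch
print(cvrfc.best_params_)
```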

 

The resulting plot, below, shows a subset of the grid search results. Specifically, it shows the mean validation score for each number of trees and leaf size when the number of random features considered at each split is limited to 5. The plot demonstrates that the best performance occurs with 500 trees and a requirement of at least 5 observations per leaf. The best-performing model from the entire grid can be retrieved through the grid search’s best_estimator_ attribute.

By default, the parameters of the best estimator are assigned to the GridSearchCV objects (cvknc and cvrfc), which can then be used to generate future predictions or predicted probabilities. In our example, the tuned models are used to generate predicted probabilities on the held-out test set. The resulting ROC curves show an improvement in the KNN model from an AUC of .62 to .75. Likewise, the tuned random forest AUC improves from .64 to .77.
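A brief sketch of this final step, reusing the tuned cvknc and cvrfc objects from the sketches above (with refit enabled, each grid-search object delegates predictions to its best estimator):

```python
# Sketch: score the tuned models on the held-out test set.
from sklearn.metrics import roc_auc_score

for name, search in [("KNN", cvknc), ("Random Forest", cvrfc)]:
    probs = search.predict_proba(X_test)[:, 1]   # uses the refit best estimator
    print(f"{name} test AUC: {roc_auc_score(y_test, probs):.2f}")
```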

Predicting loan delinquency using only origination data is not an easy task. Presumably, if a strong signal existed in the data, it would trigger a change in strategy by MBS investors and ultimately in origination practices. Nevertheless, this exercise demonstrates the capability of a machine learning approach to deconstruct such an intricate problem and suggests the appropriateness of using machine learning models to tackle these and other risk management data challenges relating to mortgages and a potentially wide range of asset classes.



Credit Risk Transfer: Front End Execution – Why Does It Matter?

This article was originally published on the GoRion blog.

Last month I described an overview of the Credit Risk Transfer (CRT) activities outlined in the Federal Housing Finance Agency (FHFA) guidance to Fannie Mae and Freddie Mac (the GSEs). This three-year-old program has shown great promise and success in creating a deeper residential credit investor segment and has enabled risk increments to be shifted from the GSEs and taxpayers to the private sector.

The FHFA issued a Request for Input (RFI) to solicit stakeholder feedback on the GSEs’ proposals to adopt additional front-end credit risk transfer structures and to consider additional credit risk transfer policy issues. There is firm interest in this new and growing form of risk transfer from investors who have confidence in the underwriting and servicing of mortgage loans under new and improved GSE standards.

In addition to the back-end industry appetite for CRT, there is a growing focus on increasing risk share at the front end of the origination transaction. In particular, the mortgage industry and the mortgage insurers (MIs) are interested in exploring risk sharing more actively on the front end of the mortgage process. By participating in this new and growing market opportunity, the MIs would increase their traditional coverage to much deeper levels than the standard 30%.

 

Front-End Credit Risk Transfer

In 2016 FHFA expanded the GSE scorecards to broaden the types of loans and risk transfer structures, including an expansion to front-end CRT. In addition to many prescriptive outlines on CRT, the scorecards also included wording such as “…Work with FHFA to conduct an analysis and assessment of front-end credit risk transfer transactions, including work to support a forthcoming FHFA Request for Input. Work with FHFA to engage key stakeholders and solicit their feedback. After conducting the necessary analysis and assessment, work with FHFA to take appropriate steps to continue front-end credit risk transfer transactions.”

Two additional ways to share risk on the front end are 1) recourse transactions and 2) deeper mortgage insurance.

 

Recourse Transaction

Recourse as a form of credit enhancement is not a new concept. In years past, some institutions would sell loans with recourse to the GSEs, but the practice was usually judged capital-intensive and not an efficient way of selling loans into the secondary markets. However, some of the non-depositories have found recourse to be an attractive way to sell loans to the GSEs.

From 2013 through December 2015, the GSEs executed 12 deals with recourse on $12.6 billion in UPB. The pricing and structures vary widely, and the transactions are not transparent. While recourse can be attractive to both parties if structured adequately, the transactions are not as scalable, and each deal requires significant review and assessment. Arguments against recourse note that it diminishes opportunities for small- to medium-sized players who would like to participate in this new form of reduced g-fee structure and front-end CRT transaction.

PennyMac shared its perspective on this activity at a recent CRT conference. It uses the recourse structure with Fannie Mae, which leverages its capital structure and allows flexibility. Importantly, PennyMac reminds us that both parties’ interests are aligned, as there is skin in the game for quality originations.

 

Deeper Mortgage Insurance

The GSE model carries a significant amount of counter-party risk with MIs through their standard business offerings. Under their charters, the GSEs require credit enhancement on loans of 80% or higher loan-to-value (LTV). This traditionally plays out as 30% first-loss coverage on such loans; for example, a 95% LTV loan is insured down to 65%. The mortgage insurers are integral to most of the GSEs’ higher-LTV books of business. Per the RFI, as of December 2015 the MI industry collectively had counter-party exposure of $185.5 billion, covering $724.5 billion of loans. So, as a general course of business, this is a risk the MIs already share on higher-LTV lending without any additional exposure.

Through the crisis, the MIs were unable to pay initial claims dollar for dollar. This has caused hesitation about embracing a model with more counter-party risk than the one in place today. It is well documented that the MIs paid a great deal of claims and buffered the GSEs by taking the first loss on billions of dollars before the GSEs incurred any losses. While much of that has been paid back, memories are long, and this history has given pause over how to value the insurance, which differs from the back-end transactions. Today the MI industry is in much better shape thanks to capital raises and increased standards directed by the GSEs and state regulators. (Our recent blog post on mortgage insurance haircuts explores this phenomenon in greater detail.)

FHFA instituted the Private Mortgage Insurer Eligibility Requirements (PMIERs), which require higher capital for MI business transacted with the GSEs. State regulators also increased regulatory capital requirements for the residential insurance sector, and today the industry has strengthened its hand as a partner to the GSEs. In fact, the industry has new entrants without legacy books of losses, which adds new opportunities for the GSEs to expand their counter-party pools.

The MI companies can serve as a front-end model and play a more significant role in the risk share business by providing deeper MI on the front end (to 50% coverage) as a way of de-levering the GSEs and, ultimately, the taxpayers. And, like the GSEs, MIs may also participate in reinsurance markets to shed risk and balance out their own portfolios. Other market participants may also take part in this type of transaction, and we will observe what opportunities avail themselves over the longer term. While nothing is ever black and white, there appear to be benefits to expanding risk share efforts to the front end of the business.

 

Benefits

1) Strong execution: Pricing and executing on mortgage risk at the front end of origination will allow for options in a counter-cyclical, volatile market.

2) Transparency: Moving risk metrics and pricing to the front-end will drive more front-end price transparency for mortgage credit risk.

3) Inclusive institutional partnering: Smaller entities may participate in a front-end risk share effort thereby creating opportunities outside of the largest financial institutions.

4) Inclusive borrower process: Front-end CRT may reach more borrowers and provide options as more institutions can take part in this opportunity.

5) Expands options for CRT in pilot phase: By driving risk share to the front end, the GSEs advance their goal of de-risking their credit guarantee while providing a timely trade-off between g-fee and MI pricing on the front end of the transaction.

As part of the RFI response, the trade representing the MIs summarized principal benefits of front-end CRT as follows:

  1. Increased CRT availability and market stability
  2. Reduced first-loss holding risk
  3. Beneficial stakeholder familiarity and equitable access
  4. Increased transparency.

The full letter may be found at usmi.org.

In summary, whether it is recourse to a lending institution or participation in the front-end MI cost structure, pricing this risk at origination will continue to bring forward price discovery and transparency. This means the consumer and lender will be closer to the true credit costs of origination. With experience pricing and executing on CRT, it may become clearer where the differential cost of credit lies. The additional impact of driving more front-end CRT will be scalability and less process on the back-end for the GSEs. By leveraging the front-end model, GSEs will reach more borrowers and utilize a wider array of lending partners through this process.

On November 8 we experienced a historic election that may take the country in new directions. However, credit risk transfer is an option that may be used in the future regardless of the GSEs’ status, whether they 1) revert to the old model through recap and release; 2) re-emerge after housing reform legislation; or 3) remain in conservatorship and continue to be led by FHFA down this path.

Footnote: All data were retrieved from the Federal Housing Finance Agency (FHFA) Single-Family Credit Risk Transfer Request for Input, June 2016. More information may be found at FHFA.gov.

This is the second installment in a monthly Credit Risk Transfer (CRT) series on the GoRion blog. CRT is a significant accomplishment in bringing back private capital to the housing sector. This young effort, three years strong, has already shown promising investor appetite while discussions are underway to expand offerings to front-end risk share executions. My goal in this series is to share insights around CRT as it evolves with the private sector.

