
The Non-Agency MBS Market: Re-Assessing Securitization Market Conditions

Since the financial crisis began in 2007, issuance in the “Non-Agency” MBS market, i.e., securities neither issued nor guaranteed by Fannie Mae, Freddie Mac, or Ginnie Mae, has been sporadic and has not rebounded to pre-crisis levels. In recent months, however, activity by large financial institutions, such as AIG and Wells Fargo, has indicated a return to the issuance of Non-Agency MBS. What is contributing to the current state of the securitization market for high-quality mortgage loans? Does the recent, limited-scale return to issuance by these institutions signal an increase in private securitization activity in this sector of the securitization market? If so, what is sparking this renewed interest?


The MBS Securitization Market

Three entities – Ginnie Mae, Fannie Mae, and Freddie Mac – have been the dominant engine behind mortgage-backed securities (MBS) issuance since 2007. These entities, two of which remain in federal government conservatorship and the third a federal government corporation, have maintained the flow of capital from investors into guaranteed MBS and ensured that mortgage originators have adequate funds to originate certain types of single-family mortgage loans.

Virtually all mortgage loans backed by federal government insurance or guaranty programs, such as those offered by the Federal Housing Administration and the Department of Veterans Affairs, are issued in Ginnie Mae pools. Mortgage loans that are not eligible for these programs are referred to as “Conventional” mortgage loans. In the current market environment, most Conventional mortgage loans are sold to Fannie Mae and Freddie Mac (i.e. “Conforming” loans) and are securitized in Agency-guaranteed pass-through securities.


The Non-Agency MBS Market

Not all Conventional mortgage loans are eligible for purchase by Fannie Mae or Freddie Mac, however, due to collateral restrictions (i.e., their loan balances are too high or they do not meet certain underwriting requirements). These are referred to as “Non-Conforming” loans and, for most of the past decade, have been held in portfolio at large financial institutions, rather than placed in private, Non-Agency MBS. The Non-Agency MBS market is further divided into sectors for “Qualified Mortgage” (QM) loans, non-QM loans, re-performing loans and nonperforming loans. This post deals with the securitization of QM loans through Non-Agency MBS programs.

Since the crisis, Non-Agency MBS issuance has been the exclusive province of JP Morgan and Redwood Trust, both of which continue to issue a relatively small number of deals each year. The recent entry of AIG into the Non-Agency MBS market, combined with Wells Fargo’s announcement that it intends to begin issuing as well, makes this a good time to discuss why institutions with other funding sources available to them are now moving back to this sector of the securitization market.


Considerations for Issuing QM Loans

Three potential considerations may lead financial institutions to investigate issuing QM Loans through Non-Agency MBS transactions:

  • “All-In” Economics
  • Portfolio Concentration or Limitations
  • Regulatory Pressures

“All-In” Economics

Over the long term, mortgage originators gravitate toward funding sources that provide the lowest cost to borrowers and the greatest profitability for their firms. To improve the “all-in” economics of a Non-Agency MBS transaction, investment banks work closely with issuers to broaden the investor base for each level of the securitization capital structure. Partly due to the success of the Fannie Mae and Freddie Mac Credit Risk Transfer transactions, there appears to be significant interest in higher-yielding mortgage-related securities at the lower-rated (i.e., higher-risk) end of the securitization capital structure. This need for higher-yielding assets has also increased demand for lower-rated securities in the Non-Agency MBS sector.

However, demand from investors at the higher-rated end of the securitization capital structure (i.e., ‘AAA’ and ‘AA’ securities) has not produced “all-in” economics for a Non-Agency MBS transaction that surpass the economics of balance sheet financing provided by portfolios funded with low deposit rates or low debt costs. If deposit rates and debt costs remain at historically low levels, the portfolio funding alternative will remain attractive. Notwithstanding the low interest rate environment, some institutions may develop operational capabilities for Non-Agency MBS programs as a risk-mitigation measure for future periods in which balance sheet financing alternatives may not be as beneficial.


Portfolio Concentration or Limitations

Due to the lack of robust investor demand and unfavorable economics in Non-Agency MBS, many banks have increased their portfolio exposure to both fixed-rate and intermediate-adjustable-rate QM loans. The ability to hold these mortgage loans in portfolio has provided attractive pricing to a key customer demographic and earned an attractive net interest margin during the historically low-rate environment. While bank portfolios have provided an attractive funding source for Non-Agency QM loans, some financial institutions may attempt to develop diversified funding sources in response to regulatory pressure or self-imposed portfolio concentration limits. Selling existing mortgage portfolio assets into the Non-Agency MBS securitization market is one way in which financial institutions might choose to reduce concentrated mortgage risk exposure.


Regulatory Pressure

Some financial institutions may be under pressure from their regulators to demonstrate their ability to sell assets out of their mortgage portfolio as a contingency plan. The Non-Agency MBS market is one way of complying with these sorts of regulatory requests. Building the ability to tap Non-Agency MBS markets as a contingency establishes operational capabilities under less critical circumstances while also revealing how long the institution would need to liquidate such assets through securitization. This early establishment of securitization capabilities is a prudent activity for institutions that foresee securitization as a possible future funding option.

While the Non-Agency MBS market has been dormant for most of the past decade, some financial institutions that have relied upon portfolio funding now appear to be testing its current viability. With the continued issuance by JP Morgan and Redwood Trust and new entrants such as AIG and Wells Fargo, other mortgage originators would be wise to take notice, monitor activity in this market, and assess whether securitization could provide an alternative funding source for their Non-Conforming QM loans and future lending activity.

In our next article on the Non-Agency MBS market, we will review the changes in due diligence practices, loan-level data disclosures, the representation and warranty framework, and the ratings process made by securitization market participants and the impact of these changes on the Non-Agency MBS market segment.


Open Source Governance: Three Potential Risks

For many companies, the question is no longer whether to use open-source tools, but rather how to implement them with the appropriate governance and controls. Have security concerns been accounted for?  How does one effectively institute controls over bad code?  Are there legal implications for using open-source software?

Open Source Security Risks

Open-source software is not inherently more or less prone to malicious code injections than proprietary software. It is true that anyone can push a code enhancement for a new version, and it may be possible for the senior contributors to miss intentional malware. In these circumstances, however, open source has an advantage over proprietary software, summed up in what Eric S. Raymond in 1999 dubbed Linus’s Law: “Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.” It is unlikely that a deliberately introduced security flaw will go unnoticed by the many pairs of eyes on each release. Even so, security issues persist.

Open-Source Security – An Example

Debian, a Unix-like computer operating system, was one of the first to be based on the Linux kernel. Like many systems, it utilizes OpenSSL, a software library that provides an open-source implementation of the Secure Sockets Layer (SSL) protocol, commonly used by applications that require secure communications over a network.

In 2006, a snippet of code was removed from Debian’s OpenSSL package after one of the contributors found that it caused runtime warnings generated by other packages. After the removal, the pseudorandom number generator (PRNG) generated SSL keys using only the process ID (in Linux, a number up to 32,768) to the exclusion of all other random data. Since a relatively small number of values was used, the keys created over a period of almost two years were too predictable to be used securely. Users became aware of the issue 20 months after the bug was introduced, leading to costly security resolutions for companies and individuals who relied on Debian’s OpenSSL implementation.[1]

OpenSSL was again the subject of negative attention when a bug dubbed ‘Heartbleed’ was introduced to the code in 2012 and disclosed to the public in 2014. A fixed version of OpenSSL was released on the same day the issue was announced. More than a month after the release, however, 1.5% of the 800,000 most popular affected websites were still vulnerable to the security bug. [2]

The good news is that such vulnerabilities are documented in the Common Vulnerabilities and Exposures (CVE) system, and they are not especially common. For Python 2.7, the popular version released in 2010, 15 vulnerabilities were recorded from 2010 to 2016, only one of which is considered ‘High’ severity, with a CVSS score of 7.5. jQuery, a JavaScript library that simplifies some components of web application development and the most common open-source component identified in the latest Open Source Security and Risk Analysis (OSSRA) report, has only four known vulnerabilities from 2007 to 2017, none of which ranks higher than ‘Medium.’ The CVE system is just one tool available for improving the security profile of software applications; technologists must remain vigilant and abreast of known issues. Corporate IT governance frameworks should be continuously updated to keep up with the changing structure of the underlying technology itself.

Bad Code

Serious security vulnerabilities may not be a daily occurrence, but bad code can affect software at any time. pandas, a popular open-source software library used in Python implementations for data manipulation and analysis, was first released in 2009. Since then, its contributors have identified over 10,000 issues, 1,933 of which are currently considered unresolved.[3] A company that relies on accurate output from a codebase that uses pandas needs to be vigilant not only in testing the code written by its in-house developers, but also in verifying that all outstanding known pandas issues are covered by workarounds and the rest of the functionality is sound. Developers and testers who are not intimately familiar with the pandas source code must devise creative testing tools to ensure complete integrity of applications that rely on it.

Bad Code – An Example

The Comma-Separated Values (CSV) file is one of many data formats that pandas can load for data manipulation and analysis, in this case using the built-in read_csv function. read_csv accepts a number of optional parameters intended to simplify the data import, one of which is parse_dates. As the name implies, parse_dates tells pandas to parse dates automatically, using a recognition algorithm to determine the format in each date-populated column.
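For illustration, here is a minimal sketch of that usage, assuming a hypothetical loans.csv file with a first_payment_date column (the file and column names are placeholders, not a reference to any particular dataset):

    import pandas as pd

    # Hypothetical loan-level file; the file name and column name are illustrative only.
    # parse_dates asks pandas to infer the date format in the named column.
    loans = pd.read_csv(
        "loans.csv",
        parse_dates=["first_payment_date"],
    )

    # The parsed column should now carry a datetime dtype rather than plain text.
    print(loans.dtypes)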

However, if a row of data contains a blank value where a date is expected, pandas may populate that field with today’s date, a bug first reported against version 0.9 in 2012 [4] (closed three days after it was opened) and reported again in 2014.[5] The issue was not closed until the end of 2016, when one of the contributors noted that the tests passed for version 0.19, stating that he was “not sure when this was fixed, but it doesn’t seem like it occurred recently.” [6]

In the meantime, pandas versions prior to 0.19 may have resulted in incorrect date-related parameters if blank fields were fed to the system. For example, a mortgage-backed security may have had an incorrect calculated weighted average loan age if some of its loans had blank first payment dates, causing these rows to have a loan age of zero.
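The toy sketch below (not the original pandas bug itself, which has long since been fixed) illustrates the kind of defensive check a testing framework might include so that missing dates surface as missing loan ages rather than silently becoming ages of zero. The column names and the simplified loan-age calculation are assumptions made for the example:

    import pandas as pd

    # Illustrative only: a frame in which one first payment date is missing.
    loans = pd.DataFrame({
        "loan_id": [1, 2],
        "first_payment_date": pd.to_datetime(["2015-06-01", None]),
    })

    as_of = pd.Timestamp("2017-06-01")

    # Simplified loan age in whole months since first payment.
    age_months = (
        (as_of.year - loans["first_payment_date"].dt.year) * 12
        + (as_of.month - loans["first_payment_date"].dt.month)
    )

    # Defensive check: missing dates should produce missing ages,
    # not silently become today's date and an age of zero.
    assert loans["first_payment_date"].isna().sum() == age_months.isna().sum()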

In addition to implementing security testing, IT controls must include a clear framework for testing both in-house and open-source components of all applications, especially high-impact programs.

Open-Source Licensing

Finally, it is important to be aware of open-source licensing constraints and to maintain active licensing governance to avoid legal issues in the future. Borrowing from the copyright concept, some open source creators have adopted ‘copyleft’ licensing to ensure that “anyone who redistributes the software, with or without changes, must pass along the freedom to further copy and change it.” [7] This means that, legally, for any software that contains a copylefted open source component, whether it comprises 99% or 0.1% of the application code, the entire source code must be distributed with the software or be made available upon request. This is not an issue when the software is distributed internally among corporate users, but it can become more problematic when the company intends to sell or otherwise provide the software without revealing the internally developed codebase. Not all open-source software is copylefted; in fact, many popular licenses are highly permissive, with very few restrictions. Below is a summary of the four most popular open-source licenses. [8]

Of the four, only the GNU General Public License (GPL, all versions) requires the creators to disclose the source code.  Between 20% and 25% of all open-source software is covered by the GNU GPL.

OSSRA found that 75% of applications contained at least some components under the GPL family of licenses, and that only 45% of those applications complied with the GPL copyleft obligations. In the Financial Services and FinTech industries, 89% of applications contained at least one licensing conflict.

Most open-source software, even that which is licensed under the GNU GPL, can be used commercially. For example, a company can use and internally distribute a financial model written in R, an open-source programming language licensed under the GNU GPL 2.0. However, important legal consequences must be considered if the developed code will be later distributed outside of the company as a proprietary application. If the organization were to sell the R-based model, the entire source code would have to be made available to the paying user, who would also be free to distribute the code, for free or at a price. Alternatively, a model implemented in Python, which is licensed under a Berkeley Software Distribution (BSD)-like agreement, could be distributed without exposing the source code.

Open-Source Governance and Controls

Governance risks are specific to how open-source tools are integrated into existing operations. These risks can stem from a lack of formal training, lack of service and support, violations of third-party intellectual property rights, or instability and incompatibility with existing operating environments. Successful users of open-source code and tools devise effective means of identifying and measuring these risks. They ensure that these risks are included in process risk assessments to facilitate identification and mitigation of potential control weaknesses.

Security vulnerabilities, code issues, and software licensing should not deter developers from using the plethora of useful open-source tools. Open-source issues and bugs are viewed and tested by thousands of capable developers, increasing the likelihood of a speedy resolution. In addition, a company’s own development team has full access to the source code, making it possible to fix issues without relying on anyone else. As with any application, effective governance and controls are essential to a successful open-source implementation; they ensure that software is used securely and appropriately and that a comprehensive testing framework is applied to minimize inaccuracies. The world of open source is changing constantly, and we all just need to keep up.



Advantages and Disadvantages of Open Source Data Modeling Tools

Using open source data modeling tools has been a topic of debate as large organizations, including government agencies and financial institutions, are under increasing pressure to keep up with technological innovation to maintain competitiveness. Organizations must be flexible in development and identify cost-efficient gains to reach their organizational goals, and using the right tools is crucial. Organizations must often choose between open source software, i.e., software whose source code can be modified by anyone, and closed software, i.e., proprietary software with no permissions to alter or distribute the underlying code.

Mature institutions often have employees, systems, and proprietary models entrenched in closed source platforms. For example, SAS Analytics is a popular provider of proprietary data analysis and statistical software for enterprise data operations among financial institutions. But several core computations SAS performs can also be carried out using open source data modeling tools, such as Python and R. The data wrangling and statistical calculations are often fungible and, given the proper resources, will yield the same result across platforms.

Open source is not always a viable replacement for proprietary software, however. Factors such as cost, security, control, and flexibility must all be taken into consideration. The challenge for institutions is picking the right mix of platforms to streamline software development.  This involves weighing benefits and drawbacks.

Advantages of Open Source Programs

The Cost of Open Source Software

The low cost of open source software is an obvious advantage. Compared to the upfront cost of purchasing a proprietary software license, using open source programs seems like a no-brainer. Open source programs can be distributed freely (with some possible restrictions to copyrighted work), resulting in virtually no direct costs. However, indirect costs can be difficult to quantify. Downloading open source programs and installing the necessary packages is easy and adopting this process can expedite development and lower costs. On the other hand, a proprietary software license may bundle setup and maintenance fees for the operational capacity of daily use, the support needed to solve unexpected issues, and a guarantee of full implementation of the promised capabilities. Enterprise applications, while accompanied by a high price tag, provide ongoing and in-depth support of their products. The comparable cost of managing and servicing open source programs that often have no dedicated support is difficult to determine.

Open Source Talent Considerations

Another advantage of open source is that it attracts talent drawn to the idea of sharable, community-driven code. Students and developers outside of large institutions are more likely to have experience with open source applications since access is widespread and easily available. Open source developers are free to experiment and innovate, gain experience, and create value outside of the conventional industry focus. This flexibility naturally produces more broadly skilled, interdisciplinary developers. The chart below from Indeed’s Job Trend Analytics tool reflects strong growth in open source talent, especially Python developers.

From an organizational perspective, the pool of potential applicants with relevant programming experience widens significantly compared to the limited pool of developers with closed source experience. For example, one may be hard-pressed to find a new applicant with development experience in SAS since comparatively few have had the ability to work with the application. Key-person dependencies become increasingly problematic as the talent or knowledge of the proprietary software erodes down to a shrinking handful of developers.

Job Seekers’ Interest via Indeed

*Indeed searches millions of jobs from thousands of job sites. The jobseeker interest graph shows the percentage of jobseekers who have searched for SAS, R, and Python jobs.

Support and Collaboration

The collaborative nature of open source facilitates learning and adapting to new programming languages. While open source programs are usually not accompanied by the extensive documentation and user guides typical of proprietary software, the constant peer review from the contributions of other developers can be more valuable than a user guide. In this regard, adopters of open source may have the talent to learn, experiment with, and become knowledgeable in the software without formal training.

Still, the lack of support can pose a challenge. In some cases, the documentation accompanying open source packages and the paucity of usage examples in forums do not offer a full picture. For example, RiskSpan built a model in R that was driven by the available packages for data infrastructure – a precursor to performing statistical analysis – and their functionality. R has no dedicated support line, and receiving a response from a package’s author is unlikely. This required RiskSpan to thoroughly vet packages before relying on them.

Flexibility and Innovation

Another attractive feature of open source is its inherent flexibility. Python lets users choose among integrated development environments (IDEs) with different characteristics and functions, whereas SAS Analytics provides only SAS EG or Base SAS. R makes possible web-based interfaces for server-based deployments. These capabilities grant more users access at a lower cost, enabling broader firm-wide participation in development. The ability to change the underlying structure of open source software makes it possible to mold it to the organization’s goals and improve efficiency.

Another advantage of open source is the sheer number of developers trying to improve the software by creating functionalities not found in closed source equivalents. For example, R and Python can usually perform many of the same functions available in SAS but also offer capabilities SAS lacks: packages for industry-specific tasks, scraping the internet for data, or web development (Python). These specialized packages are built by programmers seeking to address the inefficiencies of common problems. A proprietary software vendor has neither the expertise nor the incentive to build equivalent specialized packages, since its product aims to be broad enough to suit uses across multiple industries.

RiskSpan uses open source data modeling tools and operating systems for data management, modeling, and enterprise applications. R and Python have proven particularly cost-effective in modeling. R offers an archive of packages devoted to estimating statistical relationships among variables using an array of specialized techniques, which cuts down on development time. Searching for these packages, downloading them, and researching their use costs nearly nothing.

Open source makes it possible for RiskSpan to expand on the tools available in the financial services space. For example, a leading cash flow analytics software firm that offers several proprietary solutions for modeling structured finance transactions did not offer the full functionality RiskSpan needed. To reduce licensing fees and gain flexibility in structuring deals, RiskSpan developed deal cashflow programs in Python for STACR, CAS, CIRT, and other consumer lending deals. The flexibility of Python allowed us to choose our own formatted cashflows and build different functionalities into the software. Python, unlike closed source applications, allowed us to focus on innovating ways to interact with the cash flow waterfall.

Disadvantages of Open Source Programs

Deploying open source solutions also carries intrinsic challenges. While users may have a conceptual understanding of the task at hand, knowing which tools yield correct results, whether derived from open or closed source, is another dimension to consider. Different parameters may be set as default, new limitations may arise during development, or code structures may be entirely different. Different challenges may arise from translating a closed source program to an open source platform. Introducing open source requires new controls, requirements, and development methods.

Redundant code is an issue that can arise if a firm does not use open source strategically. Across different departments, functionally equivalent tools may be derived from distinct packages or code libraries. There are several packages offering the ability to run a linear regression, for example, but nuanced differences in their initial setup or syntax can propagate problems down the line. In addition to redundant code, users must be wary of “forking,” where the development community splits on an open source application. The R community, for example, maintains multiple packages that perform the same tasks and calculations, sometimes derived from the same code base, and users must confirm that the package they choose has not been abandoned by its developers.
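As a concrete illustration of how defaults can differ across functionally equivalent tools, the sketch below compares two common Python regression interfaces (used here as a stand-in for the same phenomenon among R packages): scikit-learn fits an intercept by default, while statsmodels requires the user to add one explicitly, so an analyst who overlooks the difference gets materially different coefficients.

    import numpy as np
    import statsmodels.api as sm
    from sklearn.linear_model import LinearRegression

    # Simulated data with a known slope of 2 and intercept of 5.
    rng = np.random.default_rng(0)
    x = rng.normal(size=100)
    y = 2.0 * x + 5.0 + rng.normal(size=100)

    # scikit-learn fits an intercept by default.
    sk_fit = LinearRegression().fit(x.reshape(-1, 1), y)

    # statsmodels OLS does NOT add an intercept unless told to.
    ols_no_const = sm.OLS(y, x).fit()                     # slope only, biased
    ols_with_const = sm.OLS(y, sm.add_constant(x)).fit()  # intercept + slope

    print(sk_fit.intercept_, sk_fit.coef_)
    print(ols_no_const.params)
    print(ols_with_const.params)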

Users must also take care to track the changes and evolution of open source programs. The core calculations of commonly used functions, or of those specific to regular tasks, can change. Maintaining a working understanding of these functions in the face of continual modification is crucial to ensuring consistent output. Open source documentation is frequently lacking, which can be problematic in financial services when seeking to demonstrate a clear audit trail for regulators. Verifying that the right function is being sourced from a specific package or repository of authored functions, rather than from another function that may have an identical name, requires placing controls on how freely these functions are used within code. Proprietary software, on the other hand, provides a static set of tools, which allows analysts to more easily determine how legacy code has worked over time.

Using Open Source Data Modeling Tools

Deciding on whether to go with open source programs directly impacts financial services firms as they compete to deliver applications to the market. Open source data modeling tools are attractive because of their natural tendency to spur innovation, ingrain adaptability, and propagate flexibility throughout a firm. But proprietary software solutions are also attractive because they provide the support and hard-line uses that may neatly fit within an organization’s goals. The considerations offered here should be weighed appropriately when deciding between open source and proprietary data modeling tools.

Questions to consider before switching platforms include:

  • How does one quantify the management and service costs for using open source programs? Who would work on servicing it, and, once all-in expenses are considered, is it still more cost-effective than a vendor solution?
  • When might it be prudent to move away from proprietary software? In a scenario where moving to a newer open source technology appears to yield significant efficiency gains, when would it make sense to end terms with a vendor?
  • Does the institution have the resources to institute new controls, requirements, and development methods when introducing open source applications?
  • Does the open source application or function have the necessary documentation required for regulatory and audit purposes?

Open source is certainly on the rise as more professionals enter the space with the necessary technical skills and a new perspective on the goals financial institutions want to pursue. As competitive pressures mount, financial institutions are faced with a difficult yet critical decision of whether open source is appropriate for them. Open source may not be a viable solution for everyone—the considerations discussed above may block the adoption of open source for some organizations. However, often the pros outweigh the cons, and there are strategic precautions that can be taken to mitigate any potential risks.


References

 https://www.redhat.com/en/open-source/open-source-way

http://www.stackoverflow.blog/code-for-a-living/how-i-open-sourced-my-way-to-my-dream-job-mohamed-said

https://www.redhat.com/f/pdf/whitepapers/WHITEpapr2.pdf

http://www.forbes.com/sites/benkepes/2013/10/02/open-source-is-good-and-all-but-proprietary-is-still-winning/#7d4d544059e9

https://www.indeed.com/jobtrends/q-SAS-q-R-q-python.html


Open Source Software for Mortgage Data Analysis

While open source has been around for decades, using open source software for mortgage data analysis is a recent trend. Financial institutions have traditionally been slow to adopt the latest data and technology innovations due to the strict regulatory and risk-averse nature of the industry, and open source has been no exception. As open source becomes more mainstream, however, many of our clients have come to us with questions regarding its viability within the mortgage industry.

The short answer is simple: open source has a lot of potential for the financial services and mortgage industries, particularly for data modeling and data analysis. Within our own organization, we frequently use open source data modeling tools for our proprietary models as well as models built for clients. While a degree of risk is inherent, prudent steps can be taken to mitigate it and still profit from the many worthwhile benefits of open source.


To address the common concerns that arise with open source, we’ll be publishing a series of blog posts aimed at alleviating these concerns and providing guidelines for utilizing open source software for data analysis within your organization. Some of the questions we’ll address include:

  • Can open source programming languages be applied to mortgage data modeling and data analysis?
  • What risks does open source expose me to and what can I do to mitigate them?
  • What are the pitfalls of open source and do the benefits outweigh them?
  • How does using open source software for mortgage data analysis affect the control and governance of my models?
  • What factors do I need to consider when deciding whether to use open source at my institution?

Throughout the series, we’ll also include examples of how RiskSpan has used open source software for mortgage data analysis, why we chose to use it, and what factors were considered. Before we dive in on the considerations for open source, we thought it would be helpful to offer an introduction to open source and provide some context around its birth and development within the financial industry.

What Is Open Source Software?

Software has conventionally been considered open source when the original code is made publicly available so that anyone can edit, enhance, or modify it freely. This original concept has recently been expanded to incorporate a larger movement built on values of collaboration, transparency, and community.

Open Source Software Vs Proprietary Software

Proprietary software refers to applications for which the source code is only accessible to those who created it. Thus, only the original author(s) has control over any updates or modifications. Outside players are barred from even viewing the code to protect the owners from copying and theft. To use proprietary software, users agree to a licensing agreement and typically pay a fee. The agreement legally binds the user to the owners’ terms and prevents the user from any actions the owners have not expressly permitted.

Open source software, on the other hand, gives any user free rein to view, copy, or modify it. The idea is to foster a community built on collaboration, allowing users to learn from each other and build on each other’s work. As with proprietary software, open source users must still agree to a licensing agreement, but the terms differ significantly from those of a proprietary license.1

History of Open Source Software

The idea of open source software first developed in the 1950s, when much of software development was done by computer scientists in higher education. In line with the value of sharing knowledge among academics, source code was openly accessible. By the 1960s, however, as the cost of software development increased, hardware companies were charging additional fees for software that used to be bundled with their products.

Change came again in the 1980s. At this point, it was clear that technology and software were important factors of the growing business economy. Technology leaders were frustrated with the increasing costs of software. In 1984, Richard Stallman launched the GNU Project with the purpose of creating a complete computer operating system with no limitations on the use of its source code. In 1991, the operating system now referred to as Linux was released.

The final tipping point came in 1997, when Eric Raymond published his essay, The Cathedral and the Bazaar, in which he articulated the underlying principles behind open source software. The essay was a driving factor in Netscape’s decision to release its source code to the public, inspired by the idea that allowing more people to find and fix bugs would improve the system for everyone. Following Netscape’s release, the term “open source software” was introduced in 1998.

In the data-driven economy of the past two decades, open source has played an ever-increasing role. The field of software development has evolved to embrace the values of open source. Open source has made it not only possible but easy for anyone to access and manipulate source code, improving our ability to create and share valuable software.2

Adoption of Open Source Software in Business

The growing relevance of open source software has also changed the way large organizations approach their software solutions. While open source software was at one point rare in an enterprise’s system, it’s now the norm. A survey conducted by Black Duck Software revealed that fewer than 3% of companies don’t rely on open source at all. Even the most conservative organizations are hopping on board the open source trend.3

In a blog post from June 2016, TechCrunch writes:

“Open software has already rooted itself deep within today’s Fortune 500, with many contributing back to the projects they adopt. We’re not just talking stalwarts like Google and Facebook; big companies like Walmart, GE, Merck, Goldman Sachs — even the federal government — are fleeing the safety of established tech vendors for the promises of greater control and capability with open software. These are real customers with real budgets demanding a new model of software.”4

The expected benefits of open source software are attracting all types of institutions, from small businesses, to technology giants, to governments. This shift away from proprietary software in favor of open source is streamlining business operations. As more companies make the switch, those that don’t will fall behind the times and likely be at a serious competitive disadvantage.

Open Source Software for Mortgage Data Analysis

Open source software is slowly finding its way into the financial services industry as well. We’ve observed that smaller entities that don’t have the budgets to buy expensive proprietary software have been turning to open source as a viable substitute. Smaller companies are either building software in house or turning to companies like RiskSpan to achieve a cost-effective solution. On the other hand, bigger companies with the resources to spare are also dabbling in open source. These companies have the technical expertise in house and give their skilled workers the freedom to experiment with open source software.

Within our own work, we see tremendous potential for open source software for mortgage data analysis. Open source data modeling tools like Python, R, and Julia are useful for analyzing mortgage loan and securitization data and identifying historical trends. We’ve used R to build models for our clients and we’re not the only ones: several of our clients are now building their DFAST challenger models using R.
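As a simple, hypothetical sketch of what such trend analysis might look like, the snippet below loads an assumed loan-level extract and computes the average note rate by vintage year; the file name and column names are placeholders rather than a reference to any particular dataset or client model:

    import pandas as pd

    # Hypothetical loan-level extract; file and column names are illustrative.
    loans = pd.read_csv("loan_performance.csv", parse_dates=["origination_date"])

    # A simple historical trend: average original note rate by vintage year.
    loans["vintage"] = loans["origination_date"].dt.year
    trend = loans.groupby("vintage")["original_rate"].mean()

    print(trend)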

Open source has grown enough in the past few years that more and more financial institutions will make the switch. While the risks associated with open source software will continue to give some organizations pause, the benefits of open source will soon outweigh those concerns. It seems open source is a trend that is here to stay, and luckily, it is a trend ripe with opportunity.


[1] https://opensource.com/resources/what-open-source

[2] https://www.longsight.com/learning-center/history-open-source

[3] https://techcrunch.com/2016/06/19/the-next-wave-in-software-is-open-adoption-software/

[4] https://techcrunch.com/2016/06/19/the-next-wave-in-software-is-open-adoption-software/


Balancing Internal and External Model Validation Resources

The question of “build versus buy” is every bit as applicable and challenging to model validation departments as it is to other areas of a financial institution. With no “one-size-fits-all” solution, banks are frequently faced with a balancing act between the use of internal and external model validation resources. This article is a guide for deciding between staffing a fully independent internal model validation department, outsourcing the entire operation, or a combination of the two.

Striking the appropriate balance is a function of at least five factors:

  1. control and independence
  2. hiring constraints
  3. cost
  4. financial risk
  5. external (regulatory, market, and other) considerations

Control and Independence

Internal validations bring a measure of control to the operation. Institutions understand the specific skill sets of their internal validation team beyond their resumes and can select the proper team for the needs of each model. Control also extends to the final report, its contents, and how findings are described and rated.

Further, the outcome and quality of internal validations may be more reliable. Because a bank must present and defend validation work to its regulators, low-quality work submitted by an external validator may need to be redone by yet another external validator, often on short notice, in order to bring the initial external model validation up to spec.

Elements of control, however, must sometimes be sacrificed in order to achieve independence. Institutions must be able to prove that the validator’s interests are independent from the model validation outcomes. While larger banks frequently have large, freestanding internal model validation departments whose organizational independence is clear and distinct, quantitative experts at smaller institutions must often wear multiple hats by necessity.

Ultimately the question of balancing control and independence can only be suitably addressed by determining whether internal personnel qualified to perform model validations are capable of operating without any stake in the outcome (and persuading examiners that this is, in fact, the case).

Hiring Constraints

Practically speaking, hiring constraints represent a major consideration. Hiring limitations may result from budgetary or other less obvious factors. Organizational limits aside, it is not always possible to hire employees with a needed skill set at a workable salary range at the time when they are needed. For smaller banks with limited bandwidth or larger banks that need to further expand, external model validation resources may be sought out of sheer necessity.

Cost

Cost is an important factor that can be tricky to quantify. Model validators tend to be highly specialized; many typically work on one type of model, for example, Basel models. If your bank is large enough and has enough Basel models to keep a Basel model validator busy with internal model validations all year, then it may be cost effective to have a Basel model validator on staff. But if your Basel model validator is only busy for six months of the year, then a full-time Basel validator is only efficient if you have other projects that are suited to that validator’s experience and cost. To complicate things further, an employee’s cost is typically housed in one department, making it difficult from a budget perspective to balance an employee’s time and cost across departments.

If we were building a cost model to determine how many internal validators we should hire, the input variables would include:

  1. the number of models in our inventory
  2. the skills required to validate each model
  3. the risk classification of each model (i.e., how often validations must be completed)
  4. the average fully loaded salary expense for a model validator with those specific skills

Only by comparing the cost of external validations to the year-round costs associated with hiring personnel with the specialized knowledge required to validate a given type of model (e.g., credit models, market risk models, operational risk models, ALM models, Basel models, and BSA/AML models) can a bank arrive at a true apples-to-apples comparison.
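The sketch below shows, using entirely hypothetical figures, how these inputs might be combined into a rough staffing comparison. It is a back-of-the-envelope illustration, not a substitute for an institution’s own cost model:

    # Illustrative only: all figures and model categories are hypothetical.
    inventory = {
        # model type: (validations required per year, hours per validation)
        "credit": (6, 300),
        "market_risk": (4, 250),
        "basel": (2, 400),
    }
    hours_per_validator = 1800             # assumed productive hours per year
    loaded_salary = 180_000                # assumed fully loaded annual cost
    external_cost_per_validation = 60_000  # assumed vendor quote

    for model_type, (count, hours) in inventory.items():
        demand_hours = count * hours
        internal_fte = demand_hours / hours_per_validator
        internal_cost = internal_fte * loaded_salary
        external_cost = count * external_cost_per_validation
        print(f"{model_type}: {internal_fte:.2f} FTE needed, "
              f"internal ~${internal_cost:,.0f} vs external ~${external_cost:,.0f}")

A fractional FTE result, as in this toy example, is precisely the situation in which a full-time specialist may sit idle for part of the year unless other suitable projects exist.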

Financial Risk

While cost is the upfront expense of internal or external model validations, financial risk accounts for the probability of unforeseen circumstances. Assume that your bank is staffed with a team of internal validators that can handle the existing schedule of model validations (projects spaced evenly throughout the year). Operations may nevertheless need to deploy a new model, or a new version of an existing model, on a schedule that requires a validation at a previously unscheduled time with no flexibility. In this case, your bank may need to pay for an external validation in addition to managing and paying a fully staffed team of internal validators.

A cost model for determining whether to hire additional internal validators should include a factor for the probability that models will need to be validated off-schedule, resulting in unforeseen external validation costs. On the other hand, a cost model might also consider the probability that an external validator’s product will be inferior and incur costs associated with required remediation.

External Risks

External risks are typically financial risks caused by regulatory, market, and other factors outside an institution’s direct control. The risk of a changing regulatory environment under a new presidential administration is always real and uncertainty clearly abounds as market participants (and others) attempt to predict President Trump’s priorities. Changes may include exemptions for regional banks from certain Dodd-Frank requirements; the administration has clearly signaled its intent to loosen regulations generally. Even though model validation will always be a best practice, these possibilities may influence a bank’s decision to staff an internal model validation team.

Recent regulatory trends can also influence validator hiring decisions. For example, our work with various banks over the past 12-18 months has revealed that regulators are trending toward requiring larger sample sizes for benchmarking and back-testing. Given the significant effort already associated with these activities, larger sample sizes could ultimately lower the number of model validations internal resources can complete per year. Funding external validations may become more expensive, as well.

Another industry trend is the growing acceptance of limited-scope validations. If only minimal model changes have occurred since a prior validation, the scope of a scheduled validation may be limited to the impact of these changes. If remediation activities were required by a prior validation, the scope may be limited to confirming that these changes were effectively implemented. This seemingly common-sense approach to model validations by regulators is a welcome trend and could reduce the number of internal and external validations required.

Joint Validations

In addition to reduced-scope validations, some of our clients have sought to reduce costs by combining internal and external resources. This enables institutions to limit hiring to validators without model-specific or highly quantitative skills. Such internal validators can typically validate a large number of lower-risk, less technical models independently.

For higher-risk, more technical models, such as ALM models, the internal validator may review the controls and documentation sufficiently, leaving the more technical portions of the validation—conceptual soundness, process verification, benchmarking, and back-testing, for example—to an external validator with specific expertise. In these cases, reports are produced jointly with internal and external validators each contributing the sections pertaining to procedures that they performed.

The resulting report often has the dual benefit of being more economical than a report generated externally and more defensible than one that relies solely on internal resources who may lack the specific domain knowledge necessary.

Conclusion

Model risk managers have limited time, resources, and budgets and face unending pressure from management and regulators. Striking an efficient resource-balancing strategy is critically important to consistently producing high-quality model validation reports on schedule and within budgets. The question of using internal vs. external model validation resources is not an either/or proposition. In considering it, we recommend that model risk management (MRM) professionals

  • consider the points above and initiate risk tolerance and budget conversations within the MRM framework.
  • reach out to vendors who have the skills to assist with your high-risk models, even if there is not an immediate need. Some institutions like to try out a model validation provider on a few low- or moderate-risk models to get a sense of their capabilities.
  • consider internal staffing to meet basic model validation needs and external vendors (either for full validations or outsourced staff) to fill gaps in expertise.

Credit Risk Transfer: Front End Execution – Why Does It Matter?

This article was originally published on the GoRion blog.

Last month I described an overview of the activities of Credit Risk Transfer (CRT) as outlined in the Federal Housing Finance Agency (FHFA) guidance to Fannie Mae and Freddie Mac (the GSEs). This three-year-old program has shown great promise and success in creating a deeper residential credit investor segment and has enabled risk increments to be shifted from the GSEs and taxpayers to the private sector.

The FHFA issued an RFI to solicit feedback from stakeholders on proposals from the GSEs to adopt additional front-end credit risk transfer structures and to consider additional credit risk transfer policy issues. There is firm interest in this new and growing execution for risk transfer by investors who have confidence in the underwriting and servicing of mortgage loans through new and improved GSE standards.

In addition to the back-end industry appetite for CRT, there is also a growing focus on increasing risk sharing at the front end of the origination transaction. In particular, the mortgage industry and mortgage insurers (MIs) are interested in exploring risk sharing more actively on the front end of the mortgage process. By participating in this new and growing market opportunity, the MIs would deepen their traditional coverage well beyond the standard 30% level.


Front-End Credit Risk Transfer

In 2016 FHFA expanded the GSE scorecards to include broadening the types of loans and risk transfer which included expanding to the front-end CRT. In addition to many prescriptive outlines on CRT, they also included wording such as “…Work with FHFA to conduct an analysis and assessment of front-end credit risk transfer transactions, including work to support a forthcoming FHFA Request for Input. Work with FHFA to engage key stakeholders and solicit their feedback. After conducting the necessary analysis and assessment, work with FHFA to take appropriate steps to continue front-end credit risk transfer transactions.”

Two additional ways to share risk on the front end are 1) recourse transactions and 2) deeper mortgage insurance.


Recourse Transaction

Recourse as a form of credit enhancement is not a new concept. In years past, some institutions would sell loans with recourse to the GSEs, but this was usually determined to be capital intensive and not an efficient way of selling loans into the secondary markets. However, some non-depositories have found recourse to be an attractive way to sell loans to the GSEs.

From 2013 through December 2015, the GSEs executed 12 deals with recourse on $12.6 billion in UPB. The pricing and structures vary widely, and the transactions are not transparent. While recourse can be attractive to both parties if structured adequately, the transactions are not as scalable, and each deal requires significant review and assessment. Arguments against recourse note that it diminishes opportunities for small- to medium-sized players who would like to participate in this new form of reduced g-fee structure and front-end CRT transaction.

PennyMac shared its perspective on this activity at a recent CRT conference. It uses the recourse structure with Fannie Mae, which leverages its capital structure and allows flexibility. Importantly, PennyMac reminds us that both parties’ interests are aligned, as there is skin in the game for quality originations.


Deeper Mortgage Insurance

The GSE model carries a significant amount of counter-party risk with MIs through their standard business offerings. Their charters require credit enhancement on loans with loan-to-value (LTV) ratios above 80%. This traditionally plays out as 30% first-loss coverage on such loans; for example, a 95% LTV loan is insured down to 65%. The mortgage insurers are integral to most of the GSEs’ higher-LTV books of business. Per the RFI, as of December 2015, the MI industry collectively has counter-party exposure of $185.5 billion, covering $724.5 billion of loans. As a general course of business, then, the MIs already share the risk of higher-LTV lending without taking on any additional exposure.

Through the crisis, the MIs were unable to pay initial claims dollar for dollar. This has caused hesitation about embracing a more robust model with more counter-party risk than the model of today. It is well documented that the MIs did pay a great deal of claims and buffered the GSEs by absorbing the first loss on billions of dollars before the GSEs incurred any losses. While much of that has since been paid back, memories are long, and this history has given pause as to how to value the insurance, which is priced differently than the back-end transactions. Today the MI industry is in much better shape through capital raises and increased standards directed by the GSEs and state regulators. (Our recent blog post on mortgage insurance haircuts explores this phenomenon in greater detail.)

FHFA instituted the PMIERs, which require higher capital for the MI business transacted with the GSEs. The state regulators also increased regulatory capital requirements for the residential insurance sector, and today the industry has strengthened its hand as a partner to the GSEs. In fact, the industry has new entrants that do not carry legacy books of losses, which adds new opportunities for the GSEs to expand their counter-party pools.

The MI companies can serve as a front-end model and play a more significant role in the risk-share business by providing deeper MI on the front end (up to 50% coverage) as a way of de-levering the GSEs and, ultimately, the taxpayers. And, like the GSEs, MIs may also participate in reinsurance markets to shed risk and balance out their own portfolios. Other market participants may also take part in this type of transaction, and we will observe what opportunities avail themselves over the longer term. While nothing is ever black and white, there appear to be benefits to expanding risk-share efforts to the front end of the business.


Benefits

1) Strong execution: Pricing and executing on mortgage risk at the front end of origination will allow for options in a counter-cyclical, volatile market.

2) Transparency: Moving risk metrics and pricing to the front-end will drive more front-end price transparency for mortgage credit risk.

3) Inclusive institutional partnering: Smaller entities may participate in a front-end risk share effort thereby creating opportunities outside of the largest financial institutions.

4) Inclusive borrower process: Front-end CRT may reach more borrowers and provide options as more institutions can take part of this opportunity.

5) Expands options for CRT in pilot phase: By driving the risk share to the front end, the GSEs reach their goals in de-risking their credit guarantee while providing a timely trade-off of g-fee and MI pricing on the front end of the transaction.

As part of the RFI response, the trade representing the MIs summarized principal benefits of front-end CRT as follows:

  1. Increased CRT availability and market stability
  2. Reduced first-loss holding risk
  3. Beneficial stakeholder familiarity and equitable access
  4. Increased transparency.

The full letter may be found at usmi.org.

In summary, whether it is recourse to a lending institution or participation in the front-end MI cost structure, pricing this risk at origination will continue to bring forward price discovery and transparency. This means the consumer and lender will be closer to the true credit costs of origination. With experience pricing and executing on CRT, it may become clearer where the differential cost of credit lies. The additional impact of driving more front-end CRT will be scalability and less process on the back-end for the GSEs. By leveraging the front-end model, GSEs will reach more borrowers and utilize a wider array of lending partners through this process.

On November 8, we experienced a historic election which may take us in new directions. However, credit risk transfer is an option that may be used in the future regardless of GSE status, whether they 1) revert to the old model with recap and release; 2) re-emerge after housing reform legislation; or 3) remain in conservatorship and continue to be led by FHFA down this path.

Footnote: All data was retrieved from the Federal Housing Finance Agency (FHFA) Single-Family Credit Risk Transfer Request for Input, June 2016. More information may be found at FHFA.gov.

This is the second installment in a monthly Credit Risk Transfer (CRT) series on the GoRion blog. CRT is a significant accomplishment in bringing back private capital to the housing sector. This young effort, three years strong, has already shown promising investor appetite while discussions are underway to expand offerings to front-end risk share executions. My goal in this series is to share insights around CRT as it evolves with the private sector.


Validating Model Inputs: How Much Is Enough?

In some respects, the OCC 2011-12/SR 11-7 mandate to verify model inputs could not be any more straightforward: “Process verification … includes verifying that internal and external data inputs continue to be accurate, complete, consistent with model purpose and design, and of the highest quality available.” From a logical perspective, this requirement is unambiguous and non-controversial. After all, the reliability of a model’s outputs cannot be any better than the quality of its inputs.

From a functional perspective, however, it raises practical questions around the amount of work that needs to be done in order to consider a particular input “verified.” Take the example of a Housing Price Index (HPI) input assumption. It could be that the modeler obtains the HPI assumption from the bank’s finance department, which purchases it from an analytics firm. What is the model validator’s responsibility? Is it sufficient to verify that the HPI input matches the data of the finance department that supplied it? If not, is it enough to verify that the finance department’s HPI data matches the data provided by its analytics vendor? If not, is it necessary to validate the analytics firm’s model for generating HPI assumptions?

It depends. Just as model risk increases with greater model complexity, higher uncertainty about inputs and assumptions, broader use, and larger potential impact, input risk increases with increases in input complexity and uncertainty. The risk of any specific input also rises as model outputs become increasingly sensitive to it.

Validating Model Inputs Best Practices

So how much validation of model inputs is enough? As with the management of other risks, the level of validation or control should be dictated by the magnitude or impact of the risk. Like so much else in model validation, no ‘one size fits all’ approach applies to determining the appropriate level of validation of model inputs and assumptions. In addition to cost/benefit considerations, model validators should consider at least four factors for mitigating the risk of input and assumption errors leading to inaccurate outputs.

  • Complexity of inputs
  • Manual manipulation of inputs from source system prior to input into model
  • Reliability of source system
  • Relative importance of the input to the model’s outputs (i.e., sensitivity)

Consideration 1: Complexity of Inputs

The greater the complexity of the model’s inputs and assumptions, the greater the risk of errors. For example, complex yield curves with multiple data points will be inherently subject to greater risk of inaccuracy than binary inputs such as “yes” and “no.” In general, the more complex an input is, the more scrutiny it requires and the “further back” a validator should look to verify its origin and reasonability.
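
To make this concrete, the hedged sketch below shows the kind of completeness and reasonability checks a validator might run on a complex input such as a yield curve. The expected tenor set, plausible rate band, and data layout are illustrative assumptions, not requirements drawn from any guidance.

```python
# Sketch: basic sanity checks on a yield curve input before it is fed to a model.
# The tenor-to-rate layout, expected tenor set, and plausible rate band are illustrative assumptions.

def validate_yield_curve(curve: dict[float, float]) -> list[str]:
    """Return a list of findings; an empty list means the curve passed these basic checks."""
    findings = []
    if not curve:
        return ["Curve is empty."]
    tenors = sorted(curve)
    # Completeness: flag missing standard tenors (illustrative set, in years).
    expected = {0.25, 0.5, 1, 2, 5, 10, 30}
    missing = expected - set(tenors)
    if missing:
        findings.append(f"Missing tenors: {sorted(missing)}")
    # Reasonability: rates outside a broad plausible band warrant investigation.
    for t in tenors:
        rate = curve[t]
        if not -0.02 <= rate <= 0.25:
            findings.append(f"Rate {rate:.4f} at tenor {t} is outside the plausible band.")
    return findings

# Example usage with a hypothetical curve that is missing the 30-year point.
curve = {0.25: 0.051, 0.5: 0.050, 1: 0.048, 2: 0.045, 5: 0.042, 10: 0.043}
for finding in validate_yield_curve(curve):
    print(finding)
```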

Consideration 2: Manual Manipulation of Inputs from Source System Prior to Input into Model

Input data often requires modification from the source system before it can be fed into the model, and more handling and manual modification increase the likelihood of error. For example, if a position input is manually copied from Bloomberg and then manually reformatted for upload to the model, the likelihood of error is greater than if the position is extracted automatically via an API. The accuracy of the input should be verified in either case, but the more manual handling and manipulation the data undergoes, the more comprehensive the testing should be. In this example, more comprehensive testing would likely take the form of a larger sample size.

In addition, the controls over the processes to extract, transform, and load data from a source system into the model will impact the risk of error. More mature and effective controls, including automation and reconciliation, will decrease the likelihood of error and therefore likely require a lighter verification procedure.
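
As an illustration, the following sketch reconciles a random sample of model inputs back to a source extract. The file names, column names (`position_id`, `market_value`), sample size, and tolerance are hypothetical placeholders; in practice they would reflect the institution’s own systems and materiality thresholds.

```python
# Sketch: sample-based reconciliation of model inputs against a source extract.
# File names, column names, sample size, and tolerance are illustrative assumptions.
import pandas as pd

def reconcile_inputs(source_path: str, model_input_path: str,
                     key: str = "position_id", field: str = "market_value",
                     sample_size: int = 50, tolerance: float = 0.01) -> pd.DataFrame:
    """Compare a random sample of model inputs to the source extract and return any breaks."""
    source = pd.read_csv(source_path).set_index(key)
    model_in = pd.read_csv(model_input_path).set_index(key)
    sample = model_in.sample(n=min(sample_size, len(model_in)), random_state=0)
    merged = sample[[field]].join(source[[field]], lsuffix="_model", rsuffix="_source", how="left")
    merged["abs_diff"] = (merged[f"{field}_model"] - merged[f"{field}_source"]).abs()
    # A break is either a value difference beyond tolerance or a position missing from the source.
    return merged[merged["abs_diff"].isna() | (merged["abs_diff"] > tolerance)]

# Example usage (paths are placeholders):
# breaks = reconcile_inputs("source_extract.csv", "model_input.csv")
# print(breaks)  # rows where the model input does not tie back to the source
```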

Consideration 3: Reliability of Source Systems

More mature and stable source systems generally produce more consistently reliable results. Conversely, newer systems and those that have previously produced erroneous results increase the risk of error. The results of prior input validations, whether from earlier model validations or from third parties such as internal audit and compliance, can serve as an indicator of the reliability of information from source systems and the magnitude of input risk. The greater the number of issues identified, the greater the risk, and the more likely it is that the validator should drill deeper into the underlying sources of the data.

Consideration 4: Output Sensitivity to Inputs

Regardless of how reliable an input’s source system is deemed to be, or how much manual manipulation the input undergoes, perhaps the most important consideration is the input’s power to affect the model’s outputs. Returning to our original example, if a 50 percent change in the HPI assumption has only a negligible impact on the model’s outputs, then a quick verification against the report supplied by the finance department may be sufficient. If, however, the model’s outputs are extremely sensitive to even small shifts in the HPI assumption, then additional testing is likely warranted, perhaps even including a validation of the analytics vendor’s HPI model (along with all of its inputs).
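
A minimal sensitivity sketch along these lines appears below. The `run_model` wrapper and the shock sizes are assumptions for illustration; the point is simply to quantify how much the output moves when the HPI assumption is shocked.

```python
# Sketch: measuring how sensitive a model's output is to its HPI assumption.
# `run_model` is a hypothetical wrapper around the model under validation; shock sizes are illustrative.

def hpi_sensitivity(run_model, base_hpi: float, shocks=(-0.10, -0.05, 0.05, 0.10)) -> dict:
    """Return the fractional change in output for each fractional shock to the HPI input."""
    base_output = run_model(hpi=base_hpi)
    results = {}
    for shock in shocks:
        shocked_output = run_model(hpi=base_hpi * (1 + shock))
        results[shock] = (shocked_output - base_output) / base_output
    return results

# Toy stand-in for the real model: expected loss falls as HPI rises.
toy_model = lambda hpi: 1_000_000 / hpi
print(hpi_sensitivity(toy_model, base_hpi=200.0))
# Small output changes for large shocks suggest a light verification of the HPI feed may suffice;
# large changes argue for tracing the assumption back toward the analytics vendor's model.
```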

A Cost-Effective Model Input Validation Strategy

When it comes to verifying model inputs, there is no theoretical limitation to the lengths to which a model validator can go. Model risk managers, who do not have unlimited time or budgets, would benefit from applying practical limits to validation procedures using a risk-based approach to determine the most cost-effective strategies to ensure that models are sufficiently validated. Applying the considerations listed above on a case-by-case basis will help validators appropriately define and scope model input reviews in a manner commensurate with appropriate risk management principles.
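
One way to operationalize this, sketched below, is a simple scoring heuristic that maps the four considerations to an indicative level of verification effort. The scores and cutoffs are illustrative only and would need to be calibrated to an institution’s own risk appetite.

```python
# Sketch: a simple risk-based heuristic for scoping input verification, assuming each of the
# four considerations is scored 1 (low risk) to 3 (high risk). Scores and cutoffs are illustrative.

def input_review_scope(complexity: int, manual_handling: int,
                       source_reliability_risk: int, output_sensitivity: int) -> str:
    """Map the four input-risk considerations to an illustrative level of verification effort."""
    score = complexity + manual_handling + source_reliability_risk + output_sensitivity
    if score <= 6:
        return "Light: tie a small sample of inputs back to the immediate source report."
    if score <= 9:
        return "Moderate: expand sampling and reconcile against the upstream source system."
    return "Deep: trace inputs to their origin, including review of the vendor model that produces them."

# Example: complex input, little manual handling, stable source, highly sensitive output.
print(input_review_scope(complexity=3, manual_handling=1,
                         source_reliability_risk=1, output_sensitivity=3))
```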


Performance Testing: Benchmarking Vs. Back-Testing

When someone asks you what a model validation is, what is the first thing you think of? If you are like most, you immediately think of performance metrics: those quantitative indicators that tell you not only whether the model is working as intended, but also how accurately it performs over time and relative to other models. Performance testing is the core of any model validation and generally consists of the following components:

  • Benchmarking
  • Back-testing
  • Sensitivity Analysis
  • Stress Testing

Sensitivity analysis and stress testing, while critical to any model validation’s performance testing, will be covered by a future article. This post will focus on the relative virtues of benchmarking versus back-testing—seeking to define what each is, when and how each should be used, and how to make the best use of the results of each.

Benchmarking

Benchmarking compares the model being validated to some other model or metric. The type of benchmark will vary, as all model validation performance testing does, with the nature, use, and type of model being validated. Because of the performance information it provides, benchmarking should be used in some form whenever a suitable benchmark can be found.

Choosing a Benchmark

Choosing what kind of benchmark to use within a model validation can sometimes be a very daunting task. Like all testing within a model validation, the kind of benchmark to use depends on the type of model being tested. Benchmarking takes many forms and may entail comparing the model’s outputs to:

  • The model’s previous version
  • An externally produced model
  • A model built by the validator
  • Other models and methodologies considered by the model developers, but not chosen
  • Industry best practice
  • Thresholds and expectations of the model’s performance

One of the most common benchmarking approaches is to compare a new model’s outputs to those of the version it is replacing. Models are routinely replaced throughout the industry because of deteriorating performance, a change in risk appetite, new regulatory guidance, the need to capture new variables, or the availability of new data. In these cases, it is important not only to document but also to demonstrate that the new model performs better and does not share the issues that triggered the old model’s replacement.

Another common benchmarking approach compares the model’s outputs to those of an external “challenger” model (or one built by the validator) that serves the same objective and uses the same data. Because the challenger is developed and updated with the same data as the champion model, this approach typically yields more apt comparisons than benchmarking against older model versions, which are likely to be out of date.
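
A minimal sketch of such a champion-versus-challenger comparison follows. It assumes both models produce probability-of-default estimates for the same loans; the summary statistics shown are common choices, not a prescribed set.

```python
# Sketch: benchmarking a champion model against a challenger built on the same data.
# Both outputs are assumed to be probability-of-default estimates for the same loans (illustrative).
import numpy as np
from scipy.stats import spearmanr

def benchmark_models(champion_pd: np.ndarray, challenger_pd: np.ndarray) -> dict:
    """Summarize how closely the champion's outputs track the challenger's."""
    diff = champion_pd - challenger_pd
    rank_corr, _ = spearmanr(champion_pd, challenger_pd)  # agreement in rank-ordering
    return {
        "mean_abs_diff": float(np.mean(np.abs(diff))),
        "max_abs_diff": float(np.max(np.abs(diff))),
        "rank_correlation": float(rank_corr),
    }

# Illustrative outputs for five loans.
champion = np.array([0.02, 0.05, 0.10, 0.01, 0.20])
challenger = np.array([0.03, 0.04, 0.12, 0.01, 0.18])
print(benchmark_models(champion, challenger))
```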

A further set of benchmarks includes the alternative models or methodologies that the developers considered for the model being validated but ultimately did not use. As a best practice, model developers should list any alternative methodologies, theories, or data omitted from the model’s final version. Model validators should also draw on their experience, their understanding of current industry best practices, and any analysis previously completed on similar models. These alternatives can then serve as benchmarks for the model being validated.

Model validators have multiple, distinct ways to incorporate benchmarking into their analysis. The use of the different types of benchmarking discussed here should be based on the type of model, its objective, and the validator’s best judgment. If a model cannot be reasonably benchmarked, then the validator should record why not and discuss the resulting limitations of the validation.

Back-Testing

Back-testing measures model outcomes. Rather than measuring performance through comparison to another model, the validator measures whether the model is working as intended and producing accurate results. Back-testing can take many forms depending on the model’s objective. As with benchmarking, back-testing should be part of every full-scope model validation to the extent possible.

What Back-Tests to Perform

As a form of outcomes analysis, back-testing provides quantitative metrics which measure the performance of a model’s forecast, the accuracy of its estimates, or its ability to rank-order risk. For instance, if a model produces forecasts for a given variable, back-testing would involve comparing the model’s forecast values against actual outcomes, thus indicating its accuracy.
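
The sketch below illustrates this kind of forecast back-test, assuming the forecast values and realized outcomes are available as aligned arrays for the same periods. The error measures are common choices rather than required metrics.

```python
# Sketch: back-testing a forecast against realized outcomes, assuming both are available
# as aligned arrays for the same periods. Metrics shown are common choices, not mandated ones.
import numpy as np

def backtest_forecast(forecast: np.ndarray, actual: np.ndarray) -> dict:
    """Return simple accuracy measures comparing forecast values to realized outcomes."""
    error = forecast - actual
    return {
        "mean_error": float(np.mean(error)),              # bias
        "mean_abs_error": float(np.mean(np.abs(error))),  # average size of the miss
        "mape": float(np.mean(np.abs(error / actual))),   # relative miss (actuals must be nonzero)
    }

# Illustrative quarterly prepayment-rate forecasts versus realized rates.
forecast = np.array([0.08, 0.10, 0.12, 0.09])
actual = np.array([0.07, 0.11, 0.15, 0.09])
print(backtest_forecast(forecast, actual))
```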

A related function of model back-testing evaluates the ability of a given model to adequately measure risk. This risk can take several forms, from the probability that a given borrower will default to the likelihood of a large loss on a given trading day. To back-test a model’s ability to capture risk exposure, it is important first to collect the right data. To back-test a probability-of-default model, for example, the data collected must include cases where borrowers actually defaulted so that the model’s predictions can be tested.

Back-testing models that assign borrowers to various risk levels requires some special considerations. Back-testing these and other models that seek to rank-order risk involves examining the model’s performance history and its ability to rank and order risk accurately. This can involve analyzing Type I (false positive) and Type II (false negative) error rates against the model’s true positive and true negative rates. Common statistical measures used in this type of back-testing include, but are not limited to, the Kolmogorov-Smirnov (KS) statistic, the Brier score, and the Receiver Operating Characteristic (ROC) curve.
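
The following sketch shows how these discrimination and calibration measures might be computed for a probability-of-default model, assuming predicted probabilities and observed 0/1 default outcomes are available. The data shown are purely illustrative.

```python
# Sketch: back-testing a probability-of-default model's ability to rank-order risk, assuming
# `pd_scores` are predicted default probabilities and `defaulted` are observed 0/1 outcomes.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import roc_auc_score, brier_score_loss

def rank_order_backtest(pd_scores: np.ndarray, defaulted: np.ndarray) -> dict:
    """Compute common discrimination and calibration metrics for a binary-outcome model."""
    ks_result = ks_2samp(pd_scores[defaulted == 1], pd_scores[defaulted == 0])
    return {
        "ks": float(ks_result.statistic),                       # separation of defaulted vs. non-defaulted score distributions
        "roc_auc": float(roc_auc_score(defaulted, pd_scores)),  # rank-ordering power
        "brier": float(brier_score_loss(defaulted, pd_scores)), # calibration of predicted probabilities
    }

# Illustrative scores and outcomes for eight borrowers.
scores = np.array([0.02, 0.05, 0.30, 0.10, 0.60, 0.04, 0.45, 0.08])
outcomes = np.array([0, 0, 1, 0, 1, 0, 1, 0])
print(rank_order_backtest(scores, outcomes))
```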

Benchmarking vs. Back-Testing

Back-testing measures a model’s outcome and accuracy against real-world observations, while benchmarking measures those outcomes against those of other models or metrics. Some overlap exists when the benchmarking includes comparing how well different models’ outputs back-test against real-world observations and the chosen benchmark. This overlap sometimes leads people to mistakenly conclude that model validations can rely on just one method. In reality, however, back-testing and benchmarking should ideally be performed together in order to bring their individual benefits to bear in evaluating the model’s overall performance. The decision, optimally, should not be whether to create a benchmark or to perform back-testing. Rather, the decision should be what form both benchmarking and back-testing should take.

While benchmarking and back-testing are complementary exercises that should not be viewed as mutually exclusive, their outcomes sometimes appear to conflict. What should a model validator do, for example, if the model back-tests well against real-world observations but does not benchmark particularly well against similar models’ outputs? What about a model that returns results similar to those of the benchmark models but does not back-test well? In the first scenario, the model owner can take comfort in knowing that the model performs well in hindsight, but the owner also runs the very real risk of being “out on an island” if the model turns out to be wrong. The second scenario affords the comfort of company in the model’s projections. But what if the models are all wrong together?

Scenarios where benchmarking and back-testing do not produce complementary results are not common, but they do happen. In these situations, it becomes incumbent on model validators to determine whether back-testing results should trump benchmarking results (or vice-versa) or if they should simply temper one another. The course to take may be dictated by circumstances. For example, a model validator may conclude that macro-economic indicators are changing to the point that a model which back-tests favorably is not an advisable tool because it is not tuned to the expected forward-looking conditions. This could explain why a model that back-tests favorably remains a benchmarking outlier if the benchmark models are taking into account what the subject model is missing. On the other hand, there are scenarios where it is reasonable to conclude that back-testing results trump benchmarking results. After all, most firms would rather have an accurate model than one that lines up with all the others.

As this discussion shows, benchmarking and back-testing may produce similar or quite different metrics depending on the model being validated, but both provide critical, complementary information about a model’s overall performance. When approaching a model validation and determining its scope, the question should be what form benchmarking and back-testing will take, not which of the two to perform.


The Real Reason Low Down Payment VA Loans Don’t Default Like Comparable FHA Loans

On this Veterans Day, I was reminded of the Urban Institute’s 2014 article on VA loan performance and its explanation of why VA loans outperform FHA loans.[1] The article illustrated how VA loans outperformed comparable FHA loans even after controlling for key variables like FICO, income, and DTI. It further explained the structural differences and similarities between the veterans program and FHA loans, similarities that include owner occupancy, loan size, and low down payments.

The analysis was well thought out and clearly showed how VA loans outperformed FHA. The article took great care to understand how FICOs, DTI, and income levels could impact default performance. The article further demonstrated VA outperformance wasn’t a recent or short-lived trend.

Its concluding rationale for superior performance was based on:

  1. VA’s residual income test
  2. VA’s loss mitigation efforts
  3. Lender’s “skin in the game”
  4. Lender concentration
  5. Military culture

Two of these reasons assume VA’s internal policies made the difference. Two of the reasons assume lenders’ ability to self-regulate credit policy or capital had an influence. The final reason centered on veterans as a social group with differing values that contributed to the difference.

As someone who has spent his mortgage career modeling credit and counterparty risk and managing credit underwriters, I find it hard to see, absent good analytical data or anecdotal evidence, how these reasons can account for VA’s strong relative default performance versus FHA. While I understand the rationale, I don’t find it a compelling explanation.

VA Internal Policies

The residual income test is a tertiary measure used by lenders to qualify borrowers, applied after the traditional MTI (mortgage payment to income) and DTI (total debt to income) ratios. The test requires that the borrowing veteran have a minimum amount of net income remaining after paying all mortgage and other debt payments. But as a third-level underwriting test, it is hard to see how it could account for so much of the difference in default performance.

The same goes for VA’s loss mitigation outreach efforts. It sounds good in the press, but it is just a secondary loss mitigation effort used in conjunction with the servicer’s own loss mitigation efforts. Perhaps it is responsible for some of the incremental improvement, but it’s hard to believe it accounts for much more.

Lenders’ Ability to Self-Regulate

“Skin in the Game” refers to how lenders manage the credit risk they retain when the VA insurance payment is insufficient to cover all loan losses.[2] The theory holds that when lenders are exposed to losses, they will adjust their lending policies to reduce their risk, which translates into tighter credit policies such as floors on credit scores or ceilings on DTIs. Having worked for companies with strong credit cultures that nearly failed in the recent financial crisis, I find it hard to believe that privately held mortgage bankers can manage this risk. Quite the opposite: my experience tells me that 1) lenders always underestimate their residual credit risks, and 2) pressure for volume, market share, and profits quickly overwhelms any attempt to maintain credit discipline.

The notion that lender concentration in the VA originations market somehow means that those lenders have more capital or the ability to earn more money also makes little sense. Historically, the largest source of origination revenue has been the capitalized value of the MSRs created when the loan is securitized. Over the past several years, Ginnie Mae MSR prices have collapsed, severely limiting profit margins on VA loans. Trust me, the top lenders are not making it up on volume.

Effect of Military Culture on VA Loans

So, what’s left? Military culture. This I believe. And not just because it is the only reason left. As the son of a retired Army colonel and the brother of both a retired Navy captain and a retired Marine Corps lieutenant colonel, I think I understand what the military culture is.

My view of military culture isn’t that veterans are more disciplined or responsible than the rest of the American public. My view is that institutional, structural, and societal differences make the military workforce different from the general public. How?

Job Security

  • Active duty military personnel are not subject to mass layoffs or reductions in force typical in the business world.
  • Poor-performing military personnel are typically eased out of the military rather than fired. This process of “getting passed over” spans several years, not weeks or months.
  • Most active duty military personnel have the flexibility to determine their exit strategy/retirement date, so they can defer when economic times are bad.

Retirement

  • Military personnel can retire at 50% of their base pay after 20 years of service.
  • This pension is received immediately upon retirement and is indexed to inflation.

Job Skills

  • A high percentage of retired military personnel re-enter the workforce and work for governmental agencies.
  • Military personnel typically leave active duty with more marketable skills than their similarly educated peers.
  • Active duty, retired, and former military personnel have jobs/careers/professions that are in higher demand than the rest of the American population.

Military Pay

  • Active and retired military pay is transparent, public, and socialized. The only variables are rank and years in service. The pay schedule provides automatic increases: a longevity raise every two years and an additional raise upon promotion. You know how much your boss makes, and female service members make the same pay as their male counterparts.
  • Military pay is indexed to inflation.
  • In addition to their base pay, all military personnel are given a tax-free monthly housing allowance which is adjusted regionally.

Medical

  • All military families, active duty and retired, receive full medical care.
  • This health insurance has nearly no deductible or out-of-pocket expenses.
  • This means veterans don’t default due to catastrophic medical emergencies or have their credit capacity impacted by unpaid medical bills. 

It is for these reasons that I believe VA borrowers default less frequently than FHA borrowers. The U.S. military is not a conscripted force, but rather an all-volunteer force. The structural programs offered by the U.S. Government provide the incentives necessary for people to remain in the military.

The Federal Government has designed a military force with low turnover, backed by an institutionalized social safety net to help recruit, retain, and reward its personnel. Lower default rates in the VA loan program are simply a secondary benefit of a nation trying to keep its citizens safe.

So, on this Veterans Day, remember to thank a veteran for his service. But also, remember the efforts of the Federal Government to ease the difficulties of those protecting our nation.


[1] Laurie Goodman, Ellen Seidman, and Jun Zhu, “VA Loans Outperform FHA Loans. Why? And What Can We Learn?”, Housing Finance Policy Center Commentary, Urban Institute, July 16, 2014.

[2] VA insurance is a first-loss guaranty, similar to mortgage insurance (MI). If the loss exceeds the insurance payment, the mortgage servicer is responsible for the additional loss.


4 Questions to Ask When Determining Model Validation Scope

Model risk management is a necessary undertaking for which model owners must prepare on a regular basis. Model risk managers frequently struggle to strike an appropriate cost-benefit balance in determining whether a model requires validation, how frequently a model needs to be validated, and how detailed subsequent and interim model validations need to be. The extent to which a model must be validated is a decision that affects many stakeholders in terms of both time and dollars. Everyone has an interest in knowing that models are reliable, but bringing the time and expense of a full model validation to bear on every model, every year is seldom warranted. What are the circumstances under which a limited-scope validation will do and what should that validation look like? We have identified four considerations that can inform your decision on whether a full-scope model validation is necessary:

  1. What about the model has changed since the last full-scope validation?
  2. How have market conditions changed since the last validation?
  3. How mission-critical is the model?
  4. How often have manual overrides of model output been necessary?

What Constitutes a Model Validation

Comprehensive model validations consist of three main components: conceptual soundness, ongoing monitoring and benchmarking, and outcomes analysis and back-testing.[1] A comprehensive validation encompassing all three areas is usually required when a model is first put into use. Any validation that does not fully address all three of these areas is by definition a limited-scope validation. Comprehensive validations of “black box” models developed and maintained by third-party vendors are problematic because the mathematical code and formulas are not typically available for review (in many cases a validator can only hypothesize the cause-and-effect relationships between inputs and outputs based on a reading of the model’s documentation). Ideally, regular comprehensive validations are supplemented by limited-scope validations and outcomes analyses on an ongoing, interim basis to ensure that the model continues to perform as expected.

Key Considerations for Model Validation

There is no ‘one size fits all’ rule for determining when a comprehensive validation is necessary and when a limited-scope review will suffice. Beyond the obvious time and cost considerations, model validation managers would benefit from asking themselves at least four questions in making this determination:

Question 1: What about the model has changed since the last full-scope validation?

Many models layer economic assumptions on top of arithmetic equations. Most models consist of three principal components:

  1. inputs (assumptions and data)
  2. processing (underlying mathematics and code that transform inputs into estimates)
  3. output reporting (processes that translate estimates into useful information)

Changes to either of the first two components are more likely to require a comprehensive validation than changes to the third component. A change that materially impacts how the model output is computed, either by changing the inputs that drive the calculation or by changing the calculations themselves, is more likely to merit a more comprehensive review than a change that merely affects how the model’s outputs are interpreted.

For example, say I have a model that assigns a credit rating to a bank’s counterparties on a 100-point scale, and the requirements the bank establishes for each counterparty are driven by that rating. Suppose the bank lends with no restrictions to counterparties scoring between 90 and 100, with pledged collateral between 80 and 89, with delivered collateral between 70 and 79, and does not lend to counterparties scoring below 70. Consider two possible changes to the model:

  1. Changes in model calculations that result in what used to be a 65 now being a 79.
  2. Changes in grading scale that result in a counterparty that receives a rating of 65 now being deemed creditworthy.

While the second change affects the interpretation of model output and may require only a limited-scope validation to determine whether the amended grading scale is defensible, the first change is almost certain to require the validator to go deeper ‘under the hood’ to verify that the model is working as intended. Assuming the inputs did not change, the first type of change may result from changes to assumptions (e.g., weighting schemes) or simply from the correction of a perceived calculation error. The second is a change to the reporting component, where comparing the model’s forecasts to those of challenger models and back-testing against historical data may be sufficient for validation. Not every change that affects model outputs necessarily requires a full-scope validation. Inserting recently updated economic forecasts into a recently validated model may require only a limited set of tests to demonstrate that changes in the model estimates are consistent with the new forecast inputs. The magnitude of the impact on output also matters: altering several input parameters in a way that materially changes model output is more likely to require a full validation.
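
For illustration, the sketch below encodes the grading scale from this example. A change to the cutoffs touches only this output-reporting layer, whereas a change that moves a counterparty’s computed score from 65 to 79 occurs upstream in the processing component. The thresholds are the illustrative ones used above; the scoring model itself is out of scope.

```python
# Sketch of the counterparty example above: the grading scale maps a 100-point score to lending
# terms. Thresholds mirror the illustrative bands in the text; the scoring model itself is not shown.

def lending_terms(score: int) -> str:
    """Translate a counterparty score into lending requirements (output-reporting layer)."""
    if score >= 90:
        return "lend with no restrictions"
    if score >= 80:
        return "lend with pledged collateral"
    if score >= 70:
        return "lend with delivered collateral"
    return "do not lend"

# Change type 2 (reporting): lowering the "do not lend" cutoff from 70 to 65 alters only this mapping
# and may need only a limited-scope review. Change type 1 (processing): a recalibration that moves a
# counterparty's score from 65 to 79 changes the calculation itself and points toward a fuller validation.
print(lending_terms(65), "|", lending_terms(79))
```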

Question 2: How have market conditions changed since the last validation?

Even models that do not change at all require periodic full-scope validations when macroeconomic conditions or other external factors call one or more of the model’s underlying assumptions into question. The 2008 global financial crisis is a perfect example. Mortgage credit and prepayment models built before 2008 rested on assumptions that appeared reasonable and plausible based on the market observations available at the time. Statistical models based solely on historical data before, during, or after the crisis are likely to require full-scope validations as their underlying datasets are expanded to capture a more comprehensive array of observed economic scenarios. It doesn’t always have to be bad economic news that instigates model changes requiring full-scope validations. The federal funds rate has been hovering near zero since the end of 2008. With a period of gradual and sustained recovery potentially on the horizon, many models are beginning to incorporate rising interest rates into their forecasts. These foreseeable model adjustments will likely require more comprehensive validations geared toward verifying that model outputs are appropriately sensitive to the revised interest rate assumptions.

Question 3: How mission-critical is the model?

The more vital a model’s outputs are to financial statements or mission-critical business decisions, the greater the need for frequent and detailed third-party validations. Model risk is amplified when model outputs inform reports provided to investors, regulators, or compliance authorities, and particular care should be taken when deciding whether to only partially validate models with such high-stakes outputs. Models whose outputs are used for internal strategic planning are also important. That said, some models are more critical to a bank’s long-term success than others. Ensuring the accuracy of the risk algorithms used for DFAST stress testing is more imperative than ensuring the accuracy of a model that predicts wait times in a customer service queue. Consequently, DFAST models, regardless of their complexity, are likely to require more frequent full-scope validations than models whose results undergo less scrutiny.

Question 4: How often have manual overrides of model output been necessary?

Another issue to consider is the use of manual overrides of the model’s output. In cases where expert opinion is permitted to supersede model outputs on a regular basis, more frequent full-scope validations may be necessary to determine whether the model is performing as intended. Counterparty credit scoring models, cited in our earlier example, are frequently subject to manual overrides by human underwriters to account for new or qualitative information that cannot be processed by the model. Whether it is necessary to revise or re-estimate a model is frequently a function of how often such overrides are required and how large they tend to be. Models whose outputs are frequently overridden should be subject to more frequent full-scope validations, as should models revised as a result of numerous overrides, particularly when the revision includes significant changes to input variables and their respective weightings.

Full or Partial Model Validation?

Model risk managers need to perform a delicate balancing act in order to ensure that an enterprise’s models are sufficiently validated while keeping to a budget and not overly burdening model owners. In many cases, limited-scope validations are the most efficient means to this end. Such validations allow for the continuous monitoring of model performance without bringing in a Ph.D. with a full team of experts to opine on a model whose conceptual approach, inputs, assumptions, and controls have not changed since its last full-scope validation. While gray areas abound and the question of full versus partial validation needs to be addressed on a case-by-case basis, the four basic considerations outlined above can inform and facilitate the decision. Incorporating these considerations into your model risk management policy will greatly simplify the decision of how detailed your next model validation needs to be. An informed decision to perform a partial model validation can ultimately save your business the time and expense required to execute a full model validation.


[1] In the United States, most model validations are governed by the following sets of guidelines: 1) OCC 2011-12 (institutions regulated by the OCC), and 2) FRB SR 11-7 (institutions regulated by the Federal Reserve). These guidelines are effectively identical to one another. Model validations at government-sponsored enterprises, including Fannie Mae, Freddie Mac, and the Federal Home Loan Banks, are governed by Advisory Bulletin 2013-07, which, while different from the OCC and Fed guidance, shares many of the same underlying principles.

