Today, Fannie Mae and Freddie Mac begin issuing the long-awaited Uniform Mortgage-Backed Security (UMBS). The Federal Housing Finance Agency (FHFA) conceived of this new standard in its 2012 “A Strategic Plan for Enterprise Conservatorships,” which marked the start of the Single Security Initiative (the history of which is laid out in the graphic below). RiskSpan produces FHFA’s quarterly...
RiskSpan Adds Whole Loan Analytics to Edge Platform ARLINGTON, VA, May 20, 2019 – Leading mortgage data and analytics provider RiskSpan announced the release of its Whole Loan Analytics Module on the RiskSpan Edge Platform. The module enables whole loan investors, portfolio managers, and risk managers to manage loan-level data flows and predictive models that forecast loan performance under a range of scenarios. The off-the-shelf SaaS version supports whole loan pricing and surveillance. It enables complex forecasting analytics including geographically granular House Price scenarios and historically significant economic event scenarios. Other features and custom configurations are also...
Validating short-rate models can be challenging because so many different ways of modeling how interest rates change over time (“interest rate dynamics”) have been developed over the years. Each approach has advantages and shortcomings, and it is critical to understand the limitations of each in order to judge whether the short-rate model being used is appropriate to the task. This can be accomplished via the basic tenets of model validation—evaluation of conceptual soundness, replication, benchmarking, and outcomes analysis. Applying these concepts to short-rate models, however, poses some unique complications.
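To make the idea of “interest rate dynamics” concrete, the sketch below simulates one classic short-rate specification, the Vasicek model, dr = a(b − r)dt + σ dW, under a simple Euler discretization. This is purely illustrative—the parameter values are hypothetical, and the article does not prescribe any particular model.

```python
import numpy as np

def simulate_vasicek(r0, a, b, sigma, dt, n_steps, n_paths, seed=0):
    """Simulate short-rate paths under the Vasicek model,
    dr = a*(b - r)*dt + sigma*dW, via Euler discretization.
    a: speed of mean reversion; b: long-run mean; sigma: volatility."""
    rng = np.random.default_rng(seed)
    rates = np.empty((n_paths, n_steps + 1))
    rates[:, 0] = r0
    for t in range(n_steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        rates[:, t + 1] = rates[:, t] + a * (b - rates[:, t]) * dt + sigma * dw
    return rates

# Hypothetical parameters: 10 years of daily steps, 1,000 paths.
# Mean reversion pulls the average path toward b over time.
paths = simulate_vasicek(r0=0.02, a=0.5, b=0.04, sigma=0.01,
                         dt=1 / 252, n_steps=2520, n_paths=1000)
```

Replication exercises of this kind—re-implementing the documented dynamics and comparing simulated distributions against the model’s own output—are one practical way to apply the validation tenets listed above.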
Complying with the DFAST/CCAR requirements within an existing quantitative modeling and model risk management framework is one of the most daunting of the many recent challenges facing banks, Bank Holding Companies (BHCs), and some Intermediate Holding Companies (IHCs). The Dodd-Frank Act Stress Tests (DFAST) require all financial institutions with total assets above $10 billion to conduct stress tests on their portfolios and balance sheets. The Comprehensive Capital Analysis and Review (CCAR) is generally required once a bank’s total assets exceed $50 billion. The objective of both exercises is to simulate a bank’s balance sheet performance and losses in a hypothetical severe economic downturn over the next nine quarters. Given this common objective, most risk managers consider and complete both exercises together.
The question of “build versus buy” is every bit as applicable and challenging to model validation departments as it is to other areas of a financial institution. With no “one-size-fits-all” solution, banks are frequently faced with a balancing act between the use of internal and external model validation resources. This article is a guide for deciding between staffing a fully independent internal model validation department, outsourcing the entire operation, or a combination of the two.
In some respects, the OCC 2011-12/SR 11-7 mandate to verify model inputs could not be any more straightforward: “Process verification … includes verifying that internal and external data inputs continue to be accurate, complete, consistent with model purpose and design, and of the highest quality available.” From a logical perspective, this requirement is unambiguous and non-controversial. After all, the reliability of a model’s outputs cannot be any better than the quality of its inputs.
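The SR 11-7 language quoted above maps naturally onto a handful of programmatic checks—completeness (missing values), accuracy (plausibility ranges), and consistency (duplicate keys). The sketch below shows what such checks might look like for a loan tape; the column names, tolerances, and sample data are hypothetical, not drawn from any actual RiskSpan process.

```python
import pandas as pd

def verify_inputs(df: pd.DataFrame) -> dict:
    """Basic input-verification checks in the spirit of SR 11-7:
    completeness, accuracy, and consistency. Column names
    ('loan_id', 'note_rate') and the 0-25% plausibility range
    are illustrative assumptions."""
    return {
        # Completeness: how many records are missing a note rate?
        "missing_rate": int(df["note_rate"].isna().sum()),
        # Accuracy: how many rates fall outside a plausible range?
        "rate_out_of_range": int(
            ((df["note_rate"] < 0) | (df["note_rate"] > 0.25)).sum()
        ),
        # Consistency: are loan IDs unique, as the model assumes?
        "duplicate_loan_ids": int(df["loan_id"].duplicated().sum()),
    }

# Hypothetical loan tape with one defect of each kind.
loans = pd.DataFrame({
    "loan_id":   [101, 102, 102, 104],
    "note_rate": [0.045, None, 0.052, 1.8],  # 1.8 is an implausible rate
})
result = verify_inputs(loans)
```

Running such checks on every data refresh—rather than once at validation time—is one way to satisfy the requirement that inputs “continue to be” accurate and complete.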
When someone asks you what a model validation is, what is the first thing you think of? If you are like most, you would immediately think of performance metrics—those quantitative indicators that tell you not only whether the model is working as intended, but also how its performance and accuracy hold up over time and compare against alternatives. Performance testing is the core of any model validation and generally consists of the following components:
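Most performance metrics of this kind reduce to comparing model forecasts against realized outcomes. As a minimal, generic sketch (with made-up numbers, not data from any particular model), two of the most common accuracy measures can be computed as:

```python
import numpy as np

# Hypothetical model forecasts vs. realized outcomes
# (e.g., predicted vs. actual loss rates for five cohorts).
predicted = np.array([0.031, 0.024, 0.052, 0.017, 0.040])
actual    = np.array([0.029, 0.030, 0.048, 0.015, 0.046])

# Mean absolute error: average magnitude of forecast misses.
mae = np.mean(np.abs(predicted - actual))

# Root mean squared error: penalizes large misses more heavily.
rmse = np.sqrt(np.mean((predicted - actual) ** 2))
```

Tracking such metrics across time and across challenger models is what turns a one-off accuracy check into genuine outcomes analysis.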
Model risk management is a necessary undertaking for which model owners must prepare on a regular basis. Model risk managers frequently struggle to strike an appropriate cost-benefit balance in determining whether a model requires validation, how frequently a model needs to be validated, and how detailed subsequent and interim model validations need to be. The extent to which a model must be validated is a decision that affects many stakeholders in terms of both time and dollars. Everyone has an interest in knowing that models are reliable, but bringing the time and expense of a full model validation to bear on every model, every year is seldom warranted. What are the circumstances under which a limited-scope validation will do and what should that validation look like?
We have identified four considerations that can inform your decision on whether a full-scope model validation is necessary.
Over the course of several hundred model validations we have observed a number of recurring themes and challenges that appear to be common to almost every model risk management department. At one time or another, every model risk manager will puzzle over questions around whether an application is a model, whether a full-scope validation is necessary, how to deal with challenges surrounding “black box” third-party vendor models, and how to elicit assistance from model owners. This series of blog posts aims to address these and other related questions with what we’ve learned while helping our clients think through these issues.
Though not its intent, model validation can be disruptive to model owners and others seeking to carry out their day-to-day work. We have performed enough model validations over the past decade to have learned how cumbersome the process can be to business unit model owners and others we inconvenience with what at times must feel like an endless barrage of touch-point meetings, documentation requests and other questions relating to modeling inputs, outputs, and procedures.