Articles Tagged with: Model Validation

Why Model Validators Need to Care About the LIBOR Transition

The transition to the Secured Overnight Financing Rate (SOFR) as a LIBOR replacement after 2021 creates layers of risk for banks. Many of these risks are readily apparent, others less so. But the factors banks must consider while choosing replacement rates and correctly implementing contractual fallback language make a seamless transition a daunting proposition. Though sometimes overlooked, model risk managers have an important role in ensuring the transition happens correctly and in a way that does not jeopardize the reliability of model outputs.

LIBOR, SOFR and the need for transition

A quick refresher: The London Interbank Offered Rate (LIBOR) currently serves as the benchmark rate at which major global banks lend to one another on a short-term basis in the international interbank market. LIBOR is calculated by the Intercontinental Exchange (ICE) and published daily for five currencies and seven maturities, 35 rates in all. The most common of these is the three-month U.S. dollar rate.

Accusations of manipulation by major banks going back as early as 2008, however, raised concerns about the sustainability of LIBOR. A committee convened by the Federal Reserve Board and the Federal Reserve Bank of New York in 2017—the Alternative Reference Rates Committee (ARRC)—identified a broad Treasury repurchase agreement (repo) financing rate as its preferred alternative reference rate to replace LIBOR after 2021. This repo rate (now known as SOFR) was chosen because of the liquidity of its underlying market and because the transaction volumes underlying SOFR are far larger than those of any other U.S. money market. This combination of size and liquidity contributes to SOFR’s transparency and protects market participants from attempts at manipulation.

What Does This Mean for MRM?

Because the transition has potential bearing on so many layers of risk—market risk, operational risk, strategic risk, reputation risk, compliance risk, not to mention the myriad risks associated with mispricing assets—any model in a bank’s existing inventory that is tasked with gauging or remediating these risks is liable to be impacted. Understanding how, and the extent to which, models account for the LIBOR transition’s effects on pricing and other core processes is (or should be) of principal concern to model validators.

Ongoing Monitoring and Benchmarking

Regulatory guidance and model validation best practices require testing model inputs and benchmarking how the model performs with the selected inputs relative to alternatives. For this reason, the validation of any model whose outputs are sensitive to variable interest rates should include an assessment of how a replacement index (such as SOFR) and adjustment methodology were selected.

Model validators should be able to ascertain whether the model developer has documented enough evidence relating to:

  • Available reference rates and the appropriateness of each to the bank’s specific products
  • System capabilities for using these replacement rates with the bank’s products
  • Control risks associated with unavailable alternative rates


Fallback Language Considerations

Fallback language—contractual provisions that govern the process for selecting a replacement rate in the event of LIBOR termination—should also factor into a validator’s assessment of model inputs. While many existing fallback provisions can be frustratingly vague when it comes to dealing with a permanent cessation of LIBOR, validators of models that rely on reference rates as inputs have an obligation to determine compliance with fallback language containing clear and executable terms. These include:

  • Specific triggers to enact the replacement rate
  • Clarity regarding the replacement rate and spread adjustments
  • Permissible options under fallback language – and whether other options might be more appropriate than the one ultimately selected based on the potential for valuation changes, liquidity impact, hedging implications, system changes needed, and customer impact

In November 2019, the ARRC published the finalized fallback language for residential adjustable rate mortgages, bilateral business loans, floating rate notes, securitizations, and syndicated loans. It has also actively engaged with the International Swaps and Derivatives Association (ISDA) to finalize the fallback parameters for derivatives.

The ARRC also recommended spread-adjusted benchmark replacement rates that would replace the current benchmark upon a triggering event. The recommendation included the following benchmark replacement waterfall. Validators of models relying on these replacements may choose, as part of their best practices review, to determine the extent to which existing fallback provisions align with the recommendations.

Replacement | Description
Term SOFR + spread adjustment | Forward-looking term SOFR for the applicable corresponding tenor. Note: loan recommendations allow use of the next-longest-tenor term SOFR rate if the corresponding tenor is unavailable.
Compounded SOFR + spread adjustment | Compounded average of daily SOFRs over the relevant period, depending on the tenor of USD LIBOR being replaced.
Relevant selected rate + spread adjustment | Rate selected by the Relevant Governmental Body, lender, or borrower and administrative agent.
Relevant ISDA replacement rate + spread adjustment | The applicable replacement rate (without spread adjustment) embedded in ISDA’s standard definitions.
Issuer, designated transaction representative, or noteholder replacement + spread adjustment | An identified party selects a replacement rate, in some cases considering any industry-accepted rate in the related market. Note: in certain circumstances this step may be omitted.
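To make the second option in the waterfall concrete, the sketch below shows one common way of compounding daily SOFR in arrears on an actual/360 basis. This is a minimal illustration: the rates, day counts, and spread adjustment are hypothetical, and actual contract conventions (lookbacks, lockouts, rounding) vary.

```python
# Illustrative compounded-in-arrears SOFR calculation (actual/360 convention).
# All rates, day counts, and the spread adjustment are hypothetical; real
# fallback conventions (lookbacks, lockouts, rounding) vary by contract.

def compounded_sofr(daily_rates, day_counts, spread_adjustment):
    """Annualized compounded SOFR plus fallback spread for one interest period."""
    growth = 1.0
    for rate, days in zip(daily_rates, day_counts):
        growth *= 1.0 + rate * days / 360.0           # compound each daily fixing
    total_days = sum(day_counts)
    compounded = (growth - 1.0) * 360.0 / total_days  # annualize the growth factor
    return compounded + spread_adjustment

# One hypothetical week of fixings: Mon-Thu each apply for 1 day, Friday for 3
rates = [0.0005, 0.0006, 0.0005, 0.0007, 0.0006]
days = [1, 1, 1, 1, 3]
print(f"{compounded_sofr(rates, days, spread_adjustment=0.0026):.6%}")
```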


Model risk managers can sometimes be lulled into believing that the validation of interest rate inputs consists solely of verifying their source and confirming that they have been faithfully brought into the model. Ultimately, however, model validators are responsible for verifying not only the provenance of model inputs but also their appropriateness. Consequently, ensuring a smooth transition to the most appropriate available reference rate replacement is of paramount importance to risk management efforts related to the models these rates feed.


RESOURCES:

https://www.insideafricalaw.com/blog/benchmark-reform-the-impact-of-libor-transition-on-the-african-project-finance-market

https://www.occ.treas.gov/news-issuances/bulletins/2020/bulletin-2020-68.html

https://www.isda.org/a/n6tME/Supplemental-Consultation-on-USD-LIBOR-CDOR-HIBOR-and-SOR.pdf

https://www.investopedia.com/terms/l/libor.asp

https://www.newyorkfed.org/medialibrary/Microsites/arrc/files/2020/ARRC-factsheet.pdf

https://www.newyorkfed.org/arrc/sofr-transition

https://www.newyorkfed.org/medialibrary/Microsites/arrc/files/2019/LIBOR_Fallback_Language_Summary


http://assets.isda.org/media/50b3fed0/47be9435-pdf/


Portions of this section are drawn from OCC Bulletin 2020-68: https://www.occ.treas.gov/news-issuances/bulletins/2020/bulletin-2020-68.html


Managing Machine Learning Model Risk

Though the terms are often used interchangeably in casual conversation, machine learning (ML) is a subset of artificial intelligence. Simply put, ML is the process of getting a computer to learn the properties of one dataset and to generalize this “knowledge” to other datasets.
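As a minimal illustration of that learn-then-generalize loop, the sketch below uses scikit-learn with a synthetic dataset (a stand-in for real portfolio data) to train a simple classifier on one sample and score it on data it has never seen:

```python
# A minimal sketch of the "learn, then generalize" loop described above.
# scikit-learn with a synthetic dataset standing in for real portfolio data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)

# Hold out data the model never sees during training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)          # learn the properties of one dataset
print(model.score(X_test, y_test))   # generalize that "knowledge" to unseen data
```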


ML Financial Models

ML models have crept into virtually every corner of banking and finance — from fraud and money-laundering prevention to credit and prepayment forecasting, trading, servicing, and even marketing. These models take various forms (see Table 1, below). Modelers base their selection of a particular ML technique on a model’s objective and data availability.   

Table 1. ML Models and Application in Finance

Model | Application
Linear Regression | Credit Risk; Forecasting
Logistic Regression | Credit Risk
Monte Carlo Simulation | Capital Markets; Asset Liability Management (ALM)
Artificial Neural Networks | Scorecards; AML
Decision Tree Regression Models (Random Forest, Bagging) | Scorecards
Multinomial Logistic Regression | Prepayment Projection
Deep Learning | Prepayment Projection
Time Series Models | Capital Forecasting; Macroeconomic Forecasting
Linear Regression with ARIMA Errors | Capital Forecasting
Factor Models | Short Rate Evolution
Fuzzy Matching | AML; OFAC
Linear Discriminant Analysis (LDA) | AML; OFAC
K-Means Clustering | AML; OFAC

 

ML models require large datasets relative to conventional models, as well as more sophisticated computer programming and econometric/statistical skills. ML model developers are required to have deep knowledge of the ML model they want to use, its assumptions and limitations, and alternative approaches.

 

ML Model Risk

ML models present many of the same risks that accompany conventional models. As with any model, errors in design or application can lead to performance issues resulting in financial losses, poor decisions, and damage to reputation.

ML is all about algorithms. Failing to understand the mathematical aspects of these algorithms can lead to adopting inefficient optimization routines without knowing the nature or interpretation of the optimization problem being solved. Making decisions under these circumstances increases model risk and can lead to unreliable outputs.

As sometimes befalls conventional regression models, ML models may perform well on the training data but not on the test data. Their complexity and high dimensionality make them especially susceptible to overfitting. The poor performance of some ML models when applied beyond the training dataset can translate into a huge source of risk.
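One simple diagnostic, sketched below with scikit-learn on synthetic, deliberately noisy data, is to compare training performance with test performance; a wide gap between the two is the classic symptom of overfitting:

```python
# Sketch: detecting overfitting by comparing training vs. test performance.
# Synthetic, noisy data; an unconstrained tree is used to provoke overfitting.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:", tree.score(X_train, y_train))  # near 1.0
print("test accuracy: ", tree.score(X_test, y_test))    # materially lower: overfitting
```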

Finally, ML models can give rise to unintended consequences when used inappropriately or incorrectly. Model risk is magnified when the goal of an ML model’s algorithm is not aligned with the business problem or fails to account for all of the problem’s relevant considerations. Model risk also arises when an ML model is used outside the environment for which it was designed. These risks include overstated or understated model outputs and a lack of fairness. Table 2, below, presents a more comprehensive list of these risks.

Table 2. Potential Risks from ML Models

  • Overfitting
  • Underfitting
  • Bias toward protected groups
  • Interpretability
  • Complexity
  • Use of poor-quality data
  • Job displacement
  • Models may produce socially unacceptable results
  • Automation may create model governance issues

 

Managing ML Model Risk


It may seem self-evident, but the first step in managing ML model risk consists of reliably identifying every model in the inventory that relies on machine learning. This exercise is not always as straightforward as it might seem. Successfully identifying all ML models requires MRM departments to incorporate the right information requests into their model determination or model assessment forms. These should include questions designed to identify specific considerations of ML model techniques, algorithms, platforms and capabilities. MRM departments need to adopt a consistent but flexible definition of what constitutes an ML model across the institution. Model developers, owners and users should be trained in identifying ML models and those features that need to be reported in the model identification assessment form.

MRM’s next step involves risk assessing ML models in the inventory. As with traditional models, ML models should be risk assessed based on their complexity, materiality and frequency of use. Because of their complexity, however, ML models require an additional level of screening in order to account for data structure, level of algorithm sophistication, number of hyper-parameters, and how the models are calibrated. The questionnaire MRM uses to assess the risk of its conventional models often needs to be enhanced in order to adequately capture the additional risk dimensions introduced by ML models.

Managing ML model risk also involves ensuring not only that a clear model development and implementation process is in place but also that it is consistent with the business objective and the intended use of the models. Thorough documentation is important for any model, but the need to describe model theory, methodology, design and logic takes on added importance when it comes to ML models. This includes specifying the methodology (regression or classification), the type of model (linear regression, logistic regression, natural language processing, etc.), the resampling method (cross-validation, bootstrap) and the subset selection method, such as backward, forward or stepwise selection. Obviously, simply stating that the model “relies on a variety of machine learning techniques” is not going to pass muster.

As with traditional models, developers must document the data source, quality and any transformations that are performed. This includes listing the data sources, normalization and sampling techniques, training and test data size, and the data dimension reduction technique (principal components, partial least squares, etc.), as well as the controls around them. The risk around the use of certain data should also be assessed.
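One way to make these transformations reviewable, sketched below with scikit-learn and hypothetical step choices (standardization followed by principal-component reduction), is to chain them into an explicit pipeline so they travel with the model object rather than living in ad hoc preprocessing scripts:

```python
# Sketch: documenting transformations as an explicit, reproducible pipeline.
# The step choices (standardization, PCA to 5 components) are hypothetical.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipeline = Pipeline([
    ("normalize", StandardScaler()),   # normalization technique
    ("reduce", PCA(n_components=5)),   # dimension reduction technique
    ("model", LogisticRegression(max_iter=1000)),
])

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
pipeline.fit(X, y)  # every transformation now travels with the fitted model object
```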

A model implementation plan and controls around the model should also be developed.

Finally, all model performance testing should be clearly stated, and the results documented. This helps assess whether the model is performing as intended and in line with its design and business objective. Limitations and calibrations around the models should also be documented.

Like traditional models, ML models require independent validation to ensure they are sound and performing as intended and to identify potential limitations. All components of ML models should be subject to validation, including conceptual soundness, outcomes analysis and ongoing monitoring.

Validators can assess the conceptual soundness of an ML model by evaluating its design and construction, focusing on the theory, methodology, assumptions and limitations, data quality and integrity, hyper-parameter calibration and overlays, bias and interpretability.

Validators can conduct outcomes analysis by checking whether the model outputs are appropriate and in line with a priori expectations. Results of the performance metrics should also be assessed for accuracy and degree of precision. Performance metrics for ML models vary by model type. As with traditional predictive models, common performance metrics for ML models include the mean squared error (MSE), Gini coefficient, entropy, the confusion matrix, and the receiver operating characteristic (ROC) curve.
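To illustrate, several of the metrics named above take only a few lines in scikit-learn; the sketch below uses hypothetical predicted probabilities to compute a confusion matrix, an ROC-curve summary (AUC), and a squared-error measure:

```python
# Sketch: common outcomes-analysis metrics computed with scikit-learn.
# y_true and scores are hypothetical stand-ins for actuals and model output.
from sklearn.metrics import confusion_matrix, mean_squared_error, roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]
scores = [0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.2, 0.6]  # predicted probabilities
y_pred = [1 if s >= 0.5 else 0 for s in scores]    # thresholded predictions

print(confusion_matrix(y_true, y_pred))            # TP/FP/TN/FN counts
print("AUC:", roc_auc_score(y_true, scores))       # summary of the ROC curve
print("MSE:", mean_squared_error(y_true, scores))  # Brier-style squared error
```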

Outcomes analysis should also include out-of-sample testing, which can be conducted using cross-validation techniques. Finally, ongoing monitoring should be reviewed as a core element of the validation process. Validators should evaluate whether model use is appropriate given changes in products, exposures and market conditions. Validators should also ensure performance metrics are being monitored regularly based on the inherent risk of the model and frequency of use. Validators should ensure that a continuous performance monitoring plan exists and captures the most important metrics. Also, a change control document and access control document should be available.  
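A minimal sketch of the cross-validated out-of-sample testing mentioned above, again using scikit-learn on synthetic data, follows; each fold is held out in turn so that every observation is scored by a model that never trained on it:

```python
# Sketch: k-fold cross-validation as a simple out-of-sample test.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1_000, n_features=10, random_state=1)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores)          # one out-of-sample accuracy per held-out fold
print(scores.mean())   # average generalization performance
```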

The principles outlined above will sound familiar to any experienced model validator—even one with no ML training or experience. ML models do not upend the framework of MRM best practices but rather add a layer of complexity to their implementation. This complexity requires MRM departments in many cases to adjust their existing procedures to properly identify ML models and suitably capture the risks emerging from them. As is almost always the case, aggressive staff training is needed to ensure that these well-considered process enhancements are faithfully executed and have their desired effect.


September 30 Webinar: Machine Learning in Model Validation

Recorded: September 30th | 1:00 p.m. EDT

Join our panel of experts as they share their latest work using machine learning to identify and validate model inputs.

  • Suhrud Dagli, Co-Founder & Fintech Lead, RiskSpan
  • Jacob Kosoff, Head of Model Risk Management & Validation, Regions Bank
  • Nick Young, Head of Model Validation, RiskSpan
  • Sanjukta Dhar, Consulting Partner, Risk and Regulatory Compliance Strategic Initiative, TCS Canada


Featured Speakers

Suhrud Dagli

Co-Founder & Fintech Lead, RiskSpan

Jacob Kosoff

Head of Model Risk Management & Validation, Regions Bank

Nick Young

Head of Model Validation, RiskSpan

Sanjukta Dhar

Consulting Partner, Risk and Regulatory Compliance Strategic Initiative, TCS Canada


The Why and How of a Successful SAS-to-Python Model Migration

A growing number of financial institutions are migrating their modeling codebases from SAS to Python. There are many reasons for this, some of which may be unique to the organization in question, but many apply universally. Because of our familiarity not only with both coding languages but with the financial models they power, my colleagues and I have had occasion to help several clients with this transition.

Here are some things we’ve learned from this experience and what we believe is driving this change.

Python Popularity

The popularity of Python has skyrocketed in recent years. Its intuitive syntax and a wide array of packages available to aid in development make it one of the most user-friendly programming languages in use today. This accessibility allows users who may not have a coding background to use Python as a gateway into the world of software development and expand their toolbox of professional qualifications.

Companies appreciate this as well. As an open-source language with tons of resources and low overhead costs, Python is also attractive from an expense perspective. A cost-conscious option that resonates with developers and analysts is a win-win when deciding on a codebase.

Note: R is another popular and powerful open-source language for data analytics. Unlike R, however, which is used specifically for statistical analysis, Python can be put to a wider range of uses, including UI design, web development, and business applications. This flexibility makes Python attractive to companies seeking a common language that lets developers transition seamlessly among teams. R remains popular in academic circles, where a powerful, easy-to-understand tool is needed to perform statistical analysis but additional flexibility is not necessarily required. Hence, we are limiting our discussion here to Python.

Python is not without its drawbacks. As an open-source language, less oversight governs newly added features and packages. Consequently, while updates may be quicker, they are also more prone to error than SAS’s, which are always thoroughly tested prior to release.

Visualization Capabilities

While both codebases support data visualization, Python’s packages are generally viewed more favorably than SAS’s, which tend to be on the more basic side. More advanced visuals are available from SAS, but they require the SAS Visual Analytics platform, which comes at an added cost.

Python’s popular visualization packages — matplotlib, plotly, and seaborn, among others — can be leveraged to create powerful and detailed visualizations by simply importing the libraries into the existing codebase.
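For example, a few lines of seaborn and matplotlib produce a styled chart; the sketch below is purely illustrative and uses a small demo dataset that seaborn fetches over the network:

```python
# Sketch: a styled chart in a few lines with seaborn + matplotlib.
# load_dataset fetches a small demo dataset over the network.
import matplotlib.pyplot as plt
import seaborn as sns

tips = sns.load_dataset("tips")
sns.set_theme(style="whitegrid")
sns.scatterplot(data=tips, x="total_bill", y="tip", hue="time")
plt.title("Tips vs. total bill")
plt.savefig("tips.png", dpi=150)
```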

Accessibility

SAS is a command-driven software package used for statistical analysis and data visualization. Though its desktop client runs only on Windows, it remains one of the most widely used statistical software packages in both industry and academia.

It’s not hard to see why. For financial institutions with large amounts of data, SAS has been an extremely valuable tool. It is a well-documented language with many online resources and is relatively intuitive to pick up and understand, especially for users with prior SQL experience. SAS is also one of the few tools with a customer support line.

SAS, however, is a paid service, and at a standalone level, the costs can be quite prohibitive, particularly for smaller companies and start-ups. Complete access to the full breadth of SAS and its supporting tools tends to be available only to larger and more established organizations. These costs are likely fueling its recent drop-off in popularity. New users simply cannot access it as easily as they can Python. While an academic/university version of the software is available free of charge for individual use, its feature set is limited. Therefore, for new users and start-up companies, SAS may not be the best choice, despite being a powerful tool. Additionally, with the expansion and maturity of the variety of packages that Python offers, many of the analytical abilities of Python now rival those of SAS, making it an attractive, cost-effective option even for very large firms.

Future of tech

Many of the expected advances in data analytics, and in tech generally, point clearly toward deep learning, machine learning, and artificial intelligence. These are especially attractive to companies dealing with large amounts of data.

While the technology to analyze data with complete independence is still emerging, Python is better situated to support companies that have begun laying the groundwork for these developments. Python’s rapidly expanding libraries for artificial intelligence and machine learning will likely make future transitions to deep learning algorithms more seamless.

While SAS has made some strides toward adding machine learning and deep learning functionalities to its repertoire, Python remains ahead and consistently ranks as the best language for deep learning and machine learning projects. This creates a symbiotic relationship between the language and its users. Developers use Python to develop ML projects since it is currently best suited for the job, which in turn expands Python’s ML capabilities — a cycle which practically cements Python’s position as the best language for future development in the AI sphere.

Overcoming the Challenges of a SAS-to-Python Migration

SAS-to-Python migrations bring a unique set of challenges that need to be considered. These include the following.

Memory overhead

Server space is getting cheaper but it’s not free. Although Python’s data analytics capabilities rival SAS’s, Python requires more memory overhead. Companies working with extremely large datasets will likely need to factor in the cost of extra server space. These costs are not likely to alter the decision to migrate, but they also should not be overlooked.

The SAS server

All SAS commands are run on SAS’s own server. This tightly controlled ecosystem makes SAS much faster than Python, which does not have the same infrastructure out of the box. Therefore, optimizing Python code can be a significant challenge during SAS-to-Python migrations, particularly when tackling it for the first time.
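Much of that optimization work amounts to replacing row-by-row logic, the natural translation of a SAS DATA step, with vectorized operations. A minimal sketch with numpy and hypothetical data sizes:

```python
# Sketch: vectorization, the workhorse of Python optimization in migrations.
# Data sizes and the rate below are hypothetical.
import numpy as np

balances = np.random.default_rng(0).uniform(50_000, 500_000, size=1_000_000)
rate = 0.045

# Slow: a row-by-row loop, the literal translation of SAS DATA step logic
interest_loop = [b * rate / 12 for b in balances]

# Fast: the vectorized equivalent operating on the whole array at once
interest_vec = balances * rate / 12

assert np.allclose(interest_loop, interest_vec)
```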

SAS packages vs Python packages

Calculations performed using SAS packages vs. Python packages can result in differences, which, while generally minuscule, cannot always be ignored. Depending on the type of data, this can pose an issue. And getting an exact match between values calculated in SAS and values calculated in Python may be difficult.

For example, a calculation that should return exactly zero in SAS may instead return a tiny residue such as 3.552714E-15, while Python stores the decimal value 0.1 as the binary fraction 3602879701896397/2^55. Such artifacts do not create noticeable differences in most calculations. But some financial models demand more precision than others. And over the course of multiple calculations which build upon each other, they can create differences in fractional values. These differences must be reconciled and accounted for.
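The Python side of this is easy to inspect directly, and reconciliations typically compare within a tolerance rather than testing for exact equality. A minimal sketch (the "SAS value" below is a hypothetical stand-in for a number read from SAS output):

```python
# Sketch: inspecting binary float representation and reconciling by tolerance.
import math

print((0.1).as_integer_ratio())  # (3602879701896397, 36028797018963968), i.e. /2**55

sas_value = 0.3                  # hypothetical value read from SAS output
py_value = 0.1 + 0.1 + 0.1       # the same quantity recomputed in Python
print(sas_value == py_value)     # False: exact comparison fails
print(math.isclose(sas_value, py_value, rel_tol=1e-9))  # True: reconciled by tolerance
```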

Comparing large datasets

One of the most common functions when working with large datasets involves evaluating how they change over time. SAS has a built-in procedure (PROC COMPARE) which compares datasets swiftly and easily as required. Python has packages for this as well; however, these packages are not as robust as their SAS counterpart.
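One workable, if less full-featured, substitute is sketched below, assuming pandas is the data layer: DataFrame.compare gives a side-by-side view of differing cells, and assert_frame_equal enforces equality within a float tolerance.

```python
# Sketch: approximating SAS PROC COMPARE with pandas.
import pandas as pd

old = pd.DataFrame({"loan_id": [1, 2, 3], "balance": [100.0, 250.0, 80.0]})
new = pd.DataFrame({"loan_id": [1, 2, 3], "balance": [100.0, 240.0, 80.0]})

# Side-by-side view of only the cells that differ (empty if the frames match)
print(old.compare(new))

# For automated checks: assert equality within a float tolerance
try:
    pd.testing.assert_frame_equal(old, new, check_exact=False, rtol=1e-5)
except AssertionError as err:
    print("Datasets differ:", err)
```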

Conclusion

In most cases, the benefits of migrating from SAS to Python outweigh the challenges associated with going through the process. The envisioned savings can sometimes be attractive enough to cause firms to trivialize the transition costs. This should be avoided. A successful migration requires taking full account of the obstacles and making plans to mitigate them. Involving the right people from the outset — analysts well versed in both languages who have encountered and worked through the pitfalls — is key.


Changes to Loss Models…and How to Validate Them

So you’re updating all your modeling assumptions. Don’t forget about governance.

Modelers have now been grappling with how COVID-19 should affect assumptions and forecasts for nearly two months. This exercise is raising at least as many questions as it is answering.

No credit model (perhaps no model at all) is immune. Among the latest examples are mortgage servicers having to confront how to bring their forbearance and loss models into alignment with new realities.

These new realities are requiring servicers to model unprecedented macroeconomic conditions in a new and changing regulatory environment. The generous mortgage forbearance provisions ushered in by March’s CARES Act are not tantamount to loan forgiveness. But servicers probably shouldn’t count on reimbursement of their forbearance advances until loan liquidation (irrespective of what form the payoff takes).

The ramifications of these costs and how servicers should model them are among the central topics to be addressed in a Mortgage Bankers Association webinar on Wednesday, May 13, “Modeling Forbearance Losses in the COVID-19 World” (free for MBA members). RiskSpan CEO Bernadette Kogler will lead a panel consisting of Faith Schwartz, Suhrud Dagli, and Morgan Snyder in a discussion of forbearance’s regulatory implications, the limitations of existing models, and best practices for modeling forbearance-related advances, losses, and operational costs.

Models, of course, are only as good as their underlying data and assumptions. When it comes to forbearance modeling, those assumptions obviously have a lot to do with unemployment, but also with the forbearance take-up rate layered on top of more conventional assumptions around rates of delinquency, cures, modifications, and bankruptcies.

The unique nature of this crisis requires modelers to expand their horizons in search of applicable data. For example, GSE data showing how delinquencies trend in rising unemployment scenarios might need to be supplemented by data from Greek or other European crises to better simulate extraordinarily high unemployment rates. Expense and liquidation timing assumptions will likely require looking at GSE and private-label data from the 2008 crisis. Having reliable assumptions around these is critically important because liquidity issues associated with servicing advances are often more an issue of timing than of anything else.

Model adjustments of the magnitude necessary to align them with current conditions almost certainly qualify as “material changes” and present a unique set of challenges to model validators. In addition to confronting an expanded workload brought on by having to re-validate models that might have been validated as recently as a few months ago, validators must also effectively challenge the new assumptions themselves. This will likely prove challenging absent historical context.

RiskSpan’s David Andrukonis will address many of these challenges—particularly as they relate to CECL modeling—as he participates in a free webinar, “Model Risk Management and the Impacts of COVID-19,” sponsored by the Risk Management Association. Perhaps fittingly, this webinar will run concurrent with the MBA webinar discussed above.

As is always the case, the smoothness of these model-change validations will depend on the lengths to which modelers are willing to go to thoroughly document their justifications for the new assumptions. This becomes particularly important when introducing assumptions that significantly differ from those that have been used previously. While it will not be difficult to defend the need for changes, justifying the individual changes themselves will prove more challenging. To this end, meticulously documenting every step of feature selection during the modeling process is critical not only in getting to a reliable model but also in ensuring an efficient validation process.

Documenting what they’re doing and why they’re doing it is no modeler’s favorite part of the job—particularly when operating in crisis mode and just trying to stand up a workable solution as quickly as possible. But applying assumptions that have never been used before always attracts increased scrutiny. Modelers will need to get into the habit of memorializing not only the decisions made regarding data and assumptions, but also the other options considered and why those options were ultimately passed over.

Documenting this decision-making process is far easier at the time it happens, while the details are fresh in a modeler’s mind, than several months down the road when people inevitably start probing.

Invest in the “ounce of prevention” now. You’ll thank yourself when model validation comes knocking.


Webinar: Applying Model Validation Principles to Anti-Money Laundering Tools


This webinar will explore some of the more efficient ways we have encountered for applying model validation principles to AML tools, including:

  • Ensuring that the rationale supporting rules and thresholds is sufficiently documented 
  • Applying above-the-line and below-the-line testing to an effective benchmarking regime 
  • Assessing the relevance of rules that are seldom triggered or frequently overridden 


About The Hosts

Timothy Willis

Managing Director – RiskSpan

Timothy Willis is head of RiskSpan’s Governance and Controls Practice, with a particular focus on model risk management. He is an experienced engagement manager, financial model validator and mortgage industry analyst who regularly authors and oversees the delivery of technical reports tailored to executive management and regulatory audiences.

Tim has directed projects validating virtually every type of model used by banks. He has also developed business requirements and improved processes for commercial banks of all sizes, mortgage banks, mortgage servicers, Federal Home Loan Banks, rating agencies, Fannie Mae, Freddie Mac, and U.S. Government agencies.

Susan Devine, CAMS, CPA

Senior Consultant – Third Pillar Consulting

Susan has more than twenty years of experience as an independent consultant providing business analysis, financial model validations, anti-money laundering reviews in compliance with the Bank Secrecy Act, and technical writing to government and commercial entities. Her experience includes developing and documenting business processes, business requirements, security requirements, computer systems, networks, systems development lifecycle activities, and financial models. Her business process work includes business process reviews, security plans in compliance with NIST and GISRA, Sarbanes-Oxley compliance documents, Dodd-Frank annual stress testing, functional and technical requirements for application development projects, and policies, standards, and operating procedures for business and technology processes.

Chris Marsten

Financial and Data Analyst – RiskSpan

Chris is a financial and data analyst at RiskSpan where he develops automated analytics and reporting for client loan portfolios and provides data analysis in support of model validation projects. He also possesses extensive experience writing ETL code and automating manual processes. Prior to coming to RiskSpan, he developed and managed models for detecting money laundering and terrorist activity for Capital One Financial Corporation, where he also forecasted high-risk customer volumes and created an alert investigation tool for identifying suspicious customers and transactions.


Webinar: Building and Running an Efficient Model Governance Program


Join RiskSpan model governance expert Tim Willis for a webinar about running an efficient model governance program. This webinar will cover essential elements of a model risk management policy, including how to devise policies for open-source models and other applications not easily categorized, best practices for building and maintaining a model inventory, and tips for assigning appropriate risk ratings to models and determining validation frequency.


About The Host

Timothy Willis

Managing Director – RiskSpan

Timothy Willis is head of RiskSpan’s Governance and Controls Practice, with a particular focus on model risk management. He is an experienced engagement manager, financial model validator and mortgage industry analyst who regularly authors and oversees the delivery of technical reports tailored to executive management and regulatory audiences.


Webinar: Managing Down Model Validation Costs


Learn how to make your model validation budget go further. In this webinar, you’ll learn about balancing internal and external resources, prioritizing the models that carry the most risk, and documenting in ways that facilitate the process.


About The Hosts

Timothy Willis

Managing Director – RiskSpan

Timothy Willis is an experienced engagement manager, financial model validator and mortgage industry analyst who regularly authors and oversees the delivery of technical reports tailored to executive management and regulatory audiences. Tim has directed projects validating virtually every type of model used by banks. He has also developed business requirements and improved processes for commercial banks of all sizes, mortgage banks, mortgage servicers, Federal Home Loan Banks, rating agencies, Fannie Mae, Freddie Mac, and U.S. Government agencies.

Nick Young

Director of Model Risk Management

Nick Young has more than ten years of experience as a quantitative analyst and economist. At RiskSpan, he performs model validation, development and governance on a wide variety of models including those used for Basel capital planning, reserve/impairment, Asset Liability Management (ALM), CCAR/DFAST stress testing, credit origination, default, prepayment, market risk, Anti-Money Laundering (AML), fair lending, fraud and account management.


eBook: A Validator’s Guide to Model Risk Management


Learn from RiskSpan model validation experts what constitutes a model, considerations for validating vendor models, how to prepare, how to determine scope, comparisons of performance metrics, and considerations for evaluating model inputs.


Model Validation Programs – Optimizing Value in Model Risk Groups

Watch RiskSpan Managing Director Tim Willis discuss how to optimize model validation programs. RiskSpan’s model risk management practice has experience both building and validating models, giving us unique expertise to provide high-quality validations without diving into activities and exercises of marginal value.

 


 

