Blog Archives

RiskSpan Adds CRE, C&I Loan Analytics to Edge Platform

ARLINGTON, Va., March 23, 2023 – RiskSpan, a leading technology company and the most comprehensive source for data management and analytics for mortgage and structured products, has announced the addition of commercial real estate (CRE) and commercial and industrial (C&I) loan data intake, valuation, and risk analytics to its award-winning Edge Platform. This enhancement complements RiskSpan’s existing residential mortgage toolbox and gives clients a comprehensive set of tools for constructing and managing diverse credit portfolios.

Now more than ever, banks and credit portfolio managers need tools to construct well-diversified credit portfolios that are resilient to rate moves and to know the fair market values of their diverse credit assets.

The new support for CRE and C&I loans on the Edge Platform further cements RiskSpan’s position as a single-source provider for loan pricing and risk management analytics across multiple asset classes. The Edge Platform’s AI-driven Smart Mapping (tape cracking) tool lets clients easily work with CRE and C&I loan data from any format. Its forecasting tools let clients flexibly segment loan datasets and apply performance and pricing assumptions by segment to generate cash flows, pricing and risk analytics.

CRE and C&I loans have long been supported by the Edge Platform’s credit loss accounting module, where users provided such loans in the Edge standard data format. The new Smart Mapping support simplifies data intake, and the new support for valuation and risk (including market risk) analytics for these assets makes Edge a complete toolbox for constructing and managing diverse portfolios that include CRE and C&I loans. These tools include cash flow projections with loan-level precision and stress testing capabilities. They empower traders and asset managers to visualize the risks associated with their portfolios like never before and make more informed decisions about their investments.

Comprehensive details of this and other new capabilities are available by requesting a no-obligation demo at riskspan.com.

### 

About RiskSpan, Inc. 

RiskSpan offers cloud-native SaaS analytics for on-demand market risk, credit risk, pricing and trading. With our data science experts and technologists, we are the leader in data as a service and end-to-end solutions for loan-level data management and analytics.

Our mission is to be the most trusted and comprehensive source of data and analytics for loans and structured finance investments. Learn more at www.riskspan.com.


Improving the Precision of MSR Pricing Using Cloud-Native Loan-Level Analytics (Part I)

Traditional MSR valuation approaches based on rep lines and loan characteristics important primarily to prepayment models fail to adequately account for the significant impact of credit performance on servicing cash flows – even on Agency loans. Incorporating both credit and prepayment modeling into an MSR valuation regime requires a loan-by-loan approach—rep lines are simply insufficient to capture the necessary level of granularity. Performing such an analysis while evaluating an MSR portfolio containing hundreds of thousands of loans for potential purchase has historically been viewed as impractical. But thanks to today’s cloud-native technology, loan-level MSR portfolio pricing is not just practical but cost-effective.

Introduction

Mortgage Servicing Rights (MSRs) entitle the asset owner to receive a monthly fee in return for providing billing, collection, collateral management and recovery services with respect to a pool of mortgages on behalf of the beneficial owner(s) of those mortgages. This servicing fee consists primarily of two components based on the current balance of each loan: a base servicing fee (commonly 25bps of the loan balance) and an excess servicing fee. The latter is simply the difference between each loan’s note rate and the sum of the pass-through rate of interest and the base servicing fee. The value of a portfolio of MSRs is determined by modeling the projected net cash flows to the owner and discounting them to the present using one of two methodologies:

  1. Static or Single-Path Pricing: A single series of net servicing cash flows is generated using current interest and mortgage rates. These cash flows are discounted to a present value using a discount rate reflecting current market conditions.
  2. Stochastic or Option-Adjusted Spread (OAS) Pricing: Recognizing that interest rates will vary over time, a statistical simulation of interest rates is used to generate many time series (typically 250 to 1,000) of net servicing cash flows. Each time series of cash flows is discounted at a specified spread over a simulated base curve (generally the LIBOR or Treasury curve), and the resulting present values are averaged across all of the paths.

While these two pricing methodologies have different characteristics and are based on very different conceptual frameworks, they both strongly depend on the analyst’s ability to generate reliable forecasts of net servicing cash flows. As the focus of this white paper is to discuss the key factors that determine those net cash flows, we are indifferent here as to the ultimate methodology used to convert them into a present value and for simplicity will look to project a single path of net cash flows. RiskSpan’s Edge Platform supports both static and OAS pricing, and RiskSpan’s clients use each, and sometimes both, to value their mortgage instruments.
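To make the two pricing methodologies concrete, the sketch below values a single illustrative stream of projected net servicing cash flows both ways. The cash flow levels, discount rate, spread, and simulated rate paths are invented placeholders rather than RiskSpan model output; in practice these come from prepayment and credit models and a term-structure simulation.

    import numpy as np

    def static_price(net_cash_flows, discount_rate):
        """Single-path (static) pricing: discount one projected series of
        monthly net servicing cash flows at a single market discount rate."""
        months = np.arange(1, len(net_cash_flows) + 1)
        return np.sum(net_cash_flows / (1 + discount_rate / 12) ** months)

    def oas_price(path_cash_flows, path_short_rates, oas):
        """Stochastic (OAS) pricing: discount each simulated path's cash flows
        at the simulated base rate plus a spread, then average across paths."""
        monthly_rates = (path_short_rates + oas) / 12
        discount_factors = np.cumprod(1 / (1 + monthly_rates), axis=1)
        path_values = np.sum(path_cash_flows * discount_factors, axis=1)
        return path_values.mean()

    # Illustrative inputs only
    cf = np.full(120, 35.0)                      # $35/month of net servicing cash flow for 10 years
    print(static_price(cf, 0.08))                # static value at an 8% annual discount rate

    rng = np.random.default_rng(0)
    n_paths = 250
    shocks = rng.normal(0, 0.0015, size=(n_paths, 120)).cumsum(axis=1)
    rates = 0.04 + shocks                        # crude stand-in for simulated short-rate paths
    paths_cf = np.tile(cf, (n_paths, 1))         # in practice, cash flows differ along each path
    print(oas_price(paths_cf, rates, oas=0.03))  # OAS value averaged across the paths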

Modeling Mortgage Cash Flows

Residential mortgages are complex financial instruments. While they are, at their heart, fixed income instruments with a face amount and a fixed or floating rate of interest, the ability of borrowers to voluntarily prepay at any time adds significant complexity. This prepayment option can be triggered by an economic incentive to refinance into a lower interest rate, by a decision to sell the underlying property, or by a change in life circumstances leading the borrower to pay off the mortgage but retain the property.

The borrower also has a non-performance option. Though not usually exercised voluntarily, forbearance options made available to borrowers in response to Covid permitted widespread voluntary exercise of this option without meaningful negative consequences to borrowers. This non-performance option ranges from something as simple as a single late payment up to cessation of payments entirely and forfeiture of the underlying property. Forbearance (a payment deferral on a mortgage loan permitted by the servicer or by regulation, such as the COVID-19 CARES Act) became a major factor in understanding the behavior of mortgage cash flows in 2020. Should a loan default, ultimate recovery depends on a variety of factors, including the loan-to-value ratio, external credit support such as primary mortgage insurance, and the costs and servicer advances paid from liquidation proceeds.

Both the prepayment and credit performance of mortgage loans are estimated with statistical models that draw their structure and parameters from an extremely large dataset of historical performance. Because these models are estimated from backward-looking experience, analysts often adjust them to reflect their own experience and expectations for the future.

Investors in GSE-guaranteed mortgage pass-through certificates are exposed to voluntary and, to a far lesser extent, involuntary (default) prepayments of the underlying mortgages. If the certificates were purchased at a premium and prepayments exceed expectations, the investor’s yield will be reduced. Conversely, if the certificates were purchased at a discount and prepayments accelerate, the investor’s yield will increase. Guaranteed pass-through certificate investors are not exposed to the credit performance of the underlying loans except to the extent that delinquencies may suppress voluntary prepayments. Involuntary prepayments and early buyouts of delinquent loans from MBS pools are analogous to prepayments from a cash flow perspective when it comes to guaranteed Agency securities.

Investors in non-Agency securities and whole loans are exposed to the same prepayment risk as guaranteed pass-through investors, but they are also exposed to the credit performance of each loan. And MSR investors are exposed to credit risk irrespective of whether the loans they service are guaranteed. Here is why. The mortgage servicing fee can be simplistically represented by an interest-only (IO) strip carved off of the interest payments on a mortgage, with net MSR cash flows obtained by subtracting a fixed servicing cost. Securitized IOs are exposed to the same factors as pass-through certificates, but their sensitivity to those factors is many times greater because a prepayment constitutes the termination of all further cash flows – no principal is received. Consequently, returns on IO strips are very volatile and sensitive to interest rates via the borrower’s prepayment incentive.
While subtracting fixed costs from the servicing fee is still a common method of generating net MSR cash flows, it is a very imprecise methodology, subject to significant error. The largest component of this error arises from the fact that servicing cost is highly sensitive to the credit state of a mortgage loan. Is the loan current, requiring no intervention on the part of the servicer to obtain payment, or is the loan delinquent, triggering additional, and potentially costly, servicer processes that attempt to restore the loan to current? Is it seriously delinquent, requiring a still higher level of intervention, or in default, necessitating a foreclosure and liquidation effort?

According to the Mortgage Bankers Association, the cost of servicing a non-performing loan ranged from eight to twenty times the cost of servicing a performing loan during the ten-year period from 2009 to 1H2019 (Source: Servicing Operations Study and Forum; PGR 1H2019). Using 2014 as the mid-point of both this ratio and the time period under consideration, the direct cost of servicing a performing loan was $156, compared to $2,000 for a non-performing loan. Averaged across both performing and non-performing loans, direct servicing costs were $171 per loan, with an additional cost of $31 per loan arising from unreimbursed expenditures related to foreclosure, REO and other costs, plus an estimated $58 per loan of corporate administration expense, totaling $261 per loan. The average loan balance of FHLMC and FNMA loans in 2014 was approximately $176,000, translating to an annual base servicing fee of $440.

The margins illustrated by these figures demonstrate the extreme sensitivity of net servicing cash flows to the credit performance of the MSR portfolio. After prepayments, credit performance is the most important factor determining the economic return from investing in MSRs. A 1% increase in non-performing loans from the 10-year average of 3.8% results in a $20 per loan net cash flow decline across the entire portfolio. Consequently, for servicers who purchase MSR portfolios, careful integration of credit forecasting models into the MSR valuation process, particularly for portfolio acquisitions, is critical. RiskSpan’s MSR engine integrates both prepayment and credit models, permitting the precise estimation of net cash flows to MSR owners.

The primary process affecting the cash inflow to the servicer is prepayment; when a loan prepays, the servicing fee is terminated. The cash outflow side of the equation depends on a number of factors:

  1. First and foremost, direct servicing cost is extremely sensitive to loan performance. The direct cost of servicing rises rapidly as delinquency status becomes increasingly severe. The direct cost of servicing a 30-day delinquent loan varies by servicer but can be as high as 350% of the cost of servicing a performing loan, rising to 600% of a performing loan’s cost at 60 days delinquent.
  2. Increasing delinquency causes other costs to escalate, including the cost of principal and interest as well as tax and escrow advances, non-reimbursable collateral protection, foreclosure and liquidation expenses. Float decreases, reducing interest earnings on cash balances.

    Figure: Average servicing cost by delinquency state, as supplied by several leading servicers of Agency and non-Agency mortgages.
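As a quick arithmetic check on the sensitivity cited above (the cost figures are those given in the text; the $20-per-loan number is approximate):

    cost_performing = 156        # direct annual cost of servicing a performing loan (2014, per MBA study)
    cost_nonperforming = 2_000   # direct annual cost of servicing a non-performing loan
    npl_shift = 0.01             # a 1% increase in the share of non-performing loans

    # Spread across the whole portfolio, each 1% shift into non-performing status changes
    # the average per-loan servicing cost by roughly 1% of the cost difference.
    delta_per_loan = npl_shift * (cost_nonperforming - cost_performing)
    print(round(delta_per_loan))          # ~18, i.e., roughly the $20-per-loan decline cited above

    annual_base_fee = 176_000 * 0.0025    # 25bps on the average 2014 agency loan balance
    print(annual_base_fee)                # 440.0 -- the margin against which those costs are absorbed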


RiskSpan’s MSR platform incorporates the full range of input parameters necessary to fully characterize the positive and negative cash flows arising from servicing. Positive cash flows include the servicing and other fees collected directly from borrowers as well as various types of ancillary and float income. Major contributors to negative cash flows include direct labor costs associated with performing servicing activities as well as unreimbursed foreclosure and liquidation costs, compensating interest and costs associated with financing principal, interest and escrow advances on delinquent loans. The net cash flows determined at the loan level are aggregated across the entire MSR portfolio and the client’s preferred pricing methodology is applied to calculate a portfolio value.
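The skeleton below illustrates the general shape of such a loan-level calculation: project each loan’s monthly fee income, subtract a servicing cost keyed to its modeled delinquency state, and discount and sum the results across the portfolio. The field names, cost schedule, and simplified credit-state paths are hypothetical placeholders, not the Edge Platform’s actual inputs or models.

    from dataclasses import dataclass

    # Hypothetical monthly direct servicing cost by credit state
    MONTHLY_COST = {"current": 13, "d30": 45, "d60": 80, "seriously_dq": 165}

    @dataclass
    class Loan:
        balance: float
        note_rate: float        # gross note rate
        pass_through: float     # investor pass-through rate
        base_fee: float = 0.0025

    def monthly_net_cash_flow(loan: Loan, state: str) -> float:
        """Servicing strip (base + excess fee) on current balance, less the state-dependent cost."""
        base = loan.balance * loan.base_fee / 12
        excess = loan.balance * (loan.note_rate - loan.pass_through - loan.base_fee) / 12
        return base + excess - MONTHLY_COST[state]

    def portfolio_value(loans, state_paths, discount_rate=0.08):
        """Discount each loan's projected net cash flows and sum across the portfolio.
        state_paths[i] is the modeled monthly credit-state path for loans[i]."""
        value = 0.0
        for loan, states in zip(loans, state_paths):
            for month, state in enumerate(states, start=1):
                value += monthly_net_cash_flow(loan, state) / (1 + discount_rate / 12) ** month
        return value

    loans = [Loan(250_000, 0.0450, 0.0400), Loan(180_000, 0.0525, 0.0450)]
    paths = [["current"] * 12, ["current"] * 6 + ["d30"] * 3 + ["d60"] * 3]
    print(round(portfolio_value(loans, paths), 2))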




Aggregation of MSR Portfolio Cash Flows – Loan-by-Loan vs “Rep Lines”

Historically, servicer net cash flows were determined using a simple methodology in which the base servicing fee was reduced by the servicing cost, and forecast prepayments were projected using a prepayment model. The impact of credit performance on net cash flows was explicitly considered by only a minority of practitioners.

Because servicing portfolios can contain hundreds of thousands or millions of loans, the computational challenge of generating net servicing cash flows was quite high. As the industry moved increasingly towards using OAS pricing and risk methodologies to evaluate MSRs, this challenge was multiplied by 250 to 1,000, depending on the number of paths used in the stochastic simulation.

In order to make the computational challenge more tractable, loans in large portfolios have historically been allocated to buckets according to the values of the loan characteristics that most explained performance. In a framework that considered prepayment risk to be the major factor affecting MSR value, the superset of characteristics that mattered was the set of inputs to the prepayment model. This superset was then winnowed down to a handful of characteristics considered most explanatory. Each bucket would then be converted to a “rep line” representing the average of the prepayment-model inputs across the loans in that bucket.
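A minimal sketch of that bucketing step using pandas: loans are grouped on coarse ranges of a few prepayment-relevant characteristics, and each bucket collapses into a balance-weighted rep line. The column names and bucket boundaries are hypothetical.

    import pandas as pd

    loans = pd.DataFrame({
        "balance":   [250_000, 240_000, 410_000, 95_000],
        "note_rate": [0.0375, 0.0380, 0.0500, 0.0510],
        "fico":      [745, 752, 688, 694],
        "age_mo":    [14, 16, 40, 44],
    })

    # Bucket on coarse ranges of the characteristics the prepayment model cares about
    loans["rate_bkt"] = (loans["note_rate"] / 0.0025).round() * 0.25   # nearest 25bp, in percent
    loans["fico_bkt"] = (loans["fico"] // 20) * 20                     # 20-point FICO bands
    loans["age_bkt"]  = (loans["age_mo"] // 12) * 12                   # loan-age bands in years

    def wavg(group, col):
        return (group[col] * group["balance"]).sum() / group["balance"].sum()

    rep_lines = (
        loans.groupby(["rate_bkt", "fico_bkt", "age_bkt"])
             .apply(lambda g: pd.Series({
                 "balance":   g["balance"].sum(),
                 "note_rate": wavg(g, "note_rate"),
                 "fico":      wavg(g, "fico"),
                 "age_mo":    wavg(g, "age_mo"),
             }))
             .reset_index()
    )
    print(rep_lines)   # one rep line per bucket, ready to feed the prepayment model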




Medium-sized servicers historically might have created 500 to 1,500 rep lines to represent their portfolio. Large servicers today may use tens of thousands.

The core premise supporting the distillation of a large servicing portfolio into a manageable number of rep lines is that each bucket represents a homogenous group of loans that will perform similarly, so that the aggregated net cash flows derived from the rep lines will approximate the performance of the sum of all the individual loans to a desired degree of precision.

The degree of precision obtained from using rep lines was acceptable for valuing going-concern portfolios, particularly if variations in the credit of individual loans and the impact of credit on net cash flows were not explicitly considered.  Over time, movement in MSR portfolio values would be driven mostly by prepayments, which themselves were driven by interest rate volatility. If the modeled value diverged sufficiently from “fair value” or a mark provided by an external provider, a valuation adjustment might be made and reported, but this was almost always a result of actual prepayments deviating from forecast.

Once an analyst looks to incorporate credit performance into MSR valuation, the number of meaningful explanatory loan characteristics grows sharply.  Not only must one consider all the variables that are used to project a mortgage’s cash flows according to its terms (including prepayments), but it also becomes necessary to incorporate all the factors that help one project exercise of the “default option.” Suddenly, the number of loans that could be bucketed together and be considered homogenous with respect to prepayment and credit performance would drop sharply; the number of required buckets would increase dramatically – to the point where the number of rep lines begins to rival the number of loans. The sheer computational power needed for such complex processing has only recently become available to most practitioners and requires a scalable, cloud-native solution to be cost effective.

Two significant developments have forced mortgage servicers to more precisely project net mortgage cash flows:

  1. As the accumulation of MSRs by large market participants through outright purchase, rather than through loan origination, has grown dramatically, imprecision in valuation has become less tolerable because it can result in the servicer bidding too low or too high for a servicing package.
  2. FASB Accounting Standards Update 2016-13 obligated entities holding “financial assets and net investment in leases that are not accounted for at fair value through net income” to estimate expected credit losses over the life of the asset. While the Standard does not necessarily apply to MSRs because most MSR investors account for the asset at fair value and flow fair-value mark-to-market through income, it did lead to a statement from the major regulators:

“If a financial asset does not share risk characteristics with other financial assets, the new accounting standard requires expected credit losses to be measured on an individual asset basis.” 

(Source: Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, National Credit Union Administration, and Office of the Comptroller of the Currency. “Joint Statement on the New Accounting Standard on Financial Instruments – Credit Losses.” June 17, 2016.)

The result of these developments is that a number of large servicers are revisiting their bucketing methodologies and considering using loan-level analyses to better incorporate the impact of credit on MSR value, particularly when purchasing new packages of MSRs. By enabling MSR investors to re-combine and re-aggregate cash flow results on the fly, loan-level projections open the door to a host of additional, scenario-based analytics. RiskSpan’s cloud-native Edge Platform is uniquely positioned to support these emerging methodologies because it was envisioned and built from the ground up as a loan-level analytical engine. The flexibility afforded by its parallel computing framework allows for complex net-cash-flow calculations on hundreds of thousands of individual mortgage loans simultaneously. The speed and scalability this affords makes the Edge Platform ideally suited for pricing even the largest portfolios of MSR assets and making timely trading decisions with confidence.


In Part II of this series, we will delve into how property-level risk characteristics—factors that are not easily rolled up into portfolio rep lines and must be evaluated at the loan level—impact credit risk and servicing cash flows. We will also quantify the impact of a loan-level analysis incorporating these factors on an MSR valuation.

Contact us to learn more.


RiskSpan a Winner of HousingWire’s Tech100 Award

For the third consecutive year, RiskSpan is a winner of HousingWire’s prestigious annual HW Tech100 Mortgage award, recognizing the most innovative technology companies in the housing economy.

The recognition is the latest in a parade of 2021 wins for the data and analytics firm, whose unique blend of tech and talent enables traders and portfolio managers to transact quickly and intelligently to find opportunities. RiskSpan’s comprehensive solution also provides risk managers with modeling capabilities and seamless access to the timely data they need to do their jobs effectively.

“I’ve been involved in choosing Tech100 winners since we started the program in 2014, and every year it manages to get more competitive,” HousingWire Editor in Chief Sarah Wheeler said. “These companies are truly leading the way to a more innovative housing market!”

Other major awards collected by RiskSpan and its flagship Edge Platform in 2021 include winning Chartis Research’s “Risk as a Service” category and being named “Buy-side Market Risk Management Product of the Year” by Risk.net.

RiskSpan’s cloud-native Edge platform is valued by users seeking to run structured products analytics fast and granularly. It provides a one-stop shop for models and analytics that previously had to be purchased from multiple vendors. The platform is supported by a first-rate team, most of whom come from industry and have walked in the shoes of our clients.

“After the uncertainty and unpredictability of last year, we expected a greater adoption of technology. However, these 100 real estate and mortgage companies took digital disruption to a whole new level and propelled a complete digital revolution, leaving a digital legacy that will impact borrowers, clients and companies for years to come,” said Brena Nath, HousingWire’s HW+ Managing Editor. “Knowing what these companies were able to navigate and overcome, we’re excited to announce this year’s list of the most innovative technology companies serving the mortgage and real estate industries.”


Get in touch with us to explore why RiskSpan is a best-in-class partner for data and analytics in mortgage and structured finance. 

HousingWire is the most influential source of news and information for the U.S. mortgage and housing markets. Built on a foundation of independent and original journalism, HousingWire reaches over 60,000 newsletter subscribers daily and over 1.0 million unique visitors each month. Our audience of mortgage, real estate and fintech professionals relies on us to Move Markets Forward. Visit www.housingwire.com or www.solutions.housingwire.com to learn more.


Model Validation Programs – Optimizing Value in Model Risk Groups

Watch RiskSpan Managing Director Tim Willis discuss how to optimize model validation programs. RiskSpan’s model risk management practice has experience in both building and validating models, giving us unique expertise to provide high-quality validations without diving into activities and exercises of marginal value.

 


 


Here Come the CECL Models: What Model Validators Need to Know

As it turns out, model validation managers at regional banks didn’t get much time to contemplate what they would do with all their newly discovered free time. Passage of the Economic Growth, Regulatory Relief, and Consumer Protection Act appears to have relieved many model validators of the annual DFAST burden. But as one class of models exits the inventory, a new class enters—CECL models.

Banks everywhere are nearing the end of a multi-year scramble to implement a raft of new credit models designed to forecast life-of-loan performance for the purpose of determining appropriate credit-loss allowances under the Financial Accounting Standards Board’s new Current Expected Credit Loss (CECL) standard, which takes full effect in 2020 for public filers and 2021 for others.

The number of new models CECL adds to each bank’s inventory will depend on the diversity of its asset portfolios. More asset classes and more segmentation will mean more models to validate. Generally, model risk managers should count on having to validate at least one CECL model for every loan and debt security type (residential mortgage, CRE, plus all the various subcategories of consumer and C&I loans), plus potentially any challenger models the bank may have developed.

In many respects, tomorrow’s CECL model validations will simply replace today’s allowance for loan and lease losses (ALLL) model validations. But CECL models differ from traditional allowance models. Under the current standard, allowance models typically forecast losses over a one-to-two-year horizon. CECL requires a life-of-loan forecast, and a model’s inputs are explicitly constrained by the standard. Accounting rules also dictate how a bank may translate the modeled performance of a financial asset (the CECL model’s outputs) into an allowance. Model validators need to be just as familiar with the standards governing how these inputs and outputs are handled as they are with the conceptual soundness and mathematical theory of the credit models themselves.

CECL Model Inputs – And the Magic of Mean Reversion

Not unlike DFAST models, CECL models rely on a combination of loan-level characteristics and macroeconomic assumptions. Macroeconomic assumptions are problematic with a life-of-loan credit loss model (particularly with long-lived assets—mortgages, for instance) because no one can reasonably forecast what the economy is going to look like six years from now. (No one really knows what it will look like six months from now, either, but we need to start somewhere.) The CECL standard accounts for this reality by requiring modelers to consider macroeconomic input assumptions in two separate phases: 1) a “reasonable and supportable” forecast covering the time frame over which the entity can make or obtain such a forecast (two or three years is emerging as common practice for this time frame), and 2) a “mean reversion” forecast based on long-term historical averages for the out years. As an alternative to mean reverting by the inputs, entities may instead bypass their models in the out years and revert to long-term average performance outcomes by the relevant loan characteristics.

Assessing these assumptions (and others like them) requires a model validator to simultaneously wear a “conceptual soundness” testing hat and an “accounting policy” compliance hat. Because the purpose of the CECL model is to produce an accounting answer and satisfy an accounting requirement, what can validators reasonably conclude when confronted with an assumption that may seem unsound from a purely statistical point of view but nevertheless satisfies the accounting standard?

Taking the mean reversion requirement as an example, the projected performance of loans and securities beyond the “reasonable and supportable” period is permitted to revert to the mean in one of two ways: 1) modelers can feed long-term history into the model by supplying average values for macroeconomic inputs, allowing modeled results to revert to long-term means in that way, or 2) modelers can mean revert “by the outputs” – bypassing the model and populating the remainder of the forecast with long-term average performance outcomes (prepayment, default, recovery and/or loss rates, depending on the methodology). Either of these approaches could conceivably result in a modeler relying on assumptions that are defensible from an accounting perspective despite being statistically dubious, but the first is particularly likely to raise a validator’s eyebrow.

The loss rates that a model will predict when fed “average” macroeconomic input assumptions are always going to be uncharacteristically low. (Because credit losses are generally large in bad macroeconomic environments and low in average and good environments, long-term average credit losses are higher than the credit losses that occur during average environments. A model tuned to this reality—and fed one path of “average” macroeconomic inputs—will return credit losses substantially lower than long-term average credit losses.) A credit risk modeler is likely to think that these are not particularly realistic projections, but an auditor following the letter of the standard may not find any fault with them. In such situations, validators need to land somewhere in between these two extremes—keeping in mind that the underlying purpose of CECL models is to reasonably fulfill an accounting requirement—before hastily issuing a series of high-risk validation findings.
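The two reversion mechanics can be sketched as follows. The model, forecast window, long-run averages, and loss rates are all stylized placeholders; the point is simply to show where the reversion happens: in the inputs fed to the model versus in the outputs that bypass it.

    R_S_QUARTERS = 8          # "reasonable and supportable" window (e.g., two years)
    LIFE_QUARTERS = 24        # remaining life of the portfolio

    forecast_unemp = [3.9, 4.0, 4.1, 4.3, 4.4, 4.5, 4.6, 4.7]   # entity's own macro forecast
    long_run_unemp = 5.8                                        # long-term historical average input
    long_run_loss_rate = 0.004                                  # long-term average quarterly loss rate

    def loss_model(unemployment):
        """Stand-in for the bank's credit model: quarterly loss rate as a function of macro inputs."""
        return max(0.0005, 0.002 * (unemployment - 3.0) / 3.0)

    # Approach 1: revert the INPUTS -- extend the macro path with its long-run average
    # and keep running the model over the full life of the assets.
    inputs_path = forecast_unemp + [long_run_unemp] * (LIFE_QUARTERS - R_S_QUARTERS)
    ecl_reverted_inputs = sum(loss_model(u) for u in inputs_path)

    # Approach 2: revert the OUTPUTS -- run the model only through the R&S window,
    # then bypass it and apply long-run average loss rates for the out years.
    ecl_reverted_outputs = (sum(loss_model(u) for u in forecast_unemp)
                            + long_run_loss_rate * (LIFE_QUARTERS - R_S_QUARTERS))

    print(round(ecl_reverted_inputs, 4), round(ecl_reverted_outputs, 4))
    # Feeding one "average" input path to a model tuned to asymmetric credit outcomes
    # (Approach 1) produces lifetime losses below the long-run average rate, as noted above.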

CECL Model Outputs: What Are They?

CECL models differ from some other models in that the allowance (the figure that modelers are ultimately tasked with getting to) is not itself a direct output of the underlying credit models being validated. The expected losses that emerge from the model must be subject to a further calculation in order to arrive at the appropriate allowance figure. Whether these subsequent calculations are considered within the scope of a CECL model validation is ultimately going to be an institutional policy question, but it stands to reason that they would be.

Under the CECL standard, banks will have two alternatives for calculating the allowance for credit losses: 1) the allowance can be set equal to the sum of the expected credit losses (as projected by the model), or 2) the allowance can be set equal to the cost basis of the loan minus the present value of expected cash flows. While a validator would theoretically not be in a position to comment on whether the selected approach is better or worse than the alternative, principles of process verification would dictate that the validator ought to determine whether the selected approach is consistent with internal policy and that it was computed accurately.
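A toy comparison of the two calculations for a single loan, using invented cash flows and an assumed effective interest rate, appears below; the two alternatives will generally not produce identical figures.

    amortized_cost = 100_000.0       # cost basis of the loan
    eir = 0.05                       # effective interest rate used for discounting (annual)

    # Modeled annual cash flows net of expected credit losses, and the losses themselves (illustrative)
    expected_cash_flows = [23_000, 22_900, 22_800, 22_700, 22_600]
    expected_credit_losses = [200, 250, 300, 300, 250]

    # Alternative 1: allowance = sum of the expected credit losses projected by the model
    allowance_loss_sum = sum(expected_credit_losses)

    # Alternative 2: allowance = cost basis minus the present value of expected cash flows
    pv = sum(cf / (1 + eir) ** t for t, cf in enumerate(expected_cash_flows, start=1))
    allowance_dcf = amortized_cost - pv

    print(allowance_loss_sum, round(allowance_dcf, 2))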

When Policy Trumps Statistics

The selection of a mean reversion approach is not the only area in which a modeler may make a statistically dubious choice in favor of complying with accounting policy.

Discount Rates

Translating expected losses into an allowance using the present-value-of-future-cash-flows approach (option 2, above) obviously requires selecting an appropriate discount rate. What should it be? The standard stipulates the use of the financial asset’s Effective Interest Rate (or “yield,” i.e., the rate of return that equates an instrument’s cash flows with its amortized cost basis). Subsequent accounting guidance affords quite a bit of flexibility in how this rate is calculated. Institutions may use the yield that equates contractual cash flows with the amortized cost basis (we can call this the “contractual yield”), or the rate of return that equates cash flows adjusted for prepayment expectations with the cost basis (the “prepayment-adjusted yield”).

Using the contractual yield (which has been adjusted for neither prepayments nor credit events) to discount cash flows that have been adjusted for both prepayments and credit events causes the impact of prepayment risk to be commingled with credit risk in the allowance number. For any instrument whose cost basis is greater than its unpaid principal balance (a mortgage purchased at 102, for instance), prepayment risk will exacerbate the allowance. For any instrument whose cost basis is less than the unpaid principal balance, accelerations in repayment will offset the allowance. This flaw has been documented by FASB staff, with the FASB Board subsequently allowing, but not requiring, the use of a prepayment-adjusted yield.
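A sketch of the yield calculation described above: solve for the rate that equates a set of cash flows with the amortized cost basis, once using contractual cash flows (the contractual yield) and once using prepayment-adjusted cash flows (the prepayment-adjusted yield). The cash flows are invented for illustration, and a simple bisection stands in for whatever root-finder an institution actually uses.

    def pv(cash_flows, rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

    def solve_yield(cash_flows, cost_basis, lo=0.0, hi=1.0):
        """Bisection for the effective interest rate r such that pv(cash_flows, r) == cost_basis."""
        for _ in range(100):
            mid = (lo + hi) / 2
            if pv(cash_flows, mid) > cost_basis:
                lo = mid          # PV too high -> discount at a higher rate
            else:
                hi = mid
        return (lo + hi) / 2

    cost_basis = 102_000.0                      # e.g., a $100,000 mortgage purchased at 102
    contractual_cf = [6_500] * 9 + [106_500]    # stylized contractual interest plus final principal
    prepay_adj_cf  = [6_500] * 4 + [106_500]    # same loan assuming full prepayment in year 5

    contractual_yield = solve_yield(contractual_cf, cost_basis)
    prepay_adj_yield  = solve_yield(prepay_adj_cf, cost_basis)
    print(round(contractual_yield, 4), round(prepay_adj_yield, 4))
    # For a premium asset the prepayment-adjusted yield is the lower of the two; discounting
    # prepayment-adjusted cash flows at the higher contractual yield therefore depresses their
    # present value and inflates the allowance, which is the commingling described above.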

Multiple Scenarios

The accounting standard neither prohibits nor requires the use of multiple scenarios to forecast credit losses. Using multiple scenarios is likely more supportable from a statistical and model validation perspective, but it may be challenging for a validator to determine whether the various scenarios have been weighted properly to arrive at the correct, blended, “expected” outcome.
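The arithmetic of the blend itself is simple; the judgment lies in the weights. A minimal sketch with invented scenario losses and probabilities:

    scenarios = {               # lifetime expected credit losses under each macro scenario (illustrative)
        "baseline": 1_200_000,
        "adverse":  2_600_000,
        "upside":     700_000,
    }
    weights = {"baseline": 0.60, "adverse": 0.25, "upside": 0.15}

    assert abs(sum(weights.values()) - 1.0) < 1e-9, "scenario weights must sum to one"
    blended_ecl = sum(weights[name] * loss for name, loss in scenarios.items())
    print(blended_ecl)          # the blended "expected" outcome whose weighting the validator must assess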

Macroeconomic Assumptions During the “Reasonable and Supportable” Period

Attempting to quantitatively support the macro assumptions during the “reasonable and supportable” forecast window (usually two to three years) is likely to be problematic both for the modeler and the validator. Such forecasts tend to be more art than science, and validators are likely best off benchmarking them against what others are using rather than attempting to justify them with elaborately contrived quantitative methods. The data most likely to be used may turn out to be simply the data that is available. Validators must balance skepticism of such approaches with pragmatism. Modelers have to use something, and they can only use the data they have.

Internal Data vs. Industry Data

The standard allows for modeling using internal data or industry proxy data. Banks often operate under the dogma that internal data (when available) is always preferable to industry data. This seems reasonable on its face, but it only really makes sense for institutions whose internal data is sufficiently robust in terms of quantity and history. And the threshold for what constitutes “sufficiently robust” is not always obvious. Is one business cycle long enough? Are 10,000 loans enough? These questions do not have hard and fast answers.

———-

Many questions pertaining to CECL model validations do not yet have hard and fast answers. In some cases, the answers will vary by institution as different banks adopt different policies. For other questions, industry best practices will doubtless emerge. For the rest, model validators will need to rely on judgment, sometimes having to balance statistical principles with accounting policy realities. The first CECL model validations are around the corner. It’s not too early to begin thinking about how to address these questions.


A Brief Introduction to Agile Philosophy

Reducing time to delivery by developing in smaller incremental chunks and incorporating an ability to pivot is the cornerstone of Agile software development methodology.

“Agile” software development is a rarity among business buzz words in that it is actually a fitting description of what it seeks to accomplish. Optimally implemented, it is capable of delivering value and efficiency to business-IT partnerships by incorporating flexibility and an ability to pivot rapidly when necessary.

As a technology company with a longstanding management consulting pedigree, RiskSpan values the combination of discipline and flexibility inherent to Agile development and regularly makes use of the philosophy in executing client engagements. Dynamic economic environments contribute to business priorities that are seemingly in a near-constant state of flux. In response to these ever-evolving needs, clients seek to implement applications and application feature changes quickly and efficiently to realize business benefits early.

This growing need for speed and “agility” makes Agile software development methods an increasingly appealing alternative to traditional “waterfall” methodologies. Waterfall approaches move in discrete phases—treating analysis, design, coding, and testing as individual, stand-alone components of a software project. Historically, when the cost of changing plans was high, such a discrete approach worked best. Nowadays, however, technological advances have made changing the plan more cost-feasible. In an environment where changes can be made inexpensively, rigid waterfall methodologies become unnecessarily counterproductive for at least four reasons:

  1. When a project runs out of time (or money), individual critical phases—often testing—must be compressed, and overall project quality suffers.
  2. Because working software isn’t produced until the very end of the project, it is difficult to know whether the project is really on track prior to project completion.
  3. Not knowing whether established deadlines will be met until relatively late in the game can lead to schedule risks.
  4. Most important, discrete phase waterfalls simply do not respond well to the various ripple effects created by change.

 

Continuous Activities vs. Discrete Project Phases

Agile software development methodologies resolve these traditional shortcomings by applying techniques that focus on reducing overhead and time to delivery. Instead of treating fixed development stages as discrete phases, Agile treats them as continuous activities. Doing things simultaneously and continuously—for example, incorporating testing into the development process from day one—improves quality and visibility, while reducing risk. Visibility improves because being halfway through a project means that half of a project’s features have been built and tested, rather than having many partially built features with no way of knowing how they will perform in testing. Risk is reduced because feedback comes in from the earliest stages of development and changes can be made without paying exorbitant costs. This makes everybody happy.

 

Flexible but Controlled

Firms sometimes balk at Agile methods because of a tendency to equate “flexibility” and “agility” with a lack of organization and planning, weak governance and controls, and an abandonment of formal documentation. This, however, is a misconception. “Agile” does not mean uncontrolled—on the contrary, it is no more or less controlled than the existing organizational boundaries of standardized processes into which it is integrated. Most Agile methods do not advocate any particular methodology for project management or quality control. Rather, their intent is to simplify the software development approach, embrace changing business needs, and produce working software as quickly as possible. Thus, Agile frameworks are more like a shell that users have full flexibility to customize as necessary.

 

Frameworks and Integrated Teams

Agile methodologies can be implemented using a variety of frameworks, including Scrum, Kanban, and XP. Scrum is the most popular of these and is characterized by producing a potentially shippable set of functionalities at the end of every iteration in two-week time boxes called sprints. Delivering high-quality software at the conclusion of such short sprints requires supplementing team activities with additional best practices, such as automated testing, code cleanup and other refactoring, continuous integration, and test-driven or behavior-driven development.

Agile teams are built around motivated individuals subscribing to what is commonly referred to as a “lean Agile mindset.” Team members who embrace this mindset share a common vision and are motivated to contribute in ways beyond their defined roles to attain success. In this way, innovation and creativity are supported and encouraged. Perhaps most important, Agile promotes building relationships based on trust among team members and with the end-user customer in providing fast and high-quality delivery of software. When all is said and done, this is the aim of any worthwhile endeavor. When it comes to software development, Agile is showing itself to be an impressive means to this end.


Private-Label Securities – Technological Solutions to Information Asymmetry and Mistrust

At its heart, the failure of the private-label residential mortgage-backed securities (PLS) market to return to its pre-crisis volume is a failure of trust. Virtually every proposed remedy, in one way or another, seeks to create an environment in which deal participants can gain reasonable assurance that their counterparts are disclosing information that is both accurate and comprehensive. For better or worse, nine-figure transactions whose ultimate performance will be determined by the manner in which hundreds or thousands of anonymous people repay their mortgages cannot be negotiated on the basis of a handshake and reputation alone. The scale of these transactions makes manual verification both impractical and prohibitively expensive. Fortunately, the convergence of a stalled market with new technologies presents an ideal time for change and renewed hope to restore confidence in the system.

 

Trust in Agency-Backed Securities vs Private-Label Securities

Ginnie Mae guaranteed the world’s first mortgage-backed security nearly 50 years ago. The bankers who packaged, issued, and invested in this MBS could scarcely have imagined the technology that is available today. Trust, however, has never been an issue with Ginnie Mae securities, which are collateralized entirely by mortgages backed by the federal government—mortgages whose underwriting requirements are transparent, well understood, and consistently applied.

Further, the security itself is backed by the full faith and credit of the U.S. Government. This degree of “belt-and-suspenders” protection afforded to investors makes trust an afterthought and, as a result, Ginnie Mae securities are among the most liquid instruments in the world.

Contrast this with the private-label market. Private-label securities, by their nature, will always carry a higher degree of uncertainty than Ginnie Mae, Fannie Mae, and Freddie Mac (i.e., “Agency”) products, but uncertainty is not the problem. All lending and investment involves uncertainty. The problem is information asymmetry—where not all parties have equal access to the data necessary to assess risk. This asymmetry makes it challenging to price deals fairly and is a principal driver of illiquidity.

 

Using Technology to Garner Trust in the PLS Market

In many transactions, ten or more parties contribute in some manner to verifying and validating data, documents, or cash flow models. In order to overcome asymmetry and restore liquidity, the market will need to refine (and in some cases identify) technological solutions to, among other challenges, share loan-level data with investors, re-envision the due diligence process, and modernize document custody.

 

Loan-Level Data

During SFIG’s Residential Mortgage Finance symposium last month, RiskSpan moderated a panel that featured significant discussion around loan-level disclosures. At issue was whether the data required by the SEC’s Regulation AB provided investors with all the information necessary to make an investment decision. Specifically debated was the mortgaged property’s zip code, which provides investors valuable information on historical valuation trends for properties in a given geographic area.

Privacy advocates question the wisdom of disclosing full, five-digit zip codes. Particularly in sparsely populated areas where zip codes contain a relatively small number of addresses, knowing the zip code along with the home’s sale price and date (which are both publicly available) can enable unscrupulous data analysts to “triangulate” in on an individual borrower’s identity and link the borrower to other, more sensitive personal information in the loan-level disclosure package.

The SEC’s “compromise” is to require disclosing only the first two digits of the zip code, which provide a sense of a property’s geography without the risk of violating privacy. Investors counter that two-digit zip codes do not provide nearly enough granularity to make an informed judgment about home-price stability (and with good reason—some entire states are covered by a single two-digit zip code).

The competing demands of disclosure and privacy can be satisfied in large measure by technology. Rather than attempting to determine which individual data fields should be included in a loan-level disclosure (and then publishing it on the SEC’s EDGAR site for all the world to see) the market ought to be able to develop a technology where a secure, encrypted, password-protected copy of the loan documents (including the loan application, tax documents, pay-stubs, bank statements, and other relevant income, employment, and asset verifications) is made available on a need-to-know basis to qualified PLS investors who share in the responsibility for safeguarding the information.

 

Due Diligence Review

Technologically improving the transparency of the due diligence process to investors may also increase investor trust, particularly in the representation and warranty review process. Providing investors with a secure view of the loan-level documentation used to underwrite and close the underlying mortgage loan, as described above, may reduce the scope of due diligence review as it exists in today’s market. Technology companies, which today support initiatives such as Fannie Mae’s “Day 1 Certainty” program, promise to further disrupt the due diligence process in the future. Through automation, the due diligence process becomes less burdensome and fosters confidence in the underwriting process while also reducing costs and bringing representation and warranty relief.

Today’s insistence on 100% file reviews in many cases is perhaps the most obvious evidence of the lack of trust across transactions. Investors will likely always require some degree of assurance that they are getting what they pay for in terms of collateral. However, an automated verification process for income, assets, and employment will launch the industry forward with investor confidence. Should any reconciliation of individual loan file documentation with data files be necessary, results of these reconciliations could be automated and added to a secure blockchain accessible only via private permissions. Over time, investors will become more comfortable with the reliability of the electronic data files describing the mortgage loans submitted to them.

The same technology could be implemented to allow investors to view supporting documents when reps and warrants are triggered and a review of the underlying loan documents needs to be completed.

 

Document Custody

Smart document technologies also have the potential to improve the transparency of the document custody process. At some point the industry is going to have to move beyond today’s humidity-controlled file cabinets and vaults, where documents are obtained and viewed only on an exception basis or when loans are paid off. Adding loan documents that have been reviewed and accepted by the securitization’s document custodian to a secure, permissioned blockchain will allow investors in the securities to view and verify collateral documents whenever questions arise without going to the time and expense of retrieving paper from the custodian’s vault.

——————————-

Securitization makes mortgages and other types of borrowing affordable for a greater population by leveraging the power of global capital markets. Few market participants view mortgage loan securitization dominated by government corporations and government-sponsored enterprises as a desirable permanent solution. Private markets, however, are going to continue to lag markets that benefit from implicit and explicit government guarantees until improved processes, supported by enhanced technologies, are successful in bridging gaps in trust and information asymmetry.

With trust restored, verified by technology, the PLS market will be better positioned to support housing financing needs not supported by the Agencies.



Why Model Validation Does Not Eliminate Spreadsheet Risk

Model risk managers invest considerable time in determining which spreadsheets qualify as models, which are end-user computing (EUC) applications, and which are neither. Seldom, however, do model risk managers consider the question of whether a spreadsheet is the appropriate tool for the task at hand.

Perhaps they should start.

Buried in the middle of page seven of the joint Federal Reserve/OCC supervisory guidance on model risk management is this frequently overlooked principle:

“Sound model risk management depends on substantial investment in supporting systems to ensure data and reporting integrity, together with controls and testing to ensure proper implementation of models, effective systems integration, and appropriate use.”

It brings to mind a fairly obvious question: What good is a “substantial investment” in data integrity surrounding the modeling process when the modeling itself is carried out in Excel? Spreadsheets are useful tools, to be sure, but they meet virtually none of the development standards to which traditional production systems are held. What percentage of “spreadsheet models” are subjected to the rigors of the software development life cycle (SDLC) before being put into use?

 

Model Validation vs. SDLC

More often than not, and usually without realizing it, banks use model validation as a substitute for SDLC when it comes to spreadsheet models. The main problem with this approach is that SDLC and model validation are complementary processes and are not designed to stand in for one another. SDLC is a primarily forward-looking process to ensure applications are implemented properly. Model validation is primarily backward looking and seeks to determine whether existing applications are working as they should.

SDLC includes robust planning, design, and implementation—developing business and technical requirements and then developing or selecting the right tool for the job. Model validation may perform a few cursory tests designed to determine whether some semblance of a selection process has taken place, but model validation is not designed to replicate (or actually perform) the selection process.

This presents a problem because spreadsheet models are seldom if ever built with SDLC principles in mind. Rather, they are more likely to evolve organically as analysts seek increasingly innovative ways of automating business tasks. A spreadsheet may begin as a simple calculator, but as analysts become more sophisticated, they gradually introduce increasingly complex functionality and coding into their spreadsheet. And then one day, the spreadsheet gets picked up by an operational risk discovery tool and the analyst suddenly becomes a model owner. Not every spreadsheet model evolves in such an unstructured way, of course, but more than a few do. And even spreadsheet-based applications that are designed to be models from the outset are seldom created according to a disciplined SDLC process.

I am confident that this is the primary reason spreadsheet models are often so poorly documented. They simply weren’t designed to be models. They weren’t really designed at all. A lot of intelligent, critical thought may have gone into their code and formulas, but little if any thought was likely given to the question of whether a spreadsheet is the best tool for what the spreadsheet has evolved to be able to do.
 

Challenging the Spreadsheets Themselves

Outside of banking, a growing number of firms are becoming wary of spreadsheets and attempting to move away from them. A Wall Street Journal article last week cited CFOs from companies as diverse as P.F. Chang’s China Bistro Inc., ABM Industries, and Wintrust Financial Corp. seeking to “reduce how much their finance teams use Excel for financial planning, analysis and reporting.”

Many of the reasons spreadsheets are falling out of favor have little to do with governance and risk management. But one core reason will resonate with anyone who has ever attempted to validate a spreadsheet model. Quoting from the article: “Errors can bloom because data in Excel is separated from other systems and isn’t automatically updated.”

It is precisely this “separation” of spreadsheet data from its sources that is so problematic for model validators. Even if a validator can determine that the input data in the spreadsheet is consistent with the source data at the time of validation, it is difficult to ascertain whether tomorrow’s input data will be. Even spreadsheets that pull input data in via dynamic linking or automated feeds can be problematic because the code governing the links and feeds can so easily become broken or corrupted.
 

An Expanded Way of Thinking About “Conceptual Soundness”

Typically, when model validators speak of evaluating conceptual soundness, they are referring to the model’s underlying theory, how its variables were selected, the reasonableness of its inputs and assumptions, and how well everything is documented. In diving into these details, it is easy to overlook the supervisory guidance’s opening sentence in the Evaluation of Conceptual Soundness section: “This element involves assessing the quality of the model design and construction.”

How often, in assessing a spreadsheet model’s design and construction, do validators ask, “Is Excel even the right application for this?” Not very often, I suspect. When an analyst is assigned to validate a model, the medium is simply a given. In a perfect world, model validators would be empowered to issue a finding along the lines of, “Excel is not an appropriate tool for a high-risk production model of this scope and importance.” Practically speaking, however, few departments will be willing to upend the way they work and analyze data in response to a model validation finding. (In the WSJ article, it took CFOs to effect that kind of change.)

Absent the ability to nudge model owners away from spreadsheets entirely, model validators would do well to incorporate certain additional “best practices” checks into their validation procedures when the model in question is a spreadsheet. These might include the following:

  • Incorporation of a cover sheet on the first tab of the workbook that includes the model’s name, the model’s version, a brief description of what the model does, and a table of contents defining and describing the purpose of each tab
  • Application of a consistent color key so that inputs, assumptions, macros, and formulas can be easily identified
  • Grouping of inputs by source, e.g., raw data versus transformed data versus calculations
  • Grouping of inputs, processing, and output tabs together by color
  • Separate instruction sheets for data import and transformation

Spreadsheets present unique challenges to model validators. By accounting for the additional risk posed by the nature of spreadsheets themselves, model risk managers can contribute value by identifying situations where the effectiveness of sound data, theory, and analysis is blunted by an inadequate tool.


AML Models: Applying Model Validation Principles to Non-Models

Anti-money-laundering (AML) solutions have no business being classified as models. To be sure, AML “models” are sophisticated, complex, and vitally important. But it requires a rather expansive interpretation of the OCC/Federal Reserve/FDIC definition of the term model to realistically apply the term to AML solutions.

Supervisory guidance defines model as “a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.”

While AML compliance models are consistent with certain elements of that definition, it is a stretch to argue that these elaborate, business-rule engines are generating outputs that qualify as “quantitative estimates.” They flag transactions and the people who make them, but they do not estimate or predict anything quantitative.

We could spend a lot more time arguing that AML tools (including automated OFAC and other watch-list checks) are not technically models. But in the end, these arguments are moot if an examining regulator holds a differing view. If a bank’s regulator declares the bank’s AML applications to be models and orders that they be validated, then presenting a well-reasoned argument about how these tools don’t rise to the technical definition of a model is not the most prudent course of action (probably).

 

Tailoring Applicable Model Validation Principles to AML Models

What makes it challenging to validate AML “models” is not merely the additional level of effort; it’s that most model validation concepts are designed to evaluate systems that generate quantitative estimates. Consequently, in order to generate a model validation report that will withstand scrutiny, it is important to think of ways to adapt the three pillars of model validation—conceptual soundness review, benchmarking, and back-testing—to the unique characteristics of a non-model.

 

Conceptual Soundness of AML Solutions

The first pillar of model validation—conceptual soundness—is also its most universally applicable. Determining whether an application is well designed and constructed, whether its inputs and assumptions are reasonably sourced and defensible, whether it is sufficiently documented, and whether it meets the needs for which it was developed is every bit as applicable to AML solutions, EUCs and other non-predictive tools as it is to models.

For AML “models,” a conceptual soundness review generally encompasses the following activities:

  • Documentation review: Are the rule and alert definitions and configurations identified? Are they sufficiently explained and justified? This requires detailed documentation not only from the application vendor, but also from the BSA/AML group within the bank that uses it.
  • Transaction verification: Verifying that all transactions and customers are covered and evaluated by the tool.
  • Risk assessment review: Evaluating the institution’s risk assessment methodology and whether the application’s configurations are consistent with it.
  • Data review: Are all data inputs mapped, extracted, transformed, and loaded correctly from their respective source systems into the AML engine?
  • Watchlist filtering: Are watchlist criteria configured correctly? Is the AML model receiving all the information it needs to generate alerts?
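
As a concrete illustration of the transaction verification step, the sketch below compares the transaction population in a hypothetical source-system extract against the population ingested by the AML engine. The file names, the shared transaction_id key, and the CSV layout are assumptions made for illustration, not features of any particular AML product.

```python
# Illustrative sketch only: reconcile a source system's transaction extract
# against the transactions actually ingested by the AML engine.
# File names and column names are assumptions for illustration.
import pandas as pd

source = pd.read_csv("core_banking_transactions.csv")   # hypothetical source extract
aml = pd.read_csv("aml_engine_transactions.csv")        # hypothetical AML-side export

source_ids = set(source["transaction_id"])
aml_ids = set(aml["transaction_id"])

missing_from_aml = source_ids - aml_ids    # transactions the AML engine never saw
unexplained_in_aml = aml_ids - source_ids  # records with no source-system counterpart

print(f"Source transactions: {len(source_ids):,}")
print(f"AML engine transactions: {len(aml_ids):,}")
print(f"Missing from AML engine: {len(missing_from_aml):,}")
print(f"Unexplained in AML engine: {len(unexplained_in_aml):,}")
```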

 

Benchmarking (and Process Verification) of AML Tools

Benchmarking is primarily geared toward comparing a model’s uncertain outputs against the uncertain outputs of a challenger model. AML outputs are not particularly well-suited to such a comparison. As such, benchmarking one AML tool against another is not usually feasible. Even in the unlikely event that a validator has access to a separate, “challenger” AML “model,” integrating it with all of a bank’s necessary customer and transaction systems and making sure it works is a months-long project. The nature of AML monitoring—looking at every customer and every single transaction—makes integrating a second, benchmarking engine highly impractical. And even if it were practical, the functionality of any AML system is primarily determined by its calibration and settings. Once the challenger system has been configured to match the system being tested, the objective of the benchmarking exercise is largely defeated.

So, now what? In model validation, benchmarking is typically performed and reported as part of a broader “process verification” exercise—tests to determine whether the model is accomplishing what it purports to do. Process verification has broad applicability to AML reviews and typically includes the following components (a simple tally of both error types is sketched after the list):

  • Above-the-line testing: An evaluation of the alerts triggered by the application and identification of any “false positives” (Type I error).
  • Below-the-line testing: An evaluation of all bank activity to determine whether any transactions that should have been flagged as alerts were missed by the application. These would constitute “false negatives” (Type II error).
  • Documentation comparison: Determination of whether the application is calculating risk scores in a manner consistent with documented methodology.
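
The sketch below shows one way the above-the-line and below-the-line results might be tallied once analyst dispositions and an independent review sample are in hand. The file names, column names, and disposition labels are assumptions made for illustration.

```python
# Illustrative sketch only: tally above-the-line (Type I) and
# below-the-line (Type II) testing results.
# File names, column names, and labels are assumptions for illustration.
import pandas as pd

# Above the line: alerts generated by the application, with the analyst disposition.
alerts = pd.read_csv("generated_alerts.csv")            # columns: alert_id, disposition
false_positives = (alerts["disposition"] == "not_suspicious").sum()

# Below the line: an independently reviewed sample of activity that did NOT alert.
sample = pd.read_csv("below_line_review_sample.csv")    # columns: transaction_id, reviewer_finding
false_negatives = (sample["reviewer_finding"] == "suspicious").sum()

print(f"Above-the-line alerts reviewed: {len(alerts):,}")
print(f"False positives (Type I): {false_positives:,} ({false_positives / len(alerts):.1%})")
print(f"Below-the-line items sampled: {len(sample):,}")
print(f"False negatives (Type II): {false_negatives:,} ({false_negatives / len(sample):.1%})")
```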

 

Back-Testing (and Outcomes Analysis) of AML Applications

Because AML applications are not designed to predict the future, the notion of back-testing does not really apply to them. However, in the model validation context, back-testing is typically performed as part of a broader analysis of model outcomes. Here again, a number of AML tests apply, including the following:

  • Rule relevance: How many rules are never triggered? Are there any rules that, when triggered, are always overridden by manual review of the alert? (A rule-relevance tally is sketched after this list.)
  • Schedule evaluation: A review of the AML system’s performance testing schedule.
  • Distribution analysis: Determining whether the distribution of alerts is logical in light of typical customer transaction activity and the bank’s view of its overall risk profile.
  • Management reporting: How do the AML system’s outputs, including the resulting Suspicious Activity Reports, flow into management reports? How are these reports reviewed for accuracy, presented, and archived?
  • Output maintenance: How are reports created and maintained? How is AML system output archived for reporting and ongoing monitoring purposes?
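
A rule-relevance and distribution check of the kind described above can be sketched as follows, assuming a hypothetical export of the engine’s configured rules and its alert history. The file names, column names, and disposition labels are illustrative assumptions.

```python
# Illustrative sketch only: rule-relevance and alert-distribution checks
# over a period of alert history. File and column names are assumptions.
import pandas as pd

configured_rules = pd.read_csv("configured_rules.csv")["rule_id"]   # every rule in the engine
alerts = pd.read_csv("alert_history.csv")                           # columns: alert_id, rule_id, disposition

# Rules that never fired during the review period
never_triggered = set(configured_rules) - set(alerts["rule_id"])

# Rules whose alerts were always closed on manual review
by_rule = alerts.groupby("rule_id")["disposition"]
always_overridden = by_rule.apply(lambda d: (d == "not_suspicious").all())
always_overridden = always_overridden[always_overridden].index.tolist()

# Distribution of alerts across rules, for comparison against the bank's risk profile
alert_share = alerts["rule_id"].value_counts(normalize=True)

print(f"Rules never triggered: {sorted(never_triggered)}")
print(f"Rules always overridden on review: {always_overridden}")
print(alert_share.head(10))
```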

 

Testing AML Models: Balancing Thoroughness and Practicality

Generally speaking, model validators are given to being thorough. When presented with the task of validating an AML “model,” they are likely to look beyond the limitations associated with applying model validation principles to non-models and focus on devising tests designed to assess whether the AML solution is working as intended.

Left to their own devices, many model validation analysts will likely err on the side of doing more than is necessary to fulfill the requirements of an AML model validation. Devising an approach that aligns effective challenge testing with the three defined pillars of model validation has a dual benefit. It results in a model validation report that maps back to regulatory guidance and is therefore more likely to stand up to scrutiny. It also helps confine the universe of potential testing to only those areas that require testing. Restricting testing to only what is necessary and then thoroughly pursuing that narrowly defined set of tests is ultimately the key to maintaining the effectiveness and efficiency of AML testing in particular and of model risk management programs as a whole.

 


[1] On June 7, 2017, the FDIC formally adopted the Supervisory Guidance previously set forth jointly by the OCC (2011-12) and Federal Reserve (SR 11-7).


AML Model Validation: Effective Process Verification Requires Thorough Documentation

Increasing regulatory scrutiny due to the catastrophic risk associated with anti-money-laundering (AML) non-compliance is prompting many banks to tighten up their approach to AML model validation. Because AML applications would be better classified as highly specialized, complex systems of algorithms and business rules than as “models,” applying model validation techniques to them presents some unique challenges that make documentation especially important.

In addition to devising effective challenges to determine the “conceptual soundness” of an AML system and whether its approach is defensible, validators must determine the extent to which various rules are firing precisely as designed. Rather than commenting on the general reasonableness of outputs based on back-testing and sensitivity analysis, validators must rely more heavily on a form of process verification that requires precise documentation.

Vendor Documentation of Transaction Monitoring Systems

Above-the-line and below-the-line testing—the backbone of most AML transaction monitoring testing—amounts to a process verification/replication exercise. For any model replication exercise to return meaningful results, the underlying model must be meticulously documented. If not, validators are left to guess at how to fill in the blanks. For some models, guessing can be an effective workaround. But it seldom works well when it comes to a transaction monitoring system and its underlying rules. Absent documentation that describes exactly what rules are supposed to do, and when they are supposed to fire, effective replication becomes nearly impossible.

Anyone who has validated an AML transaction monitoring system knows that these systems come with a truckload of documentation. Vendor documentation is often quite thorough and does a reasonable job of laying out the solution’s approach to assessing transaction data and generating alerts. It typically explains how relevant transactions are identified, describes what suspicious activity each rule is seeking to detect, and (usually) provides a reasonably detailed description of the algorithms and logic each rule applies.

This information provided by the vendor is valuable and critical to a validator’s ability to understand how the solution is intended to work. But because so much more is going on than what can reasonably be captured in vendor documentation, it alone provides insufficient information to devise above-the-line and below-the-line testing that will yield worthwhile results.

Why An AML Solution’s Vendor Documentation is Not Enough

Every model validator knows that model owners must supplement vendor-supplied documentation with their own. This is especially true with AML solutions, in which individual user settings—thresholds, triggers, look-back periods, white lists, and learning algorithms—are arguably more crucial to the solution’s overall performance than the rules themselves.

Comprehensive model owner documentation helps validators (and regulatory supervisors) understand not only that AML rules designed to flag suspicious activity are firing correctly, but also that each rule is sufficiently understood by those who use the solution. It also provides the basis for a validator’s testing that rules are calibrated reasonably. Testing these calibrations is analogous to validating the inputs and assumptions of a predictive model. If they are not explicitly spelled out, then they cannot be evaluated.

Here are some examples.

Transaction Input Transformations

Details about how transaction data streams are mapped, transformed, and integrated into the AML system’s database vary by institution and cannot reasonably be described in generic vendor documentation. Consequently, owner documentation needs to describe these mappings and transformations fully. To pass model validation muster, the documentation should also describe the review process for input data and field mapping, along with all steps taken to correct inaccuracies or inconsistencies as they are discovered.

Mapping and importing AML transaction data is sometimes an inexact science. To mitigate risks associated with missing fields and customer attributes, risk-based parameters must be established and adequately documented. This documentation enables validators who test the import function to go into the analysis with both eyes open. Validators must be able to understand the circumstances under which proxy data is used in order to make sound judgments about the reasonableness and effectiveness of established proxy parameters and how well they are being adhered to. Ideally, documentation pertaining to transaction input transformation should describe the data validations that are performed and define any error messages that the system might generate.
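
To make the point concrete, the sketch below illustrates the kind of documented input validations, proxy rules, and error messages a validator would hope to find for the import step. The required fields, proxy logic, and error codes shown are hypothetical examples, not a prescribed standard.

```python
# Illustrative sketch only: documented input validations and a risk-based proxy
# rule for the AML import step. Field names, proxy logic, and error codes are
# assumptions for illustration.
import pandas as pd

REQUIRED_FIELDS = ["transaction_id", "customer_id", "amount", "currency", "country"]

def validate_import(df: pd.DataFrame) -> list[str]:
    """Return error messages describing data problems found in the extract."""
    errors = []
    for field in REQUIRED_FIELDS:
        if field not in df.columns:
            errors.append(f"ERR-001: required field '{field}' missing from extract")
        elif df[field].isna().any():
            errors.append(f"ERR-002: {df[field].isna().sum()} null values in '{field}'")
    if "amount" in df.columns and (df["amount"] < 0).any():
        errors.append("ERR-003: negative transaction amounts detected")
    return errors

def apply_proxies(df: pd.DataFrame) -> pd.DataFrame:
    """Apply a documented, risk-based proxy for a missing attribute."""
    df = df.copy()
    if "country" in df.columns and "customer_domicile_country" in df.columns:
        # Documented proxy: unknown transaction country defaults to customer domicile
        df["country"] = df["country"].fillna(df["customer_domicile_country"])
    return df

extract = pd.read_csv("daily_transaction_extract.csv")   # hypothetical extract
for message in validate_import(extract):
    print(message)
extract = apply_proxies(extract)
```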

Risk Scoring Methodologies and Related Monitoring

Specific methodologies used to risk score customers and countries and assign them to various lists (e.g., white, gray, or black lists) also vary enough by institution that vendor documentation cannot be expected to capture them. Processes and standards employed in creating and maintaining these lists must be documented. This documentation should include how customers and countries get on these lists to begin with, how frequently they are monitored once they are on a list, what form that monitoring takes, the circumstances under which they can move between lists, and how these circumstances are ascertained. These details are often known and usually coded (to some degree) in BSA department procedures. This is not sufficient. They should be incorporated in the AML solution’s model documentation and include data sources and a log capturing the history of customers and countries moving to and from the various risk ratings and lists.
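
One way to satisfy the logging expectation described above is a simple, structured audit record for every list movement. The sketch below assumes white/gray/black list tiers and illustrative field names; an actual structure would follow the institution’s own documented procedures.

```python
# Illustrative sketch only: a minimal audit-log record for risk-list movements.
# List tiers and field names are assumptions for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass
class ListMovement:
    entity_id: str        # customer or country identifier
    entity_type: str      # "customer" or "country"
    previous_list: str    # e.g., "gray"
    new_list: str         # e.g., "black"
    effective_date: date
    trigger: str          # what prompted the move (periodic review, alert, news event)
    data_source: str      # where the supporting information came from
    approved_by: str      # reviewer or committee that approved the move

movement_log = [
    ListMovement("CUST-001482", "customer", "gray", "black", date(2017, 5, 12),
                 "quarterly review", "internal KYC refresh", "BSA Officer"),
]
```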

Output Overrides

Management overrides are more prevalent with AML solutions than with most models. This is by design. AML solutions are intended to flag suspicious transactions for review, not to make a final judgment about them. That job is left to BSA department analysts. Too often, important metrics about the work of these analysts are not used to their full potential. Regular analysis of these overrides should be performed and documented so that validators can evaluate AML system performance and the justification underlying any tuning decisions based on the frequency and types of overrides.
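
As one hedged illustration, the sketch below summarizes override frequency and disposition types by rule from a hypothetical alert-disposition extract, the kind of regular analysis that can support and document tuning decisions. File names, column names, and disposition labels are assumptions.

```python
# Illustrative sketch only: summarize the frequency and types of analyst
# overrides so that tuning decisions can be documented and evaluated.
# File names, column names, and disposition labels are assumptions.
import pandas as pd

alerts = pd.read_csv("alert_dispositions.csv")   # columns: alert_id, rule_id, disposition

# Mix of disposition types across all alerts
disposition_mix = alerts["disposition"].value_counts(normalize=True)

# Share of each rule's alerts closed without escalation (candidates for tuning review)
override_rate_by_rule = (
    alerts.assign(overridden=alerts["disposition"] != "escalated_to_sar")
          .groupby("rule_id")["overridden"]
          .mean()
          .sort_values(ascending=False)
)

print("Disposition mix across all alerts:")
print(disposition_mix)
print("Override rate by rule:")
print(override_rate_by_rule.head(10))
```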

Successful AML model validations require rule replication, and incompletely documented rules simply cannot be replicated. Transaction monitoring is a complicated, data-intensive process, and getting everything down on paper can be daunting, but AML “model” owners can take stock of where they stand by asking themselves the following questions:

  1. Are my transaction monitoring rules documented thoroughly enough for a qualified third-party validator to replicate them? (Have I included all systematic overrides, such as white lists and learning algorithms?)
  2. Does my documentation give a comprehensive description of how each scenario is intended to work?
  3. Are thresholds adequately defined?
  4. Are the data and parameters required for flagging suspicious transactions described well enough to be replicated?

If the answer to all these questions is yes, then AML solution owners can move into the model validation process reasonably confident that the state of their documentation will not be a hindrance to the validation.

