Anti-money-laundering (AML) solutions have no business being classified as models. To be sure, AML “models” are sophisticated, complex, and vitally important. But it requires a rather expansive interpretation of the OCC/Federal Reserve/FDIC[1] definition of the term model to realistically apply the term to AML solutions.

Supervisory guidance defines model as “a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.”

While AML compliance models are consistent with certain elements of that definition, it is a stretch to argue that these elaborate business-rule engines generate outputs that qualify as “quantitative estimates.” They flag transactions and the people who make them, but they do not estimate or predict anything quantitative.

We could spend a lot more time arguing that AML tools (including automated OFAC and other watch-list checks) are not technically models. But in the end, these arguments are moot if an examining regulator holds a differing view. If a bank’s regulator declares the bank’s AML applications to be models and orders that they be validated, then presenting a well-reasoned argument that these tools do not rise to the technical definition of a model is probably not the most prudent course of action.

 

Tailoring Applicable Model Validation Principles to AML Models

What makes it challenging to validate AML “models” is not merely the additional level of effort; it is that most model validation concepts are designed to evaluate systems that generate quantitative estimates. Consequently, to produce a model validation report that will withstand scrutiny, it is important to think of ways to adapt the three pillars of model validation—conceptual soundness review, benchmarking, and back-testing—to the unique characteristics of a non-model.

 

Conceptual Soundness of AML Solutions

The first pillar of model validation—conceptual soundness—is also its most universally applicable. Determining whether an application is well designed and constructed, whether its inputs and assumptions are reasonably sourced and defensible, whether it is sufficiently documented, and whether it meets the needs for which it was developed is every bit as applicable to AML solutions, EUCs (end-user computing applications), and other non-predictive tools as it is to models.

For AML “models,” a conceptual soundness review generally encompasses the following activities:

  • Documentation review: Are the rule and alert definitions and configurations identified? Are they sufficiently explained and justified? This requires detailed documentation not only from the application vendor, but also from the BSA/AML group within the bank that uses it.
  • Transaction verification: Verifying that all transactions and customers are covered and evaluated by the tool.
  • Risk assessment review: Evaluating the institution’s risk assessment methodology and whether the application’s configurations are consistent with it.
  • Data review: Are all data inputs mapped, extracted, transformed, and loaded correctly from their respective source systems into the AML engine? (A minimal reconciliation sketch follows this list.)
  • Watchlist filtering: Are watchlist criteria configured correctly? Is the AML model receiving all the information it needs to generate alerts?
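
Parts of the transaction verification and data review steps lend themselves to simple, repeatable checks. The sketch below is one illustrative way to reconcile transactions between a source system and the AML engine; the file names and column names (transaction_id, amount) are hypothetical and would need to reflect the bank’s actual extracts.

import pandas as pd

# Hypothetical extracts: transactions as recorded in the core banking system
# and as staged in the AML monitoring engine.
core_txns = pd.read_csv("core_banking_transactions.csv")
aml_txns = pd.read_csv("aml_engine_staged_transactions.csv")

# Coverage check: every core transaction should appear in the AML engine.
merged = core_txns.merge(
    aml_txns,
    on="transaction_id",
    how="left",
    suffixes=("_core", "_aml"),
    indicator=True,
)
missing = merged[merged["_merge"] == "left_only"]
print(f"Transactions missing from the AML engine: {len(missing)}")

# Field-level check: amounts should survive the ETL process unchanged.
matched = merged[merged["_merge"] == "both"]
mismatched = matched[(matched["amount_core"] - matched["amount_aml"]).abs() > 0.005]
print(f"Transactions with amount discrepancies: {len(mismatched)}")

In practice, the same reconciliation would typically be repeated for customer records, account attributes, and any other inputs on which the monitoring rules depend.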

 

Benchmarking (and Process Verification) of AML Tools

Benchmarking is primarily geared toward comparing a model’s uncertain outputs against the uncertain outputs of a challenger model. AML outputs are not particularly well-suited to such a comparison. As such, benchmarking one AML tool against another is not usually feasible. Even in the unlikely event that a validator has access to a separate, “challenger” AML “model,” integrating it with all of a bank’s necessary customer and transaction systems and making sure it works is a months-long project. The nature of AML monitoring—looking at every customer and every single transaction—makes integrating a second, benchmarking engine highly impractical. And even if it were practical, the functionality of any AML system is primarily determined by its calibration and settings. Once the challenger system has been configured to match the system being tested, the objective of the benchmarking exercise is largely defeated.

So, now what? In a model validation context, benchmarking is typically performed and reported in the context of a broader “process verification” exercise—tests to determine whether the model is accomplishing what it purports to. Process verification has broad applicability to AML reviews and typically includes the following components:

  • Above-the-line testing: An evaluation of the alerts triggered by the application and identification of any “false positives” (Type I error).
  • Below-the-line testing: An evaluation of all bank activity to determine whether any transactions that should have been flagged as alerts were missed by the application. These would constitute “false negatives” (Type II error). (A simple error-rate sketch follows this list.)
  • Documentation comparison: Determination of whether the application is calculating risk scores in a manner consistent with documented methodology.
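
Where a manually dispositioned sample is available, above- and below-the-line results can be summarized as Type I and Type II error counts. This is a minimal sketch, assuming a review file with hypothetical columns alert_generated and deemed_suspicious; the sampling design and the dispositions themselves would come from the validation work.

import pandas as pd

# Hypothetical review sample: each row is a transaction that was manually
# dispositioned, alongside the AML engine's alert decision.
sample = pd.read_csv("manual_review_sample.csv")

alerted = sample["alert_generated"].astype(bool)       # engine raised an alert
suspicious = sample["deemed_suspicious"].astype(bool)  # manual disposition

# Above-the-line: alerts that manual review concluded were not suspicious (Type I).
false_positives = (alerted & ~suspicious).sum()
# Below-the-line: suspicious activity the engine failed to flag (Type II).
false_negatives = (~alerted & suspicious).sum()

print(f"False positives among alerts: {false_positives} of {alerted.sum()}")
print(f"False negatives in the sample: {false_negatives}")

Below-the-line testing only carries weight if the sample reaches into activity that fell beneath the alert thresholds, which is why it is usually drawn separately from the population of generated alerts.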

 

Back-Testing (and Outcomes Analysis) of AML Applications

Because AML applications are not designed to predict the future, the notion of back-testing does not really apply to them. However, in the model validation context, back-testing is typically performed as part of a broader analysis of model outcomes. Here again, a number of AML tests apply, including the following:

  • Rule relevance: How many rules are never triggered? Are there any rules that, when triggered, are always overridden by manual review of the alert? (See the tabulation sketch after this list.)
  • Schedule evaluation: A review of the AML system’s performance testing schedule.
  • Distribution analysis: Determining whether the distribution of alerts is logical in light of typical customer transaction activity and the bank’s view of its overall risk profile.
  • Management reporting: How do the AML system’s outputs, including the resulting Suspicious Activity Reports, flow into management reports? How are these reports reviewed for accuracy, presented, and archived?
  • Output maintenance: How are reports created and maintained? How is AML system output archived for reporting and ongoing monitoring purposes?
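
A tabulation of alert volumes and override rates by rule is often a practical starting point for the rule relevance and distribution analysis items above. The sketch below assumes a hypothetical alert log (columns rule_id, alert_date, overridden) and a rule inventory export; the actual layouts depend on the AML system in use.

import pandas as pd

# Hypothetical exports from the AML system.
alerts = pd.read_csv("alert_log.csv")      # one row per alert
rules = pd.read_csv("rule_inventory.csv")  # one row per configured rule

# Rule relevance: rules that never fired during the review period.
never_fired = rules[~rules["rule_id"].isin(alerts["rule_id"])]
print(f"Rules never triggered: {len(never_fired)}")

# Rules whose alerts were always closed as non-issues on manual review.
override_rates = alerts.groupby("rule_id")["overridden"].agg(["count", "mean"])
print(override_rates[override_rates["mean"] == 1.0])

# Distribution analysis: monthly alert volumes, to compare against expected
# transaction activity and the bank's stated risk profile.
alerts["alert_date"] = pd.to_datetime(alerts["alert_date"])
monthly_volumes = alerts["alert_date"].dt.to_period("M").value_counts().sort_index()
print(monthly_volumes)

Rules that never fire, or whose alerts are invariably overridden, are natural candidates for recalibration or retirement, subject to the institution’s risk assessment.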

 

Testing AML Models: Balancing Thoroughness and Practicality

Generally speaking, model validators tend to be thorough. When presented with the task of validating an AML “model,” they are likely to look beyond the limitations of applying model validation principles to non-models and focus on devising tests designed to assess whether the AML solution is working as intended.

Left to their own devices, many model validation analysts will likely err on the side of doing more than is necessary to fulfill the requirements of an AML model validation. Devising an approach that aligns effective challenge testing with the three defined pillars of model validation has a dual benefit. It results in a model validation report that maps back to regulatory guidance and is therefore more likely to stand up to scrutiny. It also helps confine the universe of potential testing to only those areas that require testing. Restricting testing to only what is necessary and then thoroughly pursuing that narrowly defined set of tests is ultimately the key to maintaining the effectiveness and efficiency of AML testing in particular and of model risk management programs as a whole.

 


[1] On June 7, 2017, the FDIC formally adopted the Supervisory Guidance previously set forth jointly by the OCC (2011-12) and Federal Reserve (SR 11-7).