Increasing regulatory scrutiny due to the catastrophic risk associated with anti-money-laundering (AML) non-compliance is prompting many banks to tighten up their approach to AML model validation. Because AML applications would be better classified as highly specialized, complex systems of algorithms and business rules than as “models,” applying model validation techniques to them presents some unique challenges that make documentation especially important.
In addition to devising effective challenges to determine the “conceptual soundness” of an AML system and whether its approach is defensible, validators must determine the extent to which various rules are firing precisely as designed. Rather than commenting on the general reasonableness of outputs based on back-testing and sensitivity analysis, validators must rely more heavily on a form of process verification that requires precise documentation.
Vendor Documentation of Transaction Monitoring Systems
Above-the-line and below-the-line testing—the backbone of most AML transaction monitoring testing—amounts to a process verification/replication exercise. For any model replication exercise to return meaningful results, the underlying model must be meticulously documented. If not, validators are left to guess at how to fill in the blanks. For some models, guessing can be an effective workaround. But it seldom works well when it comes to a transaction monitoring system and its underlying rules. Absent documentation that describes exactly what rules are supposed to do, and when they are supposed to fire, effective replication becomes nearly impossible.
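To make the replication requirement concrete, consider the minimal sketch below. It assumes a hypothetical cash-structuring rule whose documentation specifies a dollar threshold and a look-back period; with those parameters spelled out, a validator can re-run the logic independently and compare the resulting alerts to the system's. Every name, threshold, and line of logic here is an illustrative assumption, not any vendor's actual rule.

```python
# Minimal illustration only: the rule, threshold, and look-back period are
# hypothetical assumptions, not any vendor's actual logic.
import pandas as pd

THRESHOLD = 10_000   # documented alert threshold (assumed for illustration)
LOOKBACK_DAYS = 5    # documented look-back period (assumed for illustration)

def replicate_rule(txns: pd.DataFrame) -> set:
    """Independently flag customers whose cash deposits within the documented
    look-back window sum to the documented threshold or more.
    Expects columns: customer_id, txn_date (datetime), amount."""
    alerted = set()
    for cust_id, grp in txns.groupby("customer_id"):
        rolling = (grp.sort_values("txn_date")
                      .set_index("txn_date")["amount"]
                      .rolling(f"{LOOKBACK_DAYS}D").sum())
        if (rolling >= THRESHOLD).any():
            alerted.add(cust_id)
    return alerted

def compare_alerts(replica: set, system: set) -> dict:
    """Replication test: alerts the documented logic produces but the system
    missed, and system alerts the documented logic cannot explain."""
    return {"missed_by_system": replica - system,
            "unexplained_by_documentation": system - replica}
```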
Anyone who has validated an AML transaction monitoring system knows that these systems come with a truckload of documentation. Vendor documentation is often quite thorough and does a reasonable job of laying out the solution’s approach to assessing transaction data and generating alerts. It typically explains how relevant transactions are identified, describes the suspicious activity each rule is seeking to detect, and usually provides a reasonably detailed description of the algorithms and logic each rule applies.
This vendor-provided information is valuable and critical to a validator’s ability to understand how the solution is intended to work. But because far more is going on than can reasonably be captured in vendor documentation, it alone provides insufficient information for devising above-the-line and below-the-line testing that will yield worthwhile results.
Why An AML Solution’s Vendor Documentation is Not Enough
Every model validator knows that model owners must supplement vendor-supplied documentation with their own. This is especially true of AML solutions, where individual user settings—thresholds, triggers, look-back periods, white lists, and learning algorithms—are arguably more crucial to the solution’s overall performance than the rules themselves.
Comprehensive model owner documentation helps validators (and regulatory supervisors) understand not only that AML rules designed to flag suspicious activity are firing correctly, but also that each rule is sufficiently understood by those who use the solution. It also provides the basis for a validator to test whether rules are calibrated reasonably. Testing these calibrations is analogous to validating the inputs and assumptions of a predictive model: if they are not explicitly spelled out, they cannot be evaluated.
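What “explicitly spelled out” can look like in practice is sketched below: a single rule’s institution-specific settings captured in a structured form that a validator can read and test against production. The field names and values are hypothetical examples, not a prescribed schema.

```python
# Hypothetical example of documenting institution-specific rule settings in a
# structured, testable form; field names and values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class RuleSetting:
    rule_id: str
    description: str              # suspicious activity the rule targets
    threshold: float              # alerting threshold currently in production
    lookback_days: int            # documented look-back period
    whitelist: list[str] = field(default_factory=list)   # systematic overrides
    rationale: str = ""           # evidence supporting the calibration

CASH_STRUCT_01 = RuleSetting(
    rule_id="CASH-STRUCT-01",
    description="Aggregate cash deposits just below the reporting threshold",
    threshold=9_000,
    lookback_days=5,
    whitelist=["CUST-000123"],    # documented exclusion, with reason on file
    rationale="Set during the most recent documented tuning exercise",
)
```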
Here are some examples.
Transaction Input Transformations
Details about how transaction data streams are mapped, transformed, and integrated into the AML system’s database vary by institution and cannot reasonably be described in generic vendor documentation. Consequently, owner documentation needs to describe this process fully. To pass model validation muster, the documentation should also describe the review process for input data and field mapping, along with all steps taken to correct inaccuracies or inconsistencies as they are discovered.
Mapping and importing AML transaction data is sometimes an inexact science. To mitigate risks associated with missing fields and customer attributes, risk-based parameters must be established and adequately documented. This documentation enables validators who test the import function to go into the analysis with both eyes open. Validators must be able to understand the circumstances under which proxy data is used in order to make sound judgments about the reasonableness and effectiveness of established proxy parameters and how well they are being adhered to. Ideally, documentation pertaining to transaction input transformation should describe the data validations that are performed and define any error messages that the system might generate.
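As a simple illustration, the sketch below applies a hypothetical field mapping, a documented proxy value for a missing customer attribute, and explicitly defined validation messages. The column names, proxy rule, and message codes are assumptions for illustration and do not reflect any particular system.

```python
# Hypothetical mapping, proxy rule, and error codes; illustrative only.
import pandas as pd

FIELD_MAP = {"txn_amt": "amount", "cust_no": "customer_id", "ctry": "country"}
PROXY_COUNTRY = "UNKNOWN"   # documented proxy applied when country is missing

def map_and_validate(raw: pd.DataFrame) -> tuple[pd.DataFrame, list[str]]:
    """Rename source fields per the documented mapping, apply the documented
    proxy for missing countries, and return defined validation messages."""
    errors = []
    df = raw.rename(columns=FIELD_MAP)

    missing = set(FIELD_MAP.values()) - set(df.columns)
    if missing:
        errors.append(f"E001: source feed missing mapped fields {sorted(missing)}")

    if "amount" in df.columns and (df["amount"] <= 0).any():
        errors.append("E002: non-positive transaction amounts detected")

    if "country" in df.columns:
        n_proxy = int(df["country"].isna().sum())
        if n_proxy:
            errors.append(f"W003: proxy country assigned to {n_proxy} records")
            df["country"] = df["country"].fillna(PROXY_COUNTRY)

    return df, errors
```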
Risk Scoring Methodologies and Related Monitoring
Specific methodologies used to risk score customers and countries and assign them to various lists (e.g., white, gray, or black lists) also vary enough by institution that vendor documentation cannot be expected to capture them. Processes and standards employed in creating and maintaining these lists must be documented. This documentation should include how customers and countries get on these lists to begin with, how frequently they are monitored once they are on a list, what form that monitoring takes, the circumstances under which they can move between lists, and how these circumstances are ascertained. These details are often known and usually coded (to some degree) in BSA department procedures. This is not sufficient. They should be incorporated in the AML solution’s model documentation and include data sources and a log capturing the history of customers and countries moving to and from the various risk ratings and lists.
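The sketch below illustrates the kind of record-keeping this implies: entities are assigned to lists using hypothetical score cut-offs, and an auditable entry is appended each time an entity moves between lists. The bands, list names, and reasons shown are assumptions, not a recommended methodology.

```python
# Hypothetical score bands, list names, and reasons; illustrative only.
from datetime import date

LIST_BANDS = [(80, "black"), (50, "gray"), (0, "white")]  # documented cut-offs

def assign_list(risk_score: float) -> str:
    """Return the list implied by the documented score bands."""
    for cutoff, name in LIST_BANDS:
        if risk_score >= cutoff:
            return name
    return "white"

def log_movement(history: list, entity_id: str, old: str, new: str, reason: str):
    """Append an auditable record whenever a customer or country changes lists."""
    if old != new:
        history.append({"entity_id": entity_id, "from": old, "to": new,
                        "reason": reason, "effective": date.today().isoformat()})

history = []
log_movement(history, "CTRY-XX", "gray", "black", "quarterly country risk review")
```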
Output Overrides
Management overrides are more prevalent with AML solutions than with most models. This is by design. AML solutions are intended to flag suspicious transactions for review, not to make a final judgment about them. That job is left to BSA department analysts. Too often, important metrics about the work of these analysts are not used to their full potential. Regular analysis of these overrides should be performed and documented so that validators can evaluate AML system performance and the justification underlying any tuning decisions based on the frequency and types of overrides.
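A simple version of that analysis might resemble the sketch below, which computes the share of each rule’s alerts that analysts ultimately close as non-suspicious. The column names, disposition values, and the 95 percent benchmark are assumptions chosen for illustration; a real benchmark would come from the institution’s own documented tuning policy.

```python
# Hypothetical column names, disposition values, and benchmark; illustrative only.
import pandas as pd

def override_summary(alerts: pd.DataFrame) -> pd.DataFrame:
    """Expects columns: rule_id, disposition ('sar_filed' or 'closed').
    Returns each rule's alert volume, closures, and override rate."""
    summary = (alerts.groupby("rule_id")["disposition"]
                     .agg(total="count",
                          closed=lambda s: int((s == "closed").sum())))
    summary["override_rate"] = summary["closed"] / summary["total"]
    # Flag rules whose alerts are almost always overridden as tuning candidates;
    # any actual tuning decision still needs documented justification.
    summary["review_for_tuning"] = summary["override_rate"] > 0.95
    return summary.sort_values("override_rate", ascending=False)
```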
Successful AML model validations require rule replication, and incompletely documented rules simply cannot be replicated. Transaction monitoring is a complicated, data-intensive process, and getting everything down on paper can be daunting, but AML “model” owners can take stock of where they stand by asking themselves the following questions:
- Are my transaction monitoring rules documented thoroughly enough for a qualified third-party validator to replicate them? (Have I included all systematic overrides, such as white lists and learning algorithms?)
- Does my documentation give a comprehensive description of how each scenario is intended to work?
- Are thresholds adequately defined?
- Are the data and parameters required for flagging suspicious transactions described well enough to be replicated?
If the answer to all these questions is yes, then AML solution owners can move into model validation reasonably confident that the state of their documentation will not be a hindrance to the process.