
Articles Tagged with: Model Validation

The Why and How of a Successful SAS-to-Python Model Migration

A growing number of financial institutions are migrating their modeling codebases from SAS to Python. There are many reasons for this, some of which may be unique to the organization in question, but many apply universally. Because of our familiarity not only with both languages but also with the financial models they power, my colleagues and I have had occasion to help several clients with this transition.

Here are some things we’ve learned from this experience and what we believe is driving this change.

Python Popularity

The popularity of Python has skyrocketed in recent years. Its intuitive syntax and a wide array of packages available to aid in development make it one of the most user-friendly programming languages in use today. This accessibility allows users who may not have a coding background to use Python as a gateway into the world of software development and expand their toolbox of professional qualifications.

Companies appreciate this as well. As an open-source language with tons of resources and low overhead costs, Python is also attractive from an expense perspective. A cost-conscious option that resonates with developers and analysts is a win-win when deciding on a codebase.

Note: R is another popular and powerful open-source language for data analytics. Unlike R, however, which is used specifically for statistical analysis, Python serves a much wider range of purposes, including UI design, web development, business applications, and others. This flexibility makes Python attractive to companies seeking synchronicity — the ability for developers to transition seamlessly among teams. R remains popular in academic circles, where a powerful, easy-to-understand tool is needed to perform statistical analysis but additional flexibility is not necessarily required. Hence, we are limiting our discussion here to Python.

Python is not without its drawbacks. As an open-source language, less oversight governs newly added features and packages. Consequently, while updates may be quicker, they are also more prone to error than SAS’s, which are always thoroughly tested prior to release.


Visualization Capabilities

While both languages support data visualization, Python’s packages are generally viewed more favorably than SAS’s, which tend to be on the more basic side. More advanced visuals are available from SAS, but they require the SAS Visual Analytics platform, which comes at an added cost.

Python’s popular visualization packages — matplotlib, plotly, and seaborn, among others — can be leveraged to create powerful and detailed visualizations by simply importing the libraries into the existing codebase.
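
As a quick, hypothetical sketch (the dataset and column names below are illustrative only), producing a presentable chart from an existing pandas workflow typically requires little more than an import and a couple of calls:

```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Hypothetical loan-level summary: unpaid balance by vintage and delinquency status
loans = pd.DataFrame({
    "vintage": ["2018", "2018", "2019", "2019", "2020", "2020"],
    "status": ["Current", "Delinquent"] * 3,
    "upb_mm": [120.5, 4.2, 98.7, 6.1, 143.9, 9.8],  # unpaid principal balance, $MM
})

# One call produces a grouped bar chart; matplotlib handles labels and output
sns.barplot(data=loans, x="vintage", y="upb_mm", hue="status")
plt.ylabel("UPB ($MM)")
plt.title("Unpaid principal balance by vintage and delinquency status")
plt.tight_layout()
plt.show()
```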

Accessibility

SAS is a command-driven software package used for statistical analysis and data visualization. Though a proprietary product most commonly run in Windows environments, it remains one of the most widely used statistical software packages in both industry and academia.

It’s not hard to see why. For financial institutions with large amounts of data, SAS has been an extremely valuable tool. It is a well-documented language, with many online resources and is relatively intuitive to pick up and understand – especially when users have prior experience with SQL. SAS is also one of the few tools with a customer support line.

SAS, however, is a paid service, and at a standalone level, the costs can be quite prohibitive, particularly for smaller companies and start-ups. Complete access to the full breadth of SAS and its supporting tools tends to be available only to larger and more established organizations. These costs are likely fueling its recent drop-off in popularity. New users simply cannot access it as easily as they can Python. While an academic/university version of the software is available free of charge for individual use, its feature set is limited. Therefore, for new users and start-up companies, SAS may not be the best choice, despite being a powerful tool. Additionally, with the expansion and maturity of the variety of packages that Python offers, many of the analytical abilities of Python now rival those of SAS, making it an attractive, cost-effective option even for very large firms.

Future of Tech

Many of the expected advances in data analytics and tech point clearly toward deep learning, machine learning, and artificial intelligence more broadly. These are especially attractive to companies dealing with large amounts of data.

While the technology to analyze data with complete independence is still emerging, Python is better situated to support companies that have begun laying the groundwork for these developments. Python’s rapidly expanding libraries for artificial intelligence and machine learning will likely make future transitions to deep learning algorithms more seamless.

While SAS has made some strides toward adding machine learning and deep learning functionalities to its repertoire, Python remains ahead and consistently ranks as the best language for deep learning and machine learning projects. This creates a symbiotic relationship between the language and its users. Developers use Python to develop ML projects since it is currently best suited for the job, which in turn expands Python’s ML capabilities — a cycle which practically cements Python’s position as the best language for future development in the AI sphere.

Overcoming the Challenges of a SAS-to-Python Migration

SAS-to-Python migrations bring a unique set of challenges that need to be considered. These include the following.

Memory overhead

Server space is getting cheaper but it’s not free. Although Python’s data analytics capabilities rival SAS’s, Python requires more memory overhead. Companies working with extremely large datasets will likely need to factor in the cost of extra server space. These costs are not likely to alter the decision to migrate, but they also should not be overlooked.

The SAS server

All SAS commands are run on SAS’s own server. This tightly controlled ecosystem makes SAS much faster than Python, which does not have the same infrastructure out of the box. Therefore, optimizing Python code can be a significant challenge during SAS-to-Python migrations, particularly when tackling it for the first time.
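
As a rough illustration of what that optimization work often looks like in practice (the column names and threshold logic below are hypothetical, not from any client codebase), a literal row-by-row port of SAS DATA step logic is usually the slow path in Python; rewriting it as vectorized pandas/NumPy operations is frequently the first and biggest win:

```python
import numpy as np
import pandas as pd

# Hypothetical loan table; in a real migration this would come from the ported SAS dataset
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ltv": rng.uniform(0.5, 1.1, 1_000_000),
    "fico": rng.integers(580, 820, 1_000_000),
})

# Literal translation of row-by-row DATA step logic -- correct, but slow in Python
# df["high_risk"] = [int(l > 0.95 and f < 660) for l, f in zip(df["ltv"], df["fico"])]

# Vectorized equivalent -- whole-column operations that run in compiled code
df["high_risk"] = ((df["ltv"] > 0.95) & (df["fico"] < 660)).astype(int)
```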

SAS packages vs Python packages

Calculations performed using SAS packages vs. Python packages can result in differences, which, while generally minuscule, cannot always be ignored. Depending on the type of data, this can pose an issue. And getting an exact match between values calculated in SAS and values calculated in Python may be difficult.

For example, the decimal value 0.1 cannot be stored exactly as a float in either language: SAS approximates it to within roughly 3.552714E-15, while Python stores it as the fraction 3602879701896397/2^55. These approximations do not create noticeable differences in most calculations. But some financial models demand more precision than others, and over the course of multiple calculations that build upon each other, such tiny discrepancies can compound into differences in fractional values that must be reconciled and accounted for.
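
When values fail to tie out between the two systems, it helps to inspect the exact binary value Python is actually carrying. A minimal sketch using only the standard library:

```python
from decimal import Decimal
from fractions import Fraction

x = 0.1
print(Decimal(x))            # 0.1000000000000000055511151231257827021181583404541015625
print(Fraction(x))           # 3602879701896397/36028797018963968, i.e., 3602879701896397 / 2**55
print(x.as_integer_ratio())  # (3602879701896397, 36028797018963968)

# Tiny representation errors compound across chained calculations
print(0.1 + 0.1 + 0.1 == 0.3)        # False
print(abs((0.1 + 0.1 + 0.1) - 0.3))  # ~5.55e-17
```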

Comparing large datasets

One of the most common functions when working with large datasets involves evaluating how they change over time. SAS has a built-in procedure (PROC COMPARE) that compares datasets swiftly and easily as required. Python has packages for this as well; however, these packages are not as robust as their SAS counterpart.
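
For instance, pandas ships a DataFrame.compare method that reports cell-level differences between two aligned datasets, and open-source packages modeled on PROC COMPARE (datacompy, for example) add tolerances and summary reporting. A minimal sketch with hypothetical loan snapshots:

```python
import pandas as pd

# Two hypothetical month-end snapshots of the same loan tape, keyed by loan_id
jan = pd.DataFrame({"loan_id": [101, 102, 103], "upb": [100.0, 250.0, 75.0], "status": ["C", "C", "D"]})
feb = pd.DataFrame({"loan_id": [101, 102, 103], "upb": [99.2, 250.0, 74.1], "status": ["C", "D", "D"]})

# Align on the key, then report only the cells that changed
diff = jan.set_index("loan_id").compare(feb.set_index("loan_id"))
print(diff)  # columns labeled 'self' (jan) and 'other' (feb) for each changed field
```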

Conclusion

In most cases, the benefits of migrating from SAS to Python outweigh the challenges associated with going through the process. The envisioned savings can sometimes be attractive enough to cause firms to trivialize the transition costs. This should be avoided. A successful migration requires taking full account of the obstacles and making plans to mitigate them. Involving the right people from the outset — analysts well versed in both languages who have encountered and worked through the pitfalls — is key.


Changes to Loss Models…and How to Validate Them

So you’re updating all your modeling assumptions. Don’t forget about governance.

Modelers have now been grappling with how COVID-19 should affect assumptions and forecasts for nearly two months. This exercise is raising at least as many questions as it is answering.

No credit model (perhaps no model at all) is immune. Among the latest examples are mortgage servicers having to confront how to bring their forbearance and loss models into alignment with new realities.

These new realities are requiring servicers to model unprecedented macroeconomic conditions in a new and changing regulatory environment. The generous mortgage forbearance provisions ushered in by March’s CARES Act are not tantamount to loan forgiveness. But servicers probably shouldn’t count on reimbursement of their forbearance advances until loan liquidation (irrespective of what form the payoff takes).

The ramifications of these costs, and how servicers should model them, are a central topic to be addressed in a Mortgage Bankers Association webinar on Wednesday, May 13, “Modeling Forbearance Losses in the COVID-19 world” (free for MBA members). RiskSpan CEO Bernadette Kogler will lead a panel consisting of Faith Schwartz, Suhrud Dagli, and Morgan Snyder in a discussion of forbearance’s regulatory implications, the limitations of existing models, and best practices for modeling forbearance-related advances, losses, and operational costs.

Models, of course, are only as good as their underlying data and assumptions. When it comes to forbearance modeling, those assumptions obviously have a lot to do with unemployment, but also with the forbearance take-up rate layered on top of more conventional assumptions around rates of delinquency, cures, modifications, and bankruptcies.

The unique nature of this crisis requires modelers to expand their horizons in search of applicable data. For example, GSE data showing how delinquencies trend in rising unemployment scenarios might need to be supplemented by data from Greek or other European crises to better simulate extraordinarily high unemployment rates. Expense and liquidation timing assumptions will likely require looking at GSE and private-label data from the 2008 crisis. Having reliable assumptions around these is critically important because liquidity issues associated with servicing advances are often more an issue of timing than of anything else.

Model adjustments of the magnitude necessary to align them with current conditions almost certainly qualify as “material changes” and present a unique set of challenges to model validators. In addition to confronting an expanded workload brought on by having to re-validate models that might have been validated as recently as a few months ago, validators must also effectively challenge the new assumptions themselves. This will likely prove challenging absent historical context.

RiskSpan’s David Andrukonis will address many of these challenges—particularly as they relate to CECL modeling—as he participates in a free webinar, “Model Risk Management and the Impacts of COVID-19,” sponsored by the Risk Management Association. Perhaps fittingly, this webinar will run concurrently with the MBA webinar discussed above.

As is always the case, the smoothness of these model-change validations will depend on the lengths to which modelers are willing to go to thoroughly document their justifications for the new assumptions. This becomes particularly important when introducing assumptions that significantly differ from those that have been used previously. While it will not be difficult to defend the need for changes, justifying the individual changes themselves will prove more challenging. To this end, meticulously documenting every step of feature selection during the modeling process is critical not only in getting to a reliable model but also in ensuring an efficient validation process.

Documenting what they’re doing and why they’re doing it is no modeler’s favorite part of the job—particularly when operating in crisis mode and just trying to stand up a workable solution as quickly as possible. But applying assumptions that have never been used before always attracts increased scrutiny. Modelers will need to get into the habit of memorializing not only the decisions made regarding data and assumptions, but also the other options considered and why those options were ultimately passed over.

Documenting this decision-making process is far easier at the time it happens, while the details are fresh in a modeler’s mind, than several months down the road when people inevitably start probing.

Invest in the “ounce of prevention” now. You’ll thank yourself when model validation comes knocking.


Webinar: Applying Model Validation Principles to Anti-Money Laundering Tools


This webinar will explore some of the more efficient ways we have encountered for applying model validation principles to AML tools, including:

  • Ensuring that the rationale supporting rules and thresholds is sufficiently documented 
  • Applying above-the-line and below-the-line testing to an effective benchmarking regime 
  • Assessing the relevance of rules that are seldom triggered or frequently overridden 


About The Hosts

Timothy Willis

Managing Director – RiskSpan

Timothy Willis is head of RiskSpan’s Governance and Controls Practice, with a particular focus on model risk management. He is an experienced engagement manager, financial model validator and mortgage industry analyst who regularly authors and oversees the delivery of technical reports tailored to executive management and regulatory audiences.

Tim has directed projects validating virtually every type of model used by banks. He has also developed business requirements and improved processes for commercial banks of all sizes, mortgage banks, mortgage servicers, Federal Home Loan Banks, rating agencies, Fannie Mae, Freddie Mac, and U.S. Government agencies.

Susan Devine, Cams, CPA

Senior Consultant – Third Pillar Consulting

Susan has more than twenty years of experience as an independent consultant providing business analysis, financial model validations, anti-money laundering reviews in compliance with the Bank Secrecy Act, and technical writing to government and commercial entities. Experience includes developing and documenting business processes, business requirements, security requirements, computer systems, networks, systems development lifecycle activities, and financial models. Experience related to business processes includes business process reviews, security plans in compliance with NIST and GISRA, Sarbanes Oxley compliance documents, Dodd-Frank Annual Stress Testing, functional and technical requirements for application development projects, policies, standards, and operating procedures for business and technology processes.

Chris Marsten

Financial and Data Analyst – RiskSpan

Chris is a financial and data analyst at RiskSpan where he develops automated analytics and reporting for client loan portfolios and provides data analysis in support of model validation projects. He also possesses extensive experience writing ETL code and automating manual processes. Prior to coming to RiskSpan, he developed and managed models for detecting money laundering and terrorist activity for Capital One Financial Corporation, where he also forecasted high-risk customer volumes and created an alert investigation tool for identifying suspicious customers and transactions.


Webinar: Building and Running an Efficient Model Governance Program


Join RiskSpan Model Governance Expert Tim Willis for a webinar about running an efficient model governance program. The webinar will cover the essential elements of a model risk management policy, including how to devise policies for open-source models and other applications that are not easily categorized. It will also cover best practices for building and maintaining a model inventory, along with tips for assigning appropriate risk ratings to models and determining validation frequency.


About The Host

Timothy Willis

Managing Director – RiskSpan

Timothy Willis is head of RiskSpan’s Governance and Controls Practice, with a particular focus on model risk management. He is an experienced engagement manager, financial model validator and mortgage industry analyst who regularly authors and oversees the delivery of technical reports tailored to executive management and regulatory audiences.


Webinar: Managing Down Model Validation Costs


Learn how to make your model validation budget go further. In this webinar, you’ll learn about:

  • Balancing internal and external resources 
  • Prioritizing the models that carry the most risk 
  • Documenting in a way that facilitates the validation process 


About The Hosts

Timothy Willis

Managing Director – RiskSpan

Timothy Willis is an experienced engagement manager, financial model validator and mortgage industry analyst who regularly authors and oversees the delivery of technical reports tailored to executive management and regulatory audiences. Tim has directed projects validating virtually every type of model used by banks. He has also developed business requirements and improved processes for commercial banks of all sizes, mortgage banks, mortgage servicers, Federal Home Loan Banks, rating agencies, Fannie Mae, Freddie Mac, and U.S. Government agencies.

Nick Young

Director of Model Risk Management

Nick Young has more than ten years of experience as a quantitative analyst and economist. At RiskSpan, he performs model validation, development and governance on a wide variety of models including those used for Basel capital planning, reserve/impairment, Asset Liability Management (ALM), CCAR/DFAST stress testing, credit origination, default, prepayment, market risk, Anti-Money Laundering (AML), fair lending, fraud and account management.


eBook: A Validator’s Guide to Model Risk Management


Learn from RiskSpan model validation experts what constitutes a model, considerations for validating vendor models, how to prepare, how to determine scope, comparisons of performance metrics, and considerations for evaluating model inputs.


Commercial Bank: CECL Model Validation

A commercial bank required an independent validation of its CECL models. The models are embedded in three platforms (Trepp, Impairment Studio, and EVOLV) and include the following:

  • Trepp Default Model (Trepp DM) is used by the Bank to estimate the PD, LGD and EL of the CRE portfolio
  • Moody’s ImpairmentStudio – Lifetime Loss Rate (LLR) Model is used to calculate the Lifetime Loss Rate for the C&I portfolio
  • EVOLV – Lifetime Loss Rate (LLR) model is used to calculate the Lifetime Loss Rate for Capital Call and Venture Capital loans within the Commercial and Industrial (C&I) segment, Non-rated Commercial loans, and Consumer and Municipal loans
  • EVOLV – Base Loss Rate (BLR) model is used to calculate the quantitative allowance for 1-4 Family commercial loans and Personal loans for commercial use within the C&I segment, as well as Residential loans, HELOCs, and Indirect vehicle loans.

The Solution

Because the CECL models are embedded into three platforms, RiskSpan conducted an independent, comprehensive validation of all three platforms.

Our validation included components typical of a full-scope model validation, focusing on a conceptual soundness review, process verification and outcomes analysis.

Deliverables 

RiskSpan was given access to the model platforms and workpapers, along with the models’ development documentation and weekly Q&A sessions with the model owners.

Our review evaluated:

i. the business requirements and purpose of the model, and the metrics used by the developer to select the best model and evaluate its success in meeting those requirements.

ii. the identification and justification for

  (a) any theoretical basis for the model structure;

  (b) the use of specific developmental data;

  (c) the use of any statistical or econometric technique to estimate the model; and

  (d) the criteria used to identify and select the best model among alternatives.

iii. the reasonableness of model-development decisions, documented assumptions, data adjustments, and model-performance criteria as measured at the time of development.

iv. process verification to determine the accuracy of data transcription, adjustment, transformation, and model code.

RiskSpan produced a written validation report detailing its validation assessments, tests, and findings, and providing a summary assessment of the suitability of the models for their intended uses as an input to the bank’s CECL process, based upon the Conceptual Soundness Review and Process Verification.


Regional Bank: AML/BSA Model Validation

A large regional bank required a qualified, independent third party to perform risk-based procedures designed to provide reasonable assurance that its FCRM anti-money laundering system’s transaction monitoring, customer risk rating, and watch list filtering applications were functioning as designed and intended.

The Solution

RiskSpan reviewed existing materials, past audits and results, testing protocols and all documentation related to the bank’s model risk management standards, model setup and execution. We inventoried all model data sources, scoring processes and outputs related to the AML system.

The solution consisted of testing each of the five model segments: Design and Development; Input Processing; Implementation; Output and Use; and Performance.

The solution also quantified risk and exposure of identified gaps and limitations and presented sound industry practices and resolutions. 

Deliverables

  • A sustainable and robust transaction monitoring tuning methodology, which documented the bank’s approach, processes to be executed, frequency of execution, and the governance structure for executing tuning and optimization in the AML model. This included collecting and assessing previous regulatory feedback.
  • A framework that included a formal, documented, consistent process for sampling and analysis procedures to evaluate the AML system’s scenarios and change control documentation.
  • A process for managing model risk consistent with the bank’s examiner expectations and business needs.


Model Validation Programs – Optimizing Value in Model Risk Groups

Watch RiskSpan Managing Director Tim Willis discuss how to optimize model validation programs. RiskSpan’s model risk management practice has experience both building and validating models, giving the firm unique expertise to provide high-quality validations without diving into activities and exercises of marginal value.

 


 


Here Come the CECL Models: What Model Validators Need to Know

As it turns out, model validation managers at regional banks didn’t get much time to contemplate what they would do with all their newly discovered free time. Passage of the Economic Growth, Regulatory Relief, and Consumer Protection Act appears to have relieved many model validators of the annual DFAST burden. But as one class of models exits the inventory, a new class enters—CECL models.

Banks everywhere are nearing the end of a multi-year scramble to implement a raft of new credit models designed to forecast life-of-loan performance for the purpose of determining appropriate credit-loss allowances under the Financial Accounting Standards Board’s new Current Expected Credit Loss (CECL) standard, which takes full effect in 2020 for public filers and 2021 for others.

The number of new models CECL adds to each bank’s inventory will depend on the diversity of asset portfolios. More asset classes and more segmentation will mean more models to validate. Generally model risk managers should count on having to validate at least one CECL model for every loan and debt security type (residential mortgage, CRE, plus all the various subcategories of consumer and C&I loans) plus potentially any challenger models the bank may have developed.

In many respects, tomorrow’s CECL model validations will simply replace today’s allowance for loan and lease losses (ALLL) model validations. But CECL models differ from traditional allowance models. Under the current standard, allowance models typically forecast losses over a one-to-two-year horizon. CECL requires a life-of-loan forecast, and a model’s inputs are explicitly constrained by the standard. Accounting rules also dictate how a bank may translate the modeled performance of a financial asset (the CECL model’s outputs) into an allowance. Model validators need to be just as familiar with the standards governing how these inputs and outputs are handled as they are with the conceptual soundness and mathematical theory of the credit models themselves.

CECL Model Inputs – And the Magic of Mean Reversion

Not unlike DFAST models, CECL models rely on a combination of loan-level characteristics and macroeconomic assumptions. Macroeconomic assumptions are problematic with a life-of-loan credit loss model (particularly with long-lived assets—mortgages, for instance) because no one can reasonably forecast what the economy is going to look like six years from now. (No one really knows what it will look like six months from now, either, but we need to start somewhere.) The CECL standard accounts for this reality by requiring modelers to consider macroeconomic input assumptions in two separate phases: 1) a “reasonable and supportable” forecast covering the time frame over which the entity can make or obtain such a forecast (two or three years is emerging as common practice for this time frame), and 2) a “mean reversion” forecast based on long-term historical averages for the out years. As an alternative to mean reverting by the inputs, entities may instead bypass their models in the out years and revert to long-term average performance outcomes by the relevant loan characteristics.
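
A simplified sketch of the first approach (mean reverting by the inputs) might look like the following, where the specific unemployment-rate path, long-run average, and transition length are hypothetical placeholders rather than recommended assumptions:

```python
import numpy as np

# "Reasonable and supportable" quarterly unemployment-rate forecast (two years, %)
rs_forecast = np.array([4.5, 4.8, 5.1, 5.3, 5.4, 5.4, 5.3, 5.2])
long_run_avg = 5.8     # long-term historical average (%)
reversion_qtrs = 4     # quarters over which the input transitions to the mean
out_year_qtrs = 20     # remaining quarters of the life-of-loan horizon

# Linear transition from the last supportable point to the long-run mean,
# then hold the mean for the out years before feeding the full path to the credit model
transition = np.linspace(rs_forecast[-1], long_run_avg, reversion_qtrs + 1)[1:]
out_years = np.full(out_year_qtrs, long_run_avg)
macro_path = np.concatenate([rs_forecast, transition, out_years])
```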

Assessing these assumptions (and others like them) requires a model validator to simultaneously wear a “conceptual soundness” testing hat and an “accounting policy” compliance hat. Because the purpose of the CECL model is to produce an accounting answer and satisfy an accounting requirement, what can validators reasonably conclude when confronted with an assumption that may seem unsound from a purely statistical point of view but nevertheless satisfies the accounting standard?

Taking the mean reversion requirement as an example, the projected performance of loans and securities beyond the “reasonable and supportable” period is permitted to revert to the mean in one of two ways: 1) modelers can feed long-term history into the model by supplying average values for macroeconomic inputs, allowing modeled results to revert to long-term means in that way, or 2) modelers can mean revert “by the outputs” – bypassing the model and populating the remainder of the forecast with long-term average performance outcomes (prepayment, default, recovery and/or loss rates depending on the methodology). Either of these approaches could conceivably result in a modeler relying on assumptions that may be defensible from an accounting perspective despite being statistically dubious, but the first is particularly likely to raise a validator’s eyebrow. The loss rates that a model will predict when fed “average” macroeconomic input assumptions are always going to be uncharacteristically low. (Because credit losses are generally large in bad macroeconomic environments and low in average and good environments, long-term average credit losses are higher than the credit losses that occur during average environments. A model tuned to this reality—and fed one path of “average” macroeconomic inputs—will return credit losses substantially lower than long-term average credit losses.) A credit risk modeler is likely to think that these are not particularly realistic projections, but an auditor following the letter of the standard may choose not to find any fault with them. In such situations, validators need to fall somewhere in between these two extremes—keeping in mind that the underlying purpose of CECL models is to reasonably fulfill an accounting requirement—before hastily issuing a series of high-risk validation findings.

CECL Model Outputs: What are they?

CECL models differ from some other models in that the allowance (the figure that modelers are ultimately tasked with getting to) is not itself a direct output of the underlying credit models being validated. The expected losses that emerge from the model must be subject to a further calculation in order to arrive at the appropriate allowance figure. Whether these subsequent calculations are considered within the scope of a CECL model validation is ultimately going to be an institutional policy question, but it stands to reason that they would be.

Under the CECL standard, banks will have two alternatives for calculating the allowance for credit losses: 1) the allowance can be set equal to the sum of the expected credit losses (as projected by the model), or 2) the allowance can be set equal to the cost basis of the loan minus the present value of expected cash flows. While a validator would theoretically not be in a position to comment on whether the selected approach is better or worse than the alternative, principles of process verification would dictate that the validator ought to determine whether the selected approach is consistent with internal policy and that it was computed accurately.
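
A toy, single-loan illustration of the two alternatives (every figure below is hypothetical and chosen only to show the mechanics):

```python
# Hypothetical single-loan inputs
amortized_cost = 100_000.00
expected_losses = [800.0, 600.0, 400.0]                # modeled credit losses by year
expected_cashflows = [36_000.0, 35_500.0, 34_800.0]    # prepay- and default-adjusted cash flows by year
eir = 0.045                                            # effective interest rate used for discounting

# Alternative 1: allowance equals the sum of modeled expected credit losses
allowance_alt1 = sum(expected_losses)

# Alternative 2: allowance equals amortized cost minus the present value of expected cash flows
pv_cashflows = sum(cf / (1 + eir) ** (t + 1) for t, cf in enumerate(expected_cashflows))
allowance_alt2 = amortized_cost - pv_cashflows

print(f"Alt 1 (sum of expected losses): {allowance_alt1:,.2f}")
print(f"Alt 2 (cost basis less PV of cash flows): {allowance_alt2:,.2f}")
```

The two alternatives will not generally produce identical numbers, which is one more reason process verification should confirm that the approach actually computed matches the approach documented in policy.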

When Policy Trumps Statistics

The selection of a mean reversion approach is not the only area in which a modeler may make a statistically dubious choice in favor of complying with accounting policy.

Discount Rates

Translating expected losses into an allowance using the present-value-of-future-cash-flows approach (option 2 above) obviously requires selecting an appropriate discount rate. What should it be? The standard stipulates the use of the financial asset’s Effective Interest Rate (or “yield,” i.e., the rate of return that equates an instrument’s cash flows with its amortized cost basis). Subsequent accounting guidance affords quite a bit of flexibility in how this rate is calculated. Institutions may use the yield that equates contractual cash flows with the amortized cost basis (we can call this “contractual yield”), or the rate of return that equates cash flows adjusted for prepayment expectations with the cost basis (“prepayment-adjusted yield”).
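
As a rough sketch of the distinction (the cash flows, purchase price, and prepayment assumptions below are hypothetical), both yields solve the same equation, namely finding the rate that discounts a set of cash flows back to the amortized cost basis; they simply use different cash flows:

```python
from typing import Sequence

def yield_to_basis(cashflows: Sequence[float], cost_basis: float,
                   lo: float = -0.5, hi: float = 1.0, iters: int = 200) -> float:
    """Bisection solve for the periodic rate that equates the PV of `cashflows` with `cost_basis`."""
    def pv(rate: float) -> float:
        return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cashflows))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if pv(mid) > cost_basis:   # PV too high means the trial rate is too low
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical instrument: 4% coupon, 10-year bullet, purchased at 102 per 100 of principal
contractual_cfs = [4.0] * 9 + [104.0]
# Same instrument with expected prepayments pulling principal forward (illustrative numbers)
prepay_adj_cfs = [14.0, 23.6, 22.2, 20.9, 49.8]

print(yield_to_basis(contractual_cfs, 102.0))   # "contractual yield"
print(yield_to_basis(prepay_adj_cfs, 102.0))    # "prepayment-adjusted yield"
```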

The use of the contractual yield (which has been adjusted for neither prepayments nor credit events) to discount cash flows that have been adjusted for both prepayments and credit events will allow the impact of prepayment risk to be commingled with the allowance number. For any instruments where the cost basis is greater than the unpaid principal balance (a mortgage instrument purchased at 102, for instance), prepayment risk will exacerbate the allowance. For any instruments where the cost basis is less than the unpaid principal balance, accelerations in repayment will offset the allowance. This flaw has been documented by FASB staff, with the FASB Board subsequently allowing but not requiring the use of a prepay-adjusted yield.

Multiple Scenarios

The accounting standard neither prohibits nor requires the use of multiple scenarios to forecast credit losses. Using multiple scenarios is likely more supportable from a statistical and model validation perspective, but it may be challenging for a validator to determine whether the various scenarios have been weighted properly to arrive at the correct, blended, “expected” outcome.
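
Verifying the blend itself is straightforward arithmetic; the harder question is whether the weights are defensible. A minimal sketch with hypothetical scenario results and weights:

```python
# Hypothetical scenario-level lifetime loss estimates and probability weights
scenario_losses = {"baseline": 1_200_000.0, "adverse": 2_900_000.0, "severe": 5_400_000.0}
weights = {"baseline": 0.60, "adverse": 0.30, "severe": 0.10}

assert abs(sum(weights.values()) - 1.0) < 1e-9, "scenario weights should sum to 1"
blended_expected_loss = sum(scenario_losses[s] * weights[s] for s in weights)
print(f"Probability-weighted expected credit loss: {blended_expected_loss:,.0f}")
```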

Macroeconomic Assumptions During the “Reasonable and Supportable” Period

Attempting to quantitatively support the macro assumptions during the “reasonable and supportable” forecast window (usually two to three years) is likely to be problematic both for the modeler and the validator. Such forecasts tend to be more art than science, and validators are likely best off benchmarking them against what other institutions are using rather than attempting to justify them using elaborately contrived quantitative methods. The data most likely to be used may turn out to be simply the data that is available. Validators must balance skepticism of such approaches with pragmatism. Modelers have to use something, and they can only use the data they have.

Internal Data vs. Industry Data

The standard allows for modeling using internal data or industry proxy data. Banks often operate under the dogma that internal data (when available) is always preferable to industry data. This seems reasonable on its face, but it only really makes sense for institutions with internal data that is sufficiently robust in terms of quantity and history. And the threshold for what constitutes “sufficiently robust” is not always obvious. Is one business cycle long enough? Is 10,000 loans enough? These questions do not have hard and fast answers.

———-

Many questions pertaining to CECL model validations do not yet have hard and fast answers. In some cases, the answers will vary by institution as different banks adopt different policies. In other cases, industry best practices will doubtless emerge over time. For the rest, model validators will need to rely on judgment, sometimes having to balance statistical principles with accounting policy realities. The first CECL model validations are around the corner. It’s not too early to begin thinking about how to address these questions.

