Blog Archives

RiskSpan Edge Platform API

The RiskSpan Edge Platform API enables direct access to all data on the RS Edge Platform, including both aggregate analytics and loan- and pool-level data. Standard licensed users can build queries in our browser-based graphical interface, while the API provides a channel for power users with programming skills (Python, R, even Excel) and for production systems that incorporate RS Edge Platform components as part of their Service-Oriented Architecture (SOA).
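For illustration, here is a minimal sketch of how a power user might pull data from the API with Python's requests library. The base URL, endpoint path, parameters, and authentication scheme shown are assumptions for demonstration only, not documented API routes; licensed users should consult the Edge API documentation for the actual interface.

```python
# Hypothetical sketch of pulling aggregate analytics from the Edge API with Python.
# The endpoint path, query parameters, and authentication header below are
# illustrative assumptions, not documented API routes.
import requests

BASE_URL = "https://api.riskspan.com/edge/v1"   # assumed base URL
API_TOKEN = "YOUR_API_TOKEN"                    # assumed token issued with an Edge license

def get_aggregate_analytics(dataset, start_date, end_date):
    """Request aggregate analytics for a dataset over a date range."""
    response = requests.get(
        f"{BASE_URL}/analytics/{dataset}",
        params={"start": start_date, "end": end_date},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    cpr_by_month = get_aggregate_analytics("agency-crt", "2018-01-01", "2018-08-31")
    print(cpr_by_month)
```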

Watch RiskSpan Director LC Yarnelle explain the Edge API in this video!



CRT Deal Monitor: Understanding When Credit Becomes Risky

This analysis tracks several metrics related to deal performance and credit profile, putting them into a historical context by comparing the same metrics for recent-vintage deals against those of ‘similar’ cohorts in the time leading up to the 2008 housing crisis. You’ll see how credit metrics are trending today and understand the significance of today’s shifts in the context of historical data. Some of the charts in this post have interactive features, so click around! We’ll be tweaking the analysis and adding new metrics in subsequent months. Please shoot us an email if you have an idea for other metrics you’d like us to track.

Highlights

  • Performance metrics signal steadily increasing credit risk, but no cause for alarm.
    • We’re starting to see the hurricane-related (2017 Harvey and Irma) delinquency spikes subside in the deal data. Investors should expect a similar trend in 2019 due to Hurricane Florence.
    • The overall percentage of delinquent loans is increasing steadily due to the natural age ramp of delinquency rates and the ramp-up of the program over the last 5 years.
    • Overall delinquency levels are still far lower than historical rates.
    • While the share of delinquency is increasing, loans that go delinquent are ending up in default at a lower rate than before.
  • Deal Profiles are becoming riskier as new GSE acquisitions include higher-DTI business.
    • It’s no secret that both GSEs started acquiring a lot of high-DTI loans (for Fannie this moved from around 16% of MBS issuance in Q2 2017 to 30% of issuance as of Q2 this year). We’re starting to see a shift in CRT deal profiles as these loans are making their way into CRT issuance.
    • The credit profile chart toward the end of this post compares the credit profiles of recently issued deals with those of the most recent three months of MBS issuance data to give you a sense of the deal profiles we’re likely to see over the next 3 to 9 months. We also compare these recently issued deals to a similar cohort from 2006 to give some perspective on how much the credit profile has improved since the housing crisis.
    • RiskSpan’s Vintage Quality Index reflects an overall loosening of credit standards, reminiscent of 2003 levels, driven by this increase in high-DTI originations.
  • Fannie and Freddie have fundamental differences in their data disclosures for CAS and STACR.
    • Delinquency rates and loan performance all appear slightly worse for Fannie Mae in both the deal and historical data.
    • Obvious differences in reporting (e.g., STACR reporting a delinquent status in a terminal month) have been corrected in this analysis, but some less obvious differences in reporting between the GSEs may persist.
    • We suspect there is something fundamentally different about how Freddie Mac reports delinquency status—perhaps related to cleaning servicing reporting errors, cleaning hurricane delinquencies, or the way servicing transfers are handled in the data. We are continuing our research on this front and hope to follow up with another post to explain these anomalies.

The exceptionally low rate of delinquency, default, and loss among CRT deals at the moment makes analyzing their credit-risk characteristics relatively boring. Loans in any newly issued deal have already seen between 6 and 12 months of home price growth, and so if the economy remains steady for the first 6 to 12 months after issuance, then that deal is pretty much in the clear from a risk perspective. The danger comes if home prices drift downward right after deal issuance. Our aim with this analysis is to signal when a shift may be occurring in the credit risk inherent in CRT deals. Many data points related to the overall economy and home prices are available to investors seeking to answer this question. This analysis focuses on what the Agency CRT data—both the deal data and the historical performance datasets—can tell us about the health of the housing market and the potential risks associated with the next deals that are issued.

Current Performance and Credit Metrics

Delinquency Trends

The simplest metric we track is the share of loans across all deals that is 60+ days past due (DPD). The charts below compare STACR (Freddie) vs. CAS (Fannie), with separate charts for high-LTV deals (G2 for CAS and HQA for STACR) vs. low-LTV deals (G1 for CAS and DNA for STACR). Both time series show a steadily increasing share of delinquent loans. This slight upward trend is related to the natural aging curve of delinquency and the ramp-up of the CRT program. Both time series show a significant spike in delinquency around January of this year due to the 2017 hurricane season. Most of these delinquent loans are expected to eventually cure or prepay. For comparative purposes, we include a historical time series of the share of loans 60+ DPD for each LTV group. These charts are derived from the Fannie Mae and Freddie Mac loan-level performance datasets. Comparatively, today’s deal performance is much better than even the pre-2006 era. You’ll note the systematically higher delinquency rates of CAS deals. We suspect this is due to reporting differences rather than actual differences in deal performance. We’ll continue to investigate and report back on our findings.
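As a rough illustration of the metric itself, the sketch below computes a 60+ DPD share by month with pandas, assuming a loan-level performance extract with one row per loan per reporting month. The file name, column names, and status encoding are hypothetical.

```python
# Minimal pandas sketch of the 60+ DPD share metric, assuming a loan-level
# performance table with one row per loan per reporting month and a
# delinquency-status column already normalized into 30-day buckets.
import pandas as pd

perf = pd.read_csv("crt_performance.csv")  # hypothetical extract of deal performance data

# Flag loans that are 60 or more days past due (2 = 60-89 DPD, 3 = 90+ DPD, ...).
perf["dpd60_plus"] = perf["delinquency_status"] >= 2

# Share of active loans 60+ DPD, by reporting month and deal series (STACR vs. CAS).
dq_share = (
    perf.groupby(["reporting_month", "deal_series"])["dpd60_plus"]
        .mean()
        .mul(100)
        .rename("pct_60plus_dpd")
        .reset_index()
)
print(dq_share.tail())
```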

Delinquency Outcome Monitoring

While delinquency rates might be trending up, loans that roll to 60 DPD are ultimately defaulting at lower and lower rates. The charts below track the status of loans that were 60+ DPD. Each bar represents the population of loans that were 60+ DPD exactly 6 months prior to the x-axis date. Over time, we see growing 60-DPD and 60+ DPD groups and a shrinking Default group. This indicates that a majority of delinquent loans wind up curing or prepaying rather than proceeding to default. The choppiness and high default rates in the first few observations of the data are related to the very low counts of delinquent loans as the CRT program ramped up. The following chart repeats the 60-DPD delinquency analysis for the Freddie Mac Loan Level Performance dataset leading up to and following the housing crisis. (The Fannie Mae loan-level performance set yields a nearly identical chart.) Note how many more loans in these cohorts remained delinquent (rather than curing or defaulting) relative to the more recent CRT loans.
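The cohort logic behind these charts can be sketched under the same hypothetical schema as above: take every loan that was 60+ DPD in a given month and look up its status exactly six months later.

```python
# Sketch of the six-month outcome analysis, using hypothetical column names:
# for loans 60+ DPD in a given month, what is their status six months later?
import pandas as pd

perf = pd.read_csv("crt_performance.csv", parse_dates=["reporting_month"])

snapshot = perf[perf["delinquency_status"] >= 2][["loan_id", "reporting_month"]].copy()
snapshot["outcome_month"] = snapshot["reporting_month"] + pd.DateOffset(months=6)

outcomes = snapshot.merge(
    perf[["loan_id", "reporting_month", "status_label"]],
    left_on=["loan_id", "outcome_month"],
    right_on=["loan_id", "reporting_month"],
    how="left",
    suffixes=("", "_later"),
)
# Loans that drop out of the data are lumped together here as terminated;
# a fuller analysis would split prepayments from defaults using termination codes.
outcomes["status_label"] = outcomes["status_label"].fillna("Terminated")

dist = (
    outcomes.groupby(["outcome_month", "status_label"])
            .size()
            .groupby(level=0)
            .transform(lambda s: s / s.sum())
)
print(dist.unstack().tail())
```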

Vintage Quality Index

RiskSpan’s Vintage Quality Index (VQI) reflects a reversion to the looser underwriting standards of the early 2000s as a result of the GSEs’ expansion of high-DTI lending. RiskSpan introduced the VQI in 2015 as a way of quantifying the underwriting environment of a particular vintage of mortgage originations. We use the metric as an empirically grounded way to control for vintage differences within our credit model.

VQI-History

While both GSEs increased high-DTI lending in 2017, it’s worth noting that Fannie Mae saw a relatively larger surge in loans with DTIs greater than 43%. The chart below shows the share of loans backing MBS with DTI > 43. We use the loan-level MBS issuance data to track what’s being originated and acquired by the GSEs because it is the timeliest data source available. CRT deals are issued with loans that are between 6 and 20 months seasoned, and so tracking MBS issuance provides a preview of what will end up in the next cohort of deals.

High DTI Share

Deal Profile Comparison

The tables below compare the credit profiles of recently issued deals. We focus on the key drivers of credit risk, highlighting the comparatively riskier features of a deal. Each table separates the high-LTV (80%+) deals from the low-LTV deals (60%-80%). We add two additional columns for comparison purposes. The first is the ‘Coming Cohort,’ which is meant to give an indication of what upcoming deal profiles will look like. The data in this column is derived from the most recent three months of MBS issuance loan-level data, controlling for the LTV group. These are newly originated and acquired by the GSEs—considering that CRT deals are generally issued with an average loan age between 6 and 15 months, these are the loans that will most likely wind up in future CRT transactions. The second comparison cohort consists of 2006 originations in the historical performance datasets (Fannie and Freddie combined), controlling for the LTV group. We supply this comparison as context for the level of risk that was associated with one of the worst-performing cohorts.

The latest CAS deals—both high- and low-LTV—show the impact of increased >43% DTI loan acquisitions. Until recently, STACR deals typically had a higher share of high-DTI loans, but the latest CAS deals have surpassed STACR in this measure, with nearly 30% of their loans having DTI ratios in excess of 43%. CAS high-LTV deals carry more risk in LTV metrics, such as the percentage of loans with a CLTV > 90 or CLTV > 95. However, STACR includes a greater share of loans with a less-than-standard level of mortgage insurance, which would provide less loss protection to investors in the event of a default.

Credit Profile

Low-LTV deals generally appear more evenly matched in terms of risk factors when comparing STACR and CAS. STACR does display the same DTI imbalance as seen in the high-LTV deals, but that may change as the high-DTI group makes its way into deals.

Low-LTV-Deal-Credit-Profile-Most-Recent-Deals

Deal Tracking Reports

Please note that defaults are reported on a delay for both GSEs, and so while we have CPR numbers available for August, CDR numbers are not provided because they are not fully populated yet. Fannie Mae CAS default data is delayed an additional month relative to STACR. We’ve left loss and severity metrics blank for fixed-loss deals.

STACR-Deals-over-the-past-3-months

CAS-Deals-from-the-past-3-months



RiskSpan Adds Home Equity Conversion Mortgage Data to Edge Platform

ARLINGTON, VA, September 12, 2018 — Leading mortgage data analytics provider RiskSpan added Home Equity Conversion Mortgage (HECM) Data to the library of datasets available through its RS Edge Platform. The dataset includes over half a billion records from Ginnie Mae that will expand the RS Edge Platform’s critical applications in Reverse-Mortgage Analysis. RS Edge is a SaaS platform that integrates normalized data, predictive models and complex scenario analytics for customers in the capital markets, commercial banking, and insurance industries. The Edge Platform solves the hardest data management and analytical problem – affordable off-the-shelf integration of clean data and reliable models.

The HECM dataset is the latest in a series of recent additions to the RS Edge data libraries. The platform now holds over five billion records across decades of collection and is the solution of choice for whole loan and securities analytics. “RiskSpan’s data strategy is simple: provide our customers with normalized, tested, analysis-ready data that their enterprise modeling and analytics teams can leverage for faster, more reliable insight. We do the grunt work so that you don’t have to,” said Patrick Doherty, RiskSpan’s Chief Operating Officer.

The HECM dataset has been subjected to RiskSpan’s comprehensive data normalization process for simpler analysis in RS Edge. Edge users will be able to drill down to snapshot and historical data available through the UI. Users will also be able to benchmark the HECM data against their own portfolio and leverage it to develop and deploy more sophisticated credit models. RiskSpan’s Edge API also makes it easier than ever to access large datasets for analytics, model development and benchmarking. Major quant teams that prefer APIs now have access to normalized and validated data to run scenario analytics, stress testing or shock analysis. RiskSpan makes data available through its proprietary instance of RStudio and Python.



Data-as-a-Service – Credit Risk Transfer Data

Watch RiskSpan Managing Director Janet Jozwik explain our recent Credit Risk Transfer data (CRT) additions to the RS Edge Platform.

Each dataset has been normalized to the same standard for simpler analysis in RS Edge, enabling users to compare GSE performance with just a few clicks. The data has also been enhanced to include helpful variables, such as mark-to-market loan-to-value ratios based on the most granular house price indexes provided by the Federal Housing Finance Agency. 
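As an illustration of this enhancement, the sketch below shows one common way a mark-to-market LTV can be derived from origination fields and a house price index level; the function and field names are hypothetical, and the actual Edge implementation may differ.

```python
# Illustrative mark-to-market LTV calculation of the kind described above,
# assuming you have the loan's original UPB, original LTV, current UPB, and an
# FHFA house price index level for the property's geography at origination and
# at the current reporting date. All names and numbers are hypothetical.
def mark_to_market_ltv(current_upb, orig_upb, orig_ltv, hpi_at_orig, hpi_current):
    """Estimate the current LTV by growing the original property value with the HPI."""
    original_value = orig_upb / (orig_ltv / 100.0)            # back out the property value
    current_value = original_value * (hpi_current / hpi_at_orig)
    return 100.0 * current_upb / current_value

# Example: a loan originated at 80 LTV whose local market has appreciated ~20%
print(round(mark_to_market_ltv(190_000, 200_000, 80, 250.0, 300.0), 1))  # ~63.3
```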



RiskSpan to Offer Credit Risk Transfer Data Through Edge Platform

ARLINGTON, VA, September 6, 2018 — RiskSpan announced today its rollout of Credit Risk Transfer (CRT) datasets available through its RS Edge Platform. The datasets include over seventy million Agency loans that will expand the RS Edge platform’s data library and add key enhancements for credit risk analysis.  RS Edge is a SaaS platform that integrates normalized data, predictive models and complex scenario analytics for customers in the capital markets, commercial banking, and insurance industries. The Edge Platform solves the hardest data management and analytical problem – affordable off-the-shelf integration of clean data and reliable models.  New additions to the RS Edge Data Library will include key GSE Loan Level Performance datasets going back eighteen years. RiskSpan is also adding Fannie Mae’s Connecticut Avenue Securities (CAS) and Credit Insurance Risk Transfer (CIRT) datasets as well as the Freddie Mac Structured Agency Credit Risk (STACR) datasets.  

Each dataset has been normalized to the same standard for simpler analysis in RS Edge. This will allow users to compare GSE performance with just a few clicks. The data has also been enhanced to include helpful variables, such as mark-to-market loan-to-value ratios based on the most granular house price indexes provided by the Federal Housing Finance Agency. Managing Director and Co-Head of Quantitative Analytics Janet Jozwik said of the new CRT data, “Our data library is a great, cost-effective resource that can be leveraged to build models, understand assumptions around losses on different vintages, and benchmark performance of their own portfolio against the wider universe.” RiskSpan’s Edge API also makes it easier than ever to access large datasets for analytics, model development and benchmarking. Major quant teams that prefer APIs now have access to normalized and validated data to run scenario analytics, stress testing or shock analysis. RiskSpan makes data available through its proprietary instance of RStudio and Python.



Big Companies; Big Data Issues

Data issues plague organizations of all sorts and sizes. But generally, the bigger the dataset, and the more transformations the data goes through, the greater the likelihood of problems. Organizations take in data from many different sources, including social media, third-party vendors and other structured and unstructured origins, resulting in massive and complex data storage and management challenges. This post presents ideas to keep in mind when seeking to address these challenges.

First, a couple of definitions:

Data quality generally refers to the fitness of a dataset for its purpose in a given context. Data quality encompasses many related aspects, including:

  • Accuracy,
  • Completeness,
  • Update status,
  • Relevance,
  • Consistency across data sources,
  • Reliability,
  • Appropriateness of presentation, and
  • Accessibility

Data lineage tracks data movement, including its origin and where it moves over time. Data lineage can be represented visually to depict how data flows from its source to its destination via various changes and hops.

The challenges facing many organizations relate to both data quality and data lineage issues, and a considerable amount of time and effort is spent both in tracing the source of data (i.e., its lineage) and correcting errors (i.e., ensuring its quality). Business intelligence and data visualization tools can do a magnificent job of teasing stories out of data, but these stories are only valuable when they are true. It is becoming increasingly vital to adopt best practices to ensure that the massive amounts of data feeding downstream processes and presentation engines are both reliable and properly understood.

Financial institutions must frequently deal with disparate systems either because of mergers and acquisitions or in order to support different product types—consumer lending, commercial banking and credit cards, for example. Disparate systems tend to result in data silos, and substantial time and effort must go into providing compliance reports and meeting the various regulatory requirements associated with analyzing data provenance (from source to destination). Understanding the workflow of data and access controls around security are also vital applications of data lineage and help ensure data quality.

In addition to the obvious need for financial reporting accuracy, maintaining data lineage and quality is vital to identifying redundant business rules and data and to ensuring that reliable, analyzable data is constantly available and accessible. It also helps to improve the data governance ecosystem, enabling data owners to focus on gleaning business insights from their data rather than focusing attention on rectifying data issues.

Common Data Lineage Issues

A surprising number of data issues emerge simply from uncertainty surrounding a dataset’s provenance. Many of the most common data issues stem from one or more of the following categories:

  • Human error: “Fat fingering” is just the tip of the iceberg. Misreading, misinterpretation, and other issues arising from human intervention are at the heart of virtually all data issues.
  • Incomplete Data: Whether it’s drawing conclusions based on incomplete data or relying on generalizations and judgment to fill in the gaps, many data issues are caused by missing data.
  • Data format: Systems expect to receive data in a certain format. Issues arise when the actual input data departs from these expectations.
  • Data consolidation: Migrating data from legacy systems or attempting to integrate newly acquired data (from a merger, for instance) frequently leads to post-consolidation issues.
  • Data processing: Calculation engines, data aggregators, or any other program designed to transform raw data into something more “usable” always run the risk of creating output data with quality issues.

Addressing Issues

Issues relating to data lineage and data quality are best addressed by employing some combination of the following approaches. The specific blend of approaches depends on the types of issues and data in question, but these principles are broadly applicable.

Employing a top-down discovery approach enables data analysts to understand the key business systems and business data models that drive an application. This approach is most effective when logical data models are linked to the physical data and systems.

Creating a rich metadata repository for all the data elements flowing from the source to destination can be an effective way of heading off potential data lineage issues. Because data lineage is dependent on the metadata information, creating a robust repository from the outset often helps preserve data lineage throughout the life cycle.

Imposing useful data quality rules is an important element in establishing a framework in which data is always validated against a set of well-conceived business rules. Ensuring not only that data passes comprehensive rule sets but also that remediation factors are in place for appropriately dealing with data that fails quality control checks is crucial for ensuring end-to-end data quality.
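A minimal sketch of this idea appears below, with illustrative field names and thresholds: each rule is a predicate over the data, records failing any rule are routed to a remediation queue, and only passing records flow downstream. Real rule sets are far richer and typically live in a dedicated data quality tool.

```python
# Rule-based data quality checks with a remediation path. The file, column
# names, and thresholds are hypothetical and shown only to illustrate the pattern.
import pandas as pd

RULES = {
    "ltv_in_range":    lambda df: df["ltv"].between(1, 200),
    "fico_in_range":   lambda df: df["fico"].between(300, 850),
    "state_populated": lambda df: df["state"].notna() & (df["state"].str.len() == 2),
}

def run_quality_checks(df: pd.DataFrame):
    """Return rows passing all rules plus a table of failures routed for remediation."""
    failures = []
    passing = pd.Series(True, index=df.index)
    for name, rule in RULES.items():
        ok = rule(df)
        failures.append(df.loc[~ok].assign(failed_rule=name))
        passing &= ok
    return df.loc[passing], pd.concat(failures, ignore_index=True)

clean, exceptions = run_quality_checks(pd.read_csv("loans.csv"))
exceptions.to_csv("dq_exceptions.csv", index=False)  # hand off to the data steward queue
```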

Data lineage and data quality both require continuous monitoring by a defined stewardship council to ensure that data owners are taking appropriate steps to understand and manage the idiosyncrasies of the datasets they oversee.

Our Data Lineage and Data Quality Background

RiskSpan’s diverse client base includes several large banks (which we define as banks with assets in excess of $50 billion). Large banks are characterized by a complicated web of departments and sub-organizations, each offering multiple products, sometimes to the same base of customers. Different sub-organizations frequently rely on disparate systems (sometimes due to mergers/acquisitions; sometimes just because they develop their businesses independently of one another). Either way, data silos inevitably result.

RiskSpan has worked closely with chief data officers of large banks to help establish data stewardship teams charged with taking ownership of the various “areas” of data within the bank. This involves the identification of data “curators” within each line of business to coordinate with the CDO’s office and be the advocate (and ultimately the responsible party) for the data they “own.” In best practice scenarios, a “data curator” group is formed to facilitate collaboration and effective communication for data work across the line of business.

We have found that a combination of top-down and bottom-up data discovery approaches is most effective when working across stakeholders to understand existing systems and enterprise data assets. RiskSpan has helped create logical data flow diagrams (based on the top-down approach) and assisted with linking physical data models to the logical data models. We have found Informatica and Collibra tools to be particularly useful in creating data lineage, tracking data owners, and tracing data flow from source to destination.

Complementing our work with financial clients to devise LOB-based data quality rules, we have built data quality dashboards using these same tools to enable data owners and curators to rectify and monitor data quality issues. These projects typically include elements of the following components.

  • Initial assessment review of the current data landscape.
  • Establishment of a logical data flow model using both top-down and bottom-up data discovery approaches.
  • Coordination with the CDO / CIO office to set up a data governance stewardship team and to identify data owners and curators from all parts of the organization.
  • Delineation of data policies, data rules and controls associated with different consumers of the data.
  • Development of a target state model for data lineage and data quality by outlining the process changes from a business perspective.
  • Development of future-state data architecture and associated technology tools for implementing data lineage and data quality.
  • Invitation to client stakeholders to reach a consensus related to future-state model and technology architecture.
  • Creation of a project team to execute data lineage and data quality projects by incorporating the appropriate resources and client stakeholders.
  • Development of a change management and migration strategy to enable users and stakeholders to use data lineage and data quality tools.

Ensuring data quality and lineage is ultimately the responsibility of the business lines that own and use the data. Because “data management” is not the principal aim of most businesses, it often behooves them to leverage the principles outlined in this post (sometimes along with outside assistance) to implement tactics that will help ensure that the stories their data tell are reliable.


Houston Strong: Communities Recover from Hurricanes. Do Mortgages?

The 2017 hurricane season devastated individual lives, communities, and entire regions. As one would expect, dramatic increases in mortgage delinquencies accompanied these events. But the subsequent recoveries are a testament both to the resilience of the people living in these areas and to relief mechanisms put into place by the mortgage holders.

Now, nearly a year later, we wanted to see what the credit-risk transfer data (as reported by Fannie Mae CAS and Freddie Mac STACR) could tell us about how these borrowers’ mortgage payments are coming along.

The timing of the hurricanes’ impact on mortgage payments can be approximated by identifying when Current-to-30 days past due (DPD) roll rates began to spike. Barring other major macroeconomic events, we can reasonably assume that most of this increase is directly due to hurricane-related complications for the borrowers.
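For readers who want to reproduce this kind of measure, the sketch below computes a Current-to-30 DPD roll rate with pandas under a hypothetical loan-level schema; the file, column names, and status codes are assumptions rather than the actual CAS/STACR field names.

```python
# Current-to-30 DPD roll rate sketch, assuming month-over-month loan status
# snapshots with hypothetical columns: for loans current in month t, what share
# are 30 days past due in month t+1?
import pandas as pd

perf = pd.read_csv("crt_performance.csv", parse_dates=["reporting_month"])
perf = perf.sort_values(["loan_id", "reporting_month"])

# Status in the following month for the same loan.
perf["next_status"] = perf.groupby("loan_id")["delinquency_status"].shift(-1)

current = perf[perf["delinquency_status"] == 0].dropna(subset=["next_status"])
roll_rate = (
    current.assign(rolled=current["next_status"] == 1)   # 1 = 30-59 DPD
           .groupby(["reporting_month", "msa"])["rolled"]
           .mean()
           .mul(100)
)
print(roll_rate.unstack("msa").tail())
```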

Houston Strong - Analysis by Edge

The effect of the hurricanes is clear—Puerto Rico, the U.S. Virgin Islands, and Houston all experienced delinquency spikes in September. Puerto Rico and the Virgin Islands then experienced a second wave of delinquencies in October due to Hurricanes Irma and Maria.

But what has been happening to these loans since entering delinquency? Have they been getting further delinquent and eventually defaulting, or are they curing? We focus our attention on loans in Houston (specifically the Houston-The Woodlands-Sugar Land Metropolitan Statistical Area) and Puerto Rico because of the large number of observable mortgages in those areas.

First, we look at Houston. Because the 30-DPD peak was in September, we track that bucket of loans. To help us understand the path 30-DPD might reasonably be expected to take, we compared the Houston delinquencies to 30-DPD loans in the 48 states other than Texas and Florida.

Houston Strong - Analysis by Edge

Houston Strong - Analysis by Edge

Of this group of loans in Houston that were 30 DPD in September, we see that while many go on to be 60+ DPD in October, over time this cohort is decreasing in size.

Recovery is slower than for the non-hurricane-affected U.S. loans, but persistent. The biggest difference is that a significant number of 30-day delinquencies in the rest of the country continue to hover at 30 DPD (rather than curing or progressing to 60 DPD), while the Houston cohort is more evenly split between the growing number of loans that cure and the shrinking number of loans progressing to 60+ DPD.

Puerto Rico (which experienced its 30 DPD peak in October) shows a similar trend:

Houston Strong - Analysis by Edge

Houston Strong - Analysis by Edge

To examine loans even more affected by the hurricanes, we can perform the same analysis on loans that reached 60 DPD status.

Houston Strong - Analysis by Edge

Here, Houston’s peak is in October while Puerto Rico’s is in November.

Houston vs. the non-hurricane-affected U.S.:

Houston Strong - Analysis by Edge

Houston Strong - Analysis by Edge

Puerto Rico vs. the non-hurricane-affected U.S.:

Houston Strong - Analysis by Edge

Houston Strong - Analysis by Edge

In both Houston and Puerto Rico, we see a relatively small 30-DPD cohort across all months and a growing Current cohort. This indicates that many borrowers are paying their way back to Current from 60+ DPD status. Compare this to the rest of the U.S., where more borrowers pay just enough to move to 30 DPD but not enough to become Current.

The lack of defaults in post-hurricane Houston and Puerto Rico can be explained by several relief mechanisms Fannie Mae and Freddie Mac have in place. Chiefly, disaster forbearance gives borrowers some breathing room with regard to payment. The difference is even more striking among loans that were 90 days delinquent, where eventual default is not uncommon in the non-hurricane-affected U.S. grouping:

Houston Strong - Analysis by Edge

Houston Strong - Analysis by Edge

And so, both 30-DPD and 60-DPD loans in Houston and Puerto Rico proceed to more serious levels of delinquency at a much lower rate than similarly delinquent loans in the rest of the U.S. To see if this is typical for areas affected by hurricanes of a similar scale, we looked at Fannie Mae loan-level performance data for the New Orleans MSA after Hurricane Katrina in August 2005.

As the following chart illustrates, current-to-30 DPD roll rates peaked in New Orleans in the month following the hurricane:

Houston Strong - Analysis by Edge

What happened to these loans?

Houston Strong - Analysis by Edge

Here we see a relatively speedy recovery, with large decreases in the number of 60+ DPD loans and a sharp increase in prepayments. Compare this to non-hurricane affected states over the same period, where the number of 60+ DPD loans held relatively constant, and the number of prepayments grew at a noticeably slower rate than in New Orleans.

Houston Strong - Analysis by Edge

The remarkable number of prepayments in New Orleans was largely due to flood insurance payouts, which effectively prepay delinquent loans. Government assistance lifted many others back to current. As of March, we do not see this behavior in Houston and Puerto Rico, where recovery is moving much more slowly. Flood insurance incidence rates are known to have been low in both areas, a likely suspect for this discrepancy.

While loans are clearly moving out of delinquency in these areas, it is at a much slower rate than the historical precedent of Hurricane Katrina. In the coming months we can expect securitized mortgages in Houston and Puerto Rico to continue to improve, but getting back to normal will likely take longer than what was observed in New Orleans following Katrina. Of course, the impending 2018 hurricane season may complicate this matter.

—————————————————————————————————————-

Note: The analysis in this blog post was developed using RiskSpan’s Edge Platform. The RiskSpan Edge Platform is a module-based data management, modeling, and predictive analytics software platform for loans and fixed-income securities. Click here to learn more.

 


MDM to the Rescue for Financial Institutions

Data becomes an asset only when it is efficiently harnessed and managed. Because firms tend to evolve into silos, their data often gets organized that way as well, resulting in multiple references and unnecessary duplication of data that dilute its value. Master Data Management (MDM) architecture helps to avoid these and other pitfalls by applying best practices to maximize data efficiency, controls, and insights.

MDM has particular appeal to banks and other financial institutions where non-integrated systems often make it difficult to maintain a comprehensive, 360-degree view of a customer who simultaneously has, for example, multiple deposit accounts, a mortgage, and a credit card. MDM provides a single, common data reference across systems that traditionally have not communicated well with each other. Customer-level reports can point to one central database instead of searching for data across multiple sources.

Financial institutions also derive considerable benefit from MDM when seeking to comply with regulatory reporting requirements and when generating reports for auditors and other examiners. Mobile banking and the growing number of new payment mechanisms make it increasingly important for financial institutions to have a central source of data intelligence. An MDM strategy enables financial institutions to harness their data and generate more meaningful insights from it by:

  • Eliminating data redundancy and providing one central repository for common data;
  • Cutting across data “silos” (and different versions of the same data) by providing a single source of truth;
  • Streamlining compliance reporting (through the use of a common data source);
  • Increasing operational and business efficiency;
  • Providing robust tools to secure and encrypt sensitive data;
  • Providing a comprehensive 360-degree view of customer data;
  • Fostering data quality and reducing the risks associated with stale or inaccurate data, and;
  • Reducing operating costs associated with data management.

Not surprisingly, there’s a lot to think about when contemplating and implementing a new MDM solution. In this post, we lay out some of the most important things for financial institutions to keep in mind.

 

MDM Choice and Implementation Priorities

MDM is only as good as the data it can see. To this end, the first step is to ensure that all of the institution’s data owners are on board. Obtaining management buy-in to the process and involving all relevant stakeholders is critical to developing a viable solution. This includes ensuring that everyone is “speaking the same language”—that everyone understands the benefits related to MDM in the same way—and  establishing shared goals across the different business units.

Once all the relevant parties are on board, it’s important to identify the scope of the business process within the organization that needs data refinement through MDM. Assess the current state of data quality (including any known data issues) within the process area. Then, identify all master data assets related to the process improvement. This generally involves identifying all necessary data integration for systems of record and the respective subscribing systems that would benefit from MDM’s consistent data. The selected MDM solution should be sufficiently flexible and versatile that it can govern and link any sharable enterprise data and connect to any business domain, including reference data, metadata and any hierarchies.

An MDM “stewardship team” can add value to the process by taking ownership of the various areas within the MDM implementation plan. MDM is not just about technology; it also involves business and analytical thinking around grouping data for efficient usage. Members of this team need to have the requisite business and technical acumen in order for MDM implementation to be successful. Ideally this team would be responsible for identifying data commonalities across groups and laying out a plan for consolidating them. Understanding the extent of these commonalities helps to optimize architecture-related decisions.

Architecture-related decisions are also a function of how the data is currently stored. Data stored in heterogeneous legacy systems calls for a different sort of MDM solution than does a modern data lake architecture housing big data. The solutions should be sufficiently flexible and scalable to support future growth. Many tools in the marketplace offer MDM solutions. Landing on the right tool requires a fair amount of due diligence and analysis. The following evaluation criteria are often helpful:

  • Enterprise Integration: Seamless integration into the existing enterprise set of tools and workflows is an important consideration for an MDM solution.  Solutions that require large-scale customization efforts tend to carry additional hidden costs.
  • Support for Multiple Devices: Because modern enterprise data must be consumable by a variety of devices (e.g., desktop, tablet and mobile), the selected MDM architecture must support each of these platforms and have multi-device access capability.
  • Cloud and Scalability: With most of today’s technology moving to the cloud, an MDM solution must be able to support a hybrid environment (cloud as well as on-premise). The architecture should be sufficiently scalable to accommodate seasonal and future growth.
  • Security and Compliance: With cyber-attacks becoming more prevalent and compliance and regulatory requirements continuing to proliferate, the MDM architecture must demonstrate capabilities in these areas.

 

Start Small; Build Gradually; Measure Success

MDM implementation can be segmented into small, logical projects based on business units or departments within an organization. Ideally, these projects should be prioritized so that quick wins (with obvious ROI) are achieved in problem areas first, with the implementation then scaling outward to other parts of the organization. This sort of stepwise approach may take longer overall but is ultimately more likely to be successful because it demonstrates success early and gives stakeholders confidence about MDM’s benefits.

The success of smaller implementations is easier to measure and see. A small-scale implementation also provides immediate feedback on the technology tool used for MDM—whether it’s fulfilling the needs as envisioned. The larger the implementation, the longer it takes to know whether the process is succeeding or failing and whether alternative tools should be pursued and adopted. The success of the implementation can be measured using the following criteria:

  • Savings on data storage—a result of eliminating data redundancy.
  • Increased ease of data access/search by downstream data consumers.
  • Enhanced data quality—a result of common data centralization.
  • More compact data lineage across the enterprise—a result of standardizing data in one place.

Practical Case Studies

RiskSpan has helped several large banks consolidate multiple data stores across different lines of business. Our MDM professionals work across heterogeneous data sets and teams to create a common reference data architecture that eliminates data duplication, thereby improving data efficiency and reducing redundant data. These professionals have accomplished this using a variety of technologies, including Informatica, Collibra and IBM Infosphere.

Any successful project begins with a survey of the current data landscape and an assessment of existing solutions. Working collaboratively to use this information to form the basis of an approach for implementing a best-practice MDM strategy is the most likely path to success.


Making Data Dictionaries Beautiful Using Graph Databases

Most analysts estimate that, for a given project, well over half of the time is spent on collecting, transforming, and cleaning data in preparation for analysis. This task is generally regarded as one of the least appetizing portions of the data analysis process, and yet it is the most crucial, as trustworthy analyses are borne out of clean, reliable data. Gathering and preparing data for analysis can be either enhanced or hindered by the data management practices in place at a firm. When data are readily available, clearly defined, and well documented, the result is faster and higher-quality insights. As the size and variability of data grow, however, so too does the challenge of storing and managing it. Like many firms, RiskSpan manages a multitude of large, complex datasets with varying degrees of similarity and connectedness. To streamline the analysis process and improve the quantity and quality of our insights, we have made our datasets, their attributes, and their relationships transparent and quickly accessible using graph database technology.

Graph databases differ significantly from traditional relational databases because data are not stored in tables. Instead, data are stored in either a node or a relationship (also called an edge), which is a connection between two nodes. The image below contains a grey node labeled as a dataset and a blue node labeled as a column. The line connecting these two nodes is a relationship which, in this instance, signifies that the dataset contains the column.

Graph 1

There are many advantages to this data structure, including decreased redundancy. Rather than storing the same “Column1” in multiple tables for each dataset that contains it (as you would in a relational database), you can simply create more relationships between the datasets, as demonstrated below:

Graph 2

With this flexible structure it is possible to create complex schemas that remain visually intuitive. In the image below, the same grey (dataset) -contains-> blue (column) format is displayed for a large collection of datasets and columns. Even at such a high level, the relationships between datasets and columns reveal patterns about the data. Here are three quick observations:

  1. In the top right corner there is a dataset with many unique columns.
  2. There are two datasets that share many columns between them and have limited connectivity to the other datasets.
  3. Many ubiquitous columns have been pulled to the center of the star pattern via the relationships to the multiple datasets on the outer rim.

Graph 3

In addition to containing labels, nodes can store data as key-value pairs. The image below displays the column “orig_upb” from dataset “FNMA_LLP”, which is one of Fannie Mae’s public datasets available on RiskSpan’s Edge Platform. Hovering over the column node displays some information about it, including the name of the field in the RiskSpan Edge platform, its column type, format, and data type.

Graph 4

Relationships can also store data in the same key-value format. This is an incredibly useful property which, for the database in this example, can be used to store information specific to a dataset and its relationship to a column. One of the ways in which RiskSpan has utilized this capability is to hold information pertinent to data normalization in the relationships. To make our datasets easier to analyze and combine, we have normalized the formats and values of columns found in multiple datasets. For example, the field “loan_channel” has been mapped from many unique inputs across datasets to a set of standardized values. In the images below, the relationships between two datasets and loan_channel are highlighted. The relationship key-value pairs contain a list of “mapped_values” identifying the initial values from the raw data that have been transformed. The dataset on the left contains the list: [“BROKER”, “CORRESPONDENT”, “RETAIL”]

Graph 5

While the dataset on the right contains: [“R”, “B”, “C”, “T”, “9”]

Graph 6

We can easily merge these lists with a node containing a map of all the recognized enumerations for the field. This central repository of truth allows us to deploy easy and robust changes to the ETL processes for all datasets. It also allows analysts to easily query information related to data availability, formats, and values.

Graph 7

In addition to queries specific to a column, this structure allows an analyst to answer questions about data availability across datasets with ease. Normally, comparing PDF data dictionaries, Excel worksheets, or database tables can be a painstaking process. Using the graph database, however, a simple query can return the intersection of three datasets, as shown below. The resulting graph is easy to analyze and use to define the steps required to obtain and manipulate the data.

Graph 8

In addition to these benefits for analysts and end users, utilizing graph database technology for data management comes with benefits from a data governance perspective. Within the realm of data stewardship, ownership of and accountability for datasets can be assigned and managed within a graph database like the one in this blog. The ability to store any attribute in a node and create any desired relationship makes it simple to add nodes representing data owners and curators connected to their respective datasets.

Graph 9

The ease and transparency with which any data-related information can be stored makes graph databases very attractive. Graph databases can also support a nearly infinite number of nodes and relationships while remaining fast. While every technology has a learning curve, the intuitive nature of graphs combined with their flexibility makes them an intriguing and viable option for data management.
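To make the structure concrete, the short sketch below reproduces the dataset -contains-> column pattern in Python, using networkx as a stand-in for a full graph database. The FNMA_LLP dataset, the loan_channel and orig_upb columns, and the two mapped_values lists come from the examples above; the other dataset and column names are invented for illustration.

```python
# Toy illustration of the dataset -contains-> column graph described above,
# using networkx in place of a graph database. Only FNMA_LLP, loan_channel,
# orig_upb, and the mapped_values lists come from the post; the rest is made up.
import networkx as nx

g = nx.Graph()

# Dataset and column nodes, distinguished by a label attribute.
g.add_node("FNMA_LLP", label="dataset")
g.add_node("FHLMC_LLP", label="dataset")      # hypothetical second dataset
g.add_node("CAS_DEALS", label="dataset")      # hypothetical third dataset
for col in ["loan_channel", "orig_upb", "fico", "current_upb"]:
    g.add_node(col, label="column")

# "Contains" relationships; relationship properties hold the raw values that
# were normalized for that dataset, as in the loan_channel example.
g.add_edge("FNMA_LLP", "loan_channel", mapped_values=["R", "B", "C", "T", "9"])
g.add_edge("FHLMC_LLP", "loan_channel", mapped_values=["BROKER", "CORRESPONDENT", "RETAIL"])
for ds in ["FNMA_LLP", "FHLMC_LLP", "CAS_DEALS"]:
    g.add_edge(ds, "orig_upb")
    g.add_edge(ds, "fico")
g.add_edge("CAS_DEALS", "current_upb")

# "Which columns are available in all three datasets?" -- the intersection query.
shared = set(g.neighbors("FNMA_LLP")) & set(g.neighbors("FHLMC_LLP")) & set(g.neighbors("CAS_DEALS"))
print(sorted(shared))                                        # ['fico', 'orig_upb']

# Inspect the normalization metadata stored on one relationship.
print(g.edges["FNMA_LLP", "loan_channel"]["mapped_values"])
```

In a production graph database the same intersection question is a one-line query, but even this toy version shows why the structure is convenient: data availability across datasets reduces to a neighborhood lookup.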


Applying Machine Learning to Conventional Model Validations

In addition to transforming the way in which financial institutions approach predictive modeling, machine learning techniques are beginning to find their way into how model validators assess conventional, non-machine-learning predictive models. While the array of standard statistical techniques available for validating predictive models remains impressive, the advent of machine learning technology has opened new avenues of possibility for expanding the rigor and depth of insight that can be gained in the course of model validation. In this blog post, we explore how machine learning, in some circumstances, can supplement a model validator’s efforts related to:

  • Outlier detection on model estimation data
  • Clustering of data to better understand model accuracy
  • Feature selection methods to determine the appropriateness of independent variables
  • The use of machine learning algorithms for benchmarking
  • Machine learning techniques for sensitivity analysis and stress testing

 

 

Outlier Detection

Conventional model validations include, when practical, an assessment of the dataset from which the model is derived. (This is not always practical—or even possible—when it comes to proprietary, third-party vendor models.) Regardless of a model’s design and purpose, virtually every validation concerns itself with at least a cursory review of where these data are coming from, whether their source is reliable, how they are aggregated, and how they figure into the analysis.

Conventional model validation techniques sometimes overlook (or fail to look deeply enough at) the question of whether the data population used to estimate the model is problematic. Outliers—and the effect they may be having on model estimation—can be difficult to detect using conventional means. Developing descriptive statistics and identifying data points that are one, two, or three standard deviations from the mean (i.e., extreme value analysis) is a straightforward enough exercise, but this does not necessarily tell a modeler (or a model validator) which data points should be excluded.

Machine learning modelers use a variety of proximity and projection methods for filtering outliers from their training data. One proximity method employs the K-means algorithm, which groups data into clusters centered around defined “centroids,” and then identifies data points that do not appear to belong to any particular cluster. Common projection methods include multi-dimensional scaling, which allows analysts to view multi-dimensional relationships among multiple data points in just two or three dimensions. Sophisticated model validators can apply these techniques to identify dataset problems that modelers may have overlooked.
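As a concrete example of the proximity approach, the sketch below clusters synthetic data with K-means and flags observations that sit unusually far from their assigned centroid; the data, cluster count, and distance threshold are all illustrative.

```python
# K-means-based proximity screen: cluster the estimation data, then flag points
# unusually far from their assigned centroid. Data and threshold are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (500, 4)),
               rng.normal(8, 1, (500, 4)),
               [[25, -20, 30, 40]]])          # one planted outlier

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Distance of each observation to its own centroid.
dist_to_centroid = np.linalg.norm(X - kmeans.cluster_centers_[kmeans.labels_], axis=1)

# Flag points beyond, say, the 99.5th percentile of within-cluster distances.
threshold = np.percentile(dist_to_centroid, 99.5)
outlier_idx = np.where(dist_to_centroid > threshold)[0]
print(f"{len(outlier_idx)} candidate outliers, e.g. row {outlier_idx[-1]}")
```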

 

Data Clustering

The tendency of data to cluster presents another opportunity for model validators. Machine learning techniques can be applied to determine the relative compactness of individual clusters and how distinct individual clusters are from one another. Clusters that do not appear well defined and blur into one another are evidence of a potentially problematic dataset—one that may result in non-existent patterns being identified in random data. Such clustering could be the basis of any number of model validation findings.
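One simple way to quantify this is the silhouette score, as in the synthetic example below: well-separated clusters score close to 1, while a homogeneous blob forced into two clusters scores near 0.

```python
# Quick check of cluster separation using the silhouette score on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
well_separated = np.vstack([rng.normal(0, 1, (300, 3)), rng.normal(10, 1, (300, 3))])
blurred = rng.normal(0, 1, (600, 3))   # one homogeneous blob, no real structure

for name, X in [("separated", well_separated), ("blurred", blurred)]:
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(name, round(silhouette_score(X, labels), 2))
```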

 

 

Feature (Variable) Selection

What conventional predictive modelers typically refer to as variables are commonly referred to by machine learning modelers as features. Features and variables serve essentially the same function, but the way in which they are selected can differ. Conventional modelers tend to select variables using a combination of expert judgment and statistical techniques. Machine learning modelers tend to take a more systematic approach that includes stepwise procedures, criterion-based procedures, lasso and ridge regression, and dimensionality reduction. These methods are designed to ensure that machine learning models achieve their objectives in the simplest way possible, using the fewest possible features and avoiding redundancy. Because model validators frequently encounter black-box applications, directly applying these techniques is not always possible. In some limited circumstances, however, model validators can add to the robustness of their validations by applying machine learning feature selection methods to determine whether conventionally selected model variables resemble those selected by these more advanced means (and if not, why not).
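The sketch below illustrates one such criterion-based screen on synthetic data, using cross-validated lasso regression to shrink uninformative coefficients toward zero; a validator could compare the surviving features with the conventionally selected variables.

```python
# Lasso-based feature screen on synthetic data: coefficients that the L1 penalty
# shrinks to (near) zero are candidates for exclusion. The data, the coefficient
# threshold, and the feature names are illustrative only.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 8))                                       # eight candidate variables
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=2000)  # only two matter

X_std = StandardScaler().fit_transform(X)         # lasso is scale-sensitive
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)

# x0 and x3 should dominate; the noise variables shrink to (near) zero.
selected = [f"x{i}" for i, coef in enumerate(lasso.coef_) if abs(coef) > 0.05]
print("retained features:", selected)
```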

 

Benchmarking Applications

Identifying and applying an appropriate benchmarking model can be challenging for model validators. Commercially available alternatives are often difficult to (cost effectively) obtain, and building challenger models from scratch can be time-consuming and problematic—particularly when all they do is replicate what the model in question is doing.

While not always feasible, building a machine learning model using the same data that was used to build a conventionally designed predictive model presents a “gold standard” benchmarking opportunity for assessing the conventionally developed model’s outputs. Where significant differences are noted, model validators can investigate the extent to which differences are driven by data/outlier omission, feature/variable selection, or other factors.

 

 Sensitivity Analysis and Stress Testing

The sheer quantity of high-dimensional data very large banks need to process in order to develop their stress testing models makes conventional statistical analysis both computationally expensive and problematic. (This is sometimes referred to as the “curse of dimensionality.”) Machine learning feature selection techniques, described above, are frequently useful in determining whether variables selected for stress testing models are justifiable.

Similarly, machine learning techniques can be employed to isolate, in a systematic way, those variables to which any predictive model is most and least sensitive. Model validators can use this information to quickly ascertain whether these sensitivities are appropriate. A validator, for example, may want to take a closer look at a credit model that is revealed to be more sensitive to, say, zip code, than it is to credit score, debt-to-income ratio, loan-to-value ratio, or any other individual variable or combination of variables. Machine learning techniques make it possible for a model validator to assess a model’s relative sensitivity to virtually any combination of features and make appropriate judgments.
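Permutation importance is one straightforward, model-agnostic way to carry out such a sensitivity check, as the synthetic example below illustrates: each input is shuffled in turn, and the resulting drop in out-of-sample accuracy measures the model's sensitivity to it. The variable names are placeholders.

```python
# Model-agnostic sensitivity check via permutation importance on synthetic data:
# shuffle one input at a time and measure how much predictive accuracy degrades.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(3000, 4))                          # placeholder inputs
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=3000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["fico", "dti", "ltv", "noise"], result.importances_mean):
    print(f"{name:>5}: {imp:.3f}")   # a near-zero value means the model barely uses it
```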

 

————————–

Model validators have many tools at their disposal for assessing the conceptual soundness, theory, and reliability of conventionally developed predictive models. Machine learning is not a substitute for these, but its techniques offer a variety of ways of supplementing traditional model validation approaches and can provide validators with additional tools for ensuring that models are adequately supported by the data that underlies them.

