
Industry Veteran Patricia Black Named RiskSpan Chief Client Officer

ARLINGTON, Va., Sept. 19, 2022 — RiskSpan, a leading technology company and the most comprehensive source for data management and analytics for residential mortgage and structured products, has appointed Patricia Black as its Chief Client Officer.  

Black takes over responsibility for managing client success across the full array of RiskSpan’s Edge Platform and services offerings. She brings more than twenty years of diversified experience as a senior financial services executive. Her expertise ranges from enterprise risk management, compliance, finance, program management, audit and controls to operations and technology, regulatory requirements, and corporate governance.

As a senior leader at Fannie Mae between 2005 and 2016, Black served in a number of key roles, including as Chief Audit Executive in the aftermath of the 2008 financial crisis, Head of Strategic Initiatives, and Head of Financial Controls and SOX while the firm underwent an extensive earnings restatement process.  

More recently, Black headed operations at SoFi Home Loans where she expanded the company’s partner relationships, technological capabilities, and risk management practices. Prior to SoFi, as Chief of Staff at Caliber Home Loans, she was an enterprise leader focusing on transformation, strategy, technology and operations. 

“Tricia’s reputation throughout the mortgage industry for building collaborative relationships in challenging environments and working across organizational boundaries to achieve targeted outcomes is second to none,” said Bernadette Kogler, CEO of RiskSpan. “Her astounding breadth of expertise will contribute to the success of our clients by helping ensure we are optimally structured to serve them.”  

“I feel it a privilege to be able to serve RiskSpan’s impressive and growing clientele in this new capacity,” said Black. “I look forward to helping these forward-thinking institutions rethink their mortgage and structured finance data and analytics and fully maximize their investment in RiskSpan’s award-winning platform and services.” 


About RiskSpan, Inc.  

RiskSpan offers cloud-native SaaS analytics for on-demand market risk, credit risk, pricing and trading. With our data science experts and technologists, we are the leader in data as a service and end-to-end solutions for loan-level data management and analytics. 

Our mission is to be the most trusted and comprehensive source of data and analytics for loans and structured finance investments. 

Rethink loan and structured finance data. Rethink your analytics. Learn more at www.riskspan.com. 


“Reject Inference” Methods in Credit Modeling: What are the Challenges?

Reject inference is a popular concept that has been used in credit modeling for decades. Yet, we observe in our work validating credit models that the concept is still dynamically evolving. The appeal of reject inference, whose aim is to develop a credit scoring model utilizing all available data, including that of rejected applicants, is easy enough to grasp. But the technique also introduces a number of fairly vexing challenges.

The technique seeks to rectify a fundamental shortcoming in traditional credit modeling: models predicting the probability that a loan applicant will repay can be trained on historical loan application data with a binary variable representing whether a loan was repaid or charged off. This information, however, is only available for accepted applications. And many of these applications are not particularly recent. This limitation results in a training dataset that may not be representative of the broader loan application universe.

Credit modelers have devised several techniques for getting around this data representativeness problem and increasing the number of observations by inferring the repayment status of rejected loan applications. These techniques, while well intentioned, are often treated empirically and lack a deeper theoretical basis. They often result in “hidden” modeling assumptions, the reasonableness of which is not fully investigated. Additionally, no theoretical properties of the coefficient estimates or predictions are guaranteed.

This article summarizes the main challenges of reject inference that we have encountered in our model validation practice.


Selecting the Right Reject Inference Method

Many approaches exist for reject inference, none of which is clearly and universally superior to all the others. Empirical studies have been conducted to compare methods and pick a winner, but the conclusions of these studies are often contradictory. Some authors argue that reject inference cannot improve scorecard models[1] and flatly recommend against its use. Others posit that certain techniques can outperform others[2] based on empirical experiments. The results of these experiments, however, tend to be data dependent. Some of the most popular approaches include the following:

  • Ignoring rejected applications: The simplest approach is to develop a credit scoring model based only on accepted applications. The underlying assumption is that rejected applications can be ignored and that the “missingness” of this data from the training dataset can be classified as missing at random. Supporters of this method point to the simplicity of the implementation, clear assumptions, and good empirical results. Others argue that the rejected applications cannot be dismissed simply as random missing data and thus should not be ignored.
  • Hard cut-off method: In this method, a model is first trained using only accepted application data. This trained model is then used to predict the probabilities of charge-off for the rejected applications. A cut-off value is then chosen. Hypothetical loans from rejected applications with probabilities higher than this cut-off value are considered charged off. Hypothetical loans from the remaining applications are assumed to be repaid. The specified model is then re-trained using a dataset including both accepted and rejected applications.
  • Fuzzy augmentation: Similar to the hard cut-off method, fuzzy augmentation begins by training the model on accepted applications only. The resulting model with estimated coefficients is then used to predict charge-off probabilities for rejected applications. Data from rejected applications is then duplicated and a repaid or charged-off status is assigned to each. The specified model is then retrained on the augmented dataset—including accepted applications and the duplicated rejects. Each rejected application is weighted by either a) the predicted probability of charge-off if its assigned status is “charged-off,” or b) the predicted probability of it being repaid if its assigned status is “repaid.”
  • Parceling: The parceling method resembles the hard cut-off method. However, rather than classifying all rejects above a certain threshold as charged-off, this method classifies the repayment status in proportion to the expected “bad” rate (charge-off frequency) at that score. The predicted charge-off probabilities are partitioned into k intervals. Then, for each interval, an assumption is made about the bad rate, and loan applications in each interval are assigned a repayment status randomly according to the bad rate. Bad rates are assumed to be higher in the reject dataset than among the accepted loans. This method treats the missingness as missing not at random (MNAR), which requires the modeler to supply additional information about the distribution of charge-offs among rejects.
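
To make the mechanics concrete, the hard cut-off method described above might be sketched as follows. This is only an illustration: the data is synthetic, the 0.5 cut-off is an arbitrary modeling assumption, and scikit-learn’s logistic regression stands in for whatever scorecard model an institution actually uses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features and outcomes for accepted applications
# (1 = charged off, 0 = repaid); rejects have no observed outcome.
X_accepted = rng.normal(size=(500, 3))
y_accepted = (rng.random(500) < 0.2).astype(int)
X_rejected = rng.normal(loc=0.5, size=(200, 3))

# Step 1: train the scorecard on accepted applications only.
model = LogisticRegression().fit(X_accepted, y_accepted)

# Step 2: score the rejects and apply a chosen cut-off.
p_reject = model.predict_proba(X_rejected)[:, 1]
CUTOFF = 0.5  # a modeling assumption, not prescribed by theory
y_inferred = (p_reject > CUTOFF).astype(int)

# Step 3: retrain on the augmented dataset (accepts + inferred rejects).
X_all = np.vstack([X_accepted, X_rejected])
y_all = np.concatenate([y_accepted, y_inferred])
final_model = LogisticRegression().fit(X_all, y_all)
```

Note how the “hidden” assumptions surface immediately: the choice of cut-off and the assumption that the accepted-only model scores rejects accurately both shape the final training labels.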

Proportion of Accepted Applications to Rejects

An institution with a relatively high percentage of rejected applications will necessarily end up with an augmented training dataset whose quality is heavily dependent on the quality of the selected reject inference method and its implementation. One might argue it is best to limit the proportion of rejected applications to acceptances. The level at which such a cap is established should reflect the “confidence” in the method used. Estimating such a confidence level, however, is a highly subjective endeavor.

The Proportion of Bad Rates for Accepts and Rejects

It is reasonable to assume that the “bad rate,” i.e., proportion of charged-off loans to repaid loans, will be higher among rejected applications. Some modelers set a threshold based on their a priori belief that the bad rate among rejects is at least p-times the bad rate among acceptances. If the selected reject inference method produces a dataset with a bad rate that is perceived to be artificially low, actions are taken to increase the bad rate above some threshold. Identifying where to establish this threshold is notoriously difficult to justify.

Variable Selection

As outlined above, most approaches begin by estimating a preliminary model based on accepted applications only. This model is then used to infer how rejected loans would have performed. The preliminary model is then retrained on a dataset consisting both of actual data from accepted applications and of the inferred data from rejects. This means that the underlying variables themselves are selected based only on the actual loan performance data from accepted applications. The statistical significance of the selected variables might change, however, when moving to the complete dataset. Variable selection is sometimes redone using the complete data. This, however, can lead to overfitting.

Measuring Model Performance

From a model validator’s perspective, an ideal solution would involve creating a control group in which applications would not be scored and filtered and every application would be accepted. Then the discriminating power of a credit model could be assessed by comparing the charge-off rate of the control group with the charge-off rate of the loans accepted by the model. This approach of extending credit indiscriminately is impractical, however, as it would require the lender to engage in some degree of irresponsible lending.

Another approach is to create a test set. The dilemma here is whether to include only accepted applications. A test set that includes only accepted applications will not necessarily reflect the population for which the model will be used. Including rejected applications, however, obviously necessitates the use of reject inference. For all the reasons laid out above, this approach risks overstating the model’s performance due to the fact that a similar model (trained only on the accepted cases) was used for reject inference.

A third approach that avoids both of these problems involves using information criteria such as AIC and BIC. This, however, is useful only when comparing different models (for model or variable selection). The values of information criteria cannot be interpreted as an absolute measure of performance.
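
The information-criteria comparison can be sketched in a few lines. The data below is synthetic and the model pair is purely illustrative; the point is only that AIC/BIC rank candidate models, while their absolute values carry no meaning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 5))
y = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)

def aic_bic(model, X, y):
    """AIC = 2k - 2*lnL and BIC = k*ln(n) - 2*lnL for a fitted classifier."""
    k = X.shape[1] + 1  # coefficients plus intercept
    log_lik = -log_loss(y, model.predict_proba(X)[:, 1], normalize=False)
    return 2 * k - 2 * log_lik, k * np.log(len(y)) - 2 * log_lik

m_small = LogisticRegression().fit(X[:, :1], y)  # one predictor
m_large = LogisticRegression().fit(X, y)         # all five predictors

aic_small, bic_small = aic_bic(m_small, X[:, :1], y)
aic_large, bic_large = aic_bic(m_large, X, y)
# Lower is better, but only the *difference* between models is meaningful;
# the absolute values say nothing about performance on their own.
```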

A final option is to consider utilizing several models in production (the main model and challenger models). Under this scenario, each application would be evaluated by a model selected at random. The models can then be compared retroactively by calculating their bad rates on accepted application after the financed loans mature. Provided that the accept rates are similar, the model with the lowest bad rate is the best.
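
A rough sketch of such a champion/challenger setup follows. The two scoring functions, the simulated applicants, and the 0.30 approval cut-off are all hypothetical stand-ins for production models and real application flow.

```python
import random

rng = random.Random(42)

# Two hypothetical scoring functions, each returning a probability of default.
models = {
    "champion":   lambda app: 0.10 + 0.5 * app["dti"],
    "challenger": lambda app: 0.05 + 0.6 * app["dti"],
}

# Route each incoming application to a randomly selected model and keep
# a ledger of outcomes for the loans each model approved.
ledger = {name: [] for name in models}
for _ in range(1000):
    app = {"dti": rng.random() * 0.6}          # simulated applicant
    name = rng.choice(list(models))
    pd_est = models[name](app)
    if pd_est < 0.30:                          # hypothetical approval cut-off
        charged_off = rng.random() < pd_est    # simulated loan outcome
        ledger[name].append(charged_off)

# Once the books mature, compare bad rates; provided the accept rates are
# similar, the model with the lower bad rate wins.
bad_rates = {name: sum(v) / len(v) for name, v in ledger.items()}
```

The random routing is what makes the retrospective comparison fair: each model sees a statistically equivalent slice of the application population.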

Conclusion

Reject inference remains an evolving area of credit modeling. Its ability to improve model performance is still the subject of intense debate. Current results suggest that while reject inference can improve model performance, its application can also lead to overfitting, thus worsening the model’s ability to generalize. The lack of a strong theoretical basis for reject inference methods means that applications of reject inference must rely on empirical results. Thus, if reject inference is used, key model stakeholders need to possess a deep understanding of the modeled population, have strong domain knowledge, emphasize conducting experiments to justify the applied modeling techniques, and, above all, adopt and follow a solid ongoing monitoring plan.

Doing this will result in a modeling methodology that is most likely to produce reliable outputs for the institution while also satisfying model risk management (MRM) and validator requirements.


[1] https://www.sciencedirect.com/science/article/abs/pii/S0378426603002036

[2] https://economix.fr/pdf/dt/2016/WP_EcoX_2016-10.pdf


Rising Rates; Rising Temperatures: What Higher Interest Rates Portend for Mortgage Climate Risk — An interview with Janet Jozwik  

Janet Jozwik leads RiskSpan’s sustainability analytics (climate risk and ESG) team. She is also an expert in mortgage credit risk and a recognized industry thought leader on incorporating climate risk into credit modeling. We sat down with Janet to get her views on whether the current macroeconomic environment should impact how mortgage investors prioritize their climate risk mitigation strategies.


You contend that higher interest rates are exposing mortgage lenders and investors to increased climate risk. Why is that?

JJ: My concern is primarily around the impact of higher rates on credit risk overall, of which climate risk is merely a subset – a largely overlooked and underappreciated subset, to be sure, and one with potentially devastating consequences, but ultimately one of many. The simple reason is that, because interest rates are up, loans are going to remain on your books longer. The MBA’s recent announcement of refinance applications (and mortgage originations overall) hitting their lowest levels since 2000 is stark evidence of this.

And because these loans are going to be lasting longer, borrowers will have more opportunities to get into trouble (be it a loss of income or a natural disaster) and everybody should be taking credit risk more seriously. One of the biggest challenges posed by a high-rate environment is that borrowers don’t have a lot of the “outs” that are available to them when they encounter stress during more favorable macroeconomic environments. They can no longer simply refi into a lower rate. Modification options become more complicated. They might have no option other than to sell the home – and even that isn’t going to be as easy as it was, say, a year ago. So, we’ve entered this phase where credit risk analytics, both at origination and life of loan, really need to be taken seriously. And credit risk includes climate risk.

So longer durations mean more exposure to credit risk – more time for borrowers to run into trouble and experience credit events. What does climate have to do with it? Doesn’t homeowners’ insurance mitigate most of this risk anyway?

JJ: Each additional month or year that a mortgage loan remains outstanding is another month or year that the underlying property is exposed to some form of natural disaster risk (hurricane, flood, wildfire, earthquake, etc.). When you look at a portfolio in aggregate – one whose weighted average life has suddenly ballooned from four years to, say, eight years – it is going to experience more events, more things happening to it. Credit risk is the risk of a borrower failing to make contractual payments. And having a home get blown down or flooded by a hurricane tends to have a dampening effect on timely payment of principal and interest.

As for insurance, yes, insurance mitigates portfolio exposure to catastrophic loss to some degree. But remember that not everyone has flood insurance, and many loans don’t require it. Hurricane-specific policies often come with very high deductibles and don’t always cover all the damage. Many properties lack wildfire insurance or the coverage may not be adequate. Insurance is important and valuable but should not be viewed as a panacea or a substitute for good credit-risk management or taking climate into account when making credit decisions.

But the disaster is going to hit when the disaster is going to hit, isn’t it? How should I be thinking about this if I am a lender who recaptures a considerable portion of my refis? Haven’t I just effectively replaced three shorter-lived assets with a single longer-lived one? Either way, my portfolio’s going to take a hit, right?

JJ: That is true as far as it goes. And if you are in the steady state you are envisioning, one where you’re just churning through your portfolio, prepaying existing loans with refis that look exactly like the loans they’re replacing, then, yes, the risk will be similar, irrespective of expected duration.

But do not forget that each time a loan turns over, a lender is afforded an opportunity to reassess pricing (or even reassess the whole credit box). Every refi is an opportunity to take climate and other credit risks into account and price them in. But in a high-rate environment, you’re essentially stuck with your credit decisions for the long haul.

Do home prices play any role in this?

JJ: Near-zero interest rates fueled a run-up in home prices like nothing we’ve ever seen before. This arguably made disciplined credit-risk management less important because, worst case, all the new equity in a property served as a buffer against loss.

But at some level, we all had to know that these home prices were not universally sustainable. And now that interest rates are back up, existing home prices are suddenly starting to look a little iffy. Suddenly, with cash-out refis off the table and virtually no one in the money for rate and term refis, weighted average lives have nowhere to go but up. This is great, of course, if your only exposure is prepayment risk. But credit risk is a different story.

And so, extremely low interest rates over an extended period played a significant role in unsustainably high home values. But the pandemic had a lot to do with it, as well. It’s well documented that the mass influx of home buyers into cities like Boise from larger, traditionally more expensive markets drove prices in those smaller cities to astronomical levels. Some of these markets (like Boise) have not only reached an equilibrium point but are starting to see property values decline. Lenders with excessive exposure to these traditionally smaller markets that experienced the sharpest home price increases during the pandemic will need to take a hard look at their credit models’ HPI assumptions (in addition to those properties’ climate risk exposure).

What actions should lenders and investors be considering today?

JJ: If you are looking for a silver lining in the fact that origination volumes have fallen off a cliff, it has afforded the market an opportunity to catch its breath and reassess where it stands risk-wise. Resources that had been fully deployed in an effort simply to keep up with the volume can now be reallocated to taking a hard look at where the portfolio stands in terms of credit risk generally and climate risk in particular.

This includes assessing where the risks and concentrations are in mortgage portfolios and, first, making sure not to further exacerbate existing concentration risks by continuing to acquire new assets in overly exposed geographies. Investors may even be wise to consider selling certain assets if they feel they have too much risk in problematic areas.

Above all, this is a time when lenders need to be taking a hard look at the fundamentals underpinning their underwriting standards. We are coming up on 15 years since the start of the “Great Recession” – the last time mortgage underwriting was really “tight.” For the past decade, the industry has had nothing but calm waters – rising home values and historically low interest rates. It’s been like tech stocks in the ‘90s. Lenders couldn’t help but make money.

I am concerned that this has allowed complacency to take hold. We’re in a new world now. One with shaky home prices and more realistic interest rates. The temptation will be to loosen underwriting standards in order to wring whatever volume might be available out of the economy. But in reality, lenders need to be doing precisely the opposite. Underwriting standards are going to have to tighten a bit in order to effectively manage the increased credit (and climate) risks inherent to longer-duration lending.

It’s okay for lenders and investors to be taking these new risks on. They just need to be doing it with their eyes wide open and they need to be pricing for it.


How Do You Rate on Fannie Mae’s New Social Index?

Quick take-aways

  • HMDA data contains nearly every factor needed to replicate Fannie Mae’s Single Family Social Index. We use this data to explore how the methodology would look if the Fannie Mae Social Index were applied to other market participants.
  • The Agencies and Ginnie Mae are not the only game in town when it comes to socially responsible lending. Non-agency loans would also perform reasonably well under Fannie Mae’s proposed Social Index.
  • Not surprisingly, Ginnie Mae outperforms all other “purchaser types” under the framework, buoyed by its focus on low-income borrowers and underserved communities. The gap between Ginnie and the rest of the market can be expected to expand in low-refi environments.
  • With a few refinements to account for socially responsible lending beyond low-income borrowers, Fannie Mae’s framework can work as a universally applicable social measure across the industry.

Fannie Mae’s New “Single Family Social Index”

Last week, Fannie Mae released a proposed methodology for its “Single Family Social Index.” The index is designed to provide “socially conscious investors” a means of “allocat[ing] capital in support of affordable housing and to provide access to credit for underserved individuals.”

The underlying methodology is simple enough. Each pool of mortgages receives a score based on how many of its loans meet one or more specified “social criteria” across three dimensions: borrower income, borrower characteristics and property location/type. Fannie Mae succinctly illustrates the defined criteria and framework in the following overview deck slide.


Figure 1: Source: Designing for Impact — A Proposed Methodology for Single-Family Social Disclosure


Each of the criteria is binary (yes/no), which facilitates the scoring. Individual loans are simply rated based on the number of boxes they check. Pools are measured in two ways: 1) a “Social Criteria Share,” which identifies the percentage of loans that meet any of the criteria, and 2) a “Social Density Score,” which assigns a “Social Score” of 0 through 3 to each individual loan based on how many of the three dimensions (borrower income, borrower characteristics, and property characteristics) it covers and then averages that score across all the loans in the pool.
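
For concreteness, the two pool-level measures can be computed in a few lines. The loan flags below are hypothetical; each loan simply records whether it meets at least one criterion within each of the three dimensions.

```python
# Hypothetical pool: one entry per loan, one flag per dimension.
pool = [
    {"income": True,  "borrower": False, "property": False},
    {"income": True,  "borrower": True,  "property": True},
    {"income": False, "borrower": False, "property": False},
    {"income": False, "borrower": True,  "property": False},
]

# Social Criteria Share: percentage of loans meeting ANY criterion.
scs = sum(any(loan.values()) for loan in pool) / len(pool)

# Social Density Score: each loan scores 0-3 (one point per dimension
# covered), averaged across all loans in the pool.
sds = sum(sum(loan.values()) for loan in pool) / len(pool)

print(scs)  # 0.75 -- three of the four loans tick at least one box
print(sds)  # 1.25 -- average number of dimensions covered per loan
```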

If other issuers adopt this methodology, what would it look like?

The figure below is one of many charts and tables provided by Fannie Mae that illustrate how the Index works. This figure shows the share of acquisitions meeting one or more of the Social Index criteria (i.e., the overall “Social Criteria Share”). We have drawn a box approximately around the 2020 vintage,[1] which appears to have a Social Criteria Share of about 52 percent by loan count. We will refer back to this value later as we attempt to triangulate a Social Criteria Share for other market participants.


Figure 2: Source: Designing for Impact — A Proposed Methodology for Single-Family Social Disclosure


We can get a sense of other issuers’ Social Criteria Share by looking at HMDA data. This dataset provides everything we need to re-create the Index at a high level, with the exception of a flag for first-time homebuyers. The process involves some data manipulation, as several Index criteria require us to connect to two census-tract-level data sources published by FHFA.

HMDA allows us to break down the loan population by purchaser type, which gives us an idea of each loan’s ultimate destination—Fannie, Freddie, Ginnie, etc. The purchaser type does not capture this for every loan, however, because originators are only obligated to report loans that are closed and sold during the same calendar year.

The two tables below reflect two different approaches to approximating the population of Fannie, Freddie, and Ginnie loans. The left-hand table compares the 2020 origination loan count based on HMDA’s Purchaser Type field with loan counts based on MBS disclosure data pulled from RiskSpan’s Edge Platform.

The right-hand table enhances this definition by first re-categorizing as Ginnie Mae all FHA/VA/USDA loans with non-agency purchaser types. It also looks at the Automated Underwriting System field and re-maps all owner-occupied loans previously classified as “Other or NA” to Fannie (DU AUS) or Freddie (LP/LPA AUS).
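
The re-mapping logic behind the adjusted purchaser type approach might be sketched as follows. The field names and category labels here are simplified stand-ins for the actual HMDA codes, not the real schema.

```python
def adjusted_purchaser(loan):
    """Re-map a loan's purchaser type per the adjusted approach."""
    if loan["purchaser"] in {"Fannie", "Freddie", "Ginnie"}:
        return loan["purchaser"]
    # FHA/VA/USDA loans with non-agency purchaser types -> Ginnie Mae.
    if loan["loan_type"] in {"FHA", "VA", "USDA"}:
        return "Ginnie"
    # Owner-occupied "Other or NA" loans are re-mapped by AUS.
    if loan["occupancy"] == "owner" and loan["purchaser"] == "Other or NA":
        if loan["aus"] == "DU":
            return "Fannie"
        if loan["aus"] in {"LP", "LPA"}:
            return "Freddie"
    return loan["purchaser"]

# An FHA loan sold outside the agencies is treated as Ginnie Mae:
loan = {"purchaser": "Other or NA", "loan_type": "FHA",
        "occupancy": "investor", "aus": None}
print(adjusted_purchaser(loan))  # Ginnie
```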


[Table: Social Index loan counts by purchaser type (HMDA vs. MBS disclosure data)]



The adjusted purchaser type approach used in the right-hand table reallocates a considerable number of “Other or NA” loans from the left-hand table. The approach clearly overshoots the Fannie Mae population, as some loans underwritten using Fannie’s automated underwriting system likely wind up at Freddie and other segments of the market. This limitation notwithstanding, we believe this approximation lends a more accurate view of the market landscape than does the unadjusted purchaser type approach. We consequently rely primarily on the adjusted approach in this analysis.

Given the shortcomings in aligning the exact population, the idea here is not to get an exact calculation of the Social Index metrics via HMDA, but to use HMDA to give us a rough indication of how the landscape would look if other issuers adopted Fannie’s methodology. We expect this to provide a rough rank-order understanding of where the richest pools of ‘Social’ loans (according to Fannie’s methodology) ultimately wind up. Because the ultimate success of a social scoring methodology can truly be measured only to the extent it is adopted by other issuers, having a universally useful framework is crucial.

The table below estimates the Social Criteria Share by adjusted purchaser using seven of Fannie Mae’s eight social index criteria.[2] Not surprisingly, Ginnie, Fannie, and Freddie boast the highest overall shares. It is encouraging to note, however, that other purchaser types also originate significant percentages of socially responsible loans. This suggests that Fannie’s methodology could indeed be applied more universally. The table breaks out each factor separately (the dynamics of each could warrant an entire blog post of its own to dissect), so it is worth a closer look.[3]


[Table: Social Criteria Share by adjusted purchaser type]


Ginnie Mae’s strong performance on the Index comes as no surprise. Ginnie pools, after all, consist primarily of FHA loans, which skew toward the lower end of the income spectrum, first-time borrowers, and traditionally underserved communities. Indeed, more than 56 percent of Ginnie Mae loans tick at least one box on the Index. And this does not include first-time homebuyers, which would likely push that percentage even higher.

Income’s Outsized Impact

Household income contributes directly or indirectly to most components of Fannie’s Index. Beyond the “Low-income” criterion (borrowers below 80 percent of adjusted median income), nearly every other factor requires income levels below 120 percent of AMI. Measuring income is tricky, especially outside of the Agency/Ginnie space. The non-Agency segment serves many self-employed borrowers, borrowers who qualify based on asset (rather than income) levels, and foreign national borrowers. Nailing down precise income has historically proven challenging with these groups.

Given these dynamics, one could reasonably posit that the 18 percent of PLS classified as “low-income” is actually inflated by self-employed or wealthier borrowers whose mortgage applications do not necessarily reflect all of their income. Further refinements may be needed to fairly apply the Index framework to this and other market segments that pursue social goals beyond expanding credit opportunities for low-income borrowers. These refinements could include more detailed definitions of how to calculate income (or alternatives to the income metric when it is unavailable) as well as certain outright exclusions from the framework (foreign national borrowers, for example, although these may already be excluded by the screen for second homes).

Positive effects of a purchase market

The Social Criteria Share is positively correlated with purchase loans as a percentage of total origination volume (even before accounting for the FTHB factor). This relationship is apparent in Fannie Mae’s time series chart near the top of this post. Shares clearly drop during refi waves.

Our analysis focuses on 2020 only. We made this choice because of HMDA reporting lags and the inherent facility of dealing with a single year of data. The table below breaks down the HMDA analysis (referenced earlier) by loan purpose to give us a sense of what our current low-refi environment could look like. (Rate/term refis are grouped together with cash-out refis.) As the table indicates, Ginnie Mae’s SCS for refi loans is about the same as it is for GSE refi loans; it is really on purchase loans where Ginnie shines. This implies that Ginnie’s SCS will improve even further in a purchase-dominated environment.


[Table: Social Criteria Share by loan purpose]


Accounting for First-time Homebuyers

As described above, our methodology for estimating the Social Criteria Share omits loans to first-time homebuyers (because the HMDA data does not capture it). This likely accounts for the roughly 6 percentage point difference between our estimate of Fannie’s overall Social Criteria Share for 2020 (approximately 46 percent) and Fannie Mae’s own calculation (approximately 52 percent).

To back into the impact of the FTHB factor, we can pull in data about the share of FTHBs from RiskSpan’s Edge Platform. The purchase vs. refi table above tells us the SCS without the FTHB factor for purchase loans. Using MBS data sources, we can obtain the share of 2020 originations that were FTHBs. If we assume that FTHB loans look the same as purchase loans overall in terms of how many other Social Index boxes they check, then we can back into the overall SCS incorporating all factors in Fannie’s methodology.

Applying this approach to Ginnie Mae: because 29 percent of Ginnie’s purchase loans (one minus 71 percent) do not tick any of the Index’s boxes, we assume that 29 percent of FTHB loans (which account for 33 percent of Ginnie’s overall population) likewise tick no other Index boxes. Taking 29 percent of this 33 percent results in an additional 9.6 percentage points that should be tacked on to Ginnie Mae’s pre-FTHB share, bringing it up to 66 percent.
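
The arithmetic above, spelled out. The inputs are the approximate 2020 Ginnie Mae figures quoted in the text.

```python
purchase_scs = 0.71  # share of Ginnie purchase loans ticking >= 1 box (pre-FTHB)
fthb_share   = 0.33  # FTHB share of Ginnie's overall 2020 population
pre_fthb_scs = 0.56  # Ginnie's overall Social Criteria Share before the FTHB flag

# FTHB loans that ticked no other box become newly "social" once the
# first-time-homebuyer criterion is counted.
newly_social = (1 - purchase_scs) * fthb_share  # 0.29 * 0.33, about 9.6 points
overall_scs  = pre_fthb_scs + newly_social      # about 0.66

print(round(newly_social, 3))  # 0.096
print(round(overall_scs, 2))   # 0.66
```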


[Table: Social Criteria Share including the FTHB adjustment]


Validating this estimation approach is the fact that it increases Fannie Mae’s share from 46 percent (pre-FTHB) to 52 percent, which is consistent with the historical graph supplied by Fannie Mae (see Figure 2, above). Our FTHB approach implies that 92 percent of Ginnie Mae purchase loans meet one or more of the Index criteria. One could reasonably contend that Ginnie Mae FTHB loans might be more likely than Ginnie purchase loans overall to satisfy other social criteria (i.e., that 92 percent is a bit rich), in which case the 66 percent share for Ginnie Mae in 2020 might be overstated. Even if we mute this FTHB impact on Ginnie, however, layering FTHB loans on top of a rising purchase-loan environment would likely put today’s Ginnie Mae SCS in the low 80s.




[1] The chart is organized by acquisition month, while our analysis of HMDA looks at 2020 originations, so we have tried to push the box slightly to the right to reflect the 1–3-month lag between origination and acquisition. Additionally, we believe the chart and numbers throughout Fannie’s document reflect only fixed-rate 30-year loans, whereas our analysis includes all loans. We did investigate what our numbers would look like if filtered to fixed-rate 30-year loans; doing so would only increase the SCS slightly across the board.

[2] As noted above, we are unable to discern first-time homebuyer information from the HMDA data.

[3] We can compare the Fannie numbers for each factor to published rates in their documentation representing the time period from 2017 forward. The only metric where we stand out as being meaningfully off is the percentage of loans in minority census tracts. We took this flag from FHFA’s Low-Income Area File for 2020, which defines a minority census tract as having a ‘…minority population of at least 30 percent and a median income of less than 100 percent of the AMI.’ It is not 100 percent clear that this is what Fannie Mae is using in its definition.

