Reject inference is a popular concept that has been used in credit modeling for decades. Yet, we observe in our work validating credit models that the concept is still dynamically evolving. The appeal of reject inference, whose aim is to develop a credit scoring model utilizing all available data, including that of rejected applicants, is easy enough to grasp. But the technique also introduces a number of fairly vexing challenges.
The technique seeks to rectify a fundamental shortcoming in traditional credit modeling: Models predicting the probability that a loan applicant will repay the loan can be trained on historical loan application data with a binary variable representing whether a loan was repaid or charged off. This information, however, is only available for accepted applications. And because repayment outcomes take time to materialize, many of these applications are not particularly recent. This limitation results in a training dataset that may not be representative of the broader loan application universe.
Credit modelers have devised several techniques for getting around this data representativeness problem and increasing the number of observations by inferring the repayment status of rejected loan applications. These techniques, while well intentioned, are often treated empirically and lack a deeper theoretical basis. They often result in “hidden” modeling assumptions, the reasonableness of which is not fully investigated. Additionally, no theoretical properties of the coefficient estimates or predictions are guaranteed.
This article summarizes the main challenges of reject inference that we have encountered in our model validation practice.
Selecting the Right Reject Inference Method
Many approaches exist for reject inference, none of which is clearly and universally superior to all the others. Empirical studies have been conducted to compare methods and pick a winner, but the conclusions of these studies are often contradictory. Some authors argue that reject inference cannot improve scorecard models[1] and flatly recommend against its use. Others posit that certain techniques can outperform others[2] based on empirical experiments. The results of these experiments, however, tend to be data dependent. Some of the most popular approaches include the following:
- Ignoring rejected applications: The simplest approach is to develop a credit scoring model based only on accepted applications. The underlying assumption is that rejected applications can be ignored and that the “missingness” of this data from the training dataset can be classified as missing at random. Supporters of this method point to the simplicity of the implementation, clear assumptions, and good empirical results. Others argue that the rejected applications cannot be dismissed simply as random missing data and thus should not be ignored.
- Hard cut-off method: In this method, a model is first trained using only accepted application data. This trained model is then used to predict the probabilities of charge-off for the rejected applications. A cut-off value is then chosen. Hypothetical loans from rejected applications with probabilities higher than this cut-off value are considered charged off. Hypothetical loans from the remaining applications are assumed to be repaid. The specified model is then re-trained using a dataset including both accepted and rejected applications (a code sketch of this workflow, and of fuzzy augmentation, appears after this list).
- Fuzzy augmentation: Similar to the hard cut-off method, fuzzy augmentation begins by training the model on accepted applications only. The resulting model with estimated coefficients is then used to predict charge-off probabilities for rejected applications. Data from rejected applications is then duplicated and a repaid or charged-off status is assigned to each. The specified model is then retrained on the augmented dataset—including accepted applications and the duplicated rejects. Each rejected application is weighted by either a) the predicted probability of charge-off if its assigned status is “charged-off,” or b) the predicted probability of it being repaid if its assigned status is “repaid.”
- Parceling: The parceling method resembles the hard cut-off method. However, rather than classifying all rejects above a certain threshold as charged-off, this method assigns repayment status in proportion to the expected “bad” rate (charge-off frequency) at that score. The predicted charge-off probabilities are partitioned into k intervals. Then, for each interval, an assumption is made about the bad rate, and loan applications in each interval are assigned a repayment status randomly according to the bad rate. Bad rates are assumed to be higher in the reject dataset than among the accepted loans. This method considers the missingness to be not at random (MNAR), which requires the modeler to supply additional information about the distribution of charge-offs among rejects.
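To make the mechanics concrete, below is a minimal sketch of the hard cut-off and fuzzy augmentation steps, assuming a scikit-learn logistic regression scorecard; the feature names (fico, dti, income), the target column charged_off, and the 0.40 cut-off are illustrative assumptions rather than recommendations.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Illustrative assumptions: feature names, target column, and cut-off value.
FEATURES = ["fico", "dti", "income"]
TARGET = "charged_off"   # 1 = charged off, 0 = repaid (observed for accepts only)
CUTOFF = 0.40            # charge-off probability above which a reject is labeled bad


def train_preliminary(accepts: pd.DataFrame) -> LogisticRegression:
    """Step common to both methods: fit a scorecard on accepted applications only."""
    return LogisticRegression(max_iter=1000).fit(accepts[FEATURES], accepts[TARGET])


def hard_cutoff_augment(accepts: pd.DataFrame, rejects: pd.DataFrame) -> pd.DataFrame:
    """Hard cut-off: label each reject good or bad with a single threshold, then pool with accepts."""
    prelim = train_preliminary(accepts)
    p_bad = prelim.predict_proba(rejects[FEATURES])[:, 1]
    labeled = rejects.copy()
    labeled[TARGET] = (p_bad > CUTOFF).astype(int)
    return pd.concat([accepts, labeled], ignore_index=True)


def fuzzy_augment(accepts: pd.DataFrame, rejects: pd.DataFrame) -> pd.DataFrame:
    """Fuzzy augmentation: duplicate each reject as one bad and one good record, weighted by its predicted probabilities."""
    prelim = train_preliminary(accepts)
    p_bad = prelim.predict_proba(rejects[FEATURES])[:, 1]
    bad = rejects.copy()
    bad[TARGET] = 1
    bad["weight"] = p_bad
    good = rejects.copy()
    good[TARGET] = 0
    good["weight"] = 1.0 - p_bad
    observed = accepts.copy()
    observed["weight"] = 1.0
    return pd.concat([observed, bad, good], ignore_index=True)


# Retraining on the augmented data (the weights matter only for fuzzy augmentation):
# augmented = fuzzy_augment(accepts_df, rejects_df)
# final = LogisticRegression(max_iter=1000).fit(
#     augmented[FEATURES], augmented[TARGET], sample_weight=augmented["weight"])
```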
Proportion of Accepted Applications to Rejects
An institution with a relatively high percentage of rejected applications will necessarily end up with an augmented training dataset whose quality is heavily dependent on the quality of the selected reject inference method and its implementation. One might therefore argue it is best to cap the ratio of rejected applications to acceptances in the training data. The level at which such a cap is established should reflect the “confidence” in the method used. Estimating such a confidence level, however, is a highly subjective endeavor.
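As a rough illustration, such a cap might be enforced by down-sampling the inferred rejects before they enter the augmented training set; the 0.5 ratio and the random down-sampling below are arbitrary assumptions, not a recommended policy.

```python
import pandas as pd

# Arbitrary assumption: allow at most one inferred reject per two accepted applications.
MAX_REJECTS_PER_ACCEPT = 0.5


def cap_inferred_rejects(accepts: pd.DataFrame, inferred_rejects: pd.DataFrame,
                         seed: int = 42) -> pd.DataFrame:
    """Down-sample inferred rejects so they never exceed the chosen share of accepts."""
    limit = int(len(accepts) * MAX_REJECTS_PER_ACCEPT)
    if len(inferred_rejects) > limit:
        inferred_rejects = inferred_rejects.sample(n=limit, random_state=seed)
    return pd.concat([accepts, inferred_rejects], ignore_index=True)
```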
The Ratio of Bad Rates for Accepts and Rejects
It is reasonable to assume that the “bad rate,” i.e., the proportion of charged-off loans to repaid loans, will be higher among rejected applications. Some modelers set a threshold based on their a priori belief that the bad rate among rejects is at least p-times the bad rate among acceptances. If the selected reject inference method produces a dataset with a bad rate that is perceived to be artificially low, actions are taken to increase the bad rate above some threshold. Where to set this threshold, however, is notoriously difficult to justify.
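A simple sanity check of that a priori belief might look like the sketch below; the multiplier p = 2 and the target column charged_off are assumptions chosen purely for illustration, and the check presumes hard (0/1) inferred labels rather than fuzzy weights.

```python
import pandas as pd


def bad_rate_check(accepts: pd.DataFrame, inferred_rejects: pd.DataFrame,
                   p: float = 2.0, target: str = "charged_off") -> dict:
    """Verify that the inferred bad rate among rejects is at least p times the observed accept bad rate."""
    accept_bad_rate = accepts[target].mean()
    reject_bad_rate = inferred_rejects[target].mean()
    floor = p * accept_bad_rate
    return {
        "accept_bad_rate": accept_bad_rate,
        "reject_bad_rate": reject_bad_rate,
        "required_floor": floor,
        "assumption_satisfied": bool(reject_bad_rate >= floor),
    }
```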
Variable Selection
As outlined above, most approaches begin by estimating a preliminary model based on accepted applications only. This model is then used to infer how rejected loans would have performed. The preliminary model is then retrained on a dataset consisting both of actual data from accepted applications and of the inferred data from rejects. This means that the underlying variables themselves are selected based only on the actual loan performance data from accepted applications. The statistical significance of the selected variables might change, however, when moving to the complete dataset. Variable selection is sometimes redone using the complete data. This, however, can lead to overfitting.
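One way to operationalize this caution, sketched below under the assumption of a statsmodels logistic regression, simple backward elimination on p-values, and illustrative variable names, is to freeze the variable list selected on accepts and only re-estimate the coefficients on the augmented data.

```python
import pandas as pd
import statsmodels.api as sm


def fit_logit(df: pd.DataFrame, variables: list, target: str = "charged_off"):
    """Fit a logistic regression with an intercept on the given variable set."""
    X = sm.add_constant(df[variables])
    return sm.Logit(df[target], X).fit(disp=0)


def backward_select(accepts: pd.DataFrame, candidates: list, alpha: float = 0.05) -> list:
    """Drop the least significant variable until all remaining p-values fall below alpha."""
    selected = list(candidates)
    while selected:
        model = fit_logit(accepts, selected)
        worst = model.pvalues.drop("const").idxmax()
        if model.pvalues[worst] <= alpha:
            break
        selected.remove(worst)
    return selected


# Select on accepts only, then refit the same variables on the augmented dataset
# instead of re-running selection on inferred labels:
# selected = backward_select(accepts_df, ["fico", "dti", "income", "utilization"])
# final_model = fit_logit(augmented_df, selected)
```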
Measuring Model Performance
From a model validator’s perspective, an ideal solution would involve creating a control group in which applications are neither scored nor filtered and every application is accepted. Then the discriminating power of a credit model could be assessed by comparing the charge-off rate of the control group with the charge-off rate of the loans accepted by the model. This approach of extending credit indiscriminately is impractical, however, as it would require the lender to engage in some degree of irresponsible lending.
Another approach is to create a test set. The dilemma here is whether to include only accepted applications. A test set that includes only accepted applications will not necessarily reflect the population for which the model will be used. Including rejected applications, however, obviously necessitates the use of reject inference. For all the reasons laid out above, this approach risks overstating the model’s performance because a similar model (trained only on the accepted cases) was used for reject inference.
A third approach that avoids both of these problems involves using information criteria such as AIC and BIC. This, however, is useful only when comparing different models (for model or variable selection). The values of information criteria cannot be interpreted as an absolute measure of performance.
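For example, assuming a statsmodels logistic regression and illustrative variable names, two candidate specifications can be ranked on the same training data; only the comparison is meaningful, not the absolute values.

```python
import pandas as pd
import statsmodels.api as sm


def information_criteria(df: pd.DataFrame, variables: list, target: str = "charged_off"):
    """Return the (AIC, BIC) of a logistic regression fitted on the given variables."""
    X = sm.add_constant(df[variables])
    model = sm.Logit(df[target], X).fit(disp=0)
    return model.aic, model.bic


# Lower values are preferred when choosing between candidate specifications:
# aic_a, bic_a = information_criteria(train_df, ["fico", "dti"])
# aic_b, bic_b = information_criteria(train_df, ["fico", "dti", "income"])
```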
A final option is to consider utilizing several models in production (the main model and challenger models). Under this scenario, each application would be evaluated by a model selected at random. The models can then be compared retroactively by calculating their bad rates on accepted applications after the financed loans mature. Provided that the accept rates are similar, the model with the lowest bad rate is the best.
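A minimal sketch of this champion/challenger setup follows; the uniform random split and the column names (accepted, charged_off, model_id) are assumptions made for illustration only.

```python
import numpy as np
import pandas as pd


def assign_models(applications: pd.DataFrame, n_models: int = 2, seed: int = 7) -> pd.DataFrame:
    """Route each incoming application to one of the candidate models at random."""
    routed = applications.copy()
    rng = np.random.default_rng(seed)
    routed["model_id"] = rng.integers(0, n_models, size=len(routed))
    return routed


def retrospective_comparison(matured: pd.DataFrame) -> pd.DataFrame:
    """After the financed loans mature, compare accept rates and realized bad rates per model."""
    accepted = matured[matured["accepted"] == 1]
    return pd.DataFrame({
        "accept_rate": matured.groupby("model_id")["accepted"].mean(),
        "bad_rate": accepted.groupby("model_id")["charged_off"].mean(),
    })
```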
Conclusion
Reject inference remains an evolving area of credit modeling. Its ability to improve model performance is still the subject of intense debate. Current results suggest that while reject inference can improve model performance, its application can also lead to overfitting, thus worsening the ability to generalize. The lack of a strong theoretical basis for reject inference methods means that applications of reject inference need to rely on empirical results. Thus, if reject inference is used, key model stakeholders need to possess a deep understanding of the modeled population, have strong domain knowledge, emphasize conducting experiments to justify the applied modeling techniques, and, above all, adopt and follow a solid ongoing monitoring plan.
Doing this will result in a modeling methodology that is most likely to produce reliable outputs for the institution while also satisfying MRM and validator requirements.