In part 1 of this series (Can Loan Defaults Be Predicted?) I introduced some of the basic concepts behind my approach to predicting defaults. In this post I'll cover a few implementation details.

The Learning Process
The learning process in machine learning starts with a set of data known as the training set. In our case, this consists of a set of loans, the "features" of each loan (such as length, interest rate, home ownership, etc.), and a binary label indicating whether the loan defaulted or was fully paid. An accurately labeled training set is essential to producing a reliable model, so I did not include any currently active loans. This obviously reduces the size of the available training data, but we have no way of knowing whether a current loan will be fully paid or charged off, and I felt that guessing would only compromise the accuracy of the model.
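As a rough sketch of that filtering step, the snippet below keeps only loans with a known final outcome and attaches the binary label. The field names and toy records are illustrative, not the exact Lending Club export schema (though Lending Club's data does include a loan status field with values such as "Fully Paid", "Charged Off", and "Current"):

```python
# Toy loan records; field names are illustrative.
loans = [
    {"id": 1, "loan_status": "Fully Paid"},
    {"id": 2, "loan_status": "Current"},      # still active -> excluded
    {"id": 3, "loan_status": "Charged Off"},
    {"id": 4, "loan_status": "Fully Paid"},
]

# Only these statuses have a known final outcome.
FINAL_STATUSES = {"Fully Paid", "Charged Off"}

# Keep finished loans and attach a binary label:
# 1 = default (charged off), 0 = fully paid.
training_set = [
    {**loan, "label": int(loan["loan_status"] == "Charged Off")}
    for loan in loans
    if loan["loan_status"] in FINAL_STATUSES
]

print([(l["id"], l["label"]) for l in training_set])  # [(1, 0), (3, 1), (4, 0)]
```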
Over 20 features of each loan were included in the training data. Most come directly from the Lending Club loan data, but I also computed several additional features that made sense intuitively, such as:
- Monthly Payment/Income
- Loan Description Length
- Description Spelling Errors
- Readability Measures (e.g. the Flesch formula, McLaughlin's SMOG formula)
- State Unemployment Rate
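A couple of these computed features can be sketched in a few lines. The loan record and its keys below are made up for illustration, and the syllable counter feeding the Flesch score is a crude vowel-group heuristic, not a production readability implementation:

```python
def naive_syllables(word: str) -> int:
    # Count runs of vowels as a rough syllable estimate.
    count, prev_vowel = 0, False
    for ch in word.lower():
        is_vowel = ch in "aeiouy"
        if is_vowel and not prev_vowel:
            count += 1
        prev_vowel = is_vowel
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    # Flesch formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sentences = max(text.count(".") + text.count("!") + text.count("?"), 1)
    words = text.split()
    syllables = sum(naive_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

# Illustrative loan record (not the real Lending Club schema).
loan = {
    "monthly_payment": 350.0,
    "monthly_income": 5000.0,
    "description": "I plan to consolidate my credit cards. My payment history is good.",
}

features = {
    "payment_to_income": loan["monthly_payment"] / loan["monthly_income"],
    "description_length": len(loan["description"]),
    "flesch_score": flesch_reading_ease(loan["description"]),
}
print(features["payment_to_income"])  # 0.07
```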
Anything that tells us more about the borrower and their economic condition might help improve the accuracy of the model.

Tuning the Model
The learning algorithm takes our training set and several tuning parameters as input and produces a model that classifies loans into "good" and "bad" sets. To avoid overfitting the data, we go through a process of training and testing known as cross-validation. Cross-validation divides the data into k subsets of equal size. For each combination of tuning parameters, the model is trained k times, each time leaving one of the subsets out of training; the left-out subset is then used to assess the model's performance. This process repeats until a set of parameters is found that produces the best tradeoff between sensitivity and specificity.

Sensitivity and Specificity
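The loop structure of that parameter search looks roughly like the sketch below. To keep it self-contained it uses a toy one-feature "classifier" (a simple threshold) in place of a real learner, so the model, data, and parameter grid are all stand-ins:

```python
import random

def k_fold_indices(n, k, seed=0):
    # Shuffle indices once, then slice them into k roughly equal folds.
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def toy_accuracy(train, test, threshold):
    # Stand-in for "train a model, score the held-out fold": predict
    # default whenever the single feature exceeds the tuning threshold.
    correct = sum((x > threshold) == y for x, y in test)
    return correct / len(test)

# Toy data: (feature, defaulted?) pairs.
data = [(x / 10, x / 10 > 0.5) for x in range(10)]

k = 5
folds = k_fold_indices(len(data), k)
best = None
for threshold in [0.3, 0.5, 0.7]:            # tuning-parameter grid
    scores = []
    for held_out in range(k):                # train k times per parameter set
        test = [data[i] for i in folds[held_out]]
        train = [data[i] for f in range(k) if f != held_out for i in folds[f]]
        scores.append(toy_accuracy(train, test, threshold))
    mean = sum(scores) / k
    if best is None or mean > best[1]:
        best = (threshold, mean)

print(best)  # the threshold 0.5 separates the toy data perfectly
```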
Sensitivity and specificity are statistical measures of the performance of a binary classification test. Sensitivity (also called recall in some fields) measures the proportion of actual positives that are correctly identified; specificity measures the proportion of negatives that are correctly identified. The goal, of course, is to find a model with both high sensitivity (catching the most defaulting loans) and high specificity (not flagging "good" loans). But this is usually impossible, so tradeoffs need to be made. In P2P lending the cost of investing in a loan that defaults is very high, so it's better to have a model that catches a very high proportion of the defaulting loans at the expense of flagging some good ones, as long as enough loans are left over to support the dollar amount we want to invest periodically.

Generating Default Predictions
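Concretely, with "positive" meaning a loan that defaulted, both measures fall out of the confusion-matrix counts. The labels and predictions below are made up purely to show the arithmetic:

```python
# 1 = defaulted, 0 = fully paid (illustrative values).
actual    = [1, 1, 1, 0, 0, 0, 0, 1]
predicted = [1, 1, 0, 0, 0, 1, 0, 1]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))  # true positives
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # missed defaults
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))  # true negatives
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # good loans flagged

sensitivity = tp / (tp + fn)   # share of defaults we caught
specificity = tn / (tn + fp)   # share of good loans we kept
print(sensitivity, specificity)  # 0.75 0.75
```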
To generate the predictions, we start "walking" forward from the day Lending Club started issuing loans. For each day we take the latest Lending Club data, remove all loans issued after the current simulated date, and split the remaining loans into two sets: a training set, consisting of loans that have been fully paid or charged off, and a test set, consisting of the loans that don't have predictions yet. After feature generation and model training, we use the model to generate a binary prediction for every loan in the test set. Once a loan is assigned a prediction, it is never changed; doing so would compromise the validity of the predictions.
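The walk-forward loop can be sketched as follows. Dates are simplified to integers, the three loan records are made up, and `train_and_predict` is a placeholder for the real feature generation and model training; the point is the structure: train only on loans already finished by the simulated date, and freeze each prediction once assigned:

```python
# Illustrative loan records with integer "dates".
loans = [
    {"id": 1, "issued": 1, "finished": 5, "defaulted": True},
    {"id": 2, "issued": 2, "finished": 6, "defaulted": False},
    {"id": 3, "issued": 7, "finished": None, "defaulted": None},  # still active
]

predictions = {}  # loan id -> 0/1, assigned once and never revised

def train_and_predict(training, loan):
    # Placeholder for feature generation + model training + scoring:
    # here we just predict the majority outcome seen so far.
    if not training:
        return 0
    base_rate = sum(l["defaulted"] for l in training) / len(training)
    return int(base_rate >= 0.5)

for today in range(1, 10):  # walk forward day by day
    # Only loans that had already finished by "today" may be trained on.
    training = [l for l in loans
                if l["finished"] is not None and l["finished"] <= today]
    for loan in loans:
        if loan["issued"] == today and loan["id"] not in predictions:
            predictions[loan["id"]] = train_and_predict(training, loan)

print(predictions)
```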
Processing all of Lending Club's loan data from 2007 to the present takes several days of continuous compute time. But this process ensures, for the most part, a historically accurate representation of each point in time, so the predictions are not influenced by future data. At the end of the process, each loan is assigned a value of 0 (no default) or 1 (default).
In the next post I’ll explore how these predictions can be used to increase our return on investment.