Author: Soumya Sethuraman
Companies, from small enterprises to giant corporations, present a great opportunity for banks and financial service providers to expand their credit lending business. A robust and dynamic risk management strategy empowers banks to monitor credit effectively and to seize this opportunity regardless of whether economies are thriving or in turmoil. Banks must continually revise their prediction of whether their corporate customers are likely to face financial distress, and if so, when. Warned in advance, banks can take mitigative action to minimize, or possibly avoid, losses in the event of customer default.
Time to try something new
The established risk rating models employ company data such as financial ratios, industry classification, and workforce size, alongside conventional credit payment behavioral variables. In our work with a major European bank and in reviewing existing research, we found that traditional statistical models were less effective at providing early warnings for SMEs and start-ups, where data from credit bureaux and public tracking agencies are unavailable.
There is an urgent need for more agile, more sensitive credit risk models that can leverage the wealth of internal transactional and behavioral data and depend less on the external sources that traditional models require. ML-based models efficiently capture complex non-linear relationships among a diverse set of variables.
As the start-up culture grows, financial institutions, wishing to make the most of credit lending opportunities in this uncharted market, are willing to experiment with new approaches that go beyond the legacy frameworks mandating ‘white box’ standard statistical approaches. ML-driven models are the right choice.
Developing ML models for early warning
On the face of it, predicting whether a company is likely to default on credit seems to be a standard classification problem, with a set of factors pointing towards the occurrence of a default. In practice, the primary challenge is to train models to recognize significant risk early enough for mitigative action. Because they must work with what are usually considered ‘weak signals,’ the models are trained on behavioral data from at least three months before the actual default event.
Once the model design parameters are fixed (target event definition, gap between prediction and event period, etc.), feature engineering comes into play. This involves defining both simple and complex variables that reflect the potential signals preceding a default event. Typical transformations include velocity variables to capture trends; standard deviations and z-scores to normalize client behavior within micro-clusters of similar clients (grouped by industry, size, and credit exposure); and other meaningful ratios.
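A minimal sketch of two such transformations in pandas (the column names and the micro-cluster key below are illustrative assumptions, not the bank's actual schema):

```python
import numpy as np
import pandas as pd

# Hypothetical monthly behavioral snapshots per customer.
df = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2, 2],
    "month":       [1, 2, 3, 1, 2, 3],
    "segment":     ["sme", "sme", "sme", "sme", "sme", "sme"],
    "avg_balance": [1000.0, 800.0, 500.0, 5000.0, 5200.0, 5100.0],
})

# Velocity: month-over-month change in balance, capturing a deteriorating trend.
df["balance_velocity"] = df.groupby("customer_id")["avg_balance"].diff()

# Z-score within a micro-cluster (here simply the segment), normalizing each
# client's behavior against peers with a similar profile.
grp = df.groupby("segment")["avg_balance"]
df["balance_z"] = (df["avg_balance"] - grp.transform("mean")) / grp.transform("std")
```

Customer 1's steadily falling balance shows up both as negative velocity and as an increasingly negative z-score relative to the cluster.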
The underlying data consist of transactions from current accounts and cards across instruments and channels; credit utilization and payment patterns within the bank; credit utilization data from the central bank; and ownership and features of other products and services availed within the bank or from other banks. Credit monitoring and quality analysts, with their expertise in customer behavior, provide many of the inputs used to identify these features.
Next, we must segment customers whose operations are alike and may have similar predictors of default. This step is important: a one-size-fits-all model may not call attention to specific clusters of customers who are underrepresented in the overall population of corporate borrowers.
One such segmentation criterion is data availability. For example, central banks typically provide credit utilization data only for companies above a certain exposure threshold. Segment-level models ensure that we do not have to deal with low fill-rates for variables that largely do not apply to a given segment. Other segmentation considerations are current credit exposure and the type of credit line.
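A simple sketch of such rule-based segmentation (the exposure threshold, credit-line types, and segment names here are hypothetical):

```python
import pandas as pd

# Assumed reporting cutoff: central banks typically supply utilization data
# only for exposures above some threshold.
EXPOSURE_THRESHOLD = 250_000

def assign_segment(row):
    if row["exposure"] >= EXPOSURE_THRESHOLD:
        return "high_exposure"          # central-bank utilization data available
    if row["credit_line"] == "term_loan":
        return "low_exposure_term"
    return "low_exposure_overdraft"

customers = pd.DataFrame({
    "exposure":    [400_000, 50_000, 80_000],
    "credit_line": ["overdraft", "term_loan", "overdraft"],
})
customers["segment"] = customers.apply(assign_segment, axis=1)
# Per-segment models are then trained only on features with good fill rates
# for that segment, avoiding sparse central-bank variables for small clients.
```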
With robust feature selection techniques, over 2,000 features can be reduced to under a hundred key variables that contribute significantly to default prediction.
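The article does not name the selection technique used; one minimal sketch is importance-based selection with scikit-learn, where the synthetic dataset, dimensions, and median threshold are all illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# Synthetic stand-in for a wide matrix of engineered behavioral features.
X, y = make_classification(n_samples=500, n_features=200, n_informative=20,
                           random_state=0)

# Keep only features whose tree-based importance exceeds the median,
# trimming the matrix to roughly half its columns in one pass.
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=100, random_state=0),
    threshold="median",
)
X_reduced = selector.fit_transform(X, y)
```

In practice several such passes (filter, embedded, and stability-based methods) can be stacked to reach a final shortlist of key variables.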
ML models outperform traditional techniques
In our work with a leading European bank, we evaluated several classification models, ranging from basic Logistic Regression to the more complex Random Forest (RF) and advanced techniques such as XGBoost (XG).
While Logistic Regression delivered accuracy comparable to RF and XG, XG had a larger AUC (that is, better discriminatory power) and a consistently good K-S score of over 65% across segments. Further, the ML-based models performed about twice as well as the bank’s internal rule-based early warning system, both by defaulter count and by overall exposure value. XG was also superior at handling variables with scarce observations.
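As a hedged sketch of how such a comparison can be scored, the snippet below computes AUC and the K-S statistic (maximum separation between the score distributions of defaulters and non-defaulters) on synthetic data; scikit-learn's GradientBoostingClassifier stands in for XGBoost so the example stays self-contained:

```python
from scipy.stats import ks_2samp
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data: roughly 10% "defaulters", as a stand-in
# for the bank's behavioral feature matrix.
X, y = make_classification(n_samples=2000, n_features=30, weights=[0.9],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("logit", LogisticRegression(max_iter=1000)),
                    ("gbm",   GradientBoostingClassifier(random_state=0))]:
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    auc = roc_auc_score(y_te, p)
    # K-S: largest gap between the score CDFs of the two classes.
    ks = ks_2samp(p[y_te == 1], p[y_te == 0]).statistic
    print(f"{name}: AUC={auc:.3f}, K-S={ks:.3f}")
```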
ML models are often criticized as being a black box, obscuring the role of predictors in determining the outcome. Packages like SHAP in Python enable non-practitioners to see the exact order of the top predictors at a customer level, giving them more confidence in the underlying signals and analysis that drive model results.
The top predictors vary by customer segment
We observed that for SMEs (corporate clients with lower exposure and overdraft account-based lines of credit), the top predictors of default risk are the current account balance and transaction-related variables, followed by credit utilization and overdraft trends over the preceding six months.
For the low exposure segment with term loans, top predictors include fund transfer behavior and delays in the six most recent payments, followed by trends in overdraft accounts.
In high exposure segments, credit utilized-to-granted ratios from central banks and other agencies are more influential together with the bank’s internal ratings, which reflect company-related information.
For businesses with a factoring line of credit with the bank, the typical expiry dates and credit utilized/days to expiry ratios were useful early predictors of the risk of default.
Changing business thinking for changing times
Developing and deploying models is only the first step in credit monitoring: the key challenge is getting financial institutions to adopt them with confidence.
It is important to ensure that the model is easy to interpret, especially in the context of early warnings, when even the top predictors look like ‘weak signals’ three to six months out from the actual event. Another challenge is having models that are dynamic and can adjust themselves to the new normal of accelerated change. Financial institutions should evaluate early warning models that can learn to normalize customer behavioral variables by changing macroeconomic and industry-specific indicators.
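As one simple illustration of normalizing a behavioral variable by a sector-level indicator (all names and numbers below are made up for the example):

```python
import pandas as pd

# Deflate a customer's transaction volume by an assumed industry activity
# index, so the model sees deviation from the sector norm rather than the
# macro cycle itself.
data = pd.DataFrame({
    "month":          [1, 2, 3],
    "txn_volume":     [100.0, 80.0, 60.0],
    "industry_index": [1.00, 0.80, 0.75],  # sector-wide activity series
})
data["adjusted_volume"] = data["txn_volume"] / data["industry_index"]
# Month 2's drop mirrors the sector downturn (adjusted stays ~100), while
# month 3 shows a genuine idiosyncratic decline (~80).
```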
Coming soon: We share our perspectives from working with audit and compliance teams to unlock the potential of AI and ML to fight money laundering, cybersecurity attacks, and investment advisory fraud.