Blog | January 14, 2021

CECL in Loss Forecasting – Practical Approaches for Credit Cards

Authors: Badrish Prakash, Karthik Gandhi, Saipriya Ashok

Summary

In the first part of a two-part series on CECL, we discussed the road to the inception of CECL as a framework, the challenges, and the opportunities. In this part of the blog, we will discuss pragmatic approaches to simplify the modeling process and reduce the cost of regulatory compliance. We will do so by describing a recent case study in model development for a midsize bank in the US to forecast losses under CECL for its unsecured consumer loan (bankcard) portfolio.

Introduction

A midsize US bank wanted to build a statistical loss forecasting model for its unsecured consumer and small-business bankcard portfolios to calculate current expected credit losses (CECL) over the life of the loan, both for internal business planning and for CECL reporting requirements.

Under CECL, the expected lifetime losses of loans are recognized at the time a loan is recorded. The model suite and its components forecast the current expected credit loss as an aggregation of the account-level forecasts for the unsecured lending (bankcard) portfolios. 

Model Scope

Design Objective: The primary goal for this custom model development is to forecast losses under CECL for an unsecured credit card portfolio. The model provides a credit loss forecast for the life of a loan at a loan level that can be aggregated ‘bottom-up’ to create a portfolio loss forecast. A segment-level ‘top-down’ aggregated model is created for certain segments with a short time frame and very predictable performance because of prepayments or charge-off. (Refer to Part 1 for the comparative discussion on the ‘top-down’ and ‘bottom-up’ approaches). The custom forecasting model is intended to have the following features: 

• Predict current expected credit losses on existing, active credit card accounts with outstanding balances, through the life of the loan

• Output a monthly loss forecast that can be used for internal business requirements and allowance calculation

• Leverage FICO Score as the risk score in the model

• Provide clear guidelines for model performance monitoring and validation to allow model users to explain the root cause of forecast error

Success Criteria: The custom model is gauged by both in-time and out-of-time validations based on the following guidelines: 

• Model performance – Back-testing on both in-time and out-of-time validation data, which typically allows up to a maximum of 10% variation, accumulating incrementally over time

– The mean absolute deviation can be much larger, since forecast accuracy degrades over such a long forecast horizon

– For the segments and sub-populations where volumes are very small, the variation expressed in percentage could be higher due to a smaller denominator. In such cases, the forecast errors are evaluated in dollar difference

• Aggregate risk estimation – back-testing performed on the entire portfolio to assess the fit of the model in its entirety.  The aggregate risk assessment would be the total portfolio expected credit loss (ECL) for all accounts and significant portfolios and segments for the duration of the loss forecast horizon

• Model sensitivity – the aggregated forecast and the component models should be sensitive to changes in internal portfolio characteristics and external macroeconomic factors

• Model implementability – The model suite and the scoring equations generated as an output can be implemented in the production system

Data Sources: The portfolio data consists of origination and portfolio characteristics, expressed as monthly snapshots at month-end and cycle-end at an account level. Apart from this, various demographic as well as macroeconomic factors were used for model development. A representative list of account-level characteristics is below:

• Origination characteristics like sourcing channel, FICO score at the time of origination, etc.

• Underwriting actions such as initial credit line, interest rate (APR), balance transfer, and their changes over time 

• Demographic information like state of residence

• Credit usage like balance, payment, fee, purchase, utilization 

• Derogatory behavior such as days past due, max delinquency 

• Macroeconomic indicators like local unemployment, income, GDP, etc.

Data Processing and Exploration

Monthly data spanning several years is considered for model development. The data is chosen to cover portfolio performance during recessionary, recovery, and growth cycles. Additionally, around three years of data were used for out-of-time validation. The following data processing steps were taken to make the data appropriate for modeling:

• Different types of duplicate records were identified and treated

• Defaults related to fraud and stolen accounts were removed since these are not true indicators of credit default

• Records were examined and aligned for account number transfers, thus avoiding the misclassification of transferred accounts as “paid down” or otherwise closed

Candidate Variable Selection: 

The models were built based on portfolio and macroeconomic attributes. Except for the FICO score, no other bureau attributes were considered for model development. To gain confidence in the data, descriptive statistics on the macroeconomic variables were generated and compared with publicly available sources before the variables were used in model development.

Modeling Approach / Definition: 

The default process can be described in terms of competing risk events, or terminal events, namely prepayment and charge-off occurring over a period of time. The discrete-time hazard modeling approach determines the probability of such an event occurring within a specific timeframe. For each of the competing risks, an account is considered ‘survived’ at time t (between 1 and 39 months) when the risk event does not happen. The dependent variable for the PD models is set to 1 when the account reaches one of the terminal events, that is, the account is charged off (CO) or paid down (pre-pay); otherwise, the dependent variable is set to 0.
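To make this dependent-variable definition concrete, the sketch below builds the account-month (pseudo-observation) dataset implied by this setup. The column names and the toy panel are illustrative assumptions, not the bank's actual data layout.

```python
import pandas as pd

# Illustrative account-month panel; column names are assumed for this sketch.
panel = pd.DataFrame({
    "account_id":    [101, 101, 101, 202, 202],
    "month_on_book": [1,   2,   3,   1,   2],
    "charged_off":   [0,   0,   1,   0,   0],   # terminal event: charge-off
    "paid_down":     [0,   0,   0,   0,   1],   # terminal event: prepay
})

# Dependent variables: 1 in the month the terminal event occurs, 0 otherwise.
panel["event_co"] = panel["charged_off"]
panel["event_pp"] = panel["paid_down"]

# Keep only months up to and including the first terminal event, so each account
# contributes one pseudo-observation per month survived plus (at most) one event month.
terminal = panel["charged_off"] + panel["paid_down"]
events_before = terminal.groupby(panel["account_id"]).cumsum() - terminal
pseudo_obs = panel[events_before == 0].copy()
print(pseudo_obs)
```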

For this, we define the survivor function at any given time tj as the probability that the survival time T (in months) is at least tj.

S(t_j) = Pr(T ≥ t_j)

Therefore, the hazard at time tj is defined as a conditional probability of a terminal event at that time, given that the account survived up to that point.

 

λ(t_j) = Pr(T = t_j | T ≥ t_j)

Hence, the conditional odds of the terminal event at each discrete time t_j, given survival up to that point, can be expressed as the baseline odds at t_j scaled by the effect of the account-level covariates. Specifically,

 

λ(t_j | x_i) / [1 − λ(t_j | x_i)] = {λ_0(t_j) / [1 − λ_0(t_j)]} · exp(x_i'β)

Here, T denotes the (discrete) survival time in months, t_j the j-th month, S(t_j) the survivor function, λ(t_j | x_i) the hazard for account i with covariate vector x_i, λ_0(t_j) the baseline hazard at t_j, and β the vector of coefficients on the covariates.

Taking logs, we obtain a model on the logit of the hazard, i.e., the conditional probability of a terminal event at t_j given survival up to that time. Expressed as an equation, the model is:

 

logit λ(t_j | x_i) = log{λ(t_j | x_i) / [1 − λ(t_j | x_i)]} = α_j + x_i'β

Here, the model essentially treats time as a discrete factor by introducing one parameter α_j for each possible terminal-event time. Interpretation of the parameters β associated with the other covariates follows along the same lines as in logistic regression. Thus, one can fit the discrete-time proportional-hazards model by running a logistic regression on a set of pseudo observations, generated by creating a terminal-event indicator for an account with event_j = 1 in month j and 0 otherwise. It has been observed that there is no significant difference between the complementary log-log (c-log-log) link function and the logit transformation; hence, the PD is estimated via logistic regression. Lastly, the binary probabilities are converted to multinomial probabilities using the Begg-Gray (1984) transformation method [1], which is defined as below:

P_j = [p_j / (1 − p_j)] / [1 + Σ_{k=1,2} p_k / (1 − p_k)],   j = 1, 2

 

where j = 1, 2 represents the prepay and charge-off events, respectively, and p_j is the binary probability estimated by the j-th logistic regression.
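A minimal sketch of this estimation step, assuming a pseudo-observation frame like the one above: one binary logistic regression per terminal event (each fit against survival, excluding rows where the competing event occurred, in the spirit of the individualized regressions), followed by the Begg-Gray conversion of the binary probabilities p_j into multinomial probabilities P_j. Variable names and values are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Assumed pseudo-observation data: one row per account-month at risk.
df = pd.DataFrame({
    "fico":        [720, 705, 690, 660, 640, 610, 780, 770, 590, 575],
    "utilization": [0.2, 0.3, 0.4, 0.6, 0.7, 0.9, 0.1, 0.1, 0.95, 0.97],
    "event_co":    [0,   0,   0,   0,   1,   1,   0,   0,   1,   1],  # charge-off
    "event_pp":    [0,   0,   1,   0,   0,   0,   1,   1,   0,   0],  # prepay
})
X = df[["fico", "utilization"]]

# Individualized binary regressions: each event vs. survival, dropping the
# rows where the competing event occurred.
co_rows = df["event_pp"].eq(0)
pp_rows = df["event_co"].eq(0)
m_co = LogisticRegression(max_iter=1000).fit(X[co_rows], df.loc[co_rows, "event_co"])
m_pp = LogisticRegression(max_iter=1000).fit(X[pp_rows], df.loc[pp_rows, "event_pp"])

# Binary conditional probabilities for every at-risk account-month.
p_co = m_co.predict_proba(X)[:, 1]
p_pp = m_pp.predict_proba(X)[:, 1]

# Begg-Gray (1984): convert binary odds into multinomial probabilities.
odds_co, odds_pp = p_co / (1 - p_co), p_pp / (1 - p_pp)
denom = 1 + odds_co + odds_pp
P_co, P_pp, P_survive = odds_co / denom, odds_pp / denom, 1 / denom
print(np.round(np.column_stack([P_co, P_pp, P_survive]), 3))
```

In practice the time-on-book dummies (the α_j terms) and the macroeconomic drivers would also appear among the predictors; they are omitted here to keep the sketch short.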

The LGD can be determined by the percentage of loss by facility or collateral type. LGD estimates could also be driven or influenced by product type, industry, or geography. For this exercise, the LGD was set to 100% due to the unsecured nature of credit card loans.

The EAD is calculated based on the following formulas,

 

[EAD formula: a function of P_1, P_2, and the ratio r_t]

where P_1 and P_2 represent the conditional probabilities of charge-off and prepayment, respectively, calculated using the Begg-Gray transformation above, and the ratio r_t is calculated as the monthly ratio of the balance survival curve to the account-count (probability) survival curve.
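The ratio r_t itself can be computed directly from the two curves named above; a brief sketch with made-up curve values:

```python
import numpy as np

# Illustrative survival curves over a 6-month window (values are assumptions).
balance_survival = np.array([1.00, 0.93, 0.86, 0.80, 0.74, 0.69])  # share of balance remaining
account_survival = np.array([1.00, 0.97, 0.94, 0.91, 0.88, 0.85])  # share of accounts surviving

# r_t: monthly ratio of the balance survival curve to the account survival
# curve, which feeds the EAD calculation together with P_1 and P_2.
r_t = balance_survival / account_survival
print(np.round(r_t, 3))
```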

Segmentation Analysis

Segmentation analysis was performed separately for each of the dependent events, namely charge-off and pre-pay, to determine whether there are sub-populations within the development dataset that would benefit from separate scorecards, and, if so, how many scorecards and which specific segmentation schemes would be optimal. The segmentation analysis is done using non-parametric survival analysis with censoring in SAS, combined with business intuition and with the implementability of the model in mind.

These initial segments reflected delinquency status, transition states, payment activity (payment ratio), and tenure at the observation points. The segment distribution is then examined at different snapshots to ensure that the segments are stable across time. A loan-level ‘bottom-up’ methodology was identified for the major segments, whereas the ‘top-down’ approach was selected for minor segments where loan-level data may not have provided extra discrimination, for example, transactor segments.

Model Development

Based on the insights derived from the segmentation process, the models were developed at the portfolio level. Model development covers the following components:

i) probability of default (PD), which gives the average percentage of accounts, or borrowers, that experience a default event;

ii) loss given default (LGD), which gives the percentage of exposure the bank might lose if the borrower defaults; and

iii) exposure at default (EAD), which gives an estimate of the outstanding amount (drawn amounts) in case the borrower defaults.

The loss projections are derived from the PD models applied to the monthly LGD and EAD estimates that yield expected losses for each month. Summing across all months gives each account’s total expected loss and summing across all accounts gives the total portfolio expected loss.

Finally, the three components are combined to give an expected credit loss (ECL) for an account. This framework is described in the following equation:

ECL = Σ_t PD_t × LGD_t × EAD_t   (summed over the months t of the forecast horizon)
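The same aggregation can be written in a few lines of code; the sketch below assumes per-account arrays of monthly component estimates (all values synthetic, LGD fixed at 100% as in this exercise).

```python
import numpy as np

# Synthetic monthly component forecasts for two accounts over a 39-month horizon.
rng = np.random.default_rng(0)
n_accounts, horizon = 2, 39
pd_m  = rng.uniform(0.001, 0.01, size=(n_accounts, horizon))  # monthly PD
lgd_m = np.ones((n_accounts, horizon))                         # LGD = 100% (unsecured)
ead_m = rng.uniform(500, 5000, size=(n_accounts, horizon))     # monthly EAD ($)

# ECL per account: sum over the forecast horizon of PD_t * LGD_t * EAD_t.
ecl_account = (pd_m * lgd_m * ead_m).sum(axis=1)

# Portfolio ECL: 'bottom-up' sum across accounts.
ecl_portfolio = ecl_account.sum()
print(np.round(ecl_account, 2), round(ecl_portfolio, 2))
```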

Standard logistic and linear regression are used to estimate the PD models. Each model is put through a series of stepwise logistic regressions and development tests to build and refine the initial models, evaluating variable significance levels (p-values), variance inflation factors (VIF) to thwart multicollinearity and improve model parsimony, and the signs of the parameter estimates in each model iteration. To prevent over-fitting, linear regression is run alongside the primary logistic regression to assess cross-variable correlation as indicated by VIF. Bivariate charts and weight-of-evidence patterns are also examined to help ensure that each variable used exhibits both a clearly discernible trend and a solid business rationale. Each dataset is divided into an estimation sample, on which the model is built, and a validation (hold-out) sample to ensure model stability; the hold-out sample is kept at 30%.
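As one example of these refinement checks, variance inflation factors can be computed for the candidate predictors with statsmodels; the candidate columns below are assumptions for illustration.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools import add_constant

# Assumed candidate predictors from the modeling dataset.
X = pd.DataFrame({
    "fico":          [720, 705, 690, 660, 640, 610, 780, 770, 590, 575],
    "utilization":   [0.20, 0.35, 0.38, 0.62, 0.70, 0.88, 0.12, 0.15, 0.93, 0.97],
    "payment_ratio": [0.90, 0.75, 0.72, 0.41, 0.35, 0.12, 0.98, 0.95, 0.06, 0.05],
})
X = add_constant(X)

# VIF for each predictor; values well above roughly 5-10 flag multicollinearity.
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif.round(2))
```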

Model Validation

Each model is evaluated on several qualitative and quantitative performance measures such as model parsimony, model lift (Kolmogorov-Smirnov or KS), measures of statistical dispersion (Gini), event capture rate, and measures of accuracy. The KS and Gini statistics are the primary targets for the individual model optimizations. All variables are tested and considered during model building not only for their statistical significance but also for their theoretical or intuitive explanation, relevance and materiality, and redundancy. Where candidate variables fail these criteria, they are dropped from the model, even at the expense of KS and Gini.

Model components were also tested for their performance over short-term forecast windows of 6 and 12 months, in addition to the long-term forecast windows of 24 and 39 months. The errors are calculated as absolute values of errors, expressed as percentages. Examining the absolute value, as well as the direction, of the percentage errors across the twentiles provides an indication of model fit accuracy across the score range.
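One way to tabulate this, sketched below with synthetic inputs, is to bucket accounts into twentiles of the model score and compare actual and forecast losses within each bucket.

```python
import numpy as np
import pandas as pd

# Synthetic account-level scores with actual and forecast losses.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "score":    rng.uniform(300, 850, n),
    "actual":   rng.gamma(2.0, 50.0, n),
    "forecast": rng.gamma(2.0, 52.0, n),
})

# Twentiles of the model score (20 equal-population buckets).
df["twentile"] = pd.qcut(df["score"], 20, labels=False) + 1

by_bucket = df.groupby("twentile")[["actual", "forecast"]].sum()
by_bucket["pct_error"] = (by_bucket["forecast"] - by_bucket["actual"]) / by_bucket["actual"] * 100
by_bucket["abs_pct_error"] = by_bucket["pct_error"].abs()
print(by_bucket.round(1))
```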

Population Stability Index:

The PSI report is used to identify population change over time relative to the model development sample. PSI reports offer useful insights for checking data quality and for evaluating the effects of credit policy on the portfolio. A significant population shift is often an early indicator that the model assumptions may no longer hold and the model may require fine-tuning.
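The sketch below uses the common PSI formulation (binned distribution shares compared between the development sample and a recent snapshot), which we assume is close to what the report implements; the score samples are synthetic.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a development ('expected') sample
    and a recent ('actual') sample of the same score or variable."""
    cuts = np.unique(np.percentile(expected, np.linspace(0, 100, bins + 1)))
    # Assign each value to a bin; out-of-range values land in the outer bins.
    e_idx = np.clip(np.searchsorted(cuts, expected, side="right") - 1, 0, len(cuts) - 2)
    a_idx = np.clip(np.searchsorted(cuts, actual,   side="right") - 1, 0, len(cuts) - 2)
    e_pct = np.bincount(e_idx, minlength=len(cuts) - 1) / len(expected)
    a_pct = np.bincount(a_idx, minlength=len(cuts) - 1) / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # guard against empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(2)
dev_scores    = rng.normal(680, 60, 50_000)      # development-sample scores (synthetic)
recent_scores = rng.normal(665, 65, 50_000)      # recent-snapshot scores (synthetic)
print(round(psi(dev_scores, recent_scores), 4))  # rule of thumb: <0.1 stable, >0.25 shifted
```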

Sensitivity Analysis:

Sensitivity analysis is performed during model development to check the impact of small changes in inputs on model outputs and to ensure the outputs fall within an expected range. Another use of sensitivity analysis in the context of loss estimation is to demonstrate that the model is suitably conservative in its forecasts.
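In its simplest form this amounts to perturbing one input at a time and confirming that the forecast moves in the expected direction and by a plausible magnitude; a toy sketch follows (the stand-in model and shock sizes are assumptions, not the bank's model).

```python
def loss_forecast(unemployment_rate, utilization):
    """Stand-in for the fitted model: a toy, monotone function of two drivers."""
    return 1_000_000 * (0.02 + 0.004 * unemployment_rate + 0.01 * utilization)

base = loss_forecast(unemployment_rate=5.0, utilization=0.35)

# Shock each input separately and report the relative change in the forecast.
shocks = {
    "unemployment +1 pt": loss_forecast(6.0, 0.35),
    "utilization +5 pts": loss_forecast(5.0, 0.40),
}
for name, shocked in shocks.items():
    print(f"{name}: {100 * (shocked - base) / base:+.1f}% vs. base forecast")
```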

Model Back-testing and Out of Time Validation

Model back-testing is one form of outcome analysis that involves comparing actual outcomes with model forecasts in a historical period that matches the model’s forecast horizon or performance window at different snapshots of time.

Back-testing entails analyzing a large number of forecasts over different conditions at a point in time or over multiple periods. This process may reveal significant errors or inaccuracies in model development, in which case model adjustment, recalibration, or redevelopment is warranted. The purpose is to test the overall loss prediction rather than individual forecast values.

Model Documentation

The CECL standard requires institutions to be more involved in the entire allowance process, especially at the management/executive level. Therefore, explanations, justifications, and rationales must be discussed, understood, and documented. The CECL loss forecast model intended for regulatory submission should also be used by banks for their internal loan loss reserve calculation process.

Tiger Analytics applies a “document-as-you-go” principle so that the assumptions and discussions around the modeling process and decisions are captured as they happen. The model documentation relies on several contributors supplying information in a pre-determined format and template; however, it is necessarily authored by a risk and modeling specialist to ensure accountability and completeness. For this exercise, the Tiger Analytics team created around 300 pages of main documentation for the bank’s regulatory submission. Several addenda and appendices running to hundreds of pages were also created in a standard format to aid model validation and subsequent internal and external audits.

Conclusion

The FASB’s CECL standard requires timely, forward-looking measurement of risk using “reasonable and supportable” forecasts over the lifetime of the loan. This presents a unique challenge for credit cards, since issuers have to estimate losses from the current outstanding balance and ignore future draws. An important part of the modeling approach is calculating paydown balance curves at an account level and then rolling them up to the segment and then to the portfolio. A practical approach for credit cards should balance the need for output accuracy, model sensitivity, and ease of implementation. Model documentation is an important part of the exercise and should be undertaken by risk and modeling specialists to ensure completeness and assign accountability.

 

To know more about how we can support your CECL journey and to schedule a 1:1 discussion with our specialists, send an email to info@tigeranalytics.com

 

References:

[1] Begg, C. B., and Gray, R. (1984). “Calculation of polychotomous logistic regression parameters using individualized regressions.” Biometrika 71(1): 11–18.
