CECL: Where we’ve been and where we’re going
This is one of five key articles featured in our 2022 Financial Institutions Advisor.
As we reflect on the experience of the first 350 institutions to adopt the current expected credit losses (CECL) standard and prepare for the remaining 10,000 institutions to adopt over the coming years, it appears CECL may be achieving its intended results. While it's difficult to distinguish the impact of CECL adoption from that of the pandemic, the standard seems to have allowed institutions to be more responsive to ongoing uncertainty than was common under the incurred loss method.
As we’ve worked with institutions that have adopted the standard and others that are working toward adoption, we’ve identified two areas your institution should consider focusing on.
Understanding your model
We've seen institutions design models to estimate credit losses where the resulting reserve conceptually didn't make sense, even though the individual decisions made in developing the model were well supported. This can result from not understanding how the model works or from certain assumptions having unexpected impacts. To combat this, we suggest stepping back to assess your model and its results from a high level and qualitatively evaluating what you're seeing. Do you have minimal loss history yet, with almost no adjustments to that history, find yourself projecting a large reserve? Are you expecting an improvement in the economic environment over the next year while your model uses loss rates that exceed the historical average? The results of these reflections, at a minimum, support the effective challenge of the model and could even lead to significant adjustments to how you approach establishing a reserve.
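The high-level reasonableness questions above can be sketched as a simple check. This is an illustrative sketch only, not a CECL methodology: the segment balance, historical loss rate, and adjustments below are hypothetical numbers chosen for the example, and real models are far more involved.

```python
# Hypothetical single-segment loss-rate estimate (all figures illustrative).
hist_avg = 0.012       # average historical loss rate for the segment (1.2%)
q_adj = 0.002          # qualitative adjustment for current conditions
forecast_adj = -0.001  # an improving one-year forecast should lower the rate

rate = hist_avg + q_adj + forecast_adj  # applied reserve rate
balance = 50_000_000                    # amortized cost basis of the segment
reserve = balance * rate                # 50M at 1.3% = 650,000

# Reasonableness check echoing the questions in the text: an improving
# forecast paired with an applied rate above the historical average is
# exactly the kind of result that warrants effective challenge.
if forecast_adj < 0 and rate > hist_avg:
    print("Flag: applied rate exceeds history despite an improving forecast")
```

In this hypothetical, the flag fires because the qualitative add-on more than offsets the forecast improvement, which may be defensible, but is the sort of outcome management should be able to explain.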
Establishing model documentation
Estimating expected credit losses under CECL inherently requires a significant amount of judgment — judgment that will need to be explained to stakeholders such as management and regulators throughout the use of your CECL model. Because of this, we can't overstate the importance of establishing and maintaining clear and thorough documentation of decisions made during the development and ongoing operation of the model. While many vendors provide white papers and other technical discussions of the theories underlying third-party models, this documentation should be supplemented with a discussion of how management is applying the model and the related decisions, assumptions, and limitations.
As each institution is in a different stage of addressing the new standard, we've outlined below additional considerations both for institutions that have already adopted and for those yet to adopt.
For institutions yet to adopt
Most institutions yet to adopt this standard are working toward implementation of a CECL model on Jan. 1, 2023. Below are some key considerations to build into your timeline as you plan for adoption:
- Data evaluation: Data availability has been a key consideration in selection of a methodology and/or model. Understanding the data available and trends in your historical loss rates is a good first step for CECL adoption.
- Model selection: The first major decision in selecting a model is determining whether your institution will use an in-house model (often based in Excel) or one developed by a third-party vendor. This decision is often based on an evaluation of the complexity of the institution's loan portfolio, relationships with existing vendors, and the level of ongoing effort to maintain an in-house model vs. a third-party model. The next decision, which specific method or model to use, should be based on a thorough understanding of the various options being considered. Again, the depth and robustness of available data will likely play a key role in this consideration.
- Parallel runs: Institutions benefit from the opportunity to analyze how their CECL model responds to changes over time by running the CECL model in parallel with the incurred loss model for a few quarters prior to implementation. Not only does this provide an opportunity to work out process and model issues ahead of adoption, but many institutions we've worked with have found that it helps them better understand their model and provides an opportunity to adjust, if needed, prior to implementation.
- Model validation: Depending on the size and complexity of your institution, a model validation prior to implementation may be expected by management, regulators, and/or other stakeholders. We've observed this process to take about three months to complete, and you may want to schedule it to allow time for adjustments to the model and additional parallel runs prior to implementation.
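The parallel-run comparison described above amounts to tracking both reserves quarter by quarter and examining the gap. A minimal sketch, using entirely hypothetical quarterly figures (in $000s):

```python
# Hypothetical parallel-run log: reserve levels under both methods, in $000s.
parallel = [
    {"quarter": "2022Q1", "incurred": 4100, "cecl": 5150},
    {"quarter": "2022Q2", "incurred": 4050, "cecl": 5000},
    {"quarter": "2022Q3", "incurred": 4200, "cecl": 5600},
]

# Compare the two estimates each quarter; unexplained swings in the gap
# are the kind of issue a parallel run is meant to surface before adoption.
for run in parallel:
    delta = run["cecl"] - run["incurred"]
    pct = delta / run["incurred"]
    print(f'{run["quarter"]}: CECL exceeds incurred by {delta} ({pct:.1%})')
```

The value is less in any single quarter's number than in understanding why the gap moves from quarter to quarter.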
For institutions that have already adopted
Due to the complex nature of many of the models and methods used to estimate credit losses and the importance of this estimate to your institution, many institutions are realizing that the effort to implement this new standard doesn't stop at the adoption date. As outlined by the regulators, management has a responsibility to perform ongoing monitoring and sustain effective challenge of the model and key assumptions throughout the life of the model.
Several important aspects of ongoing monitoring are outlined below. A process to address each consideration should be established and executed on a frequency commensurate with the complexity of the institution and the model.
- Performing sensitivity analysis to identify key assumptions and verify that the model's response to changes in those assumptions aligns with expectations
- Establishing a framework for effective challenge of changes to key assumptions, once identified
- Considering how known limitations, overlays, or overrides in your model (for instance, using a floor loss rate in instances where a segment has limited loss history) impact the output of your model at each measurement date
- Completing a model validation in accordance with your institution’s model risk management program and when significant changes are made to the model
- Assessing whether inputs continue to be accurate and consistent with the model’s purpose and design
- Monitoring the effectiveness of third-party models, including review of SOC 1 reports
- Establishing a plan to complete benchmarking and/or outcomes analysis to evaluate the performance of the model
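The sensitivity analysis in the first bullet can be sketched as shocking each assumption and observing the reserve impact. This is a hypothetical illustration only; the reserve function, balances, and shock sizes below are invented for the example.

```python
# Toy reserve function for a single segment (hypothetical).
def estimate_reserve(balance, loss_rate, forecast_adj):
    return balance * (loss_rate + forecast_adj)

base = {"balance": 50_000_000, "loss_rate": 0.012, "forecast_adj": 0.001}
base_reserve = estimate_reserve(**base)  # 50M at 1.3% = 650,000

# Shock each assumption by +/-10% and record the reserve impact; the
# assumptions producing the largest swings are the "key" assumptions that
# merit effective challenge whenever they change.
for key in ("loss_rate", "forecast_adj"):
    for shock in (0.9, 1.1):
        scenario = dict(base, **{key: base[key] * shock})
        impact = estimate_reserve(**scenario) - base_reserve
        print(f"{key} x{shock}: reserve moves by {impact:+,.0f}")
```

In this toy example, a 10% shock to the loss rate moves the reserve by $60,000 while the same shock to the forecast adjustment moves it by only $5,000, identifying the loss rate as the key assumption to monitor.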