Introduction
- Credit plays a crucial role in both the private and public sectors, facilitating economic activity and serving as a source of funding. Over the years, credit has evolved significantly in volume, provision channels, types, and regulatory framework.
- Credit risk, the likelihood of borrowers failing to meet debt obligations, is a major concern across industries. While commonly associated with financial institutions, it also affects non-financial sectors, where firms extend and receive credit.
- Statistics from the Bank for International Settlements indicate substantial credit expansion in the Eurozone and the USA, with private debt exceeding 150% of GDP in both regions. Notably, credit composition varies, with the Eurozone seeing more credit to non-financial corporations and the USA having higher household credit. The banking sector remains a significant contributor to credit provision, though new financial instruments and platforms like credit derivatives and peer-to-peer lending introduce additional complexities and risks.
- Managing credit risk has become multifaceted, involving regulatory, methodological, and technical challenges, impacting various organizations exposed to credit risk.
- Two major developments have significantly impacted credit risk management in recent years. Firstly, the regulatory framework, particularly concerning credit institutions like banks, underwent significant changes. The Basel Committee on Banking Supervision introduced the Basel I Accord in 1988, establishing basic capital requirements based on asset risk. This was succeeded by the more comprehensive Basel II Accord in 2004, covering credit, operational, and market risks, and emphasizing supervisory guidelines and market discipline. Basel III, developed in response to the 2007-2008 financial crisis, enforces stricter capital, liquidity, and leverage requirements. These regulations have heavily shaped credit risk management practices in the financial sector.
- Secondly, there has been a widespread adoption of analytical methods for credit risk modeling and management. Early methods relied on empirical evaluation systems like CAMEL, which combined various factors but lacked objectivity. The introduction of advanced analytical approaches dates back to the 1960s and 1970s with bankruptcy prediction models and the 1980s with credit scoring models like the FICO score. Tighter regulatory frameworks incentivized the adoption of analytical models, which were seen as more sophisticated and reliable. This trend was further fueled by advancements in mathematical finance, data science, and operations research. Analytical credit risk modeling applies to both individual loans and portfolios, aiding in assessing creditworthiness, estimating expected losses, and guiding portfolio management decisions.
- Despite these advancements, credit risk models faced criticism during the global credit crisis of 2007-2008 for their role in facilitating excessive credit expansion and failing to accurately estimate risk exposure. However, they have undeniably played a crucial role in facilitating access to credit by reducing risk premiums for borrowers.
The CAMEL System
- When bank credit analysts have all the required information, including multi-year financial data, they typically use the CAMEL system or its variations to assess credit risk. Though initially designed by U.S. bank supervisors for examination, it’s now widely embraced by rating agencies, counterparty analysts, and even equity analysts for valuing bank stocks.
- CAMEL is a simplified acronym that doesn’t encompass all necessary aspects or allocate appropriate weights to them. It represents the five critical factors in bank financial health:
- C for Capital – This reflects the bank’s ability to absorb losses and remain solvent. The bank’s capital adequacy is determined by both the amount and quality of available capital. Tier 1 capital, such as common equity, holds more significance than tier 2 capital, such as subordinated debt. Factors such as adhering to interest and dividend regulations, growth strategies, economic conditions, and investment focus are also analyzed.
- A for Asset Quality – The quality of a bank’s assets, such as loans and investments, plays a pivotal role. High-quality assets reduce the risk of defaults and losses.
- M for Management – The bank’s leadership and management practices are fundamental to ensure prudent risk-taking, strategic planning, and efficient operations.
- E for Earnings – A bank’s profitability and consistent earnings are indicative of its financial strength, including its ability to generate revenue and cover expenses. Analysts evaluate a bank’s capacity to generate growth-fueling returns and attain necessary capital levels. Key considerations include the bank’s growth, stability, net interest margin, and net worth.
- L for Liquidity – Liquidity signifies the bank’s ability to meet its short-term obligations promptly. Sufficient liquidity ensures the bank can manage unexpected financial challenges without resorting to distress measures. The liquidity level is significantly influenced by the ratio of cash held by banks and central bank balances to total assets. Additionally, the analysis includes evaluating a bank’s reliance on short-term and potentially unstable financial resources.
- Except for “management”, most elements can be analyzed using ratios. However, quantifying “liquidity” is challenging.
- Since CAMEL’s inception, banks have engaged in many transactions that no longer fit its categories. Some transactions are off the balance sheet or driven by asset/liability management, making “asset quality” too narrow. While often termed a model, CAMEL is essentially a checklist of critical bank attributes for financial evaluation. It can lay the foundation for systematic approaches to credit assessment.
- In the United States, the CAMEL system operates as a scoring model for regulators. Examiners assign scores from “1” (best) to “5” (worst) for each acronym letter. Composite scores combine attribute scores, and scores of 3 or higher signal concerns, prompting regulatory review.
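- To make the composite-scoring idea concrete, here is a minimal sketch in Python. The equal weighting and mechanical averaging below are illustrative assumptions only; actual examiner composites also involve supervisory judgment.

```python
# Illustrative sketch of a CAMEL-style composite score (1 = best, 5 = worst).
# Equal weights are an assumption; examiners also apply judgment.

def camel_composite(scores: dict) -> float:
    """Average the five CAMEL component scores."""
    components = ("capital", "asset_quality", "management",
                  "earnings", "liquidity")
    return sum(scores[c] for c in components) / len(components)

scores = {"capital": 2, "asset_quality": 3, "management": 2,
          "earnings": 3, "liquidity": 2}
composite = camel_composite(scores)
print(f"Composite score: {composite:.1f}")
if composite >= 3:
    print("Composite of 3 or higher: flag for regulatory review")
```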
Elements Of Credit Risk Modeling
- Credit risk modeling and management systems primarily aim to estimate the expected loss (EL) for a loan within a specified period, typically one year, aligning with standard financial reporting timeframes. The expected loss comprises three main components and can be summarized as follows:
\(EL = PD \times EAD \times LGD\)
- The three key components in the above estimation are described below (a short worked example follows the list):
- Probability of default (PD): This measures the likelihood that the borrower will fail to make scheduled loan payments within the analysis period, constituting a default. Typically, default is defined as a payment delay of at least 90 days, although more nuanced definitions may be applied, such as considering the overdue amount. PD is often calculated using analytical models, such as credit scoring or rating models, which take into account various factors related to the borrower’s status.
- Exposure at default (EAD): EAD represents the capital amount outstanding at the time of default. EAD is predominantly determined by loan characteristics rather than borrower attributes. While it is straightforward to specify EAD for certain loans with fixed payment schedules (e.g., simple loans or bonds), estimating EAD can be more complex for products like credit cards, where the outstanding amount fluctuates over time.
- Loss given default (LGD): LGD represents the loss incurred if a default occurs and is typically expressed as a percentage of EAD. LGD ranges from 0 to 100% and is often described inversely as the recovery rate (recovery rate = 100% - LGD). Various factors influence LGD, including borrower and loan characteristics, and the broader economic environment. For example, LGD for corporate loans may be influenced by factors like firm size, industry sector, and financial health. Loan-specific attributes such as collateral and seniority also impact LGD. While historical data may suffice for simple LGD definitions, the Basel II Accord promotes more sophisticated approaches like statistical LGD estimation models.
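- As a quick worked example of \(EL = PD \times EAD \times LGD\) (all figures are made up for illustration):

```python
# Expected loss for a single loan: EL = PD * EAD * LGD.
pd_ = 0.02         # 2% one-year probability of default (illustrative)
ead = 1_000_000    # exposure at default, in dollars (illustrative)
lgd = 0.45         # loss given default; recovery rate = 1 - LGD = 55%

el = pd_ * ead * lgd
print(f"Expected loss: ${el:,.0f}")  # Expected loss: $9,000
```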
Capital Adequacy Ratio
- Financial institutions are closely regulated in their measurement, management, and reporting of credit risk exposures. This regulatory oversight began with the Basel Accord in 1988, which evolved into Basel II, the current framework. Basel II mandates that financial institutions maintain sufficient capital to cover asset risk, as indicated by capital adequacy ratios (CAR). These ratios, expressed as a percentage, are calculated by dividing capital by risk-weighted assets (RWA).
\(CAR = \frac{Capital}{RWA} \geq \alpha\)
- where
- Capital includes tier 1 and tier 2 capital,
- RWA represents a weighted average of a financial institution’s assets, with weights determined by their risk levels, and
- \(\alpha\) is the minimum requirement imposed by the regulator. For example, it is set at 8% under Basel II and at 10.5% under Basel III. A quick numerical check follows.
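```python
# Capital adequacy check: CAR = Capital / RWA >= alpha (hypothetical figures).
capital = 9_500_000_000    # tier 1 + tier 2 capital
rwa = 100_000_000_000      # risk-weighted assets
alpha = 0.08               # Basel II minimum (0.105 under Basel III)

car = capital / rwa
print(f"CAR = {car:.1%} (minimum {alpha:.1%}); compliant: {car >= alpha}")
```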
- Under Basel II, financial institutions have two methods for calculating risk-weighted assets (RWA) based on their risk modeling framework.
- Standardized approach: Also known as the basic scheme, this method utilizes predetermined risk estimates set by supervisory authorities. It applies common risk weights to all cases, without considering specific loan, portfolio, or institutional characteristics. Because it relies on external assessments rather than internal models, it typically results in higher capital requirements due to the standardized nature of risk assessment.
- Internal ratings-based approach (IRB): Offering more flexibility, the IRB approach requires sophisticated modeling systems based on historical data. Financial institutions using this method employ internal credit rating/scoring systems to assess default probabilities and internal statistical models to determine loss given default (LGD). Capital requirements for unexpected losses are derived from risk weight functions, typically based on the Asymptotic Single Risk Factor (ASRF) model. This model assumes complete portfolio diversification, allowing institutions to calculate capital charges on a loan-by-loan basis and aggregate them at the portfolio level. In the context of the ASRF model, for a corporate exposure, RWA is defined as \(RWA = K \times 12.5 \times EAD\), where \(K\) denotes the capital requirement for the exposure.
- The formula for calculating \(K\) is given by: \(K = \left[ LGD \times N \left( \frac{N^{-1}(PD)}{\sqrt{1 - R}} + N^{-1}(0.999) \sqrt{\frac{R}{1 - R}} \right) - PD \times LGD \right] \times \frac{1 + (M - 2.5) \beta}{1 - 1.5 \beta}\)
- where
- \(M\) is the maturity of the loan
- \(R\) is the asset correlation parameter
- \(N\) represents the cumulative standard normal distribution function
- \(N^{-1}\) is the inverse of the cumulative standard normal distribution function
- \(\beta\) is a maturity adjustment factor, computed as \(\beta = \left[ 0.11852 - 0.05478 \times \ln(PD) \right]^2\)
- The asset correlation parameter R, set by the Basel Committee, reflects the borrower’s dependence on the economy:
- Asset correlations decrease as the probability of default (PD) increases, suggesting higher PD levels entail greater idiosyncratic risk potential.
- Asset correlations increase with firm size, showing larger firms are more impacted by the overall economy, while smaller ones are likelier to default due to idiosyncratic factors.
- Maturity effects in the model are influenced by both loan maturity and PD:
a) Long-term borrowers are riskier than short-term ones, with higher likelihood of downgrades, necessitating increased capital requirements with maturity.
b) Low PD borrowers face higher downgrade potential than high PD ones, requiring greater maturity adjustments for low PD borrowers.
- The asymptotic capital formula assumes perfect portfolio diversification, but real-world portfolios carry residual idiosyncratic risk. Neglecting this leads to underestimating capital requirements. The Basel Committee therefore recommends granularity adjustments at the portfolio level to account for concentration risk, based on portfolio diversification levels.
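- The following sketch implements the corporate IRB formulas above in Python (using SciPy for the normal distribution). The asset-correlation function is the Basel II corporate formula without the SME firm-size adjustment; all inputs are illustrative.

```python
# Sketch of the Basel II IRB capital requirement K for a corporate exposure.
from math import exp, log, sqrt
from scipy.stats import norm

def asset_correlation(pd_: float) -> float:
    """Basel II corporate asset correlation (no SME size adjustment)."""
    w = (1 - exp(-50 * pd_)) / (1 - exp(-50))
    return 0.12 * w + 0.24 * (1 - w)

def capital_requirement(pd_: float, lgd: float, m: float) -> float:
    """Capital requirement K per unit of EAD, with maturity adjustment beta."""
    r = asset_correlation(pd_)
    beta = (0.11852 - 0.05478 * log(pd_)) ** 2
    stressed_pd = norm.cdf(norm.ppf(pd_) / sqrt(1 - r)
                           + norm.ppf(0.999) * sqrt(r / (1 - r)))
    return ((lgd * stressed_pd - pd_ * lgd)
            * (1 + (m - 2.5) * beta) / (1 - 1.5 * beta))

pd_, lgd, m, ead = 0.01, 0.45, 2.5, 1_000_000
k = capital_requirement(pd_, lgd, m)
print(f"K = {k:.4f}, RWA = ${k * 12.5 * ead:,.0f}")
```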
The Basel III Accord introduced several enhancements to risk measurement, including:
- Increasing the Capital Adequacy Ratio (CAR) from the Basel II requirement of 8% to 10.5%.
- Introducing liquidity and leverage requirements.
- Placing emphasis on counterparty risk, particularly for derivatives and securitized products.
- Introducing a new risk management framework that includes stress tests to address extreme market volatility, model validation processes, and testing programs aimed at fostering realistic expectations during turbulent market conditions.
Credit Risk Assessment Approaches
- Various methods exist for evaluating the probability of default within credit risk modeling. These approaches differ primarily in the data necessary for their implementation, as well as their scope and applicability. Three main frameworks are used:
- judgmental approaches,
- empirical models, and
- financial models.
- Judgmental Approaches
- Judgmental approaches in credit risk assessment, often dubbed qualitative approaches or expert systems, are the oldest and least analytically complex methods. They rely solely on the expert judgment of credit analysts to evaluate both qualitative and quantitative characteristics of the borrower.
- The “5C analysis” method, a prominent judgmental evaluation scheme, assesses five key dimensions of a borrower’s creditworthiness:
- Character: Reflects the borrower’s personality.
- Capacity: Evaluates the borrower’s ability to repay the loan based on income, expenses, and other financial obligations.
- Capital: Considers the borrower’s own capital at risk.
- Collateral: Examines the guarantees provided for the obligation’s payment.
- Conditions: Describes the general business environment and loan characteristics, such as the interest rate.
- These dimensions undergo analysis using a wide range of qualitative and quantitative factors. For corporate loans, this may include data from financial statements, business plans, industry information, and economic outlooks. Small loans are typically reviewed by a single credit analyst or a small team, while larger loans require more extensive examination by expert analysts and credit committees.
- While judgmental systems offer a structured framework, they rely solely on expertise rather than theory or empirical data, leading to notable limitations. It’s challenging to validate results, update evaluation processes, or analyze scenarios affecting creditworthiness without clear protocols. Attempts in the 1980s and early 1990s to enhance judgmental systems with analytical capabilities focused on emerging expert systems (ESs) technology. Despite limitations, judgmental approaches persist, especially in areas lacking historical data for advanced models, like project finance or specialized sectors such as shipping. They offer deep insights into complex cases where expert credit analysts interpret unstructured information effectively, providing valuable perspectives from both business and financial standpoints.
- Data-Driven Empirical Models
- Data-driven approaches rely on models constructed using historical data on loans, including accepted, rejected, paid-as-agreed, and defaulted cases. Applicable to both corporate and consumer loans when historical data are available, these approaches utilize data from various sources such as internal bank databases, credit rating agencies, and other data providers. The data primarily include borrower characteristics, loan status (defaulted or non-defaulted), and external risk factors. Instead of relying on expertise to aggregate risk factors, empirical models use analytical techniques to identify patterns from the data, establishing explicit relationships between default likelihood and input variables or risk factors.
- Empirical approaches have roots dating back to the late 1960s with the development of statistical models for predicting bankruptcy. Since then, methodological advances in data analytics and improvements in data collection have propelled rapid evolution in this field. A wide range of statistical, machine learning, and operations research techniques are now available for data analytics, enabling the identification of complex non-linear credit default relationships efficiently, even with large-scale datasets. Additionally, advancements in academic research, regulatory frameworks, and credit risk management practices have led to the identification of new relevant risk factors beyond traditional financial data. Factors such as corporate governance issues, information from social networks, news analytics, and real-time financial market data enhance the explanatory and predictive power of empirical models in various contexts.
- Advantages of the empirical approach in credit risk assessment include:
- Transparency and consistency: Empirical models offer greater transparency and consistency across different scenarios, ensuring impartial and standardized evaluation of credit risk.
- Objectivity: Empirical models minimize subjectivity and bias so that conclusions can be supported by measurable data.
- Validation of structure and predictive power: Both the model’s structure and predictive ability can be empirically validated.
- Analytical exploration: They make it possible to conduct scenario analyses and stress tests, as well as to examine hypotheses analytically.
- Real-time data updates: Market data can be continuously updated to adapt to changing conditions in real-time.
- Versatility and scalability: They can effectively handle large datasets and diverse credit portfolios (including both consumer and corporate loans), making them adaptable across different institutions. Specialized models can be developed for different entities, sectors, and types of loans.
- Despite these advantages, the empirical method also comes with significant weaknesses:
- Reliance on historical data: Historical data might not adequately forecast future outcomes, especially during periods of heightened uncertainty and volatility.
- Inadequacy and flaws in data: The data utilized in constructing empirical credit risk models are often incomplete and imperfect.
- Static and retrospective characteristics: Data utilized in empirical models, like financial statements, are commonly perceived as retrospective and static, offering a snapshot of the present condition.
- Limited real-time updates: While market data can be updated in real-time, not all inputs, like financial statement data, may be accessible as frequently. This infrequent updating could lead to slower risk assessment compared to a judgmental approach.
- Financial Models
- In contrast to data-driven empirical approaches, financial models primarily rely on theoretical principles. Instead of focusing on descriptive and predictive analysis, financial models take a normative approach based on fundamental economic and financial principles. These models aim to explain the mechanism of the default process and are often referred to as market models because they utilize data from financial markets, particularly focusing on corporate debt.
- Two primary types of financial models for credit risk assessment:
- Structural models:
- Assume default as an internal process linked to a firm’s structural characteristics.
- Factors such as asset and debt values are central to this model.
- Reduced form models:
- View default as a random event driven by external factors.
- Often use Poisson jump processes.
- Rely extensively on market data from bonds and credit derivatives.
The Merton Model
- Option pricing theory, pioneered by Black and Scholes (1973), has found extensive application in assessing default-risky debt and equity, notably through the Merton (1974) model. In this framework, a leveraged firm with a single debt issue, no dividend payments, and perfect financial markets is considered. The debt has no coupons and matures at \(T\). Under these idealized conditions, debt holders and equity holders are the sole claimants against the firm, so the value of the firm’s assets equals the sum of the value of debt and the value of equity. At date \(T\), if the firm cannot pay the principal amount \(F\), it is bankrupt, equity has no value, and the firm belongs to the debt holders. If the firm can pay the principal at \(T\), any surplus belongs to equity holders. For example, if a firm owes $350 million at maturity \(T\) and the total value of its assets at \(T\) is only $280 million, equity holders receive nothing; but if the firm’s assets are worth $380 million at \(T\), it can pay off the full principal of $350 million and equity holders get $30 million.
- Let \(A_T\) be the value of the firm’s assets and \(E_T\) be the value of equity at date \(T\). The equity holders receive \(A_T - F\) if \(A_T - F > 0\), or else they receive zero. This structure resembles the payoff of a call option on the value of the firm’s assets. At date \(T\): \(E_T = \max(A_T - F, 0)\)
- To price equity and debt using the Black-Scholes formula, the following assumptions are made:
- The value of the firm’s assets follows a log-normal distribution with a constant volatility \(\sigma_A\).
- The interest rate \(r\) is constant.
- Trading occurs continuously.
- Financial markets are perfect.
- The current value of equity can be derived as:
\(E = A \times N(d_1) - Fe^{-rt} \times N(d_2)\) where:
\(d_1 = \frac{ \ln \left( \frac{A}{F} \right) + \left( r + \frac{\sigma_A^2}{2} \right) t }{ \sigma_A \sqrt{t} }\) and
\(d_2 = d_1 - \sigma_A \sqrt{t} = \frac{ \ln \left( \frac{A}{F} \right) + \left( r - \frac{\sigma_A^2}{2} \right) t }{ \sigma_A \sqrt{t} }\)
- \(N\) is the cumulative standard normal distribution function,
- \(F\) is the face value of the debt,
- \(A\) is the current value of the firm’s assets,
- \(r\) is the risk-free rate of return,
- \(t\) is the remaining time to maturity of the debt,
- \(\sigma_A\) is the instantaneous volatility (standard deviation) of the firm’s assets.
- The volatility of equity follows from treating equity as a function of the firm’s assets and time. It is given by \(\sigma_E = \frac{ \sigma_A \times A_t \times N(d_1) }{ E_t }\)
- Additionally, \(N(-d_2)\) represents the risk-neutral probability of default, which signifies the likelihood that shareholders will opt not to exercise the option to repay the company’s debt. This probability is determined under the assumption of asset growth at the risk-free rate. To obtain the actual probability of default (PD), the expected return on assets (\(\mu\)) should replace the risk-free rate (\(r\)), and it is given by:
\(PD_{real} = N \left( - \frac{ \ln \left( \frac{A}{F} \right) + \left( \mu - \frac{\sigma_A^2}{2} \right) t }{ \sigma_A \sqrt{t} } \right)\)
- A key idea in credit risk assessment is Distance to Default (DD), which gauges a company’s proximity to defaulting on its debt by measuring the number of standard deviations by which the value of the company’s assets exceeds the face value of its debt. A higher DD signifies a reduced likelihood of default, indicating stronger financial stability, and \(PD_{real} = N(-DD)\). DD is given by
\(DD = \frac{\ln \left( \frac{A}{F} \right) + \left( \mu – \frac{\sigma_A^2}{2} \right) t}{\sigma_A \sqrt{t}}\)
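- A compact numerical sketch of the quantities above, using the $380 million asset / $350 million debt example (the asset volatility, rates, and one-year horizon are assumed for illustration):

```python
# Merton model sketch: equity value, equity volatility, risk-neutral and
# "real-world" default probabilities, and distance to default.
from math import exp, log, sqrt
from scipy.stats import norm

A, F = 380.0, 350.0          # asset value and debt face value ($ millions)
sigma_A = 0.25               # asset volatility (assumed)
r, mu, t = 0.04, 0.08, 1.0   # risk-free rate, expected asset return, horizon

d1 = (log(A / F) + (r + sigma_A**2 / 2) * t) / (sigma_A * sqrt(t))
d2 = d1 - sigma_A * sqrt(t)

E = A * norm.cdf(d1) - F * exp(-r * t) * norm.cdf(d2)  # equity as a call
sigma_E = sigma_A * A * norm.cdf(d1) / E               # equity volatility
pd_risk_neutral = norm.cdf(-d2)                        # N(-d2)
dd = (log(A / F) + (mu - sigma_A**2 / 2) * t) / (sigma_A * sqrt(t))
pd_real = norm.cdf(-dd)                                # PD_real = N(-DD)

print(f"Equity = {E:.1f}, sigma_E = {sigma_E:.1%}")
print(f"Risk-neutral PD = {pd_risk_neutral:.1%}, DD = {dd:.2f}, "
      f"real-world PD = {pd_real:.1%}")
```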
Limitations of the Merton Model
- Assumption Simplification: The model relies on simplifying assumptions such as constant volatility and risk-free interest rates, potentially oversimplifying actual market conditions and impacting its accuracy.
- Limited Applicability: The model’s applicability is limited to publicly traded and financially liquid companies. Its application to unlisted firms presents challenges due to:
- Lack of Observable Prices: Unlisted companies lack observable prices, making it difficult to apply the model accurately. Proxies or comparables may yield unreliable results due to the model’s sensitivity to key parameters.
- Feasibility Issues with Comparables: Medium-sized enterprises, in particular, may pose challenges for using comparable prices, rendering this method infeasible.
- Reliance on Historical Data: The model’s effectiveness hinges on historical market data, which may become less predictive of future trends due to evolving market conditions and financial regulations.
- Recalibration Costs: Frequent recalibration of the model is necessary, incurring substantial costs.
- Stability Compared to Ratings: Merton’s approaches exhibit higher instability compared to credit rating agencies’ ratings due to continuous market fluctuations. Long-term institutional investors may be hesitant to adopt this approach, preferring less frequent changes in asset allocation.
- Credit Risk Aggregation Challenges: The model faces difficulties in aggregating and comparing credit risks across different business lines or financial institutions due to its focus on market-based risk factors.
The CreditMetrics Model
- J.P. Morgan’s CreditMetrics method assesses a company’s probability of default by comparing it to similar companies with a history of debt default. It utilizes a transition table to assess potential changes in default probability over time, making it a mark-to-market rather than a default mode model. Due to limitations in measuring certain risk analysis variables, transition probabilities are employed to derive the probability of default.
- The primary data used in this approach are:
- Credit ratings of bond issuers.
- Credit rating transition matrix indicating rating changes.
- Recovery rates for defaulted loans.
- Yield margins in bond markets.
- The model initially incorporates all credit ratings used by the institution and the probabilities of transitioning between categories over a specified period. Obligors within a credit grade are assumed to be homogeneous, a point of contention for critics. Determining the time horizon is crucial; it is typically set at one year but can extend up to 10 years. Recovery rates are calculated from projected returns of initial funds at fixed rates, raising concerns about the model’s validity amid fluctuating interest rates over prolonged periods.
- The model generates a distribution depicting changes in the value of a loan over time. Specifically, the value of a loan issuance, such as a bond, considering factors like maturity, interest rate, and projected rating for the next year, is determined as
\(PV_{it} = \frac{CF_{it}}{(1 + r_t + s_{it})^t}\) where
- \(PV_{it}\) is the value of a bond in credit grade \(i\) (i.e., the credit rating during the next period),
- \(CF_{it}\) the coupon payment at time \(t\),
- \(r_t\) the risk-free interest rate in period \(t\),
- \(s_{it}\) the annual risk premium for a loan in credit grade \(i\) at time \(t\). A short revaluation sketch follows.
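- The sketch below illustrates the revaluation step for a single bond currently rated A: it discounts the remaining cash flows at \(r_t + s_{it}\) for each possible year-end grade and weights the results by transition probabilities. The ratings, spreads, transition probabilities, and recovery value are made-up inputs, and a flat risk-free rate is assumed.

```python
# CreditMetrics-style revaluation of a bond under each year-end rating.
transition_probs = {"A": 0.90, "BBB": 0.07, "BB": 0.02, "D": 0.01}  # from A
spreads = {"A": 0.010, "BBB": 0.018, "BB": 0.035}  # annual risk premia s_i
r = 0.04                                  # flat risk-free rate (assumed)
coupon, face, maturity = 60.0, 1000.0, 3  # remaining cash flows of the bond
recovery_value = 450.0                    # assumed value in default

def bond_value(spread: float) -> float:
    """Present value of remaining cash flows discounted at r + spread."""
    pv = sum(coupon / (1 + r + spread) ** t for t in range(1, maturity + 1))
    return pv + face / (1 + r + spread) ** maturity

values = {grade: bond_value(s) for grade, s in spreads.items()}
values["D"] = recovery_value
expected_value = sum(transition_probs[g] * v for g, v in values.items())

print({g: round(v, 2) for g, v in values.items()})
print(f"Expected year-end value: {expected_value:.2f}")
```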
- Using the \(CreditMetrics^{TM}\) model, market values for non-marketable loans or bonds can be estimated, allowing for risk assessment of individual loans and loan portfolios. Different capital requirements may arise from the distinct risks associated with each loan.
- \(CreditMetrics^{TM}\) provides a framework for analyzing risk in a variety of financial instruments, such as conventional loans, commitments, fixed income securities, commercial contracts, and market-driven products like derivatives. This approach improves transparency in credit risk management and regulatory scrutiny while facilitating a methodical understanding of risks. It is important to understand that CreditMetrics, rather than being a rating tool or a risk pricing model, offers portfolio risk measurements that take asset correlations into account, which enhances the management of credit risk.
- The CreditMetrics methodology assesses credit risk at both individual and portfolio levels through three stages.
- Reporting Profiles: \(CreditMetrics^{TM}\) consolidates reports from diverse financial instruments, including loans, bonds, and market-based tools, ensuring consistency in exposure estimation.
- Volatility from Upgrades, Downgrades, or Defaults: This step involves estimating the probability and impact of credit rating changes on asset values, considering the risk weighting of each outcome.
- Correlations: Individual value distributions are aggregated to determine portfolio volatility, necessitating correlation estimates for credit quality changes. Various approaches, including fixed correlation coupled with the \(CreditMetrics^{TM}\) model, can be utilized to assess these correlations.
The CreditRisk+ Model
- The CreditRisk+ model was developed by Credit Suisse Financial Products in 1997.
- Unlike the KMV model, CreditRisk+ does not rely on a company’s capital structure or credit rating to assess the probability of default.
- Also, in contrast to the CreditMetrics model, which aims to evaluate debt securities and potential losses across rating migrations, CreditRisk+ reduces credit events to only two states: default and non-default. Additionally, CreditRisk+ treats the default rate as a continuous random variable, with losses occurring solely in the event of default.
- In the CreditRisk+ model, default probabilities are assumed to be low and independent of other credit events. This model utilizes the Poisson distribution, where the average default rate equals its variance. Unlike CreditMetrics, CreditRisk+ does not compute credit ratings but solely focuses on default events. This simplicity requires minimal data input, making it user-friendly.
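- A minimal sketch of the Poisson idea: with low, independent default probabilities, the number of portfolio defaults is approximately Poisson with mean equal to the sum of the individual PDs. The portfolio and the uniform loss severity below are illustrative simplifications (the full model also bands exposures by size).

```python
# CreditRisk+-style default-count distribution via the Poisson approximation.
from scipy.stats import poisson

pds = [0.01] * 100 + [0.02] * 50   # individual default probabilities
mu = sum(pds)                      # expected number of defaults (= variance)
loss_per_default = 400_000         # assumed uniform severity, in dollars

for k in (0, 2, 5):
    print(f"P(exactly {k} defaults) = {poisson.pmf(k, mu):.4f}")

k_tail = poisson.ppf(0.999, mu)    # 99.9th percentile of the default count
print(f"99.9% default count: {k_tail:.0f} "
      f"(loss about ${k_tail * loss_per_default:,.0f})")
```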
Moody’s KMV Model
- KMV employs the “Expected Default Frequency” (EDF) derived from an extension of the Merton Model to calculate default probabilities for each obligor. The EDF model generates an empirical distribution of default frequencies based on historical data, rather than the normal distribution approach used in the Merton Model. This offers a more robust mapping of Distance to Default (DD) to a probability of default scale. Additionally, it defines the default point as the sum of short-term debt and half of long-term debt, providing a closer approximation to a firm’s actual loan obligations.
- Advantages of KMV include:
- Current equity value determines probabilities of default (PD), ensuring immediate reflection of any firm value changes.
- PD changes continuously rather than waiting for rating adjustments, offering real-time risk assessment.
- Equity value increase reduces default probability, unlike CreditMetrics where firm value fluctuations may not affect default probability due to static ratings.
- KMV adopts a CAPM-inspired approach for expected firm value growth and simplifies correlation structure using a factor model, allowing for an analytical solution for loss distribution and eliminating the need for simulation to compute Credit VaR.
Risk-Adjusted Return On Capital (RAROC)
- Many credit risk assessment methodologies, like scoring and rating models, are validated through statistical performance measures. However, their outcomes for daily decision-making are evaluated in financial terms. Understanding these financial measures is crucial for assessing loan performance and making acceptance/rejection decisions based on risk-adjusted return.
- The primary financial performance measure used is risk-adjusted return on capital (RAROC). It has gained significant attention in the financial industry because efficient risk management is vital for institutional profitability. RAROC evaluates the performance of financial institutions, product lines, or loans by comparing financial outcomes (income, profits) to the economic capital required to achieve them. The precise definition varies with how these components are specified.
- In a basic credit risk modeling scenario, the financial outcomes of a loan are its revenues (net of expenses and expected losses), while economic capital represents the capital required to cover risk exposure in case of default. RAROC, in this context, is defined as
\(RAROC = \frac{\text{Loan Revenues}}{\text{Capital at Risk}}\)
- When loan performance is evaluated using RAROC, a straightforward rule of thumb is followed: a loan is considered profitable if its RAROC is higher than the bank’s cost of capital.
- The following elements are included in the RAROC equation’s numerator:
- Loan amount (\(L\)).
- The spread between the loan rate and the bank’s cost of capital (\(s\)).
- Fees linked to the loan (\(f\)).
- Expected loan losses (\(l\)).
- Operating costs (\(c\)).
- Taxation (\(x\)).
If these components are all expressed as a percentage of the loan amount, then the anticipated revenues from a loan can be defined as:
\(\text{Loan revenues} = (s + f - l - c)(1 - x) \times L\)
- One method for computing the RAROC equation’s denominator (capital at risk) is to estimate how much the loan’s value would change due to a change in interest rates over a given time period, usually a year. The duration of the loan measures its sensitivity to interest rate changes. More precisely, the duration approximation for the change in loan value (\(\Delta L\)) owing to an interest rate change (\(\Delta i\)), for a loan valued at \(L\) with duration \(D\) and interest rate \(i\), is:
\(\Delta L = -LD \left( \frac{\Delta i}{1 + i} \right)\)
- As an illustration, let’s consider MF Bank’s loan valued at $2,000,000 with the following parameters.
- The spread between the loan rate and the bank’s cost of capital (\(s\)) = 0.4%.
- Fees linked to the loan (\(f\)) = 0.15%.
- Expected loan losses (\(l\)) = 0.
- Operating costs (\(c\)) = 0.3%.
- Taxation (\(x\)) = 20%.
\(\text{Loan revenues} = (s + f - l - c)(1 - x) \times L\)
\(= (0.004 + 0.0015 - 0 - 0.003)(1 - 0.2) \times 2,000,000\)
\(= 4,000\)
Moreover, assume that the loan’s duration is 4 years and the current interest rate is 5.6%. With an expected increase in interest rates of 0.5%, the capital at risk can be estimated using the change in loan value as an approximation:
\(\Delta L = -LD \left( \frac{\Delta i}{1 + i} \right) = -\$2,000,000 \times 4 \times \frac{0.005}{1.056} = -\$37,878.79\), so the capital at risk is $37,878.79 in absolute value.
Hence \(RAROC = \frac{\text{Loan Revenues}}{\text{Capital at Risk}} = \frac{4,000}{37,878.79} = 0.1056\), or 10.56%.
So, this loan remains profitable for the bank as long as its cost of capital is below 10.56%.
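- The MF Bank example can be reproduced in a few lines (figures as above):

```python
# RAROC for the MF Bank loan example.
L = 2_000_000                                    # loan amount
s, f, l, c, x = 0.004, 0.0015, 0.0, 0.003, 0.20  # spread, fees, losses,
                                                 # operating costs, tax rate
revenues = (s + f - l - c) * (1 - x) * L         # -> 4,000

D, i, delta_i = 4, 0.056, 0.005                  # duration, rate, rate shock
capital_at_risk = L * D * delta_i / (1 + i)      # -> 37,878.79 (abs. value)

raroc = revenues / capital_at_risk
print(f"Revenues ${revenues:,.0f}, capital at risk ${capital_at_risk:,.2f}, "
      f"RAROC = {raroc:.2%}")                    # RAROC = 10.56%
```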
- The above method is commonly known as a market-based approach. An alternative method for assessing the capital at risk relies on historical data rather than market-based indicators like anticipated interest rate fluctuations. This method incorporates the unexpected loan loss into the denominator of the RAROC equation, formulated as follows:
Unexpected loan loss = \(\alpha \times LGD \times EAD\)
- Here, \(\alpha\) stands as a risk factor symbolizing the unexpected default rate, which can be determined from the historical distribution of default rates for loans resembling the one in focus. For example, assuming a normal distribution of default rates, a value of \(\alpha = 2.576\sigma\) can be set at a 99.5% confidence level (based on the z-score). However, loan loss distributions typically exhibit skewness, indicating non-normality. Hence, the standard deviation coefficient in this context is frequently set at a higher level to account for this skewness.
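- A brief sketch of this historical approach, with a made-up sample of past default rates:

```python
# Unexpected loan loss = alpha * LGD * EAD, with alpha from historical data.
import statistics

default_rates = [0.008, 0.012, 0.010, 0.025, 0.015, 0.009, 0.031, 0.011]
sigma = statistics.stdev(default_rates)  # sample std. dev. of default rates

alpha = 2.576 * sigma   # 99.5% level if rates were normal; skewed loss
                        # distributions argue for a larger multiplier
lgd, ead = 0.45, 1_000_000
unexpected_loss = alpha * lgd * ead
print(f"sigma = {sigma:.4f}, alpha = {alpha:.4f}, "
      f"unexpected loss = ${unexpected_loss:,.0f}")
```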