
Basel Committee on Banking Supervision

Learning Objectives

  • Explain the following lessons on VaR implementation: time horizon over which VaR is estimated, the recognition of time-varying volatility in VaR risk factors, and VaR backtesting.
  • Describe exogenous and endogenous liquidity risk and explain how they might be integrated into VaR models.
  • Compare VaR, expected shortfall, and other relevant risk measures.
  • Compare unified and compartmentalized risk measurement.
  • Compare the results of research on top-down and bottom-up risk aggregation methods.
  • Describe the relationship between leverage, market value of assets, and VaR within an active balance sheet management framework.

Introduction

  • This chapter is based on the findings of a working group (the “group”) that surveyed the academic literature relevant to a fundamental review of the regulatory framework of the trading book. It reflects the views of individual contributing authors and should not be construed as representing specific recommendations or guidance by the Basel Committee for national supervisors or financial institutions. The report builds on and extends previous work by the Research Task Force on the interaction of market and credit risk (see Basel Committee on Banking Supervision (2009a)).

Selected Lessons On VaR Implementation – Overview

  • The three categories of VaR implementation issues reviewed are:
    1. Time horizon over which VaR is estimated – The appropriate VaR horizon varies across positions and depends on the position’s nature and liquidity. For regulatory capital purposes, the horizon should be long, and yet the common square-root of time scaling approach for short horizon VaR (e.g., one-day VaR) may generate biased long horizon VaR (e.g., ten-day VaR) estimates.
    2. The recognition of time-varying volatility in VaR risk factors – While many trading book risk factors exhibit time-varying volatility, there are some concerns that regulatory VaR may suffer from instability and pro-cyclicality if VaR models incorporate time-varying volatility. Several approaches to incorporate time-varying volatility in VaR have been discussed.
    3. VaR backtesting – The literature on VaR backtesting has been surveyed and several regulatory issues have been discussed, including whether VaR should be backtested using actual or hypothetical P&L, and whether the banks’ common practice of backtesting one-day VaR provides sufficient support for their ten-day regulatory VaR.

Time Horizon For Regulatory VaR

  • One of the fundamental issues in using VaR for regulatory capital is the horizon over which VaR is calculated. The 1998 Market Risk Amendment (MRA) sets this horizon to be ten days, and it allows ten-day VaR to be estimated using square-root of time scaling of one-day VaR.
  • This approach raises three questions:
    1. Is ten days an appropriate horizon?
    2. Does VaR estimation based on time scaling of daily VaRs produce accurate risk measures?
    3. What role do intra-horizon risks (i.e., P&L fluctuations within ten days) play, and should such risks be taken into account in the capital framework?
  • The computation of VaR over longer horizons introduces the issue of how to account for time variation in the composition of the portfolios, especially for currencies.
    1. A common solution is to sidestep the problem of changes to portfolio composition by calculating VaR at short horizons and scaling up the results to the desired time period using the square root of time. While simple to implement, this choice may compromise the accuracy of VaR because tail risk is likely to be underestimated.
    2. A second way to tackle the problem is to focus directly on calculating the portfolio VaR over the relevant horizon of interest. These approaches may have limited value if the composition of the portfolio changes rapidly. Furthermore, data limitations make it challenging to study the P&L of newly traded assets.
    3. A third solution is to extend VaR models by incorporating a prediction of future trading activity. According to Diebold, “To understand the risk over a longer horizon, we need not only robust statistical models for the underlying market price volatility, but also robust behavioural models for changes in trading positions.”

Is Ten Days An Appropriate Horizon?

  • There seems to be consensus among academics and the industry that the appropriate horizon for VaR should depend on the characteristics of the position. Many of them assert that the relevant horizon will likely depend on where the portfolio lies in the firm (e.g., trading desk vs. CFO) and on the asset class (e.g., equity vs. fixed income), and that the appropriate horizon should be assessed on an application-by-application basis. It has been argued that, if the purpose of VaR is to protect against losses during a liquidity crisis, the ten-day horizon at 99% refers to an event that happens roughly 25 times a decade, while a liquidity crisis is “unlikely to happen even once a decade. Hence the probability and problem are mismatched.” From this perspective, it appears that an across-the-board application of a ten-day VaR horizon is not optimal.
  • In addition, even for the same financial product, the appropriate horizon may not be constant, because trade execution strategies depend on time-varying parameters, like transaction costs, expected price volatility, and risk aversion.

Square-Root Of Time Rule

  • Under a set of restrictive assumptions on risk factors, long horizon VaR can be calculated as short horizon VaR scaled by the square root of time, if the object of interest is unconditional VaR. Unfortunately, the assumptions that justify square root of time scaling are rarely verified for financial risk factors, especially at high frequencies. Furthermore, risk management and capital computation are more often interested in assessing potential losses conditional on current information, and scaling today’s VaR by the square root of time ignores time variation in the distribution of losses. There was no evidence in support of square-root of time scaling for conditional VaRs.
  • The accuracy of square-root of time scaling depends on the statistical properties of the data generating process of the risk factors. If risk factors follow a GARCH(1,1) process, scaling by the square-root of time over-estimates VaR. In contrast to the results that assume that risk factors exhibit time-varying volatility, when the underlying risk factor follows a jump diffusion process, scaling by the square root of time systematically under-estimates risk and the downward bias tends to increase with the time horizon. While these results argue against square-root of time scaling, there are no immediate alternatives to this rule. Therefore, the practical usefulness of square-root of time scaling should be recognised.
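
    As a simple illustration of the rule discussed above, the sketch below scales a hypothetical one-day VaR to a ten-day horizon by the square root of time; the comments flag the restrictive assumptions under which this scaling is valid.

```python
import math

# Illustrative only: scale a one-day VaR to a ten-day horizon with the
# square-root-of-time rule. The rule is exact only under restrictive
# assumptions (i.i.d. returns, zero mean, no time-varying volatility).
one_day_var = 1_000_000      # hypothetical 99% one-day VaR in currency units
horizon_days = 10

ten_day_var_scaled = one_day_var * math.sqrt(horizon_days)
print(f"Scaled 10-day VaR: {ten_day_var_scaled:,.0f}")
# Under GARCH-type volatility or jumps in the risk factors, this scaled
# figure can be biased upward or downward relative to a directly
# estimated 10-day VaR, as discussed above.
```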

Intra-Horizon Risk

  • Intra-horizon VaR (VaR-I) is a risk measure that combines VaR over the regulatory horizon with P&L fluctuations over the short term, with a particular focus on models that incorporate jumps in the price process. The rationale behind intra-horizon VaR is that the maximum cumulative loss, as distinct from the end-of-period P&L, exerts a distinct effect on the capital of a financial institution. It was suggested that VaR-I “can be important when traders operate under mark-to-market constraints and, hence, sudden losses may trigger margin calls and otherwise adversely affect the trading positions.”
  • Even though daily VaR does carry information on high-frequency P&L, knowledge of the VaR on a daily basis does not reveal the extent to which losses may accumulate over time.
  • It was found that taking intra-horizon risk into account generates risk measures consistently higher than standard VaR, up to multiples of VaR, and the divergence is larger for derivative exposures.

Time-Varying Volatility And Correlations In VaR

  • Certain asset classes, such as equities and interest rates, exhibit time-varying volatility. Accounting for time-varying volatility in VaR models has been one of the most actively studied VaR implementation issues.  Many firms advocate the use of fast reacting measures of risk such as exponential time-weighted measures of volatility. The reason given is that such VaR models provide early warnings of changing market conditions and may perform better in backtesting.
  • In contrast, some have argued that, depending on the purpose of VaR, capturing time-varying volatility in VaR may not be necessary, or may even be inappropriate. It was observed that volatility forecastability decays quickly with time horizon for most equity, fixed income and foreign exchange assets. The implication is that capturing time-varying volatility may not be as important when the VaR horizon is long, compared to when the VaR horizon is relatively short. There are also concerns about pro-cyclicality and instability implications associated with regulatory VaRs that capture time-varying volatility.
  • In summary, incorporating time-varying volatility in VaR appears to be necessary given that it is prevalent in many financial risk factors. However, using VaR with time-varying volatility for regulatory capital raises concerns of volatile and potentially pro-cyclical regulatory standards.

Methods To Incorporate Time-Varying Volatility In VaR

  • Beginning with J.P. Morgan (1996), the Exponentially Weighted Moving Average (EWMA) approach has been regarded as one of the industry standards for incorporating time-varying volatility in VaR. EWMA is a constrained version of an IGARCH(1,1) model, and in the case of RiskMetrics the parameter in IGARCH was set to 0.97. An alternative and simpler approach is to assign an observation from i days ago a weight that declines geometrically in i, so that more recent observations receive greater weight (a sketch of the EWMA recursion follows).
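
    A minimal sketch of the EWMA volatility recursion and its use in a one-day VaR, assuming normally distributed returns; the portfolio value, decay parameter and simulated data are illustrative only.

```python
import numpy as np

def ewma_volatility(returns, lam=0.97):
    """Exponentially weighted moving average volatility (RiskMetrics-style).

    `lam` is the decay parameter; 0.97 mirrors the figure quoted in the text,
    but it is only an illustrative choice here.
    """
    var = returns[0] ** 2                 # seed the recursion
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return np.sqrt(var)

# Hypothetical daily returns
rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=500)

sigma = ewma_volatility(returns)
z_99 = 2.326                              # 99% normal quantile
portfolio_value = 1_000_000
var_99 = z_99 * sigma * portfolio_value   # one-day 99% VaR under normality
print(f"EWMA sigma = {sigma:.4%}, one-day 99% VaR = {var_99:,.0f}")
```
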
  • Overall, incorporating time-varying volatility in VaR measures is not straightforward when there are many risk factors, because time-varying correlations should also be taken into account. Rather than using more involved methods, the industry appears to be taking less burdensome alternatives, such as using simple weighting of observations, or shortening the data window used to estimate VaR. These approaches compromise on accuracy, but are computationally attractive for large and complex portfolios. The recent academic literature offers promise that some of the sophisticated empirical methodologies may soon become practical for large complex portfolios.

Backtesting VaR Models

  • Backtesting has been the industry standard for validating VaR models. Banks typically draw inference on the performance of VaR models using backtesting exceptions (sometimes also known as backtesting “breaches” or “violations”). For regulatory capital, the MRA imposes a multiplier on VaR depending on the number of backtesting exceptions the bank experiences.
  • While the MRA does not require banks to statistically test whether VaR has the correct number of exceptions, formal statistical inference is always desirable and many alternatives have been proposed in the literature. Kupiec (1995) introduced the unconditional coverage likelihood ratio tests as inference tools for whether the VaR model generated the correct number of exceptions. This methodology is simple to implement, but has two drawbacks.
    1. First, as pointed out by Kupiec, when the number of trading days used in VaR evaluation is limited (e.g., one year or approximately 250 trading days), or when the confidence level is high (e.g., 99% as in regulatory VaR), such tests have low power.
    2. Second, given that this test only counts exceptions, its power may be improved by considering other aspects of the data such as the grouping of exceptions in time.
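
    A minimal sketch of Kupiec’s unconditional coverage test described above; the exception count and sample size are hypothetical.

```python
import math

def kupiec_lr(exceptions, days, p=0.01):
    """Kupiec's unconditional coverage likelihood-ratio statistic.

    exceptions: number of days the loss exceeded VaR
    days:       number of backtesting days
    p:          VaR tail probability (1% for a 99% VaR)
    """
    phat = exceptions / days
    # log-likelihood under the null (exception probability = p)
    ll_null = (days - exceptions) * math.log(1 - p) + exceptions * math.log(p)
    # log-likelihood under the observed exception frequency
    if exceptions in (0, days):
        ll_alt = 0.0
    else:
        ll_alt = ((days - exceptions) * math.log(1 - phat)
                  + exceptions * math.log(phat))
    return -2 * (ll_null - ll_alt)

# Hypothetical example: 6 exceptions in 250 trading days for a 99% VaR
lr = kupiec_lr(exceptions=6, days=250)
print(f"LR statistic = {lr:.2f}; reject at 5% if it exceeds 3.84 (chi-square, 1 df)")
```
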
  • Christoffersen (1998) has proposed a conditional backtesting exception test that accounts for the timing as well as the number of exceptions. Aside from backtesting based on the number of exceptions, a natural measure of VaR performance is the magnitude of the exceptions.
  • An important and yet ambiguous issue for backtesting is which P&L series to compare to VaR. Broadly speaking, the estimated VaR can be compared to either actual P&L (i.e., the actual portfolio P&L at the VaR horizon), or hypothetical P&L (i.e., P&L constructed based on the portfolio for which VaR was estimated). To complicate matters further, actual P&L may sometimes contain commissions and fees, which are not directly related to trading and trading risk.
  • Another issue is the appropriate backtesting horizon. Banks typically backtest one-day-ahead VaR and use it as a validation of the regulatory VaR, which has a ten-day horizon. The problem here is clear: a good one-day VaR (as validated by backtesting) does not necessarily imply a good ten-day VaR, and vice versa. Ten-day backtesting may not be ideal either, given the potentially large portfolio shifts that may take place within ten days.

Incorporating Liquidity – Exogenous Liquidity

  • To incorporate market liquidity into a VaR model, first of all, a distinction between exogenous and endogenous liquidity needs to be made. This distinction is made from the point of view of the bank, rather than in general equilibrium terms.
    1. Exogenous liquidity refers to the transaction cost for trades of average size. Below a certain size, transactions may be traded at the bid/ask prices quoted in the market. The exogenous component of liquidity risk corresponds to the average transaction costs set by the market for standard transaction sizes. Exogenous liquidity risk, corresponding to the normal variation of bid/ask spreads across instruments, can, from a theoretical point of view, easily be integrated into a VaR framework. It can be captured by a “liquidity-adjusted VaR” (LVaR) approach. One method to account for exogenous liquidity is to add the bid/offer spread to characterise exogenous liquidity as a risk factor. The method posits that the relative spread, S = (Ask - Bid)/MidPrice, has sample mean μ̂ and variance σ̂². If the 99% quantile of the normalised distribution of S is q̂0.99, then the Cost of Liquidity is defined as COL = 0.5 × MidPrice × (μ̂ + q̂0.99 · σ̂), and the liquidity-adjusted VaR is LVaR = VaR + COL (a numerical sketch follows this list).
    2. Endogenous liquidity is related to the cost of unwinding portfolios large enough that the bid-ask spread cannot be taken as given, but is affected by the trades themselves. Above a certain size, the transaction will be done at a price below the initial bid or above the initial ask, depending on the sign of the trade (endogenous liquidity). The endogenous component corresponds to the impact on prices of the liquidation of a position in a relatively tight market, or more generally when all market participants react in the same way, and therefore applies to orders that are large enough to move market prices. Endogenous risk, corresponding to the impact on market prices of the liquidation of a position, or of collective portfolio adjustments, is more difficult to include in a VaR computation. Its impact, however, may be very significant, especially for many complex derivatives held in trading books of large institutions. The effects are important when
    1. the underlying asset is not very liquid,
    2. the size of the positions of the investors hedging an option is important with respect to the market,
    3. large numbers of small investors follow the same hedging strategy,
    4. the market for the underlying of the derivative is subject to asymmetric information, which magnifies the sensitivity of prices to clusters of similar trades.
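
    The sketch below illustrates the exogenous cost-of-liquidity adjustment from item 1 above: half of a worst-case relative spread applied to the position’s mid-market value, added to the market-risk VaR. All inputs are hypothetical.

```python
import numpy as np

# Illustrative liquidity-adjusted VaR (LVaR) with an exogenous liquidity
# add-on, in the spirit of the relative-spread approach described above.
mid_price      = 100.0
position_size  = 10_000                   # number of units held (hypothetical)
var_99         = 250_000                  # 99% market-risk VaR of the position

spreads = np.array([0.0010, 0.0015, 0.0012, 0.0030, 0.0020, 0.0045,
                    0.0011, 0.0016, 0.0025, 0.0040])  # observed relative spreads S
mu_hat    = spreads.mean()
sigma_hat = spreads.std(ddof=1)
q_99      = 2.326          # 99% quantile of the normalised spread (normal proxy)

# Cost of liquidity: half the worst-case spread applied to the position value
col  = 0.5 * mid_price * position_size * (mu_hat + q_99 * sigma_hat)
lvar = var_99 + col
print(f"Cost of liquidity = {col:,.0f}, liquidity-adjusted VaR = {lvar:,.0f}")
```
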
  • Several authors define an optimal liquidation strategy in a finite (or infinite time horizon) model and deduce from this strategy the market value of the portfolio which is equal to the expectation of its liquidation price. The associated VaR measure, defined as a confidence interval around this expected price, implicitly incorporates market and liquidity risks. Some studies suggest that endogenous liquidity costs should be added to position returns before carrying out VaR calculations.
  • The liquidity risk adjustments proposed in the academic literature have, for the most part, not been applied to the trading books of banks. One reason for this may be that the suggested valuation methods are not necessarily compliant with actual accounting standards. Another reason academic proposals have been slow to be adopted may be the difficulty of estimating model liquidity parameters, especially for OTC products. Indeed, the necessary data are not always available, and some of these parameters may be subjective. However, recent discussions in academic circles regarding OTC transaction reporting could help solve this problem.
  • The recent financial crisis has provided examples where a change in market liquidity conditions alters the liquidity horizon, i.e., the time required to unwind a position without unduly affecting the underlying instrument prices (including in a stressed market). It has been suggested that the application of a unique horizon to all positions by ignoring their size and level of liquidity is undesirable, and the temporal horizon should be determined by the size of the position and the liquidity of the market.
  • On the one hand, the exposures of banks to market risk and credit risk may vary with a risk horizon that is set dependent on market liquidity. If liquidity decreases, for example, the risk horizon lengthens and the exposure to credit risk typically increases. On the other hand, liquidity conditions are also affected by perceptions of market and credit risk. A higher estimate of credit risk, for example, may adversely affect the willingness to trade and thereby market liquidity.
  • Liquidation horizons vary over the business cycle, increasing during times of market stress. Besides transaction costs or the size of the position relative to the market, a trade execution strategy also depends on factors like expected price volatility and risk aversion.

Incorporating Liquidity

  • The academic literature suggests, as a first step, adjusting valuation methods to take endogenous liquidity risk into account; a VaR integrating liquidity risk could then be computed. Notwithstanding academic findings on this topic, in practice, the ability to model exogenous and endogenous liquidity may be constrained by limited data availability, especially for OTC instruments.

Risk Measures – VaR

  • VaR has become a standard measure used in financial risk management due to its conceptual simplicity, computational facility, and ready applicability. Given some random loss L and a confidence level α, VaRα(L) is defined as the quantile of L at the probability α. In other words, it is the loss level that will not be exceeded with probability α.
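
    As a minimal illustration of VaR as a quantile of the loss distribution, the sketch below estimates a 99% VaR from simulated losses (all figures are hypothetical).

```python
import numpy as np

# A minimal sketch: VaR as a quantile of a (simulated) loss distribution.
rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=100_000) * 10_000   # hypothetical losses

alpha = 0.99
var_alpha = np.quantile(losses, alpha)   # 99% VaR: exceeded only 1% of the time
print(f"99% VaR = {var_alpha:,.0f}")
```
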
  • Despite its prevalence in risk management and regulation, VaR has several conceptual problems. VaR measures only quantiles of losses, and thus disregards any loss beyond the VaR level. As a consequence, a risk manager who strictly relies on VaR as the only risk measure may be tempted to avoid losses within the confidence level while increasing losses beyond the VaR level. This incentive sharply contrasts with the interests of regulators since losses beyond the VaR level are associated with cases where regulators or deposit insurers have to step in and bear some of the bank’s losses. Hence, VaR provides the risk manager with incentives to neglect the severity of those losses that regulators are most interested in.
  • Neglecting the severity of losses in the tail of the distribution also has a positive flipside: it makes backtesting easier, or possible in the first place, simply because empirical quantiles are inherently robust to extreme outliers, unlike typical estimators of the expected shortfall.

  • A risk measure R is called coherent if it satisfies the following axioms.
    1. Subadditivity (diversification): R(L1 + L2) ≤ R(L1) + R(L2)
    2. Positive homogeneity (scaling): R(λL) = λR(L), for every λ > 0
    3. Monotonicity: R(L1) ≤ R(L2) if L1 ≤ L2
    4. Translation invariance: R(L + a) = R(L) + a, for every constant a

    VaR is subadditive if the joint distribution of risk factors is elliptical (e.g., multivariate normal). Otherwise it may violate the subadditivity criterion. Hence VaR is not coherent in general.
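
    The following stylised example (hypothetical numbers, not from the text) shows how VaR can violate subadditivity for non-elliptical losses: two independent loans each have a standalone 95% VaR of zero, yet the 95% VaR of the combined portfolio is strictly positive.

```python
import numpy as np

# Two independent loans: each loses 100 with probability 4%, else 0.
rng = np.random.default_rng(42)
n = 1_000_000
loss1 = np.where(rng.random(n) < 0.04, 100.0, 0.0)
loss2 = np.where(rng.random(n) < 0.04, 100.0, 0.0)

def var(losses, alpha=0.95):
    return np.quantile(losses, alpha)

print(var(loss1), var(loss2))          # each standalone 95% VaR is 0
print(var(loss1 + loss2))              # 95% VaR of the combined portfolio is 100
# VaR(L1 + L2) = 100 > VaR(L1) + VaR(L2) = 0, so subadditivity fails here.
```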

    According to McNeil et al. (2005), subadditivity is important because:

    1. Subadditivity reflects the idea that risk can be reduced by diversification, … the use of non-subadditive risk measures in a Markowitz-type portfolio optimisation problem may lead to optimal portfolios that are very concentrated and that would be deemed quite risky by normal economic standards.
    2. If a regulator uses a non-subadditive risk measure in determining the regulatory capital for a financial institution, that institution has an incentive to legally break up into various subsidiaries in order to reduce its regulatory capital requirements ….
    3. Subadditivity makes decentralisation of risk-management systems possible.

Expected Shortfall

  • Expected shortfall (ES) is the best-known risk measure after VaR. ES corrects three shortcomings of VaR.
    1. First, ES does account for the severity of losses beyond the confidence threshold. This property is especially important for regulators, who are, as discussed above, concerned about exactly these losses.
    2. Second, it is always subadditive and coherent.
    3. Third, it mitigates the impact that the particular choice of a single confidence level may have on risk management decisions, given that there is seldom an objective reason for this choice.
  • If the loss distribution is continuous, then the representation of ES at a given level α is given by ESα(L) = E[L | L ≥ VaRα(L)], i.e., ES is then the expected loss conditional on this loss belonging to the 100(1 − α) percent worst losses.
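
    A minimal sketch of this representation, estimating ES as the average of the worst 1% of simulated losses (hypothetical figures).

```python
import numpy as np

# Expected shortfall as the average of the worst 100(1 - alpha)% of losses
# (continuous-distribution case).
rng = np.random.default_rng(2)
losses = rng.standard_t(df=4, size=100_000) * 10_000   # hypothetical losses

alpha = 0.99
var_alpha = np.quantile(losses, alpha)
es_alpha = losses[losses >= var_alpha].mean()   # E[L | L >= VaR_alpha(L)]
print(f"VaR = {var_alpha:,.0f}, ES = {es_alpha:,.0f}")  # ES exceeds VaR
```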

  • Intuitively, backtesting ES is more complicated and/or less powerful than backtesting VaR.

Spectral Risk Measures

  • Spectral risk measures (SRMs) are a promising generalisation of ES. While the α-ES assigns equal weight to all β-VaRs with β ≥ α and zero weight to all others, an SRM allows these weights to be chosen more freely. An SRM is formally defined as M_w(L) = ∫₀¹ w(u) VaR_u(L) du,
  • where w is the weighting function. Expected shortfall is the special case of a spectral measure where w(u) = (1 − α)⁻¹ = 1/(1 − α) for u ≥ α and zero otherwise.
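
    A sketch of an SRM computed as a weighted average of VaRs across confidence levels; the exponential risk-aversion weighting function used here is an illustrative choice of w, not one prescribed by the text.

```python
import numpy as np

# Spectral risk measure as a discretised integral of w(u) * VaR_u(L).
rng = np.random.default_rng(3)
losses = np.sort(rng.standard_t(df=4, size=100_000) * 10_000)  # hypothetical

k = 20.0                                             # risk-aversion parameter
u = (np.arange(losses.size) + 0.5) / losses.size     # confidence levels in (0, 1)
w = k * np.exp(-k * (1 - u)) / (1 - np.exp(-k))      # increasing, integrates to 1

srm = np.mean(w * losses)                            # weighted average of VaR_u
es_99 = losses[losses >= np.quantile(losses, 0.99)].mean()
print(f"Spectral measure = {srm:,.0f}, 99% ES = {es_99:,.0f}")
```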

  • The definition of SRM is restricted to functions w that increase over [0,1], which ensures that the risk measure is coherent. This restriction also implies that larger losses are taken more seriously than smaller losses and thus the function w establishes a relationship to risk aversion. The intuition is that a financial institution is not very risk averse for small losses, which can be absorbed by income, but becomes increasingly risk averse to larger losses.
  • If the underlying risk model is simulation-based, the additional effort to calculate an SRM as opposed to the ES seems negligible.
  • Another advantage of SRMs over ES and VaR is that they are not bound to a single confidence level. Rather, one can choose w to grow continuously with losses and thereby make the risk measure react to changes in the loss distribution more smoothly than the ES does, avoiding the risk that an atom of the distribution lying slightly above or below the confidence level has large effects.
  • In spite of their theoretical advantages, SRMs other than ES are still seldom used in practice. However, insurers use the closely related concept of distortion measures. Prominent examples such as the measure based on the Wang transformation are also SRMs.

Stress Testing Practices For Market Risk

  • VaR limitations have been highlighted by the recent financial turmoil. The financial industry and regulators now regard stress tests as no less important than VaR methods for assessing a bank’s risk exposure. A new emphasis on stress testing exercises also derives from the amended Basel II framework, which requires banks to compute a valid stressed VaR number.
  • A stress test can be defined as a risk management tool used to evaluate the potential impact on portfolio values of unlikely, although plausible, events or movements in a set of financial variables. Stress tests are designed to explore the tails of the distribution of losses beyond the threshold (typically 99%) used in value-at-risk (VaR) analysis.
  • However, stress testing exercises are often designed and implemented on an ad hoc, compartmentalised basis, and the results of stress tests are not integrated with the results of traditional market risk (or VaR) models. The absence of an integrated framework creates problems for risk managers, who have to choose which set of risk exposures is more reliable. There is also the related problem that traditional stress testing exercises typically remain silent on the likelihood of stress-test scenarios.
  • Traditional stress testing exercises can be classified into three main types, which differ in how the scenarios are constructed:

    1. historical scenarios;
    2. predefined or set-piece scenarios where the impact on P/L of adverse changes in a series of given risk factors is simulated;
    3. mechanical-search stress tests, based on automated routines that cover prospective changes in risk factors; the P/L is then evaluated under each set of risk-factor changes, and the worst-case results are reported (a minimal sketch follows this list).

    All these approaches depend critically on the choice of scenarios.
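
    A toy example of the mechanical-search approach in item 3: shock a small set of risk factors over a grid, revalue under every combination, and report the worst-case P/L. The sensitivities and shock grids are purely hypothetical, and a linear revaluation stands in for full repricing.

```python
import itertools

# Hypothetical portfolio sensitivities (P/L per unit shock) and shock grids.
sensitivities = {"equity": 5_000_000, "rates_dv01": -120_000, "fx": 2_000_000}
shock_grid = {
    "equity":     [-0.30, -0.15, 0.0, 0.15, 0.30],   # equity index returns
    "rates_dv01": [-2.0, -1.0, 0.0, 1.0, 2.0],       # parallel shifts, %-points
    "fx":         [-0.20, -0.10, 0.0, 0.10, 0.20],   # FX rate returns
}

def pnl(shocks):
    # Linear revaluation for illustration; real books would be repriced fully.
    return sum(sensitivities[f] * s for f, s in shocks.items())

worst = min(
    (dict(zip(shock_grid, combo)) for combo in itertools.product(*shock_grid.values())),
    key=pnl,
)
print("Worst-case scenario:", worst, "P/L:", f"{pnl(worst):,.0f}")
```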

  • A survey of stress testing practices conducted by the Basel Committee in 2005 showed that most stress tests are designed around a series of scenarios based either on historical events, hypothetical events, or some combination of the two. Without a risk model, the probability of each scenario is unknown, making its importance difficult to evaluate. There is also the possibility that many extreme yet plausible scenarios are not even considered. Berkowitz proposed the integration of stress testing into formal risk modelling by assigning probabilities to stress-test scenarios.
  • More recent research advocates the integration of stress testing into the risk modelling framework. This would overcome drawbacks of reconciling stand-alone stress test results with standard VaR model output.
  • Progress has also been achieved in theoretical research on the selection of stress scenarios. In one approach, for example, the “optimal” scenario is defined by the maximum loss event in a certain region of plausibility of the risk factor distribution.
  • The regulatory “stressed VaR” approach is still too recent to have been analyzed in the academic literature. Certain methods could be meaningful in this context. Employing fat-tailed distributions for the risk factors and replacing the standard correlation matrix with a stressed one are two examples.

Unified Versus Compartmentalised Risk Measurement

  • In many financial institutions, aggregate economic capital needs are calculated using a two-step procedure.
    1. First, capital is calculated for individual risk types, most prominently for credit, market and operational risk.
    2. In a second step, the stand-alone economic capital requirements are added up to obtain the overall capital requirement for the bank.

    This approach is often referred to as a non-integrated approach to risk measurement.

  • The Basel framework for regulatory capital uses a similar idea, based on a “building block” approach where a bank’s regulatory capital requirement is the sum of the capital requirements for each of the defined risk categories (i.e., market, credit and operational risk), which are calculated separately within the formulas and rules that make up Pillar 1. Capital requirements for other risk categories are determined by the supervisory process that fits within Pillar 2.
  • In contrast, an integrated approach would calculate capital for all the risks borne by a bank simultaneously in a single step, accounting for possible correlations and interactions, as opposed to adding up compartmentalised risk calculations.
  • Pressure to reconsider the regulatory compartmentalised approach came mainly from the financial industry, where it has been frequently argued that a procedure that simply adds up economic capital estimates across portfolios ignores diversification benefits. These alleged benefits have been estimated to be between 10% and 30% for banks.
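
    A stylised sketch of this aggregation argument, assuming the stand-alone capital figures can be combined under joint normality with an inter-risk correlation matrix (all numbers hypothetical); with these inputs the ratio of unified to summed capital falls within the 10-30% diversification range quoted above.

```python
import math

# Hypothetical stand-alone capital figures and inter-risk correlations.
capital = {"credit": 100.0, "market": 40.0, "operational": 30.0}
corr = {("credit", "market"): 0.5,
        ("credit", "operational"): 0.2,
        ("market", "operational"): 0.2}

def rho(a, b):
    return 1.0 if a == b else corr.get((a, b), corr.get((b, a)))

# Variance-covariance style aggregation under a joint-normality assumption
unified = math.sqrt(sum(capital[a] * capital[b] * rho(a, b)
                        for a in capital for b in capital))
summed = sum(capital.values())
print(f"Unified: {unified:.1f}, summed: {summed:.1f}, ratio: {unified / summed:.2f}")
```
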
  • Capital diversification arguments and estimates of potential capital savings are partially supported in the academic literature. More recently, this view and the associated estimates have been fundamentally challenged in research reviewed by the Basel Committee. These papers point out that nonlinear interaction between risk categories may even lead to compounding effects, which calls into question whether the compartmentalised approach will in general give a conservative and prudent upper bound for economic capital.
  • Academic studies have generally found that at a high level of aggregation, such as at the holding company level, a risk diversification benefit can be observed. At a lower level of aggregation, such as at the portfolio level, risk compounding is found more often. These results suggest that the assumption of risk diversification cannot be applied universally, especially for portfolios subject to both market and credit risk, regardless of where they reside on the balance sheet.

Bottom-Up Approaches – Some Papers

  • A bottom-up approach attempts to account for interactions among various risk factors, rather than assuming a clean separation of risks. Some papers using the bottom-up approach are:
    1. Barnhill and Maxwell (2002) examine the economic value of a portfolio of risky fixed income securities, which they define as a function of changes in the risk-free interest rate, bond spreads, exchange rates, and the credit quality of the bond issuers. They develop a numerical simulation methodology for assessing the VaR of such a portfolio when all of these risks are correlated. Barnhill et al. (2000) use this methodology to examine capital ratios for a representative South African bank. However, in these studies, the authors do not examine the differing values of their chosen risk measures using a unified risk measurement approach versus a compartmentalised approach that sums the independent risk measures.

    2. The study by Jobst, Mitra and Zenios (2006) provides some analysis along these lines. The authors construct a simulation model, based on Jobst and Zenios (2001), in which the risk underlying the future value of a bond portfolio is decomposed into:

       i. the risk of a borrower’s rating change (including default);

       ii. the risk that credit spreads will change; and

       iii. the risk that risk-free interest rates will change.

    Here, the first item is more narrowly defined to represent the portfolio’s credit risk, while the last item is more narrowly defined to represent the portfolio’s market risk. However, the middle item is sensitive to both risks and challenges the notion that market and credit risk can be readily separated in this analysis. The authors use portfolios of US corporate bonds and one-year VaR and CVaR risk measures at the 95%, 99% and 99.9% confidence levels for their analysis.

    In their analysis, the authors generate risk measures under three sets of assumptions. To concentrate on the pure credit risk contributions to portfolio losses, they simulate only rating migration and default events as well as recovery rates, while assuming that future interest rates and credit spreads are deterministic. The authors then allow future credit spreads to be stochastically determined, and finally, they allow future interest rates to be stochastically determined. Note that the latter case provides an integrated or unified risk measurement.

    The authors’ results are quite strong regarding the magnitude of the risk measures across risk types and credit ratings. For AAA-rated bonds, the authors find that the unified risk measures at all three tail percentiles are on the order of ten times the pure credit risk measures, since highly-rated bonds are unlikely to default. As the credit quality of the portfolio declines, the ratio between the unified risk measures and the risk measures for pure credit risk drops to just above one for C-rated bonds.

    3. In Breuer et al. (2010a), the authors present an analysis of hypothetical loan portfolios for which the impacts of market and credit risk fluctuations are not linearly separable. They argue that changes in aggregate portfolio value caused by market and credit risk fluctuations in isolation only very rarely sum up to the integrated change that incorporates all risk interactions. The magnitude and direction of the discrepancy between these two types of risk assessments can vary broadly. For example, the authors examine a portfolio of foreign currency loans for which exchange rate fluctuations (i.e., market risk) affect the size of loan payments and hence the ability of the borrowers to repay the loan (i.e., credit risk). For their empirically calibrated example, they use expected shortfall at various tail percentiles as their risk measure and examine portfolios of BBB+ and B+ rated loans. Their analysis shows that changes in market and credit risks can cause compounding losses such that the sum of value changes from the individual risk factors is smaller than the value change obtained by accounting for integrated risk factors. In particular, their reported inter-risk diversification index for expected shortfall increased sharply as the tail quantile decreased, which suggests that the sum of the two separate risk measures becomes much less useful as an approximation of the total integrated risk in the portfolio as we go further into the tail. These index values also increase for all but the most extreme tail percentiles as the original loan rating is lowered. The authors argue that this example presents evidence of a “malign interaction of market and credit risk which cannot be captured by providing separately for market risk and credit risk capital.” The authors show a similar qualitative outcome for domestic currency loans (i.e., loans for which default probabilities are simply a function of interest rates), although the index values are much lower.

    4. In Breuer et al. (2008), the authors use a similar analytical framework to examine variable rate loans, in which the interaction between market and credit risk can be analysed. In particular, they model the dependence of credit risk factors, such as the loans’ default probabilities (PD), exposure at default (EAD), and loss given default (LGD), on the interest rate environment. A key risk of variable rate loans is the danger of increased defaults triggered by adverse rate moves. For these loans, market and credit risk factors cannot be readily separated, and their individual risk measures cannot be readily aggregated back to a unified risk measure. They conduct a simulation study based on portfolios of 100 loans of equal size to borrowers rated B+ or BBB+ over a one-year horizon using the expected shortfall measure at various tail percentiles. They find that the ratio of unified expected shortfall to the sum of the separate expected shortfalls is slightly greater than one, suggesting that risk compounding effects can occur. Furthermore, these compounding effects are more pronounced for lower-rated loans and higher loan-to-value ratios.
    5. The paper by Grundke (2005) lays out a bottom-up model that assumes the separability of interest rate risk (i.e., market risk) and credit spread risk (i.e., credit risk). The author examines a calibrated multifactor credit risk model that accommodates various asset value correlations, correlations between credit spreads and other model factors, and distributional assumptions for innovations. The author examines hypothetical loan portfolios of varying credit quality over a three-year horizon, both with and without the joint modelling of interest rates and credit spreads. To assess the joint impact of interest rate and credit risk, the author uses forward market interest rates instead of separate interest rate and credit spread processes. Interestingly, the reported VaR measures at various tail percentiles lead to ratios of unified VaR measures to summed VaR measures that range widely from near zero to one, which seems to be due mainly to the separability of the interest rate risk (i.e., market risk) and credit spread risk (i.e., credit risk) in the model.
    6. Kupiec (2007) proposes a single-factor, migration-style credit risk model that accounts for market risk. This modelling approach generates a portfolio loss distribution that accounts for the non-diversifiable elements of the interactions between market and credit risks. The integrated exposure distribution of the model is used to examine capital allocations at various thresholds, and these integrated capital allocations are compared with the separated assessments. The results show that capital allocations derived from a unified risk measure importantly alter the estimates of the minimum capital needed to achieve a given target solvency margin; the capital amount could be larger or smaller than capital allocations estimated from compartmentalised risk measures. Regarding specifically the Basel II AIRB approach, the author argues that the results show that no further diversification benefit is warranted for banking book positions, since no market risk capital is required; thus, Basel II AIRB capital requirements fall significantly short of the capital required by a unified risk measure.

    7. The studies discussed above examine the different risk implications of a unified risk measurement approach relative to a compartmentalised approach for specific portfolios. In contrast, Drehmann et al. (2010) examine a hypothetical bank calibrated to be representative of the UK banking system as a whole. Within their analytical framework, they do not explicitly assume that market and credit risk are separable. The authors decompose the total risk in their bank scenario analysis into:
       1. the impact of credit risk from non-interest rate factors,
       2. the impact of interest rate risk (excluding the effect of changes in interest rates on credit risk), and
       3. the impact of the interaction of credit risk and interest rate risk.
    8. Following up on the work of Drehmann et al. (2010), Alessandri and Drehmann (2010) develop an integrated economic capital model that jointly accounts for credit and interest rate risk in the banking book, i.e., where all exposures are held to maturity. Note that they explicitly examine the repricing mismatches (and thus market and credit risks) that typically arise between a bank’s assets and liabilities.

    On balance, these authors conclude that the bank’s capital is mismeasured if risk interdependencies are ignored. In particular, the addition of economic capital for interest rate and credit risk derived separately provides an upper bound relative to the integrated capital level. Two key factors determine this outcome.

    i. First, the credit risk in the bank is largely idiosyncratic and thus less dependent on the macroeconomic environment.

    ii. Second, bank assets that are frequently repriced lead to a reduction in bank risk.

    Given that these conditions may be viewed as special cases, the authors recommend that “As a consequence, risk managers and regulators should work on the presumption that interactions between risk types may be such that the overall level of capital is higher than the sum of capital derived from risks independently.”

Top-Down Approaches – Some Papers

  • Top-down approaches rely on the assumption of cleanly splitting up the bank portfolio into sub-portfolios according to market, credit and operational risk, i.e., they assume risk separability at the beginning of the analysis. This means that market and credit risks are separable and can be addressed independently. As noted by Jarrow and Turnbull, economic theory clearly does not support this simplifying assumption. An important difference between the two approaches is that top-down approaches always reference an institution as a whole, whereas bottom-up approaches can range from the portfolio level up to the institutional level. The top-down approach does not require a common scenario across risk types, but because the correct form of aggregation is not known, the approach “loses the advantages of logical coherence”. In addition, the assumption of separable risk will generally prevent the ability to gauge the degree of risk compounding that might be present and instead typically provide support for risk diversification.
  • The literature is unclear on whether the combination of financial business lines within one organisation leads to an increase or decrease in risk. Although the literature reports mixed results, overall it suggests that reductions in economic capital arise from the combination of banking and insurance firms. Some papers using the top-down approach, discussed below, find this result as well for various risk combinations at the firm level:
    1. Dimakos and Aas (2004) decompose the joint risk distribution for a Norwegian bank with an insurance subsidiary into a set of conditional probabilities and impose sufficient conditional independence that only pairwise dependence remains; the total risk is then just the sum of the conditional marginals (plus the unconditional credit risk, which serves as their anchor). Their simulations indicate that total risk measured using near tails (95%-99%) is about 10%-12% less than the sum of the individual risks. In terms of our proposed ratio, the value ranges from 0.88 to 0.90. Using the far tail (99.97%), they find that total risk is often overestimated by more than 20% using the additive method. In terms of our proposed ratio of unified risk measure to the sum of the compartmentalised risk measures, its value would be 0.80.
    2. Kuritzkes et al. (2003) examine the unified risk profile of a “typical banking-insurance conglomerate” using the simplifying assumption of joint normality across the risk types, which allows for a closed-form solution. They use a broad set of parameters to arrive at a range of risk aggregation and diversification results for a financial conglomerate. Based on survey data for Dutch banks on the correlations between losses within specific risk categories, their calculation of economic capital at the 99.9% level is lower for the unified, firm-level calculation than for the sum of the risk-specific, compartmentalised calculations. The ratio of these two quantities ranges from 0.72 to 0.85, depending on the correlation assumptions across market, credit and operational risk.
    3. Rosenberg and Schuermann (2006) conduct a more detailed, top-down analysis of a representative large, internationally active bank that uses copulas to construct the joint distribution of losses. The copula technique combines the marginal loss distributions for different business lines or risk types into a joint distribution for all risk types and takes account of the interactions across risk types based on assumptions. Using a copula, parametric or nonparametric marginals with different tail shapes can be combined into a joint risk distribution that can span a range of dependence types beyond correlation, such as tail dependence. The aggregation of market, credit and operational risk requires knowledge of the marginal distributions of the risk components as well as their relative weights. Rosenberg and Schuermann assign inter-risk correlations and specify a copula, such as the Student-t copula, which captures tail dependence as a function of the degrees of freedom. They impose correlations of 50% for market and credit risk, and 20% for the other two correlations with operational risk; all based on triangulation with existing studies and surveys.
    4. Rosenberg and Schuermann find several interesting results, such as that changing the inter-risk correlation between market and credit risk has a relatively small impact on total risk compared to changes in the correlation of operational risk with the other risk types. The authors examine the sensitivity of their risk estimates to business mix, dependence structure, risk measure, and estimation method. Overall, they find that “assumptions about operational exposures and correlations are much more important for accurate risk estimates than assumptions about relative market and credit exposures or correlations.” Comparing their VaR measures for the 0.1% tail to the sum of the three different VaR measures for the three risk types, they find diversification benefits in all cases. For our benchmark measure of the ratio between the unified risk measure and the compartmentalised risk measure, their results suggest values ranging from 0.42 to 0.89. They found similar results when the expected shortfall (ES) measure was used.

      Note that the authors state that the sum of the separate risk measures is always the most conservative and overestimates risk, “since it fixes the correlation matrix at unity, when in fact the empirical correlations are much lower.” While the statement of imposing unit correlation is mathematically correct, it is based on the assumption that the risk categories can be linearly separated. If that assumption were not correct, as suggested by papers cited above, the linear correlations could actually be greater than one and lead to risk compounding.

    5. Finally, Kuritzkes and Schuermann (2007) examine the distribution of earnings volatility for US bank holding companies with at least $1 billion in assets over the period from 1986 Q2 to 2005 Q1; specifically, they examine the 99.9% tail of this distribution. Using a decomposition methodology based on the definition of net income, the authors find that market risk accounts for just 5% of total risk at the 99.9% level, while operational risk accounts for 12% of total risk. Using their risk measure of the lower tail of the earnings distribution, as measured by the return on risk-weighted assets, their calculations suggest that the ratio of the integrated risk measure to the sum of the disaggregated risk measures ranges from 0.53 to 0.63.
  • Overall, academic studies have generally found that at a high level of aggregation, such as at the holding company level, the ratio of the risk measure for the unified approach to that of the separated approach is often less than one, i.e., risk diversification is prevalent and ignored by the separated approach. However, this approach often assumes that diversification is present. At a lower level of aggregation, such as at the portfolio level, this ratio is also often found to be less than one, but important examples arise in which risk compounding (i.e., a ratio greater than one) is found. These results suggest, at a minimum, that the assumption of risk diversification cannot be applied without question, especially for portfolios subject to both market and credit risk, regardless of where they reside on the balance sheet.

Balance Sheet Management

  • When an intermediary actively manages its balance sheet, leverage becomes procyclical because risk models and economic capital require balance sheet adjustments as a response to changes in financial market prices and measured risks. This relationship follows from simple balance sheet mechanics.
  • The following example is taken from Shin (2008a, pp. 24 ff.). Assume a balance sheet is given with 100 in assets and a liability side which consists of 90 in debt claims and 10 in equity shares. Leverage is defined as the ratio of total assets to equity, which is 10 in this example. If it is assumed more generally that the market value of assets is A and the value of debt stays roughly constant at 90 for small changes in A, we see that total leverage is given by: Leverage = A / (A − 90).
  • Leverage is thus related inversely to the market value of total assets: when net worth increases because A is rising, leverage goes down; when net worth decreases because A is falling, leverage increases.
  • Consider now what happens if an intermediary actively manages its balance sheet to maintain a constant leverage of 10. If asset prices rise by 1%, assets become 101 and equity becomes 11; to restore a leverage of 10, the bank takes on an additional 9 in debt and purchases 9 of assets, so that assets grow to 110, equity is 11, and debt is 99. If asset values shrink by 1%, assets fall to 99, equity falls to 9, and leverage rises; the bank then sells securities worth 9 and pays down 9 of debt to bring the balance sheet back to the targeted leverage ratio (assets 90, debt 81, equity 9).
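
    A minimal sketch of the constant-leverage mechanics in Shin’s example: after a +/-1% move in asset values, the bank trades assets and debt to restore a leverage of 10 (the numbers follow the example above).

```python
# Balance-sheet adjustment by an intermediary targeting constant leverage.
assets, debt = 100.0, 90.0
target_leverage = 10.0

for price_shock in (+0.01, -0.01):               # +/- 1% change in asset values
    a = assets * (1 + price_shock)
    equity = a - debt
    # Buy (or sell) assets financed by new (or repaid) debt to restore leverage
    target_assets = target_leverage * equity
    trade = target_assets - a                     # >0: purchase, <0: sale
    print(f"shock {price_shock:+.0%}: equity {equity:.0f}, "
          f"trade {trade:+.0f}, new assets {target_assets:.0f}, "
          f"new debt {debt + trade:.0f}")
```
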
  • This kind of behaviour leads to a destabilising feedback loop, because it induces an increase in asset purchases as asset prices are rising and a sale of assets when prices are falling. Whereas the textbook market mechanism is self-stabilising, because the reaction to a price increase is a reduction in quantity demanded and an expansion in quantity supplied, and to a price decrease an expansion in quantity demanded and a contraction in quantity supplied, active balance sheet management reverses this self-stabilising mechanism into a destabilising positive feedback loop.
  • Adrian and Shin (2010) document this positive relationship between total assets and leverage for all of the (former) big Wall Street investment banks. Furthermore, they produce econometric evidence that the balance sheet adjustments brought about by the active risk management of financial institutions have an impact on risk premiums and aggregate volatility in financial markets.
  • It is important to recognise that while the current system may implement a set of rules that limit the risk taken at the level of individual institutions, the system may also enable institutions to take on more risk when times are good and thereby lay the foundations for a subsequent crisis. The very actions that are intended to make the system safer may have the potential to generate systemic risk in the system.
  • These results call into question a regulatory approach that accepts industry risk models as an input to determine regulatory capital charges. This critique applies in particular to the use of VaR to determine regulatory capital for the trading book, but it also questions an overall trend in recent regulation.
