An Introduction to the Bank of England’s Stress Tests
This posting provides an introduction to the Bank of England’s recent stress tests on the UK banking system. It suggests that a stress test can be compared to a school exam: there is the exam paper or stress scenario and there is the performance of the candidate against a pass standard to determine whether the candidate passes or fails. More precisely, with a mild stress scenario, a highly gameable capital-adequacy metric to assess performance and a very low pass standard, the Bank’s stress tests can be compared to an exam that is extremely easy to pass. This bias towards a ‘pass’ result undermines the credibility of the entire exercise. (For the previous blog in this series, see here.)
The purpose of central bank stress testing is to assess the banking system’s capital adequacy, i.e., the ability of banks to withstand financial stress. A stress test has three key components:
- An assumed adverse stress scenario – essentially a guess generated by modellers at the central bank.
- A metric to gauge the strength of each bank. This metric is the bank’s capital ratio – the ratio of ‘core’ capital to some measure of the total amount ‘at risk’ – the intuition being that core capital provides a buffer to absorb potential losses and keep the bank solvent in a crisis (see the sketch after this list).
- A pass standard by which we determine whether the post-stress value of the capital ratio is (or is not) high enough to merit a pass mark in the test.
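To make the mechanics concrete, here is a minimal sketch of how the three components fit together. All the figures – the capital buffer, the exposure measure, the stress loss and the hurdle rate – are hypothetical and purely illustrative; in the real tests the scenario drives detailed projections of losses, income and balance sheets.

```python
# Minimal sketch of a stress test's three components (all figures hypothetical).
core_capital = 10.0     # 'core' capital, in GBP billions
amount_at_risk = 140.0  # denominator: some measure of the total amount 'at risk'
stress_loss = 3.0       # loss assumed under the adverse scenario (component 1)
pass_standard = 0.045   # hurdle rate for the capital ratio (component 3)

# Component 2: the capital ratio after the stress loss is absorbed by core capital.
post_stress_ratio = (core_capital - stress_loss) / amount_at_risk

print(f"Post-stress capital ratio: {post_stress_ratio:.1%}")
print("PASS" if post_stress_ratio >= pass_standard else "FAIL")
```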
There is a natural analogy here with a school exam, the purpose of which is to assess a student’s academic strength. It too has three key components:
- There is an exam paper based on a set of questions, and underlying this, the issue of how easy or tough the exam paper might be. The easiness/toughness of an exam paper is comparable to the severity (or otherwise) of a central bank’s stress scenario.
- There is the performance of the candidate in the exam, i.e., the mark they receive.
- There is the pass standard, i.e., the minimum mark that a student must achieve in order to pass the exam.
We then draw our conclusions. For example, if we had an easy set of questions and a low pass standard and the student achieved a low mark, then we shouldn’t conclude that the student is academically strong.
Similarly, if we had a stress test with a mild stress scenario, a low pass standard and generally low post-stress capital ratios – all of which are in fact the case with the Bank’s stress tests – then we shouldn’t conclude that the banks are financially strong.
Yet this is exactly the conclusion that the Bank draws from its stress tests.
We should also say a little more about the capital-adequacy metrics and the pass standard.
To evaluate a bank’s capital adequacy, we need estimates of both the numerator (core capital) and the denominator (the total amount ‘at risk’).
By core capital, we mean the capital available to support the bank in the heat of a crisis. There are, however, a number of different core capital measures, and some are more reliable than others. Their reliability is in inverse proportion to their breadth: the broader the capital measure, the more ‘soft’ capital it includes and the less reliable it is. The narrowest and best is Common Equity Tier 1 (CET1), which approximates to tangible common equity (TCE) capital plus retained earnings. In this context, the ‘tangible’ in TCE means that it excludes ‘soft’ items such as goodwill and other intangibles that cannot be deployed to help the bank weather a crisis, and ‘common’ means that it excludes more senior capital items such as preferred shares and hybrid capital. However, in its stress tests the Bank also uses a broader definition of capital known as Tier 1 capital, which is equal to CET1 plus some additional – and therefore softer – hybrid items.
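A stylised illustration of the difference between the narrower and broader capital measures, using hypothetical balance-sheet figures (the real regulatory definitions involve many more deductions and adjustments than this simple add-up):

```python
# Stylised capital measures (all figures hypothetical, in GBP billions).
common_shares        = 15.0  # common equity raised from shareholders
retained_earnings    = 5.0   # accumulated profits kept in the bank
goodwill_intangibles = 4.0   # 'soft' items that cannot absorb losses in a crisis
hybrid_instruments   = 5.0   # softer, more senior items counted in Tier 1 only

# CET1: the narrowest, 'hardest' measure - tangible common equity plus retained
# earnings, with goodwill and other intangibles stripped out.
cet1 = common_shares + retained_earnings - goodwill_intangibles

# Tier 1: CET1 plus the additional hybrid items, so broader but softer.
tier1 = cet1 + hybrid_instruments

print(f"CET1: {cet1:.1f}bn   Tier 1: {tier1:.1f}bn")  # 16.0bn vs 21.0bn
```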
As with any exam, a major concern is cheating – or ‘gaming’, to use the more polite language of this area. In the case of the capital measure, the concern is with banks’ ability to exploit loopholes (e.g., by stuffing softer and less expensive-to-issue capital items into the core capital measures approved by regulators) and, of course, with their lobbying to create such loopholes in the first place.
Then there is the denominator, the total amount ‘at risk’. Traditionally, this was taken to be the total assets of the bank. However, for many years now the on-balance-sheet amounts at risk have been dwarfed by the amounts at risk off the balance sheet in securitizations, contingent liabilities, derivatives etc. These off-balance-sheet risks have long since made the total assets measure highly inadequate.
To make matters worse, the exposure measure long favoured by the Basel system is not total assets, which would be bad enough, but so-called ‘Risk Weighted Assets’ (RWAs). We can think of RWAs as a game to lower the ‘at risk’ numbers in order to obtain lower capital requirements. In this game, every asset is given a fixed, arbitrary ‘risk weight’ of between 0% and 100%. So, for example, the debt of OECD governments is given a zero risk weight on the presumption that it is riskless – that’s right, Greek debt is considered riskless! – whereas commercial debt is given the full risk weight of 100%.
The result is to create artificially low ‘Risk Weighted Asset’ measures that are much lower than total assets. To give an idea, the latest available data for the UK banks that participated in the stress test show that their average ratio of RWAs to total assets was a mere 33%, which means that, on average across the system, two-thirds of bank assets are deemed by this measure to have no risk at all! And one institution – the Nationwide – had an RWA-to-total-assets ratio of just under 18%, meaning that no less than 82% of its assets were deemed to be entirely risk-free. So either these banks have indeed taken very low risks or they are just very good at playing the risk-weighting game. The evidence suggests the latter.
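The arithmetic of the risk-weighting game is easy to illustrate. The portfolio below is entirely hypothetical, but it shows how a bank loaded up on zero-weighted assets ends up with an RWA figure far below its total assets – and hence a far lower capital requirement:

```python
# Hypothetical portfolio: (asset class, amount in GBP billions, risk weight).
portfolio = [
    ("OECD government debt", 300.0, 0.00),   # zero-weighted, deemed 'riskless'
    ("Residential mortgages", 150.0, 0.35),  # illustrative intermediate weight
    ("Commercial loans",      100.0, 1.00),  # full 100% weight
]

total_assets = sum(amount for _, amount, _ in portfolio)
rwa = sum(amount * weight for _, amount, weight in portfolio)

print(f"Total assets: {total_assets:.0f}bn")
print(f"Risk-weighted assets: {rwa:.0f}bn")
print(f"RWA / total assets: {rwa / total_assets:.0%}")   # about 28% here

# An 8% capital requirement applied to RWA rather than to total assets:
print(f"Capital required: {0.08 * rwa:.1f}bn vs {0.08 * total_assets:.1f}bn")
```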
Going further, this RWA system is tailor-made for gaming: load up on zero-weighted assets and you are rewarded with a lower capital requirement, because you are deemed to have low risk. In the limit, you could load up entirely on zero-weighted assets: you would then be deemed to have zero risk and incur a zero capital requirement. If we look at the data, we see that average risk weights across the big banks have trended down from about 70% in 1993 to a little below 40% by 2011. If this trend continues, the average risk weight will hit zero by 2034, at which point every single risk in the banking system will be invisible to the risk-weighted measurement system.
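The extrapolation behind that 2034 figure is simple back-of-the-envelope arithmetic: fit a straight line through the two data points just quoted (roughly 70% in 1993 and, on the assumption used here, 39% in 2011) and see where it hits zero.

```python
# Back-of-the-envelope linear extrapolation of the average risk weight.
year0, weight0 = 1993, 0.70   # average risk weight of roughly 70% in 1993
year1, weight1 = 2011, 0.39   # a little below 40% by 2011 (assumed 39%)

slope = (weight1 - weight0) / (year1 - year0)   # roughly -1.7 points per year
year_zero = year1 - weight1 / slope             # year the trend line hits zero

print(f"Trend: {slope:.2%} per year; hits zero around {year_zero:.0f}")
```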
There is also abundant evidence – most notably that provided in a widely cited paper published by the Bank of England itself [1] – to suggest that the RWA measure is so poor that it actually gives a contrarian indicator of risk, i.e., that a fall in RWAs indicates rising risk!
Part of the explanation is that banks were loading up on assets with low risk weights to reduce their capital requirements.
Even more worrying is that banks were also engaging in vast derivative and securitization transactions to move assets from high to low weight classifications to reduce their capital requirements even further. Indeed, this game even had a name – Risk-Weight ‘Optimisation’ (RWO) – and RWO really means risk-weight minimisation.
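A stylised example of how such a reclassification works – the amounts, the 20% weight and the 8% capital requirement are all hypothetical, and real RWO transactions were far more elaborate:

```python
# Stylised risk-weight 'optimisation': the same exposure, repackaged into a
# lower weight classification, attracts a much smaller capital requirement.
# All figures and weights are hypothetical.
exposure = 100.0              # GBP billions of commercial loans
capital_requirement = 0.08    # capital required as 8% of RWA

rwa_before = exposure * 1.00  # held directly: full 100% risk weight
rwa_after = exposure * 0.20   # repackaged via a securitisation structure
                              # that attracts a 20% weight (illustrative)

print(f"Capital required before: {capital_requirement * rwa_before:.1f}bn")
print(f"Capital required after:  {capital_requirement * rwa_after:.1f}bn")
```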
And this RWO that almost no one has ever heard of was the main driving force behind the enormous growth in derivatives trading and securitization in the years running up to the Global Financial Crisis (GFC) – and insofar as it led to (much) greater risk-taking and (enormous) capital depletion, it was also a major contributing factor to the GFC itself. [2]
Thus, a low RWA does not indicate low risk; instead, it indicates RWO: it suggests that the banks concerned are taking more risks, but are better at hiding them from the risk measurement system.
To help deal with these problems, the Basel III international bank capital adequacy regime introduces a new measure of the amount at risk known as the ‘leverage exposure’. This measure makes a half-hearted attempt to incorporate some of the off-balance-sheet risks that do not appear in the total assets measure. However, the adoption of this new measure was subject to the usual bank lobbying, and one must have serious doubts about it. Nonetheless, if we rule out the RWA measure, we are left with a choice between total assets and the Bank of England’s Basel III-based leverage exposure as the only exposure measures available to work with.
With both the capital and exposure measures, we should also be concerned with the mischief that arises from highly gameable accounting rules. Examples include the abuse of hedge accounting rules to hide risks, and the abuse of International Financial Reporting Standards (IFRS) rules to create illusory capital – making banks appear better capitalised than they really are – and to create fake profits, which can then be siphoned off as bonuses to the bankers who created them, decapitalising the banking system in the process. [3]
Returning to our main theme, the Bank uses two different capital-adequacy ratios in its stress tests:
- The first is the ratio of CET1 capital to RWA – the so-called ‘CET1 ratio’ – and it is the tests based on this ratio that the Bank always highlights in its headline commentary on the stress tests.
- The second is a supplementary capital ratio (the so-called ‘leverage ratio’): the ratio of Tier 1 capital to leverage exposure. (Both ratios are illustrated in the sketch below.)
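A sketch of the two metrics side by side, reusing the hypothetical capital and RWA figures from the earlier sketches and adding an illustrative leverage exposure (total assets plus some assumed off-balance-sheet add-ons):

```python
# The Bank's two capital-adequacy metrics, on hypothetical figures (GBP billions).
cet1, tier1 = 16.0, 21.0           # from the capital-measures sketch above
rwa = 152.5                        # risk-weighted assets (from the RWA sketch)
total_assets = 550.0
off_balance_sheet_addons = 60.0    # illustrative derivatives/securitisation add-ons
leverage_exposure = total_assets + off_balance_sheet_addons

cet1_ratio = cet1 / rwa                       # the headline 'CET1 ratio'
leverage_ratio = tier1 / leverage_exposure    # the supplementary 'leverage ratio'

print(f"CET1 ratio:     {cet1_ratio:.1%}")      # about 10.5%
print(f"Leverage ratio: {leverage_ratio:.1%}")  # about 3.4%
```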
Neither of these ratios is entirely satisfactory: the first because it uses the worse-than-useless ‘Really Weird Assets’ measure, and the second because it uses a softer capital measure (i.e., Tier 1 instead of CET1). However, notwithstanding this latter weakness and one’s doubts about the leverage exposure measure in its denominator, the leverage ratio provides a much better metric of capital adequacy than the ratio of CET1 capital to RWAs, precisely because it does not depend on the fatal flaws of the latter.
It was therefore unfortunate that the Bank of England chose to focus on stress test results using the CET1/RWA ratio rather than the leverage ratio.
Then there is the question of choosing a suitable pass standard.
One approach is to choose a pass standard that reflects the minimum regulatory capital standards imposed on banks – most obviously, the standards imposed under Basel III. Indeed, the Bank itself suggested pass standards of at least Basel III quality. As it explained in the October 2013 Discussion Paper setting out the stress testing framework:
A key consideration [in setting the pass standard] will be the minimum level of capital required by internationally agreed standards. Banks need to maintain sufficient capital resources to be able to absorb losses in the stress scenario and remain above these minimum requirements.
The document then notes that under the Prudential Regulation Authority’s proposed implementation of CRD IV, the EU Directive on capital regulation, there is a minimum CET1 [to RWA] requirement of 4.5%, and it observes a little later that “CRD IV [also] requires banks to have at least a 2.5 percentage point buffer of capital above the 4.5% minimum.” [4] Note the word “required” here.
In short, the Bank suggests that the hurdle rate/pass standard should be at least as high as internationally agreed minimum required capital standards [read: Basel III] and it acknowledges that this minimum required standard is at least as high as 7%.
But for reasons best known to itself, the Bank then chose a pass standard that fell below these minimum standards: it set the pass standard at 4.5%.
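The gap this opens up is easy to see. The sketch below compares a hypothetical post-stress CET1 ratio of 6% against the 4.5% pass standard the Bank actually used and the 7% standard (the 4.5% minimum plus the 2.5 percentage point CRD IV buffer) implied by its own Discussion Paper:

```python
# Hypothetical post-stress CET1 ratio judged against two possible pass standards.
post_stress_cet1_ratio = 0.06          # illustrative post-stress outcome: 6%

chosen_pass_standard = 0.045           # the 4.5% hurdle the Bank actually used
implied_pass_standard = 0.045 + 0.025  # 7%: the minimum plus the CRD IV buffer

for name, hurdle in [("4.5% hurdle", chosen_pass_standard),
                     ("7% hurdle", implied_pass_standard)]:
    verdict = "PASS" if post_stress_cet1_ratio >= hurdle else "FAIL"
    print(f"{name}: {verdict}")
```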
A cynic might suggest that the Bank chose a mild ‘stress’ scenario, focused on a very ‘soft’ and highly gameable capital adequacy metric (the CET1/RWA ratio) and chose a very low pass standard to engineer an undeserved ‘pass’ result for the UK banking system.
I am not suggesting that the Bank actually did this, but the Bank’s stress tests could be construed that way.
And this is a big shame, because it undermines the credibility of the whole exercise – the Bank of England failed its own stress test.
In the next posting, I will start to examine the stress tests in more detail.
References
[1] See A. G. Haldane, “Constraining discretion in bank regulation,” April 9, 2013, p. 15, chart 2.
[2] See G. Kerr, “How to destroy the British banking system – regulatory arbitrage via ‘pig on pork’ derivatives,” The Cobden Centre, January 21, 2010.
[3] For more on these problems with IFRS accounting standards, see T. Bush, “UK and Irish Banks Capital Losses – Post Mortem,” Local Authority Pension Fund Forum, 2011, and G. Kerr, “The Law of Opposites: Illusory Profits in the Financial Sector,” Adam Smith Institute, 2011.
[4] Bank of England, “A framework for stress testing the UK banking system,” (October 2013), p. 28.