A report published by the Adam Smith Institute on 18th June questions the effectiveness of the Bank of England's stress testing programme.
The report, penned by Kevin Dowd, Senior Fellow of the Adam Smith Institute and Professor of Finance and Economics at Durham University, argues that the stress tests are flawed for four key reasons:
1. A single, questionable stress scenario
2. Inadequate data
3. Poor metrics
4. Unreliable models – especially risk models
Professor Dowd also claims that stress tests are creating systemic instability by forcing banks to standardise towards the models set out by the Bank of England.
The report pays special attention to the issue of bad data, and points out the age-old problem that no model is of any use if poor data is fed into it. Most stress test exercises, Dowd claims, involve stresses to a spreadsheet-based valuation model, and these are prone to a number of problems, including a tendency to underestimate the risks of complicated derivative trading positions and of unquantifiable risks such as litigation or regulatory fines.
Professor Dowd also claims that a bank is likely to have thousands of different spreadsheet models, and that there is no straightforward method of combining or standardising the information they provide across the institution as a whole. Data fed into any model will therefore vary in quality and be susceptible to error. The report claims that the Bank of England itself has acknowledged these issues and stated that data quality varied considerably across banks.
The report portrays the Bank’s stress tests as highly unreliable – “worse than useless”, in fact – because they paint a false picture of resilience. So far the Bank of England has not commented on the criticism, but informally banks have shown some frustration with the effort involved in preparing what critics would call a box-ticking exercise of limited practical use to those involved in the process.
Meanwhile, in the US on 29th May, Federal Reserve Governor Daniel Tarullo met with several US banks to discuss possible changes to their stress testing programme. Those familiar with the meeting report that the banks were concerned with the ‘one size fits all’ nature of the tests, and urged the Fed not to incorporate a complicated set of requirements for how bigger banks must calculate their capital. The Fed has so far deferred including these “advanced approaches” in the test, but officials have yet to make a permanent decision on the matter.
For its part, the Fed also has concerns about how stress tests are being conducted. In a speech on 3rd June, the Federal Reserve Bank of Chicago’s President and CEO, Charles Evans, described how some institutions are failing in their efforts to produce reliable stress tests under CCAR and DFAST.
Some firms, according to Mr Evans, distribute the initial scenarios across business lines but do not have proper controls or management engagement to ensure that the tests are conducted systematically, with controls around data and assumptions. Without proper guidance and controls, there is a risk that scenarios are interpreted inconsistently and produce results that are overly optimistic – or just plain wrong. Poor documentation and audit trails make it difficult to identify these issues, and ultimately such data and procedural weaknesses can undermine the credibility of the stress test modelling framework and the pro forma financial results.
While Mr Evans did not say in his speech whether he expected any changes to the stress testing regime, Mr Tarullo and the four banking executives he met in May plan to meet again this summer to discuss the issue, along with other regulatory measures.
So what is coming over the hill?
Despite calls from critics like the Adam Smith Institute to abolish the tests altogether, and the difficulty that regulators acknowledge banks have in producing stress testing submissions, the public appetite for financial services regulation and the perceived failure of ‘light touch’ pre-2007 regulatory regimes mean that stress testing will not be scrapped anytime soon.
Moreover, given that we are in the early stages of a major change to how financial market stability and industry solvency are monitored, we can expect regulators to keep refining their stress testing approaches as they gain insight from the information participants provide. Industry feedback may yet prompt further changes as regulators look to build acceptance for their stress testing regimes.
Another powerful argument that stress testing will continue is that regulators see evidence it is working. Since the establishment of CCAR, DFAST, the UK and EU stress tests and other risk data regulations such as BCBS 239 and the upcoming BCBS 265, firms have made sizeable improvements.
These improvements have been seen in areas such as data collection, risk analytics and elements of corporate governance. All this suggests that, independent of whether institutions believe the scenarios being tested are realistic or not, we are seeing the start of a fundamental shift in risk culture which is what the regulations seek to achieve in the long run.
Banks on both sides of the Atlantic are faced with two common challenges:
1. How to make stress testing into a transparent, repeatable and cost-efficient process
2. How to cope with stress testing changes in the future
Regulators will remind banks to focus on the intent behind regulation, rather than what the specific tests will be this year or next.
The intent of the regulation, in both its American and European forms, is to encourage institutions to build a capital planning and stress testing framework that identifies firm-specific risks and ultimately informs the decisions that management takes. Therefore, when it comes to stress scenarios, banks should not merely try to comply with regulatory stress tests, but instead build a framework to understand and monitor the specific risks in their own markets.
A proactive, rather than reactive, approach to stress testing and risk reporting generally is viewed as the key to increasing the resilience of the individual firms, the overall stability of the financial system, and – ultimately – regulatory approval.
How is this done?
Just as critics of stress testing say a ‘one size fits all’ approach is too blunt for the wide variety of business models in financial markets, there is no single method (or off-the-shelf solution) to help banks manage the challenge of annual stress testing. That said, there are common approaches that institutions – including clients of Asset Control – adopt, and that regulators recommend as best practice.
From an organisational and systems perspective, these recommendations include:
1. Governance and Internal Controls
When conducting stress tests, strong internal controls (and the involvement of audit and risk) are critically important to the translation of stress scenarios into financial impacts. At large, complex organisations, such translations can involve multiple models and lines of business. Firms that have assembled strong stress testing approaches also have strong governance functions, whereby all parties work together to implement the stress scenarios and coordinate assumptions, and the senior management of each business line is intimately engaged in the process.
Well-articulated documentation, standards, process orchestration and underlying data ensure that scenarios are implemented consistently across business functions – in terms of the understanding of the scenario’s intent, the portfolios and business lines affected, and the accuracy and consistency of reported results.
In addition to this, a sound governance process identifies and addresses gaps in an enterprise’s stress testing framework and has the authority to work with business owners and stakeholders to address them.
2. Data Control Points for Scenario Generation
Data models are at the core of each stress test. Crucial assumptions used to estimate losses, risk-weighted assets and revenues must be clearly documented so that the detail of each scenario is fully parameterised.
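To illustrate what "fully parameterised" can mean in practice, the sketch below captures a scenario's key assumptions as an explicit, documented data structure, so that every number carries a name and a rationale rather than being buried in a spreadsheet cell. All names and values here are hypothetical, not drawn from any regulatory scenario.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StressScenario:
    """A fully parameterised stress scenario (hypothetical schema)."""
    name: str
    gdp_growth_pct: float     # assumed annual GDP growth under stress
    unemployment_pct: float   # assumed peak unemployment rate
    base_rate_pct: float      # assumed central bank base rate
    equity_shock_pct: float   # assumed equity market move, e.g. -30.0
    notes: str = ""           # documented rationale for the assumptions

# Example: a severe-but-plausible downturn with every assumption explicit
# and immutable (frozen=True), so the released scenario cannot drift.
severe = StressScenario(
    name="severe_downturn",
    gdp_growth_pct=-2.5,
    unemployment_pct=9.0,
    base_rate_pct=0.5,
    equity_shock_pct=-30.0,
    notes="Illustrative parameters only.",
)
```

Freezing the scenario object is one simple way to guarantee that the parameters discussed and challenged before release are exactly the parameters the business lines receive.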
To achieve consistency in preparing scenarios it is essential that common data inputs (e.g. securities data, historical market prices) and derived data (e.g. curves and surfaces) are sourced, validated and distributed from a single function. This is to make sure common data has assured lineage and provenance, and any changes can be quickly propagated throughout the organisation. High quality data is needed for understanding the historical relationship between scenario parameters (GDP, interest rates, unemployment, stock markets etc.) so that scenarios can be correctly discussed and challenged before they are released internally.
Where scenarios require price or volatility shocks to trading books these should also be defined from a validated central source and provided to models that are simulating their impact on positions for the stress test. For the results to be convincing to regulators, it is important that banks have a deep understanding of the data and models used in their scenarios so that their explanatory materials meet the required standards.
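As a minimal sketch of the idea above – centrally defined shocks applied uniformly to positions, with gaps escalated rather than guessed – the hypothetical function below estimates stressed P&L per position. The data, names and shock convention are all assumptions for illustration only.

```python
def apply_price_shock(positions, shocks):
    """Apply centrally sourced price shocks to a book of positions.

    positions: {instrument: (quantity, market_price)}
    shocks:    {instrument: relative price shock, e.g. -0.30 for -30%}
    Returns (pnl, missing): per-position stressed P&L, plus any
    instruments lacking a validated shock, flagged for follow-up
    rather than silently filled with a guess.
    """
    pnl, missing = {}, []
    for instrument, (qty, price) in positions.items():
        if instrument not in shocks:
            missing.append(instrument)  # escalate: no validated shock
            continue
        pnl[instrument] = qty * price * shocks[instrument]
    return pnl, missing

# Hypothetical book: long EQ_A, short EQ_B, and a bond with no shock defined.
book = {"EQ_A": (1_000, 50.0), "EQ_B": (-500, 20.0), "BOND_C": (2_000, 99.0)}
central_shocks = {"EQ_A": -0.30, "EQ_B": -0.30}  # BOND_C deliberately absent

pnl, missing = apply_price_shock(book, central_shocks)
# Long EQ_A loses, short EQ_B gains, BOND_C is flagged for follow-up.
```

Sourcing `central_shocks` from one validated function, rather than letting each desk re-derive its own, is what keeps the results comparable across business lines.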
3. Workflow Control Points
In order to establish a robust stress testing framework – and to make the process both repeatable and auditable – institutions must first understand the necessary workflows and the data integration challenges they face.
Firms with robust frameworks have established control points both to promote repeatable and automated practices within the modelling framework, and to verify data and model quality throughout the process. These control points ensure that corrections or enrichments of data are propagated consistently both upstream and downstream from the control point and – wherever possible – the opportunity for manual input of data is eliminated, along with the risk of human error.
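A data control point of this kind can be sketched as a single validation step through which all market data passes, with any correction logged once and applied centrally instead of being patched by hand in each consuming spreadsheet. The thresholds, data shapes and function names below are hypothetical.

```python
def control_point(prices, max_abs_return=0.5, corrections=None):
    """Validate a series of end-of-day prices at a single control point.

    prices:      {day_index: price}
    corrections: {day_index: corrected_price}, a logged set of fixes
                 applied centrally before validation
    Returns (clean, issues): the corrected series, plus any suspect
    day-on-day moves flagged for review rather than silently altered.
    """
    corrections = corrections or {}
    clean = {day: corrections.get(day, p) for day, p in prices.items()}
    issues = []
    days = sorted(clean)
    for prev, cur in zip(days, days[1:]):
        move = abs(clean[cur] / clean[prev] - 1.0)
        if move > max_abs_return:  # implausible jump: flag, don't guess
            issues.append((cur, move))
    return clean, issues

# Hypothetical series with a fat-finger price on day 3.
raw = {1: 100.0, 2: 101.0, 3: 1010.0, 4: 102.0}

# Without a correction the jump is flagged; with the logged correction
# applied centrally, every downstream consumer receives the clean series.
_, flagged = control_point(raw)
clean, issues = control_point(raw, corrections={3: 101.0})
```

Because the correction lives at the control point, it propagates to every consumer of the series in one place, which is the repeatability and auditability the text describes.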
In short, governance, data management, model management and system integration are the keys to banks building a robust stress testing framework that is capable of both adapting to future changes in stress testing requirements and, more importantly, becoming the decision-making tool that regulators hope will inform both strategic and routine decisions.
In my opinion, making the process repeatable and cost-effective will be made possible by automation and integration, particularly with regard to the common data sets required for scenario calculations across the various departments and divisions running them. This moves the process from what is currently a highly manual task, led by risk and audit experts, to an automated and scalable environment that leverages existing technology investments.
I believe financial institutions will ultimately be successful in creating stress testing environments that meet regulators’ needs and indeed institutionalise risk-weighted decision making. After all, the single greatest engineering skill western financial institutions have shown over the last 50 years is the ability to take physical processes and make them virtual.
The question today is whether this transformation can be done at a pace that keeps regulators satisfied.
Asset Control serves over 60 financial institutions worldwide, including 45% of G-SIBs. Established in 1991, we play a central role in guaranteeing enterprise-wide data quality for our clients, as well as helping them comply with stress testing and other data-centric regulatory requirements such as BCBS 239. Asset Control provides these institutions with an essential foundation for a risk management stack, delivering ultra-consistent, auditable and SLA-driven data – including time series, end-of-day prices, static data and derived data – both to the risk function and to the entire enterprise.
In June 2015 Asset Control released its all-new Risk Data Manager module to automate the process of preparing price, curve and surface shocks for stress testing frameworks.