If you have been following our blog series on delivering firm-wide data management initiatives, you should now be equipped with a firm appreciation of the seven critical steps that a programme of transformation entails. The crucial aspects to take away are:
Do not underestimate the complexity of data, especially reference data – integration, backward-compatibility and agile deployment are key
The programme must be led by the business, supported by technology, and governed with a rock-solid framework enforced rigorously by a programme office
A clear picture of the desired end state is essential, as are forming a true partnership with the technology vendor and following our seven critical steps in a robust and orderly fashion
Successful delivery should mean financial institutions realize dramatic improvements across the board in terms of data quality and SLAs, efficiency and risk management.
With a central platform and connectivity cascading throughout the organisation, financial institutions can capture data, run multi-source and multi-vendor data quality validations to manage potential quality issues efficiently, and reduce risk through improved audit and change management capability.
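To make the idea of a multi-vendor quality validation concrete, here is a minimal sketch in Python. It is purely illustrative – the vendor names, field names and tolerance are our own assumptions, not any particular platform’s API – and simply compares the same attribute across vendor feeds, flagging outliers for review:

```python
# Minimal sketch of a multi-vendor quality check: compare the same attribute
# (here, a closing price) across vendor feeds and raise an exception record
# when values diverge beyond a tolerance. Vendor and field names are illustrative.

def cross_vendor_check(records, field, tolerance=0.001):
    """records: {vendor_name: {field: value, ...}} for one instrument."""
    values = {vendor: rec[field] for vendor, rec in records.items() if field in rec}
    if len(values) < 2:
        return []  # nothing to compare against
    baseline = sorted(values.values())[len(values) // 2]  # median value as reference
    return [
        {"vendor": vendor, "field": field, "value": value, "reference": baseline}
        for vendor, value in values.items()
        if baseline and abs(value - baseline) / abs(baseline) > tolerance
    ]

exceptions = cross_vendor_check(
    {"vendor_a": {"close_price": 101.25},
     "vendor_b": {"close_price": 101.24},
     "vendor_c": {"close_price": 99.80}},
    field="close_price",
)
print(exceptions)  # vendor_c flagged for review by the data quality workflow
```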
In addition, higher throughput means quick and flexible distribution to downstream systems so consumers are able to trade efficiently and more productively in their time zone with near-real-time data. At the same time, having one database, one GUI and one distribution engine drives down the cost of platform development, management and support.
Above all, our overriding message is: don’t leave anything to chance. Transformation programmes are more than just technology; they call for robust governance and stakeholder buy-in. All of the touch points that make up our seven critical steps must be checked and checked again to ensure programme success, enhance efficiency, drive down cost and reduce risk.
The final and arguably most pivotal stage of any major transformation programme is the execution. This marks the point at which a programme’s inherent risks could become manifest and is where programmes can begin to fail if organizations have not followed the steps we have set out in this series.
Scope creep is one example of an inherent risk that can come into play once a programme transitions from planning into execution and delivery. Scope creep is hard to accommodate once delivery has commenced, but it is easy to fall prey to if requirements are not fully scoped during the planning phase. However, it can be prevented if there is strong programme management and a flexible technology solution in place.
With a flexible technology platform and a roadmap designed for continuous improvement and agile deployment (as specified under step #5), it should be possible to add new requirements or respond to change requests relatively quickly and easily. New requirements or refinements may be necessary as part of data model enhancements, validation rules, workflow items and processes, or integration transformation rules, but must be signed off formally by the programme office.
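As a simple illustration of what “relatively quickly and easily” can look like in practice, the sketch below treats validation rules as declarative configuration entries, so that an approved change request becomes one new entry rather than a code change deep in the platform. The rule identifiers, fields and thresholds are hypothetical:

```python
# Illustrative sketch of a rule registry where new validation rules are added
# as small, declarative entries. All rule ids and fields are made up.

from datetime import date

VALIDATION_RULES = [
    {"id": "VR-001",
     "check": lambda r: r["maturity_date"] >= r["issue_date"],
     "message": "maturity precedes issue date"},
    # A change request approved by the programme office becomes one new entry:
    {"id": "VR-002",
     "check": lambda r: 0 <= r["coupon"] <= 30,
     "message": "coupon outside plausible range (percent)"},
]

def validate(record):
    """Return the messages of all rules the record fails."""
    return [rule["message"] for rule in VALIDATION_RULES if not rule["check"](record)]

bond = {"issue_date": date(2012, 1, 15), "maturity_date": date(2011, 6, 30), "coupon": 4.5}
print(validate(bond))  # ['maturity precedes issue date']
```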
It is the responsibility of the programme office to manage execution in accordance with the project plan and the rigorous governance framework established under Step #4. The programme office should track, report and escalate issues where necessary, as well as tracking and reporting mitigating actions. It should also manage quality assurance testing throughout the lifecycle of the delivery, and break the programme down into strict project deliverables as part of the governance process.
It is at the point of execution that all of the actions taken under the previous steps really come to bear. If they have been followed in a robust and orderly fashion, then programme success should ensue with seamless migration to the new data management infrastructure.
The relationship between an end-user organisation and its technology vendor is of critical importance to a programme of major transformation. Establishing a strong rapport and working in partnership not only increases the likelihood of success, but can yield major benefits for both parties.
For a technology vendor, having a financial institution such as a global bank as a client provides a significant boost to both its brand and its solution. For the financial institution, establishing technical leadership helps secure and retain the confidence of customers, stakeholders and regulators.
There are a number of key aspects to engendering a strong partnership. From the outset, the client and the vendor must agree stringent terms and conditions, confirm the availability of professional resources, and clearly define the appropriate channels of communication. Risk mitigation actions should also be formalised early on, as this can ensure more rapid resolution should an issue arise during delivery.
Implementing new technology represents a substantial and long-term investment for a financial institution. It therefore makes sense to invest the time and resources to nurture a strong and successful partnership too.
With the vision, program governance and buy-in established, the next step in a transformation project is to specify a technical architecture and determine how it will be delivered.
Historically, financial institutions have tended to build systems in-house. However, experience shows that a number of issues can arise when taking this approach. A common pitfall is that the complexity of data management requirements calls for considerable customization, which can result in longer time to market and poor performance. Opting for a ready-made platform can avoid this.
As with previous steps, it’s critical that a transformation program is approached as a business- rather than technology-led initiative; it’s important that the solution is not more complicated than what the business actually needs. Remaining focused on core attributes, e.g. the ability to acquire data internally and externally, at speed and with the flexibility to make changes as required, is therefore key.
Flexibility is especially important when it comes to data models; however, conventional models are anything but. This is problematic and inefficient because rigid data models come with their own storage and management techniques.
Financial institutions should also avoid building internal data models before the technical architecture has been established. It is much easier to start with the base-level architecture – i.e. data acquisition, storage and distribution – and create a data model on top, than to find technology to fit a rigid data model and then discover that legacy data cannot easily be migrated across to the new platform.
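The sketch below illustrates this “architecture first, model on top” idea in Python. It is only a rough illustration under our own assumptions – the class and attribute names are invented, not a vendor’s API. The storage layer is generic and source-stamped, while the data model is a thin mapping layered on top that can grow without reshaping the storage:

```python
# Generic acquisition/storage layer that knows nothing about asset classes;
# the "data model" is a mapping defined on top. Names are illustrative only.

from datetime import datetime, timezone

class ReferenceDataStore:
    def __init__(self):
        self._records = {}  # key: (entity_id, attribute) -> list of versions

    def acquire(self, entity_id, attribute, value, source):
        self._records.setdefault((entity_id, attribute), []).append(
            {"value": value, "source": source, "as_of": datetime.now(timezone.utc)}
        )

    def latest(self, entity_id, attribute):
        versions = self._records.get((entity_id, attribute), [])
        return versions[-1]["value"] if versions else None

# Extending the model to a new asset class means adding entries here,
# not rebuilding the storage layer underneath.
BOND_MODEL = ["isin", "issuer", "coupon", "maturity_date"]

store = ReferenceDataStore()
store.acquire("XS0000000001", "coupon", 4.5, source="vendor_a")
print({attr: store.latest("XS0000000001", attr) for attr in BOND_MODEL})
```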
Getting the balance right requires specialist skills and infrastructure, and it makes commercial sense for financial institutions to buy rather than build this. Your provider should have proven expertise and R&D capability to ensure an agile deployment. As well as giving access to best-of-breed technology, the buy approach has the added advantage that internal resources can be focused on tasks that really add value to the business, carving out and maintaining a true competitive advantage.
Multiple IT teams manage reference data within financial institutions. In many cases, there will be one team serving the front office, another supporting the opening of accounts and on-boarding of clients and legal entities, and others serving middle and back office functions.
For a transformation program to succeed, these artificial boundaries and silos need to be removed. This means not only integrating data from disparate systems spread out across the organization, but also bringing all of the various IT teams together under a single group that has a complete understanding of the overall vision for managing reference and pricing data.
Creating a program office staffed by best-in-class specialists in the data space and governed by rigorous processes is the best way to address this challenge. In addition to realizing synergies and lowering operating costs, this approach can avoid any issues that might arise from internal politics when changing the organizational structure further into the program.
It is also critical to consider how downstream systems, as well as legacy data systems, will be integrated with the new centralized platform. First, the centralized platform must have the ability to accept data feeds from multiple sources via a single interface. To achieve this, it is likely that new validation rules will need to be applied in legacy systems to ensure backward-compatibility.
Second, because there are multiple sources, all legacy data must be cleansed and date-marked to ensure consistency. The operations team, for example, can be responsible for the acquisition, quality and cleansing of legacy data, although this can be handled by a specialist third party if necessary.
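As a rough, purely illustrative sketch of this cleansing and date-marking step (the feed format and field names are assumptions, not a specification), legacy records arriving through a single interface might be normalised, stamped with their source and load time, and routed to an exception queue when mandatory fields are missing:

```python
# Normalise legacy records, stamp source and load time, reject incomplete rows.
# The CSV layout and mandatory fields are hypothetical.

import csv
from datetime import datetime, timezone
from io import StringIO

MANDATORY = {"instrument_id", "currency"}

def load_legacy_feed(raw_csv, source_system):
    loaded, rejected = [], []
    for row in csv.DictReader(StringIO(raw_csv)):
        cleaned = {k.strip().lower(): (v.strip() or None) for k, v in row.items()}
        if any(cleaned.get(field) is None for field in MANDATORY):
            rejected.append(cleaned)          # routed to an exception queue
            continue
        cleaned["source_system"] = source_system
        cleaned["loaded_at"] = datetime.now(timezone.utc).isoformat()
        loaded.append(cleaned)
    return loaded, rejected

feed = "instrument_id,currency,price\nUS0000001, USD ,101.5\n,EUR,99.0\n"
ok, bad = load_legacy_feed(feed, source_system="legacy_back_office")
print(len(ok), "loaded;", len(bad), "rejected for cleansing")
```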
Finally, it should be the responsibility of the program office to coordinate all the relevant teams and the multiple projects that make up the program. Successful delivery hinges on how well the integration is articulated, with accountabilities clearly defined and designated, administered centrally, and with rigorous program management.
As I’ve said before, it is critical that a financial institution does not underestimate the complexity of reference data – i.e. the different vendor data models and asset classes and inconsistencies in data quality (accuracy, completeness and timeliness). Integration and backward-compatibility with legacy systems are therefore the watchwords for this step.
Very often the technology footprint in a large financial institution is a spider web of legacy technology, with multiple inconsistent data sets, distribution methods and downstream systems lacking efficient integration. Therefore, before you get too far down the road of any change project, you need to know where you’re headed; if not, you’ll just compound the current problem.
While it might seem an obvious point, one of the most common stumbling blocks in major transformation programmes occurs when they progress directly from the high level business need to project execution without first establishing a clear picture of the desired end state. A conceptual architectural vision should fill any gap between business needs and implementation.
It is imperative that this vision is ‘the firm-wide vision’ and not just a technology one. Once created, it should be socialized internally and presented to and approved by the governance body.
The vision should be built upon the notion that data in general, and reference data in particular, follow a very similar pattern of acquisition, storage and distribution (a simple sketch of the pattern appears after the list below):
Data is acquired from internal or external sources
Data is stored and quality rules configured by the financial institution are then applied (rules can vary according to source or product type, but should adhere to a core set of principles – e.g. validation, workflow, enforcement, cleansing)
Data must be distributed to consumers – e.g. trading desks, operations teams, or for audit and research purposes
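The sketch below wires these three stages together in Python. It is a minimal illustration of the pattern only – the sources, rules and consumers are placeholders rather than any specific system:

```python
# Acquire / store-and-validate / distribute, in its simplest possible form.

def acquire(sources):
    for source in sources:
        yield from source()                     # pull records from each source

def store_and_validate(records, rules):
    golden, exceptions = [], []
    for record in records:
        failures = [name for name, rule in rules.items() if not rule(record)]
        (exceptions if failures else golden).append((record, failures))
    return golden, exceptions

def distribute(golden, consumers):
    for record, _ in golden:
        for consumer in consumers:
            consumer(record)                    # e.g. trading desk, ops, audit store

# Example wiring with toy sources, one rule and one consumer
sources = [lambda: [{"isin": "US0000001", "price": 101.5}, {"isin": "", "price": 99.0}]]
rules = {"isin_present": lambda r: bool(r.get("isin"))}
golden, exceptions = store_and_validate(acquire(sources), rules)
distribute(golden, consumers=[print])
print("exceptions for review:", len(exceptions))
```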
Financial institutions must also recognize the correlation that exists between different reference data sets. For example, legal entities may also have trading relationships with the firm and therefore accounts and standard settlement instructions must also be defined. The legal entity will also issue asset classes and have corporate actions against it. If an institution could navigate through this data consistently, counterparty exposure and credit risk calculations would be simplified.
Therefore, by understanding the similarities and correlations, firms can manage reference data in its entirety. Remember ‘great engineering is simple engineering’ and this approach streamlines management and reduces running costs.
It also means that any data exceptions can be quickly escalated for review and resolution, while referential integrity is assured between all data assets – e.g. legal entity, issuer/obligor, corporate actions, instruments/securities, accounts/standing settlement instructions.
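As an illustration of how these correlations might be represented – using invented identifiers and a deliberately simplified structure – a legal entity record can carry links to the instruments it issues, its corporate actions and its accounts and SSIs, so that exposure questions become a matter of navigation rather than reconciliation:

```python
# Simplified linked model of a legal entity and its related reference data.
# All identifiers are made up; a real model would hold far richer attributes.

from dataclasses import dataclass, field
from typing import List

@dataclass
class LegalEntity:
    lei: str
    name: str
    issued_instruments: List[str] = field(default_factory=list)   # ISINs
    corporate_actions: List[str] = field(default_factory=list)    # action ids
    accounts: List[str] = field(default_factory=list)             # account/SSI ids

entity = LegalEntity(
    lei="5493001EXAMPLE000001",
    name="Example Bank plc",
    issued_instruments=["XS0000000001", "US0000000002"],
    corporate_actions=["CA-2012-0005"],
    accounts=["ACC-001"],
)

# With referential integrity in place, a counterparty exposure question starts
# from one consistent record rather than several disconnected systems.
print(f"{entity.name}: {len(entity.issued_instruments)} instruments, "
      f"{len(entity.accounts)} settlement accounts")
```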
Mapping this out in a clear and accessible way ensures the necessary buy-in from the data governance team and enables the change team to specify technology architecture that’s appropriate to both the conceptual data model and the desired end state.
The governance model is a founding pillar for any large-scale programme of transformation such as upgrading a firm’s data management processes and infrastructure. However, in the politically-charged environment of a financial institution, securing the right people to participate in a steering committee can present a catch-22: business heads must be involved from the start in order to launch the transformation programme; but a formal governance structure can only be established once a project has been launched.
An interim steering committee can prove a good way to overcome this dilemma. This should comprise senior professionals representing all of the different stakeholders – e.g. the chief operating officers or managing directors for each of the business units (equities, fixed income, derivatives etc.), as well as the global head of operations and global head of change. The latter two enable effective liaison between the technology and business change functions, something that is critical to project success.
With an interim steering committee in place, the programme can commence. The challenge is then to formalise a long-term governance structure. Experience suggests that a three-level model tends to prove the most effective, to the extent that financial institutions have retained this governance structure to manage all subsequent data management initiatives:
Executive – sets strategy and vision, establishes/approves policies, governs practices, unites IT and business, and drives the organisation to achieve its strategic objectives
Operate – develops and implements data quality procedures and uses IT to support its data quality efforts on a day-to-day basis
Change/Projects – represents the many initiatives that co-exist within the organisation to improve quality by developing tools and techniques that support effective data management
In essence, the Executive sets the vision and leads the budget and funding. Meanwhile, the ‘Operate’ level introduces a ‘chief data officer’ (CDO) function. Often a new role at financial institutions, CDOs are responsible for defining the standards and policies to be adhered to, as well as the scope of projects that need to be launched to successfully execute on a programme.
The third and final level (Change/Projects) comprises a number of smaller working groups and committees tasked with specifying the requirements for each aspect of the programme.
When establishing the governance model, it is also important that a clear vision for the reference data model begins to take shape. This is necessary to ensure a successful transition from launch phase to programme execution.
Following the 2008 financial crisis, Basel III was created to enhance the capacity of banks to withstand market shocks and is considered core to a number of reforms within the global banking industry. Yet the views of politicians and the banking sector with regard to the new rules appear to be on an ever-divergent path. Cabinet ministers’ commitment to the rules was underlined last month when Germany’s government agreed to move forward with introducing stricter capital requirements. They are calling for their European counterparts to do the same.
However, banks have clear misgivings about the rules, citing some key unintended consequences, especially in the shipping and aircraft financing sectors. A senior Bank of England official, Andy Haldane, has even gone as far as to suggest that Basel III has become too complicated to be effective, and that regulators should go back to the drawing board to design a more minimalist approach. In reality, this type of U-turn is not something we are used to seeing from the regulatory community. Therefore, despite their disquiet, banks must begin preparations in earnest to ensure they are not caught off guard. A wait-and-see approach will not do.
While Haldane’s suggestion that “less is more” might curry favor with the banking community, the same cannot be said for financial institutions’ approach to compliance in light of the myriad of new regulations they face. Effective and efficient data management sits at the heart of this and without the right infrastructure in place, banks will struggle to deal with the increased regulatory scrutiny resulting from, but by no means limited to, Basel III. After all, the quality of information management relies heavily on the data that feeds it.
Although conceptually simple, reference data is extremely far reaching. Managing it in a way that delivers maximum value to the business requires the combined consideration of people, processes and infrastructure across the breadth of a financial institution.
Poor management of reference data can be very costly. It can lead to inconsistencies, reconciliation issues, breaks in workflow and significant time delays in distributing data to downstream systems. This is problematic for the business resulting in, for example, an increased number of failed trades as decisions are made on out of date information.
If the business doesn’t appreciate the criticality of reference data, and its dependence on it, then the ROI on any automation project will not be clear. This is why my first step in a data management program is to engage the business by exposing the problems that result from a lack of straight-through processing and from manual intervention when cleansing, validating or reconciling reference data.
It is essential to conduct an in-depth review of current infrastructure and business requirements. This will entail analysis of data flows across the front, middle and back office to identify any roadblocks or weak spots. It also calls for interviews with both the business and IT to highlight all of the operational pain points. These should then be categorized – for example into performance, data quality and functionality – to identify precisely where the problems lie.
Using this approach, it is possible to uncover not just the obvious pain points, but the ‘hidden costs’ as well. This will reveal the true cost of doing nothing compared with the benefits of building a consistent approach that reduces costs, dramatically improves data quality and reduces exposure to, and management of, risk.
I call this the ‘wow’ factor because it helps to get attention and create awareness at all levels; it is an extremely effective method of obtaining the necessary level of buy-in from global heads and the COOs responsible for each individual business unit. Only with buy-in at senior level is it possible to move to the next step – establishing governance…
Data has historically been considered simply a by-product of business systems. At financial institutions, and at global banks in particular, the approach to data, and especially reference data, has largely focused on how to make it work, not necessarily on how to make it right.
Reference data, whether it’s legal entity data, instrument data, pricing, corporate actions or standing settlement instructions (SSIs), has become mission-critical for banks. It underpins almost any process or activity in the core areas of accounting, trading and risk management. It is also essential for auditing and reporting, as well as ensuring compliance and transparency.
Challenges including intersystem connectivity, outdated distribution methods and a diversity of product reference data sets make it extremely difficult for banks to optimize their reference data without adequate data management software.
This is why a growing number of banks are looking to centralize the management of their reference data, eliminating organizational and technological silos to realize seamless integration.
However, a high proportion of data management initiatives fail because banks underestimate the scale and complexity of the challenge and adopt a technology-led approach rather than addressing the program as a business-led initiative. Without the buy-in of key sponsors across the breadth of the institution, a sub-optimal outcome is inevitable.
With 25 years’ experience on the banking frontline, Asset Control’s Managing Director for Global Markets, Paolo Mittiga, will be writing a series of blogs examining in detail the seven critical steps to optimizing a reference data management change project:
1. Engage business, prove the business case and use specialized resources
2. Establish data governance with clear responsibilities and accountabilities
3. Create vision, ensure it is socialized and sold to the business and data governance
4. Establish team, clear accountabilities and define the program office
5. Create architecture, decide buy vs. build and create roadmap
6. Establish a clear partnership with vendor
7. Ensure execution
These steps are based on Paolo’s real-world experiences in successfully delivering firm-wide data management initiatives at global banks. He has learnt firsthand the success that sticking to them can bring.
Paolo will therefore provide a practical guide to optimising a data management delivery based on these ‘seven data wins’ and demonstrate how large banks can improve data quality, service levels and technology management.