The advent of big data and the cloud was supposed to make distributing applications and data easier for global banks. A key selling point was a single location from which to run applications and store data, making both cheaper and easier to manage. However, data protection and privacy regulations such as the General Data Protection Regulation (GDPR), the Australian Privacy Act and the Japanese Personal Information Protection Act have thwarted those promises.
Instead, global banks need multiple environments based on country or regional requirements, or a hybrid cloud approach, in which to store their data. Add to this the complexities of implementing a global data management programme, and a recipe for disaster is in the making.
From a data perspective, the goal of the cloud is data globalisation, where users are given access to a golden copy of data regardless of where they are located. In reality, however, the obstacles imposed by many countries are driving data localisation, in which a combination of multiple on-premises and cloud environments is set up to store the data. These multiple instances create uncertainty around the quality and accuracy of the data, which undermines its credibility.
In theory, the data protection and privacy regulations are supposed to create tight controls on flows of personal data outside their respective countries through requirements such as data centres needing to be located inside each country. However, this fails to recognise that the physical location of the data has no inherent impact on privacy or security. For example, if a bank is subject to European laws (e.g. GDPR), then the privacy risks of storing Europeans’ data inside the EU are no less than those of storing it outside. The bank would still have to treat the data according to the rules of GDPR. These types of data-residency requirements create inefficiencies in technology infrastructure.
Such country/region-specific regulations, which result in data localisation, are being introduced at a time when global banks are actively pursuing machine learning and artificial intelligence (AI) capabilities to boost productivity. The governments creating this environment need to understand that these regulations will come at a significant cost in terms of stifled innovation and productivity.
For machine learning and AI to be successful, organisations need access to vast amounts of data. Regulations that overly control the use of data, in effect, shackle AI. The core economic value of AI lies in its ability to automate complex processes, de-risk data environments, and increase the quality of the data output. The act of localising data will make it much harder for the banks to reap the benefits promised by AI.
Another issue created by regulations is the fragmentation of implementations. As noted above, many new cloud-based infrastructure strategies have a very region-centric or country-specific flavour. This causes implementations to become fragmented and limits the true benefits of data globalisation and cloud implementations. For example, housing data behind a firewall in a country-specific data centre creates a massive burden on the central infrastructure teams due to the significant maintenance and support costs. It is, in essence, the exact opposite of why cloud computing came about – to enable databases or applications to be set up wherever or whenever they were needed.
Regardless of where the data is physically located, the treatment of the data must go through the appropriate processes and controls and be subject to the required level of security. If all this happens, then the need for data to be stored in a particular country or region is mitigated.
This is where the data virtualisation layer comes in. It is a concept that, when implemented appropriately, provides a single point of connectivity through which consumers obtain all the data they require. Specific regulatory and privacy requirements, as well as entitlement-based ownership, need to be at the forefront of the design when determining the implementation methodology. Regardless of whether the data sits on a public cloud, a private cloud, or a mix of both, a single point of access creates efficiencies. Once the core data virtualisation layer is in place, and the treatment of the data can be managed via user and system entitlements, the need to hold distributed copies of data across different regions becomes immaterial.
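To make the idea concrete, here is a minimal sketch, in Python, of how such a layer might be structured: one query entry point, entitlement checks and privacy controls applied uniformly, and routing to whichever environment holds the golden copy. The class names, datasets, connectors and masking rule are hypothetical, included purely for illustration rather than as a reference implementation.

```python
# Hypothetical sketch of a data virtualisation layer: a single point of
# connectivity, with entitlement checks and privacy controls applied the
# same way regardless of where the data physically sits.

from dataclasses import dataclass

@dataclass
class DataSource:
    name: str    # e.g. "eu-private-cloud", "apac-on-prem" (illustrative only)
    region: str  # physical location, invisible to the consumer

    def query(self, dataset: str) -> list[dict]:
        # Placeholder for a real connector (database driver, REST API, object store).
        return [{"dataset": dataset, "source": self.name, "name": "J. Smith"}]

class VirtualisationLayer:
    """Single entry point; routing and controls are hidden from the caller."""

    def __init__(self, catalogue: dict[str, DataSource], entitlements: dict[str, set[str]]):
        self.catalogue = catalogue        # dataset name -> backing source
        self.entitlements = entitlements  # user -> datasets they may read

    def query(self, user: str, dataset: str) -> list[dict]:
        # 1. Entitlement check: the same rule applies wherever the data lives.
        if dataset not in self.entitlements.get(user, set()):
            raise PermissionError(f"{user} is not entitled to {dataset}")
        # 2. Route to whichever environment holds the golden copy.
        rows = self.catalogue[dataset].query(dataset)
        # 3. Apply privacy controls (here, masking of personal fields) uniformly.
        return [self._mask(row) for row in rows]

    @staticmethod
    def _mask(row: dict) -> dict:
        return {k: ("***" if k in {"name", "email"} else v) for k, v in row.items()}

# Usage: the consumer never needs to know the data sits in an EU private cloud.
layer = VirtualisationLayer(
    catalogue={"trades_eu": DataSource("eu-private-cloud", "eu-west")},
    entitlements={"analyst_1": {"trades_eu"}},
)
print(layer.query("analyst_1", "trades_eu"))
```

The point of the design is that the consumer's experience, and the controls applied to the data, are identical whether the backing source is an on-premises database or a public-cloud store; physical location becomes an operational detail rather than a privacy control.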
Finally, there needs to be an understanding of what data truly needs to be localised and why. For example, data that does not need to be available locally in real time, but must be available by the end of the day, implies one type of storage arrangement, while data that must be accessed and cached intraday implies quite another.
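One way to reason about this is to classify each dataset by when it genuinely must be available and whether a residency rule actually applies, and let that classification drive the storage strategy. The sketch below, with purely illustrative dataset names and policy fields, shows the shape of such a decision.

```python
# Hypothetical sketch: classify datasets by availability requirement and
# residency obligation, then derive a storage strategy from that policy.

from dataclasses import dataclass
from enum import Enum

class Availability(Enum):
    INTRADAY = "intraday"      # needed in near real time during the day
    END_OF_DAY = "end_of_day"  # only needed once daily processing completes

@dataclass
class DatasetPolicy:
    name: str
    availability: Availability
    locally_regulated: bool    # does a residency rule actually apply?

def storage_strategy(policy: DatasetPolicy) -> str:
    if policy.availability is Availability.INTRADAY and policy.locally_regulated:
        return "local intraday cache in-region, replicated to the central store overnight"
    if policy.availability is Availability.INTRADAY:
        return "central store with a low-latency access layer"
    return "central store, end-of-day replication only"

for p in [
    DatasetPolicy("client_positions_jp", Availability.INTRADAY, locally_regulated=True),
    DatasetPolicy("risk_aggregates", Availability.END_OF_DAY, locally_regulated=False),
]:
    print(p.name, "->", storage_strategy(p))
```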
Public cloud onboarding has really only just begun in the financial services industry. The original implementation strategies and intended uses of the cloud have already changed from the very early days, and significant changes to the environment should be anticipated in the foreseeable future. Security requirements will inevitably tighten, as will the sophistication of the ways in which data can be accessed. Additionally, as regulations mature, there will be further changes to how data use is monitored and measured.
There is no doubt that the use of cloud computing in financial services will continue to grow at an exponential rate. New cloud-based architectures will create efficiencies and innovations and allow firms to grow. However, none of these efficiencies and innovations will happen unless the regulations start to align with the technology and allow for data globalisation.