No 2. The Evolution of Core Banking Technology – From Mainframes to Beyond Cloud

To understand why coreless banking is such a paradigm shift, we must travel back in time and track the evolution of banking channels, products and services. Let’s rewind 60 years, to a world before core banking, when the first computers were used to automate paper ledgers.

In 1959, Barclays became the first bank to buy an electronic computer, an Emidec 1100, which at the time cost £125,000. After purchasing a second computer, Barclays opened the first-ever banking computer centre in 1961. During the ’60s, automated bookkeeping machines were introduced to branches. This was the first time branches could update ledgers on a central computer, and the first time branches used technology at all. In 1964, Barclays moved its central systems to the IBM 1460 and its branches to the IBM 360.

The 1967 arrival of the world’s first ATM was soon followed by the worldwide rollout of linked proprietary intrabank and shared interbank ATM networks. Alongside cash withdrawals, evolving ATM functionality enabled deposits, balance enquiries, currency exchange, mobile phone top-ups and contactless cash withdrawals. In 1968, BACS (Bankers’ Automated Clearing Services) was created to automate the clearing of payments between banks.

As you can see, the banking technology revolution really started in the 1960s, but it was not until the 1970s that core banking emerged. Until then, technology had been used to automate bookkeeping and ledgers. Banks then developed Centralised Online Real-time Exchange/Environment (CORE) solutions, enabling account balances to be updated in real time on central bank computers and interest on loans and deposits to be calculated. These systems evolved over the following decades to handle regulatory reporting and other features, such as credit risk. Around this time, the first vendor-developed core banking systems were launched, enabling smaller banks to afford the technology to automate their operations. Together, these bank-built and vendor solutions formed the first generation of core banking, defined by a single centralised computer (the mainframe).

It was not until the late eighties that banks introduced PCs. These were initially used as low-cost terminals to access and update central computers; in the early ’90s, they were networked within branches and leveraged for their ability to process some logic locally. Lloyds Bank was one of the first to roll out PCs at scale, deploying over 40,000 machines running Microsoft Windows 2.1 across more than 2,000 branches, initially networked using a proprietary Token Ring network.

By the mid-’90s, networks moved to Ethernet (a non-proprietary, open standard) and Windows 3.11, which supported Ethernet and enabled multiple applications to run simultaneously. Around the same time, the first non-mainframe databases running on PCs (acting as servers) emerged. With a much lower cost base than mainframes, the client-server revolution created the next generation of core banking solutions, developed by vendors like Temenos. These leveraged ever-increasing processing and memory capabilities, at lower cost, together with databases that ran on servers. Like the first-generation core banking solutions, these second-generation solutions were designed to serve only bank staff.

Banks also began centralising functions such as cheque processing and the processing of customer forms in regional service centres. Call centres were created around the same time to provide telephone banking, often on separate systems integrated with the core banking platforms.

Whilst the Internet had been around for some years, it only started to take off in the late ’90s with the arrival of the Mosaic graphical browser (whose creators went on to found Netscape). Wells Fargo became the first bank in the world to put basic account services onto the Internet in 1995, and Europe’s first banks launched services in 1996. Until then, mainframes ran code written in specific languages supported by the mainframe vendor, and the same was true of Windows and other operating systems running on PCs and servers. In 1995, Sun Microsystems launched Java, the first language designed to run on almost any computer, with its “Write Once, Run Anywhere” vision.

The combination of Java, the Internet and the evolution of server technology spawned the third generation of core banking solutions. These systems were distributed in design, with processing and data able to be placed and run on multiple computers; servers were typically dedicated to software (application servers) or data (database servers). This allowed systems to scale better and to interact across the Internet regardless of the hardware and software they ran on. It is important to understand that the client-server revolution worked on a single network owned and operated by the bank; the Internet allowed different companies to connect their systems in a secure but non-proprietary way.

Some vendors of core banking solutions were able to migrate or wrap their client-server solutions into this new architecture, updating the technology and leveraging more scalable hardware. Before the Internet revolution, vendors had already started providing outsourced data centres for smaller banks, connecting them via virtual private networks. With the Internet, banks could connect their staff and customer systems online, which led to the first Cloud systems. Initially, vendors ran a dedicated set of servers for each bank; eventually, technology allowed them to run multiple banks on a single set of servers.

Around 2017, pure digital banks (neobanks) like Starling and Monzo, and vendors like Thought Machine and Mambu, saw the opportunities Cloud technology could provide to core banking. Designing for the Cloud meant that software could be componentised (using microservices) and so did not have to be developed, deployed or maintained as a single monolithic solution. Microservices allow different parts of core banking to be scaled and managed individually, leveraging software and hardware scalability, and mean that only the services that have been updated need to be deployed, not the whole system. Hence, the fourth generation of core banking systems is defined as being Cloud-native and fully built from microservices.
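To make that deployment contrast concrete, here is a minimal, purely illustrative Python sketch. The service names and the `services_to_deploy` helper are hypothetical (not any vendor’s API); the point is simply that in a microservice architecture only the components whose version changed are redeployed, whereas a monolith must ship as one unit.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    version: int

def services_to_deploy(running, updated):
    """Return only the services whose version changed.

    A microservice release redeploys just these components;
    a monolithic release would redeploy everything."""
    running_versions = {s.name: s.version for s in running}
    return [s for s in updated if s.version != running_versions[s.name]]

# Illustrative core banking components (hypothetical names)
running = [Service("ledger", 1), Service("payments", 1), Service("cards", 1)]
updated = [Service("ledger", 1), Service("payments", 2), Service("cards", 1)]

print([s.name for s in services_to_deploy(running, updated)])  # ['payments']
```

In practice this is what container orchestration gives Cloud-native cores: each component is versioned, scaled and rolled out independently of the rest.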

Such a technological shift presents a huge opportunity, as neither banks nor incumbent core vendors can simply migrate their existing solutions to fully exploit the newly available technology. Such systems have to be developed from scratch, which in turn allows greater flexibility, such as defining and launching products faster.

In some cases, these solutions support multi-tenancy, that is, multiple banks running on a single platform. However, they have yet to support the entire product range of banks, and they rely on a bank being willing to take on the risk and cost of migrating its core banking solution, or of creating a new bank, in order to use the new solution.

Technology and banking have continued evolving, and now the fifth generation of core banking is upon us with coreless banking. We’ll be explaining what this generation addresses that previous generations could not, its benefits, and why it will become the de facto way banks adopt new technology going forward.

 

 


Santosh Radhakrishnan, Chief Commercial Officer, XYB by Monese

This post is from a series of posts in the group:

Banking
