The transformative potential of generative AI for the banking sector was a subject we explored keenly earlier in the year. Given the whirlwind pace of advancement in the technology sector, six months can feel like a lifetime, so it is worth taking a moment to look back and assess how generative AI has shaped the banking industry.
The crux of generative AI's promise lies in its capability to simulate human-like conversation, producing answers and solutions from the contextual and conversational input a user provides. Its applications range from enhanced customer service and tailored product offerings to the early detection and prevention of fraudulent transactions. The core idea remains to elevate the traditional banking experience, infusing it with responsiveness, personalisation, and security.
But we must now ask the question: is generative AI in banking a game-changer or just industry buzz? In short, I agree with Gartner's Hype Cycle assessment that we are currently near the Peak of Inflated Expectations. As such, the business outcome and the overall business case are critical to any implementation of generative AI.
As the year progressed, there were plenty of examples of early-stage adoption at banks, as well as technology companies integrating generative AI capabilities into various areas of banking. The optimal, and now entirely plausible, outcome has shifted beyond a chatbot simply answering a customer's query: that chatbot can now be set up to understand the nuances of customer sentiment, offer real-time solutions and, in many instances, pre-empt queries before they are even posed. The technology's ability to understand context has improved significantly, opening up options to reduce instances of miscommunication.
There is also the value proposition of fraud detection and prevention. Traditional fraud detection systems operate on known patterns; generative AI can create synthetic datasets that train models to recognise new and evolving fraud techniques, enhancing the robustness of fraud detection systems.
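To make the idea concrete, here is a minimal sketch of synthetic-data augmentation for fraud detection. All figures and feature names are hypothetical, and a simple fitted Gaussian stands in for a true generative model; it illustrates the pattern of oversampling the scarce fraud class with synthetic examples before training a classifier (using numpy and scikit-learn, which are assumed available).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Toy "real" transactions with two hypothetical features:
# amount and hour-of-day. Fraud cases are deliberately scarce.
legit = rng.normal(loc=[50.0, 14.0], scale=[20.0, 4.0], size=(1000, 2))
fraud = rng.normal(loc=[900.0, 3.0], scale=[150.0, 1.5], size=(20, 2))

# Stand-in "generative model": fit a Gaussian to the scarce fraud class
# and sample synthetic fraud examples from it to rebalance the training set.
mu, cov = fraud.mean(axis=0), np.cov(fraud, rowvar=False)
synthetic_fraud = rng.multivariate_normal(mu, cov, size=500)

X = np.vstack([legit, fraud, synthetic_fraud])
y = np.array([0] * len(legit) + [1] * (len(fraud) + len(synthetic_fraud)))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A large transaction at an unusual hour should now score as suspicious.
print(clf.predict([[950.0, 2.0]])[0])  # expected: 1
```

In practice the Gaussian sampler would be replaced by a trained generative model, and the synthetic examples would be validated against held-out real fraud before being trusted.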
In credit risk, the technology's ability to generate synthetic data that mirrors real-world credit situations can give banks deeper insight and foster a more sophisticated decision-making process. Moreover, by simulating diverse customer behaviours, banks can anticipate client needs with greater precision, fine-tune their services and, most importantly, optimise their credit decisioning.
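One way such simulation feeds decisioning is by sweeping a policy cutoff over a synthetic applicant pool. The sketch below is purely illustrative: the score distribution and the risk curve linking score to default probability are assumptions, not any bank's real policy.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical synthetic applicant pool: a credit score in [0, 1] and a
# default probability that falls as the score rises (assumed relationship,
# for illustration only).
n = 100_000
score = rng.beta(5.0, 2.0, size=n)            # assumed score distribution
default_prob = 0.30 * (1.0 - score) ** 2      # assumed risk curve
defaulted = rng.random(n) < default_prob

def book_metrics(cutoff):
    """Approve everyone at or above the cutoff; return the approval rate
    and the realised default rate on the approved book."""
    approved = score >= cutoff
    return approved.mean(), defaulted[approved].mean()

# Sweeping the cutoff exposes the volume-versus-risk trade-off a lender tunes.
for cutoff in (0.3, 0.5, 0.7):
    approval, default_rate = book_metrics(cutoff)
    print(f"cutoff={cutoff:.1f}: approve {approval:.1%}, default {default_rate:.2%}")
```

Raising the cutoff shrinks the approved book but lowers its default rate; a generative model's contribution is to make the synthetic pool mirror real applicants closely enough for this trade-off curve to be trusted.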
However, generative AI comes with its own set of concerns. While synthetic data can be a potent tool, over-relying on it without rigorous validation can lead to misleading outcomes. Real-world data has nuances that generative models will not always capture fully.
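That validation step can itself be automated. As a minimal sketch, assuming scipy is available, a two-sample Kolmogorov-Smirnov test can flag synthetic data whose marginal distribution drifts from the real data; the distributions below are hypothetical stand-ins for a feature such as transaction amount.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical real data: a heavy-tailed feature such as transaction amount.
real = rng.lognormal(mean=4.0, sigma=0.8, size=5000)
# One synthetic sample drawn from a matching model, one from a model
# that misses the heavy tail entirely.
good_synth = rng.lognormal(mean=4.0, sigma=0.8, size=5000)
bad_synth = rng.normal(loc=60.0, scale=20.0, size=5000)

def passes_validation(real_data, synthetic, alpha=0.01):
    """Two-sample Kolmogorov-Smirnov check: reject synthetic data whose
    marginal distribution differs detectably from the real data."""
    statistic, p_value = ks_2samp(real_data, synthetic)
    return p_value > alpha

print(passes_validation(real, good_synth))
print(passes_validation(real, bad_synth))  # mismatched shape: rejected
```

A single marginal test is only a first gate; production validation would also compare joint distributions, correlations, and downstream model performance on real hold-out data.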
Additionally, generating synthetic personal financial data, even when de-identified, can raise ethical concerns. There is a fine line between simulating realistic data for model training and infringing upon personal data rights, so transparency of sources and controls over data will become more critical. Regulators, too, will be wary of financial models built largely on synthetic data, and will want to understand the controls and testing that guard against bias, much as they assess the application of credit policy. They will demand greater transparency on how AI models operate, posing challenges for banks that may struggle to explain intricate AI decisions.
In conclusion, generative AI in banking is clearly not a passing trend: it is a tool with immense potential. But, as with any tool, its value is gauged by how effectively it is used and by the business outcomes and improvements achieved. It is not the be-all and end-all, and it will often need to be combined with other AI models and technologies to achieve the desired outcomes. While there is no refuting the potential value it can deliver, it is vital to temper expectations and remain vigilant of the pitfalls.