
A Glimpse into the Future: Next-Gen AI and LLM in AML Compliance

From impressive AI-assisted artwork and music generation to intelligent virtual agents that provide efficient responses to customer service queries, recent breakthroughs in artificial intelligence (AI) are transforming the way we approach diverse areas of our lives. The latest developments in AI have opened up a whole new world of possibilities, and the anti-money laundering (AML), KYC, and KYB compliance scene is no exception.

There are already various applications of AI in RegTech: traditional uses include transaction monitoring to identify deviating usage patterns in bank accounts, or risk evaluation across networks of companies and individuals. The latest crop of large language models (LLMs), such as ChatGPT and Azure OpenAI, is creating opportunities far beyond these traditional use cases.

In line with this trend, AI has become a big topic of conversation, particularly for compliance teams. What are the challenges and use cases of the recent generative AI releases for compliance? How can compliance teams use the latest AI tools to further elevate systems and expedite processes? These are just some of the questions we keep hearing across the industry.

 

Key challenges for AI implementation in compliance 

The first step to managing challenges and risks is awareness: once they are recognised, they can be mitigated and controlled. When it comes to the use of AI in compliance, ethical considerations and privacy risks are the main areas of concern, and controls, essentially in the form of human checks and balances, need to be in place to manage them.

One of the best ways to overcome or avoid most ethical and privacy risks is to use AI and LLMs in a very granular and specific way within a highly controlled environment. For instance, feeding a large amount of personal data into the public ChatGPT and then asking it a broad question such as "Is this customer risky?" is a recipe for disaster, guaranteeing all sorts of problems and biases will arise. Instead, consider using AI and LLMs to solve specific parts of the bigger compliance problem: identifying share classes in articles of association, for example, or translating the foreign-language titles of controlling entities without including their names or other personal data. These are tasks where LLMs can provide immense value without any personal data being transferred. In addition, strictly privacy-controlled LLM environments are by now readily available, with Microsoft Azure OpenAI and ChatGPT Enterprise being just two examples.
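To make the idea concrete, here is a minimal sketch of such a narrow, privacy-scoped call, assuming a private Azure OpenAI deployment and the OpenAI Python SDK (v1.x); the endpoint, API version, and deployment name are placeholders, and no names or other personal data are sent.

```python
import os
from openai import AzureOpenAI  # assumes the OpenAI Python SDK (v1.x) with Azure support

# A private Azure OpenAI deployment keeps prompts inside a controlled environment.
# Endpoint, API version, and deployment name below are placeholders.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# Narrow, specific task: translate a foreign-language corporate title only.
title_only = "Geschäftsführender Gesellschafter"

response = client.chat.completions.create(
    model="<your-deployment-name>",  # deployment name, not a public model
    messages=[
        {"role": "system", "content": "Translate corporate role titles into English. Reply with the translation only."},
        {"role": "user", "content": title_only},
    ],
)
print(response.choices[0].message.content)  # e.g. "Managing Partner"
```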

 

AI and LLM use cases to elevate compliance  

One of the biggest benefits of using AI is its capacity to boost general efficiency. At a general level, compliance teams, like all other employees, can easily incorporate AI to improve the efficiency of day-to-day tasks such as writing e-mails or reporting to management. An even more interesting application can be found in staff training. Prompted to act as a teacher on a particular topic, an AI can concisely and clearly explain the relevant information, hold a mock exam, assess the user's answers, and then re-explain the areas where they struggled. In essence, it has the ability to transform compliance training from today's tick-box exercise into a process that is far more personal, accurate, and comprehensive.
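As a rough sketch of this tutoring pattern, the snippet below assumes the OpenAI Python SDK (v1.x); the model name, topic, and prompt wording are placeholders rather than a prescribed setup.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Example "compliance tutor" instruction; topic and model are illustrative only.
tutor_prompt = (
    "Act as an AML compliance trainer. Explain customer due diligence in plain language, "
    "then ask me three mock-exam questions, one at a time. After each of my answers, "
    "tell me what I got wrong and re-explain that point."
)

messages = [{"role": "system", "content": tutor_prompt}]
while True:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    print(answer)
    messages.append({"role": "assistant", "content": answer})
    user_turn = input("> ")
    if user_turn.lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": user_turn})
```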

As for AI and LLM use cases more specific to AML, KYC, and KYB compliance, the most interesting areas we are seeing are translation, natural language interfaces, understanding of unstructured text and data, adverse media processing, and automation of regulations and policies. Let's look at each of them in more detail.

 

1. Translation

LLMs such as ChatGPT are able to translate languages with incredible precision, down to the appropriate tone of voice. The first application of this relates to communication with staff at foreign entities, parent companies of a company you wish to onboard, law firms, or regulatory authorities in other countries. Beyond accurate translation, interpretation is another key strength of this type of technology. Unlike regular translation applications such as Google Translate, LLMs can go a step further: they can be queried to identify specific information within a foreign-language document. An example of such a use case for AML compliance by a native English speaker would be: "Please confirm the company type, incorporation date, directors, and shareholders from this Thai-language company registry filing."
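As an illustration, a task-specific extraction query of this kind might look like the sketch below, assuming the OpenAI Python SDK (v1.x); the file name and model are placeholders, and the filing text would typically come from OCR or a registry download.

```python
from pathlib import Path
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder: text of a Thai-language company registry filing, e.g. from OCR output.
filing_text = Path("registry_filing_th.txt").read_text(encoding="utf-8")

question = (
    "From the Thai-language company registry filing below, confirm in English: "
    "the company type, incorporation date, directors, and shareholders.\n\n"
    + filing_text
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": question}],
)
print(response.choices[0].message.content)
```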

 

2. Natural language interfaces

The new crop of AI technologies is about to deliver on the age-old dream of using natural language to converse with computers and digital systems in general. For compliance, we see several applications of LLMs, but one stands out as hugely beneficial: LLMs will allow non-technical staff to query complex customer databases for reports and information mining without needing a degree in computer science. For example, we will be able to type "please show all clients with at least one director or shareholder connected to a country defined as high risk" into the natural language interface and receive accurate results. The use case can also extend into actions, whereby typing "please change the review date of all business entities with more than two levels of ownership structure to one year from now" will trigger the appropriate action.
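A minimal sketch of how such a natural language interface could be wired up, assuming the OpenAI Python SDK (v1.x) and an entirely hypothetical database schema (the table and column names are invented for illustration, and any generated query should be reviewed and run read-only):

```python
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical schema for illustration only; a real compliance database will differ.
schema = """
clients(client_id, name, review_date)
officers(officer_id, client_id, role, country_code)   -- role: 'director' or 'shareholder'
high_risk_countries(country_code)
"""

question = (
    "Please show all clients with at least one director or shareholder "
    "connected to a country defined as high risk."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "Translate the user's request into a single read-only SQL query for this schema:\n" + schema},
        {"role": "user", "content": question},
    ],
)
sql = response.choices[0].message.content
print(sql)  # review before executing, e.g. against a read-only replica
```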

 

3. Understanding of unstructured text and data 

Arguably, the biggest impact of LLMs on the compliance function lies in their ability to process and interpret unstructured text and data. While it has always been easy for computers to understand structured data in graphs or tables, much of the important information in compliance comes from unstructured text documents. With large language models, it is now possible for systems to automatically process and extract useful information from unstructured documents such as contracts or articles of association.

For example, it was previously impossible to find answers to typical compliance questions from banks (such as "Do these articles of association restrict company directors from taking on company debt?") without a manual review. However, this process can now be automated since LLMs are able to understand the unstructured text in the company constitution documents.   

Another good use case is detecting and handling diverse share classes, a challenging task even for manual reviewers. Jurisdictions that require the public filing of shareholders or ultimate beneficial owners (UBOs) often do not prescribe a fixed format, so share classes end up recorded as free text in comments and filings. Until now, this made it impossible for automated systems to detect and correctly process shareholdings with several share classes; LLMs enable just that.
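The sketch below illustrates the idea of turning free-text shareholder information into structured data, assuming the OpenAI Python SDK (v1.x) and a model that supports JSON-mode output; the company names and field layout are invented for illustration.

```python
import json
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder: free-text shareholder section from a public registry filing.
free_text = (
    "Shareholders: Alpha Holdings Ltd holds 6,000 Class A ordinary shares; "
    "Beta Capital BV holds 4,000 Class B non-voting shares."
)

instructions = (
    'Extract the shareholdings as JSON with the shape '
    '{"holdings": [{"holder": str, "share_class": str, "number_of_shares": int}]}.'
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; assumes a model that supports JSON-mode output
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": free_text},
    ],
)
holdings = json.loads(response.choices[0].message.content)
print(holdings["holdings"])
```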

 

4. Adverse media processing

Current practice for adverse media processing involves entering the subject into Google alongside related watchwords, such as "fraud" or "terrorist", and then manually going through dozens of search results. This process can now be largely automated with LLMs and can thereby be performed in much greater depth: where a manual reviewer can only reasonably get through one or two pages of search results, an automated solution can process a dozen or more pages with hundreds of results. Previous software technologies could not understand context very well, generating many false positives for pages that merely mention the subject's name alongside a watchword in a completely different context. LLMs solve this problem.
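A simplified sketch of this screening step, assuming the OpenAI Python SDK (v1.x); the subject name and snippets are invented, and in practice the snippets would come from a web or news search API rather than a hard-coded list.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

subject = "Jane Example Doe"  # placeholder subject name
# Placeholder snippets; in practice these would come from a search or news API.
snippets = [
    "Jane Example Doe wins regional baking competition for charity fundraiser.",
    "Prosecutors charge Jane Example Doe in connection with invoice fraud scheme.",
]

screening_instruction = (
    "You screen adverse media. Answer YES only if the text reports the named subject's "
    "involvement in financial crime, fraud, or terrorism; otherwise answer NO."
)

for snippet in snippets:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": screening_instruction},
            {"role": "user", "content": f"Subject: {subject}\nText: {snippet}"},
        ],
    )
    print(response.choices[0].message.content.strip(), "-", snippet[:60])
```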

 

5. Automation of regulations and policies

Traditionally, regulations have been written for people to read, understand, and apply. With the widespread use of technology in compliance, a debate emerged about creating special “machine-readable” regulations that could be implemented automatically by compatible systems. With the development of LLMs, this debate becomes somewhat obsolete, as virtually all regulations and policies are now machine-readable by the likes of ChatGPT and similar models. This means that, for example, KYC due diligence requirements can be extracted from regulatory texts and imported into a compliance team's digital model without needing human interpretation first. The same applies to financial institutions' internal policies on risk or anti-money laundering. Before LLMs, these policies had to be manually translated from human-readable form into computer code to automate processes. For LLMs, however, code is just another language they can understand, and as we know, they excel at translation. Change management in automated compliance processes has just become a lot easier.
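As a rough sketch of this “policy is just another language” idea, assuming the OpenAI Python SDK (v1.x); the policy clause and the requested function name are invented for illustration, and any generated code would still need human review before use.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Invented internal-policy clause, used only to illustrate the idea.
policy_clause = (
    "Business customers with more than two levels of ownership structure, or with any "
    "UBO resident in a high-risk country, must be reviewed at least annually."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": (
            "Translate the following compliance policy clause into a single Python function "
            "review_interval_months(customer: dict) -> int. Return only the code."
        )},
        {"role": "user", "content": policy_clause},
    ],
)
print(response.choices[0].message.content)  # generated code should be reviewed before deployment
```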

 

A fork in the road for the future of compliance 

AI and LLM use cases in compliance will continue to expand beyond the list above, and we are extremely excited by the prospect of future developments. I firmly believe that recent releases such as ChatGPT and the latest OpenAI models are on a new level compared to anything the world has previously encountered, and we are thrilled by the possibilities they create for the compliance scene.

Although many companies have embraced these new developments in AI technology, some are banning generative AI from their operations, preventing staff from accessing ChatGPT on their devices. While the ethical concerns and far-reaching implications of this type of technology should not be underestimated or ignored, such opposition is somewhat reminiscent of the Luddite riots, in which British workers in the early 19th century smashed weaving machines and other tools in protest against what they saw as a deceitful way of circumventing the labour practices of the day.

Is this the time to ban LLMs and slow down, if not prevent, their development and implementation? Or should we jump on the bandwagon, use this amazing new technology, and see what possibilities generative AI can bring? In a way, this is a topic that all of us – as a society and as individuals – should think about deeply. Please let me know your opinion, and let's start a conversation in the comments section.


Comments: (1)

Prasoon Mukherjee - Societe Generale Bank - Bangalore, 26 October 2023, 17:33

Liked your article, though I am not totally convinced about the use of generative AI for financial markets use cases where exact data needs to be extracted and then used in downstream processes. I have explained why in the article below.

 

https://www.finextra.com/blogposting/24436/large-language-models-are-not-a-solution-for-precise-data-extraction-in-banking
