
AI: Do Regulators Have a Hope of Keeping Pace?

It’s hard to believe that it was only in November 2022 that ChatGPT was unleashed as one of the first mass-market AI applications. With its first birthday approaching, ChatGPT is already reshaping how we interact with financial brands faster than regulators can keep up.

AI is embedding itself into the global financial ecosystem, with generative and conversational AI being integrated across more front-end and back-end processes. With that comes greater potential for misuse, consumer abuse, and even AI-driven fraud, all now causing regulators to scramble to put protections in place.

What different regions of the world have in common is that legislative processes tend to take a long time to be discussed, designed and approved by lawmakers. While these timelines have worked reasonably well for areas such as food and drug regulation, AI is a different kind of beast altogether. The challenge now for regulators all over the world is how to design effective guardrails for such a fast-evolving technology, one whose strands reach into more business sectors every day.

 

Europe pushes forward with the world’s first proposed AI regulation

In the European Union, lawmakers have put forward an AI legislative framework as part of the bloc’s digital strategy. The proposed legislation, the Artificial Intelligence (AI) Act, focuses on rules around data quality, transparency, human oversight and accountability, and also seeks to address potential ethical problems in various sectors, including finance.

Once approved, the EU’s regulation will be the world’s first for AI, and will hit companies with fines of up to €30 million, or 6% of their global annual turnover, if they’re found to have breached the rules.

But the draft’s journey to being rubber-stamped was delayed in June 2023, when European lawmakers agreed to add tougher rules, including requiring firms using systems like ChatGPT to disclose when content is AI-generated and to build in safeguards against illegal content.

 

Different risk levels will determine the design of AI guardrails

Under the EU’s proposed regulation, AI systems will be classified into four levels of risk posed to consumers – minimal, limited, high and unacceptable – with different rules applied to each, mandating different levels of transparency, safety and human oversight.

Needless to say, AI systems classed as posing unacceptable risk are considered a threat to people and will be banned. Media attention has focused on the example of cognitive behavioural manipulation of people or of specific vulnerable groups, such as voice-activated toys that encourage dangerous behaviour in children. But other unacceptable AI use cases outlined by the EU include social scoring, whereby AI classifies people based on behaviour, socio-economic status or personal characteristics.

Here, it’s understandable that lawmakers point to the notorious practice of “redlining” – a discriminatory practice that originated in the US and systematically denied services such as mortgages, insurance, loans and other financial services to residents of certain areas, based on their race or ethnicity.

And this is where payments and fintech firms could find themselves in a quagmire of legal and ethical conundrums that could slow business growth – or force them to abandon some services altogether. The potential for misinformation to be regurgitated through customer service AI touchpoints like chatbots is enough to keep any risk or compliance officer awake at night. Given that generative AI can only work with the information that was fed into it, how can an algorithm verify whether that information was biased or incorrect, and whether the customer ended up with the right outcome?
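One mitigation firms discuss is to surface a chatbot answer only when it can be grounded in approved, compliance-reviewed content, and to escalate to a human otherwise. The sketch below is purely hypothetical (the knowledge base, matching rule and helper names are my own assumptions, not any vendor’s product):

```python
# Hypothetical sketch: gate a chatbot's draft answer so it is only shown to the
# customer if it can be grounded in approved, compliance-reviewed content.
# The knowledge base and the crude word-overlap check are toy assumptions.
APPROVED_SOURCES = {
    "fees": "Standard card payments carry no fee; international transfers cost 0.5%.",
    "limits": "The default daily transfer limit is EUR 10,000.",
}

def is_grounded(draft_answer: str, sources: dict[str, str], min_overlap: float = 0.5) -> bool:
    """Return True if enough of the draft's words appear in one approved source."""
    draft_words = set(draft_answer.lower().split())
    for text in sources.values():
        source_words = set(text.lower().split())
        if draft_words and len(draft_words & source_words) / len(draft_words) >= min_overlap:
            return True
    return False

def respond(draft_answer: str) -> str:
    if is_grounded(draft_answer, APPROVED_SOURCES):
        return draft_answer
    # Fall back to a human agent rather than risk a wrong or biased answer.
    return "Let me connect you with a colleague who can confirm that for you."

print(respond("International transfers cost 0.5%."))            # grounded, so shown
print(respond("All transfers are free and instant worldwide."))  # not grounded, so escalated
```

A real deployment would use far more robust retrieval and review processes, but the principle is the same: the chatbot is only allowed to say what the firm has already approved, and anything else goes to a person.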

In the UK, the introduction of the Consumer Duty rules in July 2023 could see AI-generated product or service pricing, or AI-generated communications, breaching requirements if they result in poor outcomes for retail customers, particularly those in vulnerable groups such as the elderly or low-income households.

Other unacceptable AI use cases highlighted by the EU include real-time and remote biometric identification systems, such as facial recognition, while biometric identification and categorisation of people is classed as a high-risk use case that may be permitted under some narrow circumstances, depending on how the proposed regulation pans out.

Given how widely biometrics are now used to unlock our smartphones and make payments, many in the payments world are asking how much extra friction any well-meaning AI regulation will bring into existing payments processes.

 

AI has many business benefits – but only with human oversight

AI-driven customer service has proven vital in making customers’ payment journeys and interactions with their payment providers more efficient. It is already massively enhancing customer service capabilities and speeding up issue resolution, which means lower costs, more efficient use of employee time and happier customers.

AI’s intelligent automation capabilities can facilitate straight-through processing of payments, with far more accurate decisioning, and smart routing and distribution of payment transactions across a range of variables to improve authorisation and settlement. For example, AI can help a payment provider decide whether a specific transaction needs to go through two-factor authentication.
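As a purely illustrative sketch (the features, weights and thresholds below are my own assumptions, not any provider’s actual rules), a decisioning layer of this kind might score each transaction and route only the riskier ones through step-up authentication:

```python
# Illustrative sketch only: a toy risk-scoring rule for deciding whether a
# transaction should be routed through two-factor authentication (2FA).
# All feature names, weights and thresholds are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount_eur: float
    new_device: bool          # first time this device is seen for the customer
    country_mismatch: bool    # transaction country differs from card country
    txns_last_hour: int       # recent velocity for this account

def risk_score(txn: Transaction) -> float:
    """Combine a few simple signals into a 0-1 risk score (toy weights)."""
    score = 0.0
    score += min(txn.amount_eur / 1000.0, 1.0) * 0.4   # larger amounts carry more risk
    score += 0.25 if txn.new_device else 0.0
    score += 0.2 if txn.country_mismatch else 0.0
    score += min(txn.txns_last_hour / 10.0, 1.0) * 0.15
    return score

def needs_2fa(txn: Transaction, threshold: float = 0.5) -> bool:
    """Route the transaction to step-up authentication if the score crosses the policy threshold."""
    return risk_score(txn) >= threshold

# Example: a large payment from a new device in another country triggers 2FA.
print(needs_2fa(Transaction(amount_eur=850, new_device=True,
                            country_mismatch=True, txns_last_hour=1)))  # True
```

In production a provider would feed far richer signals into a trained model rather than hand-tuned weights, but the shape of the decision is the same: score the transaction, compare it against a policy threshold, then either approve it frictionlessly or step up.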

A 2019 study by Oliver Wyman, Marsh, and Hermes Investment Management found that financial firms using machine learning and AI for fraud detection saw a 15% improvement in fraud reduction and legitimate transaction approvals. As fraud continues to mutate, AI has huge potential to reduce manual investigative workloads and stop fraud at the source.
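To make that trade-off concrete (this is a toy sketch on synthetic data, not the methodology of the study above), a fraud model ultimately comes down to choosing a decision threshold that balances the share of fraud caught against the share of legitimate payments approved:

```python
# Hypothetical sketch: how a fraud model's decision threshold trades off
# fraud caught against legitimate transactions approved. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic transactions: two features (e.g. amount z-score, velocity z-score).
legit = rng.normal(loc=0.0, scale=1.0, size=(5000, 2))
fraud = rng.normal(loc=2.5, scale=1.0, size=(250, 2))   # fraud looks "shifted"
X = np.vstack([legit, fraud])
y = np.array([0] * len(legit) + [1] * len(fraud))

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]   # probability of fraud per transaction

for threshold in (0.3, 0.5, 0.8):
    flagged = scores >= threshold
    fraud_caught = flagged[y == 1].mean()        # share of fraud stopped
    legit_approved = (~flagged[y == 0]).mean()   # share of genuine payments approved
    print(f"threshold={threshold:.1f}  fraud caught={fraud_caught:.0%}  "
          f"legitimate approved={legit_approved:.0%}")
```

A better model pushes both numbers up at once, which is the kind of joint improvement in fraud reduction and legitimate approvals the study describes; a cruder system can only trade one against the other, and the losing side of that trade becomes manual review workload.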

But it’s important to remember that technology alone cannot answer every question or solve every problem. While fintech has produced countless innovations, an innovation can only be classed as successful if it effectively solves a real human need. As regulators worldwide get to grips with AI, there is a fear that, in trying to grapple with a beast that grows bigger by the day, they will overreach and place AI-driven product and service development in a virtual stranglehold. So it’s essential that fintech and payments players engage with regulatory discussions and ensure that their AI processes take in the right information and deliver the best outcomes for their customers.

I believe that AI will bring about many positive transformations in the way we make payments with each other. But it’s way too early for AI to go it alone. Payments are driven by humans, and that’s why any AI-related payment process needs to be guided by human insight and nuance to make the most positive impact.

 


Comments: (1)

Ketharaman Swaminathan - GTM360 Marketing Solutions - Pune, 25 August 2023, 12:30

No. Regulators cannot keep up with AI. 

Take ChatGPT itself as an example. It's the pinnacle of leveraging the Regulatory Gap, which has been the strategy followed by fintech, rideshare, room share and many other startups in regulated industries.

While training ChatGPT, OpenAI scraped data from billions of websites without their express permission and probably in violation of their TOS. Now that ChatGPT is launched and popular, OpenAI has announced steps that website owners can take to prevent their websites from getting scraped by OpenAI's bot. This is unethical, if not also illegal in some jurisdictions. But will any regulator shut ChatGPT down now?

Had OpenAI announced, in advance, its plans to slurp the content of billions of websites when it started doing so a few years ago, I can bet that ChatGPT would have been DOA.

Evgeniy Ivantsov

Chief Marketing Officer, FYST

This post is from a series of posts in the group:

Artificial Intelligence and Financial Services
