
How is AI bias being tackled by the credit sector?

Artificial intelligence (AI) is improving outcomes for businesses across the world every day. It is automating and speeding up processes, making them more effective and efficient than ever before. Such is its widespread use that almost every company will have adopted it by 2025, according to research by Forrester.

There’s no denying the benefits AI can bring, but it also comes with serious downsides. Chief among them is its potential to interpret data incorrectly and unfairly. That means it can favour one group of consumers over another when analysing multiple data sets, and that bias is borne out in the results.

The problem is made worse by the humans who select the data an algorithm will use and decide how its results are applied. AI systems merely replicate those biased choices and will continue to do so until the algorithm is trained differently. In business lending, for example, bias appears based on company age, sector and geography.

It’s a problem that isn’t going away any time soon either. A Gartner report published in 2018 predicted that, through 2022, 85% of AI projects would produce erroneous outcomes due to inherent bias in data, algorithms or the teams responsible for managing them. Several years on, AI bias remains a big issue for companies.

Where AI bias exists 

AI bias is present in three key areas: input data, development and post-training. When bias is contained in input data, such as attributes relating to gender, race or ideology, AI’s ability to be objective is restricted by incomplete or unrepresentative datasets. There’s also potential for discrimination because some AI training methods obscure how the data is used in decisions.

At the development stage, an ongoing cycle of bias is created when AI systems are continuously retrained on the same skewed data. Bias is also perpetuated within the model when unconscious bias or a lack of diversity among development teams affects how the AI is trained.

Post-training, AI bias creeps in where continuous learning drifts towards discrimination. Unintended consequences can result from AI taking on new behaviours as it continues to learn and self-improve. A prime example is online lending, where a platform may start to decline more loan applications from ethnic minorities or women than from any other group.

Tighter governance

To mitigate the risk of bias and discrimination, banks and financial institutions have tightened up their governance. They have formed multidisciplinary internal teams or contracted third parties to audit AI models and analyse the underlying data. They have also established transparent policies for developing algorithms and metrics for measuring bias, while keeping pace with regulations and best practice.
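As a rough illustration of what one such bias metric might look like in practice, the Python sketch below computes per-group approval rates and a disparate impact ratio from a table of lending decisions. The column names, example data and the 0.8 threshold are hypothetical assumptions for illustration, not a description of any particular institution’s audit process.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str,
                           outcome_col: str, privileged: str) -> pd.Series:
    """Approval rate of each group divided by the privileged group's rate.

    A ratio well below 1.0 is a prompt to investigate further; the widely
    cited "four-fifths rule" treats anything under 0.8 as a warning sign.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates[privileged]

# Hypothetical decision log: column names and values are illustrative only.
decisions = pd.DataFrame({
    "applicant_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved":        [1,   1,   1,   0,   1,   0,   0,   0],
})

ratios = disparate_impact_ratio(decisions, "applicant_group", "approved", privileged="A")
print(ratios)  # group B's ratio of 0.33 falls well below the 0.8 flag
```

In a real audit, a metric like this would be tracked alongside other fairness measures and regulatory requirements rather than used in isolation.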

To ensure greater inclusivity, organisations have continued to build diverse data sets and harness unstructured data from internal and external sources. They are also constantly looking for skewed or biased data throughout the model’s lifecycle.

Firms have also been closely monitoring AI and machine learning models for data and concept drift. By scanning training and testing data, they can determine whether protected characteristics or attributes are underrepresented and, when issues are identified, retrain the models.
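To make that idea concrete, here is a minimal sketch, assuming a pandas workflow and a hypothetical protected-attribute column, of how a team might compare group representation between training data and more recent application data to spot underrepresentation or drift. The thresholds are placeholders, not regulatory guidance.

```python
import pandas as pd

def representation_report(train: pd.DataFrame, recent: pd.DataFrame,
                          protected_col: str,
                          min_share: float = 0.05,
                          drift_tol: float = 0.10) -> pd.DataFrame:
    """Compare how each protected group is represented in training vs. recent data.

    Flags groups whose training share is below `min_share` (underrepresentation)
    or whose share has shifted by more than `drift_tol` (a crude data-drift signal).
    """
    train_share = train[protected_col].value_counts(normalize=True)
    recent_share = recent[protected_col].value_counts(normalize=True)
    report = pd.DataFrame({"train_share": train_share,
                           "recent_share": recent_share}).fillna(0.0)
    report["underrepresented"] = report["train_share"] < min_share
    report["drifted"] = (report["train_share"] - report["recent_share"]).abs() > drift_tol
    return report

# Usage (hypothetical DataFrames): any flagged row would trigger investigation
# and, if confirmed, retraining on a rebalanced dataset.
# report = representation_report(train_df, recent_df, protected_col="applicant_group")
```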

Credit industry

The credit industry has made big strides in tackling the problem. AI bias can manifest itself in many forms, resulting in certain customers’ credit applications being unfairly declined.

The sector has established itself as a model for others to aspire to. To prevent biases arising from a homogeneous workforce, organisations have been investing in diverse data and decision science teams to tackle the problem.

It’s becoming increasingly important for humans to check for AI bias as models continue to develop. While it’s difficult to eliminate the issue altogether, human review has been shown to reduce bias while also improving the models.

Banks and financial institutions, too, have been establishing multidisciplinary teams to work on AI initiatives. The roles on these teams range from developers and business managers to human resources and legal professionals.

The credit industry is leading the way in addressing AI bias, but there’s room for improvement to ensure that certain groups of customers are not unfairly discriminated against when key lending decisions are made.


Chirag Shah, Founder and CEO, Nucleus Commercial Finance
