The fintech revolution is a great step forward for consumers: it reduces fees and improves access to credit for many of us. Much of fintech's success comes from the automation of basic processes, but more sophisticated applications like lending rely on artificial intelligence.
The problem with AI is that it can turn pretty racist at times. Many of us remember Microsoft's first Twitter chatbot Tay, released in 2016, which "learned to become racist" from its interactions with other Twitter users and had to be shut down two days after launch. The evolution from neutral algorithm to chauvinist was remarkably quick.
Racist tweets are one thing; racist loans are another. US fintech Upstart was reprimanded by the Consumer Financial Protection Bureau in 2017 for discriminatory lending practices: the AI it used to assess the creditworthiness of student loan applicants offered higher-interest loans to students of historically black colleges than to students of comparable "white" colleges.
AI exploits nonlinear relationships between variables, which makes it very difficult to identify why the algorithm recommends one action for one person and a different one for another. Avoiding racism in an algorithm is therefore not as simple as excluding race as an input. Talia Gillis has shown what happens if you run a credit-scoring algorithm with and without race data as an input: with race included, the disparity between white and minority loan applicants was actually lower than without the race variable.
Exclusion of race information can lead to more discriminatory outcomes
Source: Gillis (2020). Note: Chart shows the predicted rate of default for a student loan for white (W) and non-white (M) applicants.
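Gillis's actual exercise uses real loan data and machine-learning models, but the mechanism can be illustrated with a toy simulation (all variable names and numbers below are made up for illustration): default risk is driven by a financial "cushion" that both groups share, while one group's observed income is depressed by a structural penalty unrelated to risk. A race-blind model that scores on income alone then inherits the penalty; a race-aware model can correct for it.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def simulate(n=10_000):
    """Synthetic applicants: default risk depends only on 'cushion',
    but group M faces a structural income penalty unrelated to risk."""
    people = []
    for _ in range(n):
        group = random.choice(["W", "M"])
        cushion = random.gauss(0, 1)            # true driver of default risk
        penalty = 1.0 if group == "M" else 0.0  # systemic income gap, not risk
        income = cushion - penalty + random.gauss(0, 0.3)
        people.append((group, income))
    return people

def mean_predicted_default(people, score):
    groups = {"W": [], "M": []}
    for group, income in people:
        groups[group].append(score(group, income))
    return {g: sum(s) / len(s) for g, s in groups.items()}

people = simulate()

# "Race-blind" model: sees income only, so it overstates risk for group M,
# whose income understates their true cushion.
blind = mean_predicted_default(people, lambda g, y: sigmoid(-y))

# "Race-aware" model: can undo the structural income gap before scoring.
aware = mean_predicted_default(
    people, lambda g, y: sigmoid(-(y + (1.0 if g == "M" else 0.0)))
)

gap_blind = blind["M"] - blind["W"]
gap_aware = aware["M"] - aware["W"]
print(f"race-blind model, M-W gap in predicted default: {gap_blind:.3f}")
print(f"race-aware model, M-W gap in predicted default: {gap_aware:.3f}")
```

In this stylised setup the race-blind score assigns group M a noticeably higher average predicted default rate even though both groups have identical true risk, while the race-aware score closes the gap almost entirely. Real credit models are vastly more complicated, but the direction of the effect matches Gillis's finding.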
The problem is that racism is systemic in our societies and that race is reflected in many other economic variables. One only has to look at average household wealth in the United States by race to see that an algorithm using household income as an input would immediately assign a higher default risk to black applicants. If race is added back as an input, the algorithm can identify race as the driving force behind the disparities in other economic variables and "correct" for them. Whether that correction actually happens, however, is difficult to verify in an AI application where it is often unclear which input influences the output in what way.
Average household wealth by race in the United States
Source: Brookings Institution.
The Black Lives Matter movement has put a new spotlight on systemic racism in our society, so we should expect fintech companies to face increased scrutiny over how they do business. But we should beware of simple fixes like forcing fintech companies to become "colour blind", because, as shown above, that can make the situation worse. I am not an expert in AI, and I imagine the solution will not be easy. But if fintech companies don't come up with a good one, the risk is that regulators throw out the baby with the bathwater, as they have so often before.