
AI in Finance

IBM files patent for online fairness monitoring in financial AI models

by Doable
| published 3/13/24, 2:10 am
IBM Watson via IBM.com
TL;DR Quick Facts
  • IBM files patent for online fairness monitoring in AI lending to combat bias
  • System detects and corrects bias by monitoring protected attributes in the decision-making process
  • Tech firms, including Adobe and Intel, also seeking patents to address biases in AI applications

IBM filed a patent application for online fairness monitoring in AI lending to weed out bias in financial machine learning models.

Always improving: IBM has filed a patent application for online fairness monitoring in financial machine learning models to combat bias in AI lending. The patent aims to ensure that AI models making financial decisions do so without biases, particularly in sensitive areas like money lending or visa approvals. By continuously monitoring how the models treat certain protected attributes, such as age, gender, or race, IBM's system can detect and correct bias in the decision-making process.

Deeper details: The system developed by IBM tests the reward probability for different attributes to determine if bias has developed in the machine learning model. If the probability of a positive outcome falls below a certain threshold, indicating bias against a protected attribute, the system intervenes by updating the distribution of credit scores to be fairer. This continuous monitoring process prevents biases from developing over time as the model encounters new data, ensuring fair lending practices.
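The threshold-based monitoring described above can be sketched in code. This is a hypothetical illustration, not IBM's actual patented method: it tracks the approval rate per protected-attribute group over a sliding window of recent decisions and flags any group whose rate drops below a fairness threshold relative to the best-treated group. The class and parameter names are invented for this example.

```python
from collections import defaultdict, deque

class FairnessMonitor:
    """Hypothetical online fairness monitor for a lending model.

    Flags a protected-attribute group as potentially biased against
    when its positive-outcome (approval) rate falls below a fraction
    of the best group's rate over a sliding window of decisions.
    """

    def __init__(self, threshold=0.8, window=1000):
        self.threshold = threshold  # minimum acceptable ratio vs. best group
        self.window = window        # number of recent decisions to retain
        self.decisions = defaultdict(lambda: deque(maxlen=self.window))

    def record(self, group, approved):
        """Log one lending decision for a protected-attribute group."""
        self.decisions[group].append(1 if approved else 0)

    def approval_rate(self, group):
        """Approval rate over the current window, or None if no data."""
        d = self.decisions[group]
        return sum(d) / len(d) if d else None

    def biased_groups(self):
        """Groups whose rate is below threshold x the best group's rate."""
        rates = {g: r for g in self.decisions
                 if (r := self.approval_rate(g)) is not None}
        if not rates:
            return []
        best = max(rates.values())
        return [g for g, r in rates.items()
                if best > 0 and r < self.threshold * best]


# Example: group "B" is approved half as often as group "A".
monitor = FairnessMonitor(threshold=0.8)
for i in range(100):
    monitor.record("A", True)     # group A: 100% approval
    monitor.record("B", i < 50)   # group B: 50% approval

print(monitor.biased_groups())    # group B falls below 0.8 x best rate
```

In a production system, the flagged result would trigger the corrective step the article describes (e.g., adjusting the score distribution) rather than just being reported; that intervention logic is omitted here.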

Looking ahead: IBM's initiative to address bias in AI lending practices is part of a broader trend among tech firms to tackle bias in their patent activities. Companies like Adobe, Sony, and Intel have also sought patents to address biases in various AI applications, such as image translation models and datasets. The potential consequences of unchecked bias in AI lending are significant, with regulators increasingly focusing on ensuring fair practices in financial decisions made by AI tools to prevent discrimination and disparities.

Who it's for: Regulators have started to take notice of the potential risks associated with AI-based lending, emphasizing the importance of adhering to anti-discrimination laws and quality-control standards. Federal Reserve vice chair for supervision Michael Barr highlighted the enormous potential of AI technologies but also warned about the risks of violating fair lending laws and perpetuating disparities. As AI continues to evolve in financial institutions, addressing bias in AI lending practices remains a critical priority for the industry to ensure fair and equitable outcomes.