Over the last few months, US and European regulators have signaled that they may start cracking down on one of the biggest ethical problems with artificial intelligence: the potential for algorithms to perpetuate discrimination.
The US Federal Trade Commission warned that companies using biased algorithms may run afoul of consumer protection laws like the Fair Credit Reporting Act. The Federal Reserve, the Consumer Financial Protection Bureau, and other American financial regulators asked for public comments on how banks are using AI. The EU released new rules governing the use of AI for decisions ranging from hiring to lending to law enforcement, all of which are areas ripe for bias.
These moves respond to growing concerns that algorithms have been reproducing discrimination in situations such as home lending, the allocation of health care, and decisions about who deserves parole. While many people hoped machines could help us make fairer decisions, as the use of AI has exploded it has become clear that these systems all too often simply replicate, and even amplify, our existing prejudices.
An important part of the story has been missing, however — one that might make businesses more amenable to regulation, or even preclude the need for it by motivating them to act on their own. Algorithmic bias is not only a pressing ethical and societal concern; it is also bad for business.
My research shows that over time, word of mouth about algorithmic bias among customers will hurt demand and sales and cut into profits…
Read the full article online at The Boston Globe.
This article was produced by Footnote in partnership with University of Southern California Marshall School of Business.