What does fairness mean in algorithmic decision-making and how should companies balance legal compliance, ethical responsibility and business performance when addressing possible biases in AI-driven systems?
Financial technology company Upstart aimed to modernize consumer lending by using artificial intelligence to assess creditworthiness from broader data, such as education level and zip code, rather than traditional FICO scores alone. The company achieved considerable success, issuing over $40 billion in loans by 2024 and earning recognition as a strong alternative to traditional lenders for underserved populations. In 2020, however, civil rights groups, including the NAACP Legal Defense Fund and the Student Borrower Protection Center, raised concerns that inputs in Upstart's model could serve as proxies for demographic identifiers and produce unintended racial disparities. The audits that followed suggested that an inherent trade-off may exist between reducing demographic disparities and maintaining business performance. In this case study, students will learn about the use of AI in the financial industry and explore the ethical considerations this technology raises.
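One way to make the idea of a "demographic disparity" concrete for students is the four-fifths (80%) rule often used in fair-lending and employment audits: if one group's approval rate falls below 80% of another group's, the gap is commonly treated as evidence of disparate impact. The sketch below illustrates this test on invented numbers; the approval rates are hypothetical and do not come from the audits described in the case.

```python
# Hypothetical sketch: measuring approval-rate disparity with the
# four-fifths (80%) rule used in many fair-lending audits.
# All figures below are invented for illustration only.

def approval_rate(decisions):
    """Fraction of applicants approved (1 = approve, 0 = deny)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower group's approval rate to the higher group's.
    Values below 0.8 are often treated as evidence of disparate impact."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Invented example: 70% vs. 50% approval rates for two demographic groups.
group_a = [1] * 70 + [0] * 30
group_b = [1] * 50 + [0] * 50

air = adverse_impact_ratio(group_a, group_b)
print(f"Adverse impact ratio: {air:.2f}")  # 0.50 / 0.70 = 0.71, below 0.8
```

A discussion point this sketch can anchor: a lender could raise the ratio by loosening approval thresholds for the disadvantaged group, but that may change default rates, which is exactly the disparity-versus-performance trade-off the case examines.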