By Aurélie Jean, Guillaume Sibout, Mark Esposito and Terence Tse
In November 2022, the global release of ChatGPT introduced generative AI (GenAI) to the general public within hours, even though the underlying technology had been in development for years. This abrupt release, with no preparation of users, led to poorly articulated debates on the influence of these technologies on work, jobs, humans, and creativity (Jean et al., 2024), as well as to exaggerated claims about AI capabilities. The anthropomorphization of ChatGPT – and of other solutions – is symptomatic of confusion among users and among economic and political leaders, hindering their grasp of the fundamental issues and risks, such as technological discrimination, which takes on a new form and dynamic with GenAI.
From algorithmic bias to technological discrimination
Technological discrimination (C. O’Neil, 2016; K. Crawford, 2021) refers to the unfair – and often discriminatory – treatment of individuals by an algorithmic technology (or AI) based on gender, age, sexual orientation, social class, ethnicity, or visible or invisible disabilities. This discrimination results from a bias inside the algorithm that runs the technology, stemming from biased training or validation datasets, or from the cognitive biases of designers embedded in technical and business specifications or in assumptions about how the technology will be used. However, adopting well-conceived AI governance (A. Jean et al., 2023), including best practices in design, testing, and use, can mitigate these risks and help build inclusive and trustworthy AI technologies.
Reported cases of technological discrimination mainly concern the algorithm’s execution, regardless of user interaction or user adoption of the technology. One early-2010s example is facial recognition failing to recognize people of color (J. Buolamwini, 2016; J. Buolamwini, 2024). In 2019, the Apple Card credit line estimation algorithm gave men up to 20 times more credit than women with the same financial and credit history. More recently, Twitter’s (now X’s) automated photo cropping algorithm favored images of young, white, thin women.
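As an illustration only – not the methodology used in the investigations cited above – conception-related bias of this kind typically surfaces when a model’s error rates are broken down by group. Here is a minimal sketch of such a per-group check; the data, group labels, and metric are hypothetical:

```python
# Minimal sketch of a per-group error-rate check, the kind of test that
# surfaces conception-related bias (e.g., a face detector that misses one
# group far more often). Data and group labels are hypothetical.

def false_negative_rate(predictions, labels):
    """Share of actual positives that the model missed."""
    missed = sum(1 for p, y in zip(predictions, labels) if y == 1 and p == 0)
    positives = sum(labels)
    return missed / positives if positives else 0.0

# Hypothetical evaluation results, split by demographic group.
results = {
    "group_A": {"pred": [1, 1, 1, 0, 1], "true": [1, 1, 1, 1, 1]},
    "group_B": {"pred": [1, 0, 0, 1, 0], "true": [1, 1, 1, 1, 1]},
}

for group, r in results.items():
    print(group, false_negative_rate(r["pred"], r["true"]))
# A large gap between groups (here 0.2 vs 0.6) is a red flag that the
# training data or design assumptions disadvantage one group.
```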
Double Discrimination in the Age of GenAI
With generative AI, the risks of technological discrimination related to the design of AI solutions are compounded by a significant risk related to usage and user interaction. For solutions like ChatGPT, Gemini, Le Chat, Copilot, or Jasper, the quality and accuracy of the response heavily depend on the quality and precision of the query (or prompt). Education level, which is related to age and to cultural, social, and economic capital (P. Bourdieu, 1986), therefore significantly influences prompt quality, creating a vicious cycle that disadvantages those with poorer and less structured language. Of note, this usage bias is even more pronounced than the bias already affecting those who did not grow up with digital technologies, like many seniors (J. Grosperrin-Legrand et al., 2023). In addition to digital illiteracy (E. Maroun, 2022), we now face algorithmic illiteracy: the inability to understand and use these tools correctly, or at least efficiently. Even users who are experts in a discipline, with presumably high education levels, have shown differences in the reproducibility and accuracy of generated responses (P.F. Funk et al., 2024). These differences in how users adopt and exploit the technology’s potential must be considered when designing an AI solution.
Unlike conception-related technological discrimination, which end-users can often detect, usage-related discrimination in GenAI is more challenging to identify. Indeed, a GenAI solution responds internally to each user’s own practices, creating a “usage bubble” in which algorithmic responses depend on the user’s relationship with the tool. This makes it harder for users to detect problems arising from their own practices, unlike discrimination from algorithmic bias embedded in the technology itself, which can surface as visible bugs or errors.
AI governance to fight these new forms of discrimination
AI governance as currently deployed is evolving further to incorporate users and their diverse social and cultural capital, to ensure the design of inclusive algorithmic solutions. Good governance typically includes end-users from the ideation stage onward to ensure the relevance and proper formulation of the problem to solve. These end-users, who need relevant and understandable information about the technology’s design, the data used, and its execution, must also be assessed for their abilities and education level. In practice, this includes:
- Inform the user about possible adoption biases: Make users aware of such biases linked to age, education level, or native language (when the solution’s primary language differs from their own). This allows them to reconsider their position by questioning their own usage, or even by raising questions with the technology provider;
- Guide the user in their use of the technology: Implement methods to improve user skills with the solution, such as how to formulate a query, errors to avoid, how to interpret a generated response and verify its accuracy, and pitfalls to avoid, such as those related to anthropomorphizing the solution;
- Test with multiple proxies of different social and cultural capital: Integrate additional tests during the deployment phase of the solution. This phase can also rely on a limited and controlled group of human testers, known as beta testers, identified by the company that owns and develops the solution;
- Back-test the solution explicitly: Explicitly collect user feedback through surveys or questions asked throughout the use of the technology. This approach remains limited because users are inside their usage bubble and therefore have more difficulty identifying this type of technological discrimination;
- Back-test the solution implicitly: Implicitly collect user feedback by gathering their behavioral data (or usage data) and comparing users to one another according to the profile types previously identified and studied with the beta testers (see the sketch after this list);
- Continuously rethink the technology’s ergonomics: Regularly reevaluate the features and design of the solution based on the results of the explicit and implicit back-testing.
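To make the implicit back-test concrete, here is a minimal sketch, assuming the provider holds usage logs already tagged with a consented, anonymized profile segment; the field names (segment, task_success), the segment labels, and the 0.8 disparity threshold are illustrative assumptions, not an established standard:

```python
# Minimal sketch of an implicit back-test: average a usage metric per user
# segment and flag segments that lag far behind the best-performing one.
# Field names, segment labels, and the 0.8 threshold are illustrative.
from collections import defaultdict

def disparity_report(usage_logs, metric="task_success", min_ratio=0.8):
    """Average the chosen usage metric per segment and flag segments whose
    average falls below min_ratio times the best-performing segment."""
    totals = defaultdict(lambda: [0, 0])  # segment -> [metric sum, count]
    for log in usage_logs:
        totals[log["segment"]][0] += log[metric]
        totals[log["segment"]][1] += 1
    averages = {seg: s / n for seg, (s, n) in totals.items()}
    best = max(averages.values())
    flagged = [seg for seg, avg in averages.items() if avg < min_ratio * best]
    return averages, flagged

# Hypothetical logs for two beta-tester profiles.
logs = [
    {"segment": "digital native", "task_success": 1},
    {"segment": "digital native", "task_success": 1},
    {"segment": "senior, low digital literacy", "task_success": 0},
    {"segment": "senior, low digital literacy", "task_success": 1},
]
averages, flagged = disparity_report(logs)
print(averages)  # {'digital native': 1.0, 'senior, low digital literacy': 0.5}
print(flagged)   # segments to investigate for usage-related discrimination
```

A gap flagged this way does not prove discrimination by itself, but it tells the provider which profiles to revisit through explicit back-testing and ergonomic redesign.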
Towards inclusive adoption with social and economic benefits
In the GenAI era, building and deploying well-conceived AI governance is critical to fighting this double discrimination, which risks reinforcing inequalities even though AI solutions aim to provide equal opportunities. There is also a risk of intensified biases in the datasets used to retrain the underlying algorithms of these technologies, fed by the behavior of overlooked and unaware users. When users are left alone with these tools, the challenge lies in supporting them and protecting them from vulnerability. Future GenAI development must integrate this usage component to avoid becoming divisive, as some social networks have become polarizing. This will also foster more sustainable GenAI business models. It’s time to demonstrate that AI can do good things… but it must be designed accordingly.
About the Authors
Aurélie Jean, PhD, is a computational scientist, entrepreneur, and author. She has close to 20 years of experience in computational science applied to a broad range of disciplines. After 11 years of academic research, Aurélie now runs two companies, including a deep tech AI startup working on the early detection of breast cancer. She is the author of several bestselling non-fiction titles on algorithmic science, as well as a columnist on science and technology. Aurélie teaches algorithmic science in executive education and is a research fellow at the Hult Business School and The Digital Economist. She is also an investor and a board member of several companies in the United States and in France.
Guillaume Sibout is a specialist in Digital Humanities. He has held various communication and marketing management positions in the finance sector. He is a graduate of Sciences Po Paris in Digital Humanities, of the Ecole des Hautes Etudes en Sciences de l’Information et de la Communication (Celsa), and of Sorbonne University in philosophy.
Dr. Mark Esposito is a professor of economics and public policy with appointments at Hult International Business School and Harvard University. He also serves as an Adjunct Professor of Public Policy at Georgetown University’s McDonough School of Business.
At Harvard, he serves as a social scientist with affiliations at the Harvard Kennedy School’s Center for International Development, Harvard University’s Institute for Quantitative Social Science (IQSS), and the Davis Center for Eurasian Studies, and he is an incoming faculty affiliate of the Berkman Klein Center for Internet and Society at Harvard.
He co-founded the machine learning research firm Nexus FrontierTech and the EdTech venture The Circular Economy Alliance, and he also co-founded The Chart ThinkTank and The AI Native Foundation. He was ranked by Thinkers50 in 2016 as one of the 30 rising business thinkers in the world and was shortlisted for the Breakthrough Award in 2019 and for the Strategy Award in 2023. He holds a doctoral degree from Ecole des Ponts ParisTech and lives and works across Boston, Geneva, and Dubai.
Terence Tse is a globally recognized educator, author, and speaker. He is a Professor of Finance at Hult International Business School and co-founder of Nexus FrontierTech, an AI company. He is also a visiting professor at ESCP Business School and Cotrugli Business School. His latest co-authored book, The Great Remobilization: Strategies and Designs for a Smarter Global Future, was nominated for the 2023 Thinkers50 Strategy Award. Terence co-authored two Amazon bestsellers, The AI Republic and Understanding How the Future Unfolds; the DRIVE framework from the latter earned a nomination for Thinkers50’s CK Prahalad Breakthrough Idea Award. Terence also authored Corporate Finance: The Basics, now in its second edition. He has appeared on numerous media platforms and has run workshops and consulted for global brands. He holds a doctoral degree from Cambridge Judge Business School and has a background in investment banking and consulting.