
'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.

ABOUT 60 LEADERS

'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence - from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters - each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.


'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite for innovation. Discover ways that Innovation can help humanity solve big problems like climate change.


Q. What are the ethical concerns associated with the general adoption of AI?

Veena Calambur

There has been a massive acceleration in the adoption of Artificial Intelligence (AI) in all aspects of our lives. But how much can we trust these ubiquitous AI systems? Over the past several years we have seen numerous examples of AI gone wrong across multiple fields and industries - from racist facial recognition, discriminatory hiring, and credit-scoring predictive models to even genocidal chatbots. As a result, there are emerging concerns around the trustworthiness, reliability, fairness, privacy, transparency, and autonomy of AI that we must address to minimize societal harm.

A natural question to pose is: how can AI be biased? Many have the notion that AI algorithms, grounded in facts and data, must be the key to fighting biased human judgement and decision-making. Unfortunately, the systems designed to create and record the data points used to train these algorithms are ultimately human constructs that can be affected by both implicit and systemic biases.
For example, medical algorithms trained on longitudinal electronic health records are fully dependent on how individual healthcare professionals enter those records. Several studies show evidence of implicit biases affecting healthcare professionals in medical practice. If a doctor's implicit biases lead them to take a female patient's pain less seriously than a male patient's, resulting in fewer pain medication prescriptions, then an algorithm analyzing pain medication treatment patterns may pick up on these biases and further recommend limiting treatments to female patients.

Beyond the biases perpetuated by individuals, one of the most significant drivers of AI bias comes from systemic biases that are embedded across societal institutions. For instance, the history of the United States housing industry is rife with racist policies such as redlining, which barred African Americans from homeownership in certain neighborhoods. Even though these policies have been outlawed for over fifty years, we can still observe their ramifications: African American homeownership remains significantly lower, and banks in certain regions are still much more likely to grant mortgage loans to Caucasian borrowers. AI-based lending algorithms trained on historical housing data can learn these past racist policies and accelerate housing discrimination. Without any kind of intervention, AI algorithms can learn, codify, and perpetuate biases long into the future, all under the guise of objectivity.

If humans already engage in biased judgement and decision-making that leads to unfair outcomes, why does AI pose such a risk to humanity? AI technologies have the ability to automate biased decision-making at scale, and this could lead to widespread algorithmic discrimination. Given the presence of implicit and systemic biases in many of our recorded data systems, algorithms trained on these datasets often learn ‘standard’ scenarios and generalized behaviors that are not necessarily representative of the full population. Any individuals or sub-populations that deviate from the learned norms codified in the algorithm can experience negative impact and harm as a result.

This is further exacerbated by the lack of transparency and human autonomy over these AI systems. Some of the most notorious examples of AI causing societal harm occur when an AI system is deployed to trigger decisions automatically, without any oversight or human control. For example, social media newsfeed recommendation algorithms, through their automated content suggestions, have been shown to drive massive political polarization and a global rise in mis- and disinformation.

The role of AI is often obscured in these situations, which raises serious concerns about the lack of consent of the individuals influenced or impacted by it. Even in cases where an individual is aware of the algorithm's role in a prediction and wants or needs to contest the decision, it may be difficult or even impossible to explain the reasoning behind it - due to the complex and often black-box nature of many AI algorithms.

How can we go about addressing these very serious concerns? While it is impossible to claim we can remove all forms of bias, we can work to improve AI systems to ensure they are more equitable and inclusive. The simplest way to get started is by asking the right questions. There are several AI ethics checklist resources available, but below are a few key questions to start with and to revisit periodically throughout the development and lifecycle of an AI system.

1. What is the purpose and intended use of the AI system? Are there ways that it can be misused or used unintentionally?

2. Who are the ‘interactors’ of the AI system? More specifically, [a] Who are the primary intended end-users of the AI? Do they have an adequate understanding of, and autonomous control over, the system? And [b] Who may be influenced or impacted by the presence of the AI system, directly or indirectly? Are they and their needs well represented in the data and the algorithm?

3. Are the AI ‘creators’ (i.e., data scientists, machine learning engineers, supporting operations, business sponsors) aware of ethical AI issues? And [a] Are they building proper data and algorithm inspections into the development and monitoring of AI systems? [b] Are the development teams diverse, and do they support a culture of responsible AI?

There are a few steps that can be added to the AI development process to identify and mitigate bias. During exploratory analysis, AI practitioners should assess how well the available data represent the target population of the AI solution, and should analyze the target outcome across key sub-populations defined by demographics, protected characteristics, or customer segments. Once the AI model is trained, it should be evaluated for bias prior to deployment: these evaluations should include checks for model performance disparities, model explanation assessments, and reviews of model outcomes or endpoints, each broken down by sub-population, as in the sketch below. If any bias is found, mitigation methods such as matching algorithms, sampling techniques, or debiasing algorithms should be applied.
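As a concrete illustration of the representation and disparity checks described above, here is a minimal sketch in Python using pandas and scikit-learn. The dataset, the ‘group’ protected attribute, and the lending-style columns are entirely hypothetical; a real project would use domain-appropriate fairness metrics, and often a dedicated fairness library, rather than this toy example.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical lending-style dataset; all columns and values are synthetic.
df = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n),
    "debt_ratio": rng.uniform(0, 1, n),
    "group": rng.choice(["A", "B"], size=n, p=[0.7, 0.3]),  # protected attribute
})
df["approved"] = (
    (df["income"] / 100_000 - df["debt_ratio"] + rng.normal(0, 0.2, n)) > 0
).astype(int)

# Representation check: how does each group's share of the data compare
# to its share of the target population the AI solution will serve?
print(df["group"].value_counts(normalize=True))

features = ["income", "debt_ratio"]
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    df[features], df["approved"], df["group"], test_size=0.3, random_state=0
)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
pred = model.predict(X_test)

# Disparity check: compare model outcomes and accuracy by sub-population.
results = pd.DataFrame({"group": g_test.values, "pred": pred, "true": y_test.values})
summary = results.groupby("group")[["pred", "true"]].apply(
    lambda d: pd.Series({
        "selection_rate": d["pred"].mean(),
        "accuracy": (d["pred"] == d["true"]).mean(),
    })
)
print(summary)
# Large gaps in selection_rate or accuracy between groups flag a potential
# disparity that should be investigated and mitigated before deployment.
```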

To learn more about ethical AI, we should continue to review additional materials, particularly from marginalized communities who have already documented their experiences with AI and algorithmic discrimination. And we should all develop the culture and governance needed to support responsible AI development.

"Without intervention, AI algorithms can learn, codify and perpetuate biases long into the future."

Veena Calambur is a data scientist and AI ethicist. She has worked to research and adapt data analytics and machine learning capabilities, and will be starting a role in Workday's Machine Learning Trust Program. In the past, Veena worked as a data scientist at Pfizer, where she pioneered their enterprise-wide Responsible AI strategy and toolkit, and as a decision analytics associate at ZS Associates. Veena holds a bachelor's degree in Information Science and Statistics from Cornell University and is pursuing a PhD in Information Science at Drexel University.

