
'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.

ABOUT 60 LEADERS

'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence - from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters - each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.

DOWNLOAD (PDF 256 pages)

'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite for innovation. Discover ways that Innovation can help humanity solve big problems like climate change.

DOWNLOAD (PDF 286 pages)

"What are the ethical concerns associated with the general adoption of AI?"

Arathi Sethumadhavan


Artificial intelligence is increasingly informing crucial aspects of people’s everyday lives. Whether playing a part in the security apparatus for buildings, the screening of candidates for jobs, or the diagnosis and prediction of patient health, AI technologies are being used to facilitate countless essential services. As such, there are real human costs when systems do not perform reliably or equitably across the variations found in real-world settings.

More recently, we have seen several AI applications that perpetuate unfair biases. In 2020, unable to administer traditional exams due to the coronavirus pandemic, the United Kingdom's Office of Qualifications and Examinations Regulation decided to use an algorithm to predict students' A-level grades. Because the algorithm placed disproportionately high weight on schools' historical performance (which correlated with how wealthy the schools were), it was unfairly biased against students from poorer backgrounds.

Unfortunately, such algorithmic biases are not uncommon. The same year, in the midst of the Black Lives Matter protests occurring across the United States, an African American man, accused of shoplifting, was wrongfully arrested by the Detroit police due to an erroneous match from a facial recognition algorithm. Clearly, the danger with such biased systems is that they contribute to social and political imbalances and reinforce inequalities based on characteristics such as race, socioeconomic status, gender, sexual identity, and location.

Researchers describe five general harms that AI systems can create:

1. Allocation harm occurs when systems extend or withhold opportunities, resources, or information. There is a risk of this type of harm when AI systems are used to make predictions and decisions about how individuals qualify for things that can impact a person's livelihood (e.g., an AI system used in hiring that withholds employment opportunities for women).

2. Quality of service harm occurs when there is disproportionate product failure, and a system does not work as well for one person as for another (e.g., a speech recognition system that performs poorly for speakers of certain sociolects).

3. Stereotyping harm occurs when systems reinforce existing societal stereotypes (e.g., a translator that associates ‘she’ with a nurse and ‘he’ with a doctor when translating from a gender-neutral language to English).

4. Denigration harm occurs when a system is derogatory or offensive to a subset of users or to all users (e.g., a face recognition system misclassifying an African American as a primate).

5. Over/under-representation harm occurs when systems over-represent, under-represent, or even completely exclude certain groups or subpopulations (e.g., a search algorithm that returns more images of men than women when a ‘CEO’ query is used).
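Harms of this last kind can be made measurable. As a purely illustrative sketch, the short Python snippet below compares each group's share in a returned result list against a reference share; the function name, group labels, and toy data are all hypothetical, invented for this example rather than drawn from any real system.

```python
# Illustrative sketch (hypothetical data): measuring representation skew in
# a ranked result list, e.g. images returned for a 'CEO' query.
from collections import Counter

def representation_skew(results, reference):
    """Compare each group's observed share in `results` with its expected share.

    results   -- list of group labels for the returned items
    reference -- dict mapping group -> expected share in the population
    Returns a dict mapping group -> (observed share, expected share).
    """
    counts = Counter(results)
    total = len(results)
    return {g: (counts.get(g, 0) / total, share) for g, share in reference.items()}

# Toy example: 80% of returned images depict men vs. an expected 50/50 split.
returned = ["man"] * 8 + ["woman"] * 2
print(representation_skew(returned, {"man": 0.5, "woman": 0.5}))
# {'man': (0.8, 0.5), 'woman': (0.2, 0.5)}
```

Analogous disparity measures exist for the other harm types; they generally amount to comparing an outcome statistic across groups.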

To address biases proactively, here are a few questions to think through:

- Who are the stakeholder populations that will be affected by the AI system, and how would the AI impact marginalized groups?

- How should the community of impacted stakeholders be involved to define fair outcomes?

- What is the composition of the training data? What is in it, and what is missing? For example, a facial recognition system trained primarily on certain facial characteristics (e.g., light-skinned faces) may treat individuals who do not share those characteristics differently.

- Has the model been tested and benchmarked against affected subgroups? Are there disproportionate errors across subgroups? (A sketch of such a disaggregated check follows this list.)

- Have the ground truth labelers (i.e., those who are responsible for labeling datasets) been trained to reach a high level of domain expertise and overcome personal biases?

- How should users be informed about the limitations of the AI, so as to minimize overreliance?
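One way to operationalize the benchmarking question above is a disaggregated evaluation: compute error rates separately for each affected subgroup and compare them. The minimal Python sketch below does this for a hypothetical classifier's outputs; the column names ('group', 'label', 'prediction') and the toy data are assumptions made for illustration, not part of any particular system.

```python
# Minimal sketch (hypothetical columns): disaggregated evaluation of a
# classifier, i.e. error rates computed separately for each subgroup.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_col: str = "group") -> pd.DataFrame:
    """Return per-subgroup error rate, sample count, and gap to the best-served group."""
    errors = (df["label"] != df["prediction"]).astype(int)
    rates = df.assign(error=errors).groupby(group_col)["error"].agg(
        error_rate="mean", n="count"
    )
    rates["gap_to_best"] = rates["error_rate"] - rates["error_rate"].min()
    return rates.sort_values("error_rate")

# Toy data: the model never errs for group A but errs twice for group B.
toy = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":      [1,   0,   1,   0,   1,   0,   1,   0],
    "prediction": [1,   0,   1,   0,   0,   0,   1,   1],
})
print(error_rates_by_group(toy))
```

The `n` column also surfaces how well each subgroup is represented in the evaluation set, which speaks to the data-composition question above.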

While real issues exist, AI is not created with the intent to cause or introduce harm. In fact, many applications have the potential to positively inform the ways people live their everyday lives, from helping to fight poverty, to improving the agency of older adults, to assisting people with low vision in learning more about their physical surroundings.

Innovation should therefore not be stifled. In fact, these challenges offer valuable surface areas for fueling responsible innovation. However, it is now more important than ever to have in place key principles, governance structures, and robust development processes that enable the responsible development of AI technologies.

"While real issues exist, AI is not created with the intent to cause or introduce harm."

Arathi Sethumadhavan is the Head of Research for Ethics & Society at Microsoft, where she works at the intersection of research, ethics, and product innovation. She focuses on AI and emerging technologies such as computer vision, natural language processing, mixed reality, and intelligent agents. She is also a recent Fellow at the World Economic Forum, where she worked on unlocking opportunities for positive impact with AI.


