
'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.

ABOUT 60 LEADERS

'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence – from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters, each addressing one question through multiple answers that reflect a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.

DOWNLOAD (PDF 256 pages)

'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite for innovation. Discover ways that innovation can help humanity solve big problems like climate change.

DOWNLOAD (PDF 286 pages)

Q. What are the ethical concerns associated with the general adoption of AI?

Enrico Panai

The power of AI is relative to its capacity for agency, which is not monolithic: an AI system is composed of autonomous agents, a multi-agent system. In practice, many of the agents that make up an AI system have an extraordinary capacity for action over a very limited field. The actions they generate may have limited consequences. However, the sum of many morally-neutral actions can lead to a large morally-loaded consequence. The final consequence is not necessarily negative.

Take ecologically responsible behaviour, for example. If I turn off all the lights at home for one hour a week, this has no impact on the global environmental system. And perhaps it has no visible impact on my electricity bill either. However, if everyone in the world does the same thing, the action becomes relevant and therefore morally loaded (in this case, positively). This is an example of a multi-agent system composed of people, but there are also systems composed only of AI agents, as well as hybrids (humans, organizations, AI). Our fear comes from having no idea how a set of small morally-neutral or morally-negligible interactions results in a set of enormous morally-loaded consequences.
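To make the intuition concrete, here is a back-of-the-envelope sketch in Python; the wattage and household figures are purely illustrative assumptions, not data from the text:

```python
# Back-of-the-envelope sketch: individually negligible actions aggregating
# into a collectively relevant effect. All figures are illustrative assumptions.

HOUSEHOLD_LIGHTING_KW = 0.3   # assumed average lighting load per household (kW)
HOURS_PER_WEEK = 1            # one "lights off" hour per week
HOUSEHOLDS = 2_000_000_000    # assumed number of households worldwide

one_household_kwh = HOUSEHOLD_LIGHTING_KW * HOURS_PER_WEEK
all_households_gwh = one_household_kwh * HOUSEHOLDS / 1e6  # 1 GWh = 1e6 kWh

print(f"One household saves {one_household_kwh:.1f} kWh per week (negligible)")
print(f"Everyone together saves {all_households_gwh:.0f} GWh per week (morally relevant)")
```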

An AI agent can perform a task very well, much better than we humans can; but perhaps it uses data that is not of the required quality to make a decision. Any dataset may have been biased during recording, storage, processing, and so on. The quality of the data may be diminished statistically (if the sample is not representative), technically (if the sensors or recording technologies were not reliable), or cognitively (because people ‘injected’ cognitive biases into the dataset). Cognitive biases are particularly interesting because they have no defined perimeters; they overlap with one another. But perhaps that is what makes us human.
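As a purely illustrative sketch (not the author's example), the snippet below shows how a statistically non-representative sample distorts the picture an AI system learns from; the groups and recording rates are assumed:

```python
# Illustrative sketch: biased recording makes a dataset unrepresentative,
# so any statistic computed from it is skewed. All rates are assumptions.
import random

random.seed(0)

# True population: groups A and B are equally common (50% / 50%).
population = ["A"] * 50_000 + ["B"] * 50_000

# Biased recording: group A is far more likely to end up in the dataset,
# e.g. because the recording technology reaches that group more easily.
record_rate = {"A": 0.9, "B": 0.3}
dataset = [p for p in population if random.random() < record_rate[p]]

share_a = dataset.count("A") / len(dataset)
print("Share of group A in reality:  0.50")
print(f"Share of group A in dataset: {share_a:.2f}")  # roughly 0.75
```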

Let me put that another way. We are human, so our decisions are naturally biased. And some biases are absolutely natural. But when we see them in AI they become unbearable. So we try to eradicate them at their source: humans. This means that we try to change ourselves to make AI better. We are afraid that machines are like us: imperfect. In doing so, we underestimate how much more complex the social process of reducing prejudice is. And in any case, some biases cannot be removed, even from a dataset or an AI system.

What one must try to do is mitigate the risks in a continuous, evolutionary process. It is a bit like driving a car: you adjust the steering wheel all the time to stay on track. (This is simply the acceptance of uncertainty, a common-sense attitude that runs from Socrates all the way to extreme programming.)
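Read as a feedback loop, the steering analogy might be sketched like this; the drift and correction values are assumptions chosen only to show the behaviour:

```python
# Minimal sketch of risk mitigation as continuous correction: the drift is
# never eliminated, it is kept bounded by repeated small adjustments.
# Both numeric values are illustrative assumptions.

drift_per_step = 0.4       # how far the system wanders off track each step
correction_gain = 0.5      # fraction of the observed deviation corrected

deviation = 0.0            # distance from the intended track (0 = on track)
for step in range(10):
    deviation += drift_per_step               # uncertainty pushes us off course
    deviation -= correction_gain * deviation  # continuous small adjustment
    print(f"step {step}: deviation = {deviation:.2f}")
# The deviation settles around a bounded value instead of growing without limit.
```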

In short, what we are afraid of is not the speed of execution of AI systems, but the unpredictability of the consequences. There are risks, but there are also advantages.

And from an ethical point of view, it is also wrong not to exploit the positive opportunities of AI. The point is to be able to do so with an appropriate ethical awareness. What scares us is not driving fast, but driving fast while blindfolded.

"The sum of many morally-neutral actions can lead to a large morally-loaded consequence."

Enrico Panai is an Information and Data Ethicist and a Human Information Interaction Specialist. He is the founder of the French consultancies “Éthiciens du numérique” and BeEthical.be, a member of the French Standardisation Committee for AI, and a Fellow of ForHumanity.

