
'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.

ABOUT 60 LEADERS

'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence - from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters - each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.

DOWNLOAD (PDF 256 pages)

'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite in innovation. Discover ways that innovation can help humanity solve big problems like climate change.

DOWNLOAD (PDF 286 pages)

To regulate or not? How should governments react to the AI revolution?

Badr Boussabat


Artificial intelligence is a unique technology - it has an impact on all aspects of society, sometimes simultaneously. As a result, it challenges our societies and sometimes upsets how they are organized. Generally, it is the biases of certain AI systems that worry people and authorities and raise the need for an ethical approach to AI.

An ethical approach to AI is fundamental, but it constitutes a colossal challenge for legal thought. Indeed, AI develops at a much faster pace than legal texts do. The conventional approach to regulating AI generally starts legal reflection from a ‘principle’. For example, when authorities want to legislate on the use of data by AI systems, they establish the principle of ‘invasion of privacy’ and apply it to all AI systems that use data, without nuance. This method, while commendable, has virtually no effect on the healthy control of AI: a principle is vague by definition, and AI produces excesses that often escape such basic principles.

It is therefore necessary to leave the conventional, principle-based approach and move towards a more pragmatic one - in other words, a consequentialist approach. This method, which I consider more effective, reverses the usual direction of legal work. The consequentialist approach would first list all the concrete excesses of AI and group them by their ‘degree of danger’ to society. The legislator would then review all AI applications on the market to create a second list.

That second list would catalogue the technical biases identified in algorithms, ranked by how frequently they occur in AI systems on the market. The next step would be to determine whether the technical biases identified in AI applications cross the corresponding ‘degree of danger’ or not. Finally, the legislator would be able to distinguish algorithmic biases according to their potential dangers and extend the legislative logic from there. This logic would therefore not rest exclusively on a starting principle but, instead, on a societal and technical understanding of AI.

A major challenge is how to assess the level of risk at the start of the legal work, because algorithms drift over time. Even so, the legal work will be much more concrete, and the consequentialist approach will make it possible to close the logical distance that exists between current ethical recommendations and the excesses of AI that, for the most part, escape them.

In conclusion, AI is an exceptional technology in terms of its benefits. But it also carries risks, and as a result it requires legal work that is out of the ordinary. Why a consequentialist approach and not a principle-based one? Two reasons should motivate this paradigm shift: on the one hand, AI develops far faster than conventional legal texts can adapt; on the other, traditional regulation is ineffective - the excesses of AI too often escape it.

"It is the biases of certain AI systems that worry people and authorities and raise the need for an ethical approach to AI."

Badr Boussabat is the President of AI TOGETHER, a serial entrepreneur, AI speaker, and author of "L'intelligence artificielle: notre meilleur espoir" (2020) and "L'intelligence artificielle dans le monde d'aujourd'hui" (2021). Badr has given 80+ talks to major financial institutions, the Gulf Cooperation Council, the World Economic Forum, banks, public authorities, SMEs, and universities. He is the most quoted AI specialist in the Belgian press and has participated in 30+ TV shows discussing the inclusive use of AI.


Badr Boussabat

"With a consequentialist approach, we can reduce the logical distance between ethics and the real excesses of AI."

President, AI TOGETHER

MEET THE LEADERS

ARTIFICIAL INTELLIGENCE


Created by many, offered to all. Help us reach more people!
