'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.
ABOUT 60 LEADERS
'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence - from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters - each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks from AI and what should be done to protect individuals and humanity.
'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite for innovation. Discover ways that Innovation can help humanity solve big problems like climate change.
To regulate or not? How should governments react to the AI revolution?
The exponential growth of digital data combined with computing power has brought Artificial Intelligence into a new era, offering extremely favourable prospects for its development and implementation in many sectors of the economy and society.
This is particularly true for security applications of AI, such as Facial Recognition Technology (FRT), crowd measurement tools, and tools to prevent and detect digital Child Sexual Abuse Material (CSAM). In the online information ecosystem, for example, AI is used for content moderation on social media platforms, i.e., for evaluating and moderating messages and videos that may violate the law, such as hate speech or disinformation. Algorithmic content moderation not only poses risks to privacy and freedom of expression; it may also erode the foundations of democracy when democratic speech (or even thought) becomes dangerous and low-harm behaviors such as 'online spitting' become the subject of law enforcement.
It is therefore key that governments 'lead by example' by adopting regulatory and policy frameworks based on a human-centered approach where the interests of citizens come first, in line with human rights, democracy, and rule-of-law standards. The premise of 'people first' is also the guiding principle of the European Commission's 2022 draft declaration on Digital Rights and Principles, which will guide the digital transformation in the EU. Furthermore, the importance of algorithmic transparency in preventing discriminatory uses of data is echoed in the European Commission's proposal of 15 December 2020 for a Digital Services Act (DSA), which enhances the transparency of automated content moderation mechanisms. A full application of human rights in the online sphere also requires adequate oversight and control to monitor the transparency of algorithms.
In addition, discriminatory, gender-biased AI systems should be prohibited, as masculine-dominant algorithms do not always detect hate speech aimed at women. Women are also, in general, more digitally vulnerable than men and under-represented in the ICT sector. The Belgian federal government therefore developed a 'Women in Digital' strategy to ensure that more women graduate in ICT-related studies, to promote the integration of women in the digital sector, and to ensure an inclusive and diverse digital society.
If human-centred AI becomes a reality, then we could even imagine a world where AI helps humans to strengthen (instead of eroding) democracy and fundamental rights. This would significantly increase trust in AI as a tool for the common good.
A best-practice example where AI is already used to design legislation in a human-rights-friendly way comes from South Korea, where the privacy regulator (the Personal Information Protection Commission, PIPC) is developing an AI system to prevent infringements of personal information regulations. The system will support decision-makers in evaluating personal-information infringement factors when enacting and amending laws and regulations.
If we take all of these ethical considerations into account, we will be able to fully utilize and enjoy the opportunities AI has to offer.