'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.
ABOUT 60 LEADERS
'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence - from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters - each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks from AI and what should be done to protect individuals and humanity.
'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite for innovation. Discover ways that Innovation can help humanity solve big problems like climate change.
What are the ethical concerns associated with the general adoption of AI?
From philosophy to programming. From Plato to Python. Artificial Intelligence raises a diverse set of challenges that are rooted in values and go far beyond technology. The massive power and general adoption of AI raise a series of ethical concerns and tough questions: How do we make algorithmic decision-making fair? How do we guarantee that AI will augment humans instead of replacing them in the quest for economic growth and prosperity? What criteria should define the limits of human oversight over technology?
No doubt, AI initiatives are already permeating many business and social domains. The increasing impact of AI decision-making brings as many promising opportunities for value creation as open questions on how to establish an actionable ethical framework. The speed at which AI is transforming industries affects communities and behaviors and keeps generating new dilemmas. Considering that AI adoption will accelerate in the coming years – in the form of myriad digital services – it is important to address the major AI risks now.
While AI Ethics is relatively new as an applied field of knowledge, conversations around ethos were already a pillar of the Greek polis, where our civilization first outlined societies as such. The character of beliefs and ideals shaped by communities and nations is, by definition, intangible: it crystallizes in individual and collective behaviors – holding certain attitudes and acting in particular ways. AI Ethics is about shaping such a character around the application of AI, through a shared predisposition on what should be fostered or prevented when exercising AI.
This covers countless use cases across industries: from evaluating race bias in Computer Vision algorithms to guaranteeing fairness in financial credit eligibility across genders; from reasonably explaining the hows and whys behind an automated decision to finding a collaborative model between virtual and human customer-service agents; from ensuring the right to intimacy and privacy in geo-localized applications to deciding the level of human supervision that should apply to an AI-driven health initiative; from guaranteeing unmanipulated dialogues between brands and individuals to protecting vulnerable population groups from discrimination.
AI Ethics must be considered within its corresponding ecosystem, which comprises organizations and the communities they serve, professionals and individuals, and the private and public sectors. It requires direction and monitoring from both a long-term socio-economic and a day-to-day viewpoint, and from the perspectives of all involved stakeholders.
Ethics are embedded within our social agreement. For instance, while the idea of fairness may be abstract, humans recognize and accept regulations that settle it, even if some translation is needed depending on the context. From the individual subjectivity of values to the set of laws that embody nationwide criteria, there is a level of intersubjectivity – a mutual understanding of the way we define concepts such as discrimination or respect for diversity. Such agreements are paramount to harmonizing our perspectives as a collective, and often serve as a driver for new corporate guidelines and legislation. Much as those agreements formed the foundation for managing the polis, AI Ethics is now generating organizational, institutional, and social debate.
As organizations serve society, they must enrich their strategic leadership agenda with ways to manage or mitigate the risks associated with AI applications. There are many examples of such risks – for instance, the Apple Card case, the Twitter debate, and the Wall Street regulator's investigation.
The gender bias of the algorithm for extending Apple Card credit resulted in a series of user reports stating that the process was unfair: under similar conditions, men were eligible for much higher credit than women. Even Steve Wozniak shared that he had been given ten times the credit limit offered to his wife. This controversy over AI bias went far beyond an academic debate: the CEO of Goldman Sachs, the firm issuing and operating the card, had to publicly clarify the reasons behind the unfair outcomes and make the process more explainable.
Another example of gender bias is Amazon’s recruiting tool, which as early as 2015 was found not to be gender-neutral when rating candidates for various technical positions. The reason? The models had been trained on ten years of data on successful profiles which, given the male dominance of the industry over that period, did not equally represent female candidates.
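The kind of disparity reported in these cases can be made concrete with a simple check. The following sketch computes the demographic-parity ratio – the approval rate of one group divided by that of the other – on entirely hypothetical data (not the Apple Card or Amazon datasets); a ratio well below 1.0 is one common signal that a decision process deserves scrutiny.

```python
# A minimal sketch of one common bias check: the demographic-parity ratio.
# The data below is illustrative only, not drawn from any real case.

def approval_rate(decisions):
    """Share of positive (approved) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.

    Values near 1.0 indicate parity between the two groups;
    values below ~0.8 are often flagged (the 'four-fifths rule').
    """
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical credit decisions (1 = approved, 0 = rejected).
men = [1, 1, 1, 1, 0, 1, 1, 1]    # 87.5% approved
women = [1, 0, 0, 1, 0, 1, 0, 0]  # 37.5% approved

ratio = demographic_parity_ratio(men, women)
print(f"Demographic-parity ratio: {ratio:.2f}")  # 0.43 – well below 0.8
```

Real audits go further – controlling for legitimate input features, testing statistical significance, and examining error rates per group – but even this one-number summary shows how an "unfair under similar conditions" claim can be quantified.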
A responsible approach to AI development requires tools for risk assessment at all levels, including the identification of potential issues regarding bias, privacy, diversity, transparency, manipulation, and more. Over the last three years, governments worldwide have started a race to define guidelines around the ethical implications of AI. In 2019, the European Commission launched the Ethics Guidelines for Trustworthy AI, led by an independent expert group. Based on a public consultation that gathered and analyzed the views of 1,000+ participants (citizens, member states, industry experts, and academics), the AI Act was proposed in April 2021 – a regulatory proposal that defines what AI is and identifies the different levels of AI-related risk that need to be addressed.
While it is critical for organizations to adopt institutional frameworks and regulations like the AI Act, it is also of paramount importance to mobilize all players across industries, so that top-down and bottom-up approaches jointly work towards building trust in human-centric AI. To achieve this, organizations must adopt standards, processes, and controls to make AI systems trustworthy and compliant. Moreover, they must promote a culture of responsible, ethical, and trustworthy AI.