'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.
ABOUT 60 LEADERS
'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence – from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters, each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.
How should societies get prepared for AI?
To imagine a world of AI, it would be easy to direct the interested reader to the realm of science fiction. From films such as The Terminator, I, Robot, and 2001: A Space Odyssey to video games such as Deus Ex, creative minds have marvelled over the potential benefits and dangers of artificial intelligence. The promise of AI lies largely in its ability to process vast amounts of information and arrive at unbiased, accurate, or ‘correct’ decisions that are fair to millions of people. Imagine the benefits to the electorate, to government, to society… For now, innovative commercial uses abound, with examples including analytics (e.g., Anodot), assisted decision-making and marketing campaigns (e.g., Peak, AI.Reverie, Frame.ai), insurance (e.g., Arturo, Inc.), code analysis (e.g., Comet.ml, Metabob.com), cybersecurity (e.g., MixMode, Socure), autonomous vehicles (e.g., Pony.ai), and open banking (e.g., Cleo). But what might a society relying on AI look like?
Part of the answer is that AI can already mimic or simulate human interaction. For example, Jill Watson, an AI teaching assistant developed at Georgia Tech, answers student questions so realistically that students do not even realize they are interacting with an AI rather than a human professor; Google Duplex acts as a phone assistant, using Natural Language Processing to hold conversations of human quality; and GPT-3 generates human-like text in response to the questions put to it. Potentially, then, AI can support humans’ roles in society or even replace them.
However, its ability to mimic and simulate human interactions by absorbing and processing a plethora of existing (and historical) information carries many serious, and not necessarily foreseen, consequences. For example, machine-learning-based AI can learn to be racist. In the US, a justice system frequently faced with charges of racial bias turned to technology to create unbiased profiles of criminals, only to find that the algorithms themselves developed a racial bias. In other cases, a LinkedIn advertising program showed a preference for male names in searches, and Tay, a Microsoft chatbot, distributed antisemitic messages after spending a day learning from Twitter.
AI and machines are not infallible, and the assumption that they will operate with objectivity is flawed, both because of how they are programmed and because the information they feed on contains inaccuracies, biases, or flaws, whether current or historical. Human intervention in the training of machine learning systems cannot be overlooked. The quality of the data fed into the machine matters substantially to the quality of the outcome. Training is key, and volume helps, but neither removes the need for human and societal oversight of machine learning and AI, and for careful interpretation of their outcomes, to ensure that a system is not learning incorrect information or trends, or perpetuating flaws originally present in the data.
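The way flaws in historical data carry through into a model's decisions can be sketched with a toy example. The hiring records and the naive frequency-based "model" below are purely illustrative assumptions, not any real system or dataset; the point is only that a model trained on biased outcomes reproduces that bias even when the underlying qualifications are identical across groups.

```python
# Illustrative sketch: a model that learns from historically biased
# decisions reproduces the bias. All data below is invented.
from collections import defaultdict

# Hypothetical historical hiring records: group A was favoured in the
# past, even though both groups have the same share of qualified people.
historical_records = [
    {"group": "A", "qualified": True,  "hired": True},
    {"group": "A", "qualified": False, "hired": True},   # past favouritism
    {"group": "A", "qualified": True,  "hired": True},
    {"group": "B", "qualified": True,  "hired": False},  # past bias
    {"group": "B", "qualified": True,  "hired": True},
    {"group": "B", "qualified": False, "hired": False},
]

def train(records):
    """'Learn' P(hired | group) by simple frequency counting."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        hired[r["group"]] += r["hired"]
    return {g: hired[g] / total[g] for g in total}

model = train(historical_records)
print(model)  # → {'A': 1.0, 'B': 0.3333333333333333}
```

Both groups are two-thirds qualified, yet the learned hiring rate for group A is three times that of group B: the model has faithfully absorbed the historical bias, which is why oversight and interpretation of outcomes remain essential.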
Business and government both have a role to play in educating society about the usefulness and limits of AI, but the question is who should provide oversight. Society is a major stakeholder in AI, yet it is too distant and disparate to impose effective oversight or control. It therefore falls on government, business, and non-governmental institutions to ensure, potentially through regulation, that AI properly serves the public good. In many ways, the potential of AI lies in augmenting human agency and decision-making, not replacing them. Many opportunities lie at this interface, as does the task of managing the potential Janus face of machine learning and AI.