'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.
ABOUT 60 LEADERS
'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence – from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters – each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.
'60 Leaders on Innovation' is the book that brings together unique insights and 'practical wisdom' on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite for innovation. Discover ways that innovation can help humanity solve big problems like climate change.
How could Democracy benefit from AI?
Jennifer Victoria Scurrell
Education is everything. Many issues in the globalised world – be it climate change or global health crises like the Covid-19 pandemic – can be tackled by equipping people with critical-thinking skills. Especially with regard to political opinion formation and participation in democratic decision-making processes, digitalisation – and specifically Artificial Intelligence – can support us in gaining political literacy. In times of disinformation, algorithmic content curation, bot armies, and democratic backsliding, it is more important than ever to provide citizens with the right tools to help them make their choices independently, without nudging or manipulation.
So why not have your own personal Artificial Intelligence buddy? An AI that accompanies and supports you in political decision-making by acting as a sparring partner, discussing the political issues at stake with you. This conversational AI could provide insights based on your values and attitudes, grounding the argument in scientific facts. It would recommend news articles, algorithmically customised to your interests and stance, so you could learn more about the topic. In parallel, it would show you articles you normally would not see and read. In this way, the AI would lead you out of your filter bubble or echo chamber – which matters, because a good citizen in a democracy is a fully informed citizen. Moreover, as political and societal issues are often complex, the AI could break a topic down and explain it accurately, in the way most illuminating and accessible to you. The AI could even represent politicians standing for election in the form of holograms, so that you could discuss important issues and positions directly with their digital twins before deciding which candidate should receive your vote at the ballot box.
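The recommendation behaviour described here – serving articles aligned with your interests while deliberately mixing in views from outside your filter bubble – can be illustrated with a toy sketch. Everything below is hypothetical: the `Article` structure, the stance labels, and the `recommend` function are illustrative assumptions, not a description of any real system.

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    topic: str
    stance: str  # hypothetical label, e.g. "pro" or "contra" on the topic

def recommend(articles, topic, user_stance, n_aligned=2, n_opposing=1):
    """Return a mixed reading list: mostly articles aligned with the
    user's stance, plus a deliberate share of opposing views to
    counter the filter-bubble effect."""
    relevant = [a for a in articles if a.topic == topic]
    aligned = [a for a in relevant if a.stance == user_stance][:n_aligned]
    opposing = [a for a in relevant if a.stance != user_stance][:n_opposing]
    return aligned + opposing

pool = [
    Article("Carbon tax works", "climate", "pro"),
    Article("Green deal pays off", "climate", "pro"),
    Article("Costs of rapid transition", "climate", "contra"),
    Article("Vaccine logistics", "health", "pro"),
]
# A "pro" reader on climate still receives one "contra" article.
reading_list = recommend(pool, "climate", "pro")
```

The design point is simply that diversity is injected by construction – the opposing quota is part of the recommendation itself, not an afterthought.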
Stepping back from this utopia: the technology is there to support us in everyday life, as well as in complex situations such as thinking critically about the political decisions we must make. However, there are still many fundamental problems that scientists – together with the developers and providers of AI systems, and society at large – must address: privacy issues, the black-box problem, biased training data, and the risk of being hacked and manipulated. A personal AI buddy for political decision-making is still a far-off reality; recent incidents in the virtual realm of social media, as well as in political and societal life, demonstrate that humanity cannot yet handle AI technology in benevolent ways without slipping into maleficent enticement.
What can we do about it? When developing AI systems and technology, we should always think one step ahead: how could the tool be used in a harmful way, and how can we prevent that? If scientists, developers, tech providers, and policymakers follow a basic ethical framework for creating AI in a transparent way (see Beard & Longstaff 2018), society can regain and consolidate trust in technology, science, tech companies, and politics. If we prudently comply with ethical regulations, the utopian dream of living side by side with good and trustworthy AI might become reality, and we can use AI justly and with integrity, educating citizens to become more critical and informed in the process of democratic decision-making.