
'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.
ABOUT 60 LEADERS
'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence – from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters – each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.
'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite for innovation. Discover ways that Innovation can help humanity solve big problems like climate change.
Q. What is the impact of AI on society and everyday life?
Alf Rehn
As AI (and algorithmic logics in general) becomes omnipresent, the societal implications are getting more profound by the week. Whereas some still see AI as a specialist tool, e.g. as something for pharmaceutical researchers or document management experts, its larger impact will affect any and all human activities. We may not yet be in a world in which AIs decide on everything, from what innovations to invest in and what social programs to fund, but we are far closer to this than most people realize. Whereas AI-driven decisions were a flight of fancy just five years ago, today more decisions than you might be comfortable with are, at least in part, driven by algorithmic logic.
When discussing how individuals and societies will be affected by AI, it is important to balance the benefits with the potential risks. The short-term benefits are ample and easily understood – AIs can take over dreary, repetitive jobs and free people to realize their potential, while getting algorithms involved in decision-making can limit both the errors and the biases that humans are prone to introducing. By leaving decisions to an algorithm, we can make sure that innate human limitations – biases, insufficient information, moods – are not driving the big decisions that society needs to take.
That said, we often forget that AI has both short- and long-term impacts on society. In the short term, it may well look like AI is nothing but a net positive for society. AI can help us sort out issues such as suboptimal urban planning, or deal with racial bias in sentencing decisions. It can help clarify the impact of credit scores, or ensure that the mood of a doctor doesn’t affect a medical diagnosis. What unites these cases is that it is very easy to spot bias or errors in the way the AI functions. An AI that does urban planning in a way that marginalizes certain ethnic groups will be found out, and an AI that misdiagnoses cancer will be caught. These are all cases of what I have called ‘short bias’ – errors that algorithmic logic can fall into through insufficient data or bad training.
But what about those cases where an AI influences decisions that have long trajectories, and where the impact might not be known for years or decades? Imagine that an AI is programmed to figure out which of four new research paths in energy production should be supported and financed. One is known and tested, two are cutting edge but with great potential, and the last one is highly speculative. Unless the AI has been programmed to take great risks, it is likely to suggest that the speculative program is cut. Yet, we know that many speculative ideas – antibiotics, the internet, and female suffrage come to mind – have turned out to be some of the best ideas we’ve ever had.
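To see how this plays out mechanically, consider a minimal, purely illustrative sketch – not from the book – of the kind of risk-averse expected-value scoring such a funding AI might plausibly encode. Every path name, probability, and payoff below is an invented assumption; the point is only that a scorer anchored to historical estimates will rank the speculative path last.

    # Illustrative only: a toy risk-averse scorer for research funding decisions.
    # All names and numbers are hypothetical assumptions, not real data.

    research_paths = {
        "known and tested":   {"p_success": 0.90, "payoff": 1.0},
        "cutting edge A":     {"p_success": 0.40, "payoff": 4.0},
        "cutting edge B":     {"p_success": 0.35, "payoff": 5.0},
        # The true payoff of speculative ideas was unknowable in advance;
        # any estimate built from historical data will understate it.
        "highly speculative": {"p_success": 0.05, "payoff": 10.0},
    }

    RISK_AVERSION = 2.0  # >0 penalizes uncertainty: the 'programmed' appetite for risk

    def score(path):
        """Expected payoff, discounted by the likelihood of failure."""
        p, payoff = path["p_success"], path["payoff"]
        return p * payoff - RISK_AVERSION * (1 - p)

    ranked = sorted(research_paths, key=lambda name: score(research_paths[name]),
                    reverse=True)
    print("Fund first:", ranked[0])   # -> known and tested
    print("Cut first: ", ranked[-1])  # -> highly speculative

Under these made-up numbers, the speculative path is cut not because it is worthless, but because its upside is invisible to a scorer trained on historical outcomes – which is exactly the problem the next paragraph names.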
What is at play here is something I have given the name ‘long bias’, i.e. the issue of potential long-term negative consequences of AI decisions that are difficult to discern in the here and now. AI is exceptionally good at handling issues where the parameters are known – whether a cat is a cat, or whether a tumor is a tumor. These are also issues where humans can quickly spot the errors of an AI. When it comes to more complex phenomena, such as ‘innovation’ or ‘progress’, the limitations of algorithmic logic can become quite consequential. Making the wrong bet on a speculative technology (and let’s be clear, there was a time when the car was just that) can affect society not just today, but for a very long time afterwards. Cutting off an innovation trajectory before it has had a chance to develop is not merely to say no in the present; it is to kill every innovation that might have followed, and an AI would not care.
In this sense, AI is a double-edged sword. It can be used to make decisions at a speed that no human can match, with more information than any group of humans could process. This is all well and good. On the other hand, by sidelining the imagination and bravery that humans excel at, we may be salting the earth for technologies we’ve not even considered yet. AIs work with data, and all data is historical – as the investment banks say, “past performance is no guarantee of future results”.
With this in mind, it is far too early to be wishing for a technological singularity, a state of affairs where infinitely wise AIs can guide us in our technology exploration. On the contrary, when it comes to innovation it is of critical importance that we retain the human capacity to imagine and dream, and ensure that we are not letting data do all the driving. AI can help us solve massively complex problems, but the keyword here is ‘help’. The human capacity "to see a world in a grain of sand / and a heaven in a wild flower", as Blake put it, needs to be protected, to ensure that AI only augments our capacity to innovate, rather than defining it.
"AI is exceptionally good at handling issues where the parameters are known – whether a cat is a cat, or whether a tumor is a tumor."
Professor Alf Rehn is a globally recognized thought leader in innovation and creativity, as well as a keynote speaker, author, and strategic advisor. See alfrehn.com
