'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.
ABOUT 60 LEADERS
'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence – from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters – each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks from AI and what should be done to protect individuals and humanity.
'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite for innovation. Discover ways that Innovation can help humanity solve big problems like climate change.
What is the impact of AI on society and everyday life?
In the years ahead, Artificial Intelligence is poised to have profound impacts on society – in ways we are only now beginning to understand. These impacts will manifest in four areas: holistic interconnection, ubiquitous awareness, substitutionary automation, and knowledge creation.
1. Holistic interconnection means that everything in our lives will eventually be digitally enabled and thereafter interconnected in a true Internet of Everything (IoE) manner. This will permit AI systems to intelligently monitor all aspects of our lives (24/7), including us as individuals and all the infrastructure we use on a regular basis – our homes, appliances, entertainment devices, cars, laptops, mobile devices, health aids, and so on. Such holistic interconnection serves as the backbone for realizing truly ‘smart’ persons, smart homes, smart communities, smart cities, and ultimately smart nations. Eventually, everything will be able to communicate with everything else – and AI will ensure this is done in ways that benefit all.
2. Ubiquitous awareness means that AI systems – built atop holistic interconnection – will become fully aware of each person, of personal and societal infrastructure, and of how these all interact with one another, and will then make decisions and take actions on our behalf that benefit society in a range of ways. Imagine a person approaching their office building or a shopping mall: the facility becomes fully ‘aware’ of their presence (including their identity, and that of everyone else there), of where they currently are in the environment, and of where their assets (car, laptop, etc.) are. It then learns how best to accommodate their needs – their preferred office lighting and temperature, say, or the promotions being run at stores they frequent – all in a way that optimizes the whole, such as overall energy consumption. Eventually, everywhere we go, our environments will be completely aware of us and will, via AI, optimize themselves for us. In many ways, AI will come to know more about us and our patterns than we ourselves understand (in some places it already does). One important implication of holistic interconnection and ubiquitous awareness is that society’s notion of ‘privacy’ will have to change – toward one that is far more comfortable with individual data being shared openly across systems. In due time this societal norm will shift, and the conversations around privacy in future generations will look very different from those of the present one.
3. Substitutionary automation means that AI will empower numerous automated systems to become fully autonomous in their operation, and consequently able to deliver value without the need for human oversight or intervention. Clear examples are fully autonomous vehicles and transportation systems, fully automated business processes, and fully autonomous professional services (such as legal and accounting services). In practice, this means that many tasks that presently consume (waste) our time – routine driving, routine data processing, and so on – can be relinquished to automated systems, freeing us to focus our time, energy, and efforts on more creative and novel tasks – tasks for which the human mind is best suited.
4. Knowledge creation refers to something that AI has already started to do, namely synthesize new knowledge that did not exist previously (usually via adaptive pattern recognition) – interconnecting points of insight that were previously unconnected. It is this area of AI knowledge creation that is poised to grow exponentially over the coming decades. And not only will it accelerate; via a self-reinforcing cycle, it will start to generate its own queries and learning loops, so that it is not just synthesizing new answers to pre-existing questions but actually synthesizing new questions needing to be answered. This will permit AI to better address looming human challenges such as climate change, food security, economic stability, poverty eradication, and disease eradication – areas in which next-generation AI holds incredible promise, especially when coupled with powerful new computing methods like quantum computing.
Most AI scholars agree that, as this acceleration continues, there will come a point at which AI generates new knowledge faster than humans can absorb and apply it – the point at which AI surpasses human (natural) intelligence, and only AI can make use of the new knowledge it creates. This is the singularity, which will most likely occur somewhere around the mid-twenty-first century. One key ramification of the singularity is that it creates a prediction wall, beyond which we can no longer forecast what the future will look like – because we have no idea what AI will end up doing past that point. The singularity thus presents us with a serious unknown ahead.
There are, of course, key risks with AI. While AI can certainly be used for good to optimize our lives, it can also be used for equally destructive purposes, such as learning how to wage the most effective wars and cyberattacks against different groups. There is also the ultimate risk: that AI itself will become both sentient (fully self-aware) and malevolent (rather than benevolent) toward humanity, unleashing some form of ‘war’ against humanity – to either subdue or eradicate it.
The first risk – that of humans misusing AI – is a challenging one to address. World bodies like the United Nations are – with the assistance of AI ethicists – already working to develop ethical guidelines for the appropriate uses of AI, and consequences for its systematic misuse. The second risk – that of AI itself overriding human oversight and acting malevolently toward us – can most likely be addressed through discrete control mechanisms that cut power to AI systems. Of course, one could imagine the dystopian situation in which such AI systems foresee those human interventions and devise means (including autonomous war machines under their control) to prevent humans from employing such overrides. Many of these risk-mitigation practices will be worked out as we proceed, and will have to be approached very cautiously.