'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.
ABOUT 60 LEADERS
'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence – from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters, each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.
'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite for innovation. Discover ways that Innovation can help humanity solve big problems like climate change.
Will AI become capable of replicating or redefining itself?
It depends on how 'unintelligent' humans may become – how much intelligence we are willing to surrender, and how much freedom we are willing to offer to AI systems. If most human intelligence is transferred to computers – GPU-based and, eventually, quantum-based machines – AI will gradually lessen the need for human intelligence. Computers may move 'ahead' of humans, retaining the intelligence a human mind no longer needs. Currently, AI detects patterns that sit in the human realm: it builds capabilities on patterns that we understand well and can label in order to train AI models.
But there are patterns invisible to us, outside the human realm – and that is where the problem lies: when advanced AI can spot such complex patterns in an unsupervised mode, we may start losing control. Today's AI depends on human-curated patterns that help machines learn and develop 'narrow intelligence' capabilities. A narrow AI service may be a small piece of the puzzle, but if we have a large number of such services and let them interact and communicate, the combined model can become smarter over time and bigger patterns may evolve – a new intelligence system that is more 'human-like'.
GPT-3 is an example of a large-scale AI model that is setting the standard for how complex, connected AI models can deliver beyond-average, human-level performance. This is where we need to stop, or at least control, general intelligence talking to other general intelligence; otherwise, they will eventually enable 'superintelligence' scenarios. Then, superintelligence talking to superintelligence could create something we don't understand, and we may lose control. This is where the possibility of AI creating a new form of intelligence becomes viable. It is catalyzed by the rise of the GPU today, but it will be quantum computing soon: at some point, we will experience a new level of technology, one that we don't yet know, and we may realize that we are dealing with the natural evolution of advanced technology systems.
Advancements in AI could result in an intelligence offset that we don't understand, and if this is applied to autonomous machine learning, it will result in a paradigm shift in our society. Like genetically modified foods, artificial intelligence may enable 'digitally modified citizens' and 'intelligently controlled societies', and once this happens, there may be unthinkable hidden possibilities and risks that we cannot even fathom. The danger is that we facilitate an uncontrolled and unregulated world that no longer remembers how to rationalize – that is, how to apply human intelligence.