'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.
ABOUT 60 LEADERS
'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence – from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters – each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.
Will AI become capable of replicating or redefining itself?
Over the last ten years, AI has progressed from theory to reality. AI is now powering advances in medicine, weather prediction, factory automation, and self-driving cars. People use Google Translate to understand foreign-language web pages and to talk to Uber drivers in foreign countries. Facial recognition apps automatically label our photos. And AI systems are beating expert human players at complex games like Go and Texas Hold ’Em.
The recent progress in AI has caused many to wonder where it will lead. Will AI create beneficial intelligent robots like C3PO from the Star Wars universe, or will AI systems develop free will and try to exterminate us like the Terminators of the eponymous movie franchise? Will we ever reach a point where AI systems develop consciousness and turn against us as they did in the movie and TV series Westworld? Will we ever reach a point where AI creates new forms of intelligence?
Today, AI systems that have human-level intelligence and consciousness only exist in science fiction. But will the progress in AI lead to systems with human-level intelligence and consciousness?
If you ask a 4-year-old child what will happen if you drop a glass, they will say it will break. Their commonsense knowledge of the world includes a basic understanding of gravity and the relative strengths of materials like glass and tile. If you ask a 4-year-old child if pigs can fly, they will say no and go on to explain that pigs don’t have wings. Their commonsense knowledge of the world includes the facts that animals need wings to fly, that pigs are animals, and that pigs don’t have wings. Moreover, a 4-year-old child has enough commonsense reasoning skill to conclude that pigs can’t fly and that glasses will break when dropped.
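To make the child's reasoning chain concrete, here is a toy sketch of rule-based inference over stored facts. All facts, names, and the single rule are invented for illustration; real commonsense reasoning is vastly richer than this.

```python
# Toy knowledge base of (subject, relation, object) facts.
# Note there is deliberately no ("pig", "has", "wings") fact.
facts = {
    ("pig", "is_a", "animal"),
    ("sparrow", "is_a", "animal"),
    ("sparrow", "has", "wings"),
}

def can_fly(creature):
    """Toy rule: an animal can fly only if it has wings."""
    is_animal = (creature, "is_a", "animal") in facts
    has_wings = (creature, "has", "wings") in facts
    return is_animal and has_wings

print(can_fly("pig"))      # False: no wings recorded for pigs
print(can_fly("sparrow"))  # True: sparrows have wings
```

The point of the sketch is the combination the child performs effortlessly: general facts plus a rule yield a conclusion about a case never seen before.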
However, today’s AI systems don’t have the reasoning capabilities of a 4-year-old child. They have no commonsense knowledge of the world and cannot reason based on such knowledge. A facial recognition system can match faces to names, but it knows nothing about those particular people or about people in general. It does not know that people use eyes to see and ears to hear. It does not know that people eat food, sleep at night, and work at jobs. It does not know that people commit crimes or fall in love.
In fact, most AI researchers acknowledge that paradigms like supervised learning, the paradigm behind facial recognition, machine translation, and most of the other AI systems that touch our daily lives, cannot progress into human-level intelligence or consciousness.
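The limitation is easy to see in miniature. A supervised learner is, at bottom, a learned mapping from inputs to labels; the sketch below uses a one-nearest-neighbor "classifier" over made-up two-dimensional face "embeddings" (the data, names, and dimensionality are all hypothetical). Whatever labels it produces, nothing about the people behind them is represented.

```python
def nearest_neighbor(train, query):
    """Return the label of the training example closest to `query`.

    `train` is a list of (feature_vector, label) pairs.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: sq_dist(ex[0], query))[1]

# Hypothetical face embeddings (2-D purely for illustration).
training_data = [
    ((0.1, 0.9), "Alice"),
    ((0.8, 0.2), "Bob"),
]

print(nearest_neighbor(training_data, (0.15, 0.85)))  # Alice
```

The system "recognizes Alice" only in the sense of returning the nearest stored label; it has no concept of who Alice is.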
For over 70 years, AI researchers have proposed many approaches toward building systems with human-level intelligence. All have failed. Today’s AI researchers are continuing to propose new approaches and are at the early stages of investigating these new approaches. Some researchers argue that the key to creating systems with commonsense reasoning is to build systems that learn like people. However, cognitive psychologists have been studying how people learn for a century. Progress has been made but we’re far closer to the starting gate than the goal line. How many more centuries until we reach the goal line and are ready to start programming intelligence into computers?
Similarly, some researchers suggest that we need to understand the architecture of the physical human brain and model AI systems after it. However, despite decades of research, we know only some very basic facts about how the physical brain processes information. Other researchers argue that while supervised learning per se is a dead-end, self-supervised learning may yet take us to the promised land. The idea is that self-supervised systems like GPT-3 will magically acquire commonsense knowledge about the world and learn to reason based on that knowledge. However, despite ‘reading’ most of the internet, GPT-3 didn’t learn a comprehensive set of facts about the world or gain any ability to reason based on this world knowledge.
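The self-supervised idea can be illustrated with a deliberately tiny sketch: a bigram model that learns to predict the next word from raw text alone, with the training signal coming from the text itself. The corpus and any "predictions" here are invented for illustration; systems like GPT-3 are enormously more sophisticated, but the supervision principle is the same, and so is the gap between word statistics and world knowledge.

```python
from collections import Counter, defaultdict

# Made-up training text; in practice this would be web-scale data.
corpus = "pigs cannot fly because pigs do not have wings".split()

# Count which word follows which: the label for each position is
# simply the next word, so no human annotation is needed.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in training."""
    return counts[word].most_common(1)[0][0]

print(predict_next("have"))  # a statistically likely continuation
```

The model reproduces plausible continuations of its training text, but nothing in it encodes why pigs cannot fly.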
Other researchers have proposed novel deep learning architectures designed to learn higher-level building blocks that can help AI systems learn compositionally. This is interesting, but again, these are very early-stage ideas and there is a high likelihood that, like all the great ideas of the past 70 years, they won’t pan out.
Others have argued that human-level intelligence will emerge as a by-product of the trend toward bigger and faster computers. Ray Kurzweil popularized the idea of the singularity, the point in time when computers become smart enough to improve their own programming. Once that happens, his theory goes, their intelligence will grow exponentially, and they will quickly attain a superhuman level of intelligence.
But how would processing power by itself create human-level intelligence? If I turn on a computer from the 1970s or the most powerful supercomputer of today, and neither has any programs loaded, neither computer will be capable of doing anything. If I load a word processing program on each of these computers, then each of them will be limited to performing word processing. If a quantum computer of the future is only loaded with a word processing program, it will still only be capable of word processing.
Both the optimism and the fear around achieving human-level intelligence and consciousness are grounded in the success of task-specific AI systems that use supervised learning. That success has naturally, but incorrectly, spilled over into optimism about the prospects for human-level AI. As Oren Etzioni, the CEO of the Allen Institute for AI, said, “It reminds me of the metaphor of a kid who climbs up to the top of the tree and points at the moon, saying, ‘I’m on my way to the moon.’”
Few people believe that science fiction notions like time travel, teleportation, or reversing aging will occur in their lifetimes or even their children’s lifetimes. You should put AI with human-level intelligence into the same category.