'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.
ABOUT 60 LEADERS
'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence - from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters - each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.
'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite for innovation. Discover ways that Innovation can help humanity solve big problems like climate change.
How is AI impacting the way businesses operate?
AI is a general-purpose technology, and as such, the number of use cases where it can add value is only limited by our imagination. The full spectrum of sectors such as agriculture, financial services, healthcare, and transportation already leverage advanced AI. Similarly, a broad array of organizational functions such as HR, marketing, sales, and finance already benefit from AI. In a few short years, it will be hard to conceive of any area of a business that is untouched by AI. Artificial Intelligence will be ubiquitous.
A good way to demonstrate the wide range of use cases for AI is to look at the many tasks an organization does on a daily basis. Any task with some of the following characteristics is well-suited for automation, or partial automation, with AI.
1. Discrete, stand-alone
2. Repeated frequently
3. Similar and repetitive in nature
4. Human intensive
5. Focuses on prediction, optimization, and pattern recognition
6. Has clear inputs and outputs
7. Has lots of data to learn from
The more of these characteristics a task has, the more likely AI can help. This is why AI is already used widely in responding to routine customer enquiries via service chatbots, predicting customer churn in marketing, screening candidates' resumes and CVs, identifying quality defects of parts on a manufacturing line, predicting supply chain demand, and identifying the signature of abnormal behavior on a computer network.
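The checklist above can be read as a simple scoring heuristic: count how many characteristics a task exhibits, and prioritize the tasks with the highest counts. A minimal sketch of that idea, in Python, is shown below; the characteristic names and the threshold of four are illustrative assumptions made for this example, not part of any formal framework.

```python
# Illustrative sketch only: a screening heuristic based on the seven
# characteristics listed above. Names and threshold are assumptions
# made for this example, not a formal assessment methodology.

AI_SUITABILITY_CHARACTERISTICS = [
    "discrete_standalone",
    "repeated_frequently",
    "similar_repetitive",
    "human_intensive",
    "prediction_or_pattern_recognition",
    "clear_inputs_outputs",
    "ample_training_data",
]

def ai_suitability_score(task: dict) -> int:
    """Count how many of the seven characteristics a task exhibits."""
    return sum(1 for c in AI_SUITABILITY_CHARACTERISTICS if task.get(c, False))

def is_good_ai_candidate(task: dict, threshold: int = 4) -> bool:
    """A task matching most characteristics is a promising AI candidate."""
    return ai_suitability_score(task) >= threshold

# Example: triaging routine customer enquiries with a service chatbot
chatbot_task = {
    "repeated_frequently": True,
    "similar_repetitive": True,
    "human_intensive": True,
    "clear_inputs_outputs": True,
    "ample_training_data": True,
}
print(ai_suitability_score(chatbot_task))  # 5
print(is_good_ai_candidate(chatbot_task))  # True
```

In practice such a score is only a first filter for building a portfolio of use cases; each shortlisted task still needs assessment of data quality, risk, and business value.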
AI today is not only embedded in the activities of a company but also embedded in the products and services provided to its customers. Automobiles are full of AI, such as automated parking assistance (if not full parallel parking ability). Online services from Google and Facebook are driven by AI algorithms that dictate what search results or advertisements are delivered to a particular consumer. Amazon’s Alexa smart speakers use AI to transcribe spoken commands into text that can be acted upon. Apple iPhones use facial recognition to identify users and provide secure access to banking apps. The examples from everyday life are too numerous to list.
AI is not just for Big Tech. The key challenge for organizations not “doing AI” is to put in place the foundations necessary for AI innovation and implementation. This involves six steps from setting out the plan to managing risks:
1. Plan. Have a business plan for AI that includes a portfolio of use cases, a roadmap for development and deployment, and metrics of success.
2. People. Mobilize leadership, cross-functional teams, and appropriate engineering, data science, project management, and product management skills to do AI.
3. Data. Invest in quality, labelled, unbiased and inclusive data to drive AI models and systems – this can often be 80% of the effort on an AI project.
4. Technology. Invest in a technology infrastructure that allows for rapid iteration and deployment of AI models and systems.
5. Operations. Ensure AI is operationalized in a production environment with workforce involvement, customer awareness, and constant monitoring and improvement of the performance of the AI system.
6. Risks. Manage the risks of AI.
For many companies, the most topical step is the last one: managing risks. The risks of AI in an organization can be considered in three areas:
1. Strategic. If a company does not adopt AI, it may lose out to competitors who have embraced AI technologies. Consumers have seen AI- and data-first companies, such as Amazon, encroach upon many traditional retail sectors. Many new challenger consumer banks are built from the ground up on an AI and data platform that can deliver a more personalized, robust, and satisfying customer experience than banks that are encumbered by legacy technologies.
2. Operational. There are myriad operational risks in running AI on a day-to-day basis. These range from safety issues if an AI system fails through to customer experience problems if it underperforms, for example, in customer service chatbots.
3. Legal and Ethical. These risks include ethical concerns over embodying bias in algorithms, risks over IP ownership, AI supplier contract risks, and the risks of not adhering to existing and new AI regulations.
Ultimately, AI is a technology that requires Board of Directors oversight, as it can lead to brand reputational damage along with significant financial damage if it goes wrong. Organizations need to proactively put in place AI governance frameworks to help identify, manage, and mitigate the risks of AI. Existing enterprise risk management frameworks should be complemented by having clear principles for using AI, having a Head of AI Risk and Compliance, ensuring fair, unbiased, and safe AI by design and default in engineering practices, and providing training across the organization on the risks of AI.