
'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.

ABOUT 60 LEADERS

'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence - from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters, each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.

DOWNLOAD (PDF 256 pages)

'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite for innovation. Discover ways that innovation can help humanity solve big problems like climate change.

DOWNLOAD (PDF 286 pages)

To regulate or not? How should governments react to the AI revolution?

Kashyap Kompella


Digital technologies pose a paradox. The barriers to entry are seemingly low. For example, anyone with an idea and a credit card can start a company and challenge the incumbents. But there is also a centralizing tendency and a winner-take-most dynamic at play.

This has given rise to BigTech, which arguably has more power than many nations today. Naturally, there are calls for regulating BigTech. Similarly, in Artificial Intelligence the odds favour deep-pocketed companies, because AI breakthroughs require massive amounts of data, huge computing power, and highly skilled talent. But the whys and hows of BigTech regulation are different from those of AI regulation. BigTech regulation is a detailed topic in its own right, so let’s limit ourselves to AI regulation here.

Let’s apply first-principles reasoning: why do we need AI regulation? To protect citizens and consumers. But protect them from what? From the harmful effects of AI and of products that use AI. But why does AI cause harm? Such questions lead us to the topic of AI Ethics and the careful consideration of algorithmic bias. Contrary to popular perception, AI performance can vary significantly based on factors such as the training data and the types of models used. Applied outside the data on which it was trained, an AI model can produce high error rates. Automated decision-making in areas where AI makes consequential mistakes is the elephant in the room. Reducing the risk of such errors is the goal of AI regulation.
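
To make this concrete, here is a minimal sketch (using scikit-learn; the data and model are illustrative assumptions, not from the original text) of how a model that scores well on data resembling its training set can degrade once inputs drift:

```python
# Illustrative only: a classifier trained on one distribution is evaluated
# on in-distribution data and on drifted data, and accuracy drops.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two well-separated clusters (classes 0 and 1).
X_train = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
y_train = np.array([0] * 500 + [1] * 500)
model = LogisticRegression().fit(X_train, y_train)

# Test set drawn from the same distribution as training.
X_in = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y_in = np.array([0] * 200 + [1] * 200)

# Drifted test set: same classes, but the inputs have shifted and overlap.
X_out = np.vstack([rng.normal(-0.5, 2, (200, 2)), rng.normal(0.5, 2, (200, 2))])
y_out = np.array([0] * 200 + [1] * 200)

print("in-distribution accuracy: ", model.score(X_in, y_in))
print("out-of-distribution accuracy:", model.score(X_out, y_out))
```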

Ethical AI (or Responsible AI) is about exercising a duty of care in how AI is used. This starts with a clear understanding of the limitations and blind spots of AI, both among the creators and developers of AI applications and among their operators. Based on this understanding, the right scope and boundaries for AI can be determined. Safeguards such as mechanisms for exception handling, human intervention, the ability to override AI decisions, human oversight, and governance are all part of Responsible AI.
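
As one way to picture these safeguards, the hypothetical sketch below accepts an AI decision only when the model is sufficiently confident and otherwise escalates the case to a human reviewer; the threshold, names, and reviewer callback are assumptions for illustration, not a prescribed design:

```python
# Hypothetical human-in-the-loop gate: low-confidence cases go to a person.
from dataclasses import dataclass
from typing import Callable

CONFIDENCE_THRESHOLD = 0.90  # assumed policy value, set by governance

@dataclass
class Decision:
    outcome: str       # what was decided
    decided_by: str    # "model" or "human"
    confidence: float  # the model's confidence in its own output

def decide(model_outcome: str, model_confidence: float,
           human_review: Callable[[str], str]) -> Decision:
    """Accept confident model outputs; route the rest to human review."""
    if model_confidence >= CONFIDENCE_THRESHOLD:
        return Decision(model_outcome, "model", model_confidence)
    # Exception handling / human intervention: a person decides, and can
    # override the model's suggestion entirely.
    return Decision(human_review(model_outcome), "human", model_confidence)

# Usage: a simple callback stands in for a real case-management queue.
reviewer = lambda suggested: "approved-after-review"
print(decide("approve", 0.97, reviewer))  # model decides on its own
print(decide("deny", 0.55, reviewer))     # escalated to the human reviewer
```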

Another dimension of Responsible AI is how much stakeholders trust it. Deep learning systems are not intuitive, and there is a "because AI said so" angle to their automated decisions. Even if an AI system performs better than the current baseline, a few highly publicised errors can undermine confidence in its reliability. Furthermore, if its decisions are challenged, there needs to be an easily understood rationale.

Technical and non-technical solutions are emerging to increase the transparency and explainability of deep learning systems. Documenting datasets, specifying the boundary conditions for AI usage, relying on more explainable AI techniques, and deconstructing how a model arrived at a decision (aka explainability) are some approaches to increasing transparency and trust. Trust in AI systems is a necessary condition for their adoption.
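
To illustrate what "deconstructing" a decision can look like in the simplest case: for a linear model, each feature's contribution to the score is its coefficient times its value, which yields a human-readable rationale. The sketch below is a toy under stated assumptions - the loan-screening framing, feature names, and data are all made up for illustration:

```python
# Illustrative explainability for a linear model: per-feature contributions.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed"]  # hypothetical

# Synthetic training data with a known linear relationship.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = (X @ np.array([1.5, -2.0, 0.8]) + rng.normal(0, 0.5, 300) > 0).astype(int)
model = LogisticRegression().fit(X, y)

# Explain one applicant: contribution = coefficient * feature value.
applicant = np.array([0.4, 1.2, -0.3])
contributions = model.coef_[0] * applicant

print("prediction:", model.predict(applicant.reshape(1, -1))[0])
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda pair: -abs(pair[1])):
    print(f"  {name:>15}: {c:+.2f}")
```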

As to the how of AI regulation, a wide range of regulatory tools and approaches is available. There is self-regulation, where industry participants voluntarily establish standards and best practices and agree to abide by them; organizations declaring their support for Responsible AI principles and standards is an example. Independent audits and third-party certification of compliance with standards define the next level.

We already have product safety, consumer protection, and anti-discrimination laws, and they also apply to products and services that use AI. When AI makes mistakes, the consequences depend on the context and use case. For example, autocorrect not working properly carries low stakes, while being charged with a crime because of an AI error has a massive impact and must be avoided. The bar for AI should be set higher where the cost of errors and the consequences of mistakes are high. That is exactly the level-of-risk approach to regulation currently being considered in the EU. Additionally, sector-specific regulation can bring contextual granularity.
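
A rough sketch of that level-of-risk idea follows. The four tiers mirror the risk categories in the EU's proposed AI Act (unacceptable, high, limited, minimal); the mapping of specific use cases to tiers here is an illustrative assumption, not the legal text:

```python
# Illustrative risk-tier mapping in the spirit of the EU proposal:
# obligations scale with the risk posed by the use case.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, human oversight, logging"
    LIMITED = "transparency obligations (disclose that AI is in use)"
    MINIMAL = "no extra obligations beyond existing law"

# Hypothetical use-case assignments for illustration.
USE_CASE_TIERS = {
    "social scoring by governments": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in USE_CASE_TIERS.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```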

AI is not only a horizontal technology with broad applications but also a dual-use technology, meaning it can be put to both good and bad uses. It requires updates to our regulatory approach and upgrades to our risk architectures. As a final note, AI easily crosses borders, so we also need global treaties and conventions, particularly for AI used in warfare and military applications.

"Automated decision-making in areas where AI makes consequential mistakes is the elephant in the room."

Kashyap Kompella is an award-winning industry analyst, best-selling author, educator, and AI advisor to leading companies and start-ups in the US, Europe, and Asia-Pacific. Thinkers360 ranked Kashyap as the #1 thought leader on AI globally (Oct 2021).


Kashyap Kompella
CEO, AI PROFS / RPA2AI Research

