
'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.

ABOUT 60 LEADERS

'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence – from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters, each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.


'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the C-suite’s role in innovation. Discover ways that innovation can help humanity solve big problems like climate change.


To regulate or not? How should governments react to the AI revolution?

Yannis Kalfoglou


Recent advances in Artificial Intelligence (AI) have caused concern, even alarm, among some of the world’s best-known luminaries and entrepreneurs, and in society as a whole. We have seen calls in the popular press for watchdogs to keep an eye on the uses of AI technology, for our own sanity and safety.

But is that a genuine call? Do we really need AI watchdogs? The key point to bear in mind is that a watchdog is not just a monitoring and reporting function; it should also have the authority and means to help regulate the market, ensuring that standards are adhered to and that companies develop AI in a legitimate manner. This is easy to say but hard to execute, given the extremely versatile nature of AI and its applications. AI brings many unknowns to the table, which makes it difficult even to begin establishing a standard in the first place.

AI verification and validation are not easy. We may encounter issues with the brittleness of AI systems, dependencies on data, and configurations that change constantly in order to keep improving performance (a key advantage of Machine Learning is that it continually learns and improves on its current state). And when we build AI systems that are non-modular, changing anything can change everything in the system. On top of that, there are known issues with privacy, security, and so on.

Importantly, AI systems open a Pandora’s box of thorny, sometimes unanswerable, ethical concerns for businesses. Ethics has always been a delicate issue for businesses, which need to make sure their behaviour aligns and complies with their values and with wider societal norms. Traditionally, they have managed ethical dilemmas through careful human oversight and subtle, subjective processes. The advent and proliferation of AI systems have changed that: automated, algorithmic decision-making creates new challenges because it reduces decision-making time to mere milliseconds, based on past data patterns and little, if any, contextual input.

Modern Machine Learning learns from historical data without context or common sense. As a result, many AI products on the market cannot adapt to context or to changing environments. Practitioners need to incorporate rigorous data provenance checks at design and development time, to ensure contextually sensitive information is considered when training ML models. Contextual information is key if we are to tackle one of the foundational issues of AI ethics: fairness.
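As an illustration, a design-time provenance check can be as simple as refusing to train until every dataset carries a vetted record of where it came from and what context it describes. The sketch below is a minimal example; the record fields, thresholds, and names are assumptions for illustration, not an established standard.

# A minimal sketch of a design-time data provenance check (illustrative;
# the field names and thresholds below are assumptions, not a standard).
from dataclasses import dataclass
from datetime import date

@dataclass
class ProvenanceRecord:
    source: str          # where the data came from
    collected_at: date   # when it was collected
    consent: bool        # whether usage consent is documented
    context: str         # setting the data describes, e.g. "UK retail, 2019"

def validate_provenance(record: ProvenanceRecord, max_age_years: float = 5.0) -> list:
    """Return a list of issues that should block training until resolved."""
    issues = []
    if not record.consent:
        issues.append("no documented consent for this data source")
    age_years = (date.today() - record.collected_at).days / 365.25
    if age_years > max_age_years:
        issues.append(f"data is {age_years:.1f} years old; context may have drifted")
    if not record.context:
        issues.append("no contextual description; fairness checks cannot be scoped")
    return issues

# Example: a stale export with no recorded context fails the check.
record = ProvenanceRecord("crm_export", date(2015, 3, 1), True, "")
for issue in validate_provenance(record):
    print("BLOCK:", issue)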

Getting context right is key, but we also need to account for the many interpretations of fairness, since it is technically impossible to satisfy all of them at once. Local, regional, and country-specific nuances will always challenge a mechanical notion of fairness embedded in an AI system by training on past historical data. One way to tackle this is to engage with stakeholders early on and agree on a specific metric of fairness. It is common to have different views on what ‘fair’ means, so incorporating those views through consultations with customers, and agreeing mitigation protocols prior to deploying the AI system, helps with acceptance and smooth operations. Likewise, it is common to identify bias in training data at design and deployment time. Engineering practice has progressed considerably and now allows practitioners to train on edge cases, try different ML models, and provide enough guidance and education on modelling that everyone in the organisation has a stake in ensuring the model works as intended and makes sense. However, we shall not expect our AI systems to be completely bias-free and fair for all. This will take time.
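To see why stakeholders must pick a metric up front, consider that two widely used fairness definitions can disagree on the very same decisions. The toy example below (synthetic data, illustrative group labels) shows a model that satisfies demographic parity while failing equal opportunity.

# Two common fairness metrics computed on the same toy predictions can
# disagree, so stakeholders must choose one up front. Data is synthetic.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])    # actual outcomes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])    # model decisions
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

a, b = group == "A", group == "B"

# Demographic parity: do both groups receive positive decisions at the same rate?
dp_gap = abs(y_pred[a].mean() - y_pred[b].mean())

# Equal opportunity: among truly positive cases, are both groups treated alike?
tpr_a = y_pred[a & (y_true == 1)].mean()
tpr_b = y_pred[b & (y_true == 1)].mean()
eo_gap = abs(tpr_a - tpr_b)

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.00 -- looks 'fair'
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.33 -- looks 'unfair'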

Biased data has existed for as long as societies have archived everyday life. From historically skewed records (in the 1950s most doctors were male) to word associations that encode stereotypes (pretty=female, strong=male), our societal digital archives are littered with all sorts of biased data. The real opportunity with AI ethics is to apply ethically aligned practices to all business tasks, so tackling ethics in AI can, and will, have a wider impact than just the AI systems we use. But this will take time, as society and businesses need to change and adopt new norms for measuring success: ethics is not treated as a necessity; profit-making and performance take precedence. The right incentives and procedures need to be in place so that businesses can prioritise ethics and embed it in their corporate culture. One way of doing that is to adopt by-design practices.
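Those word-level stereotypes are measurable. In the spirit of word-embedding association tests, the sketch below compares cosine similarities between word vectors; the three-dimensional vectors are made up for illustration and stand in for real embeddings.

# Measuring a word-stereotype association in an embedding space (illustrative;
# the 3-d vectors are invented, not real embeddings).
import numpy as np

vec = {
    "pretty": np.array([0.9, 0.1, 0.0]),
    "strong": np.array([0.1, 0.9, 0.0]),
    "female": np.array([0.8, 0.2, 0.1]),
    "male":   np.array([0.2, 0.8, 0.1]),
}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# A biased space places "pretty" closer to "female" than to "male",
# and "strong" closer to "male" than to "female".
for word in ("pretty", "strong"):
    gap = cos(vec[word], vec["female"]) - cos(vec[word], vec["male"])
    print(f"{word}: female-vs-male association gap = {gap:+.2f}")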

A by-design culture allows a company to treat ethics as a first-class citizen, not an afterthought. All companies should incorporate the notion of responsible AI by design; this should be as commonplace as climate change awareness or fair-trade supply chain agreements. Equally, companies and society should accept the possibility that an AI system might still be biased. However, as we progress rapidly towards a mechanistic, mathematically provable notion of fairness and of de-biasing data, one could demonstrate that an AI system discriminates less than humans do. This is important, as societal acceptance will take time, and success stories of ethical AI need attention and celebration to win over the sceptical consumer.
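How might one demonstrate that? One hedged approach is to audit the model’s decisions against the historical human decisions it replaces, using the same metric for both. The numbers below are entirely synthetic; a real audit would use matched cases and statistical tests.

# Comparing a model's group selection-rate gap against the historical human
# baseline it replaces. All rates are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
group = np.array(["A"] * 1000 + ["B"] * 1000)

# Historical human approvals: 60% for group A vs 40% for group B.
human = np.concatenate([rng.binomial(1, 0.60, 1000), rng.binomial(1, 0.40, 1000)])
# Model approvals after de-biasing: 55% for A vs 50% for B.
model = np.concatenate([rng.binomial(1, 0.55, 1000), rng.binomial(1, 0.50, 1000)])

def rate_gap(decisions):
    return abs(decisions[group == "A"].mean() - decisions[group == "B"].mean())

print(f"human selection-rate gap: {rate_gap(human):.2f}")  # ~0.20
print(f"model selection-rate gap: {rate_gap(model):.2f}")  # ~0.05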

There is help available: a plethora of frameworks and guides can get organisations started, and the last three years have seen an explosion of ethical frameworks developed around the world. Most of them are of limited direct use, because their aims are vague and general and they do not tell businesses how to achieve those aims. Even so, the presence of all these frameworks helps an organisation put the appropriate procedures for accountability in place.

It is also important not to reduce ethics to compliance, because the technology moves faster than any compliance regime can. Practitioners recommend bringing your audience, the consumers, into the discussion; this helps with fine-tuning the AI system and creates an affinity with the outcome.

Lastly, and certainly not least, one sobering admission from field experts is that the proportion of AI researchers who are female or from minority backgrounds is incredibly low. Most are white males, and this homogeneity feeds bias into AI. A more diverse and inclusive AI workforce is necessary, because AI ethics is not something that can be entirely automated: a human element will always be involved, and we had better make sure it is as diverse and inclusive as possible.

But there is a positive, progressive aspect to AI ethics: it can, for the first time in history, reveal the latent biases in our past data histories, and help us design and build a better approach. Organisations that can operate ethical, secure, and trustworthy AI will be more appealing to the conscious consumer of the 21st century.

"We shall not expect our AI systems to be completely bias-free and fair for all. This will take time."

Yannis Kalfoglou, PhD is a seasoned professional with over 30 years of experience in AI. Technically, Yannis is well versed in the art and science of engineering AI and building cutting-edge systems. Professionally, he led Ricoh’s customer-driven innovation programme in Europe and Invesco’s innovation programme, consulted clients on ethical uses of AI technology with PA Consulting, and delivered AI strategies for Samsung and BP. He currently leads Frontiers’ AI programme.

