'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.
ABOUT 60 LEADERS
'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence - from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters - each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.
'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite for innovation. Discover ways that Innovation can help humanity solve big problems like climate change.
What are the ethical concerns associated with the general adoption of AI?
In 2020, a New England Journal of Medicine study looked at more than 10,000 patients across America and suggested that pulse oximeters overestimated blood-oxygen saturation more frequently in Black patients than in white patients. Normal oxygen saturation is 92-96%. In the study, some patients who registered normal levels on pulse oximetry had a saturation of less than 88% as recorded by the arterial blood-gas measure (the most accurate measure of saturation). For Black participants, this occurred 12% of the time – three times the rate for white participants. That is the difference between being admitted to the hospital and being sent home.
In 2013, the Journal of the American Medical Association found that women in four American regions had a 29% higher risk than men of their hip implants failing within three years of hip-replacement operations. A similar study from 2019 found that women were twice as likely to experience complications from cardiac device implants, such as pacemakers, within 90 days of such procedures. In both instances, the failure of device-makers to recognise physical differences between male and female body types was to blame.
While such findings are troubling in themselves, they also illustrate the deep-seated prevalence of bias. Bias is omnipresent. It may be machine-learning bias (selection, exclusion, observer, etc.) or psychological and cognitive bias (the Dunning-Kruger effect, the Gambler's Fallacy, the Ben Franklin effect, etc.). Newer additions include the Google effect and the IKEA effect. Companies like Netflix, Nike, and Google effectively exploit these biases, using the algorithms that run their platforms and marketplaces to deliver what they want us to see.
Several questions arise from this. Will AI be a threat to humanity? Unlikely for the next 30-40 years at least; we are still far from Artificial General Intelligence. Is AI a threat to our jobs? Very much so: approximately 45% of low-education workers will be at risk of technological unemployment by 2030. Can we trust the judgment of AI systems? Not yet is the best answer for now, and for several reasons – bias being one of them.
A classic example of bias in the Computational Linguistics world is the well-known word-vector equation king – man + woman = queen. If we substitute the word 'doctor' for the word 'king', we get 'nurse' as the female equivalent of 'doctor'. This undesired result reflects existing gender biases in our society and history: if doctors are generally male and nurses usually female in most available texts, that is what our model will learn.
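The analogy arithmetic above can be sketched with a few lines of Python. The 2-D vectors below are illustrative toy values, not taken from any real trained embedding; real systems use vectors with hundreds of dimensions learned from large corpora, but the mechanism – nearest neighbour to vec(a) − vec(b) + vec(c) by cosine similarity – is the same.

```python
import math

# Toy 2-D word vectors (illustrative values, not from a real model):
# the first axis loosely encodes the profession/royalty concept, the
# second the gender association a corpus attaches to each word.
VECTORS = {
    "king":   (0.9,  0.7),
    "queen":  (0.9, -0.7),
    "man":    (0.1,  0.7),
    "woman":  (0.1, -0.7),
    "doctor": (0.5,  0.7),
    "nurse":  (0.5, -0.7),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def analogy(a, b, c):
    """Word closest to vec(a) - vec(b) + vec(c), excluding the inputs."""
    target = tuple(x - y + z
                   for x, y, z in zip(VECTORS[a], VECTORS[b], VECTORS[c]))
    candidates = [w for w in VECTORS if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(VECTORS[w], target))

print(analogy("king", "man", "woman"))    # -> queen
print(analogy("doctor", "man", "woman"))  # -> nurse: the learned gender bias
```

The second query shows how a bias baked into the training text resurfaces as a "correct" nearest neighbour: the model has no notion of fairness, only of co-occurrence geometry.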
Another well-known image-classification mistake occurred when Google Photos misclassified Black people as gorillas. While a single misclassification of this sort may not substantially affect the overall evaluation metrics, it is a deeply sensitive issue.
So how do we solve this? AI developers need large, carefully labelled, unbiased datasets to train Deep Neural Networks. The more diverse the training data, the more accurate the AI models. The problem is that gathering and labelling sufficiently large and rich datasets, which may contain anywhere from a few thousand to tens of millions of elements, is a time-consuming and very expensive process.
The use of synthetic data could be one answer. For example, a single labelled image costs about $6 from a labelling service, whereas we can generate the same labelled image for as little as six cents in the lab – a roughly hundredfold saving. But cost savings are just one aspect of the issue. Synthetic data is also key to dealing with privacy issues and to reducing bias, by ensuring you have the data diversity to represent the real world. Furthermore, synthetic data is sometimes better than real-world data because it is automatically labelled and can deliberately include unusual but critical data properties.
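A minimal sketch of the balancing idea: given a dataset where one demographic group is badly under-represented, generate synthetic samples for that group until it reaches parity. The "generator" here is deliberately simple – jittering real samples with Gaussian noise, in the spirit of SMOTE-style oversampling – whereas production pipelines typically use simulators or generative models; the group labels and dataset sizes are invented for illustration.

```python
import random

random.seed(42)  # reproducible illustration

def make_dataset():
    # A deliberately biased dataset: 90 samples from group "A",
    # only 10 from group "B". Features are just 2-D Gaussian points.
    data = [([random.gauss(0, 1), random.gauss(0, 1)], "A") for _ in range(90)]
    data += [([random.gauss(3, 1), random.gauss(3, 1)], "B") for _ in range(10)]
    return data

def synthesize(samples, n, noise=0.1):
    """Create n synthetic samples by jittering real ones (SMOTE-style)."""
    return [([x + random.gauss(0, noise) for x in feats], group)
            for feats, group in random.choices(samples, k=n)]

data = make_dataset()
group_b = [s for s in data if s[1] == "B"]
balanced = data + synthesize(group_b, n=80)  # bring "B" up to parity

counts = {g: sum(1 for _, grp in balanced if grp == g) for g in ("A", "B")}
print(counts)  # {'A': 90, 'B': 90}
```

Jittering only recombines information already present in the minority samples, which is why richer generative approaches (and genuinely diverse real data) matter when the under-represented group's true variation is not captured by the few samples you have.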
Though the synthetic data sector is only a few years old, more than 50 companies already provide synthetic data. Each has its own unique quality, often focusing on a particular vertical market or technique. For example, a handful of these companies specialise in healthcare applications. Half a dozen offer open-source tools or datasets, including the Synthetic Data Vault, a set of libraries, projects, and tutorials developed at MIT.
Approaches like these are an excellent start towards creating a more level playing field and 'fair' AI systems.