
'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.

ABOUT 60 LEADERS

'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence - from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters - each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.

DOWNLOAD (PDF 256 pages)

'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite for innovation. Discover ways that Innovation can help humanity solve big problems like climate change.

DOWNLOAD (PDF 286 pages)

What are the ethical concerns associated with the general adoption of AI?

Therése Svensson


When the risks of Artificial Intelligence are discussed, my experience is that the discussion quickly turns to the AI singularity, or to the use of AI technology in problematic use cases such as mass surveillance, raising concerns about violations of basic human rights. However, I would argue that these are not the risks we need to be most concerned about. In my view, the true risk of AI comes from the fact that when it is used in uninformed or careless ways, it can lead to inaccurate or even unfair results.

When it comes to the AI singularity, we are still too far from achieving general AI for this concern to keep the masses up at night. The application of AI in ways that harm a fair and democratic society is a bigger problem than the performance and behaviour of AI algorithms alone. Most research and new technologies can be used in ways they were never intended for and end up harming our societies. With that said, there is certainly a need for regulation and worldwide agreement to prevent the use of AI in unethical and unwanted ways. Such regulations are already being discussed and will likely be implemented soon, for example in the form of the proposed EU AI Act.

To understand why uninformed and careless use of AI systems poses a risk, let us consider the following hypothetical scenario: you have created a decision support system designed to help healthcare professionals identify possible signals of addiction among patients. Your system becomes an appreciated tool and is adopted across the healthcare industry. After some time, you learn that a patient was wrongfully accused of being an addict due to an incorrect prediction from your decision support system. You knew all along that your system would not always be correct in its predictions, as is the nature of a predictive model, but the system was meant only to support the healthcare professional, not to replace his or her judgement. This hypothetical example closely resembles an actual situation that occurred within the healthcare industry. One of the problems, in this case, was that the system was not used in the way it was intended; somewhere along the way, it transformed from being a decision support system into making the final decision on behalf of the healthcare professionals.

To take this addiction scenario further, let us consider the reasons for the incorrect prediction. In our thought example, the patient received a high addiction score because her historical prescriptions had been issued by several different doctors, suggesting that she was not seeing a single doctor, a behaviour often found among addicts. In reality, several of these prescriptions had been issued by veterinarians for her ill pets. The AI model did not distinguish her prescriptions from her pets' because they were all registered to her. If the addiction score had come with an explanation of what influenced it, the healthcare professional would have had a chance to discover that the strongest driver of the high score was a data issue, and that the model should not have been trusted in this particular case.
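
To make this concrete, here is a minimal sketch of what such an explanation could look like. The scoring model, feature names, and weights below are entirely hypothetical; they only illustrate the idea of surfacing each feature's contribution next to the score.

```python
# Hypothetical sketch: a simple linear risk score that reports which
# features drive it, so a clinician can question the underlying data.

FEATURE_WEIGHTS = {
    "distinct_prescribers": 0.6,    # many different prescribers raises the score
    "prescriptions_last_year": 0.2,
    "early_refills": 0.5,
}

def score_with_explanation(features):
    """Return the risk score and each feature's contribution to it."""
    contributions = {
        name: FEATURE_WEIGHTS.get(name, 0.0) * value
        for name, value in features.items()
    }
    total = sum(contributions.values())
    # Rank contributions so the strongest driver of the score is shown first.
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return total, ranked

# The veterinary prescriptions inflate 'distinct_prescribers'; seeing that this
# single feature dominates the score gives the clinician a chance to spot the
# data issue instead of taking the prediction at face value.
score, explanation = score_with_explanation(
    {"distinct_prescribers": 6, "prescriptions_last_year": 9, "early_refills": 0}
)
print(f"Risk score: {score:.1f}")
for feature, contribution in explanation:
    print(f"  {feature}: {contribution:+.1f}")
```
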
To help ensure that AI systems are used the way they are intended and to highlight their limitations, one proposed solution is to attach a content declaration to every AI model. This would contain information about how the model was created, which data was used, the expected accuracy of the model, and its intended usage.
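
As a rough illustration, such a content declaration could be represented as structured metadata shipped alongside the model. The fields and values below are hypothetical examples, not a standard and not a specification from the system described above.

```python
# Hypothetical sketch of a 'content declaration' attached to an AI model,
# similar in spirit to a model card. All names and values are illustrative.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class ContentDeclaration:
    model_name: str
    created_by: str
    training_data: str        # description of the data the model was built on
    expected_accuracy: str    # e.g. a validation metric and how it was measured
    intended_usage: str       # what the model is meant to support
    known_limitations: list = field(default_factory=list)

declaration = ContentDeclaration(
    model_name="addiction-risk-support-v1",
    created_by="Example Health Analytics team",
    training_data="De-identified prescription records from human patients, 2015-2020",
    expected_accuracy="AUC 0.82 on a held-out validation set; individual predictions can be wrong",
    intended_usage="Decision support for clinicians; never a standalone diagnosis",
    known_limitations=[
        "Cannot separate veterinary prescriptions registered to the patient",
        "Trained on one region's data; accuracy elsewhere is unknown",
    ],
)

# Displaying the declaration wherever a score is shown keeps the model's
# intended usage and limitations in front of the people acting on it.
print(declaration)
```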

My point is that if the healthcare professionals had been better informed about the intended usage of the model, its limitations, and how it worked, this case of unfair and inaccurate treatment of the patient could have been prevented. We grow up learning that the world isn't a fair place, but that shouldn't stop us from trying to make it one. AI technology has great potential to improve the lives of many, which is why it is important to use it responsibly.

"AI technology has a great potential to improve the lives of many which is why it is important to use it responsibly."

Therése Svensson is a Technical Specialist within IBM Data & AI. She previously worked at IBM Expert Labs as a consultant in the areas of Data Science, Predictive Analytics, and Ethical AI.


Therése Svensson

"AI systems should come with a content declaration describing the intended use and limitations."

Data Science & AI Ethics Solution Specialist

MEET THE LEADERS

ARTIFICIAL INTELLIGENCE
