'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.

ABOUT 60 LEADERS

'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence - from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters - each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.


'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite for innovation. Discover ways that Innovation can help humanity solve big problems like climate change.


What are the ethical concerns associated with the general adoption of AI?

João Azevedo Abreu

Some of the most urgent ethical concerns regarding the massive power and general adoption of AI come down to issues of bias. If bias in general has been acknowledged as a major social problem, it represents an even greater challenge in the realm of AI, particularly because it is unclear how much control humans can hope to maintain over AI systems in the future - or even today.

As it is broadly understood, bias takes the form of unfair treatment of individual stakeholders and/or certain groups, resulting in ethnic, religious, and gender discrimination. Among the best-known examples are those related to legal decisions: AI-based risk-assessment tools labelled formerly imprisoned African American candidates for probation as higher-risk even though they did not subsequently commit new offenses, whereas their white counterparts were labelled lower-risk despite eventually re-offending. This is an example in which an AI-based legal deliberation has resulted in racism.
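
To make the kind of disparity at stake concrete, the short sketch below uses invented numbers (not data from any actual study) to show how error rates can differ between groups: one group is wrongly labelled high-risk (false positives), the other wrongly labelled low-risk (false negatives).

```python
# Illustrative only: invented labels and predictions, not real case data.
# For each group we compare the false positive rate (labelled high-risk but
# did not re-offend) with the false negative rate (labelled low-risk but did).

def error_rates(predicted_high_risk, reoffended):
    """Return (false_positive_rate, false_negative_rate) for one group."""
    fp = sum(1 for p, y in zip(predicted_high_risk, reoffended) if p and not y)
    fn = sum(1 for p, y in zip(predicted_high_risk, reoffended) if not p and y)
    did_not_reoffend = sum(1 for y in reoffended if not y)
    did_reoffend = sum(1 for y in reoffended if y)
    fpr = fp / did_not_reoffend if did_not_reoffend else 0.0
    fnr = fn / did_reoffend if did_reoffend else 0.0
    return fpr, fnr

# Hypothetical records: (model said "high risk", person actually re-offended)
group_a = [(True, False), (True, False), (True, True), (False, False)]
group_b = [(False, True), (False, True), (True, True), (False, False)]

for name, records in [("group A", group_a), ("group B", group_b)]:
    preds = [p for p, _ in records]
    actual = [y for _, y in records]
    fpr, fnr = error_rates(preds, actual)
    print(f"{name}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```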

There is also the case of automated recruiting processes. A new team of interns and trainees is selected by an algorithm trained on data about people who were successful in previous generations, when human recruiters gave unfair opportunities to candidates of a certain race, gender, sexual orientation, and age range. In this case, the impact of bias amounts to the reinforcement of racism, sexism, homophobia, and/or ageism.

We can also refer to cases where opportunities for smaller businesses are diminished by recommendation systems. Such is the case when an online bookstore offers one of its frequent customers a new list of titles that other, similar customers have purchased. This list might discriminate against smaller publishers and lesser-known authors. It is unclear whether this kind of discrimination is as harmful as the gender- and race-related discrimination mentioned above. Yet the example of the online bookstore does serve as evidence against the argument that AI is merely as biased as society already is; it demonstrates that AI has been making bias even more pervasive than it was in the pre-AI era.

Some might wonder whether it is desirable, or even possible, to remove bias from the system entirely. Some even claim that straightforward measures, such as removing sensitive classes from the data, would be a naïve approach to debiasing, on the grounds that the accuracy of the data would be compromised by the absence of details such as sex, race, and so on.

Yet there are practices and methods that might contribute to countering and reducing bias. Some of them could be implemented on the technical side. That is, the AI system should operate in such a way as to allow immediate redesign once potential bias has been identified. Mechanisms need to be developed to identify the processes most likely to result in unfair discrimination and to enable their monitoring.
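
As a rough illustration of what such a monitoring mechanism might involve, the sketch below recomputes a simple demographic-parity gap over recent automated decisions and flags the process for human review when the gap exceeds a tolerance; the metric and the threshold are illustrative assumptions, not a prescribed standard.

```python
from collections import defaultdict

# Hypothetical monitoring hook: the metric (a demographic-parity gap) and the
# tolerance are illustrative choices, not the only "correct" ones.

def selection_rates(decisions):
    """decisions: iterable of (group, favourable_outcome) pairs -> {group: rate}."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        favourable[group] += 1 if ok else 0
    return {g: favourable[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in favourable-outcome rates between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

def review_needed(decisions, tolerance=0.2):
    """Flag the process for human review and possible redesign."""
    return parity_gap(decisions) > tolerance

# Toy stream of recent automated decisions: (group, was the outcome favourable?)
recent = [("group A", True), ("group A", True), ("group A", False),
          ("group B", False), ("group B", False), ("group B", True)]

print(round(parity_gap(recent), 2))  # 0.33 in this toy example
print(review_needed(recent))         # True -> escalate for audit and redesign
```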

However obvious the necessity of purely technical transformation is, what AI needs most in order to meet ethical expectations is an enhancement of the human/technical balance in its processes. That balance can be reached by a variety of human actions on AI. For instance, AI-guided processes could be accompanied by ethical quality control, by means of third-party auditing. In addition, diversity can be promoted in human-driven processes, particularly within software design teams and organizations.

Regardless of the intra-team diversity among AI designers and other technicians, it is hard to ignore one of the most necessary human changes in the professional world, namely, an increase in ethical awareness among software designers. Such a measure faces at least two considerable barriers. First, there is the political/cultural factor, as some countries do not perceive privacy to be as vital as others do. There is also a philosophical difficulty. Take the key concept of ‘fairness’, which turns out to be puzzling enough in itself. Contemporary professional philosophy has provided elaborate and influential accounts of fairness. Yet the mere fact that theoretical sophistication is still required in tackling fairness, combined with the sheer absence of consensus on which practices count as universally fair, can be taken as evidence of the difficulties facing this potential measure.

Despite the lack of a straightforward notion of fairness, however, promising ethical changes can be pursued on the basis of a distinction between discrimination in general and bias in particular, together with some level of pragmatism: even admitting that algorithms are, by their very nature, meant to discriminate between cases, we could filter out all those algorithms whose decisions are most likely to be interpreted as unfair by some stakeholders.
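
A pragmatic version of that filtering step might look like the following sketch, which keeps only candidate models whose fairness gap stays within a threshold agreed with stakeholders; the model names, numbers, and choice of metric are assumptions made purely for illustration.

```python
# Illustrative filter over candidate models; the names, numbers, and the choice
# of fairness metric are assumptions made for this sketch only.

def acceptable(candidates, max_gap=0.10):
    """candidates: list of (name, accuracy, fairness_gap) tuples.
    Keep only models within the stakeholder-agreed gap, most accurate first."""
    kept = [c for c in candidates if c[2] <= max_gap]
    return sorted(kept, key=lambda c: c[1], reverse=True)

models = [
    ("model-1", 0.91, 0.25),  # most accurate, but far above the agreed gap
    ("model-2", 0.88, 0.08),  # slightly less accurate, within the gap
    ("model-3", 0.85, 0.04),
]

for name, accuracy, gap in acceptable(models):
    print(f"{name}: accuracy={accuracy:.2f}, fairness gap={gap:.2f}")
# model-1 is filtered out despite its higher accuracy
```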

Any combination of the above measures, technical and human, would help de-bias automated systems and increase control over AI. Although bias has always been a problem for human societies, AI bias might have a wider impact than any form of pre-technological bias and may therefore be much harder to control.

Could humanity lose control of the ‘global AI’ system? In a certain way, the control may have been lost already. It is far from clear, to say the least, whether researchers, technicians, lawyers, and other professionals have had access to all cases of biased decisions based on AI, even though we can be hopeful that a substantial number of such errors have at least been detected and can be fixed.

That said, the lack of control will become more pronounced if humans fail to address bias in a timely fashion - a goal that may prove unattainable if mechanisms for detecting and fixing bias are not built as quickly and effectively as the mechanisms that produce or enable biased decisions in the first place.

Yet the lack of control will be even more likely and extreme if we adopt a conservative view according to which bias in AI simply reflects bias already embedded in society, so that whatever is to be done to counter bias should be pursued in general education and other domains not necessarily overlapping with AI design. Still, there is no reason to doubt that these two purposes, ethical development in general and ethical AI in particular, could and should be pursued simultaneously.

"Could humanity lose control of the ‘global AI’ system? In a certain way, the control may have been lost already."

João Azevedo Abreu has held scholarly positions in the UK, China, and his home country, Brazil. He’s passionate about making philosophy relevant to the corporate world and other non-academic audiences.

João Azevedo Abreu

"To meet ethical expectations, AI needs an enhancement of the human/technical balance in its processes."

Philosophy Instructor and Researcher

