
'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.

ABOUT 60 LEADERS

'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence – from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters, each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.

DOWNLOAD (PDF 256 pages)

'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite for innovation. Discover ways that Innovation can help humanity solve big problems like climate change.

DOWNLOAD (PDF 286 pages)

To regulate or not? How should governments react to the AI revolution?

Jaroslav Bláha


The current media hype suggests that AI has some mystical qualities which require special treatment. Politicians seize this opportunity to invent new regulations without having a deep understanding of the technology. In fact, AI software is just like any other software: it has source code that can be inspected, it has inputs and outputs that can be scrutinized, and computing theory tells us that such software can be executed on a primitive Turing Machine – just like any other software that people have used for decades. The difference from ‘classical’ software is that the ratio between data-driven and rule-driven (algorithmic) behaviour shifts: AI simply uses much more data and far fewer explicitly encoded rules, which makes the behaviour less transparent, although no less deterministic.
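To make the distinction concrete, here is a deliberately tiny sketch (the functions and weights are invented for illustration, not taken from any real product): both variants are ordinary, deterministic code and differ only in where their decision logic comes from.

```python
# Rule-driven: behaviour is fixed by explicitly encoded rules.
def is_spam_by_rules(subject: str) -> bool:
    return "free money" in subject.lower() or subject.isupper()

# Data-driven: behaviour is fixed by parameters fitted to data.
# These weights stand in for values a training run would produce.
WEIGHTS = {"free": 2.1, "money": 1.7, "meeting": -1.3}
BIAS = -1.5

def is_spam_by_data(subject: str) -> bool:
    score = BIAS + sum(WEIGHTS.get(word, 0.0) for word in subject.lower().split())
    # The same kind of executable, deterministic code – just parameterised by data.
    return score > 0.0
```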

It is an engineering prerequisite for any kind of software product to be designed and eventually tested and verified against its allocated requirements. For safety-critical or sensitive software, plenty of standards mandate the necessary processes and level of scrutiny – examples are DO-178C for aircraft systems, ISO 13485 for medical systems, and EN 50128 for railway software.
Every piece of (non-quantum) software and data is deterministic and can be inspected down to its bits – e.g., the weights of every neuron in an Artificial Neural Network. Unfortunately, modern software – AI or not – is typically too large to allow easy understanding and verification. Instead of additional regulation, which constrains but does not facilitate improvement, investment is needed in advanced quality-assurance techniques and tools for large-scale software and data systems.
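As a minimal sketch of this point – a toy network built with NumPy, all names illustrative – every weight is an ordinary value whose bit pattern can be printed, and repeated runs on the same input are bit-for-bit identical:

```python
import numpy as np

# A toy two-layer network: every parameter of every "neuron" is an
# inspectable value, and the forward pass is deterministic.
rng = np.random.default_rng(seed=42)           # fixed seed -> reproducible weights
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x: np.ndarray) -> np.ndarray:
    hidden = np.maximum(x @ W1 + b1, 0.0)      # ReLU hidden layer
    return hidden @ W2 + b2

x = rng.normal(size=(1, 4))
assert np.array_equal(forward(x), forward(x))  # bit-for-bit identical outputs

# Any individual weight can be inspected down to its bits:
print(W1[0, 0], W1[0, 0].tobytes().hex())
```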

Transparency or high-level explainability are not defining factors. Very few people (if any) have full transparency into Microsoft’s Windows operating system or a deep understanding of their car’s control software; still, millions rely on such software in medical devices and on highways. In particular, the certification of medical devices is based on their medical evidence, i.e. a trade-off between desired performance and the expected risk of causing harm. It is significant and reasonable that the EU Medical Device Directive does not distinguish between types of software; neither does it demand explainability.

Yet, when reading the news about failed AI initiatives, something is obviously lacking. Candidate areas for improving the quality of AI systems – and, by extension, their public perception – are:

- Systematic investment into careful data acquisition and management. More than ever, the decades-old adage of ‘Junk in, junk out’ applies.

- Proper definition of requirements for the expected performance of AI systems including consideration of edge cases. This might help customers to distinguish between the few mature products and the countless prototypes.

- System engineering education for AI developers to remove the dangerous assumption that the naive use of AI frameworks is a substitute for diligent management of requirements, implementation, and testing.

- Since the 1960s, every new software development paradigm has created an associated need for appropriate test, debugging, and verification tools. AI software is the latest paradigm shift, and further development of advanced tool suites (e.g., for inspecting Neural Networks’ behaviour with methods like LIME; see the sketch after this list) will improve product quality.

- And perhaps most importantly, honesty from AI developers when explaining to their users what their AI can and cannot do.
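As one hedged illustration of such tooling – assuming the open-source `lime` package and scikit-learn, with a stock dataset standing in for a real product – a per-prediction explanation looks roughly like this:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Which features pushed this single prediction up or down?
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```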

Consequently, since the existing portfolio of regulations for safety-critical or sensitive areas already applies, AI does not need more or different regulation than classical software. What AI does require is the same diligent definition of functional requirements and verification of how well those requirements are satisfied before a product is released to its users.
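What such release verification could look like, as a minimal sketch (the requirement, threshold, and names below are assumptions for illustration, not a prescribed method):

```python
import numpy as np

# Illustrative requirement "REQ-01": accuracy >= 0.95 on the held-out set,
# and the same bar on an explicitly enumerated edge-case set.
MIN_ACCURACY = 0.95

def verify_before_release(model, holdout_x, holdout_y, edge_x, edge_y) -> bool:
    overall = np.mean(model.predict(holdout_x) == holdout_y)
    edge = np.mean(model.predict(edge_x) == edge_y)
    print(f"holdout accuracy: {overall:.3f}, edge-case accuracy: {edge:.3f}")
    # Release only if BOTH the general population and the edge cases pass.
    return bool(overall >= MIN_ACCURACY and edge >= MIN_ACCURACY)
```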

Would that help to make AI ethical? No – for the simple reason that we as a human society cannot even agree on a common ethical framework or its specific rules. The currently raging debate about Covid vaccination mandates is a prime example at a basic level. Interpretations of ethics, morals, fairness, equality, and similar concepts are very personal, cultural, and regional. Reality shows that such concepts cannot be strictly regulated beyond a very local context. As we cannot even define the requirements for universal ethical behaviour for ourselves, how could we do it for software? Again – AI or not. Any attempt at regulation will remain either uselessly abstract or applicable only to very narrow use cases within the mindset of the regulating entity. Every piece of software will always reflect the ethical attitude of its developer.

Let’s rephrase this positively: once our society is able to define common, binding, generally accepted, and verifiable ethical rules, it will be easy to implement them in software. Unfortunately, looking at the draft EU regulation for ‘Harmonized Rules on AI’ and excerpts like the one from its Article 5 – “… prohibited: … an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes … psychological harm” – we have proof that we are still far away.

"Every piece of software will always reflect the ethical attitude of its developer."

Jaroslav Bláha is CIO/CTO of trivago. He has 30 years of experience in global innovation leadership, amongst others for NATO, ThalesRaytheonSystems, DB Schenker, Swissgrid, Solera, and PAYBACK. He developed his first AI in 1995; his startup CellmatiQ provides AI-based medical image analysis.


