
'60 Leaders' is an initiative that brings together diverse views from global thought leaders on a series of topics – from innovation and artificial intelligence to social media and democracy. It is created and distributed on principles of open collaboration and knowledge sharing. Created by many, offered to all.

ABOUT 60 LEADERS

'60 Leaders on Artificial Intelligence' brings together unique insights on the topic of Artificial Intelligence – from the latest technical advances to ethical concerns and risks for humanity. The book is organized into 17 chapters – each addressing one question through multiple answers reflecting a variety of backgrounds and standpoints. Learn how AI is changing how businesses operate, how products are built, and how companies should adapt to the new reality. Understand the risks of AI and what should be done to protect individuals and humanity.


'60 Leaders on Innovation' is the book that brings together unique insights and ‘practical wisdom’ on innovation. The book is organized into 22 chapters, each presenting one question and multiple answers from 60 global leaders. Learn how innovative companies operate and how to adopt effective innovation strategies. Understand how innovation and experimentation methods blend with agile product development. Get insights from the experts on the role of the C-Suite in innovation. Discover ways that innovation can help humanity solve big problems like climate change.


To regulate or not? How should governments react to the AI revolution?

Dr. Lance Eliot


A vexing question that has sparked heated debate amongst lawmakers and society at large is whether there ought to be regulations overseeing and ultimately governing the advent of AI. There are reputable and valid considerations on both sides of this thorny issue. In short, some say that yes, we must indeed pass laws that will keep AI in check, while the classic retort is that the answer is no: regulating AI will lamentably kill the golden goose and undermine hard-won advances and the vital benefits accruing from the ongoing adoption of AI-powered systems.

I tend to categorize the Yes and No answers into four flavors, two for the No replies and two for the Yes replies:

1) Flat-Out No
2) Qualified No
3) Partial Yes
4) Full Blown Yes

Let’s explore each of those positions. Note that I’ve numbered the items for ease of reference only; the numbering does not denote prioritization or any other sequence-related significance.

1) Flat-Out No

The flat-out “No” camp will usually insist that any regulation of AI is entirely shortsighted and abundantly unnecessary. They point out that we do not have sentient AI today and that there is little chance we will soon have AI that reaches human or superhuman levels of cognition. As such, qualms about AI now being, or soon becoming, a worldwide existential risk are demonstrably overblown.

By removing the oft-used sentience “scare tactic” as the seemingly largest basis for seeking AI regulations, the flat-out “No” perspective argues that we can comfortably exist without any new regulations regarding AI. We should allow the marketplace to organically determine what happens with AI. Companies that craft beneficial AI will presumably flourish as the market reacts favorably to their AI-infused wares. Firms that devise untoward AI will find themselves shunned by markets and inevitably fall by the wayside.

Let the free market decide the fate of AI.

2) Qualified No

The qualified “No” pundits are quick to emphasize that existing laws are already sufficient to cope with the emergence of AI. There is no need to create new regulations targeted specifically at AI. New efforts to make additional laws governing AI are thus purely knee-jerk regulations, pushed forward solely so that lawmakers appear to be valiantly rescuing us from an AI apocalypse.

Furthermore, the problem with new laws that cover AI is that we will end up in a legal morass. How are we to suitably define AI? A new law might overstate what counts as AI, suddenly confronting all manner of automated systems with onerous new requirements. The other side of the coin is that a particular law might define AI so narrowly that just about any clever AI developer can avert it, leaving the AI-confronting law summarily off-target, useless, and toothless.

3) Partial Yes

In the partial “Yes” viewpoint, our existing laws are regrettably insufficient for overseeing AI. This perspective acknowledges that many of our regulations already cover much of what AI does or might do. The crux of the issue is that there are gaps between those existing laws and what we ultimately need to fully encompass AI.

We need to create new laws that are smartly written. Do not inadvertently replicate existing laws that can already deal with many AI-related legal concerns; undue overlaps will unnecessarily clog up our courts and lead to confusion over which laws ought to apply when AI issues are adjudicated. Focus on the areas of our laws that do not now cover AI. Plug the holes. That should be the North Star when devising new laws to regulate AI.

4) Full Blown Yes

The full-blown “Yes” segment exhorts that we must put in place new laws aimed specifically at AI. In contrast to the partial Yes camp, this perspective argues that relying on existing laws and only establishing additional gap-filling laws will produce an ugly and unwieldy patchwork of laws about AI. They also forewarn that clever attorneys will try to wiggle their clients out of AI woes by finding inconsistencies between the presumably allied existing laws and the newly enacted ones.

You have to either go the whole hog or not get into the AI regulation gambit at all, they would assert. Start clean. Build up a new set of laws exclusively aimed at AI. Make sure it is coherent and comprehensive. From then on, everyone will know what is expected. No excuses will be entertained as to which law ought to cover this AI or that AI; it all comes under one grand AI-focused legal umbrella.

Hard And Soft Laws

As hopefully was apparent from my preceding overview of the four flavors of answers, it is possible to be in any of the four camps and consider yourself right on all counts. At this time, each of the distinct responses has merits. Likewise, each has downsides. We are faced with a judgment call as to which path to undertake.

The odds are that we will see varying actions at all levels of lawmaking. Some countries will decide to enact AI laws, while others will not. One consideration is whether putting in place regulations about AI might hamper the competitive posture of a nation when it comes to embracing AI advances. Imagine the blowback toward top leaders who opt to establish AI laws, only for those laws to slow down their national AI efforts and put their country at a disadvantage to nations that allow a freer rein. In the race to be the best and topmost adopter of AI, regulations are often denigrated as putting a yoke on AI progress.

Of course, there is a counterargument to that same consideration. If a nation lets AI be wildly brought into use, lawlessly so, the country might be shooting itself in the foot. The AI could go awry, and there might be few regulatory means to stop the onslaught. By the time AI laws are belatedly deemed necessary, the horse is already out of the barn. A nation that instead fostered a sensible and prudent set of AI laws might be the winner in the AI race, having set a level playing field and forged ahead safely and beneficially with AI.

The proverbial tortoise versus the hare.

We need to also briefly consider the role of so-called hard laws versus soft laws.

Hard laws are laws that we formally put on the books and that our judicial system then relies upon. Soft laws are generally portrayed as non-laws that we broadly agree are nearly law-like and that provide legal guidance accordingly. For example, there has been a great deal of attention to identifying AI Ethics principles and promulgating them to strive toward Ethical AI.

Those AI Ethics precepts are not usually codified in the law. They are nonetheless usable for various sound purposes. They are soft laws. Firms can decide to abide by AI Ethics principles, doing so on a roughly voluntary basis.

That being said, firms are bound to find themselves faced with potential legal vulnerabilities both by doing so and by not doing so (it can be a daunting choice). A legal case could be made that a firm knew about and claimed to adhere to the AI Ethics it willingly adopted, yet violated its own self-determined obligations. To some degree, this makes the adoption of AI Ethics a bit of a conundrum. If a firm adopts a set of precepts and falls short, it faces legal wrath for not doing what it said it would do. In the same breath, if a firm declares nothing about adopting AI Ethics, it can be argued to have been out of touch and insufficiently studious in its AI efforts.

The bottom line usually consists of being conscientious about AI and signing up for some reasonable set of AI Ethics (there are plenty to go around). The United Nations, via UNESCO, has provided a useful foundational set of AI Ethics, which nearly two hundred countries have pledged to accept. Firms that are serious about AI and want to protect themselves from, or at least mitigate, their AI legal risks would be wise to customize a set of AI Ethics for their organization. This requires doing more than just putting words onto paper. Companies need to talk the talk and walk the walk when it comes to realistically embracing AI Ethics precepts.

Overall, the legal side of AI is still in its infancy and we have a long way to go. For those of you interested in the legal and political dynamics of AI, there is much work yet to be done, and the growth of AI assuredly guarantees an abundant need for AI-savvy lawmakers, policymakers, and AI-versed legal talent in the years ahead.

"The odds are that we will see varying actions at all levels of lawmaking. Some countries will decide to enact AI laws, while others will not."

Dr. Lance Eliot is the founder and CEO of Techbrium Inc. and a Stanford Fellow at the Stanford University Center for Legal Informatics, a program run jointly by the Stanford Law School and the Stanford Computer Science Department. He is globally known for his expertise in AI & Law and in AI Ethics. His Forbes column has amassed over 6.4 million views, and his other writings and numerous books are highly acclaimed. Having served as a top executive at a major venture capital firm, Dr. Eliot is a highly successful entrepreneur. He was previously a professor at the University of Southern California (USC), where he headed a pioneering AI lab.
