Artificial intelligence and regulations for machines

February 26, 2018
Article

by Dan Matthews
illustrations by Tavis Coburn

With the breakneck pace of experimentation happening in the field, artificial intelligence is fast becoming something of a Pandora's box. Though the technology is in its infancy, examples are already emerging that suggest the need for regulation – and sooner rather than later.

Revolution in warfare

In August 2017, 116 experts from the fields of artificial intelligence and robotics wrote a now celebrated and frequently quoted open letter to the United Nations. In it, they warned of the prospect of autonomous weapons systems developed to identify targets and use lethal force without human intervention. Signatories included the great and good of AI, among them Tesla boss Elon Musk and Mustafa Suleyman, head of applied AI at Google DeepMind. The letter anticipates a "third revolution in warfare" that could change conflict to the degree that gunpowder did. It states that autonomous weapons "will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend." This, coupled with the risk of such systems being hacked or falling into the hands of despots and terrorists, provides grounds for an early global ban, the signatories argue.

If "intelligent" weapons sound like science fiction, they are not. Around the same time as the technologists penned their letter, Kalashnikov Group – famous for its eponymous machine gun – unveiled a formidable-looking AI cannon. The company says its "fully automated combat module" can spot and kill a target without a human finger on the trigger. This raises complex questions, both ethical and practical, about what limits should be placed on AI. Should robots be trusted to make decisions? Would their choices be better than human ones? Even if democracies curtail development, will authoritarian regimes follow suit?

Whatever the answers, they need to address not just military scenarios, but every other sphere in which AI could impact society: health care, transport, government, law and medicine, to name only a handful of areas where the technology is already being developed. And the answers need to come sooner rather than later.

Second law

Three-quarters of a century ago, science fiction author Isaac Asimov provided a useful starting point for the governance of AI with his Three Laws of Robotics: Robots may not injure a human being, must obey orders (unless they go against the First Law) and must protect themselves (unless to do so conflicts with the First or Second Law).

But even these simple rules will encounter difficulties when applied in the real world, according to Greg Benson, professor of computer science at the University of San Francisco. Take autonomous vehicles. "A self-driving car might have to decide between potentially harming its passengers or a greater number of pedestrians. Should the car protect the passengers at all costs, or try to minimize the total harm to humans involved, even if that means injuring people in the car?" He points out that if people knew autonomous vehicles were coded to weigh their safety no more heavily than that of other road users, they would probably not buy one.
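
How much hinges on that coding choice can be shown with a deliberately simplified sketch. The maneuvers, harm estimates and policy names below are invented for illustration and are not any manufacturer's actual decision logic; the point is only that the software has to rank outcomes one way or the other.

```python
# Hypothetical illustration of the dilemma Benson describes -- not real vendor logic.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    passenger_harm: float   # estimated harm to people inside the car
    pedestrian_harm: float  # estimated harm to people outside the car

options = [
    Maneuver("brake and stay in lane", passenger_harm=0.1, pedestrian_harm=0.8),
    Maneuver("swerve into barrier",    passenger_harm=0.6, pedestrian_harm=0.0),
]

def protect_passengers(options):
    """Policy A: protect the occupants at all costs."""
    return min(options, key=lambda m: m.passenger_harm)

def minimize_total_harm(options):
    """Policy B: weigh everyone equally and minimize overall harm."""
    return min(options, key=lambda m: m.passenger_harm + m.pedestrian_harm)

print(protect_passengers(options).name)   # -> brake and stay in lane
print(minimize_total_harm(options).name)  # -> swerve into barrier
```

The two policies disagree on the same input, and that disagreement cannot be hidden from whoever writes, certifies or buys the system.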

Who is liable for accidents? Autonomous driving requires a new definition of traffic law.

Two main challenges now face policymakers and non-governmental organizations wrestling with governance. One is AI's moral maze and the infinite scenarios that are still to be addressed; the other is the sheer pace of technological progress. "Computer technology has advanced at such a rapid pace that government oversight has not been able to keep up," says Benson. "To build a bridge you must be a licensed mechanical engineer; however, software developers require no such license to work on many types of systems that can affect human life."

Fine-tuning

Some experts believe regulation needs fine-tuning, but not necessarily wholesale change: first, because blocking AI would stifle innovation, and second, because existing laws are flexible enough to cover the foreseeable future. If a person fires a gun and it injures someone else, that person is culpable under law – not the gun or its manufacturer. The same applies to a line of code. As Babak Hodjat, the inventor behind the Apple Siri technology and CEO of Sentient Technologies, explains: "The answer to the question whether a robot is capable of committing a crime can only be 'yes' if we assume a 'self' for an AI system. If we do not allow this, and such an allowance is completely up to us humans, then robots cannot ever commit crimes." If we can say for certain that humans will always be responsible for the actions of robots, then existing laws can be adapted to cover new threats. But if it is possible to breathe life into robots, equipping them with emotions and morals, the game changes and regulators will have to work a lot harder.

Most AI works via structured algorithms, which provide systems with a defined course of action in response to a set of variables. But one branch of AI, neural networks, has been developed to mimic a biological brain and works without task-specific programming. "When software is written, programmers are usually able to retrace its functioning, which is not the case with neural networks," explains Jean-Philippe Rennard, a professor at the Grenoble Ecole de Management whose work focuses on biologically inspired computing and its application in economics and management. "We do not really understand how they reach a result. This loss of comprehension – and of control – must persuade us to be prudent. How could we control tools whose functioning we only partly understand? Even if the threat is not imminent, it will certainly exist in the future."
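
A minimal, hypothetical contrast makes Rennard's point concrete. The credit-style decision, thresholds and weight values below are invented for the example: the hand-written rule can be read and audited line by line, while the toy network's answer emerges from arithmetic over learned numbers that do not explain themselves.

```python
import numpy as np

# A conventional, retraceable rule: every step of the decision can be audited.
def rule_based_credit(income, existing_debt):
    """Approve if income is high enough and debt is low enough -- fully traceable."""
    return income > 40_000 and existing_debt < 10_000

# The same kind of decision made by a toy neural network. The weights are
# placeholders standing in for values a training process would have produced;
# a real network has millions of them, and its "reasoning" lives only in the math.
W1 = np.array([[0.8, -1.2], [-0.5, 0.9]])
b1 = np.array([0.1, -0.3])
W2 = np.array([1.5, -2.0])
b2 = -0.2

def neural_credit(income, existing_debt):
    x = np.array([income / 100_000, existing_debt / 100_000])  # scaled inputs
    hidden = np.tanh(x @ W1 + b1)
    score = hidden @ W2 + b2
    return score > 0  # why this boundary? Only the training data "knows".

print(rule_based_credit(55_000, 5_000))  # True, and we can say exactly why
print(neural_credit(55_000, 5_000))      # an answer, but not an explanation
```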

"Whether a robot is capable of committing a crime can only be 'yes' if we assume a 'self' for an AI system."

Babak Hodjat

Siri inventor and CEO
Sentient Technologies

Competitive advantage

There is evidence that action is required even at a more prosaic level. Big data and machine learning – the latter a subset of AI – were employed during recent elections in the US and UK to gauge voter sentiment via social media and to influence voting patterns on a grand scale. Does the systematic nature of this emerging tactic constitute voter manipulation, especially when combined with so-called "fake news"? The answer to that question has profound implications for modern democracies, especially if foreign powers can intervene.

"AI already provides a significant competitive advantage in its ability to understand customers in the business world," says Scott Zoldi, chief analytics officer at analytics software firm FICO. "In the political arena, it has an ability to sway public opinion or garner support for a candidate and their agenda. Each of us leaves huge digital footprints, which allows algorithms to classify us into archetypes. This amount of data is incredibly valuable to understand how to best reach, persuade and convert voters and the public."

New opportunities

Yet another dimension is the role human workers will play in an AI world. Experts are split on whether this will spell mass unemployment and perhaps the need for a universal basic income, or augment what people already do and provide new opportunities in more creative fields. Daniel Kroening, CEO of Diffblue and professor of computer science at Oxford University, says a universal wage won't happen: "It is a shibboleth of technology that every technology revolution promised that humans would be freed from the burdens of employment. The more advanced the society, the harder everyone seems to work." Rennard, however, argues that a future universal wage is "a self-evident fact, which is partially rejected because of social inertia."

Directing AI's growth

Businesses, not policymakers, are taking the lead in directing AI's growth – and they are making efforts to show responsibility. Google, Microsoft, Facebook and Amazon created the Partnership on AI to Benefit People and Society in September 2016 to guide innovation toward positive outcomes in climate change, inequality, health and education. Yet a string of incidents hints that solid regulation could be required soon. In 2016, a Tesla driver was killed in a crash that occurred while the car's semi-autonomous Autopilot system was engaged; the same year, Microsoft released an AI chatbot on Twitter that quickly began spouting racist and violent language. And Baidu found itself entangled with the law after its CEO tested a driverless car on public roads.

Many experts agree action is needed and that regulators are playing catch-up. In July 2017, Elon Musk told a gathering of the US National Governors Association: "By the time we are reactive in AI regulation, it's too late... Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something that represented a fundamental risk to the existence of civilization." Yet, questions remain over who should regulate, what must be covered and how the mechanics of global governance will work.

One year in AI: the challenge for regulators
January 2017

Researchers at the Alan Turing Institute call for the creation of a neutral regulatory body to monitor the uses of AI by companies and investigate when people feel they have been treated unfairly by an AI decision-making process.

March 2017

Elon Musk announces his backing of Neuralink, a startup creating devices to implant in the human brain to help people keep pace with advancements in AI. The enhancements could eventually improve brain power, including enhanced memory, and could aid interactions with software.

June 2017

Microsoft creates a campus for new AI firms with a capacity for 1,000 startups. Facebook, Apple and Amazon already have similar projects in place. The campus will offer startups mentoring, research collaborations and potential investment to help speed up the implementation of AI concepts.

July 2017

Google Launchpad creates an AI studio to develop startups in machine intelligence. In November the accelerator announces its first tranche of four businesses, all from the health care and biotechnology space.

October 2017

In a further sign that tech companies want to regulate AI before governments do, Google’s DeepMind announces the creation of a new internal ethics group. The group brings together academics and NGOs to “help technologists put ethics into practice.”

November 2017

Researchers at MIT publish a paper declaring they had managed to confuse Google’s AI into classifying a 3D printout of a turtle as a rifle. The finding raises concerns about potential AI security issues.

53 companies and institutions are involved in the Partnership on AI to Benefit People and Society. They range from online and tech giants such as eBay and Sony to Amnesty International and the ACLU.

Harry Armstrong, head of technology futures at the innovation foundation Nesta, says the only substantial regulation relating to AI is enshrined within the EU's General Data Protection Regulation (GDPR), due for implementation in May 2018. It stipulates that someone who is subject to a machine-made decision also has the right to an explanation by a human, but Armstrong argues the language used within the GDPR is open to interpretation. Nesta has proposed a Machine Intelligence Commission to build a better understanding of AI's impacts. "Its work would look at key sectors like transport, employment, health and finance to make recommendations to existing regulatory bodies and government departments about potential risks and abuses," says Armstrong.

Governance is never an easy thing to get right, but while regulators at the local, national and global levels chew over the societal ramifications of a world driven by AI, technologists are racing forward at breakneck speed. Kroening sums up the challenge ahead. He urges lawmakers to "hurry up and get more people thinking about the implications, which are almost endless."

Further Reading

Think:Act Edition: AI think, therefore AI am

What exactly do people mean when they talk about AI in 2018? Where do I start if I want to embrace AI in my business? Get your questions answered in our Think:Act magazine on artificial intelligence.
