UABS KNOWLEDGE

COMMERCIAL LAW

The dark side of artificial intelligence

13 November 2017

Automated decision-making can be as manipulative and biased as a human, and it urgently needs regulating, warns Benjamin Liu.

Dubbed the "new electricity", artificial intelligence (AI) is transforming the way we learn, work, and play. Lawyer Benjamin Liu warns that as it permeates our lives, and begins to make decisions that affect us personally, AI is throwing up a tangled web of legal issues.

Take Uber, for instance. Unlike traditional taxi fares, Uber fares are set by AI – or, more accurately, by machine-learning algorithms, says Liu, a Lecturer in the Business School's Department of Commercial Law.

"The Uber fare takes into account not only the travel time and distance but also the customer demand at the relevant time in that area. For example, if you are travelling from a wealthy neighbourhood your fare is likely to be higher than for someone travelling from a poorer part of the city because the computer 'knows' you can afford it," he says.

Paying a few extra dollars for a ride is one thing. But AI is also being used to make decisions in areas that seriously affect people's lives, such as credit scores, recruiting and promotion, medical care, crime prevention, and even criminal sentencing.

While the benefits of such automated decision-making are obvious, it suffers from two serious problems, says Liu.

The first is non-transparency. Just as Google does not disclose how it ranks search results, AI system designers do not reveal what input data their systems rely on or which learning algorithms they use. The reason is simple: such processes are considered trade secrets.

"A 2016 study in the United States showed that 'risk scores' – scores given by a computer program to predict the likelihood of defendants committing future crimes – were systematically biased against black people," says Liu.

"However, the program designer would not publicly disclose the calculations, arguing that they were proprietary. As a result, it is impossible for the risk scores to be legally challenged."

The second difficulty with automated decision-making goes deeper into how AI works. Many advanced AI applications use 'neural networks' – machine-learning models loosely modelled on the structure of the human brain. While a neural network can produce accurate results, the way it arrives at them is often impossible to explain in terms of human logic, says Liu. This is commonly referred to as the 'black-box' problem.
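
A minimal sketch, with invented numbers, hints at why. Even in a tiny network the 'knowledge' is just arrays of learned weights, and no individual weight corresponds to a rule a person could read or contest.

    # A minimal sketch of the 'black-box' problem. The weights are invented;
    # a real model would learn millions of them from training data.
    import numpy as np

    def relu(x):
        return np.maximum(0, x)

    # Hypothetical applicant features: [income, age, years_at_address]
    applicant = np.array([52_000.0, 34.0, 2.0])

    # 'Learned' parameters of a tiny two-layer network (invented values).
    W1 = np.array([[ 0.00003, -0.021,  0.14],
                   [-0.00001,  0.034, -0.09]])
    b1 = np.array([0.1, -0.2])
    W2 = np.array([0.8, -1.1])
    b2 = 0.05

    hidden = relu(W1 @ applicant + b1)
    score = W2 @ hidden + b2
    print(score)  # a single 'risk score' with no human-readable rationale

    # Unlike a rule such as "decline if income is under $30,000", none of
    # these numbers maps to a reason the person scored could challenge.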

In response to such issues, overseas regulators have begun to act, says Liu. The European Union has passed the General Data Protection Regulation (GDPR), which will come into force in May 2018. One of its key features is a right to explanation.

"In short, if a person is being subjected to automated decision-making, that person has the right to request 'meaningful information about the logic involved'. And, individuals have the right to opt out of automated decision-making in a wide range of situations."

The GDPR will have important implications for this country, says Liu. To the extent that a New Zealand company controls or processes personal information of EU residents, it will need to comply with the GDPR, even if it doesn't have a physical presence in the EU.

Though the use of AI for such decision-making is still rare in New Zealand, it is expanding: one credit-score company already gives customers free access to check their credit scores, and it is expected that robo-advisers will soon be recommending the most suitable KiwiSaver scheme or mortgage offering to customers.

Without proper oversight, an AI can be as manipulative and biased as a human, says Liu. Therefore, policymakers, lawyers, and market participants need to start thinking about a regulatory framework for AI decision-making.

"Should we set up an AI watchdog to ensure that AI applications are being used in a fair way? Should each person have the right to an explanation? The answer to this last question seems, at least to me, clear."

 

Benjamin Liu

Benjamin Liu is a Lecturer in the University of Auckland Business School's Department of Commercial Law.

b.liu@auckland.ac.nz
