
AI and ethics: The debate that needs to be had

As with anything, frameworks and boundaries need to be set -- and artificial intelligence should be no different.
Written by Aimee Chanthadavong, Contributor

Whether we know it or not, artificial intelligence (AI) is already woven into everyday life. It's present in the way social media feeds are organised; the way predictive searches show up on Google; and how music services such as Spotify make song suggestions.

The technology is also helping transform the way enterprises do business.

Commonwealth Bank of Australia, for instance, has applied AI to analyse 200 billion data points to free up more time so its customer service officers can focus on doing exactly what their title suggests: servicing customers. As a result, the bank has seen a 400% uplift in customer engagement.

IBM is using the technology to help keep Australia's iconic beaches from washing away. Its capabilities allow scientists to put their time towards addressing coastal erosion, rather than towards mapping it, which is very time-consuming.

According to Data61 principal scientist in strategy and foresight Stefan Hajkowicz, AI creates a "window of problem-solving capabilities".

"AI is going to be able to save many people from cancer, it will improve mental health by AI-enabled counselling session, it will help reduce road accidents -- there are huge benefits in the future to your life due to AI," he said.

"Humanity in Australia desperately needs it. AI is going to be critical in solving dilemmas in healthcare, for instance, where healthcare expenditure is growing at unsustainable rates. AI is going to be crucial technology that is going to help pretty much every sector in our society."

A PwC report released in 2017 predicted that AI will boost global GDP by 14% -- or $15.7 trillion -- by 2030.

What not to do

Head of the School of Philosophy at the Australian National University (ANU) Seth Lazar believes that given the impact AI will have, there's scope to make the technology better.

"There are so many ways in which we could use AI for social good, but over the last year or two it has become apparent that there are potentially a lot of unintended consequences -- not to mention in which AI could potentially be used for bad reasons -- so there's huge demand and interest for developing AI with our values," he said.

One example of AI going wrong can be pinpointed to the United States, where AI algorithms were used to provide recommendations on prison sentences. A report from ProPublica concluded that the AI system was biased against black defendants, as it consistently recommended longer sentences for them than for white defendants convicted of the same crime.

See also: Artificial intelligence ethics policy (TechRepublic)

The United Nations Educational, Scientific, and Cultural Organisation (UNESCO) recently accused Apple's Siri, Microsoft's Cortana, and Amazon's Alexa, along with other female-voiced digital assistants, of reinforcing "commonly held gender biases".

"Because the speech of most voice assistants is female, it sends a signal that women are obliging, docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command like 'hey' or 'OK'. The assistant holds no power of agency beyond what the commander asks of it," the I'd Blush If I Could report outlined.

"It honours commands and responds to queries regardless of their tone or hostility. In many communities, this reinforces commonly held gender biases that women are subservient and tolerant of poor treatment."

Another example would have to be Microsoft's AI bot Tay, which was originally designed to interact with people online through casual and playful conversation, but ended up hoovering up good, bad, and ugly interactions. Less than 16 hours after launch, Tay had turned into a brazen anti-Semite, stating: "The Nazis were right".

Professor of AI at the University of New South Wales Toby Walsh said: "There are plentiful examples of how our algorithms can inherit the bias that exists in the society we have if we're not careful".

He noted, however, that if AI is carefully programmed to ask the right questions and designed by diverse teams, "it will make much more just decisions".

For both Lazar and Hajkowicz, the greatest concern about existing AI is the people who build these systems, which they say largely reflect the values of a Silicon Valley workforce drawn from elitist backgrounds.

"One of the key concerns that is often raised is that AI is being built by a lot of young white males in the 20-30 age bracket, because that's the AI workforce," Hajkowicz said.

"I think it immediately means they are building AI that is bias, but I think it's worth a look into how that is happening, and whether they are creating AI that is genuinely reflective of the diverse world."

Building ethical AI with diversity

Part of the solution to overcoming the systemic biases built into existing AI systems, according to Lazar, is to have open conversations about ethics -- with input from diverse views in terms of culture, gender, age, and socio-economic background -- and about how those ethics could be applied to AI.

"What we need to do is figure out how to develop systems that incorporate democratic values and we need to start the discussion within Australian society about what we want those values to be," he said.

"It's all about constant review and revision and recognising we do evolve as a society and hopefully we evolve to becoming morally better."

A research project underway at ANU, led by Lazar, is focused on designing Australian values into AI systems. Part of it will also involve building a design framework for moral machine intelligence that can be widely deployed.

"We have to decide as a country is whether in the end, we want to be massive importers of technology, given that when you're importing technologies, you'll also be importing the values," he said.

Hajkowicz warned that if Australia fails to engage in the global AI ecosystem, the country risks being exposed to ethics that may not be compatible with its own.

 "There are huge differences in the way countries approach AI. Some countries are using facial recognition to track the movements of people, for example," he said.

"On the other hand, it's much more limited and there's much more caution around how much it enters into somebody's personal life.

"I think Australia needs to think about what kind of AI future it wants. It's an open discussion and an AI ethics framework is the start; people need to drive us into the AI future that we want."

But of course, like anything, the approach needs to be carefully considered. Otherwise, there's the potential to make mistakes, much like the one Stanford University made when it launched its Institute for Human-Centered Artificial Intelligence.

The goal for the institute was for a diverse group of people to have conversations about AI's impact and potential.

"Now is our opportunity to shape that future by putting humanists and social scientists alongside people who are developing artificial intelligence," Stanford President Marc Tessier-Lavigne said.

Except that of the 121 faculty members initially announced, the majority were white and male.

Setting ethics from the inside

Technology companies are also making a concerted effort to ensure that the AI systems they build, and the data fed into them, are ethical.

Last year, Google set out its AI principles to ensure that all AI applications it builds meet seven key objectives: It is socially beneficial; avoids creating or reinforcing unfair bias; is built and tested for safety; is accountable to people; incorporates privacy design principles; upholds high standards of scientific excellence; and is made available for uses that accord with these principles.

In a blog post, Google CEO Sundar Pichai said the principles mark the company's recognition that such powerful technology raises equally powerful questions about its use.

"How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right," he said, assuring the principles are not "theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions".

The company explained how it's putting these principles into action through internal education, building tools and carrying out research on topics in responsible AI, and reviewing its processes, as well as engaging with external stakeholders.

Additionally, the global tech giant announced the establishment of an external advisory council for the responsible development of AI. The makeup of the Advanced Technology External Advisory Council featured a mix of women and men from diverse backgrounds and different age groups.

However, only a few weeks after inception, Google axed the group, after thousands of Google workers signed a petition protesting the appointment of a member who was "vocally anti-trans, anti-LGBTQ, and anti-immigrant".

"We're ending the council and going back to the drawing board," Google said. "We'll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics."

But Google isn't alone. Microsoft has its own set of AI principles and Facebook has co-founded an AI ethics research centre in Germany.

Governing right from wrong

While determining AI ethics will undoubtedly be a group effort, Walsh believes it will ultimately come down to regulators to set the boundaries, much like anything else today.

"If we don't regulate, for instance, to ensure that there are X% of taxis that are wheelchair accessible there wouldn't be. Uber is not going to provide cars for the disabled, unless it's regulated," he said.

This process of regulating AI in Australia has begun. The Commonwealth Scientific and Industrial Research Organisation's (CSIRO) digital innovation arm, Data61, published a discussion paper on key issues raised by large-scale AI, seeking answers to a handful of questions that are expected to inform the Australian government's approach to AI ethics.

Read more: The real reason businesses are failing at AI (TechRepublic)  

At the time, then-Minister for Human Services and Digital Transformation Michael Keenan said the government would use the paper's findings and the feedback received during the consultation period to develop a national AI ethics framework.

It is expected the framework will include a set of principles and practical measures that organisations and individuals can use as a guide to ensure their design, development, and use of AI "meets community expectations".

Lazar said legislators will bring cohesion to the AI ethics conversation in Australia.

"It would be good not to leave it up to individuals or companies.…I think it's worth acknowledging that within each tech companies there are people taking these questions very seriously and doing superb work on how to do it in an ethical way," he said.

"But there are plenty of people in those companies who see it as important profit measures, but obviously they will behave in a way they behave and that's a worry, which is why legislation is crucial."
