The ethical challenges of artificial intelligence

As AI advances, systems will need to be trained and 'raised' in much the same way as humans
Written by Bob Violino, Contributor

One of the issues that arises when people discuss the use of artificial intelligence (AI) is how to ensure that AI-based decisions are ethical. It's a valid concern.

"While AI is by no means human, by no means can we treat it like just a program," said Michael Biltz, managing director of Accenture Technology Vision at consulting firm Accenture. "In fact, creating AIs should be viewed more like raising a child than programming an application. That's because AI has grown to the point where it can have just as much influence as the people using it."

Employees at companies are not only trained to do a specific job; they're also expected to understand policies around diversity and privacy, for example. "AIs need to be trained and 'raised' in much the same way, to not only perform a task but to act as a responsible co-worker and representative of the company," Biltz said.

AI systems are making decisions in a variety of industries today -- or will be doing so in the near future -- that could have an impact on virtually everything they touch. "But the reality is that we don't yet have the standards in place to govern what's acceptable and what's not, or to outline what a company is responsible or liable for as a result of [AI-based] decisions," Biltz said.

Autonomous vehicles provide an example. "They're sure to be involved in accidents that cause damage or injury, just like human drivers today," Biltz said. "The difference is that we have a clear understanding for defining fault and blame for human drivers, and that doesn't yet exist for AI."

Some forward-looking organizations are leading the way in this area, Biltz said. For example, Audi has announced that the company will assume liability for accidents involving its 2019 A8 model when its "Traffic Jam Pilot" automated system is in use. And the German federal government has adopted rules governing how autonomous cars should act in an unavoidable accident, he said: the cars must choose material damage over hurting people.
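
To make that rule concrete, consider a minimal sketch of how a planner might rank candidate maneuvers. Everything here is hypothetical for illustration -- the Maneuver fields, the numbers, and the choose_maneuver helper are invented, not any manufacturer's actual logic. The key idea is lexicographic ordering: injury risk is compared first, and material cost only breaks ties.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    injury_risk: float   # estimated probability of harming a person (0-1)
    damage_cost: float   # estimated material damage, in euros

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # Lexicographic priority: minimize injury risk first; material
    # cost only breaks ties. No savings in property damage can
    # outweigh a higher risk to people.
    return min(options, key=lambda m: (m.injury_risk, m.damage_cost))

options = [
    Maneuver("swerve_into_barrier", injury_risk=0.0, damage_cost=15_000),
    Maneuver("brake_straight", injury_risk=0.3, damage_cost=2_000),
]
print(choose_maneuver(options).name)  # -> swerve_into_barrier
```

Encoding the priority as an explicit ordering, rather than blending safety and cost into a single score, keeps the ethical rule auditable.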

"This idea of responsibility and liability is not just an issue for the automotive industry, but for all industries that use AI," Biltz said.

Organizations can address ethical issues with AI in a number of ways, Biltz said. One is augmentation. "Fundamentally, machines need to be designed to work with humans," he said. "AI should put people at the center, augmenting the workforce by applying the capabilities of machines so people can focus on higher-value analysis, decision-making and innovation."
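
One common way to put that augmentation principle into practice is a human-in-the-loop gate: the system acts on its own only when it is confident, and routes everything else to a person. The sketch below is a hypothetical illustration, with an invented route_decision helper and an arbitrary 0.9 threshold, not Accenture's design.

```python
def route_decision(prediction: str, confidence: float,
                   threshold: float = 0.9) -> str:
    # The model acts alone only on clear-cut cases; everything
    # below the confidence threshold is escalated to a person.
    if confidence >= threshold:
        return f"auto: {prediction}"
    return "escalated: queued for human review"

print(route_decision("approve", 0.97))  # auto: approve
print(route_decision("deny", 0.62))     # escalated: queued for human review
```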

In addition, companies need to build and train their AI applications to provide clear explanations for the actions they take. "What happens if an AI-powered mortgage lender denies a loan to a qualified prospective homebuyer?" Biltz said. "If it can't explain why, then its decisions wouldn't be trusted. The way that people communicate and collaborate together is through explanation, so AI co-workers need to be taught to behave in the same way."
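
As a simple illustration of what such an explanation might look like, the sketch below trains a toy logistic-regression "lender" and reports which features pushed an applicant toward denial, using linear coefficient attributions. The data, feature names, and explain_denial helper are all invented, and production lenders use far more rigorous explainability methods.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [income ($k), debt_ratio, credit_score] -> 1 approved, 0 denied.
X = np.array([[80, 0.2, 720], [30, 0.6, 580], [60, 0.3, 690],
              [25, 0.7, 550], [90, 0.1, 760], [40, 0.5, 600]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])
features = ["income", "debt_ratio", "credit_score"]

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain_denial(applicant):
    # For a linear model, coefficient * deviation-from-mean gives a
    # simple per-feature attribution; the most negative values
    # pushed hardest toward denial.
    contributions = model.coef_[0] * (applicant - X.mean(axis=0))
    order = np.argsort(contributions)
    return [(features[i], round(float(contributions[i]), 3)) for i in order[:2]]

applicant = np.array([28, 0.65, 560], dtype=float)
if model.predict([applicant])[0] == 0:
    print("Denied. Main factors:", explain_denial(applicant))
```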

Furthermore, organizations need to establish a clear, explicit, and transparent code of ethics about what AI can and can't do. "A litmus test for responsible behavior will ensure AI is accountable for its actions," Biltz said.
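
What might such a litmus test look like in code? Here is a bare-bones, hypothetical sketch: an explicit deny-list plus an audit log, so every proposed action is checked against written policy and the outcome is recorded for later accountability. The rule set and litmus_test helper are invented for illustration.

```python
# Hypothetical deny-list; a real code of ethics would be far richer
# and maintained by compliance teams, not hard-coded.
FORBIDDEN_ACTIONS = {"share_personal_data", "discriminate_by_age"}

def litmus_test(action: str, audit_log: list) -> bool:
    # Gate every proposed action against explicit policy, and record
    # the outcome so each decision is auditable after the fact.
    allowed = action not in FORBIDDEN_ACTIONS
    audit_log.append({"action": action, "allowed": allowed})
    return allowed

audit_log = []
for action in ["recommend_product", "share_personal_data"]:
    if not litmus_test(action, audit_log):
        print("Blocked by policy:", action)
print(audit_log)
```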
