Google Duplex beat the Turing test: Are we doomed?

Google's new Duplex AI sounds human, with stammers, pauses, and all. It could be a useful addition to Google Assistant, or it could be the harbinger of something much darker and more worrisome.
Written by David Gewirtz, Senior Contributing Editor

Video: Google's Assistant gets an AI upgrade with Duplex

Alan Turing helped pioneer the idea of programmable computers and designed the Bombe, the electromechanical machine that broke the Nazis' Enigma messages and saved thousands of lives. Turing's contributions to the war effort, and to computer science as a discipline, are astonishing. As Albert Einstein was to physics, Alan Turing was to computer science.

But in the 1950s, the British government considered Turing a criminal. The irony and injustice of this is mind-boggling, not just because he probably saved more British lives in World War II than any other Briton, but because his only so-called crime was that he was gay.

TechRepublic: Photos: The life of Alan Turing

For this, which at the time the British government called "gross indecency," he was given the choice of imprisonment or chemical castration. He killed himself in 1954, at the age of 41.

It is impossible to overstate how much the loss of Alan Turing cost the world. Two years before his death, Turing was thinking about the relationship between human and computer intelligence. Today, that concept is part of everyday life, as AI permeates everything from GPS to video games to the behavior of apps on our phones.

Back then, the idea that a device the size of a house, designed to break codes, could someday imitate human intelligence was about as forward-thinking as you could get. Turing not only understood and pioneered the idea of AI, but also created some metrics by which we could judge whether we'd actually gotten to the point where AI was intelligent.

The Turing test

Modern AI scientists have called what became known as the Turing test somewhat simplistic, because computer intelligence can be seen in a wide variety of actions beyond the imitation of human conversation. Even so, Turing's test has gone essentially unsolved since 1950.

The test is simple. In Volume LIX, Number 236 (October 1950) of the Oxford journal Mind, a Quarterly Review of Psychology and Philosophy, Turing published a paper, "Computing Machinery and Intelligence." While there were many important concepts in this document, one concept he put forth was what he called an "imitation game."

There's a 2014 movie by that name, starring Sherlock's Benedict Cumberbatch. It's about Turing, and it's worth watching. In the imitation game, a human "interrogator" communicates with two hidden respondents: one human, one computer.

TechRepublic: Hacking the Nazis: The secret story of the women who broke Hitler's codes

The interrogator would send, essentially, text messages to the human and to the computer and get replies. If the interrogator could not tell which of the two respondents was the human and which was the computer, the computer was said to have passed the Turing test: it had imitated a human so fully that a human couldn't tell the difference.
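
If you like to think in code, here's a minimal Python sketch of that structure. Everything in it is invented for illustration, and the stand-in "machine" is deliberately feeble; the point is just the shape of the game: an interrogator, two anonymous channels, and a guess.

```python
import random

def human_respondent(question):
    # Stand-in for the human respondent: answers are typed at the keyboard.
    return input(f"(secretly ask the human) {question} > ")

def machine_respondent(question):
    # Stand-in for the program under test. A serious contender would
    # generate something far more convincing than this canned deflection.
    return "That's an interesting question. Why do you ask?"

def imitation_game(questions):
    # Randomly assign the two respondents to anonymous channels A and B,
    # so the interrogator sees only labels, never identities.
    respondents = [human_respondent, machine_respondent]
    random.shuffle(respondents)
    channel = dict(zip("AB", respondents))

    for q in questions:
        for label in "AB":
            print(f"{label}: {channel[label](q)}")

    guess = input("Which channel is the computer, A or B? > ").strip().upper()
    machine_label = "A" if channel["A"] is machine_respondent else "B"
    print("Caught it." if guess == machine_label else "Fooled: the machine wins this round.")

imitation_game(["What's your happiest memory?", "What is 17 times 23?"])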

Most AI researchers will tell you that the Turing test is interesting, but it's not the point of AI. We don't need AI to imitate a human. We need AI to help us accomplish real tasks in the world. Even so, the Turing test has been "out there," toyed with by developers for years.

One limited example of a run at the Turing test was ELIZA, Joseph Weizenbaum's "computer therapist" program. ELIZA goes back to 1966 and the MIT AI Lab. She was limited, and just a moment or two of conversation would break the spell, but you can see early signs of Alexa in the interactions.


A short ELIZA conversation
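
Under the hood, ELIZA was little more than keyword spotting plus pronoun "reflection." What follows is not Weizenbaum's original code, just a minimal Python sketch of the technique, and it shows why the spell breaks so fast: miss a keyword, and all you get back is a contentless prompt.

```python
import re

# A few ELIZA-style rules: a keyword pattern and a canned response template.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

# Reflect first-person words back at the speaker ("my" -> "your", etc.).
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza_reply(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    # No keyword matched: fall back to a contentless prompt,
    # which is exactly where the spell starts to break.
    return "Please go on."

print(eliza_reply("I feel anxious about my job"))  # Why do you feel anxious about your job?
print(eliza_reply("The weather is nice"))          # Please go on.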

AI, of course, has improved tremendously over the years, and AI-driven conversation is now often the foundation of customer support phone trees, automated assistants, and other customer management tools.

Still, even though modern AI systems have gotten more helpful, no one confuses them with real humans. But that may be about to change.

Google Duplex

At Google I/O last week, Google demonstrated something it calls Google Duplex. As demonstrated, Duplex is a tool for making telephone appointments. The idea is that you tell your Android device that you want to set up an appointment for a time or set of times, and then Duplex, running from the Google cloud, dials the phone and conducts a voice conversation with the person on the other end.

What sets Duplex apart is how realistic it is. The conversation has the pauses, breaks, and minor exclamations that are the hallmark of informal human interaction. Duplex doesn't sound like a computer. Duplex sounds like a real person making a phone call for an appointment.

Let me make this clear: there is no uncanny valley in the demonstrated Duplex conversation. You can't tell that the machine making the call is a machine. It passes the Turing test, not only for text, but for an actual voice conversation.

Also: Google I/O 2018: Key takeaways on Duplex, AI, privacy, Android

Alphabet chairman John Hennessy acknowledged this at an I/O talk. Before you dismiss this as hype from a guy in a suit, you need to know who Hennessy is, beyond just the chairman of Google's parent company.

Hennessy was part of the Stanford team that pioneered RISC (Reduced Instruction Set Computing) processors, the processor technology inside nearly all smartphones. He was chair of Stanford's computer science department, dean of the school of engineering, and eventually became Stanford's president. He holds the IEEE Medal of Honor, the Queen Elizabeth Prize for Engineering (how ironic is that?), and was given a khata by His Holiness the 14th Dalai Lama. Just this year, he shared the ACM's prestigious Turing Award with David Patterson for their pioneering work on computer architecture.

In other words, if someone is going to declare the Turing test passed, John Hennessy is credibility incarnate.

The promise of Duplex and AI

So far, Google isn't promising Duplex as anything other than a disembodied appointment-making robotic friend. But it's obvious that this accomplishment has legs. Human-sounding interaction has many positive applications (don't worry, we'll get to the bad stuff in a bit).

Also: Google Assistant's big update: All the new AI tricks and features, explained

Take, for example, the idea of self-driving cars. I've been thinking a lot about these. In the last years of my parents' lives, there was a time when all they really needed was help being driven around. They were still lucid and mobile, but they couldn't safely drive. A self-driving car, accompanied by a voice (à la KITT in Knight Rider), could reduce the tech discomfort and help them make their way about town.

Automated assistants might actually be able to assist. IBM's Watson has made astonishing inroads in its ability to integrate institutional knowledge and bring that knowledge to humans, assisting with and recommending solutions for problems ranging from cooking recipes to supply chains to medical diagnosis.

We can certainly envision a day when uncanny-valley-free conversations can be had with systems containing deep institutional knowledge, and when actual problems can be solved, freeing up humans to do more important work (or, at least, freeing up us hapless consumers from the unending purgatory of hold music).

Where it could all go so terribly wrong

What goes up, must come down. Where there's a will, there's a way. Where there's promise in a new technology, there's also a terrifying dark side. Simulated human conversation, freed from the warning of the uncanny valley, can have dire consequences.

Let's start with a simple, Googlish example. How many of you get robocalls from so-called Google specialists? I know many of you do, because when I wrote about it, I got tons of responses on Twitter and Facebook. TL;DR: Most of the time, it's a scam.

Also: AI 'more dangerous than nukes': Elon Musk still firm on regulatory oversight

We know it's a scam because the conversation inevitably breaks down as soon as the recorded demon dialer starts "talking." But what if the robot behind robocallers was not identifiable as a fake human? What if that call sounded and acted like it was from an actual person?

How much scamming could be done if scammers could combine AI, a corpus of psychological manipulation knowledge, human-sounding callers, and the ability to scale? It boggles.

Or what about all those support jobs? Right now, we in America sometimes complain when we have to talk to someone with an accent in another country. Sometimes, the complaint is about the difficulty in understanding someone with an accent. Sometimes the complaint is about the loss of American jobs.

No matter what, that person with an accent in another country is still a human with a job, earning money for his or her family. But what if all those jobs, along with the jobs in emergency dispatch, telesales, and almost anything else that requires phone skills, can be taken over by an AI network? How many jobs will be lost because of Duplex?

When will Duplex start talking to Duplex? What happens when a human-sounding appointment caller reaches a human-sounding appointment maker? Will there be two computers sharing "ums" and "uhs," or will an API kick in, with the two systems sending XML messages back and forth instead?
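
Google has published no such machine-to-machine handoff protocol, so treat this purely as a sketch of the idea: once both bots realize no human is on the line, the small talk could collapse into a structured payload like the one below (every element name here is invented for illustration).

```python
import xml.etree.ElementTree as ET

# Hypothetical payload one scheduling bot might send another once both
# sides detect a machine on the line. This schema is invented for
# illustration; Google has published no such protocol.
request = ET.Element("appointmentRequest")
ET.SubElement(request, "service").text = "haircut"
ET.SubElement(request, "partySize").text = "1"
for slot in ("2018-05-22T10:00", "2018-05-22T13:00"):
    ET.SubElement(request, "preferredSlot").text = slot

print(ET.tostring(request, encoding="unicode"))
# -> <appointmentRequest><service>haircut</service><partySize>1</partySize>
#    <preferredSlot>2018-05-22T10:00</preferredSlot>...</appointmentRequest>
```

However the payload is encoded, the negotiation becomes an ordinary data exchange at that point; the "ums" and "uhs" are theater staged purely for our benefit.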

What about at election time? What happens if a Duplex-like system is able to impersonate a candidate? How many citizens will think they've gotten an actual call, and had an actual conversation, with a candidate, when it's just another SaaS offering purchased with a credit card?

Also: Ex-Google CEO Schmidt's warfare warning: We need AI ground rules for Pentagon work

What about impersonation? If Duplex-like technology gets good enough, will it be possible for your phone to impersonate you? Then what happens if someone gets their hands on your phone? Will your family members think it was you calling them in a panic to lure them from the house?

The darker implications of this sort of technology go on and on. Like much of the tech we've created before, there are advantages and disadvantages. But as AI gets smarter and smarter, and now, more convincing, will we need to "do something" to rein it in before we reach Terminator phase?

Can we bottle this genie?

One thing we can expect is some sort of legislation requiring human-like calls to identify themselves as such. Unfortunately, in a world where calls can be made across borders with ease, legislation in one country is unlikely to protect us against attacks from other countries. Malware is illegal, and yet it's constant.

Back in Turing's day, science fiction writer Isaac Asimov promoted a positive future for robots, governed by what he called the "Three Laws of Robotics":

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Even before Blade Runner, Asimov postulated human replicant machines. In 1953's The Caves of Steel (and its sequel, The Naked Sun), Asimov introduced readers to R. Daneel Olivaw, a human replicant detective (who was also a good guy). If you ever get time, read these two books. Human replicants aren't the only future-looking things Asimov presented.

He also talked about a time when people would have video conversation studios in their homes. On Friday, I'm using such a studio, in my home, to talk to three other ZDNet columnists about the future of 5G and other communications technology. Look for that to go online in a week or so. (If you want to see how one of these looks, check out our Peak Smartphone discussion from last month.)

My point, in all of this, is that technology is fluid. Nearly every innovation has a light side and a dark side. Duplex is at once fascinating and scary in its implications.

What worries me is not that we might have this technology, for I'm reasonably convinced that as we get to the Caves of Steel level, some companies will incorporate something like the Three Laws.

No, it's not the robots that scare me. It's the humans: those in rogue nation states, those affiliated with organized crime, and even those just a little too focused on accomplishing their goals without regard for their fellow humans.

Those folks scare me, because artificial intelligence, in the hands of unscrupulous and evil-minded real intelligence, may well respond to no law. We may not be able to stop it, which means we may be living in a robot-eat-robot-eat-human world.

That'll help you sleep tonight, I'm sure.

One final thought for you: evil does not always come in a package with clear labeling.

The UK's regressive treatment of Alan Turing was not only unfair, horrible, and incredibly short-sighted; it was evil. And yet, it was all done in the name of Queen and country. We need to be aware that we may deploy these AI systems for what we (or our leaders at the time) think are the best of reasons. And it may all go horribly wrong.


You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.
