Google won’t confirm if AI assistant will identify itself up front as a robot


Yesterday, Ladders reported on Google's new human-sounding AI chatbot – which makes phone calls and carries on conversations with people – and the ethical questions the service raises.

Even though Google said Duplex is meant to make the experience comfortable, the ethical question being raised is: For whom is the comfort intended? Is it comfortable for customer service representatives not to know whether they are communicating with a machine? Or is that too intimate and invasive?

This ethical tension could be partly resolved if Duplex were required to announce that a computer is on the call, but Google has been vague and will not confirm whether Duplex will be made to self-disclose up front.

“We understand and value the discussion around Google Duplex — as we’ve said from the beginning, transparency in the technology is important. We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product,” a spokesman said in a statement to Ladders.

The new statement came a day after Fast Company asked Google whether Duplex would definitely identify itself as a bot, and got a curious reply to the simple yes-or-no question – a repeat of a line from a blog post: "We'll be experimenting with the right approach over the coming months."