Apple is hiring a trainer to teach its artificially intelligent voice interface, Siri, to have "serious conversations." Would you trust it?
The Future of Work

Apple posts job teaching Siri to listen to users’ troubles

As we look toward a future of working hand in hand with machines, the future of work increasingly means hiring people to translate a robot’s engineered results into something familiar to human ears. As part of that future, Apple is hiring a trainer to teach its artificially intelligent voice interface, Siri, to have “serious conversations.”

In a recent job listing, Apple announced that it is looking for someone to fill its “Siri Software Engineer, Health and Wellness” role. Among the qualifications, the candidate needs to know both how to code and how to engage in peer counseling.

“People talk to Siri about all kinds of things, including when they’re having a stressful day or have something serious on their mind. They turn to Siri in emergencies or when they want guidance on living a healthier life,” the listing states. “Does improving Siri in these areas pique your interest? Come work as part of the Siri Domains team and make a difference.”

Is the future of mental health humans talking to machines?

This job listing is part of a wider trend of intelligent assistants moving beyond the role of a smart appliance you can ask for directions. Now, companies are designing these assistants to go one step further and be your therapist. Woebot, for example, is a Facebook Messenger bot that was created by Stanford psychologists and is programmed to “capture your moods” as you tell it about your day. Some robots are even being designed to be medical ethicists. Researchers at the Georgia Institute of Technology developed a robot, called an “intervening ethical governor,” to give patients with Parkinson’s disease a neutral advocate in doctor-patient interactions.

But there are many ethical and practical minefields to cross before our next therapist or patient advocate is a robot. Some people are uncomfortable with the idea of a robot having more say in decisions than a human does. As one observer of a robot refereeing a patient’s interactions put it, “If the robot stood there and told me to ‘please calm down,’ I’d smack him.”

Do you really want to tell Silicon Valley giants all your problems?

And then there’s the bigger hurdle: these AI-powered assistants are often owned by technology giants like Facebook and Apple, which aren’t currently bound by the same legal requirements as licensed mental health workers to keep your questions about depression and stress private.

In its disclaimer, Woebot admits that Facebook can still read the content of your messages and know exactly who you are. While a licensed medical provider is bound by the Health Insurance Portability and Accountability Act (HIPAA) to keep your medical information private, messenger bots and AI assistants are under no such obligation.

So you can talk to a robot about your problems, but these companies cannot guarantee that no one else is listening in. Hiring an engineer with a psychology background is one step toward addressing this new future of mental health in technology. The next step is hiring someone familiar with big-data privacy issues.