Illustrator: Ashley Siebels
Experts estimate that as many as half of all jobs are vulnerable to automation. But you’d think soft skills, like empathy and ethical judgment, might make some jobs less vulnerable than others. Well, maybe not. Here’s a new set of jobs in the path of the robot invasion.
[pullquote]If the robot stood there and told me to ‘please calm down,’ I’d smack him.[/pullquote]
When we humans fight, we don’t always fight fair. Arguments can bring out the worst in us, surfacing our preconceived prejudices and assumptions. A new kind of ethicist wants to be your referee, taking messy personal politics out of the equation and putting impartial algorithms in its place. This new breed of ethicist: robots.
Researchers at the Georgia Institute of Technology developed a robot, called an “intervening ethical governor,” to help patients with Parkinson’s disease. As people with this disease lose control over their facial expressions and motor functions, they have trouble communicating emotions and needs to their caregivers. Between humans, this breakdown can lead to misunderstandings or worse. But with a robot weighing in and enforcing intervention rules, researchers aim to stop arguments from ever getting that far.
Robot referees in your doctors’ offices
By observing patient-caregiver interactions, this robot referee will, according to its developers, intervene if a “human’s dignity becomes threatened due to other’s inappropriate behavior.” Tracking patients’ and caregivers’ voice volume, speech, and location, the ethical robot decides to step in when an interaction triggers its “too angry,” “too quiet,” or “safety-first” protocols. For example, if a patient starts getting loudly frustrated that they can’t open a pill bottle, the robot will detect the patient’s raised voice and tell them, “I understand. Let’s calm down a little bit!” If a patient gets up and leaves the room, the robot’s camera sensors will pick this up. The robot will begin to wave its hands and implore, “The session is not yet finished! Please come back!”
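The intervention logic described above amounts to a set of threshold rules over sensor readings. Here is a minimal sketch of how such rules could be expressed; the field names, thresholds, the “too quiet” message, and the rule ordering are illustrative assumptions, not the Georgia Tech implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    """A hypothetical sensor snapshot. The real robot tracks voice
    volume, speech, and location; these exact fields are assumptions."""
    volume_db: float        # patient's voice volume
    is_speaking: bool       # whether the patient has spoken recently
    patient_in_room: bool   # inferred from the robot's camera sensors

# Illustrative threshold -- not taken from the published study.
TOO_ANGRY_DB = 75.0

def intervene(obs: Observation) -> Optional[str]:
    """Return an intervention message, or None if no rule fires."""
    if not obs.patient_in_room:
        # "safety-first" protocol: the patient has left the session
        return "The session is not yet finished! Please come back!"
    if obs.is_speaking and obs.volume_db > TOO_ANGRY_DB:
        # "too angry" protocol: raised voice detected
        return "I understand. Let's calm down a little bit!"
    if not obs.is_speaking:
        # "too quiet" protocol (message is our invention)
        return "Is everything all right?"
    return None
```

A real system would smooth these signals over time rather than react to a single reading, but the rule structure is the same.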
Although the researchers are still in the proof-of-concept stage, their five-year study could have a lasting positive impact on the millions of Americans affected by Parkinson’s disease if the robot ethicist is adopted by clinics. In Europe, therapeutic robots are already being used to help children with autism and elderly patients with socialization issues.
But we still have a ways to go before a robot referee will be weighing in on healthcare disputes. To get qualitative feedback, the researchers recruited nine older adults to observe the robot ethicist in action as it mediated a patient-caregiver conversation. Results were mixed. Although the participants gave positive feedback on the robot’s safety-first purpose, they were less happy about its commanding, critical tone and the potential privacy issues. “Would both parties consent to having a robot mediator?” participants wanted to know. No one wanted the robot to have the authority to judge patients and make them feel blamed.
As one observer of the robot’s interactions put it, “If the robot stood there and told me to ‘please calm down,’ I’d smack him.”