Accurately evaluating one’s own work is a near-impossible task. None of us can help forming biases in our own favor, which is why it’s common practice for employers to keep tabs on their employees’ activities and productivity.
Performance evaluations come in a variety of shapes and forms nowadays. Some companies may rely solely on quarterly performance evaluations conducted by managers, while others opt for a more comprehensive approach that entails near-constant monitoring of work devices.
As artificial intelligence continues to disrupt virtually every corner of modern life and business, many organizations have begun experimenting with automated AI systems tasked with monitoring both employee behavior and productivity. Now, new research from Cornell University is the first to examine how human employees react to having their every professional move monitored and judged by an automated colleague.
AI evaluations usually backfire
There’s no shortage of apparent workday wonders available via AI right now. Employers are able to keep tabs on their employees today in ways managers could only dream of just a few years ago. Physical motion, facial expressions, and even vocal tone can all be monitored by automated systems.
But are such approaches really advantageous when it comes to employee evaluations? According to the research team at Cornell, probably not. Their study concluded that when AI is used to monitor employees, it often results in more complaints and lower productivity all around. Moreover, the use of AI to track and evaluate work leads to a much greater perceived loss of autonomy among workers than assessments made by other humans.
The use of emerging technologies to help ensure employees aren’t slacking off, being rude to customers, or engaging in any other counterproductive or unethical activities sounds like a no-brainer from way up in the C-suite. That being said, the study authors stress that corporate decision makers should think very carefully about how such protocols will affect employees psychologically.
The right way to frame AI evaluations
Importantly, however, the study does point to one way of talking about automated employee tracking and surveillance technologies that appears to help human workers view the situation more positively. Researchers say it can help tremendously if workers believe the technologies are being put in place to further their own development and assist in getting tasks done, as opposed to simply watching and judging their performance.
“When artificial intelligence and other advanced technologies are implemented for developmental purposes, people like that they can learn from it and improve their performance,” says Emily Zitek, associate professor of organizational behavior in the ILR School, in a university release. “The problem occurs when they feel like an evaluation is happening automatically, straight from the data, and they’re not able to contextualize it in any way.”
Assessing AI’s influence on employees
These findings, published in the scientific journal Communications Psychology, were produced via four separate experiments encompassing close to 1,200 people. Study authors primarily focused on how much of a difference it makes if other people or AI conduct surveillance on employees.
Additionally, the context of the situation and intended uses of the technologies (performance evaluations, supporting development) were heavily factored in as well. In other words, what were employees told about these new automated systems?
The first experiment asked participants to remember and write down a few past experiences in which they were monitored and evaluated by either human or automated surveillance. Researchers found that when watched by AI specifically, employees typically reported feeling far less autonomous and were also much more likely to showcase “resistance behaviors.”
The next two experiments simulated real-life surveillance. In those experiments, subjects began by working in groups to brainstorm new ideas for a hypothetical theme park. After that, everyone was told to individually generate ideas about one particular area of the park. Participants were made to believe all of their work was being monitored by either a human research assistant or a form of AI (represented in Zoom calls as an “AI Technology Feed”).
After a few minutes, either the human observer or the AI sent messages informing participants that they weren’t producing enough ideas and needed to try harder. One cohort filled out a survey after the exercise: 30 percent criticized the AI program, while only 7 percent had critical words for the human monitor. Participants also tended to produce fewer ideas if they were under the impression AI was watching.
“The reinforcement from the AI made the situation just more stressful and less creative,” one person wrote.
“Even though the participants got the same message in both cases that they needed to generate more ideas, they perceived it differently when it came from AI rather than the research assistant,” Prof. Zitek explains. “The AI surveillance caused them to perform worse in multiple studies.”
AI, employees, and autonomy
To be fair, it wasn’t all bad news for the future prospects of AI in the workplace. A fourth experiment asked subjects to imagine they worked in a call center, and that either a human or an AI program would be listening in on their calls. Some participants were told this was being done as a means of performance evaluation, while others were told the approach was intended to provide developmental feedback.
Sure enough, those told the AI would be listening for developmental purposes did not feel the surveillance was encroaching on their autonomy and didn’t report a greater desire to quit. All in all, the study authors say this finding indicates there is a viable way for organizations and employers to incorporate these technologies into the workplace in a positive manner. The key is to ensure employees don’t feel like their every movement and decision is under scrutiny.
“Organizations trying to implement this kind of surveillance need to recognize the pros and cons,” Prof. Zitek concludes. “They should do what they can to make it either more developmental or ensure that people can add contextualization. If people feel like they don’t have autonomy, they’re not going to be happy.”