When the going gets tough, would you want Han Solo or R2-D2 by your side? As with any comparison, each option offers its own advantages. Han would probably be more helpful at sweet-talking stormtroopers, but R2-D2 has a rocket booster.
While modern technology hasn't reached R2-D2 levels just yet, and no real human could ever be as cool as Han Solo, a new study conducted at the University of Georgia set out to answer a related question (minus the Star Wars references): do people trust computers or their fellow humans more?
According to the study's findings, people tend to value the advice of algorithms and computers over that of other humans, particularly when faced with a difficult task.
You may think you would never let a machine make a decision for you, but in all likelihood, you already do on a daily basis. Few of us give it much thought, but algorithms already automate much of modern life. Each time your music streaming app suggests a new song to try, that's an algorithm. Similarly, whenever you use Google to find a new local lunch spot, an algorithm determines which food trucks are displayed first.
“Algorithms are able to do a huge number of tasks, and the number of tasks that they are able to do is expanding practically every day,” says Eric Bogert, a Ph.D. student in the Terry College of Business Department of Management Information Systems. “It seems like there’s a bias towards leaning more heavily on algorithms as a task gets harder and that effect is stronger than the bias towards relying on advice from other people.”
It’s well documented that nobody is perfect. We all make mistakes. Computers, however, aren’t weighed down by pesky emotions or biases. In theory, at least, machines and computers aren’t supposed to be as prone to errors as people. Of course, everyone knows computers are more than capable of mistakes (spellcheck, anyone?), but generally speaking people tend to believe computer-provided information.
To test this phenomenon, researchers examined the behavior of 1,500 study participants. Each subject was asked to count the number of people visible in a photograph. The catch? The number of people in the pictures kept growing, making it harder and harder for participants to keep an accurate count. Meanwhile, subjects received suggested counts from both computers and other people.
Sure enough, as the number of visible individuals increased, subjects became much more likely to rely on the machine's suggestions than on advice from other people, or to bother counting on their own at all.
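Reliance on advice like this is commonly quantified in judge-advisor research with a "weight of advice" (WOA) measure: how far a person shifts from their initial estimate toward the suggestion they received. The sketch below is purely illustrative (the numbers are invented, and the article does not state which measure this study used); it simply shows how the metric works.

```python
def weight_of_advice(initial_estimate: float, advice: float, final_estimate: float) -> float:
    """Weight of advice (WOA) from judge-advisor research:
    0.0 means the advice was ignored entirely,
    1.0 means it was adopted wholesale."""
    if advice == initial_estimate:
        raise ValueError("WOA is undefined when the advice equals the initial estimate")
    return (final_estimate - initial_estimate) / (advice - initial_estimate)

# Hypothetical example: a participant first counts 40 people in a crowded
# photo, an algorithm suggests 52, and the participant revises to 49.
woa = weight_of_advice(initial_estimate=40, advice=52, final_estimate=49)
print(woa)  # 0.75 -> the participant leaned heavily on the suggestion
```

Comparing average WOA for computer-sourced versus human-sourced suggestions, as task difficulty grows, is one straightforward way a bias toward algorithmic advice would show up in data like this.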
“This is a task that people perceive that a computer will be good at, even though it might be more subject to bias than counting objects,” assistant professor Aaron Schecter, a study co-author, explains. “One of the common problems with AI is when it is used for awarding credit or approving someone for loans. While that is a subjective decision, there are a lot of numbers in there — like income and credit score — so people feel like this is a good job for an algorithm. But we know that dependence leads to discriminatory practices in many cases because of social factors that aren’t considered.”
Relying on an algorithm to find your next burrito or favorite band isn't all that big a deal, but machine-made decisions are already infiltrating far more serious areas like facial recognition and hiring. Troublingly, the study's authors note that such projects have already drawn significant criticism over cultural biases baked in by the programmers who designed them.
Imagine for a moment that the local police knock on your door and tell you a facial recognition program has ID’d you as the person who robbed a bank last week. You’re innocent, but with an expensive algorithm pointing its digital finger at you, it will be very hard to convince a judge and jury of that.
Computers and algorithms are incredible assets that continue to make our lives easier by the day. Still, it’s important not to blindly believe any suggestion, even if it comes from a piece of sophisticated technology.
“The eventual goal is to look at groups of humans and machines making decisions and find how we can get them to trust each other and how that changes their behavior,” Schecter concludes. “Because there’s very little research in that setting, we’re starting with the fundamentals.”
The full study is published in the journal Scientific Reports.