We’ve all heard of the psychological exercise known as the Prisoner’s Dilemma.
Two criminals get busted for committing a crime together and are interrogated separately. If one criminal rats out the other while the accomplice keeps quiet, the rat walks and the accomplice does the maximum time of three years. If they both rat each other out, they each get a shorter sentence: two years apiece. If neither snitches, the criminals each get a year on a lesser charge.
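The sentences above form a small payoff table, and writing it out makes the dilemma's logic explicit. The sketch below is purely illustrative (the names and layout are ours, not from any study); it shows that whatever the other prisoner does, ratting always yields a shorter sentence, which is why pure self-interest drags both players into the worse mutual-betrayal outcome.

```python
# Payoff table for the dilemma described above: years in jail, lower is better.
# Keys are (prisoner_A_action, prisoner_B_action); values are (A_years, B_years).
SENTENCES = {
    ("rat", "quiet"): (0, 3),    # the rat walks, the accomplice does max time
    ("quiet", "rat"): (3, 0),
    ("rat", "rat"): (2, 2),      # mutual betrayal: two years each
    ("quiet", "quiet"): (1, 1),  # mutual silence: a year each on a lesser charge
}

def best_response(other_action):
    """Return the action that minimizes prisoner A's sentence, holding B fixed."""
    return min(("rat", "quiet"), key=lambda a: SENTENCES[(a, other_action)][0])

# Ratting dominates: it beats staying quiet against either choice by B,
# so two self-interested prisoners land on (2, 2) instead of (1, 1).
print(best_response("quiet"))  # -> rat (0 years beats 1 year)
print(best_response("rat"))    # -> rat (2 years beats 3 years)
```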
The idea is to determine how individuals who privilege self-interest can be steered toward cooperation. Ladders previously reported on an experiment conducted by the Victoria University of Wellington and the University of Southampton in the United Kingdom. The researchers aimed to establish how successfully teams operated when they were forced to cooperate with psychopaths. The results unfolded as you might have suspected: the more psychopaths on a team, the less productive it was.
“I think, in corporations, they might engage in teamwork to the extent it furthers their careers, but will be less likely to co-operate, and probably back-stab others when necessary to further themselves,” added Martin Sellbom, associate professor of psychology at the University of Otago.
But can this potent lack of empathy be informative in some way? In the ever-expanding world of A.I., for instance, a team of researchers recently proposed that studying psychopathy can illuminate more nuanced forms of self-interest. Their new experiment sought to test the limits of artificial intelligence and the technology’s ability to correctly interpret emotions.
Robot see, robot do
Three researchers from the University of Waterloo conducted an experiment to determine how AI agents can manipulate selfish individuals into cooperation.
The team recreated the classic prisoner’s dilemma described above, but replaced one of the self-motivated criminals with an AI avatar and the incentive of reduced jail time with gold. The avatars were given an array of emotional displays, from nuanced facial expressions to specific utterances.
The human prisoner and the avatar were permitted to survey each other’s emotions. The more human-like the avatar, the more likely the human participant was to cooperate. “While researchers can successfully improve perception of Human Uniqueness traits by making agents smarter, emotions are critical for perception of Human Nature traits. This improvement also positively affected users’ cooperation with the agent and their enjoyment,” write the study’s authors.
Recently, researchers at MIT modeled an AI on Norman Bates, the lead character of Alfred Hitchcock’s 1960 opus Psycho. According to the researchers, Norman was the world’s first psychopathic AI. Among other things, Norman was created to demonstrate how inherent biases arise in virtual agents. Back in 2016, an algorithm used by US courts was found to disproportionately label black prisoners as likely to re-offend. Norman was exposed to “the darkest corners of Reddit,” proving that the data fed to an artificial intelligence can dramatically alter its behavior.
The potential for artificial intelligence to mimic human mannerisms for ulterior motives is monumental. While this may very well enable positive strides in psychological research, some have wisely identified the grim portents this new data ushers in. Virtual agents furthering their grasp on sentience could present several very real dilemmas in the near future.