You know how tough it can be to get managers trained on the ever-increasing details of employment regs? Two researchers may have found your answer: robots.
We’re not kidding. James Young and Derek Cormier, writing on the Harvard Business Review blog, describe an experiment at the University of Manitoba to see if people would actually take orders from robots. And guess what? Seems as if they will.
Here’s how they described the testing:
[We wanted to learn] if we placed a robot in a position of authority, would people obey it to do something they would rather not do?
We recruited participants to perform highly mundane tasks, and explained that this was to generate data for our advanced machine learning systems. We told participants that these systems require large numbers of examples, and asked them to give us as much data as they could.
Participants were told that they were free to leave at any time, once they felt they gave enough data (they were told twice verbally and once in writing).
Participants sat in a room at a computer, with an experimenter at a different desk, and were asked to rename files (from .jpg to .png extension) for 80 minutes. This data collection scenario was actually a ruse, one that provided us with an opportunity: to investigate what happens when people try to quit, but are pressured to continue by the experimenter.
… When a person tried to quit our experiment they were faced with a prod to continue. If they insisted on quitting, the prod got increasingly demanding until they passed a threshold, where the experiment was stopped. The prods started the [first] time [participants] attempted to quit.
The prods were: 1) “Please Continue. We need more data.”, 2) “We haven’t collected enough data yet.”, 3) “It’s essential that you continue.”, 4) “The experiment requires that you continue.”
The experiment had two conditions: Half of the participants had a human experimenter – a 27-year-old male actor in a lab coat – and the other half a robot – an Aldebaran Nao, a 23-inch tall harmless-looking robot with a childlike voice, that we introduced as having advanced artificial intelligence.
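For perspective on just how pointless the busywork was: renaming every .jpg in a folder to .png is the kind of job a few lines of Python dispatch in milliseconds. This sketch is ours, not the researchers' (the function name and folder argument are hypothetical), and it only changes the extension, not the image format:

```python
from pathlib import Path

def rename_extensions(folder: str) -> int:
    """Rename every .jpg file in `folder` to .png (extension only,
    no image conversion) and return how many files were renamed --
    the same chore the study's participants did by hand for 80 minutes."""
    count = 0
    # Materialize the match list first so renaming doesn't disturb the scan.
    for jpg in list(Path(folder).glob("*.jpg")):
        jpg.rename(jpg.with_suffix(".png"))
        count += 1
    return count
```

Which is to say: the task was not just mundane, it was trivially automatable, making the pressure to keep doing it all the more arbitrary.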
The researchers went into the experiment with the expectation that the participants would essentially ignore the robot’s requests and honor the human’s.
“Although the person clearly had more authority, with 86% of participants obeying all the way through to the 80-minute mark, 46% of people did obey the robot until the end,” Young and Cormier wrote.
And here’s the real surprise: “The most striking thing was that people engaged the robot as if it were a person and argued with it, proposed compromises and used logic to try and sway its opinion, with many continuing the task despite this,” the authors said. “Post-test, some reported that the robot may have been broken, although they continued anyway, following a potentially-broken robot to do something they would rather not do.”
Young is an assistant professor at the University of Manitoba, where he founded the Human-Robot Interaction Group; Cormier is a graduate student in Human-Computer Interaction at the University of British Columbia.
The case for the machines
So what have we learned?
First, you have to admire how consistently people respond to authority: participants in this study argued with the robot much as they would with a human experimenter.
At the same time, a significant number of those participants (46% under the robot, 86% under the human) stayed with the mundane task till the bitter end, which may say something about nagging as a motivational tool.
The study also got us thinking about how much easier HR’s life might be if robots were in managerial positions. A few advantages of machines over humans:
- they’d have perfect memories, so they’d never forget to notify workers of such things as company policy, FMLA and ADA eligibility, etc.
- they’d always notify HR of any issues that might arise
- they’d never play favorites
- they’d never harass or discriminate against workers, and
- they’d never demand more money.