Humans May Accept Robot Lies, Depending on the Situation

Summary: A new study explores human acceptance of robot deception, finding that people are more likely to approve of robots telling “white lies” to protect feelings, but strongly disapprove of hidden surveillance. The research reveals that acceptance of robot lies depends on context and perceived intentions, with important implications for the ethical development of social robots.

Estimated reading time: 5 minutes

As robots become increasingly integrated into our daily lives, questions arise about how they should navigate complex social situations. A new study published in Frontiers in Robotics and AI explores an intriguing question: Will humans accept robots that can lie?

The research, led by Andres Rosero of George Mason University, examined how nearly 500 participants responded to different types of deceptive behavior by robots. The results suggest that people’s acceptance of robot lies depends heavily on the context and perceived intentions.

Why it matters

As robots take on more social roles in healthcare, retail, and other settings, understanding how humans perceive and respond to potential robot deception is crucial. This research provides insights that could shape the ethical development and deployment of social robots.

Lies that protect vs. lies that mislead

The study looked at three types of robot deception:

  1. External state deception: Misrepresenting facts about the world (e.g. a robot caretaker lying to an Alzheimer’s patient that their deceased spouse will be home soon)
  2. Hidden state deception: Concealing the robot’s true capabilities (e.g. a cleaning robot secretly recording video)
  3. Superficial state deception: Overstating the robot’s capabilities (e.g. a robot falsely claiming to feel pain while moving furniture)

Participants were most accepting of external state deception, with many viewing it as justified to spare someone’s feelings or prevent emotional harm. As Rosero explains:

“Humans may be readily capable of understanding the underlying social and moral mechanisms that drive the pro-social desires for lying and thus are capable of articulating a justification for the robot’s similar behavior.”

In contrast, hidden state deception was viewed very negatively. Participants saw it as highly deceptive and disapproved strongly. Many extended blame beyond the robot to its owners or creators.

“If people realize that a robot designed for a certain role has hidden abilities and goals that undermine this role, people may consider discontinuing its use because the robot does not primarily serve them but the interests of a third party,” notes Rosero.

Superficial state deception fell in the middle: participants rated it as moderately deceptive and somewhat disapproved of it, and they struggled more to justify this type of deception.

Implications for robot design and deployment

The findings have important implications for how social robots should be designed and programmed. Rosero suggests that allowing robots to tell “white lies” in certain contexts may actually be beneficial and align with human social norms.

However, hidden capabilities like covert surveillance are likely to significantly damage trust if discovered. The strong negative reaction “may provide evidence for the claim that some forms of deception committed by robots are less actions that the robot itself chose to take, but rather are a vessel for another agent that is the deceiver,” explains Rosero.

For superficial state deception, like a robot falsely claiming pain, acceptance may depend on the specific situation and outcome. If it manipulates humans in problematic ways, it’s less likely to be justified.

Dr. Elizabeth Phillips, a co-author on the study, emphasizes the need to carefully consider competing needs:

“A real challenge for social robots, then, is determining and potentially balancing ‘whose goals and needs’ are the ones that are justifiable and should supersede in these cases.”

More research needed

The researchers note that this study is just a first step. Future work should examine reactions to robot deception in real-world interactions over time. Developing effective justifications or explanations for necessary instances of robot deception will also be crucial.

As robots become more advanced and take on more social roles, navigating the complex ethical landscape around robot “lies” will only become more important. This research provides an intriguing glimpse into how humans may respond to deceptive robots – and what types of robot deception we may or may not be willing to accept.


Quiz:

  1. Which type of robot deception did participants view most negatively?
     a) External state deception
     b) Hidden state deception
     c) Superficial state deception
  2. Why might people be more accepting of external state deception by robots?
     a) It aligns with human social norms around “white lies”
     b) Robots are incapable of telling lies
     c) It’s impossible for robots to misrepresent facts
  3. What factor might influence acceptance of superficial state deception?
     a) The specific situation and outcome
     b) The robot’s appearance
     c) The robot’s name

Answers:

  1. b) Hidden state deception
  2. a) It aligns with human social norms around “white lies”
  3. a) The specific situation and outcome
