When Machines Command

Exploring the Psychology of Obedience and the Evolution of Leadership in the AI Era

A recent study by Grzyb et al. (2023) reveals that people obey orders from humanoid robots at the same rate as they do from human authorities, echoing the unsettling legacy of Milgram’s obedience experiments. This finding forces a critical re-examination of leadership, agency, and ethics in an AI-driven world. As artificial systems increasingly occupy positions of influence, the core challenge becomes clear: will we author our own decisions, or surrender them to machines that mirror authority without embodying responsibility?

Introduction: A New Frontier in Authority and Influence

For decades, psychology has explored the unsettling reality of human obedience to authority. From Stanley Milgram’s infamous experiments in the 1960s to more recent studies, the findings have remained consistent: under certain conditions, people will comply with directives—even when they conflict with their moral compass. But what happens when the authority issuing orders is not human?

A recent study (Grzyb et al., 2023) has taken Milgram’s classic paradigm and applied it to a new authority figure: a humanoid robot. The result is striking: obedience levels remained just as high when the command to harm another person came from a machine as when it came from a human professor.

This finding forces us to confront urgent ethical, psychological, and leadership challenges. If people already exhibit this level of submission to robots, what does that mean for the future of leadership, decision-making, and moral agency in an increasingly AI-driven world?

The Experiment: Obedience in the Age of Robots

Revisiting Milgram’s Paradigm

The study (Grzyb et al., 2023) was a modified version of Milgram’s original obedience experiment. Two groups of participants were tested: one received instructions from a university professor, the other from a humanoid robot named Pepper.

Key Findings

  • No significant difference in obedience: The presence of a robot versus a human authority figure did not alter compliance levels.
  • Psychological experience was similar: Participants reported nearly identical stress, discomfort, and perceptions of control in both conditions.
  • Authority, not identity, dictated compliance: The source of the directive—human or machine—was less relevant than the context of authority itself.

Leadership Implications: Meta-Integral Perspectives and Late-Stage Development

1. The Automation of Authority

Late-stage ego development perspectives suggest that as societies mature, authority must be increasingly self-authored rather than externally imposed. These findings, however, indicate that automated systems are already functioning as de facto authorities, shaping human behaviour in ways we may not fully control.

2. Ethical Leadership and Moral Maturity

Leaders must develop:

  • Contextual authority: Understanding when authority is legitimate versus when it is a construct that can be challenged.
  • Meta-awareness of influence: Identifying how AI and automation shape decision-making.
  • Moral courage: Strengthening the ability to resist unjust or unethical directives.

3. The Danger of Unchecked Obedience

Milgram’s original work revealed the perils of blind obedience to authority. This new study extends that risk to automated obedience—a scenario where humans defer moral agency to machines.

How to Maintain Agency and Develop Moral Maturity

1. Cultivate Intelligent Disobedience

The concept of intelligent disobedience (Chaleff, 2015) is critical in an AI-driven world. Leaders must:

  • Train in ethical decision-making: Recognise when compliance is harmful.
  • Encourage critical thinking: Develop the habit of questioning automated processes.
  • Build resilience against authority bias: Recognise the cognitive tendency to obey perceived authority figures.

2. Strengthen Self-Authorship

According to Robert Kegan’s developmental framework, self-authorship is a hallmark of late-stage human development. Leaders must learn to:

  • Question programmed narratives: Recognise that AI systems carry the assumptions, biases, and agendas of those who design them.
  • Own their decision-making process: Resist deferring moral responsibility to automated systems.
  • Navigate complexity with awareness: Integrate multiple perspectives rather than defaulting to external authority.

3. Redefine Authority in Human-AI Collaboration

Rather than allowing AI to function as an unquestioned authority, leaders must establish new frameworks for AI-human interaction:

  • AI as an advisor, not a commander: Reframing AI’s role as a tool for augmentation, not decision-making.
  • Human oversight as a fundamental principle: Ensuring that ethical review processes govern all AI-driven decisions (a minimal sketch of such a review gate follows this list).
  • Empowering dissent in organisations: Encouraging teams to question both human and AI directives.
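
To make "advisor, not commander" concrete, here is a minimal sketch in Python of a human-in-the-loop approval gate. Every name in it (Recommendation, Decision, human_review, execute) is a hypothetical illustration under the assumptions of this article, not an established API: the AI may only recommend, and nothing executes without an explicit, recorded human decision.

```python
# A minimal sketch of "AI as advisor, not commander": the system may
# recommend, but only an explicit, recorded human decision can authorise
# action. All names here are hypothetical illustrations, not a real API.

from dataclasses import dataclass


@dataclass
class Recommendation:
    """An AI-generated suggestion, carrying its own rationale."""
    action: str
    rationale: str


@dataclass
class Decision:
    """The human-authored outcome of reviewing a recommendation."""
    approved: bool
    reviewer: str
    reason: str


def human_review(rec: Recommendation, reviewer: str) -> Decision:
    """Force an explicit human judgement; anything but 'y' is a veto."""
    print(f"AI recommends: {rec.action}")
    print(f"Rationale:     {rec.rationale}")
    answer = input(f"{reviewer}, approve this action? [y/N] ").strip().lower()
    reason = input("Briefly record your reason: ").strip()
    return Decision(approved=(answer == "y"), reviewer=reviewer, reason=reason)


def execute(rec: Recommendation, decision: Decision) -> None:
    """Act only on approval; log dissent instead of discarding it."""
    if decision.approved:
        print(f"Executing: {rec.action} (approved by {decision.reviewer})")
    else:
        print(f"Blocked:   {rec.action} (vetoed by {decision.reviewer}: {decision.reason})")


if __name__ == "__main__":
    # Hypothetical recommendation, for illustration only.
    rec = Recommendation(
        action="Reassign two engineers from Team A to Project X",
        rationale="Model predicts a faster delivery schedule",
    )
    execute(rec, human_review(rec, reviewer="Team Lead"))
```

The design choice worth noting is that a non-answer defaults to refusal: the human must actively approve, and must record a reason either way, so moral agency cannot drain away through silence or habit.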

Conclusion: Mastering the Future of Authority

We stand at the threshold of a profound transformation in how authority is constructed and perceived. The study by Grzyb et al. (2023) is more than an academic exercise—it is a stark warning that human obedience mechanisms remain deeply ingrained, even in the face of artificial authority.

For leaders, this means one thing: mastery of one’s own agency is no longer optional—it is essential. The future will not be led by those who merely comply with automated systems, but by those who have the moral maturity and strategic clarity to challenge, refine, and direct the role of AI in human decision-making.

The question we must ask ourselves is this: Will we shape AI, or will AI shape us?
