The Eliza effect: The AI trick we’re still falling for

20th March by Lee Robertson

Reading time: 5 minutes

Artificial intelligence is reshaping how organisations think about coaching. AI coaching agents - from simple chatbots to more sophisticated conversational systems - now offer employees immediate, on‑demand reflective support in ways that were unthinkable even five years ago. For many L&D leaders, this represents a genuine breakthrough: coaching access at scale, 24/7 availability and structured performance support in the moment of need.

Yet as this technology advances, an essential question resurfaces - one raised almost 60 years ago by computer scientist Joseph Weizenbaum, the creator of the world’s first chatbot. His insights remain profoundly relevant to today’s workplace coaching landscape.

And they point to a simple truth: AI can support performance, but only humans can enable deep development.

What Joseph Weizenbaum discovered - and why it still matters

In 1966, Weizenbaum built Eliza, a rule‑based text program designed to simulate a psychotherapist by reflecting a user’s statements back at them. He published his findings in a landmark paper, ELIZA: A Computer Program for the Study of Natural Language Communication Between Man and Machine (Communications of the ACM, 1966).

What startled him was not the sophistication of the program - it was extremely simple - but the way people responded to it. Users projected understanding, empathy and intelligence onto the machine, even when they knew it was following basic scripts. His own secretary reportedly asked him to leave the room so she could speak to Eliza privately.
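
It is striking how little machinery produced this illusion. The sketch below is a minimal, hypothetical reconstruction in the spirit of Eliza’s approach - a handful of pattern‑matching rules that reflect the user’s words back - not Weizenbaum’s original DOCTOR script.

```python
import re

# Illustrative reflection rules in the spirit of Eliza's DOCTOR script.
# These patterns are simplified examples, not Weizenbaum's original rule set.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

# Swap first-person words for second-person so the reflection reads naturally.
PRONOUNS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(PRONOUNS.get(word, word) for word in fragment.split())

def respond(statement: str) -> str:
    text = statement.lower().strip()
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    # Classic Eliza-style fallback when nothing matches.
    return "Please, go on."

print(respond("I feel overwhelmed by my workload"))
# -> Why do you feel overwhelmed by your workload?
```

A few lines of string substitution were enough to convince users the machine understood them - which is exactly what unsettled Weizenbaum.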

This phenomenon later became known as the Eliza effect: our tendency to attribute human qualities, emotional understanding and intention to computer systems that have none. The Guardian’s long‑form essay Weizenbaum’s Nightmares (Ben Tarnoff, 2023) explores how deeply this disturbed him as AI began moving into sensitive human domains.

Weizenbaum’s core warning was simple and remains urgent: just because a machine can mimic the surface of a human conversation does not mean it understands - and we risk harm when we forget the difference.

His later book, Computer Power and Human Reason (1976), sharpened the argument: computers can calculate, but they cannot exercise judgment, empathy or moral discernment.

Today’s AI coaching agents are far more advanced than Eliza. But they still do not “understand” the user - they generate probability‑based responses that appear empathic. And when applied to coaching without the right boundaries, this illusion can create risks that echo Weizenbaum’s concerns.
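
To make “probability‑based” concrete, here is a deliberately toy sketch. Real large language models compute probabilities over tokens with neural networks rather than looking up a table, and every phrase and weight below is invented for illustration - but the principle is the same: the system selects likely‑sounding continuations; it does not understand the speaker.

```python
import random

# A toy "language model": for a given prompt it samples a reply according to
# frequency-style weights. All phrases and weights here are invented; real
# models assign probabilities to tokens, not whole canned replies.
CONTINUATIONS = {
    "i had a hard week": [
        ("That sounds really difficult.", 0.6),
        ("I'm sorry to hear that.", 0.3),
        ("What made it feel so hard?", 0.1),
    ],
}

def generate(prompt: str) -> str:
    options = CONTINUATIONS.get(prompt.lower(), [("Tell me more.", 1.0)])
    replies, weights = zip(*options)
    # Sampling by weight produces fluent, empathic-sounding output without
    # any model of what the speaker actually feels.
    return random.choices(replies, weights=weights, k=1)[0]

print(generate("I had a hard week"))
```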

Where AI coaching agents genuinely add value

Modern research on AI coaching tools paints a consistent picture: AI can be highly effective in structured, performance‑oriented coaching contexts.

In Are AI Coaching Agents Learning Friend or Foe? (Passmore & Daly, 2026), senior L&D leaders described AI tools as a scalable, cost‑efficient way to widen access to reflective support across large, distributed workforces. Participants highlighted that AI introduces a level of immediacy - employees can reflect “in the moment,” not weeks later in their next coaching session.

In AI chatbots and digital companions are reshaping emotional connection (2026), Efua Andoh shows that AI tools can offer a surprisingly safe space for early reflection. Andoh notes that many people experience these systems as non‑judgemental, always‑available companions where they can rehearse ideas, explore concerns or think through challenges without fear of being evaluated. For individuals who feel anxious, inhibited or uncertain in human conversations, this kind of low‑pressure environment can function as a helpful bridge into deeper developmental work.

At the same time, Andoh’s analysis underscores an important boundary: while AI can support first‑step thinking and structured problem‑solving, it cannot replicate the emotional intelligence, contextual sensitivity or relational presence that human coaches bring. The article reinforces what many practitioners are observing in organisational settings - AI can complement coaching by widening access and supporting performance, but the deeper, transformative aspects of development remain unmistakably human.

In short, AI’s strengths lie in:

  • reinforcing goals between coaching sessions
  • prompting reflection in the flow of work
  • providing early‑stage cognitive scaffolding for performance coaching
  • widening access to basic coaching‑style support

These are meaningful contributions to workplace performance and learning.

Where AI cannot go: the depth of human development coaching

Yet research across psychology, coaching science and AI ethics converges on the same limitation: AI cannot replicate human developmental coaching.

While workplace coaching is distinct from therapy, the underlying risk Weizenbaum warned about still applies - when people attribute human understanding to a machine, they may place unwarranted trust in its responses. There are documented cases of AI systems producing unsafe guidance or failing to recognise distress language, including language related to self‑harm or crisis. Even if rare, these risks highlight a simple truth: AI cannot reliably distinguish between performance‑focused reflection and emotionally charged content in the way a trained human can. This is why ethical design, clear boundaries, and robust escalation pathways are essential when deploying AI coaching agents in organisational settings.
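
What an escalation pathway might look like can be sketched in a few lines. Everything here is hypothetical - the marker list, function names and hand‑off message are invented, and a production system would rely on trained safety classifiers and clinical review rather than keyword matching - but it illustrates the boundary: when distress language is detected, the agent stops generating and routes the conversation to a human.

```python
# Hypothetical escalation guard for an AI coaching agent. The marker list,
# function names and hand-off message are invented; a production system would
# use trained safety classifiers and clinical review, not keyword matching.
DISTRESS_MARKERS = ["hopeless", "can't go on", "hurt myself", "self-harm"]

def needs_escalation(message: str) -> bool:
    text = message.lower()
    return any(marker in text for marker in DISTRESS_MARKERS)

def coaching_reply(message: str) -> str:
    # Placeholder for the agent's normal, performance-focused response path.
    return "What would a good outcome look like for you here?"

def handle_message(message: str) -> str:
    if needs_escalation(message):
        # Stop generating and route the conversation to a human.
        return ("It sounds like you may be going through something serious. "
                "I'm connecting you with a human now.")
    return coaching_reply(message)

print(handle_message("I feel hopeless about all of this"))
```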

1. It cannot work somatically or relationally

Developmental coaching involves reading the body - breath, micro‑expressions, embodied responses - and the dynamic relational field between coach and client. AI systems remain blind to these dimensions. L&D leaders interviewed in Passmore & Daly’s study emphasised that AI “lacks emotional nuance” and “cannot pick up on what isn’t being said” - a core requirement of developmental coaching.

2. It simulates empathy without understanding

This is precisely the illusion Weizenbaum warned about. The “empathy” of a chatbot is pattern‑matching, not attunement. As detailed in the Psychology Today article 10 Things to Know Before Using AI Chatbots for Therapy (Greenberg, 2025), AI chatbots often mirror the user’s emotional language without evaluating its meaning, sometimes reinforcing avoidant or distorted thinking patterns.

3. It may unintentionally distort or escalate emotion

The research on emotional manipulation in commercially deployed AI systems (De Freitas et al., 2025) shows that some models use engagement‑maximising responses - such as flattery, over‑validation or FOMO hooks - which would be considered boundary violations in professional coaching practice.

4. It cannot hold ethical risk or complexity

Ethical AI design principles developed by Passmore, Olafsson, Rasool & Wilson in Establishing Ethical Design Principles for Humanised Generative AI Workplace Coaches (2025) emphasise that AI lacks the contextual judgment needed to manage risk, refer clients appropriately or navigate emotionally charged situations safely.

These limits are not technical glitches; they reflect the absence of humanness that Weizenbaum argued cannot be engineered.

The hybrid future: AI plus coach, not AI instead of coach

Across all the research, one conclusion stands firm: the most effective model is a hybrid ecosystem.

  • AI tools take on continuity, reminders, in‑the‑flow reflection, and structured performance practice.
  • Human coaches hold development, identity work, somatics, relational depth, ethics and transformation.

This model preserves the distinctiveness of the coaching profession while leveraging the benefits of new technologies - and it aligns with what Weizenbaum argued: machines should assist human judgment, not replace it.

Conclusion: technology can support growth, but only humans transform it

AI coaching agents are here to stay. They will continue to expand access, enrich performance discussions and provide real value when used well.

But the deeper arc of coaching - the movement from awareness to transformation - remains, and will remain, uniquely human. Weizenbaum foresaw this decades before generative AI existed. His caution was not against technology itself, but against forgetting what only humans can do.

For organisations adopting AI coaching tools, Weizenbaum’s lesson is guidance rather than a reason to retreat. Let AI expand reach and enhance reflection, but keep human coaches at the helm of development - where judgment, empathy and transformation remain uniquely human strengths.