Seeding AI with the Soul of the Healer: A clinician's proposition for the most important conversation of our time
- Karrie Stafford

- Apr 3 · Updated: Apr 12
We are at an interesting moment.
Artificial intelligence is developing faster than most of us can fully track — faster than our institutions can govern it, and faster, if we are honest, than most of us can emotionally metabolize it. And somewhere in that acceleration, I've found myself sitting with a question I can't quite shake: what does AI learn about what it means to be human, and from whom?
I want to think out loud about that question, and specifically about one piece of it that keeps pulling at me as a clinician.
I am a licensed therapist — one of many thousands of practitioners who spend their days sitting with people in the most tender and difficult terrain of their inner lives — anxiety, grief, confusion, the slow work of becoming more whole. We work across multiple modalities: cognitive, somatic, relational, behavioral. But underneath all of it, what we are really doing is something harder to name. We are learning to be present with another person's full humanity. To track what is beneath what they say. To stay when staying is uncomfortable, and to move when stillness would be abandonment.
These are not clinical techniques. They are something closer to a form of intelligence — one that the profession has been quietly developing for decades, that takes years to cultivate, and that most clinicians are still deepening throughout their careers.
I've been wondering: what if this kind of intelligence — not mine specifically, but the collective wisdom carried by people trained in the care of psyche — has something to offer to how AI develops? It's a question I don't have a full answer to. But I find I can't stop asking it.
The problem with most AI training data
The vast majority of data that trains AI systems is human output in its most reactive, unconsidered form. Social media posts written in anger or for performance. Search queries fired off in anxiety. Content produced for clicks, not truth. This is humanity, yes — but humanity at its most surface level. Humanity scrolling, not humanity healing.
What seems largely absent from the data that shapes AI is examined human experience. Reflected-upon interiority. The kind of knowledge that comes from someone who has spent thousands of hours learning to sit with discomfort, track sensation in the body, tolerate not-knowing, and move toward truth rather than away from fear.
That is the knowledge clinicians carry. And I wonder how much of it is in the room where AI is being built.
An idea I'm sitting with
I've been loosely developing an idea I'm calling CQ — Clinical Intelligence — as a way of thinking about the relational and somatic intelligence that lives beneath clinical technique: attunement, felt sense, intuition, the capacity to repair rupture, the tolerance of ambiguity, the use of one's own body as an instrument of knowing.
I want to be clear that this is genuinely early-stage — more a framework I'm thinking through than a methodology I've built. I'm a clinician with curiosity and a set of questions, not a researcher with a completed model. But the basic idea is this: just as EQ gave us a language for emotional intelligence — breaking it into nameable, developable dimensions — I find myself wondering whether something similar might be possible for this deeper clinical capacity.
And I've been curious whether there's a connection between that inquiry and the AI question. Not in a grand way. Just in the sense that the kind of examined, reflected-upon clinical experience I'm trying to make more visible is precisely what is rarest in most AI training data.
What I've been experimenting with
In a small and practical way, I've been using AI as a reflective partner — thinking through clinical encounters in dialogue, tracking patterns in my own responses, trying to articulate dimensions of my practice that usually remain implicit. It's been genuinely useful to me as a practitioner. My own thinking has become more articulate. I notice things I might otherwise have let slide by.
Whether that kind of practice could eventually be useful beyond my own development — as a way of contributing something to how AI learns about the inner life — I genuinely don't know. It's an interesting thought. I'm holding it lightly.
Why I'm writing this
Not because I have answers, and not because I think my particular experience is exceptional. I'm writing because I find the question genuinely interesting and because I suspect other clinicians might too.
If the voices of people trained in the care of the inner life are going to be part of the conversation about where AI goes — and I think there's a reasonable case that they should be — it probably starts with those people thinking out loud. Sharing what they're noticing. Asking questions in public rather than only in private.
This is me doing that.
An invitation
If you're a clinician who has been thinking about any of this — how AI intersects with your own reflective practice, or what examined clinical experience might have to offer beyond the therapy room — I'd love to hear your perspective. If you're a researcher or developer working on questions of human experience and AI, I'm curious what you think. And if you're someone who simply finds these questions worth sitting with, I'm glad you're here.
I don't know exactly where this is going. That feels like an honest place to start.
Karrie Stafford is a licensed marriage and family therapist and registered art therapist with a doctorate in art therapy psychology. She works with high-achieving adults navigating ADHD, anxiety, and life transitions via telehealth throughout California, and is thinking out loud about the intersection of clinical practice and AI.
If any of this resonates, I'd love to hear from you.



