ChatGPT Does It! Sooo… What’s My Role?

ChatGPT can write, summarize, and even sound convincing — but it can’t care. Here’s what that means for my work as a dissertation coach.

Thoughts from the Edge of Automation

In moments of quiet reflection — when I think about my decision to become a dissertation coach and the *why* behind starting Design2Defense — my mind often wanders into the world of Artificial Intelligence.

In that world, I find myself asking: “Doesn’t ChatGPT do what you do?”

And then, the business part of my brain chimes in with another question: 

“If ChatGPT can summarize, rephrase, and even suggest arguments — what am I, as a dissertation coach, really offering that’s different?”

This essay is a reflection on that question — on where my work begins, what it means to teach thinking in an automated world, and how human connection remains central to scholarship.

From Output to Insight

AI can assist in writing, but it only responds to the questions asked of it — the prompts matter.

These prompts need to be constructed; they emerge from thinking, from curiosity, from the struggle to make sense of ideas, evidence, and self.

That’s where my work begins.

Because what I help students develop isn’t just a better draft. I help them think critically, question assumptions, and articulate ideas with clarity.
That work begins with learning to ask better questions, connecting theory to lived experience, and finding their own scholarly voice amid the noise of tools and templates.

AI can process language, but it can’t foster insight.
It can generate ideas, but it can’t nurture understanding.
It can summarize thought, but it can’t create meaning.

In other words, my work begins where automation ends — in the space between confusion and clarity, between words and understanding, between writing and becoming a writer.

The Hidden Value of Guiding Understanding

When I work with doctoral scholars, I’m not just helping them finish a chapter. I’m helping them hold together the emotional, intellectual, and ethical threads that run through their work.

I’m helping them name what’s really going on when they say they’re “stuck.”

I’m helping them translate passion into researchable questions.

I’m helping them remember that they’re not just producing a document — they’re becoming the kind of scholar who can defend it.

This kind of work isn’t clinical or formulaic, and its value isn’t captured in a polished chapter or a word count. But it’s visible in the moment a student says, “That’s what I was trying to say!” — when you can feel the shift from confusion to clarity and hear their voice begin to emerge.

In every interaction, I strive to create these human moments — the quiet click of recognition when a scholar finally names what they’ve been circling all along, or the pause before they realize they’ve begun to believe in their own thinking.

Holding Space in an Automated World

The more I learn about AI, the more I begin to understand what’s at stake.

The question isn’t whether AI can do the work. It’s whether it can care:

about the student who’s uncertain,

the writer who’s searching for their voice,

the scholar who’s trying to make meaning out of complexity.

ChatGPT doesn’t notice exhaustion in a writer’s voice. It doesn’t understand that “I’m behind” could mean “I feel like I don’t belong.” It doesn’t hold silence long enough for a student to reach their own insight.

These are intentional moments, not automated ones — spaces where reflection happens, where silence has meaning, and where the quiet ethics of care guide the slow work of understanding.

So when I ask myself, “How am I helping?” — the answer is simple:

I’m helping by being human.

By offering time, attention, and the kind of care that can’t be scripted, scaled, or simulated.

Not What I Compete With, But What I Complement

The reality is, my students will keep using AI — and they should.
The question is not how to discourage them, but how to help them use it well — with curiosity, ethics, and self-awareness.

I’m realizing that my role is less about gatekeeping and more about sense-making.

Therefore, I help scholars interpret what the machine produces and ask:

  • Does this align with my argument?

  • Whose perspective is missing?

  • What assumptions is this text making on my behalf?

That’s the work of critical literacy: helping students think critically about what AI produces is, in itself, an act of care. It reminds them that scholarship is built on well-considered ideas shaped by intention, not by speed or automation.

A New Kind of Help

So yes, ChatGPT “does it.”
It can summarize, draft, rephrase, and sometimes even inspire.

But where automation ends, my work begins — in the slow, deliberate practice of meaning-making, reflection, and care.

When I help a scholar discover why their research matters, or when I hold space for their uncertainty, I’m not competing with AI.

I’m teaching them to reclaim the one resource technology can’t supply:
the recognition that what they’re creating — knowledge, insight, understanding — is a way of humanizing the world.

At its heart, this reflection isn’t just about AI — it’s about care.
It’s about what it means to guide, listen, and think alongside another person as they make meaning from complexity.

That’s the space I hold through Design2Defense.

If this resonates with how you want to think, write, or mentor, I’d love to continue the conversation.

Get in Touch!