For too long, I've been a perpetual student, always reaching for the next course, the next certification, the next piece of knowledge. The joy of learning is undeniable, but there's a quiet tension that comes with it: the gap between knowing and doing. Last week, that tension reached a breaking point, and I had a profound realization: it's time to stop learning and start building.
The Catalyst: A Familiar Impulse
It began with Mindvalley's new AI Clone Building program. My finger hovered over the purchase button, pulled by a familiar impulse to acquire more knowledge. I've taken Mindvalley courses before, and they've been genuinely valuable. Their approach to teaching AI through a personal development lens, rather than dry technical tutorials, helped me get started with custom GPTs. I'm not here to criticize their offerings.
But this time, something shifted. As I reviewed the program details, a moment of clarity struck me: I already know how to do most of this. And the parts I don't know? I can figure them out.
The Realization: What I Already Know
Mindvalley did, in fact, reinforce one of the most valuable lessons I still use today: audience reframing. It's not just about building a custom GPT or chatbot, but about teaching that GPT to understand who you're talking to and why it matters. For example, creating workshop materials for skeptical engineering faculty requires a different approach than writing for excited undergraduate researchers. Mindvalley calls this "context injection," and it's brilliant. This skill transfers to everything I do, from faculty development to coaching and curriculum design. It's fundamental to good teaching and coaching, not just an AI skill.
This realization extended to other areas of my AI journey:
I've already built custom GPTs. They are straightforward once you understand the basic structure: upload content, set instructions, test, and refine (see the short sketch below). The technical barrier is not as high as it feels.
I already understand context injection. I apply it in my faculty development work, coaching conversations, and curriculum design without needing another course.
I have access to excellent alternative resources. Domestika and Udemy offer technical AI courses for a fraction of the cost. YouTube provides detailed tutorials. My institution offers valuable tools and training. The knowledge is available; I just needed to recognize I can use it.
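To show how low that technical barrier actually is, here's a minimal sketch of the "set instructions, test, refine" loop expressed in code. It assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in your environment; the instructions and test question are placeholder examples of my own, and the Custom GPT builder walks you through the same loop in a no-code form, with file upload covering the "upload content" step.

```python
# Minimal sketch of the "set instructions, test, refine" loop behind a custom GPT,
# using the OpenAI Python SDK (pip install openai). The instructions and test
# question below are illustrative placeholders, not actual coaching prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

INSTRUCTIONS = (
    "You are a faculty-development assistant. "
    "Answer in a warm, practical tone and keep every suggestion actionable."
)

def ask(question: str) -> str:
    """Send one test question to the assistant and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Test, read the answer, then edit INSTRUCTIONS and run again: that's the refine step.
    print(ask("Suggest two low-stakes ways to gather mid-semester student feedback."))
```

That's the whole loop. Refining the assistant is just editing INSTRUCTIONS and re-running a few test questions until the voice sounds right.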
What truly resonated was this: I wasn't looking for information I didn't have. I was looking for permission to trust what I already know. And that realization? That's the good kind of uncomfortable.
The Shift: From Learning to Building
This shift from perpetual learning to active building is particularly significant now. I recently completed the Gallup Global Strengths Coach course and will soon take the certification exam. While I've informally coached for years, this formal training is opening new doors, allowing me to offer my coaching skills more intentionally. My Connectedness strength has never been more relevant; I genuinely love supporting people in finding and reaching their goals.
Suddenly, AI tools are no longer abstract concepts. They are directly connected to the work I'm actively building. The coaching certification provided the urgency and clarity to move from simply acquiring knowledge to applying it to create tangible value.
What I'm Actually Building Now
My focus has sharpened, and I'm now actively engaged in:
Custom GPTs for coaching contexts. I'm developing assistants trained on my coaching voice to help with client session prep, follow-up materials, and workshop design. These tools will augment, not replace, human interaction, handling administrative tasks so I can focus on actual coaching.
Audience-adaptive content generation. I'm systematically applying context injection principles. A single workshop outline can now be adapted into five different versions, tailored for early-career faculty, mid-career researchers, or department chairs (sketched after this list).
Streamlined faculty support workflows. I'm automating repetitive tasks like scheduling, resource compilation, and initial feedback drafts. This frees up more time for meaningful conversations that genuinely advance people's goals.
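To make the audience-adaptive piece concrete, here's a small, hypothetical sketch of how one outline can be wrapped in different audience contexts before it ever reaches a model. The outline, audience profiles, and wording are illustrative placeholders rather than my actual workshop materials; each resulting prompt would then go to whichever model you prefer.

```python
# Hypothetical sketch: adapting one workshop outline for several audiences
# by injecting audience context into the prompt. All content is placeholder text.

BASE_OUTLINE = """\
Workshop: Designing AI-Resilient Assessments
1. Why traditional take-home assessments are under pressure
2. Principles of AI-resilient assessment design
3. Hands-on redesign of one existing assignment
"""

AUDIENCES = {
    "early-career faculty": "new to teaching, eager but time-poor; wants concrete templates",
    "mid-career researchers": "established courses, skeptical of hype; wants evidence and efficiency",
    "department chairs": "thinking at the program level; wants policy implications and faculty buy-in",
}

def build_prompt(outline: str, audience: str, profile: str) -> str:
    """Wrap the shared outline in audience-specific context (context injection)."""
    return (
        f"Audience: {audience} ({profile}).\n"
        "Outcome: a one-page workshop handout this audience will actually use.\n"
        "Tone: collegial, practical, no hype.\n\n"
        f"Adapt the following outline for this audience:\n{outline}"
    )

if __name__ == "__main__":
    for audience, profile in AUDIENCES.items():
        # Each prompt would be sent to whichever model you prefer.
        print(f"--- {audience} ---\n{build_prompt(BASE_OUTLINE, audience, profile)}\n")
```

The point isn't the code; it's that the audience context travels with every request instead of living only in my head.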
The technical skills are either already present or quickly learnable. What I'm building now is the confidence to trust those skills and act on them.
A New Framework for Learning
Looking back, I've consistently underestimated my ability to figure things out independently. My Top 5 CliftonStrengths (Learner, Ideation, Developer, Achiever, and Connectedness) suggest I'm built for self-directed exploration. Yet, there's been a gap between intellectual understanding and emotional trust.
My Learner strength's growth edge is this: once I achieve competence, I often move to the next learning project instead of fully applying what I've learned. The accumulation of knowledge becomes satisfying in itself. With AI tools, I learned to build custom GPTs and understood the principles, but I didn't fully implement the systems I could use in my coaching and faculty development work. I kept learning more about AI instead of applying what I already knew.
This moment, recognizing I don't need another course but need to build the things I've been planning, is about working with this growth edge. The knowing is done. Now, I'm practicing the doing.
Questions to Ask Yourself
If you're considering more AI training, here are questions worth asking:
What specific skill are you missing? If you can name it precisely, you can likely find a targeted resource for less money than a comprehensive program.
Are you looking for information or confidence? Information is widely available. Confidence comes from actually building things and seeing them work.
What have you already learned that you're not giving yourself credit for? Seriously, make a list. You might be surprised.
What would you build if you already felt competent? Start building that. Competence follows action, not the other way around.
Resources If You're Building Your Own AI Systems
Since I enjoy helping people learn, here are resources that have genuinely helped me:
For Learning Custom GPT Creation:
OpenAI's official documentation (free, comprehensive)
Domestika's "Build Custom AI Assistants" courses ($10-40 - aimed at architects, but has general transferrable skills)
YouTube tutorials by AI Jason and Matt Wolfe (free, current)
For Content Transformation:
NotebookLM (free from Google, remarkably effective)
Claude Projects (what I used to help draft this post)
ChatGPT Projects (with Plus subscription, $20/month)
Key Principle from Mindvalley Worth Keeping:
Context injection. Always tell your AI: Who is the audience? What's the background? What outcome do you want? What tone is appropriate? This framework transforms generic AI outputs into genuinely useful content.
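If it helps to keep that framework somewhere more durable than memory, here's one small way to encode it as a reusable template; the field names and example values are my own illustration, not Mindvalley's material.

```python
# A tiny, hypothetical template for the four context-injection questions.
CONTEXT_TEMPLATE = """\
Audience: {audience}
Background: {background}
Desired outcome: {outcome}
Tone: {tone}

Task: {task}
"""

prompt = CONTEXT_TEMPLATE.format(
    audience="department chairs at a research university",
    background="they have read the campus AI guidance but haven't used the tools",
    outcome="a one-page briefing they can share at a faculty meeting",
    tone="measured and evidence-based",
    task="Summarize three policy questions raised by generative AI in coursework.",
)
print(prompt)  # prepend this context block to whatever you ask the model to do
```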
Conclusion: The Real Lesson
It's Saturday evening. I have custom GPTs and chatbots I've built before. I have new coaching skills I'm developing. I have a clear sense of what I want to create next.
What shifted this week wasn't learning new information. It was recognizing that I need to stop learning and start building. This is a hard shift for someone whose top strength is Learner. The pull to take "just one more course" is real. The satisfaction of acquiring new knowledge is immediate and tangible.
But building something meaningful from what I've learned requires a different kind of commitment. It means sitting with the discomfort of imperfect execution. It means choosing application over acquisition.
Mindvalley's programs provided foundations I'm genuinely grateful for. The audience reframing skill alone was worth the investment. However, the next phase of my AI learning won't come from another structured course. It will come from actually implementing the systems I've been planning, trusting my Learner, Ideation, and Developer strengths enough to create tools that serve my coaching and faculty development work.
This isn't a rejection of structured learning. It's recognizing when I've learned enough structure to create my own. More importantly, it's recognizing when continued learning becomes a way to avoid the harder work of actually doing. And honestly? That realization feels like growth.
Sometimes the most valuable learning doesn't come from the course you take. It comes from the moment you realize you don't need it anymore. I'm not saying I'll never take another Mindvalley course. I'm saying I've reached a point where I can choose to learn with them strategically, rather than reaching for them automatically because I don't trust my own competence. That's the difference. Now I just need to actually build the things I've been planning. The knowing is done. It's time for the doing.
If you want to talk through your own AI learning journey, coaching development, or just need someone to remind you that you probably know more than you think you do, I'm always up for that conversation. You can find me on LinkedIn or through my website.
A Note on This Post
Yes, I used Claude to help write this. Not because I couldn't write it myself (I’ve written blog posts for years), but because one of the things I've learned about AI is that it's excellent for helping you articulate thoughts you're still forming. The ideas are mine. The realization is mine. The voice is mine. Claude just helped me organize the rambling version into something readable. That's what good AI use looks like: augmenting your work, not replacing it.
The Ethical Tension I'm Still Sitting With
In all honesty, I haven't resolved my feelings about AI's broader impact. I work at Indiana University's Luddy School of Informatics, Computing, and Engineering. AI is central to several things we do. I help faculty integrate it into curriculum, design AI-resilient assessments, think through pedagogical implications.
But I'm also deeply aware of environmental costs, data privacy concerns, and the ways AI systems can perpetuate harm against marginalized communities. Faculty in my own school (people who build these systems) have a wide range of perspectives on appropriate AI use.
I don't think that tension is something to resolve by taking another course. I think it's the ongoing work of using powerful tools thoughtfully. I'm going to keep using AI, and I'm going to keep questioning how I use it. Both things can be true.