The Guide on the Side: Coaching STEM Students in Problem-Solving

From Manager to Mentor: A Practical Strategy for AI Development

As faculty, we know that working effectively with our Assistant Instructors (AIs) is key to a successful course. In last week’s post, “Best Practices for Working with Assistant Instructors,” I highlighted the importance of mentorship and creating professional development opportunities. But what does that mentorship look like in practice?

One of the most impactful ways to mentor our AIs is to equip them with high-leverage teaching strategies. Instead of just managing their grading, we can teach them how to teach. A powerful approach for this is the “Guide on the Side” philosophy, which shifts the AI’s role from simple answer key to learning coach.

It’s a familiar scene in any STEM lab or office hour: a student, staring at a screen, is utterly stuck. For new AIs, the temptation is strong to take the shortcut: grab the keyboard, write the line of code, or simply provide the answer. But while this solves the immediate problem, it bypasses a crucial learning opportunity.

This is where the Guide on the Side approach comes in. It’s a teaching philosophy that equips new AIs with practical strategies to coach students through the problem-solving process rather than solving problems for them. For faculty in STEM, empowering your AIs with these skills can transform your students’ learning experience. 

Why This Shift in Pedagogy Matters

Across STEM disciplines, students frequently encounter “sticking points”: moments of cognitive friction where the path forward isn’t obvious. If an instructor or AI simply hands over the solution, the student leaves with a single answer but no transferable skill. They learn to be dependent on an external expert.

By contrast, an instructor who guides the process models resilience, inquiry, and expert reasoning. The student leaves not only with a solution but with strategies they can apply to the next problem, and the one after that. They learn how to think.

Putting Theory into Practice: Activities for Your AIs

Faculty can use these activities in their own training sessions to help AIs develop a coaching mindset:

  • “Sticking Point” Brainstorm: In a think-pair-share format, AIs identify the most common places their students struggle. This builds a shared awareness of teaching challenges and normalizes the experience.

  • Scenario Analysis: AIs compare two contrasting dialogues: one where the AI gives the answer directly, and another where the AI uses Socratic questioning to lead the student to their own solution.

  • Questioning Roleplay: In pairs, AIs practice how to respond with guiding questions when students make common statements like, “I’m totally lost,” or “Can you just tell me if this is right?”

A Simple Framework for Modeling Expertise

A core strategy of this approach is teaching AIs to make their thinking visible. Experienced problem-solvers naturally follow steps that are often invisible to novices. Encourage your AIs to narrate their own problem-solving process explicitly using a simple four-step framework:

  1. Understand: Restate the problem in your own words. What are the inputs, the desired outputs, and the constraints?

  2. Plan: Outline possible approaches. What tools, algorithms, or libraries might be useful? What are the potential pitfalls of each approach?

  3. Do: Execute the plan step by step, narrating the reasoning behind each action. (“First, I’m going to create a variable to hold the total because I know I’ll need to update it in a loop.”)

  4. Reflect: Test the solution. Does it work for edge cases? Could it be more efficient? Are there alternative ways to solve it? 

This explicit modeling teaches students how to think, not just what to do.
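
In a computing course, this narration translates naturally into a live-coded example where each step of the framework appears as a comment. Here is a minimal sketch in Python, using a hypothetical “sum the even numbers” task (the task and function name are illustrative, not from any particular course):

    # Understand: given a list of integers, return the sum of the even ones.
    # Inputs: a list of ints. Output: one int. Constraint: an empty list sums to 0.

    # Plan: iterate once, test each number for evenness, accumulate a running total.
    # (Alternative: sum() over a generator expression; same result, less explicit.)

    def sum_of_evens(numbers: list[int]) -> int:
        # Do: "First, I'm going to create a variable to hold the total
        # because I know I'll need to update it in a loop."
        total = 0
        for n in numbers:
            # "Next, I check evenness with the modulo operator before adding."
            if n % 2 == 0:
                total += n
        return total

    # Reflect: test edge cases. An empty list returns 0; negatives work because
    # -4 % 2 == 0 in Python. One pass over the data is already O(n).
    assert sum_of_evens([]) == 0
    assert sum_of_evens([1, 2, 3, 4]) == 6
    assert sum_of_evens([-4, 5]) == -4

An AI can live-code a sketch like this while reading the comments aloud, turning invisible expert steps into something students can watch and imitate.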

The Power of a Good Question: Building a Question Bank

Guiding questions are the primary tool of a “Guide on the Side.” They skillfully shift the cognitive work back to the student. Encourage your AIs to build a bank of go-to questions, such as:

  • To start a conversation: “What have you tried so far?” or “Can you walk me through your current approach?”

  • To prompt a next step: “What does that error message suggest?” or “What’s the very next small step you could take?”

  • To encourage deeper thinking: “Why did you choose that particular method?” or “What are the trade-offs of doing it that way?”

  • To promote reflection and independence: “How could you check your answer?” or “What would you do if you encountered a similar problem next week?” 
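
For AIs in computing courses, the bank doesn’t have to live only in their heads; it can be a small shared artifact the teaching team grows over the semester. A minimal sketch of one possible structure, keyed by coaching purpose (the naming scheme is just one option):

    import random

    # A shared question bank, keyed by coaching purpose. The teaching team can
    # append new entries as fresh situations come up during the semester.
    QUESTION_BANK: dict[str, list[str]] = {
        "start": [
            "What have you tried so far?",
            "Can you walk me through your current approach?",
        ],
        "next_step": [
            "What does that error message suggest?",
            "What's the very next small step you could take?",
        ],
        "deeper": [
            "Why did you choose that particular method?",
            "What are the trade-offs of doing it that way?",
        ],
        "reflect": [
            "How could you check your answer?",
            "What would you do with a similar problem next week?",
        ],
    }

    def pick_question(purpose: str) -> str:
        """Draw one go-to question for the given coaching purpose."""
        return random.choice(QUESTION_BANK[purpose])

    print(pick_question("start"))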

Navigating Common Classroom Challenges

This approach provides concrete strategies for these common moments:

  • When a student is silent: Allow for sufficient wait time. If the silence persists, break the problem down and ask a simpler, first-step question.

  • When a student is frustrated: Acknowledge their feelings (“I can see this is frustrating; these problems are tough.”) and normalize the struggle before gently re-engaging with the task.

  • When a student just wants confirmation: Instead of giving a simple “yes” or “no,” redirect with a metacognitive prompt like, “What makes you confident in that answer?” or “How could you design a test to verify that?”

Resources for a Deeper Dive 

For faculty and AIs who want to explore this pedagogical approach further, these resources are short, impactful, and highly relevant:

  • Book: Small Teaching: Everyday Lessons from the Science of Learning by James M. Lang

  • Article: Asking Questions to Improve Learning – Washington University in St. Louis Center for Teaching and Learning

  • Video: Eric Mazur’s video on Peer Instruction is a great resource for understanding how to shift from traditional lecturing to more active, student-centered learning. He demonstrates the curse of knowledge and shows how students learning from each other can be more effective than an expert trying to explain something they mastered long ago.
    His approach, in which students first think individually, then discuss with peers, and finally re-evaluate their understanding, aligns directly with the principle of guiding students through problem-solving rather than showing them the answer. It emphasizes active processing and peer teaching, both crucial for deeper learning and for developing independent problem-solvers.

The Takeaway for Faculty

The “Guide on the Side” approach aligns perfectly with evidence-based teaching practices. By encouraging your AIs to slow down, model their thinking, and use questions effectively, you help them grow from answer keys into true teaching coaches. The result is a more engaged and resilient cohort of students who leave your courses not only with solutions, but with the confidence and strategies to tackle the next challenge independently.

Best Practices for Working with Assistant Instructors

Assistant instructors (AIs) can play an essential role in your course: they support student learning, enhance faculty efficiency, and gain valuable professional development experience along the way. When managed thoughtfully, the faculty–assistant instructor partnership creates a stronger, more engaging learning environment for students and a meaningful growth opportunity for graduate students.

The following recommendations are drawn from the resources linked throughout this post.

Core Principles of a Strong Partnership

The faculty–assistant instructor relationship is most successful when approached as a collaborative teaching partnership. Here are some guiding principles:

  • Clear Expectations and Roles
    Both faculty and assistant instructors need a shared understanding of their responsibilities. Clarity reduces confusion and sets everyone up for success.

  • Faculty as the Ultimate Authority
    While assistant instructors play an active role in teaching and assessment, faculty ultimately carry responsibility for course administration, including grading and alignment with institutional policies.

  • Professional Development Opportunity
    Serving as an assistant instructor should be a learning experience. Faculty should connect assigned tasks to professional growth, teaching skills, and career preparation whenever possible.

  • Consistent Communication
    Regular check-ins, open conversations, and transparency help prevent misunderstandings and make problem-solving much easier when issues arise.

Setting Up for Success

Before the Semester Begins

Early connection is key. Meet with your assistant instructor before classes start to set expectations, share goals, and establish communication methods. Some items to cover:

  • Course goals and learning outcomes

  • Roles, tasks, and boundaries

  • Meeting schedules and communication channels

  • Workload expectations (respecting weekly hour limits)

  • Familiarity with technology tools

  • Academic integrity policies

  • An introduction plan so students understand the assistant instructor’s role. 

Please see https://blogs.iu.edu/luddyteach/2023/08/16/quick-tip-working-with-ais/ for a checklist developed by Dr. Angela Jenks and Katie Cox in the Department of Anthropology at the University of California, Irvine.

Having these conversations upfront helps everyone enter the semester with confidence.

During the Semester

  • Regular Meetings
    Weekly or biweekly meetings provide a chance to prepare for upcoming lessons, review grading approaches, and troubleshoot challenges.

  • Grading Consistency
    Provide rubrics and sample feedback. Calibration or grade-norming activities, where everyone grades the same sample, are especially effective for ensuring fairness.

  • Office Hours
    Encourage assistant instructors to hold consistent and accessible office hours at different times of day to accommodate students.

  • Mid-Semester Check-In
    Use this time to gather feedback, review workloads, and adjust if necessary.

End of the Semester

Wrap up with a reflective meeting. Discuss what worked well, identify challenges, and preserve useful materials for future iterations of the course. These conversations also strengthen the mentoring relationship.

Supporting Assistant Instructor Development

Faculty aren’t just supervisors; they’re mentors. Assistant instructors benefit when faculty take the time to:

  • Coach them on teaching strategies and classroom management

  • Encourage them to set professional development goals and build a teaching portfolio if they are interested in pursuing a faculty position

  • Provide opportunities for peer observation and self-reflection

  • Direct them to school and university-wide teaching resources

By positioning the role as both service and growth opportunity, faculty help assistant instructors build skills that last well beyond a single course.

Teaching Tip: What Are You Really Trying to Assess?

As you design quizzes, projects, and exams, it’s worth pausing to ask: What am I really trying to assess? Too often, assessments measure peripheral skills like memorization, rather than the intended learning outcomes. For example, a timed coding exam may end up evaluating typing speed and syntax recall more than algorithmic thinking or problem-solving strategy. Similarly, a multiple-choice exam on HCI principles may privilege memorization over the ability to apply design heuristics to new contexts.

Evidence-based practices to align assessments with your goals:

  1. Backwards Design (Wiggins & McTighe, 2005)

    • Start from the learning outcomes you want students to demonstrate, design assessments as the evidence of those outcomes, and only then plan instruction.

  2. Constructive Alignment (Biggs, 1996)

    • Ensure that learning activities, assessments, and outcomes are in sync. For instance, if collaboration is a stated goal, include a group design critique, not just individual tests.

    • Example: Reflections on applying constructive alignment with formative feedback for teaching introductory programming and software architecture (2016): https://dl-acm-org.proxyiub.uits.iu.edu/doi/pdf/10.1145/2889160.2889185

  3. Authentic Assessment (Herrington & Herrington, 2007)

    • Frame tasks around realistic professional contexts, so students practice the skill the way it is actually used in the discipline.

  4. Reduce Construct-Irrelevant Barriers

    • If the skill being assessed is debugging, for example, provide starter code so students aren’t penalized for setup. If the goal is conceptual understanding, consider allowing open-book resources so recall doesn’t overshadow reasoning.
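
For instance, a debugging-focused assessment might hand students a small, runnable harness with one seeded bug, so none of the grade rides on environment setup. A minimal sketch (the task and bug are hypothetical; the give-away comments are for faculty and would be stripped from the student version):

    # Starter code for a debugging assessment: harness, test data, and expected
    # output are all provided, so students focus purely on finding the bug.
    def moving_average(values: list[float], window: int) -> list[float]:
        """Return the mean of each consecutive `window`-sized slice."""
        averages = []
        # Seeded bug: the range stops one slice short of the end.
        for i in range(len(values) - window):  # fix: len(values) - window + 1
            averages.append(sum(values[i:i + window]) / window)
        return averages

    # Provided test: runnable immediately, no setup required.
    print(moving_average([1.0, 2.0, 3.0, 4.0], 2))
    # prints [1.5, 2.5] -- the assignment states the expected output is
    # [1.5, 2.5, 3.5], so the discrepancy points students at the loop bounds.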

Students also struggle not because the concepts are beyond their ability, but because the expectations of the assessment are unclear.

For example:

  • A programming assignment asks students to “optimize” code, but it’s unclear whether grading is based on correctness, runtime efficiency, readability, or documentation.

  • A human–computer interaction (HCI) project requires a prototype, but is the emphasis on creativity, usability testing, or fidelity of the mockup?

  • An informatics paper asks for “analysis,” but it’s unclear whether success depends on critical thinking, proper use of data, or following citation conventions.

When assessments lack clarity, students must guess what matters. This shifts the focus from demonstrating learning to playing a hidden “what does the professor want?” game.

Why It Matters (Evidence-Based):

  • Cognitive Load: Ambiguous assessments create unnecessary cognitive load—students waste energy interpreting instructions instead of applying knowledge (Sweller, 2011).

  • Equity Impact: Lack of clarity disproportionately disadvantages first-generation and other structurally disadvantaged students, who may not have tacit knowledge about faculty expectations (Winkelmes et al., 2016).

  • Misalignment: As mentioned above, vague assessments often misalign with course outcomes, undermining constructive alignment (Biggs, 1996).

What Faculty Can Do:

  1. State the Core Construct: Ask yourself: Am I assessing correctness, creativity, reasoning, or communication? Then state it explicitly.

  2. Communicate Priorities: If multiple criteria matter, indicate their relative weight (e.g., correctness 50%, efficiency 30%, documentation 20%); a small worked example follows this list.

  3. Provide a Sample Response: A brief example—annotated to show what “counts”—helps students see what you value.

  4. Check for Hidden Criteria: If you penalize for style, clarity, or teamwork, ensure that’s written down. Otherwise, students perceive grading as arbitrary.
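
To make the weighting unambiguous, some instructors publish the grading arithmetic itself. A minimal sketch of the 50/30/20 split from item 2 (criterion names and scores are illustrative):

    # Published rubric weights: students see exactly how the grade is composed.
    WEIGHTS = {"correctness": 0.50, "efficiency": 0.30, "documentation": 0.20}

    def assignment_grade(scores: dict[str, float]) -> float:
        """Combine per-criterion scores (each 0-100) using the published weights."""
        assert set(scores) == set(WEIGHTS), "every criterion must be scored"
        return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

    # Example: strong correctness, weaker documentation.
    print(assignment_grade({"correctness": 95, "efficiency": 80, "documentation": 60}))
    # 95*0.5 + 80*0.3 + 60*0.2 = 47.5 + 24.0 + 12.0 = 83.5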

Faculty Reflection Prompt:
Pick one upcoming assignment and ask yourself: If I gave this to a colleague in my field, would they immediately know what I was assessing? Or would they have to guess? If the latter, refine the task or rubric until the answer is obvious.

Takeaway: Unclear assessments don’t just frustrate students; they distort what is being measured. By clarifying exactly what skill or knowledge is under the microscope, faculty ensure assessments are fair, transparent, and aligned with learning outcomes. Before finalizing any assignment or test, ask yourself: Am I measuring the skill that truly matters, or something adjacent? That small moment of reflection can make assessments more equitable, meaningful, and aligned with the professional practices of your discipline.

Quick Tip: Name the Thinking (Cognitive Skill), Not Just the Task

When introducing a problem set, coding lab, or design activity, take 1–2 minutes to make the thinking process explicit. For example:

  • Instead of just saying: “Debug this code”
    Add: “This task is about identifying assumptions in how the code should work versus how it runs. Pay attention to the strategies you use: reading error messages, testing small chunks, or tracing variables.”

  • Instead of just saying: “Sketch a wireframe”
    Add: “This is about perspective-taking; imagining the interface from a novice user’s point of view.”

By naming the cognitive skill (debugging, pattern recognition, abstraction, empathy, systems thinking), students begin to see how their work maps onto the broader competencies of your field.
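
In a coding lab, the naming can even live inside the handout. A minimal sketch of a debugging exercise whose comments label the strategy being practiced (the task and data are illustrative):

    # Debugging lab handout: each step names the cognitive strategy it practices.
    def parse_scores(lines: list[str]) -> list[int]:
        return [int(line) for line in lines]

    data = ["10", "20", "twenty", "30"]

    # Strategy 1 -- read the error message: parse_scores(data) raises
    # "ValueError: invalid literal for int() with base 10: 'twenty'".
    # What assumption about the input does that message expose?

    # Strategy 2 -- test small chunks: feed the function one line at a time
    # to isolate exactly which input breaks it.
    for line in data:
        try:
            print(line, "->", int(line))
        except ValueError:
            # Strategy 3 -- trace the variable: inspect the offending value
            # and decide how the program should handle it.
            print(line, "-> not a number; handle or reject?")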

Why it matters:

  • Supports metacognition (students reflect on how they learn, not just what they learn).

  • Helps novice learners connect class tasks to professional practices.

  • Reinforces disciplinary literacies and makes hidden expectations visible.

Evidence-Based Classroom Assessment Techniques (CATs) for STEM Courses

Teaching a large lecture course in a STEM discipline can feel like steering a cargo ship: you’re moving a lot of people in the same direction, but small adjustments can be hard to see and manage in real time. Traditional assessments (midterms, finals, projects) may measure end-point achievement, but they don’t always help faculty understand how students are learning along the way. This is where classroom assessment techniques (CATs) come in (https://vcsacl.ucsd.edu/_files/assessment/resources/50_cats.pdf): quick, research-backed methods that provide timely insights into student understanding, enabling instructors to adapt instruction while the course is still in motion.

Why CATs Matter in STEM Large-Enrollment Courses

Evidence from STEM education research underscores that formative assessment and feedback loops significantly improve student learning outcomes, especially in large courses where anonymity and disengagement can take hold. Studies show that structured opportunities for feedback (e.g., one-minute papers, peer assessments, low-stakes quizzes) can reduce achievement gaps and support retention in challenging majors.

At the same time, as Northwestern’s Principles of Inclusive Teaching (https://searle.northwestern.edu/resources/principles-of-inclusive-teaching/) notes, students often struggle not only with course content but also with the “hidden curriculum”: the unspoken rules about what “counts” as good work or participation (https://cra.org/crn/2024/02/expanding-career-pipelines-by-unhiding-the-hidden-curriculum-of-university-computing-majors/). Transparent communication about assessment criteria and expectations helps level the playing field.

High-Impact CATs for CS, Engineering, and Informatics

  • Algorithm Walkthroughs (Think-Alouds)
    Students articulate their reasoning step-by-step. Helps faculty identify gaps in procedural knowledge.

  • Debugging Minute Paper
    Prompt: “What was the most confusing bug/issue we discussed today, and why?” Surfaces common misconceptions in programming logic.

  • Concept Maps for Systems Thinking
    Students draw connections between components (e.g., CPU, memory, OS). Research shows concept mapping fosters transfer across domains.

  • Peer Review of HCI Prototypes
    Students exchange usability sketches with rubrics. Builds critique skills and awareness of user-centered design.

  • Low-Stakes Quizzing with Digital Dashboards
    LMS quizzes or polling tools provide immediate data on misconceptions while also scaffolding students’ goal monitoring.

Making CATs Inclusive in Large Lecture Halls

To avoid reinforcing inequities, instructors should:

  • Clarify criteria with rubrics for coding projects, design critiques, or participation.

  • Co-create ground rules for collaboration in labs and online forums, ensuring respectful and equitable engagement.

  • Balance rigor and empathy: challenge students while providing structures that acknowledge different starting points and prior knowledge.

Putting It into Practice

  • In a 250-student programming class, use a digital Muddiest Point poll after each lecture, then address top confusions in the next class (a tallying sketch follows this list).

  • In an HCI course, scaffold peer review CATs for wireframes inside the LMS, combining digital rubrics with analog small-group feedback.

  • In a systems engineering class, embed progress dashboards with reflective CAT prompts (“Where are you stuck? What resource might help?”). This makes metacognition visible and actionable.
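
For the Muddiest Point workflow in the first bullet above, even the tallying can be lightweight. A minimal sketch, assuming the polling tool exports responses as a CSV with a `topic` column (the file name and column are assumptions; real export formats vary):

    import csv
    from collections import Counter

    def top_confusions(csv_path: str, n: int = 3) -> list[tuple[str, int]]:
        """Return the n most frequent 'muddiest point' topics from a poll export."""
        counts: Counter[str] = Counter()
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                # Normalize lightly so 'Recursion' and 'recursion ' tally together.
                counts[row["topic"].strip().lower()] += 1
        return counts.most_common(n)

    # Example: surface the top three confusions to open the next lecture with.
    for topic, count in top_confusions("muddiest_point_week3.csv"):
        print(f"{count:3d} students flagged: {topic}")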

Final Thought

Large-enrollment CS, engineering, informatics, and HCI courses don’t have to feel impersonal or assessment-heavy. By integrating classroom assessment techniques, faculty can design courses that are responsive, transparent, and inclusive. The result: students who not only master disciplinary knowledge but also learn how to manage their own learning, a skill set essential for both the classroom and the future of work.

Further Reading:

  1. Angelo & Cross’s Classroom Assessment Techniques https://iucat.iu.edu/catalog/20750208
    50+ adaptable CATs. For large STEM courses, techniques like the “Muddiest Point” or “Background Knowledge Probe” are especially powerful.

  2. Nilson’s Teaching at Its Best https://iucat.iu.edu/catalog/16660002
    Offers frameworks for aligning CATs with learning objectives—critical in CS/engineering courses where problem-solving, debugging, and design thinking are central.

  3. Northwestern University, Principles of Inclusive Teaching (https://searle.northwestern.edu/resources/principles-of-inclusive-teaching/) and Making Large Classes Feel Smaller (https://searle.northwestern.edu/resources/our-tools-guides/learning-teaching-guides/making-large-classes-feel-smaller.html)