Best Practices for Working with Assistant Instructors

Assistant instructors (AIs) can play an essential role in your course. They support student learning, enhance faculty efficiency, and gain valuable professional development experience along the way. When managed thoughtfully, the faculty-assistant instructor partnership creates a stronger, more engaging learning environment for students and a meaningful growth opportunity for graduate students.

The following recommendations are drawn from the resources linked below.

Core Principles of a Strong Partnership

The faculty–assistant instructor relationship is most successful when approached as a collaborative teaching partnership. Here are some guiding principles:

  • Clear Expectations and Roles
    Both faculty and assistant instructors need a shared understanding of their responsibilities. Clarity reduces confusion and sets everyone up for success.

  • Faculty as the Ultimate Authority
    While assistant instructors play an active role in teaching and assessment, faculty ultimately carry responsibility for course administration, including grading and alignment with institutional policies.

  • Professional Development Opportunity
    Serving as an assistant instructor should be a learning experience. Faculty should connect assigned tasks to professional growth, teaching skills, and career preparation whenever possible.

  • Consistent Communication
    Regular check-ins, open conversations, and transparency help prevent misunderstandings and make problem-solving much easier when issues arise.

Setting Up for Success

Before the Semester Begins

Early connection is key. Meet with your assistant instructor before classes start to set expectations, share goals, and establish communication methods. Some items to cover:

  • Course goals and learning outcomes

  • Roles, tasks, and boundaries

  • Meeting schedules and communication channels

  • Workload expectations (respecting weekly hour limits)

  • Familiarity with technology tools

  • Academic integrity policies

  • An introduction plan so students understand the assistant instructor’s role

See https://blogs.iu.edu/luddyteach/2023/08/16/quick-tip-working-with-ais/ for a checklist developed by Dr. Angela Jenks and Katie Cox of the Department of Anthropology at the University of California, Irvine.

Having these conversations upfront helps everyone enter the semester with confidence.

During the Semester

  • Regular Meetings
    Weekly or biweekly meetings provide a chance to prepare for upcoming lessons, review grading approaches, and troubleshoot challenges.

  • Grading Consistency
    Provide rubrics and sample feedback. Calibration or grade-norming activities, where everyone grades the same sample, are especially effective for ensuring fairness (see the sketch after this list).

  • Office Hours
    Encourage assistant instructors to hold consistent and accessible office hours at different times of day to accommodate students.

  • Mid-Semester Check-In
    Use this time to gather feedback, review workloads, and adjust if necessary.
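
Returning to grade norming: one simple way to run a calibration session is to have every grader score the same few sample submissions, then compare the spread per submission. A minimal sketch in Python, assuming three graders and a 5-point discussion threshold (all names and scores are illustrative placeholders):

    # Hypothetical grade-norming check: every grader scores the same sample
    # submissions; a large spread on any one submission flags a rubric
    # criterion the team should discuss.
    scores = {
        "grader_a": {"sample1": 85, "sample2": 72, "sample3": 90},
        "grader_b": {"sample1": 80, "sample2": 75, "sample3": 84},
        "grader_c": {"sample1": 92, "sample2": 70, "sample3": 88},
    }

    for sub in sorted(next(iter(scores.values()))):
        values = [grader[sub] for grader in scores.values()]
        spread = max(values) - min(values)
        flag = "  <- discuss rubric" if spread > 5 else ""
        print(f"{sub}: scores={values}, spread={spread}{flag}")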

End of the Semester

Wrap up with a reflective meeting. Discuss what worked well, identify challenges, and preserve useful materials for future iterations of the course. These conversations also strengthen the mentoring relationship.

Supporting Assistant Instructor Development

Faculty aren’t just supervisors; they’re mentors. Assistant instructors benefit when faculty take the time to:

  • Coach them on teaching strategies and classroom management

  • Encourage them to set professional development goals and build a teaching portfolio if they are interested in pursuing a faculty position

  • Provide opportunities for peer observation and self-reflection

  • Direct them to school- and university-wide teaching resources

By positioning the role as both service and growth opportunity, faculty help assistant instructors build skills that last well beyond a single course.

Teaching Tip: What Are You Really Trying to Assess?

As you design quizzes, projects, and exams, it’s worth pausing to ask: What am I really trying to assess? Too often, assessments measure peripheral skills like memorization, rather than the intended learning outcomes. For example, a timed coding exam may end up evaluating typing speed and syntax recall more than algorithmic thinking or problem-solving strategy. Similarly, a multiple-choice exam on HCI principles may privilege memorization over the ability to apply design heuristics to new contexts.

Evidence-based practices to align assessments with your goals:

  1. Backward Design (Wiggins & McTighe, 2005)

    • Start from the learning outcomes you intend, then work backward to the evidence (assessments) and the activities that prepare students to produce it.

  2. Constructive Alignment (Biggs, 1996)

    • Ensure that learning activities, assessments, and outcomes are in sync. For instance, if collaboration is a stated goal, include a group design critique, not just individual tests.

    • Example: Reflections on applying constructive alignment with formative feedback for teaching introductory programming and software architecture (2016): https://dl-acm-org.proxyiub.uits.iu.edu/doi/pdf/10.1145/2889160.2889185

  3. Authentic Assessment (Herrington & Herrington, 2007)

    • Design tasks that mirror how knowledge is used in real professional contexts, such as code reviews, usability studies, or analyses prepared for a stakeholder.

  4. Reduce Construct-Irrelevant Barriers

    • If the skill being assessed is debugging, for example, provide starter code so students aren’t penalized for setup (a sample scaffold follows this list). If the goal is conceptual understanding, consider allowing open-book resources so recall doesn’t overshadow reasoning.
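
As a concrete illustration of item 4, a debugging task can hand students a complete harness and tests so the assessed skill is isolated to locating the fault. A minimal sketch, with an illustrative seeded bug:

    # Starter code given to students: the tests and setup are complete, so
    # the task measures debugging, not environment setup or test writing.
    def find_max(values):
        """Return the largest value in a non-empty list."""
        largest = 0  # seeded BUG: fails for all-negative lists; start from values[0]
        for v in values:
            if v > largest:
                largest = v
        return largest

    # Provided tests: the second one exposes the seeded bug.
    assert find_max([3, 1, 4]) == 4
    assert find_max([-7, -2, -9]) == -2, "find_max mishandles all-negative input"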

Students also often struggle not because the concepts are beyond their ability, but because the expectations of the assessment are unclear.

For example:

  • A programming assignment asks students to “optimize” code, but it’s unclear whether grading is based on correctness, runtime efficiency, readability, or documentation.

  • A human–computer interaction (HCI) project requires a prototype, but is the emphasis on creativity, usability testing, or fidelity of the mockup?

  • An informatics paper asks for “analysis,” but it’s unclear whether success depends on critical thinking, proper use of data, or following citation conventions.

When assessments lack clarity, students must guess what matters. This shifts the focus from demonstrating learning to playing a hidden “what does the professor want?” game.

Why It Matters (Evidence-Based):

  • Cognitive Load: Ambiguous assessments create unnecessary cognitive load—students waste energy interpreting instructions instead of applying knowledge (Sweller, 2011).

  • Equity Impact: Lack of clarity disproportionately disadvantages first-generation and other structurally disadvantaged students, who may not have tacit knowledge about faculty expectations (Winkelmes et al., 2016).

  • Misalignment: As mentioned above, vague assessments often misalign with course outcomes, undermining constructive alignment (Biggs, 1996).

What Faculty Can Do:

  1. State the Core Construct: Ask yourself: Am I assessing correctness, creativity, reasoning, or communication? Then state it explicitly.

  2. Communicate Priorities: If multiple criteria matter, indicate their relative weight (e.g., correctness 50%, efficiency 30%, documentation 20%); the sketch after this list shows how such weights combine into a score.

  3. Provide a Sample Response: A brief example—annotated to show what “counts”—helps students see what you value.

  4. Check for Hidden Criteria: If you penalize for style, clarity, or teamwork, ensure that’s written down. Otherwise, students perceive grading as arbitrary.
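
To make the weighting in item 2 concrete, here is a minimal sketch that combines per-criterion scores using announced weights (the criteria and numbers are the example values above):

    # Combine per-criterion scores (0-100) using published rubric weights.
    WEIGHTS = {"correctness": 0.50, "efficiency": 0.30, "documentation": 0.20}

    def weighted_score(scores):
        assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
        return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

    # Example: strong correctness, weaker documentation.
    print(weighted_score({"correctness": 90, "efficiency": 80, "documentation": 60}))
    # -> 81.0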

Faculty Reflection Prompt:
Pick one upcoming assignment and ask yourself: If I gave this to a colleague in my field, would they immediately know what I was assessing? Or would they have to guess? If the latter, refine the task or rubric until the answer is obvious.

Takeaway: Unclear assessments don’t just frustrate students; they distort what is being measured. By clarifying exactly what skill or knowledge is under the microscope, faculty ensure assessments are fair, transparent, and aligned with learning outcomes. Before finalizing any assignment or test, ask yourself: Am I measuring the skill that truly matters, or something adjacent? That small moment of reflection can make assessments more equitable, meaningful, and aligned with the professional practices of your discipline.

Quick Tip: Name the Thinking (Cognitive Skill), Not Just the Task

When introducing a problem set, coding lab, or design activity, take 1–2 minutes to make the thinking process explicit. For example:

  • Instead of just saying: “Debug this code”
    Add: “This task is about identifying assumptions in how the code should work versus how it runs. Pay attention to the strategies you use: reading error messages, testing small chunks, or tracing variables.” (A sample snippet for such a task appears after this list.)

  • Instead of just saying: “Sketch a wireframe”
    Add: “This is about perspective-taking; imagining the interface from a novice user’s point of view.”
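
A task of this kind might use a snippet whose stated intent conflicts with how the language actually behaves. One illustrative Python example, built around the shared mutable default argument:

    # Students are told: "append_item should return a fresh one-item list
    # whenever no target is passed." Tracing the calls reveals the failing
    # assumption: the default list is created once and shared across calls.
    def append_item(item, target=[]):  # BUG: mutable default argument
        target.append(item)
        return target

    print(append_item("a"))  # expected ['a'] -> actually ['a']
    print(append_item("b"))  # expected ['b'] -> actually ['a', 'b']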

By naming the cognitive skill (debugging, pattern recognition, abstraction, empathy, systems thinking), students begin to see how their work maps onto the broader competencies of your field.

Why it matters:

  • Supports metacognition (students reflect on how they learn, not just what they learn).

  • Helps novice learners connect class tasks to professional practices.

  • Reinforces disciplinary literacies and makes hidden expectations visible.

Evidence-Based Classroom Assessment Techniques (CATs) for STEM Courses

Teaching a large-enrollment STEM lecture can feel like steering a cargo ship: you’re moving a lot of people in the same direction, but small adjustments can be hard to see and manage in real time. Traditional assessments (midterms, finals, projects) may measure end-point achievement, but they don’t always help faculty understand how students are learning along the way. This is where classroom assessment techniques (CATs; https://vcsacl.ucsd.edu/_files/assessment/resources/50_cats.pdf) come in: quick, research-backed methods that provide timely insights into student understanding, enabling instructors to adapt instruction while the course is still in motion.

Why CATs Matter in STEM Large-Enrollment Courses

Evidence from STEM education research underscores that formative assessment and feedback loops significantly improve student learning outcomes, especially in large courses where anonymity and disengagement can take hold. Studies show that structured opportunities for feedback (e.g., one-minute papers, peer assessments, low-stakes quizzes) can reduce achievement gaps and support retention in challenging majors.

At the same time, as Northwestern’s Principles of Inclusive Teaching (https://searle.northwestern.edu/resources/principles-of-inclusive-teaching/) note, students often struggle not only with course content but also with the “hidden curriculum” of unspoken rules about what “counts” as good work or participation (https://cra.org/crn/2024/02/expanding-career-pipelines-by-unhiding-the-hidden-curriculum-of-university-computing-majors/). Transparent communication about assessment criteria and expectations helps level the playing field.

High-Impact CATs for CS, Engineering, and Informatics

  • Algorithm Walkthroughs (Think-Alouds)
    Students articulate their reasoning step-by-step. Helps faculty identify gaps in procedural knowledge.

  • Debugging Minute Paper
    Prompt: “What was the most confusing bug/issue we discussed today, and why?” Surfaces common misconceptions in programming logic.

  • Concept Maps for Systems Thinking
    Students draw connections between components (e.g., CPU, memory, OS). Research shows concept mapping fosters transfer across domains.

  • Peer Review of HCI Prototypes
    Students exchange usability sketches with rubrics. Builds critique skills and awareness of user-centered design.

  • Low-Stakes Quizzing with Digital Dashboards
    LMS quizzes or polling tools provide immediate data on misconceptions while also scaffolding students’ goal monitoring.

Making CATs Inclusive in Large Lecture Halls

To avoid reinforcing inequities, instructors should:

  • Clarify criteria with rubrics for coding projects, design critiques, or participation.

  • Co-create ground rules for collaboration in labs and online forums, ensuring respectful and equitable engagement.

  • Balance rigor and empathy: challenge students while providing structures that acknowledge different starting points and prior knowledge.

Putting It into Practice

  • In a 250-student programming class, use a digital Muddiest Point poll after each lecture, then address top confusions in the next class (a tallying sketch follows this list).

  • In an HCI course, scaffold peer review CATs for wireframes inside the LMS, combining digital rubrics with analog small-group feedback.

  • In a systems engineering class, embed progress dashboards with reflective CAT prompts (“Where are you stuck? What resource might help?”). This makes metacognition visible and actionable.
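
A minimal sketch of tallying Muddiest Point responses, assuming the poll exports one free-text response per line (the file name and topic keywords are placeholders):

    # Tally exported Muddiest Point responses by topic keyword so the top
    # confusions can be addressed in the next class.
    from collections import Counter

    KEYWORDS = ["recursion", "pointers", "big-o", "inheritance"]  # placeholder topics

    counts = Counter()
    with open("muddiest_point_responses.txt", encoding="utf-8") as f:
        for line in f:
            text = line.lower()
            counts.update(k for k in KEYWORDS if k in text)

    for topic, n in counts.most_common(3):
        print(f"{topic}: {n} students")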

Final Thought

Large-enrollment CS, engineering, informatics, and HCI courses don’t have to feel impersonal or assessment-heavy. By integrating classroom assessment techniques, faculty can design courses that are responsive, transparent, and inclusive. The result: students who not only master disciplinary knowledge but also learn how to manage their own learning, a skill set essential for both the classroom and the future of work.

Further Reading:

  1. Angelo & Cross’s Classroom Assessment Techniques https://iucat.iu.edu/catalog/20750208
    50+ adaptable CATs. For large STEM courses, techniques like the “Muddiest Point” or “Background Knowledge Probe” are especially powerful.

  2. Nilson’s Teaching at Its Best https://iucat.iu.edu/catalog/16660002
    Offers frameworks for aligning CATs with learning objectives—critical in CS/engineering courses where problem-solving, debugging, and design thinking are central.

  3. Northwestern University, Principles of Inclusive Teaching (https://searle.northwestern.edu/resources/principles-of-inclusive-teaching/) and Making Large Classes Feel Smaller (https://searle.northwestern.edu/resources/our-tools-guides/learning-teaching-guides/making-large-classes-feel-smaller.html)

Pedagogical Tips for the Start of the Semester

The first weeks of the semester are a unique window to shape not only what students will learn, but how they will learn. In STEM courses, where concepts can be abstract, skill levels vary widely, and technologies evolve quickly, intentional, evidence-based practices can help you set students up for long-term success.

Below are a few strategies with examples and tools you can implement immediately.

Design an Inclusive, Transparent Syllabus

Evidence base: Transparent teaching research (Winkelmes et al., 2016) shows that when students understand the purpose, tasks, and criteria for success, they perform better.

Implementation tips:

  • Purpose statements: For every major assignment, include a short note on why it matters and how it connects to industry or future coursework.
    Example: “This database schema project builds skills in relational modeling, which are directly relevant to backend software engineering interviews.”

  • Clear expectations: Break down grading policies, late work policies, and collaboration guidelines into plain language, avoiding overly technical or legalistic phrasing.

  • Accessibility & flexibility: Link to tutoring labs, office hours, online learning resources, and note-taking tools. Indicate whether assignments can be resubmitted after feedback.

  • Quick reference: Create a one-page “Quick Reference” sheet covering key policies (late work, collaboration, grading).

  • Norm-setting: Add a “Community Norms” section that covers respectful code reviews, how to ask questions in class, and expectations for group work (e.g., conflict-resolution strategies). In large classes, also set expectations for respectful online discussion and effective use of the Q&A forum (e.g., checking whether a question has already been asked).

Establish Psychological Safety Early

Evidence base: Google’s Project Aristotle (2015) and Edmondson’s (1999) work on team learning show that psychological safety, where students feel safe to take intellectual risks, is essential for high performance.

Implementation tips:

  • Low-stakes start: In week one, run short, open-ended coding challenges that allow multiple solutions. Make it clear that mistakes are part of the process.

  • Anonymous polls: Start with anonymous polls about programming experience to acknowledge the diversity of backgrounds in the room.

  • Instructor vulnerability: Share a personal example of a bug or failed project you learned from. This normalizes challenges in programming. In a large lecture, you can briefly mention common misconceptions students often have with a new concept, and how to navigate them.

  • Model Constructive Feedback: When providing feedback on early assignments (even low-stakes ones), focus on growth and learning. When addressing common errors in a large class, frame it as an opportunity for collective learning rather than pointing out individual mistakes.

  • Multiple communication channels: Set up a Q&A platform (InScribe) where students can post questions anonymously.

Use Early Analytics for Intervention

Evidence base: Freeman et al. (2014) showed that active engagement significantly improves student performance in STEM; monitoring engagement early in the term makes timely support possible.

Implementation tips:

  • Student Engagement Roster (SER): https://ser.indiana.edu/faculty/index.html During the first week of class, consider explaining the SER to your students and telling them how you will use it. If students are registered for your class and miss the first class, report them as non-attending in SER. This allows outreach that can help clarify their situation. Here’s a sample text you could put into your syllabus:
    This semester I will be using IU’s Student Engagement Roster to provide feedback on your performance in this course. Periodically throughout the semester, I will be entering information on factors such as your class attendance, participation, and success with coursework, among other things. This information will provide feedback on how you are doing in the course and offer you suggestions on how you might be able to improve your performance. You will be able to access this information by going to One.IU.edu and searching for the Student Engagement Roster (Faculty) tile.

  • Use Canvas Analytics:

    1. Identify struggling students. “Submissions” shows whether students submit assignments on time, late, or not at all.

    2. See grades at a glance. “Grades” uses a box-and-whisker plot to show the distribution of grades in the course.

    3. See individual student data. “Student Analytics” shows page views, participations, assignments, and current score for every student in the course.
  • Track early submissions: Note which students complete the first assignments or attend early labs (a sample API sketch follows this list).

  • Personal outreach: Email or meet with students who are slipping to connect them with tutoring, peer mentors, or study groups.

  • Positive nudges: Celebrate early wins (e.g., “I noticed you submitted the optional challenge problem. Great initiative!”).

  • Proactive Outreach (with TA Support): If you identify students who are struggling, send personalized emails offering support and directing them to available resources (e.g., tutoring, office hours with TAs). Consider delegating some of this outreach to TAs in large courses.

  • Announcements Highlighting Resources: Regularly remind the entire class about available support resources, study strategies, and upcoming deadlines through announcements.
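
Submission data can also be pulled programmatically. A minimal sketch using the Canvas REST API’s multiple-assignment submissions endpoint; the host, course ID, and token are placeholders, pagination is omitted for brevity, and field availability should be verified against your instance:

    # Flag students with late or missing submissions via the Canvas REST API.
    import requests

    BASE = "https://canvas.example.edu/api/v1"            # placeholder Canvas host
    COURSE_ID = "12345"                                   # placeholder course ID
    HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token

    resp = requests.get(
        f"{BASE}/courses/{COURSE_ID}/students/submissions",
        headers=HEADERS,
        params={"student_ids[]": "all", "per_page": 100},
    )
    resp.raise_for_status()

    for sub in resp.json():
        if sub.get("missing") or sub.get("late"):
            status = "missing" if sub.get("missing") else "late"
            print(f"user {sub['user_id']}: assignment {sub['assignment_id']} is {status}")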

Key Implementation Strategies for Success

  • Start Small and Build: Don’t attempt to implement all strategies simultaneously. Choose 2-3 that align with your teaching style and course structure, then gradually incorporate additional elements.

  • Leverage Your Teaching Team: In large courses, TAs are essential partners. Invest time in training them on consistent feedback practices, student support strategies, and early intervention protocols.

  • Iterate Based on Data: Use student feedback, performance analytics, and your own observations to refine your approach throughout the semester. What works in one context may need adjustment in another.

  • Maintain Connection at Scale: Even in large courses, students need to feel seen and supported. Use technology strategically to maintain personal connection while managing the practical demands of scale.

Conclusion

By implementing these research-backed strategies, faculty can create learning environments where diverse students thrive, engagement remains high, and learning outcomes improve significantly.

The investment in implementing these practices pays dividends not only in student success but also in teaching satisfaction and course sustainability. As you prepare for the new semester, consider which strategies best align with your course goals and student population, then take the first step toward transforming your large enrollment course into a dynamic, supportive learning community.

Remember: even small changes, consistently applied, can create significant improvements in student learning and engagement. Start where you are, use what you have, and do what you can to create the best possible learning experience for your students.

References

  1. Winkelmes, M. A., Bernacki, M., Butler, J., Zochowski, M., Golanics, J., & Weavil, K. H. (2016). A teaching intervention that increases underserved college students’ success. Peer Review, 18(1/2), 31–36. Association of American Colleges and Universities.

  2. Edmondson, A. C. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383. https://doi.org/10.2307/2666999

  3. Google Inc. (2015). Project Aristotle: Understanding team effectiveness. Retrieved from https://rework.withgoogle.com/intl/en/guides/understanding-team-effectiveness

  4. Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111(23), 8410–8415. https://doi.org/10.1073/pnas.1319030111