By this point in the semester, the honeymoon is over. Students have encountered their first complex coding project, circuit simulation, or data analysis lab, and many of them are hitting a wall. The feedback you give at this moment does real work. It shapes confidence, persistence, and students’ internal sense of what quality looks like in your discipline.
In technical disciplines, “good job” isn’t enough, and “wrong” is frustrating without context. Research tells us that feedback is most effective when it’s specific, timely, and actionable (Shute, 2008). But in STEM courses, feedback also needs to be efficient for faculty juggling many responsibilities and accessible to students from a wide range of backgrounds. So, here’s how to provide high-impact feedback without losing your entire weekend to the grading pile.
This post focuses on how to use rubrics, short screencasts, and carefully chosen AI tools to deliver feedback that is specific, humane, and sustainable without turning grading into a second job.
In computing, engineering, and data-focused courses, the first major assignment is often the point where students encounter professional-level complexity for the first time. Many report that feedback feels generic, delayed, or disconnected from how they can improve. In programming courses in particular, feedback frequently centers on whether something works, rather than how well it is designed, documented, tested, or maintained (Hundhausen et al., 2013; Keuning et al., 2018). Research consistently shows that feedback is most powerful when it helps students answer four questions: Where am I going? How am I doing? Where to next? What strategies will help me improve? (Hattie & Timperley, 2007). Other work shows that students also need support to actively engage with feedback, not just receive it, for it to influence learning (Winstone et al., 2017).
The goal is to structure feedback so students can answer those questions for themselves while keeping your workload manageable.
Strategy 1: Make Rubrics Do the Heavy Lifting
Well-designed rubrics turn vague judgment into transparent expectations. When rubrics include descriptive performance levels and are shared before students begin the work, they improve learning outcomes and student satisfaction (Reddy & Andrade, 2010; Andrade & Du, 2007). In technical courses, rubrics also make implicit professional norms visible, assessable, and discussable (Winkelmes et al., 2016).
Common rubric problems in technical disciplines include broad categories like “Code Quality: 0–25 points,” binary “works or doesn’t work” distinctions, and descriptors such as “good documentation” without observable indicators. These approaches provide little guidance for improvement and often lead to repetitive instructor commenting (Nicol & Macfarlane-Dick, 2006).
A stronger rubric distinguishes performance levels for criteria such as code organization, error handling, testing, documentation, security practices, and version control. For example, exemplary error handling might include meaningful user messages, logging, and graceful failure, while beginning-level work might include crashes, unhandled input, and no diagnostics.
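To make the contrast concrete, here is a small illustrative sketch (Python, with a hypothetical file-loading function; nothing here comes from a specific course) of what beginning-level versus exemplary error handling can look like against such a rubric:

```python
import logging

logger = logging.getLogger(__name__)

# Beginning level: unhandled input and no diagnostics -- a missing file simply crashes the program.
def load_config_beginning(path):
    return open(path).read()

# Exemplary level: meaningful user message, logging, and graceful failure with a sensible fallback.
def load_config_exemplary(path, default=""):
    try:
        with open(path, encoding="utf-8") as f:
            return f.read()
    except FileNotFoundError:
        logger.warning("Config file %s not found; using defaults.", path)
        print(f"Could not find '{path}'. Continuing with default settings.")
        return default
```

Describing performance levels in this kind of observable, code-level language lets the rubric, rather than ad hoc comments, carry the explanation.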
Three moves you can implement right now:
Align each rubric criterion with a specific learning objective and an authentic professional practice such as test coverage, branching strategy, or secure queries (Winkelmes et al., 2016).
Share the rubric when you assign the work and ask students to self-assess against it before submission. This supports metacognition and improves performance (Andrade & Du, 2007; Nicol & Macfarlane-Dick, 2006).
In advanced courses, co-create parts of the rubric by asking students what distinguishes novice work from expert work. This builds agency and self-regulation (Winstone & Carless, 2020).
When rubrics are doing real instructional work, they reduce repetitive commenting and anchor feedback in shared language.
Strategy 2: 120-Second Over-the-Shoulder Screencast Reviews
Some problems are just inefficient to explain in text. A subtle logic error, a flawed circuit diagram, or a misapplied data transformation is often easier to show than describe. Research on multimedia learning shows that concise audio-visual explanations improve understanding for procedural and spatial tasks (Mayer, 2014) and increase students’ sense of instructor presence (Borup et al., 2013).
Here’s a practical pattern: the 120-second over-the-shoulder review.
Open the student’s code or artifact. Start a quick screencast using a tool (like Adobe Express, Kaltura, or Zoom). Spend two focused minutes highlighting strengths and one to three priority issues.
What this sounds like:
“I see what you’re doing here on line 42. You’re reinitializing the counter inside the loop, which is why it never increments. Watch what happens when I move it above the loop.”
Or:
“When debugging, trace variable values step by step instead of changing multiple things at once. That process will help you isolate the break.”
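For instance, the loop bug described in the first comment above might look like this in miniature (a hypothetical sketch, not any student’s actual code):

```python
# Buggy version: the counter is reinitialized on every pass through the loop,
# so the running total is wiped out each iteration.
def count_matches_buggy(values, target):
    for v in values:
        count = 0
        if v == target:
            count += 1
    return count

# Fixed version: initialize the counter once, above the loop.
def count_matches(values, target):
    count = 0
    for v in values:
        if v == target:
            count += 1
    return count

print(count_matches_buggy([1, 2, 2, 3], 2))  # 0 -- the symptom the screencast walks through
print(count_matches([1, 2, 2, 3], 2))        # 2
```

A thirty-second walkthrough of a snippet like this shows the reasoning, not just the correction.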
Frame your comments at three levels:
Task: what is happening and why it is incorrect.
Process: strategies the student can reuse in future work.
Self-regulation: reinforcement of productive approaches they are already using.
Avoid feedback about personal traits. Research shows that praise focused on the self has little impact on learning (Hattie & Timperley, 2007). Short screencasts allow students to see expert reasoning in action and have been linked to increased engagement and improved code quality in computing courses (Cavalcanti et al., 2021; Hundhausen et al., 2013).
If you grade in Canvas, consider using its built-in Video Feedback feature to keep everything in one place.
Strategy 3: Tiered Feedback So You Don’t Burn Out
Not every student needs the same type or depth of feedback. Research on formative assessment suggests that differentiated feedback can be both more effective and more sustainable for instructors (Shute, 2008; Winstone & Carless, 2020).
For a class of about 30 students, a realistic model looks like this:
Top 20 percent: Brief text comments highlighting exemplary practices and optional stretch ideas. Time per student: two to three minutes.
Middle 60 percent: Five- to seven-minute screencasts focused on two or three rubric criteria that would move the work to the next level.
Bottom 20 percent: Five- to seven-minute screencast plus an invitation to meet, emphasizing growth and support rather than penalty.
This typically adds up to about three hours of feedback time (roughly 6 × 3 minutes + 18 × 6 minutes + 6 × 7 minutes ≈ 170 minutes), comparable to detailed written comments but with higher instructional payoff (Ice et al., 2007).
Efficiency improves further if you create two or three short videos addressing common issues and link them instead of repeating the same explanations student by student (Borup et al., 2013).
Adapting Strategy 3 for large-enrollment courses (e.g., 100+ students) requires shifting from individual tiering to collective and automated tiering. While individual screencasts are sustainable for 30 students, they become a “second job” for 150.
Here is how to scale the tiered feedback model without losing the instructional payoff:
The Scaled Tiered Model
In a large course, you can replace individual screencasts with “Problem-Set Debriefs” and automated triggers.
Top Tier (High Achievers): Use automated “Success” triggers. In platforms like Canvas or Gradescope, you can set a rule that students scoring above 90% receive a pre-written comment praising their specific mastery and providing “stretch” links to advanced professional documentation or extra-credit challenges; a scripted sketch of this idea appears after this list.
Middle Tier (The “Global” Feedback): Instead of individual videos, record three 2-minute “Common Pitfall” screencasts based on the most frequent errors identified during grading. Link these videos directly in the rubric or assignment comments so students can see expert reasoning for the specific logic errors they likely encountered.
Bottom Tier (High Support): Identify students who failed to meet core learning objectives and send a templated, supportive “Check-in” message through the Student Engagement Roster. Invite them to a group “Office Hour Lab” specifically designed to review the foundational concepts they missed, emphasizing growth over penalty.
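To make the Top Tier “Success” trigger concrete, here is one possible sketch against the Canvas REST API using its standard submissions endpoints. The base URL, IDs, token, and comment text are placeholders; the script assumes the assignment is scored out of 100; and Gradescope’s own rules or your institution’s approved integration may be the better route in practice:

```python
import requests

BASE_URL = "https://canvas.example.edu/api/v1"  # your institution's Canvas instance (placeholder)
TOKEN = "YOUR_API_TOKEN"                        # personal access token (placeholder)
COURSE_ID = 12345                               # placeholder
ASSIGNMENT_ID = 67890                           # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

PRAISE = (
    "You scored above 90% on this assignment -- your work shows strong command of the "
    "core objectives. For a stretch challenge, see the advanced documentation linked "
    "on the course page."
)

def fetch_submissions():
    """Return all submissions for the assignment, following Canvas's Link-header pagination."""
    url = f"{BASE_URL}/courses/{COURSE_ID}/assignments/{ASSIGNMENT_ID}/submissions"
    submissions, params = [], {"per_page": 100}
    while url:
        resp = requests.get(url, headers=HEADERS, params=params)
        resp.raise_for_status()
        submissions.extend(resp.json())
        url = resp.links.get("next", {}).get("url")
        params = None  # the "next" URL already carries its query parameters
    return submissions

def send_success_comments(threshold=90.0):
    """Post the pre-written comment to every student scoring above the threshold."""
    for sub in fetch_submissions():
        score = sub.get("score")  # assumes the assignment is graded out of 100 points
        if score is not None and score > threshold:
            url = (f"{BASE_URL}/courses/{COURSE_ID}/assignments/{ASSIGNMENT_ID}"
                   f"/submissions/{sub['user_id']}")
            resp = requests.put(url, headers=HEADERS,
                                data={"comment[text_comment]": PRAISE})
            resp.raise_for_status()

if __name__ == "__main__":
    send_success_comments()
```

The same pattern could attach a link to a “Common Pitfall” video for the middle tier, or flag below-threshold scores for the check-in messages described above.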
Strategy 4: Thoughtful Use of AI for Feedback and Grading
AI tools (such as Gradescope) can reduce routine grading work and provide fast correctness feedback, but they are not substitutes for professional judgment. Research shows that automated systems are effective at identifying syntax errors, style violations, and common mistakes, but struggle with design quality, documentation, and nuanced reasoning (Keuning et al., 2018).
Use AI or automated tools to:
Run test suites and identify edge cases.
Enforce style and basic security rules.
Provide immediate, low-stakes feedback for iteration.
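As a deliberately minimal illustration, an autograder-style test file that exercises edge cases might look like this (pytest; the module and function names are hypothetical stand-ins for a student submission):

```python
# test_submission.py -- minimal autograder-style sketch (run with: pytest)
import pytest
from submission import mean_of_positives  # hypothetical student-supplied module

def test_typical_input():
    assert mean_of_positives([2, 4, 6]) == pytest.approx(4.0)

def test_ignores_negative_values():
    assert mean_of_positives([-5, 5, 15]) == pytest.approx(10.0)

def test_empty_input_fails_gracefully():
    # Edge case: no positive values -- the rubric expects a clear error, not a crash.
    with pytest.raises(ValueError):
        mean_of_positives([])
```

Tests like these give students immediate, low-stakes signals they can iterate on before any human review.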
Reserve human review for:
Architecture and design choices.
Maintainability and documentation.
Security practices and professional conventions.
To keep this ethical and transparent:
Clearly label which rubric criteria are auto-scored and which are human-scored (Holmes et al., 2022).
Provide a process for students to request review when automated feedback seems incorrect.
Periodically audit AI-assisted scoring for bias or uneven impact (Baker & Hawn, 2021); one lightweight way to start such an audit is sketched below.
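An audit does not need to be elaborate. Assuming you can export a gradebook CSV with per-student automated and human-assigned scores plus a grouping column (the file and column names below are placeholders), a first pass might simply compare the two across groups:

```python
import pandas as pd

# Hypothetical export: one row per student with an auto-graded score, a human-graded
# score, and a grouping column such as course section or modality.
df = pd.read_csv("gradebook_export.csv")  # columns: student_id, auto_score, human_score, section

# Where, and for whom, does automated scoring diverge most from human judgment?
df["gap"] = df["auto_score"] - df["human_score"]
print(df.groupby("section")["gap"].agg(["mean", "std", "count"]))

# Large or one-sided gaps concentrated in one group are a prompt for human re-review
# of those submissions, not proof of bias on their own.
```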
You can also frame AI as a learning partner rather than a grader by asking students to document how they used AI tools, rewarding reflective use, and setting clear boundaries around generated content (Kasneci et al., 2023).
Your Implementation Checklist
If you want a minimal, high-impact plan:
Finalize and share a detailed rubric aligned with professional practices (Reddy & Andrade, 2010).
Require a brief student self-assessment using the rubric (Andrade & Du, 2007).
Set up one reliable screencasting workflow and practice a short review (Mayer, 2014).
Plan a tiered feedback scheme so your time is targeted (Shute, 2008).
Create two or three short videos for common issues.
Clearly explain how automated tools are used and how students can respond (Holmes et al., 2022).
Reserve support time for students who are still developing core skills (Kuh, 2008; Winstone et al., 2017).
When feedback shows students what quality looks like, helps them see where they are, and gives them clear next steps, assessment becomes part of learning rather than an endpoint. In technical disciplines, that shift often determines whether students decide they don’t belong or begin to see themselves as capable, growing professionals.
References
Andrade, H., & Du, Y. (2007). Student responses to criteria-referenced self-assessment. Assessment & Evaluation in Higher Education, 32(2), 159–181. https://doi.org/10.1080/02602930600801928
Baker, R. S., & Hawn, A. (2021). Algorithmic bias in education. International Journal of Artificial Intelligence in Education, 32(4), 1052–1092. https://doi.org/10.1007/s40593-021-00285-9
Borup, J., West, R. E., & Graham, C. R. (2013). The influence of asynchronous video communication on learner social presence: A narrative analysis of four cases. Distance Education, 34(1), 48–63. https://doi.org/10.1080/01587919.2013.770427
Cavalcanti, A. P., Barbosa, A., Carvalho, R., Freitas, F., Tsai, Y.-S., Gašević, D., & Mello, R. F. (2021). Automatic feedback in online learning environments: A systematic literature review. Computers and Education: Artificial Intelligence, 2, Article 100027. https://doi.org/10.1016/j.caeai.2021.100027
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
Holmes, W., Porayska-Pomsta, K., Holstein, K., et al. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(3), 504–526. https://doi.org/10.1007/s40593-021-00239-1
Hundhausen, C. D., Agrawal, A., & Agarwal, P. (2013). Talking about code: Integrating pedagogical code reviews into early computing courses. ACM Transactions on Computing Education, 13(3), Article 14. https://doi.org/10.1145/2499947.2499951
Ice, P., Curtis, R., Phillips, P., & Wells, J. (2007). Using asynchronous audio feedback to enhance teaching presence and students’ sense of community. Journal of Asynchronous Learning Networks, 11(2), 3–25. https://doi.org/10.24059/olj.v11i2.1724
Kasneci, E., Seßler, K., Küchemann, S., et al. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, Article 102274. https://doi.org/10.1016/j.lindif.2023.102274
Keuning, H., Jeuring, J., & Heeren, B. (2018). A systematic literature review of automated feedback generation for programming exercises. ACM Transactions on Computing Education, 19(1), Article 3. https://doi.org/10.1145/3231711
Kuh, G. D. (2008). High-impact educational practices: What they are, who has access to them, and why they matter. AAC&U. https://www.aacu.org/trending-topics/high-impact
Mayer, R. E. (2014). Cognitive theory of multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (2nd ed., pp. 43–71). Cambridge University Press. https://doi.org/10.1017/CBO9781139547369.005
Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090
Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448. https://doi.org/10.1080/02602930902862859
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189. https://doi.org/10.3102/0034654307313795
Winkelmes, M. A., Boye, A., & Tapp, S. (2016). Transparent design in higher education teaching and leadership. Stylus. https://www.routledge.com/Transparent-Design-in-Higher-Education-Teaching-and-Leadership-A-Guide-to-Implementing-the-Transparency-Framework-Institution-Wide-to-Improv/Winkelmes-Boye-Tapp/p/book/9781620368237 IUCAT: https://iucat.iu.edu/catalog/20508845
Winstone, N. E., & Carless, D. (2020). Designing effective feedback processes in higher education. Routledge. https://www.routledge.com/Designing-Effective-Feedback-Processes-in-Higher-Education-A-Learning-Focused-Approach/Winstone-Carless/p/book/9780815361633 IUCAT: https://iucat.iu.edu/catalog/20522170
Winstone, N. E., Nash, R. A., Parker, M., & Rowntree, J. (2017). Supporting learners’ agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist, 52(1), 17–37. https://doi.org/10.1080/00461520.2016.1207538