Strategy 3: Tiered Feedback So You Don’t Burn Out
Not every student needs the same type or depth of feedback. Research on formative assessment suggests that differentiated feedback can be both more effective and more sustainable for instructors (Shute, 2008; Winstone & Carless, 2020).
For a class of about 30 students, a realistic model looks like this:
Top 20 percent: Brief text comments highlighting exemplary practices and optional stretch ideas. Time per student: two to three minutes.
Middle 60 percent: Five- to seven-minute screencasts focused on two or three rubric criteria that would move the work to the next level.
Bottom 20 percent: Five- to seven-minute screencast plus an invitation to meet, emphasizing growth and support rather than penalty.
This typically adds up to about three hours of feedback time, comparable to detailed written comments but with higher instructional payoff (Ice et al., 2007); the rough arithmetic is worked out below.
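As a quick back-of-the-envelope check on that estimate, the calculation below assumes 30 students, the 20/60/20 split described above, and the midpoint of each time range. The exact minutes are illustrative, not prescriptive:

```python
# Rough time budget for tiered feedback in a 30-student section.
# Minutes per student are midpoints of the ranges above; adjust to your course.
class_size = 30

tiers = {
    "top (brief text comments)":      {"share": 0.20, "minutes": 2.5},
    "middle (5-7 minute screencast)": {"share": 0.60, "minutes": 6.0},
    "bottom (screencast + check-in)": {"share": 0.20, "minutes": 6.0},
}

total_minutes = 0
for name, tier in tiers.items():
    students = round(class_size * tier["share"])
    minutes = students * tier["minutes"]
    total_minutes += minutes
    print(f"{name}: {students} students x {tier['minutes']} min = {minutes:.0f} min")

print(f"Total: {total_minutes:.0f} minutes (about {total_minutes / 60:.1f} hours)")
```

The total comes to roughly 160 minutes, which is where the "about three hours" figure comes from once you allow some margin for meeting invitations and follow-up.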
Efficiency improves further if you create two or three short videos addressing common issues and link them instead of repeating the same explanations student by student (Borup et al., 2013).
Adapting Strategy 3 for large-enrollment courses (e.g., 100+ students) requires shifting from individual tiering to collective and automated tiering. While individual screencasts are sustainable for 30 students, they become a “second job” for 150.
Here is how to scale the tiered feedback model without losing the instructional payoff:
The Scaled Tiered Model
In a large course, you can replace individual screencasts with “Problem-Set Debriefs” and automated triggers; a short scripting sketch of the tiering logic follows the list below.
Top Tier (High Achievers): Use automated “Success” triggers. In platforms like Canvas or Gradescope, you can set a rule that students scoring above 90% receive a pre-written comment praising their specific mastery and providing “stretch” links to advanced professional documentation or extra-credit challenges.
Middle Tier (The “Global” Feedback): Instead of individual videos, record three 2-minute “Common Pitfall” screencasts based on the most frequent errors identified during grading. Link these videos directly in the rubric or assignment comments so students can see expert reasoning for the specific logic errors they likely encountered.
Bottom Tier (High Support): Identify students who failed to meet core learning objectives and send a templated, supportive “Check-in” message through the Student Engagement Roster. Invite them to a group “Office Hour Lab” specifically designed to review the foundational concepts they missed, emphasizing growth over penalty.
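If your platform does not support score-based rules directly, the same tiering can be scripted from a gradebook export. The sketch below is a minimal illustration only: the CSV column names, the 90 and 70 percent cutoffs, and the message templates are hypothetical placeholders, not a built-in Canvas or Gradescope feature.

```python
import csv

# Hypothetical thresholds; adjust to your rubric and course norms.
TOP_CUTOFF = 90
PASS_CUTOFF = 70

MESSAGES = {
    "top":    "Strong work. If you want a stretch challenge, see the advanced documentation linked in the module.",
    "middle": "Please watch the three 'Common Pitfall' debrief videos linked in the rubric comments.",
    "bottom": "Let's get you back on track. Please join this week's Office Hour Lab to review the core concepts.",
}

def tier_for(score: float) -> str:
    """Map a numeric score to a feedback tier."""
    if score >= TOP_CUTOFF:
        return "top"
    if score >= PASS_CUTOFF:
        return "middle"
    return "bottom"

def build_feedback(gradebook_csv: str):
    """Yield (email, tier, message) for each row of an exported gradebook CSV
    that has 'email' and 'score' columns (hypothetical export format)."""
    with open(gradebook_csv, newline="") as f:
        for row in csv.DictReader(f):
            tier = tier_for(float(row["score"]))
            yield row["email"], tier, MESSAGES[tier]

if __name__ == "__main__":
    for email, tier, message in build_feedback("assignment3_grades.csv"):
        print(f"{email} [{tier}]: {message}")
```

The generated messages can then be pasted into LMS comment fields or sent through whatever bulk-messaging tool your institution provides; the point is that the tiering logic, not the delivery mechanism, is what has to scale.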
Strategy 4: Thoughtful Use of AI for Feedback and Grading
Automated and AI-assisted grading tools (such as Gradescope) can reduce routine grading work and provide fast correctness feedback, but they are not substitutes for professional judgment. Research shows that automated systems are effective at identifying syntax errors, style violations, and common mistakes, but struggle with design quality, documentation, and nuanced reasoning (Keuning et al., 2018).
Use AI or automated tools to:
Run test suites and identify edge cases.
Enforce style and basic security rules.
Provide immediate, low-stakes feedback for iteration (see the harness sketch after this list).
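As one illustration of that division of labor, a thin wrapper around an existing test runner and linter can give students immediate, low-stakes feedback before they submit. The sketch below assumes pytest and flake8 are installed and run from the assignment directory; it reports pass/fail on the automated criteria and deliberately leaves grading judgments to the instructor.

```python
import subprocess
import sys

def run(label: str, command: list[str]) -> bool:
    """Run one automated check and report its output without assigning a grade."""
    print(f"--- {label} ---")
    result = subprocess.run(command, capture_output=True, text=True)
    print(result.stdout or result.stderr)
    return result.returncode == 0

if __name__ == "__main__":
    checks = [
        ("Unit tests (correctness and edge cases)", ["pytest", "-q"]),
        ("Style check", ["flake8", "."]),
    ]
    results = [run(label, cmd) for label, cmd in checks]
    # Architecture, documentation, and security reasoning remain human-reviewed.
    if all(results):
        print("All automated checks passed.")
        sys.exit(0)
    print("Some automated checks failed; revise and rerun before submitting.")
    sys.exit(1)
```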
Reserve human review for:
Architecture and design choices.
Maintainability and documentation.
Security practices and professional conventions.
To keep this ethical and transparent:
Clearly label which rubric criteria are auto-scored and which are human-scored, as in the example following this list (Holmes et al., 2022).
Provide a process for students to request review when automated feedback seems incorrect.
Periodically audit AI-assisted scoring for bias or uneven impact (Baker & Hawn, 2021).
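One lightweight way to make that labeling concrete is to publish, alongside the rubric, which criteria are machine-scored and which are instructor-scored. The structure below is a hypothetical example; the criterion names and weights are placeholders, not any particular platform's rubric format.

```python
# Hypothetical rubric metadata; criteria, weights, and labels are illustrative.
RUBRIC = [
    {"criterion": "Correctness (test suite)",          "weight": 40, "scored_by": "auto"},
    {"criterion": "Style and basic security rules",    "weight": 10, "scored_by": "auto"},
    {"criterion": "Architecture and design choices",   "weight": 25, "scored_by": "human"},
    {"criterion": "Maintainability and documentation", "weight": 25, "scored_by": "human"},
]

assert sum(item["weight"] for item in RUBRIC) == 100, "Rubric weights should total 100%"

for item in RUBRIC:
    print(f'{item["criterion"]:<36} {item["weight"]:>3}%  scored by: {item["scored_by"]}')
```

Sharing the scored-by column with students also gives them a natural anchor for the review-request process described above.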
You can also frame AI as a learning partner rather than a grader by asking students to document how they used AI tools, rewarding reflective use, and setting clear boundaries around generated content (Kasneci et al., 2023).
Your Implementation Checklist
If you want a minimal, high-impact plan:
Finalize and share a detailed rubric aligned with professional practices (Reddy & Andrade, 2010).
Require a brief student self-assessment using the rubric (Andrade & Du, 2007).
Set up one reliable screencasting workflow and practice a short review (Mayer, 2014).
Plan a tiered feedback scheme so your time is targeted (Shute, 2008).
Create two or three short videos for common issues.
Clearly explain how automated tools are used and how students can respond (Holmes et al., 2022).
Reserve support time for students who are still developing core skills (Kuh, 2008; Winstone et al., 2017).
When feedback shows students what quality looks like, helps them see where they are, and gives them clear next steps, assessment becomes part of learning rather than an endpoint. In technical disciplines, that shift often determines whether students decide they don’t belong or begin to see themselves as capable, growing professionals.
References
Andrade, H., & Du, Y. (2007). Student responses to criteria-referenced self-assessment. Assessment & Evaluation in Higher Education, 32(2), 159–181. https://doi.org/10.1080/02602930600801928
Baker, R. S., & Hawn, A. (2021). Algorithmic bias in education. International Journal of Artificial Intelligence in Education, 32(4), 1052–1092. https://doi.org/10.1007/s40593-021-00285-9
Borup, J., West, R. E., & Graham, C. R. (2013). The influence of asynchronous video communication on learner social presence: A narrative analysis of four cases. Distance Education, 34(1), 48–63. https://doi.org/10.1080/01587919.2013.770427
Cavalcanti, A. P., Barbosa, A., Carvalho, R., Freitas, F., Tsai, Y.-S., Gašević, D., & Mello, R. F. (2021). Automatic feedback in online learning environments: A systematic literature review. Computers and Education: Artificial Intelligence, 2, Article 100027. https://doi.org/10.1016/j.caeai.2021.100027
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
Holmes, W., Porayska-Pomsta, K., Holstein, K., et al. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(3), 504–526. https://doi.org/10.1007/s40593-021-00239-1
Hundhausen, C. D., Agrawal, A., & Agarwal, P. (2013). Talking about code: Integrating pedagogical code reviews into early computing courses. ACM Transactions on Computing Education, 13(3), Article 14. https://doi.org/10.1145/2499947.2499951
Ice, P., Curtis, R., Phillips, P., & Wells, J. (2007). Using asynchronous audio feedback to enhance teaching presence and students’ sense of community. Journal of Asynchronous Learning Networks, 11(2), 3–25. https://doi.org/10.24059/olj.v11i2.1724
Kasneci, E., Seßler, K., Küchemann, S., et al. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, Article 102274. https://doi.org/10.1016/j.lindif.2023.102274
Keuning, H., Jeuring, J., & Heeren, B. (2018). A systematic literature review of automated feedback generation for programming exercises. ACM Transactions on Computing Education, 19(1), Article 3. https://doi.org/10.1145/3231711
Kuh, G. D. (2008). High-impact educational practices: What they are, who has access to them, and why they matter. AAC&U. https://www.aacu.org/trending-topics/high-impact
Mayer, R. E. (2014). Cognitive theory of multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (2nd ed., pp. 43–71). Cambridge University Press. https://doi.org/10.1017/CBO9781139547369.005
Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090
Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448. https://doi.org/10.1080/02602930902862859
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189. https://doi.org/10.3102/0034654307313795
Winkelmes, M. A., Boye, A., & Tapp, S. (2016). Transparent design in higher education teaching and leadership. Stylus. https://www.routledge.com/Transparent-Design-in-Higher-Education-Teaching-and-Leadership-A-Guide-to-Implementing-the-Transparency-Framework-Institution-Wide-to-Improv/Winkelmes-Boye-Tapp/p/book/9781620368237
Winstone, N. E., & Carless, D. (2020). Designing effective feedback processes in higher education. Routledge. https://www.routledge.com/Designing-Effective-Feedback-Processes-in-Higher-Education-A-Learning-Focused-Approach/Winstone-Carless/p/book/9780815361633
Winstone, N. E., Nash, R. A., Parker, M., & Rowntree, J. (2017). Supporting learners’ agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist, 52(1), 17–37. https://doi.org/10.1080/00461520.2016.1207538