Evidence-Based Classroom Assessment Techniques (CATs) for STEM Courses

Teaching a large lecture course in a STEM discipline can feel like steering a cargo ship; you’re moving a lot of people in the same direction, but small adjustments can be hard to see and manage in real time. Traditional assessments (midterms, finals, projects) may measure end-point achievement, but they don’t always help faculty understand how students are learning along the way. This is where classroom assessment techniques (CATs; https://vcsacl.ucsd.edu/_files/assessment/resources/50_cats.pdf) come in: quick, research-backed methods that provide timely insights into student understanding, enabling instructors to adapt instruction while the course is still in motion.

Why CATs Matter in STEM Large-Enrollment Courses

Evidence from STEM education research underscores that formative assessment and feedback loops significantly improve student learning outcomes, especially in large courses where anonymity and disengagement can take hold. Studies show that structured opportunities for feedback (e.g., one-minute papers, peer assessments, low-stakes quizzes) can reduce achievement gaps and support retention in challenging majors.

At the same time, as Northwestern’s Principles of Inclusive Teaching (https://searle.northwestern.edu/resources/principles-of-inclusive-teaching/) notes, students often struggle not only with course content but also with the “hidden curriculum,” the unspoken rules about what “counts” as good work or participation (https://cra.org/crn/2024/02/expanding-career-pipelines-by-unhiding-the-hidden-curriculum-of-university-computing-majors/). Transparent communication about assessment criteria and expectations helps level the playing field.

High-Impact CATs for CS, Engineering, and Informatics

  • Algorithm Walkthroughs (Think-Alouds)
    Students articulate their reasoning step-by-step. Helps faculty identify gaps in procedural knowledge (a sample snippet appears after this list).

  • Debugging Minute Paper
    Prompt: “What was the most confusing bug/issue we discussed today, and why?” Surfaces common misconceptions in programming logic.

  • Concept Maps for Systems Thinking
    Students draw connections between components (e.g., CPU, memory, OS). Research shows concept mapping fosters transfer across domains.

  • Peer Review of HCI Prototypes
    Students exchange usability sketches with rubrics. Builds critique skills and awareness of user-centered design.

  • Low-Stakes Quizzing with Digital Dashboards
    LMS quizzes or polling tools provide immediate data on misconceptions while also scaffolding students’ goal monitoring.
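
As a concrete illustration of the think-aloud technique, here is a minimal sketch of the kind of snippet an instructor might project; the function and test values are hypothetical, chosen because binary search gives students clear state (low, mid, high) to narrate at each step.

```python
# Hypothetical walkthrough snippet: students narrate the values of
# low, mid, and high at each iteration and explain why the loop ends.
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1      # target lies in the upper half
        else:
            high = mid - 1     # target lies in the lower half
    return -1

print(binary_search([2, 5, 8, 12, 16], 12))  # 3
print(binary_search([2, 5, 8, 12, 16], 7))   # -1
```

Listening for where a student’s narration falters (for example, at the update of high) pinpoints exactly which step of the procedure has not yet been internalized.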

Making CATs Inclusive in Large Lecture Halls

To avoid reinforcing inequities, instructors should:

  • Clarify criteria with rubrics for coding projects, design critiques, or participation.

  • Co-create ground rules for collaboration in labs and online forums, ensuring respectful and equitable engagement.

  • Balance rigor and empathy: challenge students while providing structures that acknowledge different starting points and prior knowledge.

Putting It into Practice

  • In a 250-student programming class, use a digital Muddiest Point poll after each lecture, then address top confusions in the next class.

  • In an HCI course, scaffold peer review CATs for wireframes inside the LMS, combining digital rubrics with analog small-group feedback.

  • In a systems engineering class, embed progress dashboards with reflective CAT prompts (“Where are you stuck? What resource might help?”). This makes metacognition visible and actionable.

Final Thought

Large-enrollment CS, engineering, informatics, and HCI courses don’t have to feel impersonal or assessment-heavy. By integrating classroom assessment techniques, faculty can design courses that are responsive, transparent, and inclusive. The result: students who not only master disciplinary knowledge but also learn how to manage their own learning, a skill set essential for both the classroom and the future of work.

Further Reading:

  1. Angelo & Cross’s Classroom Assessment Techniques https://iucat.iu.edu/catalog/20750208
    50+ adaptable CATs. For large STEM courses, techniques like the “Muddiest Point” or “Background Knowledge Probe” are especially powerful.

  2. Nilson’s Teaching at Its Best https://iucat.iu.edu/catalog/16660002
    Offers frameworks for aligning CATs with learning objectives—critical in CS/engineering courses where problem-solving, debugging, and design thinking are central.

  3. Northwestern University, Principles of Inclusive Teaching (https://searle.northwestern.edu/resources/principles-of-inclusive-teaching/) and Making Large Classes Feel Smaller (https://searle.northwestern.edu/resources/our-tools-guides/learning-teaching-guides/making-large-classes-feel-smaller.html)

Supporting non-majors in introductory computer courses

The article “Exploring Relations between Programming Learning Trajectories and Students’ Majors” (https://dl.acm.org/doi/fullHtml/10.1145/3674399.3674497) investigates how students from various academic disciplines learn programming in a compulsory introductory programming course of 75 students, 40 majoring in CS and 35 in non-CS majors. “They were all freshmen without prior programming experience. Considering their similar scores of entrance exam to this university, it can be assumed that their levels of mathematical logic and computational thinking were roughly comparable”.

The authors note that “an increasing number of non-computer science students are now engaging in programming learning. However, they often struggle in early programming courses.” The researchers analyzed data from students’ learning processes to understand how their major influences their learning journey in programming.

The study found that students’ backgrounds and areas of study can affect how they approach and progress in learning programming. They suggest:

  1. Making Programming Relevant: When teaching programming to students who aren’t majoring in computer science, it’s important to connect the lessons to things that are important to them. For example, showing how programming can be used in art, music, or business can make the subject more interesting, especially at the start of the course.

  2. Paying Extra Attention to Struggling Students: Teachers should keep a close eye on students who are not doing well or aren’t very interested in the course. These students might need extra help to keep up, so they don’t fall behind. Connecting them with teaching assistants, Luddy tutors, and additional resources early in the semester could be helpful.

  3. Using Tests to Track Progress: For computer science students, instructors can use quizzes and smaller tests throughout the semester to gauge how well they are learning. For non-CS students who are doing well, however, these smaller tests might not show their full abilities: they might be good at memorizing facts or completing basic tasks on a test without fully grasping the deeper concepts or being able to apply the knowledge in real-world situations. Instructors therefore need to be thoughtful and consider other ways to evaluate these students’ skills, not just smaller tests.

Example:

Imagine a student in a business major who is acing the quizzes in a programming class. They might be good at solving problems that are simple and similar to what they’ve studied, but the quizzes might not show how well they can use programming to solve real business problems. Faculty might need to look at other work, like projects or group activities, to better understand the student’s true abilities.

Implications for Teaching and Learning:

  • Tailored Instruction: Educators can design programming courses that consider the diverse backgrounds of students, offering different learning paths or support based on their major. Example: In a programming class, students from a data science major might already have some knowledge of coding, so the instructor could offer them more advanced challenges while giving students from a humanities background more basic programming tasks. This ensures that all students are working at a level that matches their prior knowledge, making learning more effective.

  • Early Support: Providing additional resources or guidance early in the course can help students who might struggle due to their major’s focus, ensuring they keep up with the material. Example: In the first few weeks of a programming course, an instructor might offer extra study sessions or online tutorials for students from non-technical majors (like business or social sciences). These students may find programming challenging, so additional support would help them catch up and build their confidence early in the course.

  • Encouraging Diverse Majors: Encouraging students from various disciplines to engage with programming can enrich their learning experience and broaden their skill set. Example: A university might organize workshops to show students from creative fields (like art or design) how programming can help them bring their ideas to life, such as creating interactive websites or digital art. Encouraging students from these fields to explore programming opens new possibilities for their careers and learning.

By understanding the relationship between a student’s major and their programming learning trajectory, educators can create more effective and supportive learning environments.

Classroom Assessment Techniques

A new (2024) edition of the classic book, Classroom Assessment Techniques: Formative Feedback Tools for College and University Teachers, is available in the IU Library:

https://iucat.iu.edu/catalog/20750208

Classroom Assessment Techniques (CATs) are simple, low-pressure ways to check how well students are understanding the material. These methods are efficient, student-centered strategies that provide valuable insights into learning progress. Instructors can use feedback from CATs to adjust activities, offer extra support, or change the pace of the class to better meet student needs. CATs are not just about assessment—they also enhance learning. Here’s how:

  • Focus Students’ Attention: Students often come to class distracted by other concerns. Starting with a quick CAT activity can help them focus and prepare to engage.

  • Spot Trouble Early: A simple check-in at the beginning of class can reveal concepts that need more explanation or clarification, ensuring everyone is on the same page.

The book is a practical, research-based handbook that helps faculty assess student learning at the classroom level. It offers tools for formative assessment applicable in face-to-face, hybrid, and online learning environments. While we have discussed the previous edition and related resources in the past, the new edition integrates 30 years of research and classroom practice, providing updated and field-tested assessment techniques. The book divides CATs into several categories:

Categories of Classroom Assessment Techniques (CATs) – Chapters 8-17 (adapted/edited with technical examples):

  1. Knowledge Recall & Understanding

Empty Outline

  • Students are given a partially completed algorithm design process outline and must fill in the missing steps.

  • Helps students recall fundamental software development methodologies (e.g., Waterfall, Agile, Scrum).

RSQC2 (Recall, Summarize, Question, Connect, Comment)

  • After a lesson on supervised vs. unsupervised learning, students:

    • Recall key definitions.

    • Summarize the differences.

    • Question a potential challenge in real-world applications.

    • Connect the concept to clustering methods in AI.

    • Comment on ethical concerns in algorithmic bias.

  2. Application

Concept Maps

  • Students create a concept map linking usability principles (e.g., learnability, efficiency, satisfaction) to a real-world user interface (such as a mobile banking app).

RSQC2

  • After discussing autonomous systems, students create a summary matrix evaluating:

    • Sensors used in self-driving cars

    • How decision-making algorithms function

    • Challenges in real-world implementation

  3. Problem Solving

What’s the Principle?

  • Given a dataset and an incorrectly applied machine learning model, students must identify the underlying principle that was violated (e.g., overfitting, lack of feature normalization); see the sketch below.
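
As a hedged sketch of such an exercise (the data and polynomial degrees are hypothetical, not drawn from the book), the snippet below fits two models to noisy data; students would be asked to name the violated principle when the degree-9 fit’s near-zero training error masks its poor test error (overfitting).

```python
# Hypothetical "What's the Principle?" exercise: which model violates
# a core principle, and which principle is it? (Answer: overfitting.)
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

# Hold out every other point as a simple test split.
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The degree-9 polynomial interpolates the ten training points almost exactly, so its training error alone looks excellent; only the held-out points reveal the problem.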

Peer Debugging Sessions

  • Students review a piece of malfunctioning code and collaboratively apply debugging strategies (a sample handout follows this item).

  • Helps them develop problem-solving approaches to software engineering.
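
A minimal sketch of the kind of handout an instructor might prepare for such a session; the buggy function is hypothetical, seeded with a classic off-by-one error for students to locate, explain, and fix.

```python
# Hypothetical peer-debugging handout: the function should return the
# mean of a list, but a classic off-by-one error skips the first item.
def average(values):
    total = 0
    for i in range(1, len(values)):  # BUG: starts at 1, skipping values[0]
        total += values[i]
    return total / len(values)

# The corrected version groups typically converge on:
def average_fixed(values):
    return sum(values) / len(values)

print(average([2, 4, 6]))        # 3.333... (wrong)
print(average_fixed([2, 4, 6]))  # 4.0 (correct)
```

Asking groups to state not just the fix but the general class of error (“off-by-one in a loop bound”) pushes the discussion from patching code toward reusable debugging strategies.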

  4. Critical Thinking & Analysis

Blooming (Using Bloom’s Taxonomy)

  • Students analyze real-world accessibility failures in a user interface, progressing through Bloom’s levels:

    • Understanding accessibility guidelines.

    • Applying them to UI analysis.

    • Analyzing gaps in existing designs.

    • Evaluating how these impact user experience.

    • Creating a revised design proposal.

Comparing AI Bias in Decision-Making

  • Students critique different AI models used in hiring processes, identifying bias and ethics-related concerns.

  5. Synthesis & Creative Thinking

Student-Generated Questions

  • Students create quiz questions related to data structures and algorithms, peer-reviewing each other’s questions for complexity and clarity.

Concept Maps for IoT Networks

  • Students visually map out an Internet of Things (IoT) system, including:

    • Sensors

    • Cloud processing

    • Data security mechanisms

  6. Attitudes & Values

Profiles of Admirable Individuals

  • Students select a UX designer (e.g., Don Norman, Jakob Nielsen) and analyze how their design philosophy aligns with current industry best practices.

Reflective Journal on Bias in AI

  • Students keep a weekly reflection log on how AI-driven systems impact marginalized communities, promoting ethical awareness.

  7. Self-Assessment of Learning

Goal Ranking & Matching

  • Students list their learning objectives in a course on AI-driven robotics and compare them with the instructor’s objectives to identify gaps in understanding.

Debugging Logs for Self-Assessment

  • Students track their own debugging process, identifying mistakes and reflecting on how they could improve their approach in future coding projects (one possible log format is sketched below).
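
One possible way to structure such a log is sketched below; the field names and the example entry are hypothetical, and a plain notebook or spreadsheet would serve equally well.

```python
# Hypothetical debugging-log format: each entry records what went wrong,
# what the student guessed, what fixed it, and what they learned.
import csv, datetime, os

LOG_FIELDS = ["date", "symptom", "hypothesis", "fix", "lesson"]

def log_entry(path, **entry):
    """Append one debugging episode to a CSV log, adding a header if new."""
    entry.setdefault("date", datetime.date.today().isoformat())
    is_new = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(entry)

log_entry("debug_log.csv",
          symptom="IndexError in grade parser",
          hypothesis="loop bound off by one",
          fix="iterate over rows directly instead of indexing past the end",
          lesson="re-check loop bounds before running")
```

Reviewing the “lesson” column at the end of a project turns scattered debugging episodes into an explicit record of growth.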

  8. Learning & Study Skills

Learner Autobiography

  • Students write a brief reflection on:

    • How they approach learning programming languages.

    • Their experiences with learning statistics & machine learning.

    • How they plan to strengthen their weak areas.

  9. Perceptions of Learning Activities & Assessments

Reading & Video Ratings

  • After a video lecture on prototyping techniques, students rate its clarity and usefulness, providing feedback for future content improvements.

Prototype Peer Reviews

  • Students rate each other’s engineering prototypes based on innovation, feasibility, and efficiency, providing constructive feedback.

  10. Perceptions of Teaching & Courses

Teacher-Designed Feedback Forms

  • Students provide mid-semester feedback on:

    • Pacing of technical concepts.

    • Usefulness of coding assignments.

    • Need for more real-world applications in AI ethics case studies.

Agile Retrospectives for Course Reflection

  • Inspired by Agile methodologies, students participate in sprint retrospectives, reflecting on:

    • What went well

    • What could be improved

    • Next steps for refining their coding workflows

Quick Tip: Deliver Assignment Instructions as a low-stakes quiz

There are several ways that you can provide students with directions for an assignment:

  • Provide a write-up in a handout or post in Canvas.

  • Read parts of the directions to the class and ask if anyone has questions. 

  • Create a quick video explaining the assignment. 

As an alternative, you can use the Canvas quiz or quick check function to walk students through assignment expectations step-by-step. Students must affirmatively answer questions about different components of the assignment, ranging from a simple “Yes, I understand” to choosing between options.

A question about the components of a final programming assignment, for example, might ask students how many smaller assignments they will need to complete over the semester. While this adds a small extra step, it helps reinforce the importance of carefully reading assignment instructions. Additionally, once you’ve created the quiz, you can reuse it for every assignment.

Delivering the instructions as a quiz guides students through the expectations step-by-step and requires them to affirm that they have read the instructions. Every small thing we can do to encourage students to slow down and read instructions carefully helps.

Exam Debrief

Dawn M. Wiggins, a faculty member in the Mathematics Department at Illinois Valley Community College, argues that exam debriefs can help students see how self-defeating behaviors negatively affect their exam results. However, the debrief she describes (including the questions she asked; see https://oncourseworkshop.com/self-awareness/exam-debrief/) goes beyond providing students with the correct answers on the test.

Why use an exam debrief?

Favero & Hendricks (2016) explain that exam debriefs offer faculty the opportunity to confront study strategy issues as well as garner an understanding of misconceptions students may hold about the content. Wiggins shares, “I think there is a window of opportunity immediately following an exam to help students identify the things they did to prepare for the exam and the things that they could do better the next time”. Further, exam debriefs offer students the opportunity to think critically about their experience on the exam and gain a better understanding of their learning process.

What is the process for debriefing an exam?

Weimer (2018) summarized Favero & Hendricks’s exam debriefing (ED) process:

Part 1: Students looked carefully at the questions they missed and tried to determine why each question was missed. 
Part 2: Students then examined the questions to see if there was a pattern emerging. Did they miss questions for the same reason?
Part 3: Students prepared a brief description of how they studied for the exam, including the amount of time devoted to studying.
Part 4: Based on the information gleaned so far, students identified what changes they thought they could make that might help them better prepare for the next exam. They were given a list of areas where changes could be made:

  • time on task,

  • attending to detail,

  • using active learning strategies, and

  • general study habits.

(Examples were given in each of these areas; see the additional questions in the example linked above.)

In the ED process, students selected the behavior changes they believed they needed to make. All of the students selected options from the active learning category, in part, the authors believe, because those activities were demonstrated, modeled, and used in class. For example, many students reported using flashcards, but only as devices that helped them memorize details like definitions. In class, Favero used a flashcard activity in their human anatomy course that showed students how flashcards can be used more fruitfully to show relationships between, in this case, anatomical structure and function.

Suggestions for implementing an exam debrief (from McGill University)

  • Include a debrief questionnaire on the last page of the exam.

  • Distribute a debrief questionnaire when corrected exams are returned.

  • Allow class time to fill out a debrief questionnaire.

  • Make the debrief questionnaire an online assignment.

Additional Resources: