Classroom Assessment Techniques

A new (2024) edition of the classic book, “Classroom Assessment Techniques: Formative Feedback Tools for College and University Teachers,” is available in the IU Library:

https://iucat.iu.edu/catalog/20750208

Classroom Assessment Techniques (CATs) are simple, low-pressure ways to check how well students understand the material. These efficient, student-centered strategies provide valuable insight into learning progress. Instructors can use feedback from CATs to adjust activities, offer extra support, or change the pace of the class to better meet student needs. CATs are not just about assessment: they also enhance learning. Here’s how:

  • Focus Students’ Attention: Students often come to class distracted by other concerns. Starting with a quick CAT activity can help them focus and prepare to engage.

  • Spot Trouble Early: A simple check-in at the beginning of class can reveal concepts that need more explanation or clarification, ensuring everyone is on the same page.

The book is a practical, research-based handbook that helps faculty assess student learning at the classroom level. It offers formative assessment tools applicable in face-to-face, hybrid, and online learning environments. While we have discussed the previous edition and related resources in the past, the new edition integrates 30 years of research and classroom practice, providing updated, field-tested assessment techniques. The book organizes CATs into several categories.

Categories of Classroom Assessment Techniques (CATs) – Chapters 8-17 (adapted/edited with technical examples):

  1. Knowledge Recall & Understanding

Empty Outline

  • Students are given a partially completed algorithm design process outline and must fill in the missing steps.

  • Helps students recall fundamental software development methodologies (e.g., Waterfall, Agile, Scrum).
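In a programming course, the empty-outline idea translates naturally into a fill-in-the-blank code skeleton. Below is a hypothetical example (not from the book), shown as the completed reference solution; the commented lines mark what students would be asked to fill in:

```python
# Reference solution for a fill-in-the-blank "empty outline" exercise.
# Students receive this binary search with the marked lines blanked out
# and must supply the missing steps.

def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # blank 1: how is the midpoint chosen?
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1              # blank 2: which half do we keep?
        else:
            hi = mid - 1              # blank 3: and in the other case?
    return -1                         # blank 4: what if the target is absent?
```

Because each blank corresponds to one decision in the algorithm, grading the outline doubles as a quick diagnostic of which step students misunderstand.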

RSQC2 (Recall, Summarize, Question, Connect, Comment)

  • After a lesson on supervised vs. unsupervised learning, students:

    • Recall key definitions.

    • Summarize the differences.

    • Question a potential challenge in real-world applications.

    • Connect the concept to clustering methods in AI.

    • Comment on ethical concerns in algorithmic bias.

  2. Application

Concept Maps

  • Students create a concept map linking usability principles (e.g., learnability, efficiency, satisfaction) to a real-world user interface (such as a mobile banking app).

Summary Matrix

  • After discussing autonomous systems, students create a summary matrix evaluating:

    • Sensors used in self-driving cars

    • How decision-making algorithms function

    • Challenges in real-world implementation

  3. Problem Solving

What’s the Principle?

  • Given a dataset and an incorrectly applied machine learning model, students must identify the underlying principle that was violated (e.g., overfitting, lack of feature normalization).
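As a concrete version of this exercise, the sketch below (an invented illustration, not from the book) shows a nearest-neighbor comparison that violates the feature-normalization principle: with income and age left on their raw scales, income dominates the distance calculation, and scaling flips which record looks "closest."

```python
from math import dist

# Hypothetical records: (annual income, age). One feature spans tens of
# thousands of units, the other only tens.
a = (50_000, 25)
b = (50_500, 65)   # similar income, very different age
c = (80_000, 26)   # different income, nearly the same age

# Violated principle: distances on raw features are dominated by income,
# so b appears "closer" to a than c does.
raw_ab, raw_ac = dist(a, b), dist(a, c)

# Min-max scaling (ranges assumed for this toy data) lets both features
# contribute comparably, and the ranking flips.
def scale(p):
    return ((p[0] - 40_000) / 50_000, (p[1] - 20) / 50)

scaled_ab, scaled_ac = dist(scale(a), scale(b)), dist(scale(a), scale(c))
# raw distances:    b is closer to a
# scaled distances: c is closer to a
```

Students asked "what's the principle?" should name feature normalization and explain why the raw-scale ranking is misleading.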

Peer Debugging Sessions

  • Students review a piece of malfunctioning code and collaboratively apply debugging strategies.

  • Helps them develop problem-solving approaches to software engineering.
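A session like this needs a seed artifact. The function below (an invented example, not from the book) contains a classic off-by-one bug for a group to locate; the final assertion documents the faulty behavior they should first reproduce.

```python
def running_total(values):
    """Intended to return the sum of all values -- but the loop stops
    one element early, so the last value is silently dropped."""
    total = 0
    for i in range(len(values) - 1):   # seeded bug: should be range(len(values))
        total += values[i]
    return total

# Reproducing the failure is step one of the debugging session:
assert running_total([1, 2, 3]) == 3   # expected 6 -- the bug in action
```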

  4. Critical Thinking & Analysis

Blooming (Using Bloom’s Taxonomy)

  • Students analyze real-world accessibility failures in a user interface, progressing through Bloom’s levels:

    • Understanding accessibility guidelines.

    • Applying them to UI analysis.

    • Analyzing gaps in existing designs.

    • Evaluating how these impact user experience.

    • Creating a revised design proposal.

Comparing AI Bias in Decision-Making

  • Students critique different AI models used in hiring processes, identifying bias and ethics-related concerns.

  5. Synthesis & Creative Thinking

Student-Generated Questions

  • Students create quiz questions related to data structures and algorithms, peer-reviewing each other’s questions for complexity and clarity.

Concept Maps for IoT Networks

  • Students visually map out an Internet of Things (IoT) system, including:

    • Sensors

    • Cloud processing

    • Data security mechanisms
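For review or grading, a finished concept map can be captured as a small labeled graph. The sketch below is one hypothetical encoding (the node and edge names are illustrative, not a prescribed taxonomy), with a helper that flags branches unreachable from a starting concept:

```python
# A student's IoT concept map as an adjacency structure: nodes are
# concepts, edges are labeled relationships. All names are illustrative.
iot_map = {
    "temperature sensor":   [("publishes to", "MQTT broker")],
    "MQTT broker":          [("forwards to", "cloud processing")],
    "cloud processing":     [("stores in", "time-series database"),
                             ("secured by", "TLS encryption")],
    "TLS encryption":       [],
    "time-series database": [],
}

def reachable(graph, start):
    """Concepts reachable from a starting node -- useful for spotting
    orphaned branches in a student's map."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(target for _, target in graph.get(node, []))
    return seen
```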

  6. Attitudes & Values

Profiles of Admirable Individuals

  • Students select a UX designer (e.g., Don Norman, Jakob Nielsen) and analyze how their design philosophy aligns with current industry best practices.

Reflective Journal on Bias in AI

  • Students keep a weekly reflection log on how AI-driven systems impact marginalized communities, promoting ethical awareness.

  7. Self-Assessment of Learning

Goal Ranking & Matching

  • Students list their learning objectives in a course on AI-driven robotics and compare them with the instructor’s objectives to identify gaps in understanding.

Debugging Logs for Self-Assessment

  • Students track their own debugging process, identifying mistakes and reflecting on how they could improve their approach in future coding projects.

  8. Learning & Study Skills

Learner Autobiography

  • Students write a brief reflection on:

    • How they approach learning programming languages.

    • Their experiences with learning statistics & machine learning.

    • How they plan to strengthen their weak areas.

  9. Perceptions of Learning Activities & Assessments

Reading & Video Ratings

  • After a video lecture on prototyping techniques, students rate its clarity and usefulness, providing feedback for future content improvements.

Prototype Peer Reviews

  • Students rate each other’s engineering prototypes based on innovation, feasibility, and efficiency, providing constructive feedback.

  10. Perceptions of Teaching & Courses

Teacher-Designed Feedback Forms

  • Students provide mid-semester feedback on:

    • Pacing of technical concepts.

    • Usefulness of coding assignments.

    • Need for more real-world applications in AI ethics case studies.

Agile Retrospectives for Course Reflection

  • Inspired by Agile methodologies, students participate in sprint retrospectives, reflecting on:

    • What went well

    • What could be improved

    • Next steps for refining their coding workflows