As you design quizzes, projects, and exams, it’s worth pausing to ask: What am I really trying to assess? Too often, assessments measure peripheral skills such as memorization rather than the intended learning outcomes. For example, a timed coding exam may end up evaluating typing speed and syntax recall more than algorithmic thinking or problem-solving strategy. Similarly, a multiple-choice exam on HCI principles may privilege memorization over the ability to apply design heuristics to new contexts.
Evidence-based practices to align assessments with your goals:
Backward Design (Wiggins & McTighe, 2005)
Start with the outcome: Do you want students to demonstrate abstraction, debugging, empathy for users, or system-level thinking?
Then design an assessment that directly elicits that performance.
CS Example: Backward design: Integrating active learning into undergraduate computer science courses (2023): https://www.tandfonline.com/doi/epdf/10.1080/2331186X.2023.2204055?needAccess=true
Constructive Alignment (Biggs, 1996)
Ensure that learning activities, assessments, and outcomes are in sync. For instance, if collaboration is a stated goal, include a group design critique, not just individual tests.
Example: Reflections on applying constructive alignment with formative feedback for teaching introductory programming and software architecture (2016): https://dl-acm-org.proxyiub.uits.iu.edu/doi/pdf/10.1145/2889160.2889185
Authentic Assessment (Herrington & Herrington, 2007)
Use real-world tasks (e.g., designing a database for a case study client, creating a usability test plan, or simulating an engineering design review). Research shows authentic assessments better support transfer of learning to workplace contexts: https://www.sciencedirect.com/science/article/pii/S0191491X24001044
Reduce Construct-Irrelevant Barriers
If the skill being assessed is debugging, for example, provide starter code so students aren’t penalized for setup. If the goal is conceptual understanding, consider allowing open-book resources so recall doesn’t overshadow reasoning.
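For instance, a debugging exercise can ship with its scaffolding already complete, so the grade reflects fault-finding rather than environment setup. A minimal sketch of what such a handout might look like (the function, the seeded bug, and the test harness are hypothetical illustrations, not from any particular course):

```python
# Starter code handed to students: the harness and expected output are
# already complete, so only the debugging itself is assessed.

def running_average(values):
    """Return the running average after each element of `values`."""
    averages = []
    total = 0
    for value in values:
        total += value
        # Seeded bug (this marker would be removed in the student handout):
        # divides by the full length instead of the count so far.
        averages.append(total / len(values))
    return averages

# Provided test harness: students fix the function until this passes.
if __name__ == "__main__":
    expected = [1.0, 1.5, 2.0]
    actual = running_average([1, 2, 3])
    print("PASS" if actual == expected else f"FAIL: got {actual}")
```

Because the harness is given, a student who has never configured a test runner is on equal footing with one who has.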
Students often struggle not because the concepts are beyond their ability, but because the expectations of the assessment are unclear.
For example:
A programming assignment asks students to “optimize” code, but it’s unclear whether grading is based on correctness, runtime efficiency, readability, or documentation (see the sketch after this list).
A human–computer interaction (HCI) project requires a prototype, but is the emphasis on creativity, usability testing, or fidelity of the mockup?
An informatics paper asks for “analysis,” but it’s unclear whether success depends on critical thinking, proper use of data, or following citation conventions.
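To make the first example concrete, here are two defensible responses to a hypothetical “optimize this function” prompt; which one earns full marks depends entirely on whether the unstated criterion is runtime or readability. The prompt and function names below are invented for illustration:

```python
# Hypothetical prompt: "optimize" a function that checks whether any
# two numbers in a list sum to a target value.

def has_pair_readable(nums, target):
    # Optimized for readability: an obvious double loop, O(n^2) time.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def has_pair_fast(nums, target):
    # Optimized for runtime: single pass with a set, O(n) time,
    # at the cost of an extra data structure to explain.
    seen = set()
    for n in nums:
        if target - n in seen:
            return True
        seen.add(n)
    return False

# Both agree on behavior; they differ only in which notion of
# "optimized" they embody.
assert has_pair_readable([3, 5, 2, 8], 10)
assert has_pair_fast([3, 5, 2, 8], 10)
```

A rubric that says “optimize for runtime; readability is not graded” resolves the guesswork instantly.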
When assessments lack clarity, students must guess what matters. This shifts the focus from demonstrating learning to playing a hidden “what does the professor want?” game.
Why It Matters (Evidence-Based):
Cognitive Load: Ambiguous assessments create unnecessary cognitive load—students waste energy interpreting instructions instead of applying knowledge (Sweller, 2011).
Equity Impact: Lack of clarity disproportionately disadvantages first-generation and other structurally disadvantaged students, who may not have tacit knowledge about faculty expectations (Winkelmes et al., 2016).
Misalignment: As mentioned above, vague assessments often misalign with course outcomes, undermining constructive alignment (Biggs, 1996).
What Faculty Can Do:
State the Core Construct: Ask yourself: Am I assessing correctness, creativity, reasoning, or communication? Then state it explicitly.
Communicate Priorities: If multiple criteria matter, indicate their relative weight (e.g., correctness 50%, efficiency 30%, documentation 20%); a worked sketch follows this list.
Provide a Sample Response: A brief example—annotated to show what “counts”—helps students see what you value.
Check for Hidden Criteria: If you penalize for style, clarity, or teamwork, ensure that’s written down. Otherwise, students perceive grading as arbitrary.
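One lightweight way to combine these suggestions is to publish the weighting itself, so students can see exactly how a score is composed and that nothing is penalized off the books. A minimal sketch, using the hypothetical weights from the example above:

```python
# Hypothetical published rubric: every graded criterion is listed with
# its weight, so nothing (e.g., documentation) is a hidden penalty.
RUBRIC = {
    "correctness": 0.50,
    "efficiency": 0.30,
    "documentation": 0.20,
}

def weighted_score(criterion_scores):
    """Combine per-criterion scores (each 0-100) using the published weights."""
    assert set(criterion_scores) == set(RUBRIC), "score every listed criterion"
    return sum(RUBRIC[c] * criterion_scores[c] for c in RUBRIC)

# Example: strong correctness, weaker documentation.
print(weighted_score({"correctness": 95, "efficiency": 80, "documentation": 60}))
# -> 83.5  (0.5*95 + 0.3*80 + 0.2*60)
```

Publishing something this explicit also satisfies the hidden-criteria check: if style or documentation matters, it appears in the table with a weight.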
Faculty Reflection Prompt:
Pick one upcoming assignment and ask yourself: If I gave this to a colleague in my field, would they immediately know what I was assessing? Or would they have to guess? If the latter, refine the task or rubric until the answer is obvious.
Takeaway: Unclear assessments don’t just frustrate students; they distort what is being measured. By clarifying exactly what skill or knowledge is under the microscope, faculty ensure assessments are fair, transparent, and aligned with learning outcomes. Before finalizing any assignment or test, ask yourself: Am I measuring the skill that truly matters, or something adjacent? That small moment of reflection can make assessments more equitable, meaningful, and grounded in the professional practices of your discipline.