Evidence-Based Classroom Assessment Techniques (CATs) for STEM Courses

Teaching a large lecture course in a STEM discipline can feel like steering a cargo ship; you’re moving a lot of people in the same direction, but small adjustments can be hard to see and manage in real time. Traditional assessments (midterms, finals, projects) may measure end-point achievement, but they don’t always help faculty understand how students are learning along the way. This is where classroom assessment techniques (CATs) come in (https://vcsacl.ucsd.edu/_files/assessment/resources/50_cats.pdf): quick, research-backed methods that provide timely insights into student understanding, enabling instructors to adapt instruction while the course is still in motion.

Why CATs Matter in STEM Large-Enrollment Courses

Evidence from STEM education research underscores that formative assessment and feedback loops significantly improve student learning outcomes, especially in large courses where anonymity and disengagement can take hold. Studies show that structured opportunities for feedback (e.g., one-minute papers, peer assessments, low-stakes quizzes) can reduce achievement gaps and support retention in challenging majors.

At the same time, as Northwestern’s Principles of Inclusive Teaching (https://searle.northwestern.edu/resources/principles-of-inclusive-teaching/) note, students often struggle not only with course content but also with the “hidden curriculum,” the unspoken rules about what “counts” as good work or participation (https://cra.org/crn/2024/02/expanding-career-pipelines-by-unhiding-the-hidden-curriculum-of-university-computing-majors/). Transparent communication about assessment criteria and expectations helps level the playing field.

High-Impact CATs for CS, Engineering, and Informatics

  • Algorithm Walkthroughs (Think-Alouds)
    Students articulate their reasoning step-by-step. Helps faculty identify gaps in procedural knowledge.

  • Debugging Minute Paper
    Prompt: “What was the most confusing bug/issue we discussed today, and why?” Surfaces common misconceptions in programming logic.

  • Concept Maps for Systems Thinking
    Students draw connections between components (e.g., CPU, memory, OS). Research shows concept mapping fosters transfer across domains.

  • Peer Review of HCI Prototypes
    Students exchange usability sketches with rubrics. Builds critique skills and awareness of user-centered design.

  • Low-Stakes Quizzing with Digital Dashboards
    LMS quizzes or polling tools provide immediate data on misconceptions while also scaffolding students’ goal monitoring.
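Responses to a Debugging Minute Paper or Muddiest Point poll arrive as free text, and even a rough tally can surface the top confusions before the next class. A minimal sketch in plain Python, using hypothetical student responses, counts the most frequent non-stopword terms:

```python
from collections import Counter
import re

# Hypothetical minute-paper responses collected at the end of lecture
responses = [
    "I'm still confused about pointer arithmetic and segfaults",
    "pointer dereferencing vs. address-of operator",
    "recursion base cases",
    "why pointer arithmetic scales by type size",
]

# A tiny stopword list; a real tally would use a fuller one
STOPWORDS = {"the", "and", "about", "still", "why", "vs", "by", "of", "i", "m", "im"}

def top_confusions(texts, n=3):
    """Count the most frequent non-stopword terms across responses."""
    words = []
    for t in texts:
        words += [w for w in re.findall(r"[a-z]+", t.lower()) if w not in STOPWORDS]
    return Counter(words).most_common(n)

print(top_confusions(responses))  # 'pointer' ranks first across these responses
```

This is deliberately crude (single-word counts, no stemming), but it is enough to decide which topic to reteach in the next session.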

Making CATs Inclusive in Large Lecture Halls

To avoid reinforcing inequities, instructors should:

  • Clarify criteria with rubrics for coding projects, design critiques, or participation.

  • Co-create ground rules for collaboration in labs and online forums, ensuring respectful and equitable engagement.

  • Balance rigor and empathy: challenge students while providing structures that acknowledge different starting points and prior knowledge.

Putting It into Practice

  • In a 250-student programming class, use a digital Muddiest Point poll after each lecture, then address top confusions in the next class.

  • In an HCI course, scaffold peer review CATs for wireframes inside the LMS, combining digital rubrics with analog small-group feedback.

  • In a systems engineering class, embed progress dashboards with reflective CAT prompts (“Where are you stuck? What resource might help?”). This makes metacognition visible and actionable.

Final Thought

Large-enrollment CS, engineering, informatics, and HCI courses don’t have to feel impersonal or assessment-heavy. By integrating classroom assessment techniques, faculty can design courses that are responsive, transparent, and inclusive. The result: students who not only master disciplinary knowledge but also learn how to manage their own learning, a skill set essential for both the classroom and the future of work.

Further Reading:

  1. Angelo & Cross’s Classroom Assessment Techniques https://iucat.iu.edu/catalog/20750208
    50+ adaptable CATs. For large STEM courses, techniques like the “Muddiest Point” or “Background Knowledge Probe” are especially powerful.

  2. Nilson’s Teaching at Its Best https://iucat.iu.edu/catalog/16660002
    Offers frameworks for aligning CATs with learning objectives—critical in CS/engineering courses where problem-solving, debugging, and design thinking are central.

  3. Northwestern University, Principles of Inclusive Teaching (https://searle.northwestern.edu/resources/principles-of-inclusive-teaching/) and Making Large Classes Feel Smaller (https://searle.northwestern.edu/resources/our-tools-guides/learning-teaching-guides/making-large-classes-feel-smaller.html)

Pedagogical Tips for the Start of the Semester

The first weeks of the semester are a unique window to shape not only what students will learn, but how they will learn. In STEM courses, where concepts can be abstract, skill levels vary wildly, and technologies evolve quickly, intentional, evidence-based practices can help you set students up for long-term success.

Below are a few strategies with examples and tools you can implement immediately.

Design an Inclusive, Transparent Syllabus

Evidence base: Transparent teaching research (Winkelmes et al., 2016) shows that when students understand the purpose, tasks, and criteria for success, they perform better.

Implementation tips:

  • Purpose statements: For every major assignment, include a short note on why it matters and how it connects to industry or future coursework.
    Example: “This database schema project builds skills in relational modeling, which are directly relevant to backend software engineering interviews.”

  • Clear expectations: Break down grading policies, late work policies, and collaboration guidelines into plain language, avoiding overly technical or legalistic phrasing.

  • Accessibility & flexibility: Link to tutoring labs, office hours, online learning resources, and note-taking tools. Indicate whether assignments can be resubmitted after feedback.

  • Quick reference: Create a one-page “Quick Reference” sheet covering key policies (late work, collaboration, grading).

  • Norm-setting: Add a “Community Norms” section that covers respectful code reviews, how to ask questions in class, and expectations for group work. In large classes, it’s vital to set expectations for respectful online discussions, effective use of the Q&A forum (e.g., checking if a question has already been asked), and guidelines for group work if applicable (e.g., conflict resolution strategies).

Establish Psychological Safety Early

Evidence base: Google’s Project Aristotle (2015) and Edmondson’s (1999) work on team learning show that psychological safety, where students feel safe to take intellectual risks, is essential for high performance.

Implementation tips:

  • Low stakes start: In week one, run short, open-ended coding challenges that allow multiple solutions. Make it clear that mistakes are part of the process.

  • Anonymous polls: Start with anonymous polls about programming experience to acknowledge the diversity of backgrounds in the room.

  • Instructor vulnerability: Share a personal example of a bug or failed project you learned from. This normalizes challenges in programming. In a large lecture, you can briefly mention common misconceptions students often have with a new concept, and how to navigate them.

  • Model Constructive Feedback: When providing feedback on early assignments (even low-stakes ones), focus on growth and learning. When addressing common errors in a large class, frame it as an opportunity for collective learning rather than pointing out individual mistakes.

  • Multiple communication channels: Set up a Q&A platform (e.g., InScribe) where students can post questions anonymously.
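A week-one low-stakes challenge works best when it visibly admits more than one correct answer. A minimal sketch, assuming a hypothetical vowel-counting warm-up in Python:

```python
# Hypothetical week-one warm-up: "count the vowels in a string."
# Showing two equally valid solutions signals to students that
# there is no single 'right' style, which lowers the stakes.

def count_vowels_loop(s):
    """Solution 1: an explicit loop -- closest to how beginners think."""
    total = 0
    for ch in s.lower():
        if ch in "aeiou":
            total += 1
    return total

def count_vowels_sum(s):
    """Solution 2: a comprehension -- more idiomatic, same result."""
    return sum(ch in "aeiou" for ch in s.lower())

# A tiny self-check lets students verify either approach instantly.
for solution in (count_vowels_loop, count_vowels_sum):
    assert solution("Debugging") == 3
    assert solution("") == 0
print("both solutions pass")
```

Pairing the challenge with a built-in self-check also models the habit of testing code, another mistake-friendly message for week one.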

Use Early Analytics for Intervention

Evidence base: Freeman et al. (2014) showed that active, engaged learning significantly improves performance in STEM courses; monitoring engagement early in the term makes timely support possible.

Implementation tips:

  • Student Engagement Roster (SER): https://ser.indiana.edu/faculty/index.html. During the first week of class, consider explaining the SER to your students and telling them how you will use it. If students are registered for your class and miss the first class, report them as non-attending in SER. This will allow outreach that can help clarify their situation. Here’s sample text you could put into your syllabus:
    This semester I will be using IU’s Student Engagement Roster to provide feedback on your performance in this course. Periodically throughout the semester, I will be entering information on factors such as your class attendance, participation, and success with coursework, among other things. This information will provide feedback on how you are doing in the course and offer you suggestions on how you might be able to improve your performance. You will be able to access this information by going to One.IU.edu and searching for the Student Engagement Roster (Faculty) tile.

  • Use Canvas Analytics:

    1. Identify struggling students. “Submissions” allows you to view whether students submit assignments on time, late, or not at all.

    2. See grades at a glance. “Grades” uses a box-and-whisker plot to show the distribution of grades in the course.

    3. See individual student data. “Student Analytics” shows page views, participations, assignments, and current score for every student in the course.

  • Track early submissions: Note which students complete the first assignments or attend early labs.

  • Personal outreach: Email or meet with students who are slipping to connect them with tutoring, peer mentors, or study groups.

  • Positive nudges: Celebrate early wins (e.g., “I noticed you submitted the optional challenge problem. Great initiative!”).

  • Proactive Outreach (with TA Support): If you identify students who are struggling, send personalized emails offering support and directing them to available resources (e.g., tutoring, office hours with TAs). Consider delegating some of this outreach to TAs in large courses.

  • Announcements Highlighting Resources: Regularly remind the entire class about available support resources, study strategies, and upcoming deadlines through announcements.
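The flagging step above can be as simple as a short script run each week. A hedged sketch in Python, using a hypothetical record format (a real Canvas gradebook export would be a CSV that needs parsing first):

```python
# Hypothetical early-engagement records; field names are illustrative,
# not an actual Canvas export format.
records = [
    {"name": "Student A", "week1_submitted": True,  "week2_submitted": True},
    {"name": "Student B", "week1_submitted": False, "week2_submitted": False},
    {"name": "Student C", "week1_submitted": True,  "week2_submitted": False},
]

def flag_for_outreach(rows, max_missed=1):
    """Return names of students who missed more than max_missed early deadlines."""
    flagged = []
    for r in rows:
        # Count False values among the submission fields only
        missed = sum(not r[k] for k in r if k.endswith("_submitted"))
        if missed > max_missed:
            flagged.append(r["name"])
    return flagged

print(flag_for_outreach(records))  # ['Student B']
```

The output is a short list a TA can work through with personalized emails, keeping the outreach personal even when the detection is automated.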

Key Implementation Strategies for Success

  • Start Small and Build: Don’t attempt to implement all strategies simultaneously. Choose 2-3 that align with your teaching style and course structure, then gradually incorporate additional elements.

  • Leverage Your Teaching Team: In large courses, TAs are essential partners. Invest time in training them on consistent feedback practices, student support strategies, and early intervention protocols.

  • Iterate Based on Data: Use student feedback, performance analytics, and your own observations to refine your approach throughout the semester. What works in one context may need adjustment in another.

  • Maintain Connection at Scale: Even in large courses, students need to feel seen and supported. Use technology strategically to maintain personal connection while managing the practical demands of scale.

Conclusion

By implementing these research-backed strategies, faculty can create learning environments where diverse students thrive, engagement remains high, and learning outcomes improve significantly.

The investment in implementing these practices pays dividends not only in student success but also in teaching satisfaction and course sustainability. As you prepare for the new semester, consider which strategies best align with your course goals and student population, then take the first step toward transforming your large enrollment course into a dynamic, supportive learning community.

Remember: even small changes, consistently applied, can create significant improvements in student learning and engagement. Start where you are, use what you have, and do what you can to create the best possible learning experience for your students.

References

  1. Winkelmes, M. A., Bernacki, M., Butler, J., Zochowski, M., Golanics, J., & Weavil, K. H. (2016). A teaching intervention that increases underserved college students’ success. Peer Review, 18(1/2), 31–36. Association of American Colleges and Universities.

  2. Edmondson, A. C. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383. https://doi.org/10.2307/2666999

  3. Google Inc. (2015). Project Aristotle: Understanding team effectiveness. Retrieved from https://rework.withgoogle.com/intl/en/guides/understanding-team-effectiveness

  4. Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111(23), 8410–8415. https://doi.org/10.1073/pnas.1319030111

The Tech Faculty Imperative: Leading with Inclusive Design and Dual Title II Compliance

This article was written in collaboration with:

Michele Kelmer, MS Ed.
Director of Faculty Engagement and Outreach
UITS Learning Technologies

Michael Mace, MS Ed.
Manager
UITS Assistive Technology and Accessibility Centers

Cara Reader, PhD
University ADA Coordinator
Director of Compliance, Training, and ADA
Indiana University – Office of Civil Rights Compliance

As technological advancements reshape education, faculty in computing, engineering, data science, and information technology sit at the intersection of innovation and inclusion. But with this influence comes a responsibility: ensuring the digital environments we create are accessible for all learners.

This is more than compliance—it’s about shaping a future where every student, regardless of ability or background, can thrive. Two federal statutes—Title II of the Americans with Disabilities Act (ADA) and Title II of the Higher Education Act (HEA)—along with recent executive orders provide a powerful framework for technology faculty to lead transformative change in education.

Why Accessibility? Because There Are Students in Your Classes with Disabilities.

The data makes this clear:

  • According to 2022 data from the Centers for Disease Control and Prevention (CDC), around 28% of the US public reports having one or more disabilities, including physical, mental, and emotional disabilities. This includes 23.8% of individuals ages 18-44 and 34% of military veterans.

  • In a 2019-2020 survey of college students by the National Center for Education Statistics, 21% of undergraduates and 11% of graduate students reported having a disability. These percentages were similar for traditional and adult students and across disciplines of study, and they increase each year.

There are four main reasons why you may not know who your students with disabilities are:

  1. Most disabilities are invisible. You can’t always look at someone and know they have a mental health, learning, chronic health, physical, hearing, vision, or neurological disability.

  2. Students don’t disclose. Less than 50% of students report their physical disabilities, and less than 30% report mental health, learning, or neurological disabilities. Most students who do not disclose cite the fear of stigma from peers, pushback on accommodation requests by instructors, and the general hassle of documentation.

  3. Students may have a disability but don’t have documentation. They may not have been formally diagnosed due to the cost of testing, lack of adequate health care, or cultural norms. ADHD and autism, for example, can be diagnosed later in life.

  4. Students with new acute or chronic health conditions or injuries may not consider themselves as having a disability, even if it impairs their learning for a semester or more. Being diagnosed and treated for conditions like cancer, multiple sclerosis, or major injuries can significantly impact a student’s ability to manage coursework.

Based on 2024 data, any given 100 college students could include:

  • 30% diagnosed with anxiety and/or depression

  • 20% with sleep difficulties like insomnia or sleep apnea

  • 12% with attention deficit hyperactivity disorder (ADHD)

  • 10% who experience migraines or other severe headaches

  • 4% with specific learning disabilities including dyslexia and dyscalculia

  • 4% with autism

  • 2% who are blind or have low vision

  • 2% with a trauma-related disability including post-traumatic stress disorder (PTSD)

  • 2% who are Deaf or hard of hearing

It’s common for people to have overlapping disabilities, so while this isn’t to say everyone has a disability, the point is that it’s extremely unlikely that no one in your classes has a disability.

Understanding Title II: ADA + HEA

Accessibility isn’t just the right thing to do for your students; digital accessibility, like physical accessibility provided by ramps and curb cuts, is now the law.

Title II of the ADA (1990, updated 2024): Prohibits discrimination by public entities, including public colleges and universities. In April 2024, the Department of Justice released new rules requiring digital content and services to be accessible to people with disabilities. This includes:

  • Course content in Canvas (your Learning Management System (LMS))

  • Department websites and internal platforms

  • Educational technologies used in class

  • Videos, documents, and simulations

  • Social media posts

Key Deadline:

April 2026 for public institutions serving populations of 50,000 or more

The purpose of this update is to help ensure that people with any of a wide range of disabilities can easily access the same web content and online services provided by state and local government and public educational institutions that those without a disability can. Your online courses and anything you put within your LMS are considered web content.

This web content must meet the new accessibility standards if:

  • students or the public can access it online,

  • it’s currently being used (not archival content), and

  • it’s part of the work you do for your institution.

For something to be considered accessible, it must be:

  • Equally integrated: provided at the same time and not separate.

  • Equally effective: provides equal opportunity or outcome.

  • Substantially equivalent in ease of use: should not be more difficult.

According to the Title II update, content in Spring 2026 courses and beyond must be accessible, whether or not you have a student with an accommodation request. There will no longer be an option to wait for an accommodation request to make your course site meet basic accessibility guidelines. Accommodations apply when the basics of accessibility are insufficient to meet the specific need of the student. You will still receive accommodation requests for extended time on assessments or specialized accommodations such as a sign language interpreter, a Braille textbook, or tactile graphics as needed.

Title II of the HEA: Requires teacher preparation programs (and increasingly, faculty across disciplines) to use evidence-based pedagogical practices and report on outcomes like teaching effectiveness and alignment with workforce demands.

What Tech Faculty Can Do: Inclusive Teaching in Action

Here’s how you can align your pedagogy with Title II ADA, Title II HEA, and federal priorities—with real-world examples to guide you.

  1. Design Digitally Accessible Content from the Start

  • Use alternative text (alt-text) for all images, charts, and graphs:

  • Example: In a software engineering course, use: “UML diagram showing user login process, including ‘Enter Credentials’, ‘Verify’, and ‘Authenticate’.” This applies to images embedded in presentations, documents, and web pages.

  • Caption all video and transcribe all audio content:

  • Example: A data structures professor records weekly screencasts with auto-captioning, edited for accuracy and posted with transcripts on Canvas. For students who are deaf or hard of hearing, this is essential. Providing a full transcript also benefits students who prefer to read or who need to quickly search for specific information within the content.

  • Structure documents for readability and navigation: When creating lecture notes, assignments, or syllabi in Word, PowerPoint, or PDF, use proper heading structures (e.g., H1, H2, H3), bullet points, and numbered lists—not just bold or color. This allows screen readers to navigate the document logically and helps all students process information more easily. Avoid using color alone to convey meaning (e.g., “red text indicates a critical warning”) as this can be inaccessible to color-blind individuals.

  • Use accessibility checkers in Word, Adobe Acrobat, or Google Docs. IU recommends this practice across all digital materials.
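To see the kind of rule an accessibility checker enforces, here is a minimal sketch in Python that scans an HTML fragment for images missing alt text. Real checkers in Word, Acrobat, or Canvas test far more than this single rule; the page content below is purely illustrative.

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collect the src of every <img> that lacks usable alt text."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            # Treat absent, empty, or whitespace-only alt as missing
            if not (a.get("alt") or "").strip():
                self.missing.append(a.get("src", "(no src)"))

# Hypothetical course page exported as HTML
page = """
<h1>Week 3: UML Diagrams</h1>
<img src="login-flow.png" alt="UML diagram of the user login process">
<img src="decorative-banner.png">
"""

audit = AltTextAudit()
audit.feed(page)
print("Images missing alt text:", audit.missing)
```

Running this over exported course pages gives a quick to-do list; each flagged image then needs a human-written description (or an explicit marking as decorative).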

  2. Evaluate the Accessibility of Tools and Platforms

  • Check for WCAG 2.1 AA compliance before adopting new software, simulations, or online learning platforms:

  • Example: Before adopting a new online code editor, a faculty member requests a VPAT (Voluntary Product Accessibility Template) and only proceeds after reviewing it with IT accessibility staff. If a vendor cannot provide evidence of compliance, consider alternative solutions or work with your institution to ensure reasonable accommodations can be made.

  • Test for keyboard navigation and screen reader compatibility:

  • Example: In a web development course, the professor requires full keyboard navigation and ARIA labels for part of the final project.

  • Leverage built-in LMS accessibility tools like Canvas Accessibility Checker or Anthology Ally.

  • Example: When uploading a new module to Canvas, a professor runs the accessibility checker to identify any images without alt-text or poorly contrasted text, rectifying these issues before publishing.

  3. Implement Inclusive Pedagogical Practices (Title II HEA + ADA)

  • Use Universal Design for Learning (UDL) to offer multiple means of engagement and representation: Provide information in various formats (e.g., video, text, simulation) and allow students to demonstrate their learning in diverse ways (e.g., flexible assessment like a prototype + presentation or GitHub repo + write-up).

  • Example: In an IoT capstone project, students can present via slide deck, interactive demo, or video walkthrough—with guidelines for accessibility built into the rubric. This accommodates different learning styles and abilities.

  4. Track Outcomes and Improve with Data

  • Align assignments to real-world certifications (e.g., AWS, CompTIA, Python Institute), and track student success to inform redesigns.

  • Use learning analytics in GitHub, Jupyter Notebooks, or Canvas to see where engagement or comprehension gaps occur.

Moving Forward: Build a Culture of Accessibility

Implementing Title II effectively isn’t a one-time project; it’s an ongoing commitment that requires a cultural shift toward proactive accessibility. For technology faculty, this means:

  • Continuous Improvement: Audit your courses with accessibility in mind each semester. Ask students for anonymous feedback on digital barriers.

  • Collaborate: Partner with your institution’s accessibility services office and instructional designers. Join or form a cross-departmental working group on inclusive STEM teaching.

  • Educate Yourself and Others: Complete self-paced training or attend workshops on accessibility and UDL. Share accessible templates with your colleagues.

Tech Faculty: You Are Equity Catalysts

By aligning your teaching with Title II of the ADA and HEA, you’re doing more than following the law. You’re building a future where every student—regardless of disability, background, or learning style—can succeed in STEM and computing fields.

Additional resources:

IU Knowledgebase documents:

IU Expand Training Courses

Web resources

Supporting non-majors in introductory computer courses

The article “Exploring Relations between Programming Learning Trajectories and Students’ Majors” (https://dl.acm.org/doi/fullHtml/10.1145/3674399.3674497) investigates how students from various academic disciplines learn programming in a compulsory introductory programming course of 75 students, 40 majoring in CS and 35 in non-CS majors. “They were all freshmen without prior programming experience. Considering their similar scores of entrance exam to this university, it can be assumed that their levels of mathematical logic and computational thinking were roughly comparable.”

The authors note that “an increasing number of non-computer science students are now engaging in programming learning. However, they often struggle in early programming courses.” The researchers analyzed data from students’ learning processes to understand how their major influences their learning journey in programming.

The study found that students’ backgrounds and areas of study can affect how they approach and progress in learning programming. They suggest:

  1. Making Programming Relevant: When teaching programming to students who aren’t majoring in computer science, it’s important to connect the lessons to things that are important to them. For example, showing how programming can be used in art, music, or business can make the subject more interesting, especially at the start of the course.

  2. Paying Extra Attention to Struggling Students: Teachers should keep a close eye on students who are not doing well or aren’t very interested in the course. These students might need extra help to keep up, so they don’t fall behind. Connecting them with teaching assistants, Luddy tutors, and additional resources early in the semester could be helpful.

  3. Using Tests to Track Progress: For computer science students, instructors can use quizzes and smaller tests throughout the semester to gauge how well they are learning. For non-CS students who are doing well, however, these smaller tests might not show their full abilities: such students may be good at memorizing facts or completing basic tasks on a test without fully grasping the deeper concepts or being able to apply the knowledge in real-world situations. Instructors should therefore be thoughtful and consider other ways to evaluate these students’ skills, not just smaller tests.

Example:

Imagine a student in a business major who is acing the quizzes in a programming class. They might be good at solving problems that are simple and similar to what they’ve studied, but the quizzes might not show how well they can use programming to solve real business problems. Faculty might need to look at other work, like projects or group activities, to better understand the student’s true abilities.

Implications for Teaching and Learning:

  • Tailored Instruction: Educators can design programming courses that consider the diverse backgrounds of students, offering different learning paths or support based on their major. Example: In a programming class, students from a data science major might already have some knowledge of coding, so the instructor could offer them more advanced challenges while giving students from a humanities background more basic programming tasks. This ensures that all students are working at a level that matches their prior knowledge, making learning more effective.

  • Early Support: Providing additional resources or guidance early in the course can help students who might struggle due to their major’s focus, ensuring they keep up with the material. Example: In the first few weeks of a programming course, an instructor might offer extra study sessions or online tutorials for students from non-technical majors (like business or social sciences). These students may find programming challenging, so additional support would help them catch up and build their confidence early in the course.

  • Encouraging Diverse Majors: Encouraging students from various disciplines to engage with programming can enrich their learning experience and broaden their skill set. Example: A university might organize workshops to show students from creative fields (like art or design) how programming can help them bring their ideas to life, such as creating interactive websites or digital art. Encouraging students from these fields to explore programming opens new possibilities for their careers and learning.

By understanding the relationship between a student’s major and their programming learning trajectory, educators can create more effective and supportive learning environments.

“A Map Makes You Smarter. GPS Does Not.”: A Story About AI, Work, and What Comes Next with Jose Antonio Bowen

Jose Antonio Bowen is introduced as a Renaissance thinker with a jazz soul. His background includes leadership roles at Stanford, Georgetown, and SMU, as well as serving as president of Goucher College. He is also a jazz musician who has played with legends, a composer with a Pulitzer-nominated symphony, and the author of “Teaching Naked” (30% off with the code TNT30 at Wiley), “Teaching Change,” and “Teaching with AI” (30% off Teaching Change or Teaching with AI with code HTWN at JH).

He provided a workshop for us on AI Assignments and Assessments, where he mentioned:

“A map makes you smarter. GPS does not.”

It was such a small, quiet moment, but it cracked open something bigger. Because this wasn’t just about directions. It was about how we’re all starting to think less, remember less, and—if we’re not careful—become less, all thanks to the technology we depend on.

The Decline of Entry-Level Everything

Dr. Bowen shared that Shell, a global energy giant, had laid off nearly 38% of a particular workforce group. Internships? Vanishing. Entry-level jobs? Replaced.

Replaced by what?

Artificial Intelligence

Tasks that used to belong to interns or fresh graduates—writing reports, creating slide decks, analyzing data—are now handled by machines that don’t take lunch breaks or need supervision.

And that’s where the real twist came in: the people who still have jobs? They’re not the ones who can do the task better than AI. They’re the ones who can think better than AI. Who can improve, refine, and oversee what AI produces.

If AI is writing the first draft, the humans left in the room better know how to write the final one—with nuance, clarity, and insight.

Offloading Our Minds, One Task at a Time

Back to that GPS quote. Dr. Bowen called it “cognitive offloading”—how we gradually stop using certain mental muscles because tech is doing the lifting.

We used to memorize phone numbers, navigate with paper maps, even mentally calculate tips at restaurants. Now? We ask Siri.

The scary part isn’t that we’re forgetting how to do these things. It’s what happens when we offload creativity, problem-solving, and thinking itself.

Because if AI can be creative—can write poems, code apps, design marketing plans—what do we do? What’s left for us?

Creativity, Reimagined

But here’s where things got interesting. Dr. Bowen isn’t anti-AI. In fact, he practically gushed about it.

He showed how AI can be used to spark creativity, not stifle it.

He explained how students could upload a 700-page textbook and have the AI turn it into a podcast. A nine-minute podcast. With baseball analogies, if that’s what helps them learn.

He talked about using AI to create personalized assignments: instead of a generic math problem about trains, give a politics student a question about voter turnout rates. Suddenly, they care. Suddenly, they’re engaged.

Because AI isn’t replacing the teacher—it’s becoming the chalk, the blackboard, the entire toolset that a smart educator can use to make learning come alive.

Prompt Like a Pro

Here’s another nugget that stuck with me: prompting isn’t coding. It’s storytelling.

Don’t just ask the AI to “fix your proposal.” Ask it to “transform your proposal into something your provost will love.”

Use emotion. Use intent. Give context. AI, it turns out, responds best when it knows what you’re really trying to say.

The 70% Problem

Still, AI isn’t perfect. Dr. Bowen introduced what he called the “70% problem.”

AI can do a lot of things—but only up to a C-level standard. That’s fine for a rough draft. It’s dangerous for a final product.

If students rely on AI to do the work, and they can’t take it past that 70% mark, then what happens when employers expect more?

The solution? Raise the bar.

What used to be acceptable for a B or C should now earn an F—unless the student can make the AI’s work better, smarter, more human.

From Tools to Teaching Assistants

The future of education, he argued, is not about banning AI—it’s about designing with it.

He showed how teaching assistants could use AI notebooks filled with chemistry texts to answer student questions on the fly.
How AI can test business plans, simulate presidential decisions, or offer critiques from the perspective of a political opponent.
How students can train AI to “be” Einstein and ask it about thermodynamics at their own pace, in their own language.

AI isn’t replacing teachers—it’s becoming part of the classroom, like textbooks once were.

The Arms Race

Of course, there’s a darker side. AI can cheat. It can take online courses for students, fake typing patterns, even simulate human error.

Dr. Bowen called it an “arms race” between those building smarter AI and those trying to prevent it from being misused.

But even in this, he saw hope.

If educators embrace AI—not as an enemy but as a creative partner—they can design assignments AI can’t complete alone. They can build simulations, storytelling challenges, and editing tasks that require a human mind.

Because at the end of the day, that’s what this moment demands: humans who think more deeply, ask better questions, and create things worth remembering.

Final Words

The session ended with a simple truth:

“AI raises the floor. You must raise the ceiling.”

Whether you’re a student, a teacher, a manager, or a job-seeker, AI is now the baseline.

It will write the first draft, sketch the first idea, solve the first problem.

But it’s still up to us to bring the brilliance.


Implications

Students need to surpass AI capabilities to be competitive in the job market, especially in fields like coding and writing.

And maybe—just maybe—it’s time we all learned to read the map again.

Bridging the Gap: What Tech Practitioners Really Want from Computer Science Education

In the spring of 2024, the Computing Research Association (CRA) asked a simple but powerful question as part of the “Practitioner to Professor” (P2P) survey run by the CRA-Education / CRA-Industry working group: What do industry professionals think about the way we teach computer science today?

The response was overwhelming. More than 1,000 experienced computing practitioners—most with over two decades of experience—shared their honest thoughts on how well today’s CS graduates are being prepared for the real world.

These weren’t just any professionals. Over three-quarters work in software development. Many manage technical teams. Most hold degrees in computer science, with Bachelor’s and Master’s being the most common. Half work for large companies, and a majority are employed by organizations at the heart of computing innovation.

So, what did they say?

The Call for More—and Better—Coursework

One of the loudest messages was clear: students need more coursework in core computer science subjects. Respondents recommended about four additional CS courses beyond what’s typical today. Algorithms, computer architecture, and theoretical foundations topped the list.

But it wasn’t just CS classes that practitioners wanted more of. They also suggested expanding foundational courses—especially in math, writing, and systems thinking. It turns out that the ability to write clearly, think statistically, and understand how complex systems interact is as critical as knowing how to code.

It’s Not Just About Programming

When it came to programming languages, the responses painted a nuanced picture. Practitioners agreed: learning to code isn’t the end goal—learning to think like a problem-solver is.

They valued depth over breadth. Knowing one language well was seen as more important than dabbling in many. But they also stressed the importance of being adaptable—able to pick up new languages independently and comfortable working with different paradigms.

Familiarity with object-oriented programming? Definitely a plus. But what mattered most was a student’s ability to approach problems critically, apply logic, and build solutions—regardless of the language.

The Soft Skills Shortfall

One of the most striking critiques was aimed not at technical training, but at the lack of soft skills being taught in undergraduate programs.

Soft skills, they argued, can be taught—but many universities simply aren’t doing it well. Oral communication courses were highlighted as a critical need. And interestingly, several respondents felt that liberal arts programs were doing a better job than engineering-focused ones in nurturing communication, collaboration, and leadership.

Asked to identify the most important communication skills, respondents pointed to the ability to speak confidently in small technical groups, write solid technical documentation, and explain ideas clearly to leaders and clients—both technical and non-technical.

Math Is Still a Must

Despite the rise of high-level frameworks and automation, the industry’s love affair with math is far from over. In fact, 65% of respondents said they enjoyed or pursued more math than their degree required.

Why? Because math is the backbone of emerging fields like AI, machine learning, and data science. It sharpens analytical thinking, cultivates discipline, and builds a foundation for lifelong adaptability.

The most important math subjects? Statistics topped the list, followed by linear algebra, discrete math, calculus, and logic.

Foundations First

The survey didn’t just surface high-level trends—it got specific.

In algorithms, the emphasis was on conceptual thinking, not just implementation. Students should deeply understand how algorithms work, why they matter, and how to analyze them.

In computer architecture, digital logic and memory hierarchy were considered essential. These are the building blocks that enable students to understand modern computing systems, from the ground up.

And when it came to databases? Practitioners wanted a balance: students should learn both the theory (like relational algebra and normalization) and the practice (like SQL and indexing). Real-world readiness depends on both.

Toward a Better Future for CS Education

What makes this survey so impactful is its timing and intent. As technology continues to reshape every industry, there’s a growing urgency to close the gap between academia and the workforce. The P2P Survey is part of a broader movement to do just that.

Endorsed by leading organizations—ABET, ACM, CSAB, and IEEE-CS—this initiative creates a powerful feedback loop between universities and the industry they serve.

So, what’s next? A full report is expected later this year. But the message is already loud and clear: today’s students need a curriculum that not only teaches them how to code, but prepares them to lead, adapt, and thrive in a complex, evolving world.

Midterm Feedback via Google Illuminate | GenAI Essentials | Upcoming Opportunities

It’s a great time to think about receiving midsemester feedback on your course. I have written about this topic on my blog in the past; however, this time I used a new resource that might be helpful for multiple purposes:

Google Illuminate is an AI tool that creates podcast-style audio summaries of research papers. It’s designed to make complex academic information more accessible and engaging.

How it works

  1. Input a research paper, PDF, or URL into Illuminate*, or search for a topic.

  2. Illuminate generates an audio summary that explains the key points of the document.

  3. You can ask questions about the document, and Illuminate will provide a text output.

  4. You can convert the text output into a podcast answer.

*I do NOT recommend inputting copyrighted information into this tool. However, it is optimized for computer science topics and supports research papers hosted on arXiv.org.

I used some of my previous blog posts to create a podcast that covers several aspects of developing and analyzing mid-semester feedback, using the following prompt:

Create a relaxed and spontaneous conversation with a laid-back and curious host and a lively, fun, and relatable guest. They’ll dive into the topic in a free-flowing, casual style that feels like you’re eavesdropping on a chat between friends.

I used 4 resources from my blog:

Screenshot of what is now Google Notebook that shows a prompt for picking an audio for a podcast generated from my blog posts.

The result was a 4-minute conversational podcast, with a text transcript, that provides a decent overview of providing mid-semester feedback.

screenshot of the podcast interface

You might use this tool in your class to generate overviews of dense topics that students can listen to and/or read.

Resource to share with students:

GenAI Essentials: Practical AI Skills Across Disciplines (Student-Facing)

https://expand.iu.edu/browse/learningtech/courses/genai-essential-skills

 

This course, developed by the Learning Technologies division of University Information Technology Services (UITS), covers:

  • Prompt Engineering – Crafting precise prompts to generate accurate and useful AI outputs.

  • Evaluating AI-Generated Content – Assessing reliability, credibility, and biases of AI-produced information.

  • Ethics and Limitations of GenAI – Understanding responsible AI use, ethical considerations, and potential risks.

  • Information Literacy in a GenAI Age – Applying verification strategies and library resources to fact-check AI-generated sources.

  • Studying and Learning with GenAI – Using AI tools for note-taking, summarization, and personalized learning support.

Resource for you

The faculty-facing version of the course (https://iu.instructure.com/enroll/M7FE9E) covers the same topics, but also includes assignment templates and rubrics that you can incorporate into your own course:

screenshot from the Canvas Gen AI Essentials at IU Course

Classroom Assessment Techniques

A new (2024) version of the classic book, “Classroom Assessment Techniques: Formative Feedback Tools for College and University Teachers,” is available in the IU Library:

https://iucat.iu.edu/catalog/20750208

Classroom Assessment Techniques (CATs) are simple, low-pressure ways to check how well students are understanding the material. These methods are efficient, student-centered strategies that provide valuable insights into learning progress. Instructors can use feedback from CATs to adjust activities, offer extra support, or change the pace of the class to better meet student needs. CATs are not just about assessment—they also enhance learning. Here’s how:

  • Focus Students’ Attention: Students often come to class distracted by other concerns. Starting with a quick CAT activity can help them focus and prepare to engage.

  • Spot Trouble Early: A simple check-in at the beginning of class can reveal concepts that need more explanation or clarification, ensuring everyone is on the same page.

The book is a practical, research-based handbook that helps faculty assess student learning at the classroom level. It offers tools for formative assessment applicable in face-to-face, hybrid, and online learning environments. While we have discussed the previous edition and related resources in the past, the new edition integrates 30 years of research and classroom practice, providing updated and field-tested assessment techniques. The book divides CATs into several categories.

Categories of Classroom Assessment Techniques (CATs) – Chapters 8-17 (adapted/edited with technical examples):

  1. Knowledge Recall & Understanding

Empty Outline

  • Students are given a partially completed algorithm design process outline and must fill in the missing steps.

  • Helps students recall fundamental software development methodologies (e.g., Waterfall, Agile, Scrum).

RSQC2 (Recall, Summarize, Question, Connect, Comment)

  • After a lesson on supervised vs. unsupervised learning, students:

    • Recall key definitions.

    • Summarize the differences.

    • Question a potential challenge in real-world applications.

    • Connect the concept to clustering methods in AI.

    • Comment on ethical concerns in algorithmic bias.
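To ground the Recall and Summarize steps, here is a minimal sketch (my own illustration, with made-up data, not an example from the book) of the supervised vs. unsupervised contrast: the same one-dimensional points, handled with and without labels.

```python
# Supervised: predict using the label of the closest labeled example (1-NN).
def nearest_label(point, labeled):
    """labeled is a list of (value, label) pairs."""
    return min(labeled, key=lambda pair: abs(pair[0] - point))[1]

# Unsupervised: no labels at all; group sorted points wherever a large gap appears.
def threshold_clusters(points, gap=2.0):
    """Assumes a non-empty, sorted list of numbers."""
    clusters = [[points[0]]]
    for a, b in zip(points, points[1:]):
        if b - a > gap:
            clusters.append([])   # big gap: start a new cluster
        clusters[-1].append(b)
    return clusters

labeled = [(1.0, "low"), (1.5, "low"), (9.0, "high")]
print(nearest_label(8.2, labeled))               # → high
print(threshold_clusters([1.0, 1.5, 8.2, 9.0]))  # → [[1.0, 1.5], [8.2, 9.0]]
```

Students can then Question when a distance threshold fails, Connect it to clustering methods, and Comment on what happens when the labels themselves encode bias.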

  2. Application

Concept Maps

  • Students create a concept map linking usability principles (e.g., learnability, efficiency, satisfaction) to a real-world user interface (such as a mobile banking app).

RSQC2

  • After discussing autonomous systems, students create a summary matrix evaluating:

    • Sensors used in self-driving cars

    • How decision-making algorithms function

    • Challenges in real-world implementation

  3. Problem Solving

What’s the Principle?

  • Given a dataset and an incorrectly applied machine learning model, students must identify the underlying principle that was violated (e.g., overfitting, lack of feature normalization).
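A tiny, self-contained illustration (my own, with invented numbers) of the normalization principle: features on wildly different scales dominate any distance-based model until they are standardized.

```python
def zscore(values):
    """Standardize a non-constant feature to mean 0, standard deviation 1."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / std for v in values]

incomes = [30_000, 60_000, 90_000]  # dollars: huge scale
ages = [25, 40, 55]                 # years: small scale

# After standardization both features land on the same scale,
# so neither dominates a distance computation.
print(zscore(incomes))  # ≈ [-1.2247, 0.0, 1.2247]
print(zscore(ages))     # ≈ [-1.2247, 0.0, 1.2247]
```

A nice follow-up prompt: ask students which models care about this (k-NN, k-means, SVMs) and which mostly don't (decision trees).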

Peer Debugging Sessions

  • Students review a piece of malfunctioning code and collaboratively apply debugging strategies.

  • Helps them develop problem-solving approaches to software engineering.
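A concrete snippet can make this activity tangible. Here is a hypothetical example (my own, not from the book) of the kind of off-by-one bug a group might dissect, with the fix annotated the way students could during review:

```python
def running_total(values):
    """Return the cumulative sums of a list.

    The buggy version iterated with range(1, len(values)),
    silently skipping the first element; iterating the list
    directly fixes it.
    """
    totals, acc = [], 0
    for v in values:   # buggy version: for i in range(1, len(values))
        acc += v
        totals.append(acc)
    return totals

print(running_total([3, 1, 4]))  # → [3, 4, 8]
```

Handing out the buggy variant and asking groups to articulate *why* the first element vanishes tends to surface more reasoning than simply asking them to fix it.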

  4. Critical Thinking & Analysis

Blooming (Using Bloom’s Taxonomy)

  • Students analyze real-world accessibility failures in a user interface, progressing through Bloom’s levels:

    • Understanding accessibility guidelines.

    • Applying them to UI analysis.

    • Analyzing gaps in existing designs.

    • Evaluating how these impact user experience.

    • Creating a revised design proposal.

Comparing AI Bias in Decision-Making

  • Students critique different AI models used in hiring processes, identifying bias and ethics-related concerns.

  5. Synthesis & Creative Thinking

Student-Generated Questions

  • Students create quiz questions related to data structures and algorithms, peer-reviewing each other’s questions for complexity and clarity.

Concept Maps for IoT Networks

  • Students visually map out an Internet of Things (IoT) system, including:

    • Sensors

    • Cloud processing

    • Data security mechanisms

  6. Attitudes & Values

Profiles of Admirable Individuals

  • Students select a UX designer (e.g., Don Norman, Jakob Nielsen) and analyze how their design philosophy aligns with current industry best practices.

Reflective Journal on Bias in AI

  • Students keep a weekly reflection log on how AI-driven systems impact marginalized communities, promoting ethical awareness.

  7. Self-Assessment of Learning

Goal Ranking & Matching

  • Students list their learning objectives in a course on AI-driven robotics and compare them with the instructor’s objectives to identify gaps in understanding.

Debugging Logs for Self-Assessment

  • Students track their own debugging process, identifying mistakes and reflecting on how they could improve their approach in future coding projects.
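One way to structure such a log (an assumption on my part, not a format the book prescribes) is a small record type, so entries can be reviewed and summarized at the end of a project:

```python
from dataclasses import dataclass

@dataclass
class DebugLogEntry:
    symptom: str     # what went wrong, as observed
    hypothesis: str  # what the student first suspected
    fix: str         # what actually resolved it
    lesson: str      # what to do differently next time

log = []
log.append(DebugLogEntry(
    symptom="IndexError on the last loop iteration",
    hypothesis="The input list was empty",
    fix="Loop bound was len(items), not len(items) - 1",
    lesson="Re-check loop bounds before suspecting the data",
))
print(len(log), "-", log[0].lesson)
```

Even a plain spreadsheet with these four columns works; the point is that the reflection is recorded, not the tooling.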

  8. Learning & Study Skills

Learner Autobiography

  • Students write a brief reflection on:

    • How they approach learning programming languages.

    • Their experiences with learning statistics & machine learning.

    • How they plan to strengthen their weak areas.

  9. Perceptions of Learning Activities & Assessments

Reading & Video Ratings

  • After a video lecture on prototyping techniques, students rate its clarity and usefulness, providing feedback for future content improvements.

Prototype Peer Reviews

  • Students rate each other’s engineering prototypes based on innovation, feasibility, and efficiency, providing constructive feedback.

  10. Perceptions of Teaching & Courses

Teacher-Designed Feedback Forms

  • Students provide mid-semester feedback on:

    • Pacing of technical concepts.

    • Usefulness of coding assignments.

    • Need for more real-world applications in AI ethics case studies.

Agile Retrospectives for Course Reflection

  • Inspired by Agile methodologies, students participate in sprint retrospectives, reflecting on:

    • What went well

    • What could be improved

    • Next steps for refining their coding workflows

Instructor Talk

Studies by Seidel et al. (2015) and Harrison et al. (2019) have demonstrated how Instructor Talk plays a crucial role in shaping classroom environments, influencing student engagement and learning attitudes, and potentially mitigating stereotype threats. Instructor Talk is defined as any language used by an instructor that is not directly related to course content but instead shapes the learning environment.

Seidel et al. (2015) identified five major categories of non-content talk:

  1. Building the Instructor/Student Relationship – Encouraging respect, boosting self-efficacy, and offering advice for student success.

  2. Establishing Classroom Culture – Setting expectations, fostering a sense of community, and making students feel comfortable in the learning process.

  3. Explaining Pedagogical Choices – Justifying teaching methods to help students understand why certain approaches are used.

  4. Sharing Personal Experiences – Providing personal anecdotes or relating to student experiences.

  5. Unmasking Science – Discussing the nature of science and emphasizing diversity within the field.

Harrison et al. (2019) added a new category: “Negatively Phrased Instructor Talk.” This includes statements that may discourage students, undermine confidence, or convey unhelpful attitudes about learning.

Positively phrased Instructor Talk includes language that motivates, supports, and encourages students, helping to create an inclusive and productive learning environment.

Examples of Positively Phrased Instructor Talk:

Building the Instructor/Student Relationship (Encouraging and Inclusive Language)

  • “Debugging can be frustrating, but every programmer goes through it—even the best software engineers. You’re developing a valuable skill by troubleshooting.”

  • “There are many ways to solve this problem. If your approach works, it’s valid! Computer science is about creativity as much as logic.”

  • “If you’re stuck, that’s a good sign—you’re thinking critically! Take a step back, break the problem into smaller pieces, and try again.”

Establishing Classroom Culture (Fostering a Positive Learning Environment)

  • “In this class, collaboration is encouraged! Working with others will help you see different approaches and learn more effectively.”

  • “Asking questions is a sign of an engaged learner. Feel free to speak up—there are no bad questions in coding!”

  • “Mistakes are part of learning to program. The best way to improve is to experiment, test, and debug!”

Explaining Pedagogical Choices (Justifying Learning Strategies to Reduce Resistance)

  • “We use pair programming because research shows it helps students learn faster and develop teamwork skills.”

  • “I emphasize problem-solving over memorization because in real-world programming, you’ll be looking up syntax all the time—what matters is knowing how to think through problems.”

  • “This assignment is designed to help you build a strong foundation. Once you grasp these basics, you’ll be able to tackle much more complex projects.”

Sharing Personal Experiences (Relating to Students)

  • “When I first learned recursion, it completely confused me! But breaking it down into base cases and recursive steps helped me understand it.”

  • “I once spent an entire weekend debugging a program because of a missing semicolon. Now I always double-check my syntax!”

Unmasking Computer Science (Encouraging Diverse Perspectives & Scientific Thinking)

  • “There’s no single type of person who becomes a great programmer—some of the best developers come from all kinds of backgrounds.”

  • “Computer science isn’t just about writing code. It’s about solving problems and thinking critically—skills that are valuable in any field.”

Examples of Negatively Phrased Instructor Talk:

Building the Instructor/Student Relationship (Discouraging Students)

  • “This is just how programming works—either you get it, or you don’t.”

  • “If you’re struggling with loops, maybe computer science isn’t for you.”

  • “Some of you clearly didn’t put in the effort, and it shows in your code.”

Establishing Classroom Culture (Creating Anxiety or an Unwelcoming Environment)

  • “If you can’t get this assignment working, you’ll probably fail the course.”

  • “I’m not here to hold your hand—figure it out on your own.”

  • “Real programmers don’t need to ask for help. If you need help, you’re not thinking hard enough.”

Explaining Pedagogical Choices (Undermining Learning Strategies)

  • “I don’t really believe in these ‘new’ teaching methods, but the department requires me to use them.”

  • “Honestly, I don’t see the point of teaching theory—you’ll just learn everything on the job anyway.”

  • “You just need to memorize this syntax and move on. Understanding isn’t really necessary.”

Sharing Personal Experiences (Self-Effacing or Confusing Students)

  • “I never really understood object-oriented programming myself, but here’s the textbook definition.”

  • “Back in my day, we had to learn this without any online tutorials. You have it easy!”

Unmasking Computer Science (Excluding or Dismissing Certain Groups)

  • “Let’s be honest, some people just don’t have the logical thinking required for coding.”

  • “There aren’t many women in AI, but that’s just the way the field is.”

  • “If you’re not naturally good at math, you’re going to struggle a lot in this class.”

Findings revealed that Instructor Talk was present in every class session, ranging from 6 to 68 instances per session. Seidel et al. (2015) suggest that Instructor Talk can impact student motivation, reduce resistance to active learning, and help mitigate stereotype threat, and they recommend that educators reflect on their non-content talk to enhance student engagement and learning outcomes. The introduction of negatively phrased Instructor Talk suggests that some instructor behaviors may unintentionally harm student learning and should be carefully examined. Harrison et al. (2019) validated the framework’s applicability across multiple courses and identified new challenges related to negative instructor language. Both studies emphasize the importance of non-content communication in higher education, particularly in STEM courses.

Harrison, C. D., Nguyen, T. A., Seidel, S. B., Escobedo, A. M., Hartman, C., Lam, K., … & Tanner, K. D. (2019). Investigating instructor talk in novel contexts: Widespread use, unexpected categories, and an emergent sampling strategy. CBE—Life Sciences Education, 18(3), ar47. https://doi.org/10.1187/cbe.18-10-0215

Seidel, S. B., Reggi, A. L., Schinske, J. N., Burrus, L. W., & Tanner, K. D. (2015). Beyond the biology: A systematic investigation of noncontent instructor talk in an introductory biology course. CBE—Life Sciences Education, 14(4), ar43. https://doi.org/10.1187/cbe.15-03-0049

Quick Tip: Wise Feedback

The article “Breaking the Cycle of Mistrust: Wise Interventions to Provide Critical Feedback Across the Racial Divide,” by Yeager et al. https://www.apa.org/pubs/journals/releases/xge-a0033906.pdf , introduces the concept of wise feedback: a strategy that helps instructors frame feedback in a way that communicates that students can meet high expectations and gives concrete direction for how to meet them.

The research comprised three studies focusing on middle and high school students. In the first study, students were divided into two groups: one receiving “wise feedback” and the other serving as a control. For the wise feedback group, comments on their essay drafts were paired with a note that said, “I’m giving you these comments because I have very high expectations, and I know that you can reach them.” In contrast, the control group’s note read, “I’m giving you these comments so that you’ll have feedback on your paper.” Students who received wise feedback were more likely to act on the suggestions and produced higher-quality revisions. This approach was particularly impactful for students from diverse backgrounds.

Adapting this to programming, the following steps can help provide effective feedback (from UC Berkeley’s Greater Good in Education Center https://ggie.berkeley.edu/practice/giving-wise-feedback):

  • When reviewing a developer’s code or project, include the following in your feedback:

    • Start with specific actionable feedback:

      • “Your implementation meets the basic functionality, but I’ve added comments suggesting optimizations for improving performance and readability.”

      • “While your API design fulfills the requirements, the comments I left highlight ways to make the endpoints more efficient and user-friendly.”

    • State high expectations:

      • “This project is a step toward building production-ready code, which requires adherence to best practices for maintainability and scalability.”

      • “Writing clean and modular code here will prepare you for working on large, collaborative codebases in professional settings.”

    • Express confidence in the programmer’s ability to succeed:

      • “Based on your previous work, I’m confident you’ll be able to implement the suggested changes effectively.”

      • “Your problem-solving skills from earlier tasks show that you have what it takes to refine this code to meet higher standards.”

  • Additional tips to consider when using wise feedback in programming:

    • Use this type of feedback for tasks that represent meaningful learning opportunities and require the developer’s best effort, such as debugging complex issues or designing scalable solutions.

    • Incorporate this practice into a broader culture of trust, where developers feel valued and supported by their peers and mentors.

    • Avoid overpraising incomplete or substandard work, as this can unintentionally reinforce the belief that their efforts are not being taken seriously or that expectations are low.

    • While this approach benefits all developers, it is particularly impactful for those new to the field, who may struggle with imposter syndrome or worry about being judged unfairly based on stereotypes or biases.

This research emphasizes the importance of trust in the student-teacher relationship. It demonstrates that wise feedback, by directly addressing and mitigating mistrust, can significantly improve academic outcomes for underrepresented students. Viewed through the lens of social-psychological theory, the findings underscore how early interventions can create lasting positive effects on trust, motivation, and performance, with broad implications for educational practice. Furthermore, “The Science of Wise Interventions” explores how social-psychological approaches can address educational disparities, reinforcing the effectiveness of strategies like wise feedback in fostering positive student outcomes: https://studentexperiencenetwork.org/wp-content/uploads/2018/08/The-Science-of-Wise-Interventions.pdf