
Akesha M. Horton



Making Thinking Visible in the Age of AI

March 13, 2026 Akesha Horton
A woman sits at a desk with her laptop open. Her coffee cup has the word FOCUS on the front and is steaming hot. She appears to be thinking while writing in her notebook.

In a previous post, I walked through Fink’s Taxonomy of Significant Learning and argued that the dimensions most resistant to AI offloading are Human Dimension, Caring, and Learning How to Learn. If you have not read that post yet, it is worth starting there. This post picks up where that one left off.

Knowing which dimensions to target is necessary. The harder question is this: once you have redesigned an assignment to require Integration, Caring, or Learning How to Learn, how do you actually know whether the student’s thinking got there? How do you make that thinking visible, not just to yourself, but to the student?

Why Thinking Must Be Seen

Let me start with an honest admission. Long before AI arrived, most of us were already assessing products and hoping the thinking happened somewhere in the middle.

A student submits a system design proposal. We grade the proposal. But did the student genuinely wrestle with tradeoffs? Did they consider the user population? Did they revise their mental model partway through? We have no idea, because the process was invisible to us.

Bransford’s foundational work on how people learn keeps returning to the same finding: learning is the result of thinking, not the result of submitting. Students arrive with preconceptions already formed. If we do not actively engage those preconceptions, new information slides off. They perform for the test and revert to old models the moment the course ends.

AI has not created this problem. It has simply removed our excuse for not solving it.

When a student can generate a convincing network architecture diagram in thirty seconds, or produce a well-structured post-mortem without ever having reflected on anything, the gap between product and thinking becomes impossible to ignore. The question is no longer “did the student submit something good?” It is “did the student actually think?”

Eight Thinking Moves That Matter

Ron Ritchhart, Mark Church, and Karin Morrison spent years researching what happens in classrooms where deep learning consistently occurs. Their conclusion, documented in Making Thinking Visible, is that those classrooms share one quality: the teachers have found ways to make the thinking process explicit, observable, and routine.

They identify eight types of thinking that matter most in deep learning:

  • Observing closely and describing what is there

  • Building explanations and interpretations

  • Reasoning with evidence

  • Making connections

  • Considering different viewpoints and perspectives

  • Capturing the heart and forming conclusions

  • Wondering and asking questions

  • Uncovering complexity and going below the surface

Read that list through the lens of Fink’s dimensions. Making connections is Integration made visible. Wondering and asking questions is Learning How to Learn in action. Considering different viewpoints is Human Dimension surfacing in real time. Capturing the heart and forming conclusions is Caring given a concrete form.

🗂️ Interactive chart: To see the full picture of how each thinking type maps to Fink’s dimensions, explore the Fink-Ritchhart Connection Chart. Hover over any card to see which dimensions a thinking type activates and why the connection exists. It is a practical audit tool to keep open when you are reviewing an assignment or selecting a routine.

→ Open the Fink-Ritchhart Connection Chart

Fink tells us which dimensions produce significant learning. Ritchhart gives us the thinking moves that actually get students there.

The bridge between them is the thinking routine: a short, structured, repeatable cognitive scaffold that makes the reasoning process visible to both the instructor and the student, before, during, or after an assignment. The key word is repeatable. A thinking routine used once is an activity. Used consistently, it becomes a habit of mind.

What It Looks Like in Practice

The most common objection I hear from computing faculty is that thinking routines feel like they belong in a humanities classroom. They do not. Here is what they look like in technical contexts, alongside what an AI-generated response to the same prompt would typically produce.

Connect-Extend-Challenge in a Cybersecurity Course

After students analyze a new class of vulnerabilities, rather than simply asking them to summarize what they learned, ask three questions: What connections do you see between this attack surface and techniques you have encountered before? How does it extend your mental model of how systems fail? What does it challenge in your current assumptions about secure design?

🎯 Fink dimension: Integration


What AI output looks like: A fluent, well-organized paragraph connecting the vulnerability class to common attack taxonomies, citing OWASP or MITRE ATT&CK — with no personal frame of reference and no indication of cognitive struggle or surprise.

What authentic visible thinking looks like: A response that names a specific system or course context the student is connecting to, identifies a genuine point of confusion or revision in their thinking, and asks a follow-up question they actually want answered.

Think-Puzzle-Explore Before a Systems Design Assignment

Before students begin a major design task — architecting a distributed data pipeline or specifying a real-time embedded system — ask them to spend ten minutes on three prompts: What do you already think you know about this problem space? What puzzles you about it? What would you want to explore before committing to a design direction?

🎯 Fink dimension: Learning How to Learn


What AI output looks like: A generic overview of the problem domain, a list of standard considerations (latency, fault tolerance, scalability), and a suggested exploration path drawn from documentation.


What authentic visible thinking looks like: Idiosyncratic puzzles specific to this student’s prior experience, honest uncertainty about where to start, and questions that reflect what they personally do not yet understand rather than what the internet says is hard.

I Used to Think / Now I Think at Project Completion

At the end of a software engineering project, before students submit their final documentation, ask them to complete two sentences: “I used to think [X] about [the problem, the technology, the team process]” and “Now I think [Y].” Require them to explain what changed their thinking.

🎯 Fink dimensions: Caring & Learning How to Learn


What AI output looks like: A polished reflection arc that describes growth in general terms, references course concepts correctly, and lands on a tidy conclusion about professional development.


What authentic visible thinking looks like: Something specific: a named moment in the project where a decision backfired, a teammate conversation that reframed the problem, a line of code that revealed a misunderstanding the student did not know they had. Specificity is the signal.

Design Principle: Assess the Process

When you cannot assess the thinking directly, make the process the assessment.

This does not mean adding reflection questions as an afterthought. It means designing the assignment so that the thinking process is where the grade actually lives.

In an introductory programming course, this might mean asking students to annotate their code not with what it does but with why they made the choices they made. Not “initialize array here” but “I chose an array over a linked list because access patterns here are random and I wanted O(1) lookup, but I am not sure this holds if the input size grows.” That annotation is a window into reasoning that the code itself cannot provide.
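As a sketch of what that annotation layer might look like in practice (the function name and scenario here are hypothetical, invented only to illustrate the "why, not what" style):

```python
# Hypothetical example of "why" annotations instead of "what" annotations.

def sample_readings(readings, indices):
    # WHY: readings is a Python list (array-backed) rather than a linked
    # structure because the access pattern below is random-index lookup,
    # and I wanted O(1) access per lookup. I am not sure this choice holds
    # if the input grows past memory; a chunked or on-disk structure might
    # be needed then.
    return [readings[i] for i in indices]

# A "what" comment, by contrast, would add nothing the code does not
# already say: "return the readings at the given indices."
```

The grade-worthy content is in the comment, not the one-line function body: the reasoning, the named tradeoff, and the honest uncertainty about when the choice stops holding.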

In a networking or operating systems course, it might mean asking students to document their debugging process rather than just their solution. What did they try first? What did that tell them? What did they have to revise? The process log is where the learning lives. The solution is just evidence that the process concluded.

In a capstone or project course, it might mean maintaining a design decision log throughout the semester. Every significant choice — whether architecture, data model, technology selection, or tradeoff resolution — gets a brief written rationale. When students defend their work at the end of the semester, they are not reconstructing decisions from memory. They are curating a record of their own thinking.
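One lightweight way to make such a log concrete is to give students a fixed record shape to fill in. This is a sketch only; the field names and the sample entry are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical structure for one design decision log entry.
@dataclass
class DesignDecision:
    when: date
    decision: str                                      # what was chosen
    alternatives: list = field(default_factory=list)   # what was considered
    rationale: str = ""    # why, at the time, this seemed right
    revisit_if: str = ""   # the condition under which to reopen the decision

# An example entry a student team might record mid-semester.
log = [
    DesignDecision(
        when=date(2026, 2, 3),
        decision="PostgreSQL over a document store",
        alternatives=["MongoDB", "SQLite"],
        rationale="Our queries join users to events constantly; relational fit.",
        revisit_if="Event payloads become deeply nested and schema-less.",
    )
]
```

The `revisit_if` field is the metacognitive hook: it forces students to state, in advance, what evidence would change their mind.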

AI and the Cost of Offloading

I want to name this clearly because it gets lost in conversations about academic integrity.

The concern with AI offloading is not primarily that students are cheating. It is that they are forfeiting the experiences that produce the outcomes we most care about.

A student who uses AI to generate their debugging rationale has not practiced the metacognitive regulation that Fink’s Learning How to Learn dimension is built on. They have not developed the habit of monitoring their own understanding and adjusting. They have not had the experience of being genuinely stuck and finding their way through. That experience is not a side effect of learning computing. It is the mechanism by which computing is learned.

⚠️

When we assign work in the age of AI without designing for visible thinking, we are not just making assessment harder. We are removing the conditions under which the most durable and significant learning occurs.

Getting Started: Three Design Questions

Before assigning any major project, ask yourself three questions.

🧭

1. What thinking do I actually want to see? Name the specific thinking moves from Ritchhart’s list that the assignment should require. Use the Fink-Ritchhart chart to check which Fink dimensions those moves activate. If Human Dimension, Caring, or Learning How to Learn are absent, you have identified your highest-risk area for offloading.

2. Where will that thinking show up in the student’s work? If the answer is “in the final product,” reconsider the design. The final product is where AI performs best. The thinking needs a dedicated, structured space: a routine checkpoint, an annotation layer, a decision log, an oral defense.

3. How will you know it is genuine? Look for specificity. Genuine thinking produces idiosyncratic responses: a named moment of confusion, a connection no one else in the class would make, a question that only makes sense given this student’s prior experience. Generic fluency is the signal that the thinking may have been skipped.

Use the Project Zero Thinking Routine Toolbox to select a routine that fits your course context. It is free, searchable by purpose, and includes facilitation guidance. In most cases, a well-chosen routine adds less than fifteen minutes to the student’s workload and gives you far more information than the artifact alone.

📥

Free resources to go with this post

  • Fink-Ritchhart Interactive Chart — hover and filter to audit any assignment

  • Connect-Extend-Challenge Template — discipline-specific prompts for CS, engineering, and HCI

  • Process Depth Rubric — for assessing how students think, not just what they submit


The goal is not to make every assignment a reflection exercise. It is to build enough visibility into your course design that you can actually see whether the learning you care about is happening.

Because if you cannot see the thinking, you cannot teach it.

And in the age of AI, if you cannot see it, your students may have already learned to skip it.

What thinking routines are you already using in your courses, or curious about adapting for a technical context? Share your approach or questions in the comments.


Tags HigherEd, FacultyDevelopment, GenerativeAI, AIinEducation, SignificantLearning, CourseDesign, MakingThinkingVisible, InstructionalDesign

Beyond Bloom’s: What Fink’s Taxonomy Means for Computing Courses, AI and Assessments

March 8, 2026 Akesha Horton
A teacher with a full class of college students discussing AI and deep learning.

If you have been in higher education for a while, you have probably encountered Bloom’s Taxonomy: Remember, Understand, Apply, Analyze, Evaluate, Create. If you are new or need a quick refresh, review this brief primer. It is the cognitive hierarchy that tells you whether you are asking students to think deeply or just recall facts.

Bloom’s is genuinely useful. But L. Dee Fink, in Creating Significant Learning Experiences (2003; updated 2013), noticed something important. Bloom’s only addresses one dimension of how humans learn. It tells you how cognitively demanding a task is. It says almost nothing about whether students will care about it, connect it to anything real, or know how to keep learning on their own after the course ends.

For computing and STEM faculty specifically, this gap has real consequences. We spend enormous energy designing cognitively rigorous assessments and then wonder why students who passed the exam still struggle to function in an internship. Or why a technically strong student falls apart in a team environment. Or why a graduating senior freezes up when asked to learn a new framework on their own.

Bloom’s tells you how hard the thinking is. Fink’s tells you whether it matters to the person doing the thinking.

The Six Dimensions of Fink’s Taxonomy

Fink’s taxonomy is not a hierarchy. It is an interactive web where every dimension strengthens the others. Here is what each one looks like in the context of a computing or STEM course.

  • Foundational Knowledge (FK): Understanding and remembering key concepts, facts, and principles. In a computing course: data structures, algorithm analysis, language syntax.

  • Application (AP): Critical thinking, creative thinking, practical skills, and managing projects. In a computing course: implementing algorithms, debugging, system design.

  • Integration (IN): Connecting ideas across subjects, disciplines, and life contexts. In a computing course: seeing how OS concepts relate to security; connecting theory to production code.

  • Human Dimension (HD): Learning about oneself and others, including identity, perspective, and collaboration. In a computing course: code review etiquette, equity in technical interviews, accessibility awareness.

  • Caring (CA): Developing new feelings, interests, and values; becoming genuinely invested. In a computing course: caring about software quality, open-source ethics, end-user impact.

  • Learning How to Learn (LL): Metacognition, self-direction, and inquiry skills; becoming a self-improving practitioner. In a computing course: reading documentation, developing a debugging mindset, knowing when to ask for help.

Why This Matters Right Now

Here is the uncomfortable truth about generative AI and course design. AI is extraordinarily good at Foundational Knowledge. It can explain recursion, walk through a sorting algorithm, or produce syntactically correct code on demand. If your assessments primarily target FK, students do not need to engage with the material. They can simply delegate the work.

But AI cannot learn to care about software quality on your student’s behalf. It cannot develop their debugging mindset. It cannot give them the experience of genuinely connecting theory to a problem that matters to them personally. The upper dimensions of Fink’s taxonomy, specifically HD, CA, and LL, are exactly where human learning is irreplaceable. They are also exactly where most computing assessments underinvest.

The AI Audit

For each of your major assessments, ask yourself: which Fink dimensions does this assignment actually require the student to engage? If you only see FK and AP, and especially if you only see FK, you have found your highest-risk assignment for AI substitution. That is your starting point for redesign. A good place to start is by auditing the verbs you are already using. The Computing Verb Atlas lets you search any verb and immediately see which Bloom’s level and Fink dimensions it activates, making the audit process much faster. [Open the tool.]

Screenshot of the Taxonomy Verb Finder: verbs mapped across both Bloom’s Revised Taxonomy and Fink’s Significant Learning dimensions for computing and STEM courses.

Practical Examples: Fink in a CS Course

The Difference One Question Makes

Consider a standard data structures assignment: “Implement a binary search tree with insert, delete, and search operations.”

This prompt targets Foundational Knowledge and Application. It misses Integration, Human Dimension, Caring, and Learning How to Learn entirely. It is also almost entirely delegatable to AI.

Now add one question: “Describe a real system you use daily that likely relies on a tree structure. How does your implementation compare to what you would expect in production? What surprised you?”

That one addition requires Integration (connecting BST theory to the real world), Human Dimension (drawing on the student’s own experience), Caring (investment in a system they actually use), and Learning How to Learn (comparing a learning exercise to production reality). AI can generate plausible-sounding text in response to that question. But the student still has to have had the experience to write something genuine.
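For concreteness, the baseline deliverable for the original prompt is exactly the kind of artifact AI can generate in seconds. A minimal sketch (insert and search only; delete omitted for brevity):

```python
# A textbook binary search tree: correct, cognitively demanding to write
# the first time, and fully delegatable to AI.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # Standard BST insert: smaller keys go left, larger go right.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    # Returns True if key is present in the tree rooted at root.
    if root is None:
        return False
    if key == root.key:
        return True
    return search(root.left, key) if key < root.key else search(root.right, key)
```

Nothing in this code reveals whether the student understood anything; only the added reflection question does that.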

Caring in a Systems Course

Caring does not mean students have to love your subject. Fink defines it as developing new interests, values, or feelings, including professional values. In a software engineering course, Caring might look like this: Does the student give any thought to writing readable code? Do they consider the next person who will maintain their work? Are they forming a genuine perspective on open-source licensing?

These are not soft skills. They are what separates a developer from a professional.

Learning How to Learn in Every Course

The LL dimension is the most underrepresented in computing curricula and arguably the most important one for long-term career success. The useful life of a specific technology stack is measured in years. The ability to independently pick up a new one is what sustains a career over decades.

What does LL look like as an assessment? It might be a reflection on which resources a student used to solve a difficult problem and why they chose those resources. It could be a post-mortem where students analyze their own debugging process. It can be as simple as asking students to document what they tried before asking for help, making their problem-solving process visible rather than just its final output.

The goal is not students who have learned your course. It is students who know how to keep learning after it ends.

A Note on Integration

A woman stands in front of a whiteboard that has a concept map on it.

Integration is the dimension most likely to shift how a student sees your discipline altogether. It is the moment when a student realizes that the graph algorithms from their CS course are the reason their navigation app works. Or that the ethics discussion in their intro course was not a detour but actually a preview of every technical decision they will make professionally.

Adding Integration to an assignment rarely requires a redesign. It often requires one additional prompt: “How does this connect to something outside this course?” The specificity of the answer will tell you more about a student’s genuine understanding than the code they submitted.

Getting Started: The Fink Audit

Before the next post, try this exercise. Take your next major assignment and map it against the six dimensions. Which dimensions does it genuinely require? Which are completely absent? Then ask yourself: what is the smallest possible change that would add one missing dimension without increasing your grading burden?

References
Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Longman.
ACM Committee for Computing Education in Community Colleges (CCECC). Bloom’s for Computing: Enhancing Bloom’s Revised Taxonomy with Verbs for Computing Disciplines (draft report).
Fink, L. D. (2003). Creating significant learning experiences: An integrated approach to designing college courses. Jossey-Bass.

Tags HigherEd, FacultyLife, CourseDesign, SignificantLearning, ComputingEducation, InstructionalDesign, BloomsTaxonomy, TeachingAndLearning