I Put Two AI Tools to the Test on the Same Task: Here's What I Learned

I had a PDF of workshop slides that needed to become a polished, accessible PowerPoint. 

Screenshot of a presentation slide for creating a research poster. The slide is not ADA compliant.

Instead of rebuilding it from scratch myself, I decided to test two different AI tools on the same task: Microsoft Copilot and Claude. Same source file. Same accessibility requirements. Very different results. Here is what happened with each one, what I learned from the comparison, and what it means for anyone using AI tools in their teaching and design workflows.

Part 1: What Happened with Microsoft Copilot

The Prompt

"Redesign this content into a PowerPoint presentation that is both aesthetically pleasing and strictly follows Section 508 Accessibility guidelines. Crucial Requirement for Titles: Every slide must use the official PowerPoint 'Title' placeholder for its heading. Do not use floating text boxes for slide titles. The title text must be visible in 'Outline View' to ensure screen readers can identify the slide's purpose. Layout & Design: > * Use the standard 'Title and Content' layout for all body slides. * Ensure a high contrast ratio between text and background. * Mark all decorative background elements or repetitive shapes as 'Decorative' within the file metadata. * Provide descriptive Alt Text for all functional icons or data-driven images. Please generate the .pptx file now using Indiana University's official palette.

What Copilot Did

Click poster_presentation_tips.pptx to see the slide deck.


Behind the scenes, Copilot parsed the PDF content, performed a web search to pull IU's official hex codes (Crimson #990000 and Cream #EDEBEB from IU's branding resources), then built the slide deck using Python. The entire process (reading the prompt, searching for brand colors, parsing the PDF, generating the file) took only a few seconds.

That's the part that's genuinely impressive. A task that would take 30 to 60 minutes to complete manually, longer if you're applying accessibility standards carefully, was done almost instantly.

Where It Got Complicated

The IU colors were retrieved correctly, but they didn't fully appear in the slides. This is worth understanding because it's not a bug. It's an architectural limitation.

Copilot built the file programmatically, which means it set colors at the object level rather than through the slide master. No theme-level color changes, no branded backgrounds, no full visual overhaul. It also made a deliberate accessibility trade-off: IU Crimson as a background color risks low contrast with dark text, so Copilot defaulted to safer neutral styling instead. When accessibility and branding competed, it chose accessibility. That's technically the right call, even if the result isn't visually what you'd expect.
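To make the distinction concrete, here is a minimal python-pptx sketch of what object-level styling looks like; the filename, layout index, and slide text are illustrative assumptions, not what Copilot actually ran:

```python
from pptx import Presentation
from pptx.dml.color import RGBColor

IU_CRIMSON = RGBColor(0x99, 0x00, 0x00)  # the hex code Copilot pulled from IU branding

prs = Presentation()  # stock template: its theme and slide master are never touched
slide = prs.slides.add_slide(prs.slide_layouts[1])  # "Title and Content" in that template

slide.shapes.title.text = "Poster Presentation Tips"
# Object-level styling: the color is applied to this one run only, not to the theme,
# so every other slide keeps the template's neutral defaults.
slide.shapes.title.text_frame.paragraphs[0].runs[0].font.color.rgb = IU_CRIMSON

prs.save("poster_presentation_tips.pptx")
```

A true theme-level rebrand would instead mean editing the slide master and theme colors, which a per-object build like this never touches.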

Screenshot of the same PowerPoint redesigned by Copilot with IU colors.


The takeaway is that understanding why Copilot made those choices matters just as much as knowing how to use the tool. The foundation was there and could be refined from that starting point. When I updated the prompt to integrate the IU colors, Copilot added crimson rectangles that then had to be tagged as decorative images. Click Iu_poster_tips.pptx to see the slide deck.

Part 2: What Happened with Claude

The Prompt

I uploaded the same PDF and gave Claude a single instruction:

"Redesign this so that it is a PowerPoint that is aesthetically pleasing and meets Accessibility guidelines as defined by Section 508 of the Rehabilitation Act." Then I waited. (NOTE: If you choose to use Claude to create a PowerPoint, make sure it is not being used for content that contains sensitive material such as student data, etc.)

What Claude Did

Click title_abstract_writing_accessible_v2 1.pptx to see the slide deck.

Within about two minutes, Claude read all 16 pages of content, selected a cohesive teal and gold color palette, and rebuilt all 13 slides from scratch as a fully formatted .pptx file, complete with icon cards, numbered sections, a title slide, and consistent layouts throughout.

A polished PowerPoint title slide with a dark teal background, bold white title text reading "How to Write Titles & Abstracts," an orange accent bar, and "Indiana University — Luddy School of Informatics, Computing, and Engineering" in the footer.

Honestly, I liked what it produced. The design was clean, intentional, and visually stronger than the original PDF. The color choices were bold but readable, the layouts were structured, and it felt like something I would actually use in a workshop setting.

The Process and the Gotchas

After reviewing the slides, I made one follow-up request: mark any shapes that weren't icons as decorative. Claude updated the file. But when I opened the PowerPoint and ran the built-in Accessibility Checker, it flagged 125 items still needing attention.

Screenshot of the Accessibility Assistant reviewing the flagged shapes — most were design elements, not content.

Sample Accessibility Assistant

Nearly all of the 125 items were decorative shapes: squares, rectangles, and design elements that carry no content meaning. Claude's update had been a step in the right direction, but PowerPoint's built-in checker caught what remained.

The fix itself was fast. I clicked through the flagged items, identified which six were actual images needing alt text, wrote descriptions for those, and let PowerPoint automatically mark the remaining 119 as decorative. Less than a minute of cleanup.
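If you'd rather triage from code than click through the checker, a quick python-pptx sketch like the one below lists which shapes still lack alt text. As far as I can tell, python-pptx doesn't expose alt text as a public property, so reading the underlying XML directly is an assumption on my part:

```python
from pptx import Presentation
from pptx.oxml.ns import qn

prs = Presentation("title_abstract_writing_accessible_v2 1.pptx")  # the deck Claude produced

for number, slide in enumerate(prs.slides, start=1):
    for shape in slide.shapes:
        # Alt text lives in the descr attribute of the shape's cNvPr element.
        cnvpr = shape._element.find(".//" + qn("p:cNvPr"))
        alt_text = (cnvpr.get("descr") or "").strip() if cnvpr is not None else ""
        if not alt_text:
            print(f"Slide {number}: shape '{shape.name}' has no alt text")
```

Anything it prints is either a real image that needs a description or a decorative shape that needs the decorative flag.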

One More Snag: The Title Compliance Issue

When I uploaded the finished file to Microsoft 365 and ran the Accessibility Checker again, my titles were flagged as non-compliant. The issue? Claude had used floating text boxes instead of official PowerPoint placeholder elements. The Accessibility Checker didn't recognize them as proper slide titles.

To fix it properly, I rebuilt the presentation using python-pptx, which let me force the use of the Title and Content and Title Slide layouts and assign each title a proper placeholder identifier that the Accessibility Checker actually recognizes. I also resolved a z-order issue where background design elements were occasionally rendering on top of the text.
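A stripped-down sketch of that rebuild is below. The slide text is illustrative, the layout and placeholder indices assume PowerPoint's stock template, and the z-order helper reaches into the slide XML because (again, as far as I can tell) python-pptx has no public reordering API:

```python
from pptx import Presentation

prs = Presentation()  # stock template; layout and placeholder indices below assume it
TITLE_SLIDE = prs.slide_layouts[0]        # "Title Slide" layout
TITLE_AND_CONTENT = prs.slide_layouts[1]  # "Title and Content" layout

cover = prs.slides.add_slide(TITLE_SLIDE)
cover.shapes.title.text = "How to Write Titles & Abstracts"  # real Title placeholder

slide = prs.slides.add_slide(TITLE_AND_CONTENT)
slide.shapes.title.text = "Start with the Research Question"
slide.placeholders[1].text_frame.text = "One sentence, no jargon, no abbreviations."

def send_to_back(shape):
    """Push a decorative shape behind everything else on its slide (z-order fix)."""
    sp_tree = shape._element.getparent()
    sp_tree.remove(shape._element)
    # The first two spTree children are group-properties elements, not shapes,
    # so inserting at index 2 puts the shape at the back of the drawing order.
    sp_tree.insert(2, shape._element)

# Call send_to_back() on any background rectangle that overlaps text.
prs.save("rebuilt_deck.pptx")
```

Because every title goes through shapes.title, it lands in a genuine Title placeholder and shows up in Outline View, which is exactly what the Accessibility Checker looks for.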

I'm currently working on translating this coding-based fix back into a simple prompt-based workflow for anyone who doesn't want to touch Python. I'll share that when I have it worked out.

Putting It Together: What the Comparison Reveals

Both tools got me to a usable, accessible PowerPoint faster than I could have done it manually. But they got there differently, and those differences matter depending on what you're trying to accomplish.

Copilot is fast, integrated directly into Microsoft 365, and appropriately cautious. It prioritized accessibility compliance over branding, which is technically correct even if the visual result felt incomplete. If you're working within a campus ecosystem where Microsoft tools are the standard, Copilot is a solid starting point; just go in knowing you'll likely need to refine the visual branding manually afterward.

Claude made bolder design choices and produced a more visually polished output from a single prompt. The teal and gold palette, the icon cards, the numbered sections: those were all Claude's decisions, and they were good ones. The accessibility gaps it left behind were real but fixable in minutes using PowerPoint's own tools.

Neither tool alone was the complete solution. Both made changes to the original text; in this case the changes were an improvement, but you should check that your text retains its intended meaning. Together, though, AI generation followed by a quick accessibility-checker review got the job done: a process that would have taken hours manually was finished in under 10 minutes.

The Practical Takeaway

Running the accessibility checker at the end should be standard practice regardless of what tool you use to generate your file. Think of it as a spell check for inclusion. The checker doesn't just catch AI mistakes; it catches human ones too. Build it into your workflow and it becomes fast and routine.

AI tools are genuinely useful for this kind of work. They're not plug-and-play replacements for design judgment or accessibility expertise, but they dramatically lower the time cost of getting to a first draft that's already doing most of the right things. That's worth paying attention to.

Quick Post - What Would It Look Like to Love Our Students Into Computing?

Reflections on the SIGCSE TS 2026 Keynote: Love, Learning, and Computing Education

The Pipeline Isn't Neutral

Traditional computing education often uses a "pipeline" metaphor, focusing on attracting and retaining students for technical careers. However, this approach overlooks crucial aspects like meaningful journeys, student belonging, and inclusivity. The keynote advocates for identity-affirming care in computing education. When students can fully engage, they contribute in ways that narrow, identity-stripping classrooms cannot foster.

Identity-Stripping Classrooms. When students must hide who they are, learning suffers and talent is lost. Shows a person whose image appears to be breaking into fragments.

"Whatever Love" in Practice

The keynote highlights practical examples of this philosophy:

  • Kapor Center's Culturally Responsive-Sustaining CS Education Framework: This framework redefines equity-centered computing, viewing students' cultural backgrounds as assets rather than obstacles.

  • Ricarose Roque's work on family and community-centered computing: Projects like Family Creative Learning demonstrate that designing for connection and joy opens computing to a broader audience, leading to a different kind of rigor.

  • Kylie Peppler's scholarship on tools and materials: Peppler's research shows that educational materials are not neutral; they carry cultural histories and implicit messages about belonging. Arts-integrated toolkits can broaden participation and improve learning outcomes.

  • Jayne Everson's ICER 2025 paper, Dreaming of Difference: This paper emphasizes student voice, revealing that secondary students desire distributed accountability, autonomy, community, and collaboration. They seek a redefined rigor that acknowledges their whole selves.

  • Mara Kirdani-Ryan's dissertation on Identity Fragmentation: This work addresses the feeling students have of needing to suppress parts of their identity to fit into CS, identifying it as an environmental issue, not a student problem.

  • Adrienne Gifford's work on language, culture, and CS classroom practice: Gifford's projects, such as Wordplay, illustrate how valuing students' linguistic and cultural identities as intellectual resources can transform research and teaching.

Implications for Learning Design

For those in computing education, the keynote prompts a critical self-assessment: Do our choices in courses, assessments, language, and tools affirm student belonging? This has direct implications for:

  • Authentic assessment in the age of generative AI: Process-oriented evaluation, portfolio work, and student involvement in assessment design are acts of dignity, valuing how students think, not just what they produce.

  • Faculty development: Educators must understand that their choices are not neutral; they either affirm or diminish students.

  • AI in education: The tools we integrate are not neutral actors. "Whatever love" demands interrogating their underlying assumptions before introducing them to students.

A Different Kind of Rigor

The keynote doesn't ask us to lower our standards. It asks us to raise them — to hold ourselves to a higher standard of care, design, and accountability to the people we serve.

What would it mean to design a CS course as an act of love? Not love as sentimentality, but love as Roque and Peppler and Everson and Gifford are practicing it: grounded in evidence, committed to dignity, willing to be uncomfortable in service of something better.

I think that's the question worth sitting with.

Want to explore these ideas further? Watch the full SIGCSE TS 2026 keynote on YouTube, and dig into the scholars cited: the Kapor Center's CS Education Framework, Ricarose Roque's work, Kylie Peppler's research, and Jayne Everson's ICER 2025 paper.