Teaching for Integrity in the Age of AI: From Compliance to Culture

Inspired by Chapter 2 of The Opposite of Cheating: Teaching for Integrity in the Age of AI by Tricia Bertram Gallant and David Rettinger

Academic integrity is not a checklist or compliance form. It is a living culture shaped by what we model, how we design, and the conversations we hold with our students. Bertram Gallant and Rettinger remind us that integrity is cultivated through transparency, design, and dialogue, not surveillance or punishment. The real challenge now is how to teach integrity in an age where AI is everywhere.

The U.S. Department of Education’s 2023 report, “Artificial Intelligence and the Future of Teaching and Learning”, encourages educators to treat AI as a design opportunity for advancing human-centered learning, not a threat to academic honesty. Recent data highlight the urgency of this work. According to the Higher Education Policy Institute’s 2025 Student AI Survey, 92% of undergraduates report using generative AI tools, up from 66% the year before. A Guardian report found that AI-related misconduct cases have tripled since 2023. The takeaway is clear: integrity education has to evolve alongside AI literacy.

2025 Snapshot: AI & Academic Integrity

Use and Attitudes

  • Over 85% of undergraduates use GenAI tools (Inside Higher Ed, HEPI 2025)

  • 61% of students want clear, course-level AI policies

  • 33% of students are concerned about being accused of plagiarism or cheating (Campus Technology)

  • Meanwhile, 45% believe using AI for editing is “acceptable academic support”

Faculty Trends

  • A significant gap exists between student and faculty adoption: only 61% of faculty report using AI in teaching, and of those, a large majority (88%) do so minimally (ASEE AI Training 2025, led by Drs. Adita Jori and Andrew Patz).

  • 82% of instructors use GenAI for feedback or rubric design (EDUCAUSE 2025 AI Landscape Study)

Institutional Responses

  • Detection tools now use watermarking and metadata tracing, but false positives remain a major concern (arXiv 2025)

Model Integrity

Students notice how we work. They learn from the way we check our sources, document decisions, and acknowledge mistakes. Modeling integrity starts with transparency.

As the EDUCAUSE 2025 AI Landscape Study notes, many universities are investing in training that helps faculty engage AI responsibly. Modeling integrity now means showing how to use AI intentionally, not avoid it.

This aligns with findings from Gu and Yan’s 2025 meta-analysis, which showed that students benefit most when teachers scaffold AI use and talk openly about it. When instructors frame AI as a learning partner, not a shortcut, students develop stronger judgment and accountability.

Make Integrity Explicit

Integrity should show up as often in our discussions as it does in our policies. When we talk about it before projects, during collaborations, and after challenges, students begin to see ethics as part of the learning process.

Tricia Bertram Gallant and David Rettinger emphasize that ethical behavior thrives when it’s designed into the experience. Singer-Freeman, Verbeke, and Barre (2025) found that students across all academic levels want clear guidance on what’s acceptable AI use. If we make expectations explicit, we replace anxiety with understanding.

A recent MDPI review on Generative AI and Academic Ethics reinforces this point, noting that while GenAI can enhance engagement and efficiency, it also increases risks to originality and ethical reasoning.

Use Clear, Simple Language (The Social Institute)

Students need to understand AI policies to be able to follow them. That means avoiding jargon and overly technical language.

Instead of: “AI assistance must align with established academic integrity principles.”

Say: “You may use AI for brainstorming ideas, but not for writing entire sections of code or essays.”

Establish Consistent Rules Across Departments or Schools

One of the biggest sources of confusion is inconsistent enforcement of AI rules. Departments or schools can develop universal AI guidelines that apply to all instructors, rather than allowing individual educators to set conflicting rules. Over half of students (58%) report that their school or program has a policy, but a substantial number (28%) say it varies, with some courses or professors having a policy and some not (Forbes 2025). Consider creating an instructor handbook outlining departmental or school-wide AI best practices so they are consistently communicated to students.

Frame Integrity Positively

Instead of framing integrity around rules, frame it around growth. Students respond better when they see ethical choices as part of their professional development.

A Packback editorial on academic integrity in 2025 argues that punitive detection systems often erode trust and discourage learning. When faculty shift from surveillance to conversation, integrity becomes something students take ownership of, not something they fear.

Clarify Expectations

Ambiguity creates rationalization. In the age of AI, clarity is an act of fairness.

The National Centre for AI’s 2025 student study found that first-year students, in particular, feel confused about when and how AI use is acceptable. Faculty can address this by defining boundaries early and discussing examples. Transparency about tools, citations, and documentation helps students learn discernment.

Research from arXiv’s 2025 watermarking study adds that while detection tools are improving, they still make errors. Open conversations about what these systems can and cannot do build trust and understanding. Institutions like MIT and Duke University (22 minute mark) provide sample policy language for faculty to adapt. These statements define what “appropriate help” means and require students to cite AI contributions when used. Clarity transforms anxiety into accountability.
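Neither the arXiv study’s method nor any specific vendor’s detector is reproduced here, but a toy sketch can show why false positives are built into this kind of tool. The Python snippet below illustrates a simplified “green-list” style of statistical watermark detection of the kind discussed in the research literature: it hashes adjacent word pairs to decide whether each word falls in a “green” set, then asks how far the observed green fraction sits above chance. The function names, threshold, and hashing rule are all hypothetical, chosen only for illustration.

    import hashlib
    import math

    GREEN_FRACTION = 0.5   # assumed share of the vocabulary marked "green" at each step
    Z_THRESHOLD = 2.0      # hypothetical cutoff; some unwatermarked texts exceed it by chance

    def is_green(prev_token: str, token: str) -> bool:
        """Toy rule: hash the (previous word, word) pair and take the low bit.
        A real watermark seeds this split during the model's sampling step."""
        digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
        return digest[0] % 2 == 0  # roughly half of all pairs land in the "green" set

    def watermark_z_score(tokens: list[str]) -> float:
        """One-sided z-test: how far does the observed green fraction sit above chance?"""
        pairs = list(zip(tokens, tokens[1:]))
        if not pairs:
            return 0.0
        greens = sum(is_green(prev, tok) for prev, tok in pairs)
        n = len(pairs)
        expected = GREEN_FRACTION * n
        std_dev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
        return (greens - expected) / std_dev

    text = "Integrity is cultivated through transparency, design, and dialogue with students."
    z = watermark_z_score(text.lower().split())
    print(f"z-score: {z:.2f}  flagged: {z > Z_THRESHOLD}")
    # Human-written text usually scores near zero, but short passages can cross the
    # threshold by chance -- the statistical root of the false positives noted above.

The point is not the particular cutoff but the trade-off it encodes: lowering the threshold catches more AI-generated text and flags more honest writers, which is why being transparent with students about these limits matters more than any specific detection score.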

Normalize Conversations About Ethics

Ethics belongs in everyday learning. Conversations about bias, authorship, and data use should happen alongside technical instruction.

A 2025 study on synthetic media ethics found that students value open discussions about deepfakes and misinformation but often lack the frameworks to evaluate them. Integrating these discussions into our teaching helps students connect ethics to both academic and professional practice.

Use the Syllabus as a Moral Document

The syllabus sets the tone for integrity. Transparent grading policies, clear AI statements, and flexible revision options communicate fairness and care.

Universities are redesigning their syllabi and assessments to support “authentic learning” instead of reactive policing. The University of Melbourne’s Assured Learning model and the UCL Education AI Initiative are leading examples, focusing on oral exams, reflective portfolios, and transparent assessment design.

Respond to Misconduct Constructively

When integrity violations occur, they can become moments for growth. Reflection, accountability, and dialogue teach more than punishment ever could.

The Packback 2025 Integrity Report encourages “growth-oriented remediation,” noting that many flagged cases stem from confusion, not intention. At Indiana University, we can uphold policy while still approaching each case as a learning opportunity.

Building a Culture of Integrity

Integrity thrives when it’s shared across the institution. Faculty, staff, and students each play a role.

The University of New South Wales’ 2025 partnership with OpenAI illustrates this shift: giving staff controlled access to ChatGPT within a responsible use framework. When universities model integrity through their own practices, students learn that ethics is not a barrier to innovation—it’s the framework that sustains it.

Final Thought

Teaching for integrity in the age of AI is about creating conditions where honesty becomes the natural choice. When we model transparency, design for trust, and engage in open dialogue, we teach more than content—we teach character.

As Amanda McKenzie, Director of Academic Integrity at the University of Waterloo, Canada, shares, “Integrity is not the opposite of cheating. It’s the presence of purpose.” When that purpose runs through our teaching, policies, and partnerships, we do more than protect academic standards. We prepare students to lead with integrity in a world increasingly shaped by AI.

Building a Framework for Academia-Industry Partnerships and AI Teaching and Learning Podcasts

In March, I shared an overview of the “Practitioner to Professor (P2P)” survey that the CRA-Education / CRA-Industry working group analyzed. They recently released a report titled Breadth of Practices in Academia-Industry Relationships, which explores a range of engagement models, from research partnerships and personnel exchanges to master agreements and regional innovation ecosystems.

Key Findings and Observations

The report organizes its findings from the workshop into three categories: observations, barriers, and common solutions.

  • Observations: A major theme was the critical need to embed ethical training into AI and computing curricula through both standalone courses and integrated assignments. It was noted that while academia is best suited to drive curriculum development, input from industry is essential to ensure the content remains relevant to real-world applications.

  • Barriers: Key barriers to successful collaboration were identified, including cultural differences and misconceptions between academic and industry partners. For instance, industry’s focus on near-term goals can clash with academia’s long-term vision. A significant practical barrier is the prohibitive cost of cloud and GPU hardware, which limits students’ experience with cloud and AI development tools.

  • Common Solutions: Effective solutions include the fluid movement of personnel between organizations through internships, co-ops, sabbaticals, and dual appointments. Streamlined master agreements at the institutional level also help facilitate research collaborations by reducing administrative friction.

Strategies for Research Collaboration

The report outlines a multi-level approach to enhancing research partnerships:

  • Individuals: Faculty and industry researchers can initiate relationships through internal seed grants, sabbaticals in industry, dual appointments, and by serving on industry advisory boards.

  • Departments: Departmental leaders can foster collaboration by strategically matching faculty expertise with industry needs, offering administrative support, and building a strong departmental brand with local industry.

  • University Leadership: Senior leaders can address systemic barriers by creating a unified, institution-wide strategy, developing flexible funding models, and implementing master agreements to streamline partnerships.

  • Regional Ecosystems: The report emphasizes the importance of universities partnering with local industries and startups to build thriving regional innovation ecosystems, which can drive economic development and secure government support.

Education and Workforce Development 

With the rise of generative AI, the report highlights an urgent need for universities and industry to partner on education.

  • Curriculum Adaptation: Computing curricula need to be updated to include foundational concepts in DevOps and scalable systems, which are often not part of the core curriculum. While AI literacy is essential, the report suggests a balance, with 80% of instruction remaining focused on core computer science skills. Ethical reasoning should be integrated throughout the curriculum, not just in a single course.

  • Workforce Programs: To meet industry demands for job-ready graduates, the report advocates for university-industry partnerships in co-op programs, internships, and capstone projects. It also points to the need for universities to offer flexible programs like certificates and online courses to help upskill and reskill the existing workforce.

Recommendations

The report concludes with five main recommendations for universities, industry, and government:

  1. Enhance research impact by combining academia’s long-term vision with real-world problems from industry. This can be achieved by embedding faculty in industry and industry researchers in universities.

  2. Leverage the convening power of universities to build partnerships that benefit the wider community, using mechanisms like industrial advisory boards and research institutes.

  3. Accelerate workforce development by aligning university programs with regional innovation ecosystems and having industry invest in talent through fellowships and internships.

  4. Deliver industry-relevant curricula grounded in core computing principles, and collaborate with industry experts to co-design courses in high-demand areas like AI and cloud computing.

  5. Establish new incentives and metrics to recognize and reward faculty for their contributions to industry partnerships in promotion and tenure evaluations.

AI Teaching and Learning Podcasts: What If College Teaching Was Redesigned With AI In Mind?

https://learningcurve.fm/episodes/what-if-college-teaching-was-redesigned-with-ai-in-mind

A former university president is trying to reimagine college teaching with AI in mind, and this year he released an unusual video that provides a kind of artist’s sketch of what that could look like. For this episode, I talk through the video with that leader, Paul LeBlanc, and get some reaction to the model from longtime teaching expert Maha Bali, a professor of practice at the Center for Learning and Teaching at the American University in Cairo.

The Opposite of Cheating Podcast

https://open.spotify.com/show/5fhrnwUIWgFqZYBJWGIYml

Produced by the authors of the book of the same name, the podcast shares the real-life experiences, thoughts, and talents of educators and professionals who are working to teach for integrity in the age of AI. The series features engaging conversations with brilliant innovators, teachers, leaders, and practitioners who are both resisting and integrating GenAI into their lives. The central value undergirding everything is, of course, integrity!

Teaching in Higher Ed podcast, “Cultivating Critical AI Literacies with Maha Bali”.

https://teachinginhighered.com/podcast/cultivating-critical-ai-literacies/

In the episode, host Bonni Stachowiak and guest Maha Bali, a Professor of Practice at the American University in Cairo, explore the complexities of integrating artificial intelligence into higher education.

Bali advocates for a critical pedagogical approach, rooted in the work of Paulo Freire, urging educators to actively experiment with AI to understand its limitations and biases. The discussion highlights significant issues of cultural and implicit bias within AI systems. Bali provides concrete examples, such as AI generating historically inaccurate information about Egyptian culture, misrepresenting cultural symbols, and defaulting to stereotypes when prompted for examples of terrorism.

The Actual Intelligence Podcast

This episode features a conversation with Dr. Robert Niebuhr from ASU about his recent article in Inside Higher Ed, “AI and Higher Ed: An Impending Collapse.” Full podcast: https://podcasts.apple.com/us/podcast/is-higher-ed-to-collapse-from-a-i/id1274615583?i=1000725770519

With Bill Gates having just said that AI will replace most teachers within ten years, it seems essential that professional educators attune to the growing presence of AI in education, particularly its negative gravitational forces.