Responsible AI in Teaching, Assessment, and Academic Integrity

Audience: Faculty

Format: Faculty development seminar or panel session

Educator-Led • Policy-Aligned • Pedagogy First

Make pedagogical decisions about AI without prescribing one model.

This seminar supports faculty in navigating authorship, assignment design, transparency, and academic integrity in a way that protects learning goals and preserves professional judgment. The point is not enforcement by tool; it is clarity by design, communicated in language students can follow.

Stance: learning goals determine policy
Why this exists: AI policy in teaching is politically sensitive because it touches fairness, trust, and professional autonomy. It calls for an educator, not an IT trainer, and for shared language that reduces conflict.

Guiding Principles

Clarity, fairness, autonomy, and learning-centered boundaries.

Learning Goals First

AI allowances and restrictions should map to the skill being assessed, not to the existence of a tool.

Transparency Over Suspicion

Disclosures and process evidence reduce conflict more effectively than detection or guesswork.

Consistency With Flexibility

Departments share a baseline language, while faculty retain discipline-specific choices and rationale.

Seminar posture: We build a shared framework for decisions; participants then adapt it to their course context without being forced into one stance.

Core Topics

Practical faculty decisions with clear student-facing language.

Authorship and Ownership
  • Define authorship as a spectrum (idea, outline, draft, revision, final).
  • Clarify what students must do themselves for the learning goal to be valid.
  • Use discipline norms, not universal slogans, to define “ownership.”
Student-facing move: “Tell me what you did, what AI did, and what you changed.”

Assignment Design
  • Require process artifacts (planning notes, checkpoints, drafts, reflections).
  • Use lived context: course readings, lab data, field observations, or unique prompts.
  • Build oral defense, annotation, or decision logs into grading, proportionally.
Resilient pattern: “Claim + evidence + why this evidence fits this claim (course-specific).”
Resilient pattern: “Version history + revision rationale + what changed after feedback.”

Disclosure Expectations
  • Low-stakes disclosure: a brief tool-use note stating what the tool helped with.
  • High-stakes disclosure: a structured log (tool, purpose, prompt summary, edits made).
  • Normalize disclosure as academic honesty, not as confession.

Responding to Concerns
  • Focus on the learning evidence in the work, not vibes about style.
  • Use checkpoints and process requirements to reduce ambiguity.
  • Adopt a restorative first response when boundaries were unclear.
Instructor move: “Show me how you arrived here. If you can, you are learning, and we can grade fairly.”

Policy Mapping
  • Map each assessment to the skill it measures, then decide what assistance is compatible.
  • Write student-facing policy language in plain terms (allowed, limited, not allowed).
  • Ensure policies support equity and do not punish students for access differences.

Transparency and Disclosure Models

Pick a model that matches the stakes and learning goal.

Model A: Simple Statement

A one-sentence disclosure describing tool use and purpose. Best for low-stakes formative work.

Model B: Structured Log

Tool, purpose, prompt summary, and what the student changed. Best for medium-stakes submissions.
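
As an illustration only (not part of the seminar materials), here is a minimal sketch of what one Model B log entry might capture if a department wanted a consistent, form-like version. The field names (tool, purpose, prompt_summary, edits_made) are assumptions that mirror the fields named above, not a standard format.

```python
from dataclasses import dataclass

# Hypothetical sketch of a single Model B disclosure entry. Field names
# are illustrative assumptions mirroring the fields listed above.
@dataclass
class DisclosureEntry:
    tool: str            # which tool the student used
    purpose: str         # what the student used it for
    prompt_summary: str  # short paraphrase of the prompts given
    edits_made: str      # what the student changed before submitting

# Example of what a student might submit with a medium-stakes assignment:
entry = DisclosureEntry(
    tool="AI writing assistant",
    purpose="brainstorm counterarguments for my draft",
    prompt_summary="asked for objections to my main claim",
    edits_made="kept two objections and rewrote both in my own words",
)
print(entry)
```

A plain-text note or LMS form with the same four fields works just as well; the point is that the fields are fixed in advance, so disclosure is routine rather than improvised.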

Model C: Process Evidence

Drafts, checkpoints, reflections, and brief defense. Best for high-stakes or capstone assessments.

Key benefit: Disclosure models reduce accusations because they replace suspicion with visible process evidence.

Designing AI-Resilient Assignments

Assignments that reveal learning even when tools exist.

Local Context Anchors

Require course-only readings, local data, lab results, or class discussions that AI tools cannot convincingly fabricate.

Checkpointed Process

Break the work into staged submissions: proposal, outline, draft, revision notes, and final. Grade the process lightly but consistently.

Decision Logs

Students explain key choices (sources selected, claims removed, revisions made) and justify trade-offs.

Oral Defense Lite

Short, respectful check-ins that confirm understanding. Use randomly selected prompts or annotated passages.

Assignment design rule: If a student can only succeed by thinking in your course context, the assignment is resilient.

Evaluating Student AI Use Fairly

Consistency, evidence, and a defensible process.

  • State expectations in advance, in student language.
  • Use assignment design evidence (checkpoints, drafts) rather than detection claims.
  • Separate “policy violation” from “learning gap” and respond proportionally.
When a concern arises, respond in stages:
  1. Clarify: ask for process evidence and disclosure.
  2. Teach: correct misunderstanding of allowed use.
  3. Revise: allow a restorative resubmission when appropriate.
  4. Escalate: only when there is clear evidence and consistent policy language.