Responsible AI in Teaching, Assessment, and Academic Integrity
Audience: Faculty
Format: Faculty development seminar or panel session
Make pedagogical decisions about AI without prescribing one model.
This seminar supports faculty in navigating authorship, assignment design, transparency, and academic integrity in a way that protects learning goals and preserves professional judgment. The point is not enforcement by tool; it is clarity by design, communicated in language students can follow.
Guiding Principles
Clarity, fairness, autonomy, and learning-centered boundaries.
Learning Goals First
AI allowances and restrictions should map to the skill being assessed, not to the existence of a tool.
Transparency Over Suspicion
Disclosures and process evidence reduce conflict more effectively than detection or guesswork.
Consistency With Flexibility
Departments share a baseline language, while faculty retain discipline-specific choices and rationale.
Core Topics
Practical faculty decisions with clear student-facing language.
- Define authorship as a spectrum (idea, outline, draft, revision, final).
- Clarify what students must do themselves for the learning goal to be valid.
- Use discipline norms, not universal slogans, to define “ownership.”
- Require process artifacts (planning notes, checkpoints, drafts, reflections).
- Use lived context, course readings, lab data, field observations, or unique prompts.
- Build oral defense, annotation, or decision logs into grading, proportionally. For example:
  - “Claim + evidence + why this evidence fits this claim (course-specific).”
  - “Version history + revision rationale + what changed after feedback.”
- Low-stakes disclosure: brief tool-use note, what it helped with.
- High-stakes disclosure: structured log (tool, purpose, prompt summary, edits made).
- Normalize disclosure as academic honesty, not as confession.
- Focus on the learning evidence in the work, not hunches about writing style.
- Use checkpoints and process requirements to reduce ambiguity.
- Adopt a restorative first response when boundaries were unclear.
- Map each assessment to the skill it measures, then decide what assistance is compatible.
- Write student-facing policy language in plain terms (allowed, limited, not allowed).
- Ensure policies support equity and do not punish students for access differences.
Transparency and Disclosure Models
Pick a model that matches the stakes and learning goal.
Model A: Simple Statement
One-sentence disclosure describing tool use and purpose. Best for low-stakes formative work.
Model B: Structured Log
Tool, purpose, prompt summary, and what the student changed. Best for medium-stakes submissions.
Model C: Process Evidence
Drafts, checkpoints, reflections, and brief defense. Best for high-stakes or capstone assessments.
Designing AI-Resilient Assignments
Assignments that reveal learning even when tools exist.
Local Context Anchors
Require course-only readings, local data, lab results, or class discussions that AI cannot invent convincingly.
Checkpointed Process
Break the assignment into staged submissions: proposal, outline, draft, revision notes, and final. Grade the process lightly but consistently.
Decision Logs
Students explain key choices (sources selected, claims removed, revisions made) and justify trade-offs.
Oral Defense Lite
Short, respectful check-ins that confirm understanding. Use randomly selected prompts or annotated passages.
Evaluating Student AI Use Fairly
Consistency, evidence, and a defensible process.
- State expectations in advance, in student language.
- Use assignment design evidence (checkpoints, drafts) rather than detection claims.
- Separate “policy violation” from “learning gap” and respond proportionally.
- Clarify: ask for process evidence and disclosure.
- Teach: correct misunderstanding of allowed use.
- Revise: allow a restorative resubmission when appropriate.
- Escalate: only when there is clear evidence and consistent policy language.