Ethical AI and Critical Evaluation Studio

Audience: Faculty and graduate students

Format: Seminar or discussion-based workshop

Trust Through Rigor, Not Moralizing

Strengthen judgment in the language of your discipline.

This studio deepens critical evaluation and disciplinary reflection. Participants examine bias, hallucinations, power, authorship, and epistemology through real failures and structured case discussion. The outcome is practical: better decisions, clearer reasoning, and a shared vocabulary for uncertainty.

Purpose: deepen judgment
Mode: case discussion
Standard: defensible evaluation
Why this exists: It reinforces trust. It signals that the university takes ethics seriously, without turning the session into moral performance or fear-based training.

Core Topics

Bias, hallucinations, power, and discipline-specific consequences.

Bias in Training Data

What gets represented, what gets erased, and how bias propagates into “normal” outputs.

Hallucinations and Confidence Errors

Overconfident claims, fabricated citations, and the difference between fluency and truth.

Power, Authorship, and Epistemology

Who is authorized to know, who is credited, and what counts as legitimate evidence.

Discipline-Specific Risks and Benefits

Different fields carry different harms, tolerances, and verification standards.

Case Studies and Real Failures

Real-world breakdowns as learning artifacts, analyzed without sensationalism.

Claims Under Uncertainty

How to talk about confidence, evidence, and verification in a way students can imitate.

Studio Flow

A seminar structure that produces shared judgment, not compliance.

Phase 1: Frame the Question

Define the discipline context, the type of claim, the stakes, and what “good evidence” looks like here.

Context first
Stakes named

Phase 2: Examine the Output

Separate fluency from support. Identify uncertainty, missing assumptions, and plausible failure modes.

Assumptions
Evidence gap

Phase 3: Verification and Accountability

Decide how the claim can be checked, who is responsible, and what disclosures are appropriate.

Check methods
Disclosure

Phase 4: Decide and Document

Make a defensible decision, record the reasoning, and name the boundary conditions for future use (a sample log entry is sketched below).

Decision log
Boundaries
Studio rule: We do not punish uncertainty. We punish unexamined confidence.
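
A minimal sketch of one Phase 4 decision-log entry, in Python; the field names and values are illustrative assumptions, not a required schema:

    # Hypothetical decision-log entry; adapt the fields to your context.
    log_entry = {
        "context": "graduate methods seminar, literature review",
        "claim_type": "factual",
        "decision": "conditionally allowed",
        "reasoning": "useful for scoping; every citation independently verified",
        "responsible": "course instructor",
        "boundaries": ["not for final submissions", "revisit each term"],
    }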

Case Studio

Use real failures to train judgment and build shared language.

  • Identify where confidence exceeds support.
  • List what evidence would be required to validate the claim.
  • Design a student-facing verification routine that is teachable (one sketch follows this list).

  • Audit which perspectives are elevated, muted, or erased.
  • Decide what “balanced” means in your discipline.
  • Write prompt and rubric language that surfaces bias rather than hiding it.

  • Locate which parts of reasoning are being outsourced.
  • Define the “non-negotiable cognition” students must still perform.
  • Design guardrails that preserve learning goals without banning tools.

  • Define what “acceptable error” means in this context.
  • Specify required verification steps before outputs can be used.
  • Establish disclosure rules and human responsibility, clearly and consistently.
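
One sketch of a teachable verification routine, in Python: it checks whether a cited DOI resolves in the public Crossref registry (api.crossref.org). The function name and workflow are illustrative assumptions, not a prescribed method; a resolving DOI is necessary but not sufficient evidence that a citation is real.

    # Minimal sketch, assuming the Crossref REST API as the check.
    # A 404 means no record exists: flag the citation for manual review.
    import urllib.error
    import urllib.request

    def doi_resolves(doi: str) -> bool:
        """Return True if Crossref has a record for this DOI."""
        url = f"https://api.crossref.org/works/{doi}"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.HTTPError:
            return False

    # Students triage DOIs pasted from an AI-generated bibliography.
    for doi in ["10.1038/nature14539", "10.9999/hypothetical.2024.001"]:
        verdict = "verifiable" if doi_resolves(doi) else "flag for manual check"
        print(f"{doi}: {verdict}")

The check is teachable because it is fast and explainable, and it leaves the real work to the student: confirming that the record matches the claimed authors, title, and venue.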

Critical Evaluation Toolkit

A practical method that participants can reuse and teach; a sketch of the claim checklist appears after the list below.

  • Factual claim: requires sources and verifiable references.
  • Interpretive claim: requires warrants and disciplinary framing.
  • Normative claim: requires values stated explicitly and supported reasoning.
  • Procedural claim: requires step validation and context-specific constraints.
  • Identify omissions and default assumptions.
  • Check whether “neutral” language hides a viewpoint.
  • Compare against a trusted baseline source or disciplinary canon.
  • Require uncertainty labeling in plain language.
  • List verification steps and the fastest safe check.
  • Decide whether the task is “assistable” or “must be human.”
  • Define what must be disclosed and what need not be.
  • Provide a simple disclosure statement participants can adapt (for example: “AI tools assisted with drafting; all claims and citations were verified by the author”).
  • Require a brief “verification note” in high-stakes contexts.
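
The claim taxonomy above, sketched as a reusable checklist in Python; the structure, wording, and function name are assumptions meant to be adapted, not a fixed rubric:

    # Map each claim type to the support it requires before use.
    # Illustrative sketch; adapt the checks to your discipline.
    CLAIM_CHECKS = {
        "factual": ["cite sources", "confirm references resolve"],
        "interpretive": ["state warrants", "name the disciplinary frame"],
        "normative": ["make values explicit", "support the reasoning"],
        "procedural": ["validate each step", "note context constraints"],
    }

    def checklist(claim_type: str) -> list[str]:
        """Return the checks a claim must pass, or a prompt to classify it."""
        return CLAIM_CHECKS.get(claim_type.lower(), ["classify the claim first"])

    print(checklist("Factual"))  # ['cite sources', 'confirm references resolve']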

Outcomes

Trust built through shared criteria and disciplined thinking.

Shared Vocabulary for Uncertainty

Faculty and graduate students can name confidence, evidence, and limits without stigma.

Discipline-Aware Evaluation

Participants adapt verification standards to the epistemology and risk profile of their fields.

Case-Based Judgment

Real failures become usable learning artifacts, not headlines or fear campaigns.

Defensible Decisions

Participants can explain why an AI use is acceptable, unacceptable, or conditionally allowed.

Closing message: This studio does not ask for agreement. It builds the conditions for trust by making reasoning visible.