Foundational AI Literacy for Faculty and Staff

Audience: All faculty and professional staff

Format: 60–90 minute workshop or self-paced module

Shared Language • Reduced Fear • Safer Practice

Build a calm baseline before AI becomes a daily assumption.

This course establishes a shared vocabulary and a practical mental model for generative AI. The aim is not to create experts; it is to prevent chaos, reduce hype, and set clear expectations so later training feels grounded and safe.

Focus: clarity, limits, acceptable use
Why this exists: This session reduces early misuse and uncertainty. It creates a baseline so follow-up sessions can move faster without feeling unsafe or speculative.

Desired Outcomes

Shared language, clearer judgment, fewer avoidable errors.

Common Vocabulary

Staff can explain what generative AI is, what it is not, and why it can sound confident while being wrong.

Safer Decision-Making

Staff can spot likely hallucinations, know when verification is required, and avoid over-trusting outputs.

Aligned Expectations

Staff understand institutional acceptable use, privacy boundaries, and what to do when unsure.

Baseline guarantee: By the end, everyone can answer: “What is this system doing?”, “Where can it go wrong?”, and “What is acceptable here?”

60–90 Minute Workshop Agenda

Works as live delivery or as module chapters.

Opening and Framing

  • Set tone: “We are building shared language, not chasing tools.”
  • Two myths to retire: “AI is magic” and “AI is useless.”
  • Clarify stakes: student learning, privacy, integrity, and reputation.

What Generative AI Is and Is Not

  • Generative AI predicts likely text (or images) based on patterns.
  • It does not “know” in the human sense; it synthesizes.
  • Why it can be fluent while wrong, and why that matters in schools.
Anchor phrase: “Fluent does not mean factual.”

How Outputs Are Produced (Plain Language)

  1. Prompt: the instruction and context you provide.
  2. Patterning: the system consults its learned patterns, not your local documents unless you provide them.
  3. Prediction: it repeatedly chooses likely next words to build an answer.
  4. Result: coherent language that may still contain errors or invented details.
Practical takeaway: better inputs reduce error, but they do not eliminate it. A toy sketch follows.
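
For facilitators who want to make the four steps concrete, the toy Python script below is one way to demonstrate them. It is a deliberate oversimplification, not how any real model is built: real systems learn billions of statistical patterns rather than a small lookup table. The corpus, the bigram table, and the sample output in the comments are all invented for illustration.

```python
import random

# A toy "pattern model": bigram counts learned from four short sentences.
corpus = (
    "the policy requires parental consent . "
    "the policy requires staff training . "
    "the tool requires internet access . "
    "the tool predicts likely words ."
).split()

# Step 2 (patterning): record which word tends to follow which.
patterns = {}
for current_word, next_word in zip(corpus, corpus[1:]):
    patterns.setdefault(current_word, []).append(next_word)

def generate(prompt_word, max_words=5):
    """Steps 3 and 4 (prediction, result): repeatedly pick a likely
    next word to build a response. Nothing here checks truth."""
    words = [prompt_word]
    for _ in range(max_words):
        options = patterns.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # likely, not verified
    return " ".join(words)

# Step 1 (prompt): the instruction and context you provide.
print(generate("the"))
# One possible output: "the policy requires internet access ."
# Fluent and plausible, yet that sentence appears nowhere in the
# corpus: the model blended two learned patterns.
```

The payoff is the last comment: the script can assert “the policy requires internet access” even though no source sentence says that. Blended patterns produce fluent, invented claims, which is exactly what a hallucination is.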

Often Useful For

Drafting, summarizing provided text, generating examples, differentiating explanations, brainstorming, rewriting for tone and readability, creating checklists, and tutoring practice prompts.

High-Risk Zones

Facts without sources, citations, legal or policy interpretations, medical guidance, student records, confidential information, and anything where an error creates harm.

Hallucinations and Overconfidence

  • Hallucinations: plausible-sounding inventions, especially names, dates, quotes, policies, and citations.
  • Overconfidence: a polished answer can hide uncertainty.
  • Verification rule: if it matters, confirm it outside the model.

Bias, Data Sources, Uncertainty

  • Bias can appear through data imbalances, stereotypes, missing perspectives, and unsafe generalizations.
  • “Training data” does not equal “approved curriculum” or “truth.”
  • Uncertainty is normal; staff need habits for checking and correcting.
Staff habit: ask, “Who is missing?”, “What is assumed?”, and “What would I verify?”

Institutional Expectations and Acceptable Use

  • Define what staff may do, what requires approval, and what is prohibited.
  • Privacy first: do not enter identifiable student data into unapproved tools.
  • Professional integrity: document AI assistance when it meaningfully shapes deliverables.
  • Escalation path: when unsure, pause and ask the designated point of contact.
Simple rule: if you would not paste it into a public website, do not paste it into an AI tool.

Closing Commitments

  • Each participant chooses one low-risk use they will try within two weeks.
  • Each participant names one boundary they will not cross.
  • Optional: a follow-up micro-session focused on approved classroom workflows.
Self-paced conversion: Each segment becomes a short page with one example, one reflection prompt, and a two-question check.

Core Topics (Course Content)

Each core topic below is expanded into teachable chunks.

What generative AI is and is not

What it does: generates language and media by pattern-based prediction. What it does not do: guarantee truth, provide lived judgment, or replace professional responsibility.

How AI systems produce outputs (plain language)

Prompt in, patterns consulted, probability chooses the next word, repeated until a full response appears. The result is coherent language, not confirmed facts.

Capabilities, limitations, hallucinations

Useful for drafting, rewriting, and idea generation. Risky for factual claims without sources. Hallucinations appear as confident, plausible details that are false.

Bias, data sources, uncertainty

Bias can surface through training data imbalance and common stereotypes. Staff need habits for checking, correcting, and ensuring representation and fairness.

Institutional expectations and acceptable use

Clear boundaries, especially around privacy and student information; documentation norms; and a simple escalation process for when staff are unsure.

Why this baseline matters

Without a shared foundation, staff will either avoid tools entirely or use them inconsistently. This session stabilizes the system before deeper training begins.

Workshop Activities

Light lift, high clarity, no tech pressure.

Activity 1: Sort the Scenarios

  • Present 10 scenario cards (lesson ideas, parent email draft, grading commentary, IEP notes, student discipline notes).
  • Teams sort into: OK, OK with safeguards, Not OK.
  • Debrief: focus on privacy and verification.

Activity 2: Spot the Error

  • Show a short “AI answer” containing 3 subtle factual errors and one invented citation.
  • Participants underline what they would verify and why.
  • Debrief with a simple rule: verify names, numbers, policies, and quotes.

Activity 3: Prompt Progression

  1. Prompt A: vague request (observe generic output).
  2. Prompt B: add context and constraints (observe improvement).
  3. Prompt C: require citations and a “confidence note” (observe safer framing).
Goal: build respect for specificity, not dependence on the tool. An illustrative prompt set follows.
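
For teams that want ready-made materials, here is one possible prompt set for the progression, sketched in Python so the three versions can be compared side by side. Everything in it is a placeholder: the field-trip scenario is invented, and ask() is a hypothetical stand-in, not any real tool's API.

```python
# Illustrative prompts for the three-step progression. The scenario is
# invented, and ask() is a hypothetical stand-in, not a real tool's API.

PROMPT_A = "Write a note to families about the field trip."

PROMPT_B = (
    "Write a 150-word note to families of 4th graders about next "
    "Friday's science museum field trip. Include what to bring and "
    "the permission-slip deadline. Warm, plain language."
)

PROMPT_C = (
    PROMPT_B
    + " Cite a source for any policy you mention, and end with a "
    "confidence note flagging anything I should verify myself."
)

def ask(prompt):
    # Stand-in so the exercise also works where live AI use is not
    # allowed: it echoes the prompt instead of calling a model.
    return f"[response to: {prompt[:48]}...]"

for label, prompt in [("A", PROMPT_A), ("B", PROMPT_B), ("C", PROMPT_C)]:
    print(f"Prompt {label} ->", ask(prompt))
```

Prompt C is the key move: it does not ask for more trust, it asks the tool to expose what still needs human verification.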
Facilitator option: If live AI use is not allowed, run these activities with printed examples and still hit the same outcomes.

Norms and Safeguards

A stable baseline depends on predictable guardrails.

Privacy First

Do not enter identifiable student data or sensitive records into unapproved tools. Default to caution.

Verification Habit

If it matters, verify it. Names, dates, policies, quotes, citations, and numbers require checking.

Professional Responsibility

AI can assist drafting, but accountability stays with the educator or staff member who uses it.

Implementation note: Place your district or school-specific acceptable use language here as a short bulleted list, then link to the full policy.