Foundational AI Literacy for Faculty and Staff
Audience: All faculty and professional staff
Format: 60–90 minute workshop or self-paced module
Build a calm baseline before AI becomes a daily assumption.
This course establishes a shared vocabulary and a practical mental model for generative AI. The aim is not to create experts; it is to prevent chaos, reduce hype, and set clear expectations so that later training feels grounded and safe.
Desired Outcomes
Shared language, clearer judgment, fewer avoidable errors.
Common Vocabulary
Staff can explain what generative AI is, what it is not, and why it can sound confident while being wrong.
Safer Decision-Making
Staff can spot likely hallucinations, know when verification is required, and avoid over-trusting outputs.
Aligned Expectations
Staff understand institutional acceptable use, privacy boundaries, and what to do when unsure.
60–90 Minute Workshop Agenda
Works as live delivery or as module chapters.
- Set tone: “We are building shared language, not chasing tools.”
- Two myths to retire: “AI is magic,” and “AI is useless.”
- Clarify stakes: student learning, privacy, integrity, and reputation.
- Generative AI predicts likely text (or images) based on patterns.
- It does not “know” in the human sense; it synthesizes.
- Why it can be fluent while wrong, and why that matters in schools.
- Prompt: the instruction and context you provide.
- Patterning: the system draws on patterns learned during training, not on your local documents unless you provide them.
- Prediction: it chooses likely next words repeatedly to build an answer.
- Result: coherent language that may still contain errors or invented details (a toy sketch of this loop follows below).
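For facilitators who want something concrete to show, the sketch below is a deliberately tiny stand-in for that loop: a hand-written pattern table (`next_word_patterns`) and a sampler (`generate`), both invented for this illustration. Real systems learn billions of patterns automatically, so treat this as a teaching prop, not as how production models work.

```python
import random

# Teaching prop only: a hand-written "pattern table" standing in for
# the billions of statistical patterns a real model learns in training.
next_word_patterns = {
    "The":     [("student", 0.6), ("teacher", 0.4)],
    "student": [("submitted", 0.7), ("asked", 0.3)],
    "teacher": [("reviewed", 0.8), ("asked", 0.2)],
}

def generate(start, max_words=4):
    """Repeatedly pick a likely next word -- the Prediction step above."""
    words = [start]
    while len(words) < max_words:
        options = next_word_patterns.get(words[-1])
        if not options:  # no learned pattern for this word: stop
            break
        choices, weights = zip(*options)
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# Fluent output is not verified output: the sampler never checks facts.
print(generate("The"))  # e.g. "The student submitted"
```

The point to land in the room: the loop optimizes for plausibility, never for truth, which is exactly why fluent output still needs verification.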
Often Useful For
Drafting, summarizing provided text, generating examples, differentiating explanations, brainstorming, rewriting for tone and readability, creating checklists, and tutoring practice prompts.
High-Risk Zones
Facts without sources, citations, legal or policy interpretations, medical guidance, student records, confidential information, and anything where an error creates harm.
- Hallucinations: plausible-sounding inventions, especially names, dates, quotes, policies, and citations.
- Overconfidence: a polished answer can hide uncertainty.
- Verification rule: if it matters, confirm it outside the model.
- Bias can appear through data imbalances, stereotypes, missing perspectives, and unsafe generalizations.
- “Training data” does not equal “approved curriculum” or “truth.”
- Uncertainty is normal; staff need habits for checking and correcting.
- Define what staff may do, what requires approval, and what is prohibited.
- Privacy first: do not enter identifiable student data into unapproved tools.
- Professional integrity: document AI assistance when it meaningfully shapes deliverables.
- Escalation path: when unsure, pause and ask the designated point of contact.
- Each participant chooses one low-risk use they will try within two weeks.
- Each participant names one boundary they will not cross.
- Optional: a follow-up micro-session focused on approved classroom workflows.
Core Topics (Course Content)
The listed topics, expanded into teachable chunks.
What generative AI is and is not
What it does: generates language and media by pattern-based prediction. What it does not do: guarantee truth, provide lived judgment, or replace professional responsibility.
How AI systems produce outputs (plain language)
Prompt in, patterns consulted, probability chooses the next word, repeated until a full response appears. The result is coherent language, not confirmed facts.
Capabilities, limitations, hallucinations
Useful for drafting, rewriting, and idea generation. Risky for factual claims without sources. Hallucinations appear as confident, plausible details that are false.
Bias, data sources, uncertainty
Bias can surface through training data imbalance and common stereotypes. Staff need habits for checking, correcting, and ensuring representation and fairness.
Institutional expectations and acceptable use
Clear boundaries, especially around privacy and student information, plus documentation norms and a simple escalation process for when staff are unsure.
Why this baseline matters
Without a shared foundation, staff will either avoid tools entirely or use them inconsistently. This session stabilizes the system before deeper training begins.
Workshop Activities
Light lift, high clarity, no tech pressure.
- Present 10 scenario cards (lesson ideas, parent email draft, grading commentary, IEP notes, student discipline notes).
- Teams sort into: OK, OK with safeguards, Not OK.
- Debrief: focus on privacy and verification.
- Show a short “AI answer” containing three subtle factual errors and one invented citation.
- Participants underline what they would verify and why.
- Debrief with a simple rule: verify names, numbers, policies, and quotes.
- Prompt A: vague request (observe generic output).
- Prompt B: add context and constraints (observe improvement).
- Prompt C: require citations and a “confidence note” (observe safer framing). Illustrative wording for all three prompts follows below.
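For facilitators who want ready-made wording, one possible version of the three demo prompts is sketched below as Python strings. Every concrete detail (the grade level, the trip, the dates) is invented for illustration and should be swapped for local specifics.

```python
# Illustrative demo prompts only; all concrete details are invented.

# Prompt A: vague -- expect generic output.
prompt_a = "Write a note to parents about the field trip."

# Prompt B: context and constraints -- expect a sharper, usable draft.
prompt_b = (
    "Write a 150-word note to parents of 4th graders about the "
    "May 12 science-museum field trip. Mention the permission-slip "
    "deadline of May 5 and keep the tone warm and plain."
)

# Prompt C: verification framing -- expect safer, checkable output.
prompt_c = (
    prompt_b
    + " Cite a source for any factual claim, and end with a short "
    "confidence note listing anything I should verify myself."
)
```

Running the three in sequence makes the lesson visible: more context yields better drafts, and an explicit verification requirement changes how the answer is framed.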
Norms and Safeguards
A stable baseline depends on predictable guardrails.
Privacy First
Do not enter identifiable student data or sensitive records into unapproved tools. Default to caution.
Verification Habit
If it matters, verify it. Names, dates, policies, quotes, citations, and numbers require checking.
Professional Responsibility
AI can assist drafting, but accountability stays with the educator or staff member who uses it.

