Prompt Engineering as Academic Literacy
Audience: Faculty, instructional designers, advanced staff users
Format: Hands-on workshop (2–3 hours)
Prompting is academic literacy, not a parlor trick.
This workshop teaches staff to write prompts as deliberate instructions that carry intent, context, and constraints. Participants practice iterative refinement, evaluate outputs for accuracy and bias, and apply discipline-aware patterns that deliver immediate value without crossing policy lines.
Desired Outcomes
Skill, judgment, and discipline-aware transfer.
Prompt Structure Mastery
Participants can write prompts that clearly communicate intent, context, constraints, and desired evidence.
Iteration as a Method
Participants refine prompts through deliberate cycles, improving clarity, relevance, and verifiability.
Evaluation Literacy
Participants can detect uncertainty, check for bias, and verify claims before using outputs professionally.
2–3 Hour Workshop Agenda
Hands-on throughout, minimal lecture.
- Prompting is instruction design: it encodes purpose, audience, and evidence.
- Prompts should be auditable: another staff member should be able to see the intent.
- Focus: “thinking on the page,” not chasing clever shortcuts.
Intent: What are we trying to produce, and for whom (student, parent, colleague, designer)?
Context: What information must be included (standards, text excerpts, constraints, audience needs)?
Constraints: Tone, length, reading level, allowed sources, formatting, and non-negotiables.
Evidence: How the output should justify itself (steps, citations required, “show assumptions,” compare alternatives).
- Quick build: participants convert a vague prompt into a constrained one using the framework (a sample conversion follows this list).
- Share-out: identify which constraint improved the output most.
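A sample conversion, using a placeholder topic and audience rather than a real course:
Before (vague): “Explain photosynthesis.”
After (constrained): “You are a high school biology teacher. Write a 200-word explanation of photosynthesis for 9th-grade students reading at grade level. Use one everyday analogy, name two common misconceptions, and end with a list of any claims a teacher should verify before sharing this with students.”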
- Round 1: Write a baseline prompt (simple, direct).
- Round 2: Add constraints (format, audience, length, tone).
- Round 3: Add evaluation requirements (flags, verification, bias check).
- Round 4: Add discipline context (content structure, reasoning norms). A worked progression follows this list.
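One illustrative progression through the four rounds (topic, grade level, and word count are placeholders):
Round 1: “Summarize the causes of the French Revolution.”
Round 2: “Summarize the causes of the French Revolution in 150 words, as a bulleted list, in neutral language for 10th-grade students.”
Round 3: Add: “Flag any dates, names, or figures that should be verified, and note any perspectives the summary leaves out.”
Round 4: Add: “Follow the norms of historical writing: distinguish long-term conditions from immediate triggers, and signal where historians disagree.”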
- Accuracy scan: highlight claims that need verification (names, dates, numbers, policies, quotes).
- Bias scan: look for overgeneralizations, missing perspectives, deficit framing, stereotype risk.
- Uncertainty scan: note where the model sounds certain but lacks evidence.
“List the claims that require verification, and label each as low, medium, or high risk.”
“Identify assumptions, missing perspectives, and any language that could reinforce stereotypes.”
Explanation: “Explain X for a specific audience; include examples and common misconceptions.”
Critique: “Identify strengths, weaknesses, and revision moves, then provide a revised version.”
Comparison: “Compare A and B using a fixed lens, then justify which is best for a defined purpose.”
Discipline-aware: “Use the reasoning norms of this field (evidence types, argument structure, uncertainty).”
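A filled-in instance of the critique pattern, with the excerpt and rubric criteria as placeholders:
“Using only the pasted essay excerpt and these three criteria (thesis clarity, use of evidence, organization), identify strengths, weaknesses, and specific revision moves. Then provide a revised version of the excerpt. Do not add facts that are not in the original, and list any assumptions you made about the assignment.”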
- Participants build 3 prompts: one explanation, one critique, one comparison.
- Each prompt includes: intent, context, constraints, evaluation requirements.
- Share-out: swap prompts and test with a partner, then revise once.
Core Topics (Expanded)
Prompts as structured thinking and disciplined evaluation.
Intent, Context, Constraints
Prompts encode purpose, audience, and the rules of the task. Constraints make evaluation possible and reduce drift.
Iteration and Refinement
Prompting improves through cycles: change one variable at a time, measure output quality, then refine.
Evaluating for Accuracy and Bias
Participants learn verification habits and bias checks so outputs are not treated as automatically safe.
Prompting for Explanation, Critique, Comparison
Higher-order academic moves become repeatable prompt patterns, built for clarity, not shortcuts.
Discipline-Aware Prompting
Prompts reflect the reasoning norms of a field: evidence types, uncertainty language, and argument structure.
Policy-Aligned Value
Practical value does not require risky inputs. Participants learn safe ways to provide context and the red lines they must not cross.
Prompt Studio
Reusable frameworks and add-ons.
Base Prompt Template
Role: You are a (discipline role).
Task: Produce (artifact).
Audience: (who).
Constraints: (length, tone, reading level, format).
Context: (paste allowed info).
Evaluation: list claims to verify and note assumptions.
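A filled-in version of the template, with illustrative details rather than a real course:
Role: You are a middle school science teacher.
Task: Produce a one-page study guide on plate tectonics.
Audience: 8th-grade students reading at grade level.
Constraints: Under 400 words, plain language, short headed sections, define any technical term on first use.
Context: [paste the unit objectives and the approved textbook excerpt]
Evaluation: List the claims a teacher should verify and note the assumptions you made.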
Refinement Loop
1) Identify what failed (clarity, structure, accuracy, tone).
2) Add one constraint to fix it.
3) Require the model to show its assumptions.
4) Repeat once, then stop and verify externally (an example pass follows).
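One pass through the loop might look like this, with an illustrative failure and fix:
1) Failure: the draft reads above grade level and buries the main idea.
2) Added constraint: “Write at a grade-6 reading level and open each section with its main idea.”
3) Assumption check: “List what you assumed students already know.”
4) Verify externally: check names, dates, and figures against the approved source before using the output.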
Evaluation Add-Ons
Accuracy: “Mark what you are uncertain about.”
Bias: “Flag stereotype risk or missing perspectives.”
Comparison: “Use a fixed rubric and justify trade-offs.”
Discipline-Aware Prompting Examples
Patterns that transfer across fields.
Prompt: “Compare (Text A) and (Text B) using the lens of (theme). Provide a thesis, 3 evidence points from each text (quote placeholders if not provided), and one counterclaim. Then list what would need verification if quotes are not provided.”
Why it works: it forces structure rather than vibes, and it makes evidence requirements explicit.
Other patterns transfer for the same reasons: prompts that produce instruction-ready scaffolds and insert uncertainty signals; prompts that apply a rubric lens and require an improved artifact; and prompts that forbid new facts, preventing hallucinated details.

