AI-Integrated Curriculum Design
This pathway pairs each core lesson on AI-integrated curriculum design with a “.1” mastery studio. Core lessons introduce structures, ethics, and workflows; mastery studios use real artifacts, critique, and student-work scenarios to demonstrate alignment with the Aletheian Design Theory of Learning (ADTL).
Learners connect AI use to ADTL pillars, clarifying where AI can support Cognitive Design, Cultural Connection, and Aesthetic Experience—and where it can undermine them if misused.
- Articulate a working definition of AI for curriculum design (beyond “answer generator”).
- Locate AI inside ADTL: how it can scaffold each pillar, and what the risks are.
- Identify core ethical boundaries for AI in education (equity, transparency, data, authorship).
- Short case vignettes of AI use in planning, feedback, and differentiation.
- Whole-group sort: “Aligned with ADTL,” “Neutral,” or “In conflict with ADTL.”
- Mini-lecture and diagram: “AI as co-designer” vs “AI as replacement.”
- Small-group discussion of one case per ADTL pillar (Cognitive, Cultural, Aesthetic).
- Draft a short “AI stance statement” for their own context (what AI will and will not be used for).
- Completed stance statement referencing at least two ADTL pillars.
- Accurate classification of AI scenarios as aligned, neutral, or conflicting with ADTL.
Learners evaluate detailed AI-in-the-classroom scenarios and co-design a practical ethics checklist to guide their work.
- Analyze 2–3 rich scenarios of AI use (e.g., grading, lesson generation, student-facing tools).
- Identify benefits, risks, and where ADTL pillars are supported or threatened.
- Decide how each scenario should be modified—or rejected—to align with ADTL.
- Protocol-based discussion centered on equity, transparency, authorship, and student agency.
- Compare approaches across grade levels and content areas.
- Participant can clearly argue for or against a scenario with ADTL-based reasoning.
- Participant co-constructs an “AI Ethics in Curriculum Design” checklist for their context (see the sketch after this list).
- Participant commits to at least two specific guardrails for their own use of AI.
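As one illustration only, a co-constructed checklist can be captured as structured data so a team can store, reuse, and revise it. The sketch below is a minimal Python example; the categories, wording, and the `unreviewed_categories` helper are hypothetical, not part of ADTL.

```python
# Hypothetical sketch of an "AI Ethics in Curriculum Design" checklist as data,
# so a team can store, reuse, and revise it. Item wording is illustrative only.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    category: str   # e.g., equity, transparency, data, authorship
    question: str   # what the designer asks before using AI
    guardrail: str  # the commitment made if the answer raises concern

ETHICS_CHECKLIST = [
    ChecklistItem("transparency",
                  "Will students and colleagues know where AI contributed?",
                  "Document AI contributions alongside the final artifact."),
    ChecklistItem("equity",
                  "Could this use of AI lower expectations for any group of students?",
                  "Review outputs against the original cognitive demand before use."),
    ChecklistItem("authorship",
                  "Is the educator still the final author of the design?",
                  "AI drafts are revised, never adopted verbatim."),
]

def unreviewed_categories(reviewed: set[str]) -> list[str]:
    """Return checklist categories a scenario has not yet been screened against."""
    return [item.category for item in ETHICS_CHECKLIST if item.category not in reviewed]
```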
Learners map their existing planning process and intentionally insert AI steps that honor ADTL rather than flatten it.
- Visualize current curriculum planning workflow (from standards to lesson artifacts).
- Identify where AI can provide draft options, examples, or variations.
- Differentiate between AI-supported ideation and AI-written final products.
- Quick mapping of “as-is” planning process (individually or by team).
- Mini-lesson on planning checkpoints: standards, ADTL alignment, tasks, visuals, assessment.
- Demonstration of AI as assistant at specific checkpoints (brainstorming tasks, generating exemplars, etc.).
- Overlay the planning map with “AI touchpoints” labeled as draft, refine, or check.
- Draft 2–3 prompts they could realistically re-use in their planning process.
- Completed “AI-Infused Planning Map” for one course or unit.
- Reusable prompt templates aligned to planning checkpoints.
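As a sketch of what reusable prompt templates tied to planning checkpoints could look like, the Python snippet below keeps one template per checkpoint with placeholders the designer fills in. The checkpoint names echo the mini-lesson above; the template wording and the `fill_prompt` helper are hypothetical, not prescribed language.

```python
# Illustrative prompt templates keyed to planning checkpoints (standards,
# tasks, assessment). Wording is a hypothetical example, not required phrasing.
PLANNING_PROMPTS = {
    "standards": (
        "You are assisting a teacher, not replacing their judgment. "
        "Unpack the standard '{standard}' into 3-4 student-friendly learning outcomes "
        "for grade {grade}. Return outcomes only; the teacher will select and revise."
    ),
    "tasks": (
        "Brainstorm 5 task ideas for the outcome '{outcome}' that keep cognitive demand high "
        "and connect to students' cultures and identities. Label each idea 'draft only'."
    ),
    "assessment": (
        "Suggest 3 ways students could show evidence of '{outcome}', "
        "including one option with an aesthetic or creative product."
    ),
}

def fill_prompt(checkpoint: str, **details: str) -> str:
    """Fill a checkpoint template with the teacher's own standard, grade, or outcome."""
    return PLANNING_PROMPTS[checkpoint].format(**details)

# Example use during planning:
print(fill_prompt("tasks", outcome="analyze how authors develop a theme"))
```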
Learners present their redesigned workflows and use peer critique to ensure AI supports—not dilutes—design quality and ethics.
- Participants walk peers through their AI-Infused Planning Map step by step.
- They highlight where AI contributes drafts, checks alignment, or widens option sets.
- They explain how human judgment and ADTL remain central at each stage.
- Peers identify potential bottlenecks, ethical risks, or overreliance points.
- Group suggests alternative AI touchpoints or rebalancing of human vs AI work.
- Participant can justify each AI touchpoint using ADTL language.
- Participant revises workflow based on critique to improve clarity and safeguards.
- Participant identifies how the workflow will be documented for future use or teammates.
Learners practice designing lessons where AI assists with options and drafts, while the educator curates, rewrites, and aligns to ADTL and local context.
- Translate standards and outcomes into ADTL-aligned design prompts for AI.
- Generate multiple task and activity options using AI.
- Evaluate AI-generated lesson ideas for quality, equity, and feasibility.
- Model live: starting with a standard, crafting a structured prompt, and critiquing the AI output.
- Small-group work: each group selects a standard and designs their own prompt sequence.
- Comparison of AI outputs before and after prompt refinement.
- Draft 1–2 “lesson co-design” prompt templates that embed ADTL language (pillars, the Aletheian Learning Cycle, student identities).
- Highlight AI-generated elements that need revision vs. those worth keeping.
- Set of AI-generated lesson ideas marked up with keep / tweak / discard annotations.
- Prompt templates clearly referencing Cognitive Design, Cultural Connection, and Aesthetic Experience.
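A minimal sketch of a co-design prompt template that names the three pillars, together with the keep / tweak / discard markup, might look like the following; the prompt wording and annotation format are assumptions for illustration, not a required form.

```python
# Hypothetical "lesson co-design" prompt template that embeds ADTL pillar names.
CO_DESIGN_PROMPT = (
    "Standard: {standard}\n"
    "Students: {student_context}\n"
    "Propose 4 activity options for this lesson. For each option, note how it "
    "engages Cognitive Design, Cultural Connection, and Aesthetic Experience. "
    "These are drafts for the teacher to critique and rewrite, not a finished lesson."
)

# Illustrative keep / tweak / discard markup applied to AI-generated ideas.
annotations = {
    "idea_1": ("keep", "clear cultural connection; cognitive demand is appropriate"),
    "idea_2": ("tweak", "rewrite so students construct the model themselves"),
    "idea_3": ("discard", "generic worksheet with no aesthetic experience"),
}
```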
Learners share AI-assisted lesson drafts, then revise them through structured critique, focusing on student experience and evidence of learning.
- Participants present a lesson draft where AI contributed ideas or language.
- They show original AI output and their revised, ADTL-aligned version.
- They explain specific changes made to serve their real students.
- Peers use an ADTL-based protocol (Cognitive / Cultural / Aesthetic) to respond.
- Group identifies where AI defaulted to generic, biased, or shallow choices—and how they were corrected.
- Participant can name where AI’s suggestions were adopted, adapted, or rejected—and why.
- Participant can anticipate likely student responses and adjust the design accordingly.
- Participant commits to a practice of documenting AI contributions for transparency.
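One lightweight way to honor the transparency commitment is a running log of AI contributions. The sketch below is a hypothetical format; the field names and example entry are assumptions, not a mandated record.

```python
# Illustrative log entry for documenting AI contributions to a lesson,
# supporting the transparency commitment above. Field names are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIContribution:
    lesson: str
    prompt_summary: str   # what was asked of the AI
    contribution: str     # what the AI output contributed
    decision: str         # "adopted", "adapted", or "rejected"
    rationale: str        # why, in ADTL terms
    logged_on: date = field(default_factory=date.today)

log = [
    AIContribution(
        lesson="Theme analysis, day 2",
        prompt_summary="Brainstorm discussion questions for the anchor text",
        contribution="Two question stems reused after rewriting for student voice",
        decision="adapted",
        rationale="Original wording was generic; revised to reflect students' communities",
    )
]
```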
Learners explore how AI can assist in rubric creation, exemplar generation, and feedback drafting, without outsourcing judgment or inflating grades.
- Use AI to draft rubrics aligned to clearly stated outcomes and ADTL pillars.
- Generate sample responses and exemplars to clarify expectations.
- Design feedback prompts where AI drafts language that the teacher then personalizes and verifies.
- Demonstration: using AI to create a first-draft rubric from an assignment description.
- Group critique of rubric language for clarity, bias, and ADTL alignment.
- Model: AI-drafted feedback comments revised by the teacher for voice and precision.
- Participants bring one assignment and generate a rubric draft using AI.
- They mark rubric rows and descriptors that need revision or expansion.
- Revised rubric that clearly articulates quality and aligns with ADTL.
- Set of feedback sentence starters co-written with AI and adapted for their context.
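For illustration, a first-draft rubric prompt and a few feedback sentence starters might look like the sketch below; the wording is hypothetical, and the teacher still personalizes and verifies everything before it reaches students.

```python
# Hypothetical prompt for a first-draft rubric from an assignment description,
# plus a few feedback sentence starters of the kind co-written with AI.
RUBRIC_DRAFT_PROMPT = (
    "Assignment: {assignment_description}\n"
    "Outcomes: {outcomes}\n"
    "Draft a 4-level analytic rubric (emerging, developing, proficient, exemplary). "
    "Describe observable qualities of student work, avoid vague terms like 'good' or 'some', "
    "and flag any criterion you are unsure about so the teacher can rewrite it."
)

# Sentence starters the teacher personalizes and verifies before sending (illustrative).
FEEDBACK_STARTERS = [
    "The strongest part of your {component} is ... because ...",
    "Your evidence for ... would be more convincing if ...",
    "Next time, try ... ; here is one place in your work where it would help: ...",
]
```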
Learners apply their AI-generated rubrics and feedback templates to authentic or anonymized student work, then refine both tools and protocols based on what they notice.
- Participants use their rubric to score 2–3 pieces of student work (real or sample).
- They generate AI-supported feedback drafts, then edit for accuracy and tone.
- They document where the rubric or AI feedback fell short.
- Compare scores and feedback across participants using the same work sample (see the agreement sketch after this list).
- Discuss where AI over-smoothed feedback or missed important nuances.
- Participant refines rubric language to better match actual student work.
- Participant adjusts AI feedback prompts to yield more precise, actionable drafts.
- Participant defines clear rules for when AI may or may not be used in feedback.
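When several participants score the same work sample, a quick agreement check can anchor the calibration discussion referenced above. The Python sketch below simply reports the mean and spread per criterion; the criteria, score scale, and discussion threshold are illustrative assumptions.

```python
# Illustrative calibration check: compare rubric scores given by several
# participants to the same piece of student work, criterion by criterion.
from statistics import mean

# scores[criterion] = list of scores, one per participant (1-4 scale assumed)
scores = {
    "evidence": [3, 3, 2],
    "organization": [4, 3, 3],
    "voice": [2, 4, 3],
}

for criterion, values in scores.items():
    spread = max(values) - min(values)
    flag = "  <- discuss" if spread >= 2 else ""
    print(f"{criterion:13s} mean={mean(values):.1f} spread={spread}{flag}")
```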
Learners explore how AI can help generate leveled texts, scaffolds, and extensions while avoiding deficit framing or “easy work” traps for certain students.
- Connect AI uses to Multi-Tiered System of Supports (MTSS) tiers (universal, targeted, intensive) without labeling students publicly.
- Use AI to generate variations in reading level, modality, or support, not in intellectual expectation.
- Identify language patterns that risk bias or deficit assumptions.
- Examples of AI-generated differentiated passages and tasks for a single objective.
- Whole-group analysis: which versions preserve rigor, which slip into “busywork.”
- Mini-lesson on designing scaffolds that honor student agency and identity.
- Choose one upcoming lesson and design 2–3 AI-supported differentiated pathways.
- Label each pathway with MTSS tier and ADTL pillar focus.
- Drafted differentiated resources (e.g., scaffolded questions, alternate texts, graphic organizers).
- Reflection on how each pathway preserves cognitive demand while adjusting support.
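To keep the reflection on cognitive demand honest, each pathway can be recorded with the same objective and demand descriptor while only the supports vary. The sketch below is illustrative; the objective, supports, and field names are assumptions.

```python
# Illustrative record of AI-supported differentiated pathways for one lesson.
# The objective and cognitive demand are held constant; only supports vary.
OBJECTIVE = "Explain how the author's word choice shapes tone"
DEMAND = "analysis supported by text evidence"

pathways = [
    {"mtss_tier": "universal", "adtl_focus": "Cognitive Design",
     "objective": OBJECTIVE, "demand": DEMAND,
     "supports": ["annotated mentor text", "partner discussion protocol"]},
    {"mtss_tier": "targeted", "adtl_focus": "Cultural Connection",
     "objective": OBJECTIVE, "demand": DEMAND,
     "supports": ["AI-generated text at an accessible reading level",
                  "sentence frames for citing evidence"]},
    {"mtss_tier": "intensive", "adtl_focus": "Aesthetic Experience",
     "objective": OBJECTIVE, "demand": DEMAND,
     "supports": ["audio version of the text", "graphic organizer",
                  "option to respond through an annotated visual"]},
]

# Check that pathways differ only in supports, never in intellectual expectation.
assert len({(p["objective"], p["demand"]) for p in pathways}) == 1
```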
Learners bring AI-assisted differentiation plans and use critique to test for rigor, bias, and student dignity, revising resources and protocols accordingly.
- Participants present a differentiation plan for one lesson or mini-unit.
- They share AI-generated resources alongside their own edits and annotations.
- They identify which students might receive which pathways and why.
- Peers analyze for rigor, representation, and potential stigmatizing patterns.
- Group suggests additional scaffolds or extensions that could be AI-supported.
- Participant revises at least one resource to strengthen rigor or inclusivity.
- Participant defines clear rules for how students are offered—not assigned—supports.
- Participant names specific data sources that will inform when to adjust pathways.
Learners design a short “learning campaign” (2–3 connected lessons or one mini-unit) where AI is thoughtfully integrated across planning, materials, and assessment.
- Plan a multi-lesson sequence using the Aletheian Learning Cycle.
- Identify where AI supports design, differentiation, and feedback within the campaign.
- Align all AI use with previously established ethics and ADTL pillars.
- Review of key tools from previous lessons (workflow maps, prompt templates, ethics checklist).
- Individual or team planning time with facilitator and AI support on call.
- Early peer sharing of campaign sketches for quick feedback.
- Design a 2–3 lesson AI-integrated campaign with clear outcomes and ADTL annotations.
- Mark where AI will be used, what it will produce, and how it will be checked (see the annotation sketch after this list).
- Working draft of an AI-integrated Aletheian Learning Campaign.
- Notes on anticipated student experiences and evidence of learning.
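The annotation of AI use referenced in the list above could be as simple as one record per touchpoint naming what AI produces and how the educator checks it. The sketch below is hypothetical, including the lesson contents and field names.

```python
# Illustrative annotations for AI touchpoints in a 3-lesson campaign:
# where AI is used, what it produces, and how the educator checks it.
campaign_ai_touchpoints = [
    {"lesson": 1,
     "ai_use": "brainstorm hook activities tied to students' communities",
     "produces": "draft list of hooks",
     "checked_by": "teacher selects and rewrites; checks Cultural Connection"},
    {"lesson": 2,
     "ai_use": "generate scaffolded question sets at two support levels",
     "produces": "draft question sets",
     "checked_by": "teacher verifies equal cognitive demand across versions"},
    {"lesson": 3,
     "ai_use": "draft feedback sentence starters for the performance task",
     "produces": "draft feedback language",
     "checked_by": "teacher personalizes and verifies against the rubric"},
]

# Quick completeness check: every touchpoint names a product and a human check.
missing = [t["lesson"] for t in campaign_ai_touchpoints
           if not t["produces"] or not t["checked_by"]]
assert not missing, f"Touchpoints missing product or check: {missing}"
```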
Learners present their AI-integrated campaigns as portfolio-ready artifacts, including sample prompts, AI outputs, revisions, and predicted student evidence of learning.
- Participants share campaign maps, key lesson artifacts, and AI interaction examples.
- They narrate one “student’s journey” through the campaign, including differentiated supports.
- They explain how they will collect and interpret student work as design evidence.
- Peers offer feedback on coherence, ethics, and the visibility of ADTL pillars.
- Group identifies which parts of each campaign could be added to an Aletheia Design Portfolio.
- Participant can clearly trace AI’s role through the campaign without it becoming the focal point.
- Participant can show how student work will reveal strengths and gaps in both design and AI use.
- Participant refines at least one campaign element based on critique and documents the change.