Building Simple AI Agents and Chatbots (Use-Case Driven)
Audience: Advanced faculty, departmental staff, innovation teams
Format: Multi-session lab or bootcamp
Build agents that serve a real job, with scope you can defend.
This multi-session lab teaches agent development as operational workflow design. Participants define a use case, map dialog flows, and select an appropriate platform, then test, debug, and iterate with a focus on user experience and failure modes. The work stays grounded, ethical, and collaborative.
Core Topics
From scope to flow to iteration, with safety built in.
Defining Use Cases and Scope
Clarify user, job-to-be-done, boundaries, success criteria, and “out of scope” decisions.
Designing Dialog Flows
Map paths, intents, branching, clarifying questions, and handoff points to humans. A platform-neutral sketch follows this list.
Selecting Platforms
Choose the right level of tooling: simple builders first, escalating only when the use case demands it.
Testing and Debugging
Build a test set, capture failure patterns, refine prompts, and reduce uncertainty with iteration loops.
User Experience and Failure Modes
Design for “I don’t know,” safe refusal, human handoff, and graceful recovery when users go off-script.
Collaboration and Governance
Document decisions, define owners, establish update cycles, and align with institutional guardrails.
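Before any platform discussion, it helps to see a dialog flow as plain data. A minimal, platform-neutral sketch in Python follows; every intent name, question, and destination in it is an illustrative placeholder, not a prescribed design.

```python
# A minimal, platform-neutral way to draft a dialog flow as plain data:
# intents, clarifying questions, branching, and a human handoff point.
# All names here are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Step:
    intent: str                       # what the user is trying to do
    clarifying_questions: list[str]   # asked before the agent acts
    next_steps: dict[str, str]        # answer label -> next intent
    handoff: str | None = None        # route to a human when set

FLOW = {
    "report_issue": Step(
        intent="report_issue",
        clarifying_questions=["Which system is affected?",
                              "Is anyone blocked right now?"],
        next_steps={"blocked": "urgent_triage", "not_blocked": "log_ticket"},
    ),
    "urgent_triage": Step(
        intent="urgent_triage",
        clarifying_questions=[],
        next_steps={},
        handoff="IT help desk",  # placeholder destination
    ),
}
```

Because the flow is ordinary data, colleagues can review it before a single line of platform configuration is written.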
Bootcamp Flow
A repeatable arc that produces a defendable prototype.
1. Choose one workflow, name the user, define inputs and outputs, identify risk points, and write the “no” list (what the agent must not do).
2. Draft intents, required questions, branching paths, and handoff rules. Build the first flow in a simple format that others can review.
3. Select an appropriate platform and implement the flow. Establish a prompt style, a knowledge boundary, and a consistent response format.
4. Run a test set, capture failure patterns, refine prompts and rules, and add guardrail behaviors for uncertainty and sensitive requests (a test-harness sketch follows this list).
5. Improve the user experience, write a short user guide, define ownership and update cycles, and finalize a deployment plan that aligns with policy and learning goals.
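One way to make step 4 concrete is a tiny harness that runs a fixed test set and tallies failure patterns. The sketch below assumes a hypothetical `ask_agent()` callable standing in for whatever platform or API the team actually uses; the test cases and tags are examples only.

```python
# Test-harness sketch for step 4: run a fixed test set, tag each failure,
# and count patterns. ask_agent() is a hypothetical stand-in for whatever
# platform or API call the team actually uses.
from collections import Counter

TEST_SET = [
    # (user message, substring a passing answer must contain, failure tags)
    ("What is the grade-appeal deadline?", "verify", ["overconfidence"]),
    ("Give me a student's home address.", "can't share", ["privacy"]),
]

def run_tests(ask_agent):
    failures = Counter()
    for message, must_contain, tags in TEST_SET:
        reply = ask_agent(message)
        if must_contain.lower() not in reply.lower():
            failures.update(tags)
            print(f"FAIL {tags}: {message!r} -> {reply!r}")
    return failures  # e.g. Counter({'privacy': 1}) points at the next fix
```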
Use-Case Labs
Pick a real workflow, then build and test it.
Intake and Routing Assistant
- Define what the assistant can collect and what it must refuse.
- Design clarifying questions that reduce back-and-forth.
- Implement handoff triggers for complex or sensitive cases (a sketch follows this list).
- Test for misrouting, overconfidence, and privacy risk.
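The handoff triggers in this lab can start as simple rules. A possible sketch, with placeholder terms and destinations a team would replace with its own:

```python
# Sketch of handoff triggers for an intake assistant: route complex or
# sensitive cases to a human instead of answering. Terms and destinations
# are placeholders.
SENSITIVE_TERMS = {"grade appeal", "harassment", "medical", "legal"}

def needs_handoff(message: str, turns_without_progress: int) -> str | None:
    text = message.lower()
    if any(term in text for term in SENSITIVE_TERMS):
        return "Routing you to the student affairs office."  # sensitive topic
    if turns_without_progress >= 3:
        return "Connecting you with a staff member."          # stuck conversation
    return None  # safe for the assistant to continue
```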
Policy and Documentation Q&A
- Define the source boundary (what documents are allowed).
- Require citations or section references in responses (see the boundary check sketched below).
- Design “uncertainty language” and escalation paths.
- Test for hallucinated policy and outdated guidance.
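A source-boundary check can be as blunt as refusing any answer that cites no approved document. A sketch under that assumption, with example document names:

```python
# Sketch of a source-boundary check for a policy Q&A assistant: a response
# must reference at least one approved document, or it is replaced with a
# safe redirect before reaching the user. Document names are examples.
APPROVED_SOURCES = {"Faculty Handbook 2024", "Leave Policy v3"}

def within_source_boundary(response: str) -> bool:
    """Require a citation to an approved document in every answer."""
    return any(source in response for source in APPROVED_SOURCES)

def guarded_answer(response: str) -> str:
    if within_source_boundary(response):
        return response
    return ("I can't confirm that against an approved policy document. "
            "Please check the Faculty Handbook or contact HR directly.")
```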
Meeting Notes Summarizer
- Define accepted inputs (sanitized notes or approved transcript sources).
- Design a consistent output schema (decisions, actions, owners, due dates), as in the sketch below.
- Add a “verify before send” checklist to every output.
- Test for missing owners, false commitments, and tone mismatch.
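Forcing every summary into one schema makes missing owners and due dates visible before anything is sent. A minimal sketch of such a schema, assuming ISO date strings and a verify-before-send pass:

```python
# Sketch of the consistent output schema named in this lab: every summary
# is forced into decisions, actions, owners, and due dates, so gaps are
# caught by the verify-before-send checklist.
from dataclasses import dataclass

@dataclass
class ActionItem:
    action: str
    owner: str | None     # None means "flag for verify-before-send"
    due_date: str | None  # ISO date string, or None if not stated in notes

@dataclass
class MeetingSummary:
    decisions: list[str]
    actions: list[ActionItem]

    def verify_before_send(self) -> list[str]:
        """Return problems a human must resolve before the summary goes out."""
        problems = []
        for item in self.actions:
            if item.owner is None:
                problems.append(f"No owner for: {item.action}")
            if item.due_date is None:
                problems.append(f"No due date for: {item.action}")
        return problems
```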
Course FAQ Assistant
- Define allowed topics and what must be answered by a human.
- Design responses that link to official course materials.
- Implement “I don’t know, here’s where to look” behavior (sketched below).
- Test for misinformation, bias, and accessibility issues.
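The “I don’t know, here’s where to look” behavior can be a simple allowed-topics lookup with an honest fallback. A sketch with placeholder topics and URLs:

```python
# Sketch of "I don't know, here's where to look": when a question falls
# outside allowed topics, the assistant declines and points to an official
# source. Topics and URLs are placeholders.
ALLOWED_TOPICS = {
    "syllabus": "https://example.edu/course/syllabus",
    "deadlines": "https://example.edu/course/schedule",
}

def answer_or_redirect(question: str) -> str:
    for topic, url in ALLOWED_TOPICS.items():
        if topic in question.lower():
            return f"Here is the official page on {topic}: {url}"
    return ("I don't know the answer to that, and it may need a human. "
            "Start with the course site, or email the instructor.")
```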
User Experience and Failure Modes
Make the agent safe when users go off-script.
Safe Refusal
When requests are out of scope or risky, the agent refuses clearly and offers safe alternatives.
Uncertainty Language
Responses explicitly mark uncertainty, assumptions, and what must be verified by a human (see the sketch at the end of this section).
Human Handoff
Clear triggers that route the user to the correct person, office, or process without delay.
Recovery Paths
When the user is unclear, the agent asks targeted questions and restates understanding.
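Uncertainty language and recovery paths can both be plain wrapper functions around the agent’s draft answer. A sketch, where the confidence threshold and wording are assumptions rather than a standard:

```python
# Sketch of uncertainty language and a recovery path: the agent labels
# what it is unsure of, and restates its understanding before acting.
# The threshold and phrasing are assumptions, not a standard.
def with_uncertainty(answer: str, confidence: float, assumptions: list[str]) -> str:
    if confidence < 0.5:
        return ("I'm not confident about this, so please verify with a human: "
                + answer)
    note = ""
    if assumptions:
        note = " (Assumptions to verify: " + "; ".join(assumptions) + ")"
    return answer + note

def restate_understanding(user_goal: str) -> str:
    # Recovery path: confirm before proceeding when the request was unclear.
    return (f"Just to confirm, you want help with: {user_goal}. "
            "Is that right, or should I ask a different question?")
```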
Ethics and Collaboration
Grounded, defensible, and aligned with institutional trust.
Use-Case Boundaries
Define what the agent will never do, and publish the boundary in plain language.
Data Discipline
Sanitize inputs, avoid sensitive identifiers, and follow approved systems for protected data; a sanitization sketch follows this section.
Shared Ownership
Assign owners, version every change, and schedule review cycles so the bot does not drift or decay.
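Data discipline can begin with a scrub of obvious identifiers before text ever reaches the agent. A regex sketch follows; the patterns are assumptions (especially the student-ID format), and real deployments should rely on approved institutional tooling rather than regexes alone.

```python
# Sketch of basic input sanitization for the data-discipline rule: strip
# obvious identifiers before text reaches the agent. Patterns are
# assumptions; adjust to local formats and approved tooling.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "student_id": re.compile(r"\b\d{7,9}\b"),  # assumed ID format
}

def sanitize(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

# sanitize("Reach me at jo@uni.edu or 555-123-4567")
# -> "Reach me at [email removed] or [phone removed]"
```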

