AI Use Case Management in Practice — for SaaS Operators
A self-paced course for the people inside a SaaS company who own the 'should we / how do we use this AI?' decision — typically Heads of AI Governance, CISO/DPO-adjacent leads, Heads of Platform, COOs. Two-hour core plus optional deep-dives. Covers deployment models, registry design that piggybacks on existing TPRM/DPIA processes, disclosure and AI-content labelling, role-based enablement, surviving model churn, and the EU AI Act as it actually applies in 2026. Three interactives: a deployment-model selector, a use-case intake → registry-row generator, and an AI Act risk-tier classifier. Running case study: Loopwell — 'the AI-native operations platform,' allegedly.
Three foundational shifts before any tooling or paperwork: the unit of governance is the use case (not the model, not the vendor); internal AI, customer-facing AI, and AI-you-sell are three different risk surfaces; and most of the fear in the room is a mix of rational and folkloric, which calls for proportionate — not panicked — responses.
- 1.1 What 'an AI use case' actually is — and why 'we use AI' isn't one. The unit of governance is the use case, not the model or the vendor. How to slice a fuzzy initiative into 3–4 governable use cases. Introduces the running Loopwell case study.
- 1.2 Internal AI, customer-facing AI, AI you sell — three different risk surfaces. Where SaaS companies get burned by treating these as one. Which stakeholders, which contracts, and which controls differ across the three.
- 1.3 The fear map — rational fears, folklore, and proportionate responses. A catalogue of the most common executive and employee fears about AI: job loss, hallucination liability, IP leakage, 'the model trained on our data', shadow AI, vendor lock-in, regulator surprise. Which are real, which are folklore, and what the proportionate response to each looks like.
The spectrum from public consumer chat to in-house fine-tunes, with the data flow, contracts, and operating cost shape that each implies — plus an interactive selector that maps a use case to a deployment model and the new governance surface that agentic systems introduce.
- 2.1 The deployment spectrum + selector. Public SaaS API → enterprise tenant → hyperscaler-hosted → dedicated throughput → self-hosted open weights → in-house fine-tunes. For each model: data flow, contract terms, cost shape, and the team you need to operate it. Interactive: a deployment-model selector driven by data sensitivity, latency, reversibility, and customer constraints.
- 2.2 Agentic systems — when the model takes actions. Why agents change the governance surface. Tool permissions, MCP, and the blast-radius question. What 'human-in-the-loop' actually means once a model can hit your APIs.
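The selector in 2.1 can be sketched as a rule cascade that walks the spectrum from cheapest to heaviest and stops at the first fit. This is an illustrative assumption about how such a selector might work, not the course's actual interactive; the input names and thresholds are invented for the sketch.

```python
# Illustrative deployment-model selector (NOT the course's interactive).
# Inputs and decision thresholds are assumptions for the sketch.

def select_deployment_model(
    data_sensitivity: str,            # "public" | "internal" | "confidential" | "regulated"
    customer_requires_residency: bool,
    needs_low_latency: bool,
    must_be_reversible: bool,         # can we walk this choice back cheaply?
) -> str:
    """Walk the spectrum from cheapest to heaviest; stop at the first fit."""
    if data_sensitivity == "regulated" or customer_requires_residency:
        # Regulated data or contractual residency pushes toward
        # infrastructure you control end to end.
        return "self-hosted open weights"
    if data_sensitivity == "confidential":
        # Keep data inside an enterprise agreement with a training opt-out;
        # pay for dedicated throughput only when latency demands it.
        return "dedicated throughput" if needs_low_latency else "hyperscaler-hosted"
    if not must_be_reversible:
        # A long-lived commitment justifies the heavier enterprise contract.
        return "enterprise tenant"
    return "public SaaS API"
```

A two-line usage example: a reversible internal pilot on public data lands on the public API, while anything touching regulated data jumps straight to self-hosted.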
You already evaluate vendors, you already keep a RoPA, you already do DPIAs. A separate 'AI register' dies inside a year. The registry is a view over those existing artifacts plus a few AI-specific fields — and the intake form lives where the rest of your intake already lives.
- 3.1 Why standalone AI registers die — and what to do instead. The failure modes of bolt-on AI registers. How to express the registry as a view over TPRM, RoPA, and DPIA records, with only a handful of AI-specific fields added at intake.
- 3.2 Intake fields + the registry row. The ~12 fields that actually matter at intake (purpose, deployment model, data classes, decisioning role, human-in-the-loop point, training opt-out, AI Act tier, model+version lock, vendor-disappearance fallback, owner, review cadence, kill switch). Interactive: an intake form that produces a filled Loopwell registry row.
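The ~12 intake fields listed in 3.2 can be expressed as a single registry-row record. The field list comes from the lesson description above; the type annotations and field names are a plausible rendering, not the course's canonical schema.

```python
from dataclasses import dataclass

# One registry row per use case. Field names are an assumed rendering of
# the intake fields named in lesson 3.2, not an official schema.

@dataclass
class UseCaseRegistryRow:
    purpose: str                        # one sentence, in business terms
    deployment_model: str               # e.g. "enterprise tenant"
    data_classes: list[str]             # e.g. ["customer PII", "support tickets"]
    decisioning_role: str               # "assistive" | "recommending" | "deciding"
    human_in_the_loop_point: str        # where a person reviews or approves
    training_opt_out: bool              # vendor contractually barred from training?
    ai_act_tier: str                    # "prohibited" | "high" | "limited" | "minimal"
    model_version_lock: str             # the exact model+version in production
    vendor_disappearance_fallback: str  # what runs if the vendor vanishes
    owner: str                          # a named person, not a team
    review_cadence: str                 # e.g. "quarterly"
    kill_switch: str                    # how to turn it off, and who may
```

The point of the dataclass framing is that this is a row in a view, not a new system: each field either joins to an existing TPRM/RoPA/DPIA record or is one of the few genuinely AI-specific additions.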
The user-facing and data-facing duties that decide whether a use case is defensible: when users must know they're talking to AI, when content needs an AI label, what 'no training on your data' actually guarantees, and how to evidence the EU AI Act's AI-literacy duty without building a training-industrial complex.
- 4.1 Disclosure + AI-content labelling. EU AI Act Art. 50 transparency duties. C2PA and watermark provenance. Concrete UI patterns: when a chatbot must declare itself, what counts as a 'reasonably informed' user, and when generated content needs a label.
- 4.2 Data leakage, training opt-outs, IP, and the Art. 4 AI-literacy duty. Reading vendor terms honestly. What 'no training' actually guarantees, and the human-review carve-outs most enterprises miss. The AI-literacy obligation in proportion — what counts, what doesn't, and how to evidence it.
One lesson with expandable role cards. Per function (Engineering, SRE, IT/desktop, Customer Success, Sales, Finance, People Ops): the two or three high-leverage use cases, the deployment model that fits, the starter tools, and the pitfall specific to that role.
- 5.1 Role enablement matrix — per-function playbook. Expandable role cards for the seven functions where AI lands hardest in a SaaS company. Each card: high-leverage use cases, the deployment model that fits, a starter tool category, and the role-specific pitfall.
The EU AI Act in 2026 as it actually applies — written against fresh research rather than recalled — plus an interactive tier classifier that turns the regulation into a decision a SaaS operator can defend.
- 6.1 EU AI Act in 2026 + risk-tier classifier. Where the AI Act actually stands in 2026: which duties are live, which are still coming, what the AI Office has produced so far, and whether the timeline is moving. Risk tiers (prohibited / high-risk / limited-risk / minimal-risk) walked through Loopwell use cases. Interactive: a tier classifier.
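The four-tier walk-through in 6.1 can be sketched as a short decision function. The tiers are the ones named in the lesson; the example practice and domain lists are simplified assumptions for illustration, not legal advice and not the course's actual classifier.

```python
# Simplified AI Act tier sketch. The tier names come from lesson 6.1;
# the practice/domain lists below are illustrative, not exhaustive.

PROHIBITED_PRACTICES = {
    "social scoring",
    "untargeted facial-image scraping",
}

HIGH_RISK_DOMAINS = {
    "employment",         # hiring, promotion, termination decisions
    "credit",             # creditworthiness scoring
    "education",
    "essential services",
}

def classify_tier(practice: str, domain: str, interacts_with_humans: bool) -> str:
    """Return the risk tier for a use case, checked from strictest down."""
    if practice in PROHIBITED_PRACTICES:
        return "prohibited"
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"
    if interacts_with_humans:
        # Chatbots and generated content carry Art. 50 transparency duties.
        return "limited-risk"
    return "minimal-risk"
```

The order matters: a real classifier must check the strictest tier first, because a use case that is both human-facing and high-risk is high-risk, not limited-risk.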
A capstone where the learner takes six proposed Loopwell use cases through the whole pipeline — classify, register, pick deployment, draft disclosures, write the one-pager for the board.
- 7.1 Capstone — Loopwell's Q3 AI portfolio review. Six proposed AI use cases land on Priya's desk before the next board meeting. Classify them, register them, pick deployment models, draft the disclosures, and write the one-page board summary.
Optional lessons for learners who want the full picture: each deployment model in detail, full per-role chapters, surviving model churn with evals as the load-bearing artifact, the GPAI provider-vs-deployer trap, GDPR/NIS2/DORA/Data Act interaction, cross-border data residency, and the board/auditor/engineer defence cheat sheet.
- 8.1 Each deployment model in detail. Optional. Data-flow diagrams, typical contract clauses, cost shape, and operating-team requirements for each of the six deployment models.
- 8.2 Full role chapters — Engineering, SRE, IT, CS, Sales, Finance, People Ops. Optional. The full version of each per-role section, with worked Loopwell examples and tool comparisons.
- 8.3 Surviving model churn — evals and the two-vendor posture. Optional. Why last quarter's model choice is already not the best one, and how to design for substitutability without paying the migration cost every release. Building cheap eval sets that actually catch regressions.
- 8.4 GPAI: provider vs deployer — the trap door. Optional. Most SaaS companies are deployers. Fine-tuning, rebranding, or 'substantial modification' can flip you into provider obligations. How to spot the moment that happens.
- 8.5 GDPR, NIS2, DORA, Data Act — where AI governance is new vs. just GDPR with a hat on. Optional. The interaction map between the AI Act and the regulations a SaaS company already lives under.
- 8.6 Cross-border — serving non-EU from the EU and vice versa. Optional. Data residency, model residency, and transfer mechanisms. Where 'where does the inference happen?' becomes a contractual question.
- 8.7 Board / auditor / engineer defence cheat sheet. Optional. The three audiences you'll defend the AI program to, and the answer shape each one needs. A printable one-pager.
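The "cheap eval sets" of lesson 8.3 can be as small as a list of (prompt, check) pairs run against any candidate model before a swap. This is a minimal sketch in the spirit of the lesson; `call_model` stands in for whatever client you actually use, and the example prompts and checks are invented.

```python
from typing import Callable

# Minimal regression-eval harness (a sketch, not the course's tooling).
# Each entry: a prompt and a cheap predicate on the model's output.
EVAL_SET: list[tuple[str, Callable[[str], bool]]] = [
    ("Summarise this ticket: 'Login fails after SSO redirect.'",
     lambda out: "login" in out.lower()),
    ("Extract the total from: 'Total due: EUR 1,240.00'",
     lambda out: "1,240" in out or "1240" in out),
]

def run_evals(call_model: Callable[[str], str]) -> float:
    """Return the pass rate; gate any vendor or model swap on it not regressing."""
    passed = sum(1 for prompt, check in EVAL_SET if check(call_model(prompt)))
    return passed / len(EVAL_SET)
```

The design choice worth copying is the gate, not the harness: a model swap is allowed only when the candidate's pass rate is at least the incumbent's, which turns "is the new model fine?" from a debate into a number.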