Team & Transformation
How people actually learn what AI does and does not do: hands-on exercises with live models, surfacing the failure modes that fixed demos hide.
-
When: You’re teaching a team, a class, or a client how generative AI behaves and want them to internalize how prompt choices shape output, not just hear about it.
Use: Tight word-limit prompt exercises where participants compete or compare outputs under shrinking constraints. Force the group down to four- or five-word prompts and vote on which output best meets a target criterion. The constraint surfaces which words actually carry signal and which are theatrical (a minimal facilitation sketch follows this card).
Evidence: In a 5-day workshop with 16 teen learners, a card-game format that shrank prompt length from six to four words across rounds drove measurable shifts in AI literacy. Recognition of bias as harmful rose from 44% to 81% post-game. Learners discovered that “diverse” as a literal prompt word did not produce diversity, and articulated the gap publicly during peer-evaluation rounds.
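A minimal sketch of the shrinking word-limit round, assuming a plain Python facilitation script; the function names and sample prompts are illustrative, not taken from the workshop materials.

```python
def word_count(prompt: str) -> int:
    return len(prompt.split())

def run_round(submissions: dict[str, str], limit: int) -> dict[str, str]:
    """Keep only the prompts that fit the current word limit."""
    accepted = {}
    for participant, prompt in submissions.items():
        if word_count(prompt) <= limit:
            accepted[participant] = prompt
        else:
            print(f"{participant}: over the {limit}-word limit "
                  f"({word_count(prompt)} words)")
    return accepted

# Rounds shrink from six words to four, mirroring the card-game format.
submissions = {
    "A": "diverse group of happy scientists",
    "B": "portrait of a confident engineer smiling warmly",
}
for limit in (6, 5, 4):
    accepted = run_round(submissions, limit)
    print(f"Round at {limit} words -> accepted: {sorted(accepted)}")
```

The script only enforces the constraint; voting on which accepted output best meets the target criterion stays a human step in the room.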
-
When: You’re designing AI literacy training, onboarding, or workshops for any audience, technical or otherwise.
Use: Live AI tools in the exercises, not pre-built scenarios or scripted demos. The unpredictability of real model output is the curriculum. Surprise moments (a “diverse” prompt returning all white faces, a confidently wrong answer, a refusal) generate the discussion that fixed scenarios cannot replicate (see the sketch after the evidence below).
Evidence: Both transformational games at the University of Washington workshop used a live text-to-image generator (DeepAI) rather than pre-rendered images. Researchers explicitly credited the unpredictability of real outputs with creating the surprise moments that drove learner reasoning. The same logic generalizes to corporate AI training, where canned demos miss the failure modes participants will actually encounter.
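A hedged sketch of wiring an exercise to a live generator instead of canned images; the DeepAI text2img endpoint, auth header, and response field shown here are assumptions, so substitute whatever live tool your training actually uses.

```python
import requests

def generate_live_image(prompt: str, api_key: str) -> str:
    """Send the participant's prompt to a live model and return an image URL."""
    resp = requests.post(
        "https://api.deepai.org/api/text2img",  # assumed endpoint shape
        data={"text": prompt},
        headers={"api-key": api_key},           # assumed auth header
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get("output_url", "")    # assumed response field

# Each call hits the live model, so nothing is cached or scripted:
# the same prompt can return a different, sometimes surprising, result.
# url = generate_live_image("diverse team of doctors", api_key="YOUR_KEY")
```

The design point is that no result is pre-rendered: whatever the model returns in the room is what the group discusses.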