Play the bias,
see the bias.
Two card games that put a text-to-image AI generator in teenagers' hands. They discover for themselves how every word in a prompt can reproduce or challenge societal bias.
Kids use AI daily.
They don't see the bias.
Nearly half of teens use AI for companionship. But most have no idea how these systems reflect and amplify societal biases. Teaching AI ethics without boring everyone is genuinely hard.
AI literacy isn't just knowing how AI works. It's understanding what it reflects and amplifies: whose perspectives get encoded, whose get erased, and when that matters. Most educational approaches frame bias as purely negative, missing the nuance that some forms of bias are contextually necessary for AI to function at all.
AI ethics is abstract and politically charged, and it easily turns into a morality lecture. Young people disengage. The researchers hypothesized that transformational games (games intentionally designed to shift how players think) could make this learning stick through play, competition, and social interaction.
The games target four specific competencies: (1) recognize bias in GenAI outputs, (2) connect those biases to real-world social inequities, (3) understand that bias may sometimes be necessary, and (4) evaluate when bias becomes harmful. Goals 3 and 4 are the hard ones. They cover the nuanced territory most curricula avoid.
Compete, create,
deceive.
Three game mechanics: peer evaluation, constrained creativity, and social deduction. Each is designed to encourage a different kind of critical thinking about AI.
Pairs compete to write the shortest prompt that generates the most diverse image. The word limit shrinks each round, from six words to five to four, forcing players to wrestle with which words actually matter for inclusive representation.
- Round flow: Draw occupation card → write prompt → generate image → vote on diversity
- Constraint: Word limit shrinks each round (6 → 5 → 4 words)
- Competition: All four players vote on which pair's image is more diverse
- Learning goals: #1 Recognize bias · #2 Connect to real-world inequities
Four players collaboratively build a prompt word by word, but one is secretly trying to inject bias without getting caught. After the image generates, the group debates and votes on who the saboteur was. Think Mafia, but for AI ethics.
- Round flow: Each player adds 2 words → generate image → evaluate → vote on agent
- Deception: Secret agent subtly increases bias in word choices
- Deduction: Group analyzes the prompt to identify suspicious word choices
- Learning goals: #3 Bias can be necessary · #4 When bias is harmful
Voting on outputs forces group deliberation. Players can't just passively observe. They must articulate why one image is more diverse than another.
Word limits force precision. With only four words, every choice matters. Players discover that "diverse" as a prompt word doesn't actually produce diversity.
Playing the villain teaches differently than observing. Secret agents must think about what word cues trigger bias, a form of adversarial AI auditing.
Attitudes shifted.
Then, nuance emerged.
Pre/post questionnaires and 360 minutes of recorded gameplay revealed measurable shifts in how teens perceived AI bias, along with surprisingly sophisticated ethical reasoning.
"The quality was good, they only used one gender per prompt because that's usually the people that are thought to be in those positions." · P14, age 17 · After recognizing gender bias in doctor images
"When you ask for multiple races or a race, it gives all White people." · P9 · Discovering systematic racial bias in outputs
"It really depends on how impactful the bias is… if it's about what gender is mostly considered for a job, that's very harmful. But if the bias is about what color shoes best suit a style, that doesn't matter." · P12 · Distinguishing harmful from benign bias
"I feel like we're giving AI its own opinions, but it's a man-made thing, so it's not its own thing." · P13 · Questioning AI agency and human accountability
Participants didn't just label bias as "bad." They developed contextual reasoning, arguing that bias can be harmful when it reinforces stereotypes about jobs or races, but potentially helpful when it flags threats or moderates content. One learner pointed out that a bias toward charity "could help people." This dynamic, context-dependent understanding of bias is what the researchers were aiming for. It is not something prior approaches have consistently produced.
How to build games
that teach AI ethics.
What worked, what didn't, and what to steal for your own educational game designs.