When the AI lies,
players learn.
A mobile strategy game gives players enough information to act, but never enough to verify. The result is a different kind of understanding: one built through doubt, failure, and interpretation rather than transparency.
Transparency doesn't
guarantee understanding.
Explainable AI (XAI) research has spent years trying to crack open the black box. But showing users why a model made a decision doesn't reliably produce comprehension.
Techniques like LIME and DeepLIFT try to surface which inputs mattered most in a model's decision. The assumption is straightforward: if the explanation is clear enough, understanding follows. If not, make it clearer.
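To make that assumption concrete, the core move behind LIME can be sketched in a few lines: perturb the input, query the black-box model on the perturbations, and fit a proximity-weighted linear surrogate whose coefficients stand in for "which inputs mattered." This is a minimal illustration of the idea, not the actual `lime` library; the function name and the toy model are made up for the example.

```python
import numpy as np

def lime_style_explanation(predict, x, n_samples=500, scale=0.1, seed=0):
    """Approximate a black-box model near x with a weighted linear surrogate.

    The surrogate's coefficients indicate which features mattered most
    for the prediction around x (the core idea behind LIME).
    """
    rng = np.random.default_rng(seed)
    # Sample perturbed inputs around x
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = np.array([predict(row) for row in X])
    # Weight samples by proximity to x (Gaussian kernel)
    dist = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(dist ** 2) / (2 * scale ** 2))
    # Weighted least squares: fit a local linear model with intercept
    Xb = np.hstack([X, np.ones((n_samples, 1))])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature local importance

# Toy black-box model where only the first feature really matters
black_box = lambda v: 3.0 * v[0] + 0.01 * v[1] ** 2
weights = lime_style_explanation(black_box, np.array([1.0, 1.0]))
# weights[0] should come out close to 3.0, weights[1] near zero
```

The point of the sketch is what it leaves out: the surrogate is only faithful near the sampled point, which is exactly why a clear-looking explanation can still fail to produce understanding of the model as a whole.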
Studies tell a different story. Many users can't make sense of XAI explanations even for simple tasks. And there's a deeper issue: conventional XAI treats explanation as a one-way delivery. Something the system provides. Something the user receives.
But in many real-world situations, people don't receive complete explanations before acting. They act under uncertainty, observe what happens, revise their mental model, and try again. Strategy games do this constantly. Players never see the full rulebook. They form hypotheses, test them through play, and gradually build understanding through repeated failure and correction.
Could the process of actively constructing explanations, rather than passively receiving them, be a more useful framework for thinking about how people relate to opaque AI systems?
A game where the AI
is the interface.
The researcher played through Arknights extensively, documenting how the game's AI system structures what players can know, do, and verify.
Arknights is a tower defense game set on a post-apocalyptic planet called Terra. Players take the role of a masked, amnesiac commander called "the Doctor," who manages combat operations for an organization called Rhodes Island. The Doctor never physically enters the battlefield. Every action is mediated through PRTS, the ship's AI terminal. Logging in is framed as a "neural connection." Deploying operators is a coordination command, not a direct intervention.
The player is a remote orchestrator, not a hands-on agent. And the system they orchestrate through is not always telling the truth. The paper traces three layers of this mediation between player and system.
The paper analyzes a phenomenological black box: how the player experiences opacity through interface, narrative, and feedback, not how the game's underlying algorithms work. PRTS may run on simple scripts, but the player's experience of it mirrors the uncertainty people feel with real AI systems.
Actionable but
unverifiable.
Three core findings, all drawn from qualitative analysis of how the game structures the relationship between player, system, and understanding.
PRTS provides enough information to act. Enemy previews show routes. Resource counters tick in real time. But the information is never sufficient to fully verify outcomes in advance. When Auto-Deploy fails, the system reports the result ("Auto-Deploy failed") but not the cause. Players can act on what PRTS tells them, but they can never confirm whether PRTS told them everything they needed. This gap is structural, not accidental.
Early in the game, information gaps are simply limitations: you didn't know that enemy type would appear. By Chapter 15, those gaps become adversarial. PRTS deliberately provides wrong deployment suggestions. Characters warn that PRTS's instructions "need to be discerned." The correct move in one climactic battle is to reject PRTS's recommendation entirely and place your operator at a position the system never suggested. The game turns its own explanation interface into something the player must resist rather than trust.
In most game design theory, agency means the capacity to take meaningful action and see the results. In Arknights, agency increasingly means the capacity to evaluate whether the system's explanations are trustworthy, to form your own working model of what's actually happening, and to maintain effective decision-making when you can't fully verify the information you're given. Understanding stops being a prerequisite for action and becomes something continuously generated through action.
This is a single-case qualitative study. It doesn't generalize across all games or all AI systems. The paper explicitly states it does not claim Arknights' developers were influenced by XAI research. The "black box" analyzed is phenomenological (how the player experiences opacity), not technical (how algorithms work).
What this means
for building AI.
The concept of explanatory agency reframes how we think about transparency. Understanding might grow faster when people are forced to confront imperfect information, not sheltered from it. Designing systems that require interpretation, rather than rewarding passive acceptance, could be a practical method for building critical thinking.
Where to go
from here.
If you're interested in going deeper.