Practices Library
AI research moves faster than anyone can read. Good findings get buried with the rest. The Practices Library is the cleanup. Each entry is one rule, pulled from a synthesized paper, written so you could try it tomorrow morning. Practices are organized by where in the work the rule matters: the decisions before any code, the architecture, the build, the evaluation, the production, the team, the governance. Pick a stage to start.
Discovery
Decisions made before code is written: what kind of project this is, where the model carries weight, and how much trust the work warrants.
2 practices
- Vibe coding
- Trust calibration
- AI in games
Architecture
How the system is shaped: agent topology, planning, step-level evaluation, and the structural choices that determine whether compute is spent well.
3 practices
- Multi-agent systems
- Budget-aware reasoning
- Tool use
Build
Implementation patterns: how to scaffold around probabilistic output, surface failure modes early, and keep human judgment in the loop while the code lands.
7 practices
- Trust calibration
- Code generation
- Budget-aware reasoning
Evaluation
How you measure whether the system actually works: equal-compute baselines, instrumentation that catches silent over-budgeting, and benchmarks that survive scrutiny.
2 practices
- Evals
- Compute normalization
- Test-time compute
Deployment & Operations
Production rollout, cost monitoring, observability, model rotation, and the day-to-day work of keeping deployed systems within bounds.
Coming soon
Team & Transformation
How people actually learn what AI does and does not do: hands-on exercises with live models, surfacing the failure modes that fixed demos hide.
2 practices
- Education
- Prompting
- AI ethics
Governance
How AI use stays accountable: policy, audit trails, model approvals, risk frameworks, and the human checks that keep autonomous systems within bounds.
Coming soon