
A One-Page AI Usage Policy That Actually Works

8 min read

Professional · ai-assisted-dev · policy · team-practice · governance · method

Most AI policies I've read fall into one of two categories: a legal-adjacent paragraph that reads like a disclaimer, or a twelve-page guide that nobody opens after onboarding. Both fail the same way: they do not change what engineers actually do.

A policy that works has a smaller ambition. It takes the invisible, load-bearing habits your team already should have and makes them explicit — one page, plain English, readable by a senior engineer in two minutes and a skeptical exec in one.

This is the template I've converged on after running versions of it on a few teams. You can steal it. I'd prefer that you change the names of the critical systems to match your own, think about each line for thirty seconds, and then agree with it or don't.

TL;DR

  • A good AI usage policy is short, boring, and specific. One page, ten numbered rules, written in active voice.
  • Do not try to write it in your first week. The policy describes habits that already exist. If the habits don't exist yet, you are drafting aspiration, not policy.
  • Three things every policy must do: assign accountability (every change belongs to a human), protect data (no PII or credentials in prompts), and preserve review (AI-drafted is reviewed the same as human-drafted, with more care for high-risk code).
  • What it must not do: name a specific tool, specify a specific model, specify a specific prompt, or attempt to limit creativity. Those are operational decisions, not policy.
  • Review the policy quarterly. The model landscape changes every six months; the policy should change more slowly than the landscape, but be reviewed more often than once a year.

When to write the policy

Do not write this in your first week on a team. I say this repeatedly and I will keep saying it. A policy written before you understand how the team actually works is aspiration dressed up as governance. Engineers can smell it in thirty seconds, and once they dismiss the first version, you have a harder time getting the second one adopted.

The right time to write the policy is after you have:

  • A running AGENTS.md or equivalent at the root of the main repo.
  • CI gates that enforce the invariants the team already cares about (tests, lint, typecheck).
  • At least one month of real engineers using assistants on real work, with visible output.
  • A meaningful conversation about which classes of work the team is comfortable routing to assistants and which it is not.

The policy is the written form of the agreement you already have. If you do not have an agreement yet, the policy is premature.
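To make the second prerequisite concrete, here is a minimal sketch of a gate script. The specific commands (`pytest`, `ruff`, `mypy`) are assumptions about your stack, not a recommendation; substitute whatever entry points your team's invariants already run through.

```python
# A minimal gate runner: every command must pass before a change merges.
# The commands below are placeholders for your own test/lint/typecheck steps.
import subprocess
import sys

GATES = [
    ["pytest", "-q"],        # tests
    ["ruff", "check", "."],  # lint
    ["mypy", "."],           # typecheck
]

def run_gates() -> int:
    """Run each gate in order; return 1 on the first failure, 0 if all pass."""
    for cmd in GATES:
        if subprocess.run(cmd).returncode != 0:
            print(f"gate failed: {' '.join(cmd)}", file=sys.stderr)
            return 1
    return 0
```

In CI this is one job; locally it is one command. Either way, the invariants are enforced by the loop, not by memory.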

For the context behind this sequencing, the First 90 Days post is the narrative version of how this agreement develops in the first quarter.

The policy

Here is the one-page version. Adapt it. Do not sanitize it.

[Team Name] AI Usage Policy

Effective [date]. Reviewed quarterly. Plain English, not legalese.

  1. Every change belongs to a human. The engineer whose name is on the pull request is the author, responsible for correctness, and accountable for follow-up. "The assistant drafted it" is context, not a defense.
  2. No customer data, credentials, PII, or secrets in prompts. Use synthetic examples or redacted fixtures. If you need to debug against real data, do it in an environment approved for that data.
  3. Review AI-drafted output with the same care as human-drafted output. Read every line. Run the tests. For high-risk changes — [list your critical systems here, e.g. payment routing, patient matching, auth, schema migrations] — apply additional care including pairing.
  4. The team standardizes on one primary assistant. Alternatives require a written reason and a named owner. Tool variance should be a diff in a config file, not a personal preference.
  5. Destructive or irreversible actions require explicit human authorship. Production database changes, schema migrations, rotating secrets, force-pushing, deleting branches or buckets, sending customer-visible messages. The assistant can propose; the human executes.
  6. Tool access is scoped and reviewed. Assistants have only the tools they need for their defined class of work. Tool permissions are a diff in source control, reviewed like any other permission change.
  7. Persistent logs are non-negotiable. Every assistant session records its decisions, tool calls, and outcomes to a place the team can audit. No ephemeral chat histories as the only record.
  8. We do not use AI assistants for final review of security-critical code. A human reviewer signs off on the [security-critical list above]. AI can draft, flag, or suggest; it does not approve.
  9. We write down the classes of work we route to assistants, quarterly. The list grows or shrinks based on evidence. If we cannot answer "how is AI going" with specifics, the list is too vague.
  10. When an AI-drafted change causes an incident, the named human reviewer is accountable. The postmortem does not blame the tool. It examines the loop that let the change through, and we fix the loop.

That's it. Ten lines. Fits on one page with the team name and effective date.

What each line is doing

Every line addresses a specific failure mode I have seen in practice.

Line 1 (every change belongs to a human) exists because teams are tempted, under deadline pressure, to let "the assistant wrote it" become rhetorical cover for unreviewed code. The line makes the cover fail.

Line 2 (no PII in prompts) is the one exec sponsors ask about first. It is also the one line where the exact wording matters. Make sure it aligns with your security team's view of acceptable use before it's final. Do not let the security team write the whole policy, but do let them red-line this line.
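One lightweight way to operationalize line 2 is a redaction pass over any text before it reaches a prompt. A sketch, assuming Python; the patterns are illustrative and deliberately incomplete, and the real list belongs to your security team:

```python
import re

# Illustrative redaction patterns. These are examples, not an exhaustive
# allowlist; your security team owns the real set (line 2 of the policy).
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),   # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),           # US SSN-shaped numbers
    (re.compile(r"(?i)\b(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
     r"\1=<REDACTED>"),                                        # obvious credentials
]

def redact(text: str) -> str:
    """Replace obvious PII/credential patterns with placeholders."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text
```

A pass like this catches the careless case, not the adversarial one. The policy line still does the real work; the code just lowers the cost of complying with it.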

Line 3 (review with the same care) is the quiet failure mode. Junior engineers tend to trust AI-drafted code more. Senior engineers tend to skim code that "looks right." The line asks for the same rigor regardless of source, and calls out the systems where extra care is mandatory.

Line 4 (standardize on one assistant) sounds draconian but is practical. When everyone on the team uses the same tool with the same config, reviewers share intuition about what the tool tends to get right and wrong. When everyone uses something different, every review is solo. If your team legitimately needs multiple tools, make that a named exception with a configured diff — not a free-for-all.

Line 5 (destructive actions require human authorship) is where the wording matters most. The pattern is: the assistant can produce the diff, the message, the migration file, the plan. A human runs the command, sends the message, merges the migration, executes the plan. It's a small separation that catches a large class of incidents.
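The propose/execute split can be sketched in a few lines. This is illustrative, not a prescribed API: the point is that the assistant's output is inert data, and only an explicit call with a named human turns it into an action.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Proposal:
    description: str              # human-readable summary of the action
    command: Callable[[], None]   # the action itself, not yet run

def execute(proposal: Proposal, approved_by: str) -> None:
    """Run a proposal only with a named human approver on record."""
    if not approved_by:
        raise PermissionError("destructive actions require a named human")
    print(f"{approved_by} executing: {proposal.description}")
    proposal.command()
```

The assistant can construct as many `Proposal` objects as it likes; nothing happens until a person calls `execute` with their name on it.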

Line 6 (tool access is scoped) is load-bearing as agents become more capable. If the assistant can call any tool it can see, eventually it will call one you did not expect. Scoping access by class of work and putting it in version control means "the assistant can do X now" is a diff you can review, not a permission drift.
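A sketch of what "a diff you can review" might look like. The work classes and tool names below are hypothetical; the real mapping lives in your repo, so granting a new capability arrives as a reviewable change, not silent drift.

```python
# Hypothetical tool scopes, checked into source control. Adding a tool to a
# work class is a one-line diff that goes through normal review.
TOOL_SCOPES = {
    "docs": {"read_file", "search"},
    "refactor": {"read_file", "search", "edit_file", "run_tests"},
}

def allowed(work_class: str, tool: str) -> bool:
    """Check a tool call against the scope for this class of work."""
    return tool in TOOL_SCOPES.get(work_class, set())
```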

Line 7 (persistent logs) is the one engineers push back on most, because "just let me chat with it." Push back. Without a persistent log you have no audit, no debugging, and no way to improve the loop. The log does not have to be visible by default — it has to exist.
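One minimal shape for such a log is append-only JSON Lines, one record per assistant action. The field names here are an assumption, not a standard; what matters is that every session writes somewhere the team can audit after the fact.

```python
import json
import time

def log_event(path: str, session_id: str, event: str, detail: dict) -> None:
    """Append one auditable record per assistant action."""
    record = {
        "ts": time.time(),      # when it happened
        "session": session_id,  # which assistant session
        "event": event,         # e.g. "tool_call", "decision", "outcome"
        "detail": detail,       # arguments, diffs, results
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Append-only text files are deliberately boring: they survive tool changes, they grep well, and they cost nothing to keep.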

Line 8 (no AI final review for security-critical code) is the line that most clearly distinguishes a working policy from slideware. Drafting and flagging are useful AI tasks; final approval on the things that can sink the company is not. State this explicitly so nobody is surprised.

Line 9 (quarterly review of routed work) is the accountability for the policy itself. It forces the team to answer, four times a year, "what are we using this for and how is it going." That answer is also the report leadership wants.

Line 10 (accountability in incidents) is the most important line. The assistant cannot be disciplined, fired, promoted, or given feedback. Only humans can. A blame-the-tool postmortem improves nothing; a blame-the-loop postmortem improves the loop. Write it in.

What's deliberately not in the policy

Things you will be tempted to add. Resist each one.

A named tool. "We use [specific assistant]." The tool will change. The version will change. The vendor will change. Keep the tool decision out of the policy and in a separate ADR that can be replaced without touching governance.

A named model. Same reason. Models are replaced every few months. Policy should outlive them.

A list of approved prompts. This turns into compliance theater within two weeks. Prompts evolve as the team learns what works. A policy that constrains prompts ossifies the practice.

Percentages. "Use AI for at least 30% of commits." Do not. You are measuring the wrong thing; engineers will optimize for the number.

A prohibition on creativity. "Do not let AI write code that surprises you." Surprising code is sometimes good code. The review step is where surprise gets evaluated, not the policy.

Training data restrictions. These belong in your data governance policy, not your AI usage policy. Keep them separate or they dilute each other.

How to evolve it

The policy is a living document, but not a volatile one. The right cadence is quarterly review with this checklist:

  • Does any line describe a habit that no longer matches practice? (Either update the practice or update the policy.)
  • Did any incident in the quarter happen because a policy line was ambiguous? (Tighten the line.)
  • Did any incident happen because a policy line was missing? (Add a line. Sparingly. The policy's value is brevity.)
  • Are new classes of work being routed to assistants that weren't before? (Update the team's work-classification list, not the policy itself.)
  • Are any lines unchanged for two years? (Sanity check them against current practice; they may be stale.)

A policy that changes every week is theater. A policy that never changes is theater of a different kind. Quarterly, with written rationale, is the stable rhythm.

Where to go next

If you're building the underlying practice before you write the policy, the First 90 Days post is the narrative version of how to do that, and the AI-Assisted Dev Adoption Loop is the structured playbook.

If you're building agents specifically — where tool access, contracts, and rollout gates matter most — the Agent Rollout with Guardrails playbook covers the adjacent territory.

If you want the broader perspective, the method spine is the unified set of principles all of this rests on. The short version: an AI policy, like everything else worth shipping, works when it is a durable loop with visible state, explicit gates, and evidence for every claim — including the claim that you need a policy in the first place.
