Method · Reading paths

Four curated sequences

The archive, read in the order that makes each argument. Every path is a short reading sequence — four or five pieces — that prepares you to pick up one of the four playbooks.

Path · Reviewed Apr 2026

Introducing AI-Assisted Dev to a New Team

A curated five-post sequence that takes you from "we should be using AI more" to a team practice that survives contact with production.

For: Engineering leaders and senior ICs joining a team or rolling out AI assistants for the first time. · ~40 min total
  1. Start with the narrative. This is how the rollout actually sequences through the first quarter — week one, the AGENTS.md pass, the gates, and the conversation you earn the right to have at week ten.

  2. Once you know how the sequence goes, this is the artifact the team converges on. Ten lines, plain English, the specific failure modes each line prevents.

  3. The patterns inside the repo that make the practice durable: instruction hierarchy, hooks, shared memory, and workflows as version-controlled processes.

  4. The loop itself — draft, verify, revise — applied specifically to agents, with the verification strategies that separate generators from agents.

  5. The ongoing risk most rollouts miss: borrowed competence eroding judgment. What to do about it, at the team and the individual level.

Prepares you for the playbook
AI-Assisted Dev Adoption Loop

How a team moves from "we tried Copilot" to a reproducible AI-assisted development practice without losing engineering judgment.

Path · Reviewed Apr 2026

Auditing and Stabilizing GPU Infrastructure

A five-post sequence for platform leads who need to turn GPU assumptions into baselines and ship a credible 90-day roadmap.

For: Platform and infrastructure teams operating GPU workloads or planning to take them on. · ~22 min total
  1. The overview of the audit itself — the four tracks, what each one delivers, and why you do them in parallel rather than in series.

  2. The cost track in depth. What to measure, what the cloud console lies about, and how to derive a per-unit-of-work cost that survives scrutiny.

  3. The reliability track. A taxonomy of what breaks on GPUs in production — from thermal throttling to VRAM fragmentation — and how to catch each early.

  4. The SLO track. How to define meaningful latency, error, and saturation targets for inference, and what to do when the error budget is spent.

  5. How the roadmap lands in operations. GitOps patterns that make on-prem or hybrid GPU as boring to run as managed cloud services.

Prepares you for the playbook
AI Infrastructure Readiness Audit

A repeatable process for auditing production AI infrastructure: what to measure, what to fix first, and how to write down the answer.

Path · Reviewed Apr 2026

Healthcare Integration From Scratch

A four-part sequence that takes you from national interoperability context to a specific failure mode to the reference implementation.

For: Integration engineers, clinical data teams, and platform leads responsible for partner feeds. · ~28 min total
  1. Start with the national context. What changed between Meaningful Use and TEFCA, and what still blocks plug-and-play apps despite a decade of standards work.

  2. The operational playbook: contracts as assets, observability as a first-class concern, executable docs, and support workflows that compound.

  3. A specific failure mode — an "enhanced best match" trap where filter and mode parameters collide — and the defensive pattern that fixes it on both sides.

  4. The reference implementation. How Source Profiles, a three-phase parsing pipeline, and a workflow DSL turn messy legacy formats into semantic events without a one-off codebase per feed.

Prepares you for the playbook
Healthcare Integration Onboarding

How to add a new healthcare feed — HL7v2, CSV, EDI, FHIR, or CDA — without a one-off codebase and without silent data loss.

Path · Reviewed Apr 2026

Rolling Out Production Agents

A four-post sequence for teams moving agents from "impressive demo" to "owns a class of work with visible track record."

For: Teams piloting or scaling AI agents inside an engineering organization. · ~31 min total
  1. Start with the shape of a reliable agent: an explicit loop, bounded tools, verification against concrete criteria, and visible operator state.

  2. The repo as control system. Hooks, memory, tasks, and workflows that give agents durable context and enforce the invariants you care about.

  3. The judgment-erosion risk. How to keep agents a force multiplier for engineering thinking rather than a substitute for it.

  4. The policy that makes the practice auditable. Ten lines the team converges on, with a quarterly review rhythm.

Prepares you for the playbook
Agent Rollout With Guardrails

A staged path from "one agent in one repo" to "agents own a class of work" without inheriting opaque state or runaway tool calls.

Different audience or question?

The paths above are the four I have written down so far. If your team needs a different sequence — for example, a reliability-only path or a compliance-oriented one — I am happy to sketch it.