
Blog

Project updates, tutorials, and notes on Loom, MCP, inference, and healthcare integration. For longer-form implementation writeups, see Writing and Case Studies.

Showing all 17 posts

All Posts

Getting Gemma 4 Running on a Radeon 7900 XTX (with and without TurboQuant)
April 4, 2026 · 8 min read
Lab

What it took to get Gemma 4 E4B serving cleanly on Radeon through FlexInfer: a stable TRITON lane on a 7900 XTX, an experimental TurboQuant long-context lane on a second node, and the GPTQ pipeline work still underway.

Tags: gemma4, amd, radeon, 7900xtx, +6 more
Build Your Own Legs Before the Crutches Fail
March 9, 2026 · 8 min read
Professional

AI-assisted development is useful leverage, but only if you convert borrowed competence into real judgment before the support becomes a dependency.

Tags: ai-assisted-dev, engineering, agents, developer-workflows
Two-Lane Text GPU Allocation: Quality + Vision/Fast (Plus a Media Lane)
February 9, 2026 · 8 min read
Lab

How I redistributed 6 models across 3 GPU nodes to eliminate contention, using priority-based shared groups and label-based aliases for routing and failover.

Tags: gpu, kubernetes, mlc-llm, rocm, +4 more
Loom-Mode MCP for Advanced, Fast AI-Assisted Dev (Go-Native, Proxy+Daemon)
February 9, 2026 · 6 min read
Lab

How to keep AI-assisted development fast and token-efficient: one proxy entry, a Go daemon that routes calls, and a small set of Go-native MCP servers.

Tags: loom, loom-core, mcp, go, +4 more
Loom: One Registry, Many AI Coding Assistants
February 9, 2026 · 6 min read
Lab

How Loom keeps MCP servers and skills in sync across Codex, Claude, Gemini, VS Code, Antigravity, Kilocode, OpenCode, and Zed.

Tags: mcp, loom, loom-core, vscode, +4 more
Repo Design Patterns for AI-Assisted Dev: Control Loops, Hooks, and Memory
February 9, 2026 · 5 min read
Lab

Treat your repo like a control system: instruction hierarchy, workflows, hooks, and shared memory that make AI-assisted dev fast, reproducible, and hard to derail.

Tags: agents, repo-design, loom-core, hooks, +4 more
Deploying MLC-LLM on Dual RX 7900 XTX GPUs: Debugging VRAM, KV Cache, and K8s GPU Scheduling
January 4, 2026 · 10 min read
Lab

What actually broke when I deployed MLC-LLM across two RX 7900 XTX nodes, and the fixes that made it stable: quantization, KV cache sizing, and Kubernetes GPU hygiene.

Tags: mlc-llm, rocm, amd, radeon, +4 more
AI Infra Readiness Audit: What I Check (and What You Get)
December 29, 2025 · 3 min read
Professional

A practical checklist for auditing production AI infrastructure: GPU cost baselines, reliability risks, and an executable roadmap.

Tags: consulting, gpu, kubernetes, mlops, +3 more
GPU Cost Baseline: What to Measure, What Lies
December 29, 2025 · 4 min read
Professional

Before you can cut GPU costs, you need to measure them correctly. Here is what to track and what the cloud console will not tell you.

Tags: gpu, finops, cost, mlops, +1 more
GPU Failure Modes: What Breaks and How to Debug It
December 29, 2025 · 5 min read
Professional

Common GPU infrastructure failures in production and how to diagnose them before they become incidents.

Tags: reliability, gpu, debugging, kubernetes, +1 more
Hybrid/On-Prem GPU: The Boring GitOps Path
December 29, 2025 · 4 min read
Professional

A practical guide to running GPU workloads on-prem or hybrid, using Kubernetes and GitOps patterns that make operations boring.

Tags: gpu, kubernetes, gitops, on-prem, +2 more
Standing Up a GPU-Ready Private AI Platform (Harvester + K3s + Flux + GitLab)
December 29, 2025 · 6 min read
Professional

Field notes from building and operating a small private GPU platform with Harvester, K3s, and a GitLab -> Flux delivery loop.

Tags: case-study, platform-engineering, kubernetes, k3s, +10 more