Blog
Project updates, tutorials, and thoughts on AI, homelab, and development.
Deploying MLC-LLM on Dual RX 7900 XTX GPUs: Debugging VRAM, KV Cache, and K8s GPU Scheduling
What actually broke when I deployed MLC-LLM across two RX 7900 XTX nodes, and the fixes that made it stable: quantization, KV cache sizing, and Kubernetes GPU hygiene.
AI Infra Readiness Audit: What I Check (and What You Get)
A practical checklist for auditing production AI infrastructure: GPU cost baselines, reliability risks, and an executable roadmap.
GPU Cost Baseline: What to Measure, What Lies
Before you can cut GPU costs, you need to measure them correctly. Here is what to track and what the cloud console will not tell you.
GPU Failure Modes: What Breaks and How to Debug It
Common GPU infrastructure failures in production and how to diagnose them before they become incidents.
Hybrid/On-Prem GPU: The Boring GitOps Path
A practical guide to running GPU workloads on-prem or hybrid, using Kubernetes and GitOps patterns that make operations boring.
Standing Up a GPU-Ready Private AI Platform (Harvester + K3s + Flux + GitLab)
Field notes from building and operating a small private AI platform with GPU scheduling, GitOps, and production-grade guardrails.
SLOs for Inference: Latency, Errors, Saturation
How to define meaningful SLOs for production inference workloads, and what to do when they break.
Optimizing Real-Time Kubernetes Visualizations: From 25ms to 12ms Per Frame
A deep dive into optimizing Canvas 2D and Three.js visualizations for Kubernetes dashboards, covering algorithmic complexity, memory management, and GPU-efficient rendering patterns.
Welcome to My Homelab
An introduction to my personal site and the homelab infrastructure powering my AI experiments.
Running LLMs on Radeon GPUs with ROCm
A guide to getting local LLM inference working on AMD Radeon GPUs using ROCm.
Building Practical AI Agents
Thoughts on designing AI agents that actually work for real tasks.