Serving High Performance with Bun & Kubernetes
December 3, 2025

There's a fine line between hosting a static site and deploying a mission-critical distributed system. I crossed that line, looked back, and sprinted towards complexity. Why? Because I can (and got nerd-sniped). This blog now runs on a 3-node Kubernetes cluster, leveraging Blue-Green deployments for zero-downtime updates and Bun for blistering-fast performance.
The Metal: 3 Nodes of Fury
Why three nodes? In the world of distributed systems, two is a coincidence, and three is a quorum (the math is sketched after this list).
- Resilience: If one node goes dark (kernel panic, power outage, or I accidentally unplug it), the other two maintain the etcd consensus and keep serving traffic.
- Rolling Updates: K8s can drain a node for maintenance while the others shoulder the load.
- Overkill? Yes. But it feels great.
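For the pedants, here's the fault-tolerance arithmetic. This is standard etcd quorum math, nothing specific to my setup:

```ts
// An etcd cluster of n members commits writes only with a majority:
// floor(n/2) + 1 voters. Tolerable failures follow directly.
const tolerableFailures = (n: number) => n - (Math.floor(n / 2) + 1);

console.log(tolerableFailures(2)); // 0: a second node adds risk, not safety
console.log(tolerableFailures(3)); // 1: lose any one node, keep serving
```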
The Runtime: Betting on Bun
I recently migrated the entire stack from Node.js to Bun. Node works; Bun flies. It acts as my runtime, bundler, and package manager.
During the build phase, Bun's native bundler shreds through dependencies significantly faster than Webpack. In production, the startup time is virtually instant.
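For context, the `bun run build` step in the Dockerfile below maps to a small build script. A minimal sketch using Bun's bundler API; the entrypoint path and output directory are assumptions about the project layout:

```ts
// build.ts: what `bun run build` might invoke (script name per package.json).
const result = await Bun.build({
  entrypoints: ["./src/index.ts"], // assumed entrypoint
  outdir: "./build",
  target: "bun", // emit code for the Bun runtime, not the browser
  minify: true,
});

if (!result.success) {
  // Surface bundler diagnostics and fail the Docker build stage.
  for (const log of result.logs) console.error(log);
  process.exit(1);
}
```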
```dockerfile
# Multi-stage build for maximum efficiency
FROM oven/bun:1 AS base
WORKDIR /usr/src/app

# Stage 1: Install dependencies
# Installing in its own stage keeps this layer cacheable across builds
FROM base AS install
RUN mkdir -p /temp/dev
COPY package.json bun.lockb /temp/dev/
RUN cd /temp/dev && bun install --frozen-lockfile

# Stage 2: Build the app
FROM base AS prerelease
COPY --from=install /temp/dev/node_modules node_modules
COPY . .
ENV NODE_ENV=production
RUN bun run build

# Stage 3: Production image
FROM base AS release
COPY --from=prerelease /usr/src/app/build ./build
COPY --from=prerelease /usr/src/app/package.json .

# Drop to the unprivileged bun user
USER bun
EXPOSE 3000/tcp
CMD ["bun", "run", "start"]
```
Deployment Strategy: Blue-Green
I don't like downtime (says the man who hosts DNS via Cloudflare). Kubernetes makes Blue-Green deployments almost trivial. We maintain two identical environments: blue (live) and green (idle/staging).
When I push a commit, the CI/CD pipeline deploys to the idle environment. Health checks kick in, probing the Bun server to ensure it's accepting connections. Only when the green environment reports 100% health does the Service selector switch over.
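The probe target is just a route in the server entrypoint. A minimal sketch of what the `start` script might boot; the file name and static-serving logic are placeholders:

```ts
// src/index.ts: a plausible shape for `bun run start`.
const server = Bun.serve({
  port: 3000, // matches the EXPOSE in the Dockerfile
  async fetch(req) {
    const { pathname } = new URL(req.url);

    // Kubernetes probes and the CI health gate hit this route.
    if (pathname === "/health") {
      return new Response("ok", { status: 200 });
    }

    // Serve the prebuilt blog output (placeholder logic).
    const file = Bun.file(`./build${pathname === "/" ? "/index.html" : pathname}`);
    if (await file.exists()) return new Response(file);
    return new Response("Not Found", { status: 404 });
  },
});

console.log(`Bun server listening on :${server.port}`);
```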
The selector switch updates the Kubernetes Endpoints object. Our Ingress controller (Traefik) picks up the change almost instantly and starts routing new requests to the green pods, while existing connections to the blue pods drain gracefully. Zero dropped requests.
# The "Switch" logic in the CI pipeline (simplified) # 1. Verify Green health curl --fail http://blog-green.internal/health # 2. Patch the Service to point to Green kubectl patch service blog-service -p '{"spec":{"selector":{"color":"green"}}}' # 3. Scale down Blue (optional, to save resources) kubectl scale deployment/blog-blue --replicas=0
Is this overkill for a blog? Absolutely. But it allows for testing optimizations in production-like environments without risking the user experience. Plus, the combination of Bun's speed and K8s' resilience means the site withstands traffic spikes without breaking a sweat.