# Kubernetes Deployment
BoilStream ships an official Helm chart for running a cluster on any standard Kubernetes distribution (EKS, GKE, AKS, CloudFleet, k3s, OrbStack).
## Chart location
- Chart source: `github.com/boilingdata/boilstream/charts/boilstream`
- OCI artifact: `oci://ghcr.io/boilingdata/charts/boilstream`
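To see what the chart exposes before writing an overlay, you can dump its default values straight from the OCI reference (Helm >= 3.8):

```bash
# Dump the chart's default values to use as a reference for your own overlay
helm show values oci://ghcr.io/boilingdata/charts/boilstream --version 0.3.25 > default-values.yaml
```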
## What the chart deploys
- StatefulSet with a headless `Service` for stable per-pod DNS (`pod-0.boilstream-headless.<ns>.svc.cluster.local`)
- Per-pod ClusterIP `Service`s exposing PGWire (5432), Kafka (9092), FlightRPC (50051), FlightSQL (50250), auth (8443), and cluster (8444)
- Envoy Gateway with three categories of routes:
  - Bare-hostname `TLSRoute`s (your domain → round-robin across all pods) — the default client-facing endpoint, requires direct-TLS-aware clients (libpq 17+, psql 17+, pgjdbc 42.7+)
  - Per-pod `TLSRoute`s (`boilstream-N.<your-domain>` → that specific pod) — for debugging and explicit pinning, also direct-TLS only
  - Per-pod `TCPRoute`s on `pgwire.publicTcpPortBase + pod_index` (default `15432`, `15433`, …) — pure L4 passthrough into the pod's PGWire listener, no SNI / no direct-TLS requirement. This is the path for stock libpq, DuckDB's bundled `postgres` extension, DBeaver, Tableau, and every other libpq-based client. Vended credentials default to it.
- cert-manager `Certificate` resources for the public wildcard cert (covers both the bare hostname and `*.<your-domain>`) and a separate internal CA for pod-to-pod mTLS
- PodDisruptionBudget, `preStop` drain hook, and standard `app.kubernetes.io/*` labels
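Once installed, these objects are visible directly with `kubectl`; a quick inspection sketch, assuming the release is named `boilstream` in the `boilstream` namespace and that the Gateway API and cert-manager CRDs are present:

```bash
# Workload, per-pod Services, and the disruption budget
kubectl -n boilstream get statefulset,svc,pdb

# Gateway API route objects created by the chart
kubectl -n boilstream get tlsroutes,tcproutes

# cert-manager Certificates and the Secrets they materialize
kubectl -n boilstream get certificates,secrets
```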
## Prerequisites
The chart assumes these are already installed in the cluster:
- cert-manager >= 1.13 — issues the public and internal TLS `Secret`s
- Envoy Gateway >= 1.2 — provides the `GatewayClass` referenced by `gateway.className`

You also need:

- A pre-created `Secret` with the superadmin password (and optionally an MFA secret). The chart never reads the password from values — it stays out of Helm's release history.
- S3-compatible object storage (AWS S3, Hetzner Object Storage, MinIO, RustFS, …). Credentials supplied via a pre-created `Secret` keyed `access_key`/`secret_key`.
- A `ClusterIssuer` for the public cert (typically Let's Encrypt via DNS-01 or HTTP-01).
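A minimal sketch of the pre-created Secrets, assuming the `boilstream` namespace and illustrative names (`boilstream-superadmin` is the name the setup helper later reads; the S3 secret name and the superadmin key name are placeholders to check against the chart's values):

```bash
# Namespace first, since the Secrets must exist before `helm install`
kubectl create namespace boilstream

# Superadmin password (never passed through Helm values); key name "password" is an assumption
kubectl -n boilstream create secret generic boilstream-superadmin \
  --from-literal=password="$(openssl rand -base64 24)"

# S3 credentials keyed access_key / secret_key as the chart expects;
# the secret name "boilstream-s3-credentials" is a placeholder
kubectl -n boilstream create secret generic boilstream-s3-credentials \
  --from-literal=access_key="YOUR_ACCESS_KEY" \
  --from-literal=secret_key="YOUR_SECRET_KEY"
```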
## Install
Using the OCI artifact:
```bash
helm install boilstream oci://ghcr.io/boilingdata/charts/boilstream \
  --version 0.3.25 \
  -n boilstream --create-namespace \
  -f my-values.yaml
```

Or from a checkout:

```bash
git clone https://github.com/boilingdata/boilstream
cd boilstream
helm install boilstream ./charts/boilstream \
  -n boilstream --create-namespace \
  -f my-values.yaml
```

## Example overlays
The chart ships two reference overlays you can copy and adapt:
- `values-eks-example.yaml` — AWS EKS with NLB, IRSA, and AWS S3
- `values-hetzner-example.yaml` — CloudFleet / Hetzner ARM64 nodes with Hetzner Object Storage
For a step-by-step install on Hetzner + CloudFleet, see Kubernetes on CloudFleet + Hetzner.
## Connecting
The Web Auth GUI and boilstream-admin catalog credentials hand you a connection string with the right host + port + flags filled in — paste it directly into psql / DBeaver / your code. The detail below is for understanding what's behind those strings.
**PGWire (`pgwire.publicTcpPortBase + pod_index`, default `:15432`, `:15433`, …)** — the default for vended credentials. Pure L4 passthrough at the gateway, no SNI, no direct-TLS — every libpq, every JDBC, every BI tool just works:
```bash
# What the dashboard / CLI vends — paste it directly:
psql "postgresql://USER:PASSWORD@boilstream-0.app.boilstream.com:15432/DBNAME?sslmode=require"

# DuckDB's stock postgres extension (libpq < 17) — same path:
duckdb -c "ATTACH 'ducklake:postgres:host=boilstream-0.app.boilstream.com port=15432 user=… password=… dbname=… sslmode=require' AS cat"
```

**PGWire bare-domain `:5432`** — SNI-routed across pods for clients that can do direct TLS (libpq 17+, psql 17+, pgjdbc 42.7+):

```bash
psql "host=app.boilstream.com port=5432 sslmode=require sslnegotiation=direct user=… dbname=…"
```

**Kafka** — bare domain, SNI-routed, TLS:

```bash
kcat -b app.boilstream.com:9092 -t mytopic -X security.protocol=SSL ...
```

**Arrow Flight / DuckDB airport extension:**

```sql
ATTACH 'boilstream' (TYPE AIRPORT, location 'grpc+tls://app.boilstream.com:50051/');
```

**Admin CLI (auth REST):**

```bash
boilstream-admin auth login --server https://app.boilstream.com --email admin@example.com
```

## Admin CLI from your laptop
For `boilstream-admin` against a Kubernetes-deployed cluster, the chart bundles a one-time setup helper that pulls the superadmin password and MFA secret out of the cluster's K8s Secrets into `~/.boilstream/<profile>/`, then runs `auth login`. After setup, the CLI is used natively — no wrapper, no env vars.
```bash
# one-time per cluster (re-run when password/MFA rotates or the ~1h session expires)
./scripts/boilstream-admin-k8s-setup.sh --profile hetzner

# pick the profile as default, then use the CLI natively
boilstream-admin auth switch hetzner
boilstream-admin cluster status
boilstream-admin users list
boilstream-admin ducklakes list

# or keep multiple clusters side-by-side and switch per command
boilstream-admin -P hetzner cluster status
boilstream-admin -P eks-prod cluster status
```

What the setup script does:
- `kubectl -n boilstream get secret boilstream-superadmin … > ~/.boilstream/<profile>/password.txt`
- `kubectl -n boilstream get secret boilstream-superadmin-mfa … > ~/.boilstream/<profile>/mfa_secret.txt`
- `boilstream-admin auth login --server https://app.boilstream.com:8443 --as-profile <profile>` using those files (read via `BOILSTREAM_PASSWORD_PATH` / `BOILSTREAM_MFA_SECRET_PATH`)
- The session lands at `~/.boilstream/sessions/<profile>.json`, where the CLI finds it on subsequent invocations.
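If the helper script isn't at hand, the same flow can be done manually; a sketch assuming the Secret data keys are `password` and `mfa_secret` (check your chart's Secret layout), with the env var names and login command taken from the list above:

```bash
# Pull the superadmin credentials out of the cluster by hand
mkdir -p ~/.boilstream/hetzner
kubectl -n boilstream get secret boilstream-superadmin \
  -o jsonpath='{.data.password}' | base64 -d > ~/.boilstream/hetzner/password.txt
kubectl -n boilstream get secret boilstream-superadmin-mfa \
  -o jsonpath='{.data.mfa_secret}' | base64 -d > ~/.boilstream/hetzner/mfa_secret.txt

# Log in with the file-based credential paths the CLI reads
BOILSTREAM_PASSWORD_PATH=~/.boilstream/hetzner/password.txt \
BOILSTREAM_MFA_SECRET_PATH=~/.boilstream/hetzner/mfa_secret.txt \
boilstream-admin auth login --server https://app.boilstream.com:8443 --as-profile hetzner
```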
Data-plane ops (queries, INSERTs, Kafka produce/consume) run directly on whichever pod the connection landed on. Catalog mutations (CREATE DuckLake, user management, etc.) are transparently forwarded from brokers to the elected leader over the internal :8444 cluster API — clients don't need to know which pod holds leadership.
Per-pod pinning (only needed for debugging or when you explicitly want to stick to one pod): use `boilstream-0.app.boilstream.com`, `boilstream-1.app.boilstream.com`, etc.
## TLS architecture
- Public edge: browsers and external clients hit a single LoadBalancer IP. Envoy Gateway performs SNI-based TLS passthrough — no termination, no cert in Envoy. Pods present the wildcard cert directly. Issued via cert-manager + Let's Encrypt.
- Pod-to-pod cluster coordination: optional mTLS with a separate internal CA. Browser trust is not mixed with inter-pod trust.
- In-pod loopback: the auth server presents a self-signed cert for `localhost`/`127.0.0.1` SNI connections and the public cert for its real hostname, allowing the DuckDB boilstream extension's libcurl to validate TLS for in-pod OPAQUE login.
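To see the passthrough in action (the pod, not Envoy, presenting the public wildcard cert), a direct-TLS probe with `openssl` works; the hostname below is illustrative:

```bash
# Direct TLS handshake on the PGWire bare-domain port; expect the Let's Encrypt wildcard
# certificate in the output, since Envoy only routes on SNI and never terminates TLS
openssl s_client -connect app.boilstream.com:5432 -servername app.boilstream.com </dev/null \
  | openssl x509 -noout -subject -issuer
```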
## High availability
- `replicas >= 3` is recommended for production (tolerates one pod loss during a rolling update)
- `affinity.podAntiAffinity` forces one pod per node
- `podDisruptionBudget.maxUnavailable: 1` gates voluntary evictions
- Leader election survives pod loss; brokers compete to promote when the leader's heartbeat goes stale in S3
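Expressed as values (key paths taken from the bullets above; the exact shape of the anti-affinity toggle is an assumption to verify against the chart's `values.yaml`):

```bash
# Append production HA settings to the overlay used at install time
cat >> my-values.yaml <<'EOF'
replicas: 3
podDisruptionBudget:
  maxUnavailable: 1
affinity:
  podAntiAffinity: true   # assumed boolean toggle; the chart may expect a full affinity spec
EOF
```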
## Non-AWS S3 compatibility
The chart has been validated against AWS S3, Hetzner Object Storage, and RustFS. For object stores where ETags don't round-trip identically between GET and PUT (seen with some Hetzner and MinIO configurations), the leader heartbeat retries automatically with re-read confirmation.
## Next steps
- Cluster Mode — leader election, broker states, rolling updates
- boilstream-admin CLI — managing users and catalogs from outside the cluster
- AWS IAM Permissions — IRSA setup for EKS