Deploy zrok on Docker
This guide walks through deploying a self-hosted zrok2 instance using Docker Compose. The stack runs the same components as the Deploy zrok on Linux guide (OpenZiti overlay, zrok2 controller, frontend, and PostgreSQL), using official container images with runtime configuration via environment variables.
This compose stack runs one frontend instance. For higher throughput or availability, run multiple frontend instances behind a reverse proxy (e.g., Caddy or Traefik). See Scaling frontends for details.
Prerequisites
- Docker Engine 24+ with the Compose plugin (docker compose)
- A DNS zone with a wildcard A record resolving to the host (e.g., *.share.example.com → your server IP)
- Ports reachable from the internet:
- 3022: OpenZiti router data plane (always required; direct TLS, not proxied)
- 1280: OpenZiti controller control plane (required for routers not co-located; uses mTLS and must be directly exposed—Caddy cannot proxy this port)
- 443: HTTPS for the zrok2 controller and frontend via Caddy (recommended for production)
- 18080 and 8080: zrok2 controller and frontend (local testing only, no TLS)
Use the Caddy TLS overlay for production deployments. Caddy terminates TLS for the zrok2 controller API and the wildcard share frontend on port 443, routing by subdomain. This avoids exposing the insecure ports 18080 (controller) and 8080 (frontend) directly to the internet.
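The internet-facing ports above must also be open in the host firewall. As a sketch, the hypothetical helper below prints (but does not run) the ufw rules a production deployment behind Caddy would need; adjust the port list if you expose the insecure ports instead:

```shell
#!/bin/sh
# Hypothetical helper: echo the ufw rules for the internet-facing ports
# (443 for Caddy, 1280 for the OpenZiti control plane, 3022 for the data
# plane). It only prints the commands so you can review before applying.
print_fw_rules() {
  for port in 443 1280 3022; do
    echo "ufw allow ${port}/tcp"
  done
}

print_fw_rules
```

Review the output and run the commands yourself (with sudo) once they match your deployment.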
Step 1: Download the Compose project
Download the Compose project files into a new zrok2-instance directory:
curl -sSfL https://get.openziti.io/zrok2-instance/fetch.bash | bash
cd zrok2-instance
This downloads the essential Docker Compose project files. Alternatively, clone the repository to build zrok2 from
source with compose.build.yml:
git clone --depth 1 https://github.com/openziti/zrok.git
cd zrok/docker/compose/zrok2-instance
Step 2: Configure environment variables
1. Copy the example environment file:

   cp .env.example .env

2. Open .env and set these three required values (they match the variable names used in Deploy zrok on Linux):

   | Variable | Description | Example |
   |---|---|---|
   | ZROK2_DNS_ZONE | DNS zone with wildcard A record | share.example.com |
   | ZROK2_ADMIN_TOKEN | zrok2 admin token (≥32 chars) | myadmintokenoflength32ormore... |
   | ZITI_PWD | OpenZiti controller admin password | mysecurepassword |
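ZROK2_ADMIN_TOKEN must be at least 32 characters. One way to generate a suitable random value is with openssl (a sketch, assuming openssl is installed on the host; the .env.generated filename is illustrative — point the redirect at your real .env):

```shell
# Generate a 64-hex-character random token (32 random bytes) and record it.
TOKEN=$(openssl rand -hex 32)
echo "ZROK2_ADMIN_TOKEN=${TOKEN}" >> .env.generated

# Show the token so you can copy it elsewhere if needed.
echo "${TOKEN}"
```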
Optional variables
These variables have working defaults; override them if your deployment needs differ.
| Variable | Default | Description |
|---|---|---|
| ZROK2_STORE_TYPE | postgres | Database backend (postgres or sqlite3) |
| ZROK2_DB_PASSWORD | zrok2defaultpw | PostgreSQL password |
| ZITI_USER | admin | OpenZiti admin username |
| ZROK2_CTRL_PORT | 18080 | zrok2 controller API port |
| ZROK2_FRONTEND_PORT | 8080 | zrok2 frontend port |
| ZITI_CTRL_PORT | 1280 | OpenZiti control plane port |
| ZITI_ROUTER_PORT | 3022 | OpenZiti data plane port |
| ZROK2_INSECURE_INTERFACE | 127.0.0.1 | Bind address for non-TLS ports |
| ZROK2_IMAGE | docker.io/openziti/zrok2 | zrok2 container image |
| ZROK2_TAG | latest | zrok2 image tag; set to a specific version for reproducible deployments |
Step 3: Start the stack
Start all services in detached mode:
docker compose up -d
On first start, zrok2-init bootstraps the OpenZiti overlay and zrok2 stack before the main services begin. This takes
1–2 minutes. Monitor progress:
docker compose logs -f zrok2-init
Once zrok2-init exits successfully, the controller and frontend start automatically.
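If you drive this bootstrap from a script (CI or provisioning), you may prefer to block until the controller answers rather than tailing logs. A minimal polling sketch; wait_for_url is a hypothetical helper, and the endpoint and attempt count in the usage comment are assumptions:

```shell
# Poll a URL until it returns HTTP success, or give up after N attempts
# (spaced 2 seconds apart). Returns nonzero if the attempts run out.
wait_for_url() {
  url=$1
  attempts=${2:-60}
  i=0
  until curl -sf -o /dev/null "$url"; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      echo "gave up waiting for ${url}" >&2
      return 1
    fi
    sleep 2
  done
}

# Example (assumed endpoint): wait_for_url http://127.0.0.1:18080/api/v2/versions 90
```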
Step 4: Verify the stack
Check that all services are running:
docker compose ps
The zrok2 controller API should respond:
curl -sf -H 'Accept: application/zrok.v1+json' \
http://127.0.0.1:18080/api/v2/versions
Step 5: Create your first account
Create an account on your zrok2 instance:
docker compose exec zrok2-controller \
zrok2 admin create account you@example.com yourpassword
The command prints an enable token. Save it—you'll need it to enable your environment.
Step 6: Enable a client environment
On your workstation (not the server), install the zrok2 CLI and point it at your instance:
export ZROK2_API_ENDPOINT=http://your-server:18080
zrok2 enable <token>
Replace <token> with the enable token from the previous step.
Step 7: Verify named shares work
After enabling your environment, verify that the dynamic frontend serves named shares:
1. Create a named share (runs in the foreground; use a separate terminal):

   zrok2 create name mytest
   zrok2 share public http://127.0.0.1:8080 --name-selection public:mytest

2. From another terminal, verify the frontend routes it:

   curl -sf http://mytest.share.example.com:8080/
Make the frontend publicly accessible
By default, the zrok2 controller and frontend ports bind to 127.0.0.1 (localhost only) for safety. To make the
frontend reachable from the internet without TLS, set ZROK2_INSECURE_INTERFACE in your .env:
ZROK2_INSECURE_INTERFACE=0.0.0.0
This publishes ports 18080 (controller API) and 8080 (frontend) insecurely on all interfaces. For production, use the Caddy TLS overlay instead.
Alternatively, publish only the frontend port by overriding the port mapping in a compose.override.yml:
services:
zrok2-frontend:
ports:
- "0.0.0.0:8080:8080"
The OpenZiti router data-plane port (3022) already binds to 0.0.0.0 because SDK clients must reach it from outside the
Docker network and OpenZiti uses TLS for security.
Optional: Enable TLS with Caddy
For production deployments, enable TLS using the Caddy overlay. Caddy acquires a wildcard certificate via DNS challenge.
1. Set the Caddy variables in .env:

   CADDY_DNS_PLUGIN=cloudflare
   CADDY_DNS_PLUGIN_TOKEN=your-api-token

2. Start with the Caddy overlay:

   COMPOSE_FILE=compose.yml:compose.caddy.yml docker compose up -d
Caddy builds itself with the DNS plugin on first start (takes a few minutes), then handles TLS termination for all services:
- https://zrok2.share.example.com → zrok2 controller API
- https://*.share.example.com → zrok2 public frontend
- https://ziti.share.example.com → OpenZiti management API
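The overlay ships its own Caddyfile, but conceptually the routing looks like the sketch below (hostnames follow the ZROK2_DNS_ZONE example; the upstream service names and ports are assumptions based on the defaults in this guide):

```
# Sketch only — the real Caddyfile is provided by compose.caddy.yml.
zrok2.share.example.com {
	reverse_proxy zrok2-controller:18080
}

*.share.example.com {
	tls {
		dns cloudflare {env.CADDY_DNS_PLUGIN_TOKEN}
	}
	reverse_proxy zrok2-frontend:8080
}
```

Note that the OpenZiti control plane on port 1280 is not proxied this way; as described in the prerequisites, it uses mTLS and must stay directly exposed.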
With TLS enabled, update your client:
export ZROK2_API_ENDPOINT=https://zrok2.share.example.com
Optional: Enable metrics pipeline
To collect usage metrics, enable the metrics profile:
docker compose --profile metrics up -d
This adds InfluxDB and a metrics bridge service. You also need to configure the OpenZiti controller to emit fabric.usage events. See Deploy zrok on Linux for the configuration reference.
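For orientation, the event configuration in the OpenZiti controller config file generally takes the shape below. This is a sketch: the subscription version and file path are assumptions, so treat the Linux guide (and your ziti version's documentation) as authoritative.

```yaml
# Fragment of the OpenZiti controller config (sketch; verify against your version):
events:
  jsonLogger:
    subscriptions:
      - type: fabric.usage
        version: 3
    handler:
      type: file
      format: json
      path: /var/log/ziti/fabric-usage.json
```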
Verify InfluxDB has data
After creating a share and sending some traffic through it, verify metrics arrived in InfluxDB:
docker compose exec influxdb influx query \
'from(bucket: "zrok2") |> range(start: -5m) |> count()' \
--org zrok2 --token "${ZROK2_INFLUX_TOKEN}" --raw
A successful result contains CSV rows with count values. If no data appears, check the metrics bridge logs and RabbitMQ:
docker compose logs zrok2-metrics-bridge --tail=50
docker compose exec rabbitmq rabbitmqctl list_queues
Optional: Build from source
For development or CI, you can build the zrok2 image from source instead of pulling a published image. This uses the
compose.build.yml overlay, which adds a multi-stage Docker build (Go compilation + UI assets) to each zrok2 service.
COMPOSE_FILE=compose.yml:compose.build.yml docker compose up -d --build --wait
The build context is the repository root, so run this from a cloned repository:
git clone https://github.com/openziti/zrok.git
cd zrok/docker/compose/zrok2-instance
cp .env.example .env
# edit .env with your values...
COMPOSE_FILE=compose.yml:compose.build.yml docker compose up -d --build --wait
The first build takes several minutes (Go module download + compilation). The build overlay can be combined with other overlays:
COMPOSE_FILE=compose.yml:compose.build.yml:compose.caddy.yml \
docker compose up -d --build --wait
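Rather than repeating COMPOSE_FILE on every command, recent Compose versions also read it from the project's .env file, so you can persist the overlay selection once (a sketch; this appends to .env in the current directory):

```shell
# Record the overlay list so plain `docker compose` commands pick it up
# from .env automatically.
echo 'COMPOSE_FILE=compose.yml:compose.build.yml:compose.caddy.yml' >> .env

# Confirm the entry landed.
grep COMPOSE_FILE .env
```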
Troubleshooting
Bootstrap fails with "connection refused"
The zrok2-init service waits for the OpenZiti controller health check. If the controller fails to start, check its
logs:
docker compose logs ziti-controller
Common causes: incorrect ZITI_PWD, port conflicts, DNS resolution issues.
Frontend returns 502
The frontend needs the public identity created during bootstrap. Verify the init completed:
docker compose logs zrok2-init
If it failed, restart it:
docker compose up -d zrok2-init
Reset everything
To start fresh, remove all containers, volumes, and orphans:
docker compose --profile metrics down -v --remove-orphans
Architecture
For details on what each service does, how the dynamic frontend's AMQP-based mapping updates work, and the full manual setup procedure, see the Deploy zrok on Linux guide.
┌────────────────────────────────────────────────────────────┐
│                   Docker Compose Network                   │
├────────────────────────────────────────────────────────────┤
│                                                            │
│  ziti-controller ──── OpenZiti control plane (PKI, etc.)   │
│  ziti-router ──────── OpenZiti data plane (SDK traffic)    │
│  postgresql ───────── zrok2 database                       │
│  rabbitmq ─────────── AMQP for frontend mappings           │
│  zrok2-init ───────── one-shot bootstrap                   │
│  zrok2-controller ─── zrok2 API and admin                  │
│  zrok2-frontend ───── public share proxy                   │
│                                                            │
│  Optional:                                                 │
│  caddy ────────────── TLS termination                      │
│  influxdb ─────────── metrics storage                      │
│  zrok2-metrics-bridge metrics pipeline                     │
└────────────────────────────────────────────────────────────┘