Production Installation
Cosmonic Control separates operator and developer concerns: Wasm component developers use their own tooling to build and publish Wasm components, while operators use standard Kubernetes-native pipelines and tooling to deploy and manage them.
This page covers installing and configuring Cosmonic Control on a production Kubernetes cluster. See also:
- Ingress and Workloads — configuring Envoy ingress and deploying Wasm workloads
- Air-Gapped Installation — mirroring images to a private registry
- Upgrading — upgrading Cosmonic Control to a new version
- Observability — the built-in metrics, logs, and traces stack
Prerequisites
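At minimum you need a running Kubernetes cluster, `kubectl` configured against it, and Helm with OCI registry support (the chart below is distributed as an OCI artifact, which requires Helm 3.8 or later). A quick local check:

```shell
# OCI chart support requires Helm >= 3.8
helm version --short
kubectl version --client
```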
Installing Cosmonic Control
Cosmonic Control is distributed as an OCI Helm chart at oci://ghcr.io/cosmonic/cosmonic-control. The chart deploys the following components into the cosmonic-system namespace:
| Component | Role |
|---|---|
| operator | Runtime operator — reconciles CRDs and manages wasmCloud workloads |
| nexus | NATS message bus — internal communication backbone |
| envoy | HTTP ingress proxy — routes external traffic to Wasm workloads |
| opentelemetry-collector | Receives OTLP telemetry from all components |
| prometheus | Metrics store |
| loki | Log store |
| tempo | Trace store |
| perses | Observability dashboard UI |
Cloud clusters (EKS, GKE, AKS)
For cloud clusters, set envoy.service.type=LoadBalancer. Most providers will provision a load balancer automatically. Use envoy.service.annotations to control cloud-specific load balancer behavior.
AWS (Network Load Balancer):
```shell
helm install cosmonic-control oci://ghcr.io/cosmonic/cosmonic-control \
  --version 0.3.0 \
  --namespace cosmonic-system \
  --create-namespace \
  --set envoy.service.type=LoadBalancer \
  --set-json 'envoy.service.annotations={"service.beta.kubernetes.io/aws-load-balancer-type":"nlb","service.beta.kubernetes.io/aws-load-balancer-scheme":"internet-facing"}'
```

GKE / AKS (standard cloud load balancer):
```shell
helm install cosmonic-control oci://ghcr.io/cosmonic/cosmonic-control \
  --version 0.3.0 \
  --namespace cosmonic-system \
  --create-namespace \
  --set envoy.service.type=LoadBalancer
```

On-premises and bare-metal
For clusters without a cloud load balancer controller, expose Envoy as a NodePort and route external traffic to that port yourself:
```shell
helm install cosmonic-control oci://ghcr.io/cosmonic/cosmonic-control \
  --version 0.3.0 \
  --namespace cosmonic-system \
  --create-namespace \
  --set envoy.service.type=NodePort \
  --set envoy.service.httpNodePort=30950
```

Traffic must reach port 30950 on any node in the cluster. Configure your external load balancer, firewall rules, or ingress proxy accordingly.
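As one illustrative option (the node IPs and listen port here are placeholders, not part of Cosmonic Control), an external proxy such as nginx can forward raw TCP to the node port:

```nginx
# Sketch of an nginx "stream" proxy running outside the cluster.
# Replace the server addresses with your actual node IPs.
stream {
  upstream cosmonic_envoy {
    server 10.0.0.11:30950;
    server 10.0.0.12:30950;
  }
  server {
    listen 80;
    proxy_pass cosmonic_envoy;
  }
}
```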
Using a values file
For anything beyond a simple install, use a values.yaml file to manage configuration:
```yaml
# cosmonic-control-values.yaml
envoy:
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
```

```shell
helm install cosmonic-control oci://ghcr.io/cosmonic/cosmonic-control \
  --version 0.3.0 \
  --namespace cosmonic-system \
  --create-namespace \
  -f cosmonic-control-values.yaml
```

Wait for readiness

```shell
kubectl rollout status deploy -l app.kubernetes.io/instance=cosmonic-control -n cosmonic-system
```

Installing HostGroups
A HostGroup is a group of one or more wasmCloud host pods that run Wasm workloads. Every Cosmonic Control installation needs at least one HostGroup. HostGroups connect to the nexus NATS server and register themselves as available hosts for workload scheduling.
Install the default HostGroup:
```shell
helm install hostgroup oci://ghcr.io/cosmonic/cosmonic-control-hostgroup \
  --version 0.3.0 \
  --namespace cosmonic-system
```

Wait for it to be ready:

```shell
kubectl rollout status deploy -l app.kubernetes.io/instance=hostgroup -n cosmonic-system
```

Scaling HostGroups
HostGroups are standard Kubernetes Deployments and can be scaled horizontally. Set replicaCount to run multiple host replicas:
```shell
helm install hostgroup oci://ghcr.io/cosmonic/cosmonic-control-hostgroup \
  --version 0.3.0 \
  --namespace cosmonic-system \
  --set replicaCount=3
```

Cosmonic Control automatically load-balances workloads across all available hosts in a HostGroup (round-robin). If a host crashes, its workloads are redistributed to the remaining hosts.
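An already-installed HostGroup release can also be scaled in place with `helm upgrade`; a sketch, assuming the release name `hostgroup` used earlier on this page:

```shell
# Scale the existing HostGroup release to 5 replicas,
# keeping all other previously-set values.
helm upgrade hostgroup oci://ghcr.io/cosmonic/cosmonic-control-hostgroup \
  --version 0.3.0 \
  --namespace cosmonic-system \
  --reuse-values \
  --set replicaCount=5
```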
Multiple HostGroups
Deploy multiple HostGroups with different names and host labels to create distinct scheduling zones—for example, separating general-purpose workloads from GPU workloads, or isolating workloads by team or environment:
```shell
# General-purpose HostGroup
helm install hostgroup-default oci://ghcr.io/cosmonic/cosmonic-control-hostgroup \
  --version 0.3.0 \
  --namespace cosmonic-system \
  --set hostgroup=default \
  --set replicaCount=2

# GPU-enabled HostGroup
helm install hostgroup-gpu oci://ghcr.io/cosmonic/cosmonic-control-hostgroup \
  --version 0.3.0 \
  --namespace cosmonic-system \
  --set hostgroup=gpu \
  --set gpu=true \
  --set runtimeClassName=nvidia \
  --set replicaCount=1
```

Workload placement is controlled through host labels on WorkloadDeployment manifests. See Multi-tenancy and RBAC for details.
Resource sizing
Control plane
The control plane components (operator, nexus, envoy, observability stack) are each deployed as single replicas. No resource requests or limits are set by the chart defaults; set them via --set or a values file for any component as needed:
```yaml
# Example: constrain the operator
operator:
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 512Mi
```

For production deployments requiring high availability of the control plane, contact support@cosmonic.com.
HostGroups
HostGroup pods run Wasm workloads and typically benefit from tuning. Set resources in the HostGroup values file:
```yaml
# hostgroup-values.yaml
replicaCount: 3
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: 2
    memory: 2Gi
```

As a starting point: a single HostGroup pod can comfortably run dozens of concurrent Wasm components given their small footprint (typically sub-millisecond startup, kilobyte-scale memory per component). Scale out by increasing replicaCount rather than raising individual pod resource limits.
Multi-tenancy and RBAC
Host labels for workload placement
HostGroups expose a hostLabels map that is propagated to every wasmCloud host in the group. These labels are used in WorkloadDeployment manifests to control which HostGroup runs a given workload:
```yaml
# hostgroup-team-a-values.yaml
hostgroup: team-a
hostLabels:
  team: a
  environment: production
replicaCount: 2
```

```shell
helm install hostgroup-team-a oci://ghcr.io/cosmonic/cosmonic-control-hostgroup \
  --version 0.3.0 \
  --namespace cosmonic-system \
  -f hostgroup-team-a-values.yaml
```

Reference the labels in a WorkloadDeployment using the hostSelector field to pin workloads to specific HostGroups.
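A minimal sketch of what such a reference might look like. Only the `hostSelector` field and the `runtime.wasmcloud.dev` API group are documented on this page; the API version, metadata, and the rest of the spec layout are assumptions, so consult the Custom Resources reference for the real schema:

```yaml
# Hypothetical WorkloadDeployment pinned to the team-a HostGroup.
apiVersion: runtime.wasmcloud.dev/v1alpha1   # version is an assumption
kind: WorkloadDeployment
metadata:
  name: hello-workload        # placeholder name
  namespace: team-a
spec:
  hostSelector:               # must match the HostGroup's hostLabels
    team: a
    environment: production
```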
Kubernetes RBAC
Cosmonic Control creates a ClusterRole granting the operator service account access to the following resource groups:
- `control.cosmonic.io`: `ProjectEnvironment`, `HTTPTrigger` (and their status subresources)
- `runtime.wasmcloud.dev`: `Artifact`, `Host`, `Workload`, `WorkloadReplicaSet`, `WorkloadDeployment` (and status subresources)
- `""` (core): `ConfigMap`, `Secret` (read-only), `Namespace` (read-only), `Event`
- `coordination.k8s.io`: `Lease` (for leader election)
Tenant isolation is enforced by Kubernetes namespace-scoped RBAC. Grant teams access to their own namespaces with standard Role/RoleBinding objects scoped to the control.cosmonic.io and runtime.wasmcloud.dev API groups.
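As an illustrative sketch (the namespace, role name, and subject group are placeholders), a namespace-scoped Role granting one team full access to the Cosmonic API groups might look like:

```yaml
# Hypothetical namespace-scoped access for team-a; names are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cosmonic-team-role
  namespace: team-a
rules:
  - apiGroups: ["control.cosmonic.io", "runtime.wasmcloud.dev"]
    resources: ["*"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cosmonic-team-binding
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-developers   # placeholder subject
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: cosmonic-team-role
  apiGroup: rbac.authorization.k8s.io
```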
Further reading
- Ingress and Workloads — configuring Envoy and deploying HTTP workloads
- Component Configuration — passing config, secrets, and private registry credentials to components
- Tenant RBAC — namespace-scoped RBAC for multi-team deployments
- Air-Gapped Installation — mirroring images to a private registry with ORAS
- Upgrading — `helm upgrade` procedures and version migration notes
- Observability — the built-in Prometheus, Loki, Tempo, and Perses stack
- GitOps with Argo CD — a complete GitOps workflow example
- Custom Resources — the full CRD reference