Multitenancy

Cosmonic Control supports multi-tenant deployments where multiple teams share the same Kubernetes cluster and Cosmonic Control installation. Isolation is enforced at two levels: the Kubernetes namespace boundary (for resource management) and the WebAssembly sandbox (for runtime execution).

Tenancy model

Each tenant is mapped to a Kubernetes namespace. Teams create and manage their workloads — primarily HTTPTrigger resources — within their own namespace and cannot see or modify resources in other namespaces.

The Cosmonic Control operator and infrastructure components run in cosmonic-system and are managed by platform operators, not tenants.

cosmonic-system/
  operator, nexus, envoy, observability stack  ← platform team only

team-a/
  HTTPTrigger: checkout-service                ← team-a manages
  HTTPTrigger: product-api

team-b/
  HTTPTrigger: analytics-pipeline              ← team-b manages

Access to each namespace is controlled with standard Kubernetes Role and RoleBinding objects. See Tenant RBAC for a step-by-step setup guide.
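As a minimal sketch, namespace-scoped access for a team can be granted with a standard Role and RoleBinding. The RBAC mechanics below are standard Kubernetes; the `apiGroups` value and the `team-a-developers` group name are placeholders, not the exact values for a given installation (check `kubectl api-resources` for the group that serves HTTPTrigger):

```yaml
# Role granting full control of HTTPTriggers inside the team-a namespace only.
# NOTE: the apiGroup below is a placeholder; use the group reported by
# `kubectl api-resources` for your installation.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: httptrigger-editor
  namespace: team-a
rules:
  - apiGroups: ["cosmonic.io"]          # placeholder API group
    resources: ["httptriggers"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-httptrigger-editors
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-developers             # hypothetical group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: httptrigger-editor
  apiGroup: rbac.authorization.k8s.io
```

Because the Role is namespace-scoped, the same manifest (with the namespace and subject changed) can be stamped out per tenant.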

Workload isolation

Wasm components are isolated by the WebAssembly sandbox, which enforces strict capability boundaries regardless of which namespace a component is deployed to:

  • A component has no access to the host filesystem, network, or environment by default.
  • Outbound HTTP calls require explicit opt-in via allowedHosts.
  • Configuration and secrets are injected by the operator at deploy time, scoped to the component's own manifest (component configuration).
  • Components cannot communicate with other components or services unless explicitly linked via hostInterfaces.

This means a misbehaving or compromised component cannot read another tenant's secrets, call arbitrary external services, or access cluster infrastructure — the sandbox enforces the boundary at the CPU instruction level, not just at the network level.
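To make the opt-in model concrete, the fragment below shows roughly where `allowedHosts` sits in a workload manifest. Only the `allowedHosts` and `hostInterfaces` field names come from this page; the surrounding structure is an illustrative assumption, not the exact schema:

```yaml
# Illustrative excerpt only: field layout around allowedHosts is assumed.
spec:
  template:
    spec:
      components:
        - name: checkout-service
          # Outbound HTTP is denied unless the destination host is listed here.
          allowedHosts:
            - payments.example.com
          # No hostInterfaces entry is declared, so this component cannot
          # call any other component or service.
```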

HostGroups and scheduling

A HostGroup is a set of wasmCloud runtime pods. Each pod in a HostGroup can run Wasm components from multiple namespaces simultaneously — the sandbox ensures they remain isolated from each other on the same host.

By default, all workloads share a single default HostGroup. For stronger isolation between teams (e.g., compliance requirements, different resource profiles, GPU workloads), deploy separate HostGroups with distinct labels and use hostSelector in WorkloadDeployment manifests to pin workloads to specific HostGroups:

# A dedicated HostGroup for a high-compliance team
helm install hostgroup-secure oci://ghcr.io/cosmonic/cosmonic-control-hostgroup \
  --version 0.3.0 \
  --namespace cosmonic-system \
  --set hostgroup=secure \
  --set hostLabels.compliance=high \
  --set replicaCount=2

# WorkloadDeployment that targets the secure HostGroup
spec:
  template:
    spec:
      hostSelector:
        hostgroup: secure

See Installing HostGroups for full configuration options.

Architecture

At runtime, Envoy routes inbound HTTP requests to the correct Wasm component based on the Host header. Each HTTPTrigger registers its host and path rules with the XDS cache service, which pushes updated routing configuration to Envoy. Components from different namespaces can be reachable on distinct hostnames via the same Envoy instance.
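A hedged sketch of what an HTTPTrigger's routing declaration might look like follows; the `apiVersion` and the exact spec field names are assumptions, since only the host and path rules are documented here:

```yaml
# Hypothetical HTTPTrigger sketch: the exact schema may differ. The point is
# that host and path rules live in the tenant's own namespace, and Envoy
# matches the inbound Host header against them.
apiVersion: cosmonic.io/v1alpha1      # placeholder apiVersion
kind: HTTPTrigger
metadata:
  name: checkout-service
  namespace: team-a
spec:
  host: checkout.example.com          # matched against the Host header
  path: /
```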

[Diagram: inbound traffic flows from the Internet into Envoy, which routes three hostnames to components in the team-a and team-b namespaces]

Components on the same HostGroup pod remain sandbox-isolated despite sharing the runtime process — there is no shared memory or direct call path between components in different namespaces unless an explicit link is declared.