Two-component NATS workloads

Many real workloads split into two pieces: a thin, fresh-per-request HTTP component that handles the inbound request, and a heavier worker component that does the actual work and replies asynchronously. The two pieces communicate over NATS using the wasmcloud:messaging interfaces: the gateway imports wasmcloud:messaging/consumer@0.2.0, and the worker exports wasmcloud:messaging/handler@0.2.0.

This page walks through the pattern end-to-end against a fresh kind cluster, using the cosmonic-labs/dark-vessels demo as a worked example. The same shape applies to any HTTP-fronted-worker decomposition.

The shape

Two-component NATS workload diagram

  • The HTTPTrigger runs the gateway component behind the ingress. On each inbound request, the gateway publishes a NATS request on a known subject via wasmcloud:messaging/consumer.request and waits for the reply.
  • The WorkloadDeployment declares the same subject as a subscription. The host subscribes to it on the worker's behalf and dispatches inbound messages to the worker's wasmcloud:messaging/handler.handle_message export.
  • The worker replies by reading msg.reply_to and calling wasmcloud:messaging/consumer.publish. NATS routes the reply back to the gateway's consumer.request call.
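The three bullets above can be sketched in-process. This is an illustrative simulation only: plain Python queues stand in for NATS subjects, and `request`, `publish`, and the inbox naming mimic (but are not) the real wasmcloud:messaging host calls.

```python
# In-process sketch of the request/reply shape. Dicts of queues stand in for
# NATS subjects; all names here are illustrative, not the real host API.
import queue
import threading
import uuid

subjects = {}  # subject name -> Queue, our stand-in for NATS


def publish(subject, body, reply_to=None):
    """Deliver a message on a subject (consumer.publish stand-in)."""
    subjects.setdefault(subject, queue.Queue()).put(
        {"body": body, "reply_to": reply_to}
    )


def request(subject, body, timeout=5.0):
    """Gateway side: publish with a unique inbox subject, wait for the reply."""
    inbox = f"_INBOX.{uuid.uuid4().hex}"
    subjects[inbox] = queue.Queue()
    publish(subject, body, reply_to=inbox)
    return subjects[inbox].get(timeout=timeout)["body"]


def worker():
    """Worker side: receive on the subscribed subject, reply via reply_to."""
    msg = subjects.setdefault("tasks.sar-processor", queue.Queue()).get(timeout=5.0)
    publish(msg["reply_to"], b"detected: 3 vessels")


threading.Thread(target=worker, daemon=True).start()
reply = request("tasks.sar-processor", b"sar frame")
print(reply.decode())
```

The per-request inbox subject is the key mechanism: it is what lets the reply find its way back to the exact `request` call that is waiting, with no shared state between gateway and worker.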

Example: dark-vessels

cosmonic-labs/dark-vessels is a vessel-detection demo that uses this pattern. The api-gateway component serves the UI and forwards detection requests over NATS to sar-processor, which runs the CFAR algorithm on a synthetic SAR image and replies with detected vessels.

The full deployment manifest is at deploy/control/httptrigger.yaml. It declares both pieces in a single file:

apiVersion: control.cosmonic.io/v1alpha1
kind: HTTPTrigger
metadata:
  name: dv-apigateway
  namespace: cosmonic-system
spec:
  ingress:
    host: darkvessels.localhost.cosmonic.sh
    paths:
      - pathType: Prefix
        path: /
  timeout: 300s
  replicas: 1
  template:
    spec:
      hostInterfaces:
        - namespace: wasi
          package: logging
          version: 0.1.0-draft
          interfaces:
            - logging
        - namespace: wasmcloud
          package: messaging
          version: '0.2.0'
          interfaces:
            - consumer
            - types
      components:
        - name: api-gateway
          image: ghcr.io/cosmonic-labs/dark-vessels/api-gateway:0.1.0
---
apiVersion: runtime.wasmcloud.dev/v1alpha1
kind: WorkloadDeployment
metadata:
  name: dv-sarproc
  namespace: cosmonic-system
spec:
  replicas: 1
  template:
    spec:
      hostInterfaces:
        - namespace: wasi
          package: logging
          version: 0.1.0-draft
          interfaces:
            - logging
        - namespace: wasi
          package: webgpu
          version: 0.0.1
          interfaces:
            - webgpu
        - namespace: wasmcloud
          package: messaging
          version: '0.2.0'
          interfaces:
            - handler
            - consumer
            - types
          config:
            subscriptions: tasks.sar-processor
      components:
        - name: sar-processor
          image: ghcr.io/cosmonic-labs/dark-vessels/sar-processor:0.1.0
          poolSize: 1

A few field-level details worth pointing out:

  • The HTTPTrigger's consumer + types cover the publish-and-wait side. The WorkloadDeployment's handler + consumer + types cover both receiving and replying.
  • config.subscriptions on the worker is the NATS subject the host subscribes to on the component's behalf. Inbound messages on that subject invoke the component's handle_message export.
  • spec.timeout: 300s on the HTTPTrigger bounds Envoy's wait time per inbound HTTP request. The bound on the messaging request itself is the hostgroup chart's nexus.requestTimeoutSeconds value (also 300s by default in chart 0.4.1).
  • wasi:webgpu/webgpu@0.0.1 on the worker requires --set gpu=true on the hostgroup install. Without it, the worker fails placement with WORKLOAD_STATE_ERROR because the host does not advertise the WebGPU interface (see hostInterfaces is a scheduling filter).

Run it locally

Create a kind cluster that forwards host ports 80 and 443 to Traefik's NodePorts:

cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080
        hostPort: 80
        protocol: TCP
      - containerPort: 30443
        hostPort: 443
        protocol: TCP
EOF

Install Cosmonic Control with an Ingress pre-created for the demo's host:

helm install cosmonic-control oci://ghcr.io/cosmonic/cosmonic-control \
  --version 0.4.1 \
  --namespace cosmonic-system --create-namespace \
  --set 'ingress.hosts[0].host=darkvessels.localhost.cosmonic.sh'

Install the hostgroup with WebGPU enabled (required for sar-processor):

helm install hostgroup oci://ghcr.io/cosmonic/cosmonic-control-hostgroup \
  --version 0.4.1 \
  --namespace cosmonic-system \
  --set gpu=true

Apply the manifest from upstream:

kubectl apply -f https://raw.githubusercontent.com/cosmonic-labs/dark-vessels/main/deploy/control/httptrigger.yaml

Wait for both workloads to reach Ready=True:

kubectl get workloaddeployments -n cosmonic-system

Then exercise the round-trip. *.localhost.cosmonic.sh resolves to 127.0.0.1, so no /etc/hosts changes are required:

curl -sS -X POST -H 'Content-Type: application/json' \
  --data '{"region":"singapore","sar_width":512,"sar_height":512,"num_targets":0,"seed":42,"density":0.5,"force_cpu":true}' \
  http://darkvessels.localhost.cosmonic.sh/api/detect

The gateway publishes the request on tasks.sar-processor. The worker subscribes to that subject, runs the detection pipeline, and publishes a reply on msg.reply_to. The gateway's consumer.request call returns with the reply, and the gateway forwards the JSON back to curl.
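The worker's half of that round trip reduces to a few lines. The sketch below is hypothetical: a stub bus object stands in for the host's messaging interface, the message is a plain dict, and the field and function names are illustrative rather than the generated component bindings.

```python
# Hypothetical sketch of the worker's handler. StubBus stands in for the
# host's wasmcloud:messaging/consumer interface; names are illustrative.
import json


class StubBus:
    """Records publishes so the reply path can be inspected."""

    def __init__(self):
        self.published = []  # (subject, body) pairs

    def publish(self, subject, body):
        self.published.append((subject, body))


def handle_message(bus, msg):
    """Worker export: do the work, then reply on msg['reply_to'] if set."""
    result = {"vessels": [], "region": json.loads(msg["body"]).get("region")}
    if msg.get("reply_to"):
        bus.publish(msg["reply_to"], json.dumps(result).encode())


bus = StubBus()
handle_message(bus, {
    "subject": "tasks.sar-processor",
    "reply_to": "_INBOX.abc123",
    "body": json.dumps({"region": "singapore"}).encode(),
})
print(bus.published[0][0])  # → _INBOX.abc123
```

Note the `reply_to` guard: a worker should tolerate fire-and-forget publishes on its subject, where no reply subject is present.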

Reproducing the pattern in your own application

The dark-vessels manifest is a template. To adapt it to your own two-component application:

  1. Pick a NATS subject for the request channel. Any subject the host can subscribe to works; namespace it under tasks.<your-app> to avoid collisions.
  2. On the gateway side (HTTPTrigger), declare wasmcloud:messaging/{consumer,types}@0.2.0 under hostInterfaces. The component imports wasmcloud:messaging/consumer and calls consumer.request(subject, body, timeout).
  3. On the worker side (WorkloadDeployment), declare wasmcloud:messaging/{handler,consumer,types}@0.2.0 under hostInterfaces, with config.subscriptions set to the same subject as step 1. The component exports wasmcloud:messaging/handler.handle_message. To reply, read msg.reply_to and call consumer.publish.
  4. If the worker uses additional host capabilities (wasi:webgpu, wasi:blobstore, wasi:filesystem, etc.), declare them under hostInterfaces too and ensure the host advertises them — otherwise the worker fails placement.
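Boiled down, steps 1–3 amount to a skeleton like the following. Resource names, images, and the subject are placeholders; trim or extend hostInterfaces (and metadata such as namespace, ingress, and timeouts) to match your components, using the dark-vessels manifest above as the fuller reference.

```yaml
apiVersion: control.cosmonic.io/v1alpha1
kind: HTTPTrigger
metadata:
  name: my-gateway
spec:
  template:
    spec:
      hostInterfaces:
        - namespace: wasmcloud
          package: messaging
          version: '0.2.0'
          interfaces:
            - consumer
            - types
      components:
        - name: gateway
          image: ghcr.io/example/gateway:0.1.0
---
apiVersion: runtime.wasmcloud.dev/v1alpha1
kind: WorkloadDeployment
metadata:
  name: my-worker
spec:
  template:
    spec:
      hostInterfaces:
        - namespace: wasmcloud
          package: messaging
          version: '0.2.0'
          interfaces:
            - handler
            - consumer
            - types
          config:
            # Must match the subject the gateway passes to consumer.request
            subscriptions: tasks.my-app
      components:
        - name: worker
          image: ghcr.io/example/worker:0.1.0
```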

For a lighter-weight reference using the same pattern without WebGPU, see cosmonic-labs/geoint-playback. For a service-plus-component variant that pairs an HTTP gateway with a long-running in-memory service, see cosmonic-labs/ocelaudit and the Service-plus-component workloads section.