Two-component NATS workloads
Many real workloads split into two pieces: a thin, fresh-per-request HTTP component that handles the inbound request, and a heavier worker component that does the actual work and replies asynchronously. The two pieces communicate over NATS using wasmcloud:messaging/consumer and wasmcloud:messaging/handler@0.2.0.
This page walks through the pattern end-to-end against a fresh kind cluster, using the cosmonic-labs/dark-vessels demo as a worked example. The same shape applies to any HTTP-fronted-worker decomposition.
The shape
- The HTTPTrigger fronts the gateway. On each inbound request, the gateway publishes a NATS request on a known subject via wasmcloud:messaging/consumer.request and waits for a reply.
- The WorkloadDeployment declares the same subject as a subscription. The host subscribes to it on the worker's behalf and dispatches inbound messages to the worker's wasmcloud:messaging/handler.handle_message export.
- The worker replies by reading msg.reply_to and calling wasmcloud:messaging/consumer.publish. NATS routes the reply back to the gateway's consumer.request call.
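The routing logic behind this shape can be sketched as an in-memory simulation. This is illustrative only: the real components use the wasmcloud:messaging WIT bindings and a real NATS server, and the FakeBus class and handler names here are invented for the sketch. What it shows is the reply_to mechanics: the requester subscribes to a unique inbox, the worker publishes its reply there.

```python
# In-memory stand-in for NATS request/reply routing (illustrative only;
# the real components use wasmcloud:messaging bindings against NATS).
import uuid


class FakeBus:
    def __init__(self):
        self.subscriptions = {}  # subject -> handler

    def subscribe(self, subject, handler):
        self.subscriptions[subject] = handler

    def publish(self, subject, body, reply_to=None):
        msg = {"subject": subject, "body": body, "reply_to": reply_to}
        self.subscriptions[subject](msg)

    def request(self, subject, body):
        # consumer.request: publish with a unique reply inbox, wait for the reply.
        inbox = f"_INBOX.{uuid.uuid4().hex}"
        replies = []
        self.subscribe(inbox, lambda msg: replies.append(msg["body"]))
        self.publish(subject, body, reply_to=inbox)
        return replies[0]


bus = FakeBus()


# Worker side: the handle_message export reads reply_to and publishes there.
def handle_message(msg):
    result = f"processed:{msg['body']}"
    if msg["reply_to"]:
        bus.publish(msg["reply_to"], result)


# The host subscribes on the worker's behalf (config.subscriptions).
bus.subscribe("tasks.sar-processor", handle_message)

# Gateway side: publish-and-wait on the shared subject.
reply = bus.request("tasks.sar-processor", "detect")
print(reply)  # processed:detect
```

The important property, preserved from real NATS, is that the gateway never needs to know who answered: it only waits on the inbox it minted for this one request.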
Example: dark-vessels
cosmonic-labs/dark-vessels is a vessel-detection demo that uses this pattern. The api-gateway component serves the UI and forwards detection requests over NATS to sar-processor, which runs the CFAR algorithm on a synthetic SAR image and replies with detected vessels.
The full deployment manifest is at deploy/control/httptrigger.yaml. It declares both pieces in a single file:
```yaml
apiVersion: control.cosmonic.io/v1alpha1
kind: HTTPTrigger
metadata:
  name: dv-apigateway
  namespace: cosmonic-system
spec:
  ingress:
    host: darkvessels.localhost.cosmonic.sh
    paths:
      - pathType: Prefix
        path: /
  timeout: 300s
  replicas: 1
  template:
    spec:
      hostInterfaces:
        - namespace: wasi
          package: logging
          version: 0.1.0-draft
          interfaces:
            - logging
        - namespace: wasmcloud
          package: messaging
          version: '0.2.0'
          interfaces:
            - consumer
            - types
      components:
        - name: api-gateway
          image: ghcr.io/cosmonic-labs/dark-vessels/api-gateway:0.1.0
---
apiVersion: runtime.wasmcloud.dev/v1alpha1
kind: WorkloadDeployment
metadata:
  name: dv-sarproc
  namespace: cosmonic-system
spec:
  replicas: 1
  template:
    spec:
      hostInterfaces:
        - namespace: wasi
          package: logging
          version: 0.1.0-draft
          interfaces:
            - logging
        - namespace: wasi
          package: webgpu
          version: 0.0.1
          interfaces:
            - webgpu
        - namespace: wasmcloud
          package: messaging
          version: '0.2.0'
          interfaces:
            - handler
            - consumer
            - types
      config:
        subscriptions: tasks.sar-processor
      components:
        - name: sar-processor
          image: ghcr.io/cosmonic-labs/dark-vessels/sar-processor:0.1.0
          poolSize: 1
```

A few field-level details worth pointing out:
- The HTTPTrigger's consumer + types cover the publish-and-wait side. The WorkloadDeployment's handler + consumer + types cover both receiving and replying.
- config.subscriptions on the worker is the NATS subject the host subscribes to on the component's behalf. Inbound messages on that subject invoke the component's handle_message export.
- spec.timeout: 300s on the HTTPTrigger bounds Envoy's wait time per inbound HTTP request. The bound on the messaging request itself is the hostgroup chart's nexus.requestTimeoutSeconds value (also 300s by default in chart 0.4.1).
- wasi:webgpu/webgpu@0.0.1 on the worker requires --set gpu=true on the hostgroup install. Without it, the worker fails placement with WORKLOAD_STATE_ERROR because the host does not advertise the WebGPU interface (see "hostInterfaces is a scheduling filter").
Run it locally
Create a kind cluster that forwards host ports 80 and 443 to Traefik's NodePorts:
```shell
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080
        hostPort: 80
        protocol: TCP
      - containerPort: 30443
        hostPort: 443
        protocol: TCP
EOF
```

Install Cosmonic Control with an Ingress pre-created for the demo's host:
```shell
helm install cosmonic-control oci://ghcr.io/cosmonic/cosmonic-control \
  --version 0.4.1 \
  --namespace cosmonic-system --create-namespace \
  --set 'ingress.hosts[0].host=darkvessels.localhost.cosmonic.sh'
```

Install the hostgroup with WebGPU enabled (required for sar-processor):
```shell
helm install hostgroup oci://ghcr.io/cosmonic/cosmonic-control-hostgroup \
  --version 0.4.1 \
  --namespace cosmonic-system \
  --set gpu=true
```

Apply the manifest from upstream:
```shell
kubectl apply -f https://raw.githubusercontent.com/cosmonic-labs/dark-vessels/main/deploy/control/httptrigger.yaml
```

Wait for both workloads to reach Ready=True:
```shell
kubectl get workloaddeployments -n cosmonic-system
```

Then exercise the round-trip. *.localhost.cosmonic.sh resolves to 127.0.0.1, so no /etc/hosts changes are required:
```shell
curl -sS -X POST -H 'Content-Type: application/json' \
  --data '{"region":"singapore","sar_width":512,"sar_height":512,"num_targets":0,"seed":42,"density":0.5,"force_cpu":true}' \
  http://darkvessels.localhost.cosmonic.sh/api/detect
```

The gateway publishes the request on tasks.sar-processor. The worker subscribes to that subject, runs the detection pipeline, and publishes a reply on msg.reply_to. The gateway's consumer.request call returns with the reply, and the gateway forwards the JSON back to curl.
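The same call can be driven from Python instead of curl. This sketch builds the detection payload from the fields above and prepares the POST with the standard-library urllib; the actual network call is left commented out so the snippet runs without a cluster (the response shape is whatever the gateway returns, which this sketch does not assume):

```python
# Build the /api/detect request payload shown in the curl example above.
import json
import urllib.request

payload = {
    "region": "singapore",
    "sar_width": 512,
    "sar_height": 512,
    "num_targets": 0,
    "seed": 42,
    "density": 0.5,
    "force_cpu": True,
}
body = json.dumps(payload).encode()

req = urllib.request.Request(
    "http://darkvessels.localhost.cosmonic.sh/api/detect",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment against a running cluster:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())

print(body.decode())
```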
Reproducing the pattern in your own application
The dark-vessels manifest is a template. To adapt it to your own two-component application:
- Pick a NATS subject for the request channel. Any subject the host can subscribe to works; namespace it under tasks.<your-app> to avoid collisions.
- On the gateway side (HTTPTrigger), declare wasmcloud:messaging/{consumer,types}@0.2.0 under hostInterfaces. The component imports wasmcloud:messaging/consumer and calls consumer.request(subject, body, timeout).
- On the worker side (WorkloadDeployment), declare wasmcloud:messaging/{handler,consumer,types}@0.2.0 under hostInterfaces, with config.subscriptions set to the same subject as step 1. The component exports wasmcloud:messaging/handler.handle_message. To reply, read msg.reply_to and call consumer.publish.
- If the worker uses additional host capabilities (wasi:webgpu, wasi:blobstore, wasi:filesystem, etc.), declare them under hostInterfaces too and ensure the host advertises them; otherwise the worker fails placement.
For a lighter-weight reference using the same pattern without WebGPU, see cosmonic-labs/geoint-playback. For a service-plus-component variant that pairs an HTTP gateway with a long-running in-memory service, see cosmonic-labs/ocelaudit and the Service-plus-component workloads section.