Strategies & Mesh
For multi-venue trading — cross-exchange arbitrage, spread capture, portfolio rebalancing — Sequence uses a mesh model. You write a single Algo type and deploy it as independent instances to multiple venue edges. Instances communicate via labeled messages routed by the Central Coordinator.
Running a single-venue algo? Use the same Algo trait — no extra traits, no extra macros. Single-venue is just "mesh with one node." This page is about the multi-node case.
Why mesh (and not a sharded strategy/executor design)?
A naive multi-venue design puts one "strategy" brain on a central node and relays orders out to edges. Three problems:
- Extra hop on the hot path. Your brain sees a signal, sends an order down to an edge, waits. That's a network round-trip for every action, every venue.
- Monolithic brain. One crash and the whole fleet stops trading — the brain is a single point of failure.
- No local autonomy. The edge can't re-price on its own book update without re-consulting the brain.
Mesh flips this: every edge runs the full Algo locally against its own book. Cross-venue coordination happens through small, labeled messages, not order relays.
```
┌─── Algo instance on Kraken edge ───┐
│ label = "maker"                    │
│ on_book → local Kraken book        │
└──────┬─────────────────────┬───────┘
       │ send("hedger", …)   │ on_message from "hedger"
       ▼                     ▲
┌──────┴─────────────────────┴───────┐
│ Algo instance on Coinbase edge     │
│ label = "hedger"                   │
│ on_book → local Coinbase book      │
└────────────────────────────────────┘
```
Each edge owns its own order lifecycle. No central brain, no order relay.
The Algo trait (mesh view)
The trait is the same one documented in the SDK overview — what changes for mesh is which callbacks you implement:
| Callback | When it fires | Mesh use |
|---|---|---|
| on_book | Local venue book updates | Act on your edge's liquidity |
| on_nbbo | Cross-venue NBBO changes (CC-aggregated) | Detect cross-market signals |
| on_message | Another instance sent you a message | React to a sibling's intent/fill/hint |
| on_fill / on_reject | Your orders resolved | Update state, maybe tell a sibling |
| on_heartbeat | 1 Hz, optional | Periodic rebalance / housekeeping |
| on_shutdown | Instance is being torn down | Cancel your open orders |
The runtime probes your WASM for algo_on_nbbo and algo_on_message exports. If they're present, it wires up cross-venue data and the mesh inbox. If not, you just run as a single-venue algo — same binary, same export_algo!.
Sending messages
```rust
use algo_sdk::*;

pub struct Maker { /* ... */ }

impl Algo for Maker {
    fn on_fill(&mut self, fill: &Fill, _state: &AlgoState) {
        // Tell the hedger we just got filled so it can offset the exposure.
        let payload = bytemuck::bytes_of(fill);
        messaging::send("hedger", payload);
    }

    fn on_book(
        &mut self,
        book: &L2Book,
        state: &AlgoState,
        _features: &OnlineFeatures,
        actions: &mut Actions,
    ) {
        // Quote locally on whichever venue this instance is deployed to.
    }
}

export_algo!(Maker { /* init */ });
```

Constraints:
- Payload ≤ 256 bytes — enforced by the host import. Bigger and the call returns an error.
- Max 16 outbox entries per callback — same limit as action buffers. The outbox is drained after the callback returns.
- Label must match Sequence.toml — if you send("hedger", …) and no instance is labeled hedger, the message is dropped.
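The payload and outbox limits can be checked in the guest before calling the host import, so a violation fails fast with a clear error instead of a generic host rejection. A minimal sketch — the `Outbox` type and its `send` method are hypothetical illustrations, not part of the SDK:

```rust
/// Limits from the constraints above (host-enforced).
const MAX_PAYLOAD_BYTES: usize = 256;
const MAX_OUTBOX_ENTRIES: usize = 16;

/// Hypothetical guest-side outbox mirror: validate a send before handing
/// it to the host import.
struct Outbox {
    entries: Vec<(String, Vec<u8>)>,
}

impl Outbox {
    fn new() -> Self {
        Outbox { entries: Vec::new() }
    }

    fn send(&mut self, label: &str, payload: &[u8]) -> Result<(), String> {
        if payload.len() > MAX_PAYLOAD_BYTES {
            return Err(format!(
                "payload is {} bytes, limit is {}",
                payload.len(),
                MAX_PAYLOAD_BYTES
            ));
        }
        if self.entries.len() >= MAX_OUTBOX_ENTRIES {
            return Err(format!(
                "outbox full: {} entries max per callback",
                MAX_OUTBOX_ENTRIES
            ));
        }
        self.entries.push((label.to_string(), payload.to_vec()));
        Ok(())
    }
}
```

The entries would be handed to `messaging::send` when the callback returns, matching the drain-after-callback semantics described above.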
Receiving messages
```rust
impl Algo for Hedger {
    fn on_message(
        &mut self,
        from: &str,
        payload: &[u8],
        state: &AlgoState,
        actions: &mut Actions,
    ) {
        if from == "maker" && payload.len() == std::mem::size_of::<Fill>() {
            let fill: &Fill = bytemuck::from_bytes(payload);
            // Offset maker's fill on our local book.
            actions.sell(state.next_id(), fill.qty_1e8, /* mid - offset */);
        }
    }

    fn on_book(
        &mut self,
        _book: &L2Book,
        _state: &AlgoState,
        _features: &OnlineFeatures,
        _actions: &mut Actions,
    ) {
        // No-op — this instance only acts when nudged by the maker.
    }
}

export_algo!(Hedger { /* init */ });
```

Deployment topology
Mesh topology is declared in Sequence.toml. Every instance gets a venue and a label. Siblings reference each other by label.
```toml
[deploy.maker]
venue = "kraken"
label = "maker"
symbols = ["BTC-USD"]
bundle = "target/wasm32-unknown-unknown/release/my_strategy.wasm"

[deploy.hedger]
venue = "coinbase"
label = "hedger"
symbols = ["BTC-USD"]
bundle = "target/wasm32-unknown-unknown/release/my_strategy.wasm"
```

Deploy both at once:
```bash
sequence deploy
```

The CLI reads Sequence.toml, registers the bundle (once), and creates one deployment per [deploy.*] block. Each instance runs on its target venue's edge.
Latency model
| Hop | Typical latency |
|---|---|
| Local on_book → local actions.buy | Microseconds — never crosses the network |
| messaging::send("peer", …) same region | ~1–2 ms (edge → local CC → peer edge) |
| messaging::send("peer", …) cross-region | ~50–100 ms (through the peer link; see Architecture) |
The cross-region hop is why message-based coordination beats order relay: one 256-byte ping is a lot cheaper than shipping an order through the fleet, and the receiving edge can decide locally how to act on it (or not).
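A cross-venue hint only needs a few fields to fit comfortably under the 256-byte limit. A sketch of a compact payload layout — the struct and its field names are hypothetical; the SDK leaves the wire format up to you, and `#[repr(C)]` gives the stable layout that byte-casting (e.g. via bytemuck, as in the examples above) relies on:

```rust
use core::mem::size_of;

/// Hypothetical wire format for a cross-venue hint.
#[repr(C)]
#[derive(Clone, Copy)]
struct HedgeHint {
    seq: u64,      // sender-side sequence number (useful for dedup)
    qty_1e8: u64,  // quantity, fixed-point 1e8
    px_1e8: i64,   // reference price, fixed-point 1e8
    side: u8,      // 0 = buy, 1 = sell
    _pad: [u8; 7], // explicit padding keeps the layout unambiguous
}

// Well under the 256-byte payload limit: 8 + 8 + 8 + 1 + 7 = 32 bytes.
const _: () = assert!(size_of::<HedgeHint>() <= 256);
```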
Failure handling
| Failure | What happens |
|---|---|
| One instance crashes | The runtime terminates that instance. Siblings keep running. Your on_message handler should tolerate missing responses. |
| Edge disconnects from CC | The edge keeps running locally but can't send/receive mesh messages. on_book still fires on the local feed. Reconnection restores the mesh inbox. |
| CC restart | Instances keep trading on their local edges. Mesh routing re-establishes when the CC is back. Use on_heartbeat + timeouts if you need liveness guarantees. |
| Message drop | The transport is best-effort inside the CC. Design on_message to be idempotent — include your own sequence number in the payload if you need dedup. |
Each instance is independent. There is no central brain whose crash halts trading.
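The dedup and liveness patterns from the table can live in one small piece of instance state. A sketch under the assumptions above (best-effort delivery, sender-stamped sequence numbers, 1 Hz heartbeat) — `PeerGuard` and its methods are illustrative, not SDK types:

```rust
use std::collections::HashMap;

/// Hypothetical per-peer guard: sequence-number dedup for best-effort
/// delivery, plus heartbeat-based liveness tracking.
#[derive(Default)]
struct PeerGuard {
    last_seq: HashMap<String, u64>,
    ticks_since_msg: HashMap<String, u32>,
}

impl PeerGuard {
    /// Call from on_message. Returns true if (from, seq) is new and
    /// should be processed; duplicates and stale replays return false.
    fn accept(&mut self, from: &str, seq: u64) -> bool {
        self.ticks_since_msg.insert(from.to_string(), 0);
        match self.last_seq.get(from) {
            Some(&last) if seq <= last => false, // duplicate or stale
            _ => {
                self.last_seq.insert(from.to_string(), seq);
                true
            }
        }
    }

    /// Call from on_heartbeat (1 Hz). Returns peers that have been
    /// silent for more than `timeout_ticks` heartbeats.
    fn tick(&mut self, timeout_ticks: u32) -> Vec<String> {
        let mut silent = Vec::new();
        for (peer, ticks) in self.ticks_since_msg.iter_mut() {
            *ticks += 1;
            if *ticks > timeout_ticks {
                silent.push(peer.clone());
            }
        }
        silent
    }
}
```

When `tick` reports a silent peer, the instance can fall back to a standalone policy (widen quotes, flatten exposure) instead of waiting on messages that may never arrive.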
When mesh is the wrong answer
- Strict atomic cross-venue fills. Mesh gives you coordination, not atomicity. If your strategy breaks when one leg fills and the other doesn't, use the server-side execution graph primitive instead — it has built-in risk reservations and typed triggers between legs.
- You need sub-ms cross-venue decisions. 50–100 ms cross-region latency dominates anything else. Mesh is for strategies where the coordination hop is cheap relative to the signal timescale.
- You want one place to reason about state. If your model is "one global position, many venues," the execution-graph API keeps all the state server-side and is simpler to monitor.
Summary
| You want… | Use |
|---|---|
| Single-venue algo (make markets on Kraken) | Algo + export_algo!, deploy once |
| Multi-venue coordinated algo (market-make + hedge) | Same Algo trait + on_message, deploy N times with labels |
| Atomic multi-leg order with risk limits | Execution graph (server-side, no WASM) |
One trait. One macro. How you deploy it decides whether it's a single-venue algo or a mesh.