Rust SDK
Typed async Rust client with builder patterns, fixed-point wrappers, and WebSocket streaming. Every REST endpoint is covered. This page is a reference — every public method, every builder, every domain type, grouped by the product ladder.
Internal crate — not published to crates.io. Add it as a workspace path dep (if you're inside the execution-engine repo) or a git dep against the private repo:
[dependencies]
# Inside the monorepo:
sequence-sdk = { path = "crates/sdk/sequence-sdk" }
# External consumer with access to the private repo:
# sequence-sdk = { git = "ssh://git@github.com/Bai-Funds/execution-engine.git", package = "sequence-sdk" }
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
futures-util = "0.3" # needed for StreamExt on the WS iterator
use sequence_sdk::{Sequence, Side, Urgency, Policy, SequenceError};
use sequence_sdk::builders::graph::{Node, Trigger, Sizing, FailureAction};
use sequence_sdk::builders::history::Range;
let seq = Sequence::new("seq_live_…")
.base_url("https://api.sequencemkts.com"); // override for local dev
Client
Sequence::new(api_key) → Sequence
.base_url(&str) → Sequence (builder)
Construct a client. The HTTP layer is a shared reqwest::Client with a 30s timeout and Authorization: Bearer … baked into default headers. Clones are cheap — the underlying client is Arc-backed.
let seq = Sequence::new("seq_live_…"); // production
let seq = Sequence::new("seq_test_…").base_url("http://localhost:50052");
SequenceError
Unified error type returned by every fallible method.
| Variant | When |
|---|---|
| Api { status: u16, message: String } | Non-2xx from the CC. Code + message come from the error envelope |
| Network(reqwest::Error) | TCP / TLS / timeout |
| Deserialize(String) | Response JSON didn't match the expected type (SDK bug — file an issue with the body snippet) |
| Validation(String) | Client-side pre-flight failed (e.g. amend() with no fields) |
| CapabilityMissing { verb, venue, reason } | Verb is not available through the selected SDK/API surface. Batch no longer uses this gate for Polymarket. |
match seq.buy("ETH-USD", 50.0).submit().await {
Ok(ok) => …,
Err(SequenceError::Api { status: 429, .. }) => backoff().await,
Err(SequenceError::Api { status: 409, message }) => kill_switch_check(&message),
Err(SequenceError::CapabilityMissing { venue, reason, .. }) => eprintln!("{venue}: {reason}"),
Err(e) => return Err(e.into()),
}
Level 0 — Connect
async connect_venue(venue: &str, credentials: VenueCredentials) -> Result<(), _>
Store venue API credentials. POST /v1/credentials/{venue}. VenueCredentials:
pub struct VenueCredentials {
pub api_key: String,
pub api_secret: String,
pub passphrase: Option<String>, // OKX, Coinbase Advanced, Crypto.com, Bitget
pub extra_json: Option<serde_json::Value>, // Polymarket signer material, HL vault, …
}
Kalshi — RSA keypair (PKCS#1 or PKCS#8 both work):
seq.connect_venue("kalshi", VenueCredentials {
api_key: "dc9d0cd5-076a-4ea8-b406-1847279dac4b".into(),
api_secret: "-----BEGIN RSA PRIVATE KEY-----\nMIIE...\n-----END RSA PRIVATE KEY-----".into(),
passphrase: None,
extra_json: None,
}).await?;
Polymarket — EIP-712 signer goes in extra_json; CLOB L2 credentials are auto-derived on first order:
seq.connect_venue("polymarket", VenueCredentials {
api_key: String::new(), // leave empty
api_secret: String::new(), // leave empty
passphrase: None,
extra_json: Some(serde_json::json!({
"signer_private_key": "0x…", // required
"proxy_address": "0x…", // optional
"builder": "0x…", // optional — builder-code recipient
})),
}).await?;
async disconnect_venue(venue: &str) -> Result<(), _>
Delete stored credentials. Existing WS subscriptions on the edge persist until reconnect — call this before rotating keys.
async venues() -> Result<Vec<ConnectedVenue>, _>
ConnectedVenue { venue: String, connected: bool }.
async health() -> Result<bool, _>
Unauthenticated GET /v1/health/live. Returns true on any 2xx.
Level 1 — See
async quote(symbol: &str) -> Result<Quote, _>
NBBO + per-venue BBO + merged book. Returns the typed Quote:
pub struct Quote {
pub symbol: String,
pub nbbo: QuoteNbbo, // bid, ask, mid, spread_bps
pub venues: HashMap<String, QuoteVenue>, // per-venue BBO with age_ms
pub book: Option<QuoteBook>, // merged L2 (bids, asks: Vec<BookLevel>)
pub source: String, // "edge_nbbo" | "edge_rpc_snapshot" | "venue_public_api" | "unavailable"
}
The source field tells you which path served the quote:
| Value | Meaning |
|---|---|
| "edge_nbbo" | Cache hit — live WebSocket NBBO (sub-ms) |
| "edge_rpc_snapshot" | Cold-path REST through the edge (Polymarket bulk /books / Kalshi signed /orderbook, 100–300ms) — also kicks off a background WS subscribe so subsequent calls warm up |
| "venue_public_api" | Graceful-degradation fallback — edge RPC was unreachable |
| "unavailable" | No path produced book data (market empty, venue 404'd, etc.) |
The builder variant quote(symbol).nowait(true).send() skips the up-to-2s cold-subscribe wait. Research-grade only — trading code must never pass nowait=true.
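QuoteNbbo ships spread_bps precomputed; for intuition, here is a standalone sketch of the presumed mid-relative convention — our reconstruction for illustration, not the SDK's actual code:

```rust
// Presumed spread_bps convention: (ask - bid) / mid, in basis points.
// Illustrative reconstruction only — the SDK computes this server-side.
fn spread_bps(bid: f64, ask: f64) -> Option<f64> {
    let mid = (bid + ask) / 2.0;
    if mid <= 0.0 || ask < bid {
        return None; // crossed or empty book — no meaningful spread
    }
    Some((ask - bid) / mid * 10_000.0)
}
```

A $99.00 bid against a $101.00 ask yields a 200 bps spread around the $100.00 mid.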
async quotes_batch(&[&str]) -> QuotesBatchBuilder (builder)
Wraps POST /v1/quotes_batch. Server-side tokio fan-out with bounded concurrency; per-symbol errors return as partials.
let r = seq.quotes_batch(&[
"BTC-USD",
"4394372887385518214471608448209527405727552777602031099972143344338178308080",
"KXNBA-26-TOR",
])
.depth(20)
.nowait(true) // recommended for batches of 50+
.concurrency(64) // cap 256
.send()
.await?;
for entry in r.quotes {
match entry {
BatchQuoteEntry::Ok(q) => { /* q.source, q.nbbo, q.book */ }
BatchQuoteEntry::Err { symbol, error } => eprintln!("{symbol}: {error}"),
}
}
| Builder method | Default | Description |
|---|---|---|
| .depth(u16) | 10 | Book levels per side (1–100) |
| .nowait(bool) | true | Skip the 2s subscribe-wait per cold symbol — default true because serial waits are catastrophic at batch scale |
| .concurrency(usize) | 64 | Max concurrent upstream fetches (cap 256) |
| .instrument_type(&str) | auto | Uniform override: "spot", "perp", or "prediction" |
fn positions_unified(&self) -> PositionsRequest (builder)
Unified positions across fiat, crypto, perps, and event contracts. Position data comes from three reconciled sources (fills → write path, venue balance APIs → cold sync, venue portfolio APIs → cold sync), and all of it lands in the same response shape. Replaces balances(), positions(), perp_positions(), and portfolio().
use sequence_sdk::InstrumentKind;
// All active positions across every venue
let resp = seq.positions_unified().fetch().await?;
println!("NAV = ${:.2}", resp.totals.nav_usd_1e9 as f64 / 1e9);
// Cash-only (fiat + crypto stablecoins)
let cash = seq.positions_unified()
.kind(InstrumentKind::Fiat)
.kind(InstrumentKind::Crypto)
.fetch().await?;
// Or with the shorthand
let cash = seq.positions_unified().cash_only().fetch().await?;
// Event contracts on Kalshi + Polymarket
let events = seq.positions_unified()
.events_only()
.venue("kalshi")
.venue("polymarket")
.fetch().await?;
Builder methods:
| Method | Description |
|---|---|
| .venue(impl Into<String>) | Add a venue filter (repeatable) |
| .venues(impl IntoIterator<Item: Into<String>>) | Replace the venue filter |
| .kind(InstrumentKind) | Add an instrument-kind filter (repeatable) |
| .cash_only() | Shorthand for kind(Fiat) + kind(Crypto) |
| .events_only() | Shorthand for kind(EventContract) |
| .include_closed(bool) | Include terminal closed/redeemed rows. Resolved event contracts surface by default — no flag needed. |
| .fetch() | Execute → PositionsResponse { positions, totals, as_of_ns } |
Each PositionView carries qty_1e8, entry_price_usd_1e9, current_price_usd_1e9, unrealized_pnl_usd_1e9, realized_pnl_usd_1e9, fees_usd_1e9, lifecycle, settled: bool, settled_mark_usd_1e9: Option<i64>, and a typed instrument discriminant. settled flips to true the instant resolution telemetry lands — check it to render post-settlement P&L without waiting on the venue to zero out qty. See API › Positions for the full field list.
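The _1e8 / _1e9 suffixes follow the platform-wide fixed-point convention (quantities scaled by 1e8, USD values by 1e9). A minimal standalone sketch of the conversions — helper names are ours, not SDK exports:

```rust
// Fixed-point conventions used across the response types:
//   *_1e8 → quantity scaled by 1e8
//   *_1e9 → USD value scaled by 1e9
fn qty_from_1e8(raw: i64) -> f64 {
    raw as f64 / 1e8
}

fn usd_from_1e9(raw: i64) -> f64 {
    raw as f64 / 1e9
}

fn usd_to_1e9(usd: f64) -> i64 {
    (usd * 1e9).round() as i64
}
```

So a qty_1e8 of 50_000_000 is 0.5 units, and an unrealized_pnl_usd_1e9 of 1_500_000_000 is $1.50.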
The following methods still exist but are #[deprecated] and point at removed endpoints (404): balances(), positions(), perp_positions(), portfolio(). They'll be removed in the next major version — migrate to positions_unified().
async symbols() -> Result<Vec<Symbol>, _>
Symbol { symbol, base, quote, venues: Vec<String>, instrument_type: String }.
price_history(symbol: &str) -> PriceHistoryQuery (builder)
Unified history primitive across CEX, perp, DEX, Kalshi, and Polymarket. Builder pattern — call .fetch().await? to execute.
pub struct PriceHistoryQuery<'a> { … }
impl<'a> PriceHistoryQuery<'a> {
pub fn range(self, range: Range) -> Self; // default Range::OneDay
pub fn fidelity_secs(self, secs: u64) -> Self; // default 60; 0 = tick mode
pub fn paginated(self) -> Self; // Polymarket 7-day windowing
pub fn venue(self, v: impl Into<String>) -> Self; // required for long-range CEX/DEX
pub fn limit(self, n: u32) -> Self; // tick-mode cap
pub fn ticks(self, n: u32) -> Self; // shorthand: fidelity_secs(0).limit(n)
pub fn start_ts(self, s: u64) -> Self; // explicit window in unix seconds
pub fn end_ts(self, s: u64) -> Self;
pub fn underlying(self) -> Self; // include Chainlink curve on PM crypto
pub async fn fetch(self) -> Result<PriceHistory, SequenceError>;
}
pub enum Range {
OneHour, SixHours, OneDay, OneWeek, OneMonth,
ThreeMonths, SixMonths, OneYear, TwoYears, FiveYears, Max,
}
pub struct PriceHistory {
pub symbol: String,
pub source: String, // "tape" | "polymarket_rest" | "kalshi_rest" |
// "venue_proxy" | "venue_ticks" |
// "binance_aggtrades_historical" | …
pub window: Option<PriceHistoryWindow>,
pub points: Vec<PricePoint>,
pub underlying: Option<UnderlyingCurve>,
}
pub struct PricePoint {
pub ts_ns: u64,
pub price_1e9: u64,
pub qty_1e8: Option<u64>, // tick mode only
pub side: Option<String>, // tick mode: "buy" | "sell"
pub venue_id: Option<u8>, // tick mode: VenueId discriminant
}
// Year of Binance BTC-USDT 5-min bars
let btc = seq.price_history("BTC-USDT")
.range(Range::OneYear).fidelity_secs(300).venue("binance")
.fetch().await?;
// Last 500 tape prints with qty + side
let ticks = seq.price_history("BTC-USDC").ticks(500).fetch().await?;
// Polymarket 5-min lifecycle, include Chainlink underlying
let hist = seq.price_history("btc-updown-5m-1776537600")
.fidelity_secs(0).limit(50_000).underlying()
.fetch().await?;
if let Some(u) = hist.underlying {
if u.agrees_with_settlement == Some(false) { /* drop for ML */ }
}
async funding_rates() -> Result<FundingRates, _>
async funding_rates_for_venue(venue) -> Result<FundingRates, _>
async funding_rate(venue, symbol) -> Result<Option<FundingRate>, _>
All three hit /v1/market/funding-rates with different filters. Rates are normalized to rate_bps_per_hour regardless of venue cadence.
pub struct FundingRate {
pub symbol: String,
pub venue: String,
pub dex: String, // "" for CEX
pub rate_bps_per_hour: f64,
pub predicted_rate_bps_per_hour: f64,
pub mark_price_1e9: i64,
pub open_interest_1e8: i64,
pub next_settlement_ns: u64,
pub snapshot_age_ms: u64,
}
settlement(slug: &str) -> SettlementQuery (builder)
Prediction-market settlement data.
pub struct SettlementQuery<'a> { … }
impl<'a> SettlementQuery<'a> {
pub fn with_underlying(self) -> Self; // include Chainlink curve (bounded RPC)
pub async fn fetch(self) -> Result<Settlement, SequenceError>;
}
pub struct Settlement {
pub slug: String, pub question: String,
pub outcomes: Vec<String>, // native labels aligned with outcome_token_ids
pub outcome_token_ids: Vec<String>,
pub yes_token_id: String, pub no_token_id: String,
pub condition_id: String, pub neg_risk: bool,
pub status: String, // "open" | "resolving" | "resolved"
pub resolved_outcome: Option<String>, // ground truth from Polymarket (never reconstructed)
pub outcome_source: String,
pub yes_price: Option<f64>, pub no_price: Option<f64>,
pub event_start_s: u64, pub event_end_s: u64,
pub resolution_source: String,
pub underlying: Option<UnderlyingSeries>,
}
markets(venue: &str) -> MarketsQueryBuilder (builder)
Unified Kalshi + Polymarket discovery.
impl<'a> MarketsQueryBuilder<'a> {
pub fn slug(self, s: &str) -> Self;
pub fn search(self, q: &str) -> Self;
pub fn tag(self, t: &str) -> Self;
pub fn limit(self, n: u32) -> Self; // clamped [1, 1000] — server cap
pub fn offset(self, n: u32) -> Self;
pub fn active_only(self, b: bool)-> Self;
pub fn include_closed(self, b: bool) -> Self;
pub fn order_by(self, field: &str) -> Self; // default = no sort. Options:
// volume | volume_week | volume_total |
// liquidity | end_date | start_date |
// created | competitive | closed_time
pub fn cursor(self, c: &str) -> Self; // opaque cursor from prior next_cursor
pub fn expand(self, modes: &str) -> Self; // e.g. "outcomes"
pub async fn fetch(self) -> Result<MarketsResponse, SequenceError>;
/// Walk every page that matches the current filters; threads
/// `next_cursor` automatically. Forces page_size=1000 internally.
pub async fn fetch_all(self) -> Result<Vec<Market>, SequenceError>;
}
pub struct MarketsResponse {
pub markets: Vec<Market>,
pub count: usize,
pub venue: String,
pub has_more: bool, // more pages remain
pub next_cursor: Option<String>, // pass back via .cursor(c)
}
pub struct Market {
pub venue: String, pub slug: String, pub question: String,
pub description: Option<String>,
pub outcomes: Vec<String>, pub outcome_token_ids: Vec<String>,
pub yes_token_id: String, pub no_token_id: String,
pub condition_id: String, pub neg_risk: bool,
pub volume_24h: f64, pub volume_week: f64, pub volume_total: f64, pub liquidity: f64,
pub end_date: Option<String>, pub start_date: Option<String>,
pub url: String,
}
async market(slug, venue) -> Result<Option<Market>, _>
async search_markets(query, venue, limit) -> Result<MarketsResponse, _>
Convenience wrappers — .slug() and .search() + .fetch() respectively.
Pagination
Use .cursor() for explicit page-by-page walking, or .fetch_all() to walk the whole catalog in one call:
// Whole open Kalshi universe (~43k rows, ~4 s).
let all = seq.markets("kalshi").active_only(true).fetch_all().await?;
// Manual cursor — for streaming / progress reporting.
let p1 = seq.markets("kalshi").active_only(true).limit(1000).fetch().await?;
if let Some(c) = p1.next_cursor.as_deref() {
let p2 = seq.markets("kalshi").active_only(true).limit(1000)
.cursor(c).fetch().await?;
}
For "everything Kalshi/PM ever recorded," pair .active_only(true) with .include_closed(true) — the Status filters section of the API reference details the full (active, closed) matrix.
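Under the hood, fetch_all() is just a cursor-threading loop. Here is a venue-agnostic sketch with a mock page source standing in for .cursor(c).fetch() — the closure shape is ours, for illustration:

```rust
// Generic cursor walk: call the page source with the previous cursor until
// it stops returning one. `page` stands in for `.cursor(c).fetch().await`.
fn walk_all<T>(
    mut page: impl FnMut(Option<&str>) -> (Vec<T>, Option<String>),
) -> Vec<T> {
    let mut out = Vec::new();
    let mut cursor: Option<String> = None;
    loop {
        let (mut rows, next) = page(cursor.as_deref());
        out.append(&mut rows);
        match next {
            Some(c) => cursor = Some(c),
            None => return out,
        }
    }
}
```

The real fetch_all() additionally forces page_size=1000, which is why the sort-default tripwire below mattered for exhaustive walks.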
Tripwire: the SDK's previous default of order_by("volume") silently routed every Kalshi call through a 4× over-fetch + client-side rerank that broke cursor invariants — exhaustive walks capped at ~25% of the catalog. The default is now empty (no sort); pass .order_by("volume") explicitly only when you want top-by-volume and aren't paginating.
async edges() -> Result<serde_json::Value, _>
async account() -> Result<serde_json::Value, _>
async risk_limits() -> Result<serde_json::Value, _>
Free-form JSON accessors. Portfolio totals are returned on the positions_unified() response under .totals — see the builder above.
fees(ticker: &str) -> FeesQuery (builder)
Maker/taker fees per venue for a ticker. null taker/maker = "do not route here," not "free."
let resp = seq.fees("KXMLBHR-26APR292140KCATH-ATHLBUTLER4-1:YES")
.venues(&["kalshi", "polymarket"])
.fetch().await?;
for row in &resp.fees {
println!("{} taker={:?} maker={:?}", row.venue, row.taker_bps, row.maker_bps);
}
/v1/fees returns the fee components and curve formula (fee_model, components.rate_bps, formula.taker_fee_usd) so latency-sensitive callers can compute the exact realized fee locally — fetch once per ticker, cache, evaluate per fill. For a server-authoritative answer that also includes size-aware slippage + total landed cost, use preview() (/v1/orders/preview).
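The "fetch once, cache, evaluate per fill" pattern reduces to local arithmetic once the components are cached. A sketch assuming the simplest linear component — the authoritative formula comes from the endpoint's formula field, and the helper name is ours:

```rust
// Evaluate a cached linear fee component locally, per fill.
// Assumes fee_usd = notional * rate_bps / 10_000 — only the simplest case;
// the authoritative curve comes from /v1/fees' `formula` field.
fn taker_fee_usd(fill_px: f64, fill_qty: f64, rate_bps: f64) -> f64 {
    fill_px * fill_qty * rate_bps / 10_000.0
}
```

For example, 100 contracts filled at $0.55 under a 175 bps taker component costs about $0.96 in fees — no round trip needed on the hot path.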
async preview(symbol: &str, side: Side, qty: f64) -> Result<OrderPreview, _>
Pre-trade dry-run: estimated fee + slippage + total cost + historical time-to-first-fill across candidate venues, without submitting. Returns OrderPreview { candidate_venues, best_venue, estimated_fee_bps, worst_case_fee_bps, estimated_slippage_bps, estimated_total_cost_bps, historical_ttff, nbbo, source, generated_at_ns, … }.
Cold-resilient by default. On a cold ticker the handler runs the same fallback chain as quote() — dynamic-subscribe + brief WS poll → edge-routed REST snapshot → venue public-API. First call on a cold prediction-market ticker takes ~200-500ms and returns real fee + slippage + NBBO; subsequent calls hit the warm cache sub-ms. The source field reports which path served the data (edge_nbbo / edge_rpc_snapshot / venue_public_api / unavailable). Trading code can gate on source == "edge_nbbo" for live-WS guarantees.
NbboSnapshot fields (bid_px_1e9, ask_px_1e9, mid_px_1e9, spread_bps) are Option<...> — one-sided books return None on the missing side rather than leaking sentinel values.
The same cold-fallback applies to intelligence/depth, intelligence/slippage, intelligence/routing, and execution/forecast endpoints. No explicit quote() warm step is needed before calling any of these.
async kill_switch_engage(reason: Option<&str>) -> Result<KillSwitchStatus, _>
async kill_switch_clear() -> Result<KillSwitchStatus, _>
async cancel_all_on_venue(venue: &str) -> Result<serde_json::Value, _>
Risk ops. kill_switch_engage halts new submission for the caller's identity (existing open orders are NOT cancelled — pair with cancel_all_on_venue for stop-everything). kill_switch_clear is admin-only.
Level 2 — Trade
buy(symbol: &str, qty: f64) -> OrderBuilder
sell(symbol: &str, qty: f64) -> OrderBuilder
Returns an OrderBuilder. Call .submit().await? to fire it as a 1-node execution graph. Every optional field:
impl<'a> OrderBuilder<'a> {
pub fn venue(self, v: &str) -> Self; // pin to one venue (else SOR)
pub fn limit_price(self, px: f64) -> Self; // omit for market order
pub fn policy(self, p: Policy) -> Self; // default Policy::Sor
pub fn urgency(self, u: Urgency) -> Self; // Low | Medium | High
pub fn max_slippage_bps(self, bps: u16) -> Self;
pub fn horizon_ms(self, ms: u64) -> Self;
pub fn time_in_force(self, t: TimeInForce) -> Self; // Gtc | Ioc | Fok
pub fn perp(self) -> Self; // instrument_type = "perp"
pub fn sandbox(self) -> Self; // paper fill against live book
pub async fn submit(self) -> Result<SubmitGraphResponse, SequenceError>;
}
pub enum Side { Buy, Sell }
pub enum Urgency { Low, Medium, High }
pub enum Policy { Sor, IocSweep, PassiveLimit, AggressiveChase, PassiveLadder, TimeDrip }
pub enum TimeInForce { Gtc, Ioc, Fok }
pub struct SubmitGraphResponse {
pub graph_id: String,
pub status: String,
pub node_count: usize,
pub edge_count: usize,
}
seq.buy("ETH-USD", 50.0)
.urgency(Urgency::High)
.max_slippage_bps(10)
.submit().await?;
seq.sell("BTC-USD", 0.5)
.venue("coinbase")
.limit_price(75_000.0)
.time_in_force(TimeInForce::Gtc)
.submit().await?;
seq.buy("ETH-USD-PERP", 5.0)
.perp()
.venue("hyperliquid")
.submit().await?;
// Paper fill against live book — works on seq_live_*
seq.buy("SOL-USD", 100.0).sandbox().submit().await?;
async order(node_order_id: &str) -> Result<OrderResponse, _>
Fetch a single order. Unknown ids return an Api error with status 404. OrderResponse exposes both raw fixed-point fields and human-friendly float helpers:
pub struct OrderResponse {
pub node_order_id: String,
pub client_id: String,
pub client_order_id: Option<String>,
pub side: String, // "BUY" / "SELL" — UPPERCASE
pub symbol: String,
pub qty_1e8: i64, pub filled_qty_1e8: i64,
pub status: String, // "NEW" | "PENDING" | "ACCEPTED" | "PARTIAL" |
// "FILLED" | "COMPLETED" | "CANCELLED" |
// "REJECTED" | "FAILED"
pub constraints: Option<serde_json::Value>,
pub created_unix_ns: Option<u64>, pub updated_unix_ns: Option<u64>,
pub failure_reason: Option<String>,
pub tca: Option<TcaReport>,
}
impl OrderResponse {
pub fn qty(&self) -> f64; // qty_1e8 / 1e8
pub fn filled_qty(&self) -> f64;
pub fn fill_pct(&self) -> f64; // 0.0–100.0
pub fn is_terminal(&self) -> bool; // COMPLETED | CANCELLED | FAILED
}
orders() -> OrderQuery (paginated builder)
let page = seq.orders()
.symbol("ETH-USD").limit(100).offset(0)
.fetch().await?;
// OrdersResponse { orders, total, limit, offset, has_more }
async cancel(graph_or_order_id: &str) -> Result<(), _>
Accepts a graph ID or a node-order ID — it strips the suffix before calling DELETE /v1/execution_graphs/{graph_id}.
fills() -> FillQuery (paginated builder)
Same shape as orders(). Each Fill has qty() / price() float helpers.
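With the qty() / price() float helpers, post-trade stats like VWAP are a one-liner. A standalone sketch over plain (qty, price) pairs — the tuple shape is ours, for illustration:

```rust
// Volume-weighted average fill price over (qty, price) pairs, as you'd
// collect them from the Fill float helpers. None for an empty fill set.
fn vwap(fills: &[(f64, f64)]) -> Option<f64> {
    let total_qty: f64 = fills.iter().map(|&(q, _)| q).sum();
    if total_qty <= 0.0 {
        return None;
    }
    Some(fills.iter().map(|&(q, p)| q * p).sum::<f64>() / total_qty)
}
```

One unit at $100 plus three units at $104 averages to $103 — useful for sanity-checking avg_fill_price_1e9 against your own fill tape.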
async amend(order_id, new_price: Option<f64>, new_qty: Option<f64>) -> Result<AmendResult, _>
At least one of new_price/new_qty must be Some(_). Returns:
pub enum AmendMode { NativeAtomic, CancelReplace }
pub struct AmendResult {
pub order_id: String, // new id on CancelReplace; same on NativeAtomic
pub mode: AmendMode,
pub status: Option<String>,
}
Today the SDK/API graph path returns CancelReplace, so queue position is not guaranteed. Kalshi's lower-level transport supports native /portfolio/orders/{id}/amend, but CC does not yet preserve that native amend path end-to-end through this helper.
async decrease(order_id: &str, reduce_by: f64) -> Result<DecreaseResult, _>
pub enum DecreaseMode { Native, CancelReplaceShrink }
pub struct DecreaseResult { pub order_id: String, pub mode: DecreaseMode, pub status: Option<String> }
Errors if reduce_by takes remaining qty to ≤ 0 — cancel instead.
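The ≤ 0 guard can be mirrored client-side before spending the round trip. A sketch over the fixed-point quantities — the helper name is ours, not an SDK export:

```rust
// Pre-flight check for decrease(): the reduction must leave strictly
// positive remaining quantity — otherwise cancel the order instead.
fn can_decrease(qty_1e8: i64, filled_qty_1e8: i64, reduce_by_1e8: i64) -> bool {
    let remaining = qty_1e8 - filled_qty_1e8;
    reduce_by_1e8 > 0 && remaining - reduce_by_1e8 > 0
}
```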
batch(venue: &str) -> BatchBuilder
impl<'a> BatchBuilder<'a> {
pub fn emulate(self) -> Self; // compatibility no-op; batch mode stays Serial
pub fn buy (self, sym: &str, qty: f64) -> Self;
pub fn sell(self, sym: &str, qty: f64) -> Self;
pub fn buy_limit (self, sym: &str, qty: f64, px: f64) -> Self;
pub fn sell_limit(self, sym: &str, qty: f64, px: f64) -> Self;
pub fn cancel(self, order_id: &str) -> Self;
pub fn len(&self) -> usize; pub fn is_empty(&self) -> bool;
pub async fn end(self) -> Result<BatchResult, SequenceError>;
}
pub enum BatchMode { NativeAtomic, Serial }
pub enum BatchItemResult {
Submitted(SubmitGraphResponse),
Cancelled(String),
Error(String),
}
pub struct BatchResult { pub responses: Vec<BatchItemResult>, pub mode: BatchMode }
BatchBuilder currently submits one graph root per op and returns BatchMode::Serial for both Kalshi and Polymarket. Kalshi's transport supports native /portfolio/orders/batched (20-op atomic), and Polymarket's transport supports native CLOB POST /orders batch submission (15 orders, per-order results), but the SDK graph path does not preserve that native batch window end-to-end yet.
// Kalshi — ok by default
let r = seq.batch("kalshi")
.buy_limit("KXBTCZ-26DEC31-T99000", 10.0, 0.55)
.buy_limit("KXBTCZ-26DEC31-T100000", 10.0, 0.50)
.cancel("stale_order_id")
.end().await?;
assert_eq!(r.mode, BatchMode::Serial);
// Polymarket — graph-level SDK batch is serial today
let r = seq.batch("polymarket").buy("TOKEN_ID", 10.0).end().await?;
assert_eq!(r.mode, BatchMode::Serial);
async redeem(slug: &str, venue: &str) -> Result<RedeemResult, _>
pub enum RedeemMode { AutoSettled, RelayerSubmitted, Noop }
pub struct RedeemResult {
pub mode: RedeemMode,
pub venue: String,
pub slug: String,
pub condition_id: Option<String>, // Polymarket only
pub token_ids: Option<Vec<String>>, // Polymarket only
pub note: String,
}
Level 3 — Orchestrate
graph() -> GraphBuilder
Fluent builder for execution graphs. Every graph is composed of Nodes connected by triggered edges, wrapped in an optional 4-layer risk envelope.
impl<'a> GraphBuilder<'a> {
pub fn node(self, id: &str, node: Node) -> Self;
pub fn edge(self, from: &str, to: &str, trigger: Trigger) -> Self;
pub fn edge_with(self, from: &str, to: &str, trigger: Trigger,
f: impl FnOnce(EdgeConfig) -> EdgeConfig) -> Self;
pub fn risk(self, f: impl FnOnce(RiskConfig) -> RiskConfig) -> Self;
pub fn sandbox(self) -> Self;
pub fn metadata(self, m: serde_json::Value) -> Self;
pub async fn submit(self) -> Result<SubmitGraphResponse, SequenceError>;
}
Nodes with no incoming edge auto-activate as root; the rest start as wait.
Type: Node
impl Node {
pub fn buy (symbol: &str, qty: f64) -> Self;
pub fn sell(symbol: &str, qty: f64) -> Self;
// Venue + instrument
pub fn venue(self, v: &str) -> Self;
pub fn perp(self) -> Self; // instrument_type = "perp"
pub fn prediction(self) -> Self; // instrument_type = "prediction"
// Execution config
pub fn limit_price(self, px: f64) -> Self;
pub fn policy(self, p: Policy) -> Self;
pub fn urgency(self, u: Urgency) -> Self;
pub fn max_slippage_bps(self, b: u16)-> Self;
pub fn horizon_ms(self, ms: u64) -> Self;
pub fn tif(self, t: &str) -> Self; // "GTC" | "IOC" | "FOK"
pub fn reduce_only(self) -> Self; // close existing position only
pub fn derived(self) -> Self; // qty derived from inbound edge sizing
// Pacing (sub-graph expansion under the hood)
pub fn twap(self, slices: u16, interval_ms: u64) -> Self;
pub fn vwap(self, slices: u16, interval_ms: u64) -> Self;
pub fn iceberg(self, display_qty: f64) -> Self;
// Atomic overrides
pub fn placement(self, s: &str) -> Self; // "aggressive" | "passive" | "passive_offset"
pub fn escalate_after(self, ms: u64) -> Self; // cross spread after N ms
pub fn venue_fallback(self, v: &[&str]) -> Self; // priority-ordered venues
}
Type: Trigger
pub enum Trigger {
FillPct(f64), // 0.0–1.0
FirstFill,
OnFill, // every fill (streaming hedge)
FullFill,
OnAccepted, // SOR ACKed
Timeout(u64), // ms
OnCancel,
OnDone, // any terminal state
OnPrice { symbol: String, direction: String, offset_pct: f64 },
}
Type: Sizing
How the child node's quantity is derived from the parent's fill.
pub enum Sizing {
ParentFilledQty, // use parent's cumulative filled qty
IncrementalFill, // only the delta since last edge fire
LinearHedge(f64), // multiplier; -1.0 = opposite-side 1:1 hedge
ScaledNotional { multiplier: f64, cap_usd: Option<f64> },
Residual, // parent target - parent filled
Fixed(f64), // ignore parent entirely
}
Type: EdgeConfig
impl EdgeConfig {
pub fn sizing(self, s: Sizing) -> Self;
pub fn on_parent_cancel(self, action: &str) -> Self; // "cancel_all_open" | "pause_graph" | "do_nothing"
pub fn on_child_reject (self, action: &str) -> Self;
}
Type: RiskConfig — 4-layer risk model
impl RiskConfig {
// Layer 1: Admission — evaluated before any order leaves
pub fn max_notional_usd(self, usd: f64) -> Self;
pub fn max_leverage(self, lev: f64) -> Self;
pub fn required_venues(self, v: &[&str]) -> Self;
pub fn max_concurrent_graphs(self, n: u32) -> Self;
// Layer 2: Pre-trade — per-node gates
pub fn max_node_notional_usd(self, usd: f64) -> Self;
pub fn require_balance_check(self) -> Self;
pub fn max_slippage_bps(self, bps: u16) -> Self;
// Layer 3: Runtime — continuously enforced while the graph executes
pub fn max_unhedged_qty(self, qty: f64) -> Self;
pub fn max_loss_usd(self, usd: f64) -> Self;
pub fn max_drawdown_usd(self, usd: f64) -> Self;
// Layer 4: Failure — what to do on breach
pub fn on_runtime_breach(self, a: FailureAction) -> Self;
pub fn on_disconnect (self, a: FailureAction) -> Self;
}
pub enum FailureAction {
CancelAll, // serialized as "cancel_all_open"
CancelPending, // serialized as "cancel_all_open" (alias today)
Pause, // "pause_graph"
Continue, // "do_nothing"
}
Complete examples
// Spot + perp hedge with 4-layer risk
seq.graph()
.node("spot", Node::buy ("ETH-USD", 200.0).policy(Policy::Sor))
.node("hedge", Node::sell("ETH-USD-PERP", 200.0).perp().venue("hyperliquid"))
.edge("spot", "hedge", Trigger::FillPct(0.5))
.risk(|r| r
.max_notional_usd(500_000.0)
.max_unhedged_qty(5.0)
.on_runtime_breach(FailureAction::CancelAll))
.submit().await?;
// TWAP — 10 slices, 30s apart
seq.graph()
.node("twap", Node::buy("BTC-USD", 1.0).twap(10, 30_000))
.submit().await?;
// Streaming hedge — every fill triggers proportional perp sell
seq.graph()
.node("spot", Node::buy ("ETH-USD", 100.0))
.node("hedge", Node::sell("ETH-PERP", 0.0).perp().derived())
.edge_with("spot", "hedge", Trigger::OnFill,
|e| e.sizing(Sizing::LinearHedge(-1.0)))
.submit().await?;
// Bracket — entry → take-profit + stop-loss
seq.graph()
.node("entry", Node::buy ("BTC-USD", 0.1))
.node("tp", Node::sell("BTC-USD", 0.0).derived().limit_price(80_000.0))
.node("sl", Node::sell("BTC-USD", 0.0).derived())
.edge("entry", "tp", Trigger::FullFill)
.edge("entry", "sl", Trigger::OnPrice {
symbol: "BTC-USD".into(), direction: "below".into(), offset_pct: -5.0,
})
.risk(|r| r.max_loss_usd(2_000.0).on_runtime_breach(FailureAction::CancelAll))
.submit().await?;
async graph_status(graph_id: &str) -> Result<GraphStatusResponse, _>
pub struct GraphStatusResponse {
pub graph_id: String,
pub client_id: String,
pub status: String, // pending | active | partial_fill | completed |
// aborted | expired | paused
pub nodes: HashMap<String, NodeStatus>,
pub edges: HashMap<String, EdgeStatus>,
}
pub struct NodeStatus {
pub status: String, // pending | armed | executing | filled |
// partial_fill | cancelled | rejected
pub node_order_id: Option<String>,
pub target_qty_1e8: i64, pub filled_qty_1e8: i64,
pub avg_fill_price_1e9: i64, pub fill_count: u64,
}
pub struct EdgeStatus { pub status: String, pub fired: bool }
async graph_cancel(graph_id: &str) -> Result<(), _>
async graph_resume(graph_id: &str) -> Result<(), _>
async amend_order(graph_id, node_id, amendments) -> Result<serde_json::Value, _>
amend_order is the graph-node amend (PUT /v1/execution_graphs/{graph_id}/nodes/{node_id}) — updates execution parameters on a running node (urgency, max_slippage_bps, horizon_ms, qty_1e8). It's a different endpoint from the flat amend() verb above — use the flat one for resting-order price/qty amends, this one to retune live execution.
Level 4 — Automate
async deploy_algo(name, wasm_bytes, symbols) -> Result<Deployment, _>
Base64-encodes the WASM blob, posts to /v1/deployments with start_immediately=true. Returns:
pub struct Deployment {
pub deployment_id: String,
pub name: Option<String>,
pub symbols: Vec<String>,
pub size_bytes: u64,
pub status: String,
pub pushed_to: usize, // edges that accepted the push
pub capable_edges: usize, // edges that *could* host this algo
pub edges: Vec<DeploymentEdge>, // per-edge runtime state
}
pub struct DeploymentEdge {
pub edge_id: String, pub venue: String, pub status: String,
pub position_1e8: i64, pub realized_pnl_1e9: i64, pub unrealized_pnl_1e9: i64,
pub fill_count: u64, pub callback_latency_ns: u64,
}
async algo_status(deployment_id) -> Result<Deployment, _>
async algo_start (deployment_id) -> Result<(), _>
async algo_stop (deployment_id) -> Result<(), _>
async algo_undeploy(deployment_id) -> Result<(), _>
async algo_logs(deployment_id, limit: u32) -> Result<serde_json::Value, _>
async algo_mesh(deployment_id) -> Result<serde_json::Value, _>
async algo_metrics(deployment_id) -> Result<serde_json::Value, _>
algo_stop does not cancel the algo's resting orders — handle cleanup in your on_stop callback or issue per-order cancels.
Algos keyed by symbol
/v1/algos/* is the user-facing surface for algos (one running WASM per symbol per client). /v1/deployments/* is the underlying CRUD primitive.
seq.algos().await?; // GET /v1/algos
seq.algos_per_edge().await?; // GET /v1/algos?detail=edge
seq.algo("ETH-USD").await?; // GET /v1/algos/:symbol
seq.algo_start_by_symbol("ETH-USD").await?;
seq.algo_stop_by_symbol("ETH-USD").await?;
seq.algo_logs_by_symbol("ETH-USD").await?;
seq.algo_stats("ETH-USD").await?;
Hosted strategies
Lifecycle for hosted strategies (Level 4-5). Strategy create flows through the CLI's vendored-SDK packaging path (sequence strategy push) since it requires a base64-encoded artifact; read/start/stop/logs are SDK-native.
seq.strategies().await?; // GET /v1/strategies
seq.strategy("strat_abc").await?; // GET /v1/strategies/:id
seq.strategy_start("strat_abc").await?;
seq.strategy_stop("strat_abc").await?;
seq.strategy_logs("strat_abc").await?;
seq.strategy_delete("strat_abc").await?;
Level 5 — Monitor
async tca() -> Result<serde_json::Value, _>
async tca_symbol(symbol: &str) -> Result<serde_json::Value, _>
async intel(symbol: &str) -> Result<serde_json::Value, _>
async slippage(symbol: &str, side: Side) -> Result<serde_json::Value, _>
async forecast(symbol, side, qty: f64) -> Result<serde_json::Value, _>
async routing(symbol, side, qty) -> Result<serde_json::Value, _>
async depth(symbol: &str) -> Result<serde_json::Value, _>
async execution_live(node_order_id) -> Result<serde_json::Value, _>
async execution_review(node_order_id) -> Result<serde_json::Value, _>
async execution_summary() -> Result<serde_json::Value, _>
async trace(node_order_id: &str) -> Result<Vec<TraceEvent>, _>
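One thing a typed trace makes easy is latency math between lifecycle events. A standalone sketch — it re-declares a local TraceEvent mirroring the documented shape so it runs without the SDK, and the latency_ns helper is mine, not SDK API:

```rust
// Local mirror of the documented TraceEvent shape, declared here so the
// sketch compiles standalone, without the sequence-sdk crate.
#[allow(dead_code)]
struct TraceEvent {
    event_type: String,
    timestamp_ns: u64,
    venue: Option<String>,
    detail: Option<String>,
}

// Nanoseconds between two named lifecycle events, e.g. "admitted" -> "first_fill".
// Returns None if either event is missing or the ordering is inverted.
fn latency_ns(events: &[TraceEvent], from: &str, to: &str) -> Option<u64> {
    let t0 = events.iter().find(|e| e.event_type == from)?.timestamp_ns;
    let t1 = events.iter().find(|e| e.event_type == to)?.timestamp_ns;
    t1.checked_sub(t0)
}

fn main() {
    let trace = vec![
        TraceEvent { event_type: "admitted".into(), timestamp_ns: 1_705_406_400_000_000_000, venue: None, detail: None },
        TraceEvent { event_type: "first_fill".into(), timestamp_ns: 1_705_406_400_004_200_000, venue: Some("polymarket".into()), detail: None },
    ];
    // 4_200_000 ns = 4.2 ms from admitted to first fill.
    println!("{:?}", latency_ns(&trace, "admitted", "first_fill"));
}
```

With the real SDK, the events slice would come from seq.trace(node_order_id).await? instead of being built by hand.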
All Level 5 endpoints return free-form JSON today; typed shapes land in v0.2. trace() is the exception — it comes back typed:
pub struct TraceEvent {
pub event_type: String, // "admitted" | "routed" | "edge_submitted" |
// "venue_acked" | "first_fill" | "completed" | …
pub timestamp_ns: u64,
pub venue: Option<String>,
pub detail: Option<String>,
}
Streaming
stream(symbols: &[&str]) -> StreamBuilder
impl<'a> StreamBuilder<'a> {
pub fn orders(self, yes: bool) -> Self; // subscribe to `orders`
pub fn fills (self, yes: bool) -> Self; // subscribe to `fills`
pub fn routing(self, yes: bool) -> Self; // subscribe to `routing`
pub fn book(self, yes: bool) -> Self; // subscribe to `book:{sym}` for each sym
pub async fn connect(self) -> Result<MarketDataStream, SequenceError>;
}
pub enum StreamEvent {
Price { symbol: String, data: serde_json::Value }, // channel "prices:{sym}"
Book { symbol: String, data: serde_json::Value }, // channel "book:{sym}"
Order (serde_json::Value), // channel "orders"
Fill (serde_json::Value), // channel "fills"
Routing(serde_json::Value), // channel "routing"
Trace { deployment_id: String, data: serde_json::Value }, // channel "traces:{dep_id}"
Raw { channel: String, data: serde_json::Value }, // anything we don't specifically parse
}
The stream auto-PONGs to server PINGs. Close it explicitly with .close().await — or drop it; the WS task cleans up on close_timeout.
use futures_util::StreamExt;
let mut s = seq.stream(&["BTC-USD", "ETH-USD"])
.orders(true).fills(true).book(true)
.connect().await?;
while let Some(evt) = s.next().await {
match evt? {
StreamEvent::Price { symbol, data } => println!("{symbol}: {data}"),
StreamEvent::Fill (fill) => println!("fill: {fill}"),
StreamEvent::Order(order) => println!("order: {order}"),
StreamEvent::Book { symbol, data } => println!("{symbol} L2: {data}"),
StreamEvent::Routing(r) => println!("routing: {r}"),
StreamEvent::Trace { deployment_id, data } => println!("{deployment_id}: {data}"),
StreamEvent::Raw { channel, data } => println!("[{channel}] {data}"),
}
}
MarketDataStream also implements futures::Stream, so it's compatible with the whole combinator ecosystem — .take(100), .filter_map(…), .chunks_timeout(…).
async stream_new_markets() -> Result<LifecycleStream, _>
async stream_resolved_markets() -> Result<LifecycleStream, _>
Two prediction-market lifecycle streams — each yields one JSON envelope per event. On connect the server flushes the last ~128 events from its replay ring, then streams live. Deduped across multi-region edges.
pub struct LifecycleStream { … }
impl LifecycleStream {
pub async fn next(&mut self) -> Result<Option<serde_json::Value>, SequenceError>;
}
// Every freshly-minted market — already carries tick + fee schedule
let mut s = seq.stream_new_markets().await?;
while let Some(ev) = s.next().await? {
if ev["slug"].as_str().unwrap_or("").contains("btc-updown") {
let tok = ev["outcome_token_ids"][0].as_str().unwrap();
seq.buy(tok, 5.0).venue("polymarket").submit().await?;
}
}
// Every resolution — winning token paid $1, losing token paid $0
let mut s = seq.stream_resolved_markets().await?;
while let Some(ev) = s.next().await? { /* … */ }
Fixed-point types
Prices and quantities travel on the wire as fixed-point integers. The Qty / Px wrappers expose both representations:
use sequence_sdk::{Qty, Px};
let qty = Qty::from_human(1.5); // stores 150_000_000
assert_eq!(qty.human(), 1.5);
let px = Px::from_human(74_000.0); // stores 74_000_000_000_000
assert_eq!(px.human(), 74_000.0);
Response types carry raw fixed-point fields. Divide by the scale in the suffix to get human units:
let resp = seq.positions_unified().fetch().await?;
for v in &resp.positions {
let qty = v.qty_1e8 as f64 / 1e8;
let entry = v.entry_price_usd_1e9 as f64 / 1e9;
let unrealized = v.unrealized_pnl_usd_1e9 as f64 / 1e9;
println!("{:?} qty={} entry=${:.2} P&L=${:+.2}",
v.instrument, qty, entry, unrealized);
}Summary of scales:
| Suffix | Scale | Example |
|---|---|---|
| _1e8 | × 10⁸ | 1 BTC = 100_000_000 |
| _1e9 | × 10⁹ | $50,000 = 50_000_000_000_000 |
| _bps | basis points | 10 bps = 0.10% |
| _ns | nanoseconds since Unix epoch | 1705406400000000000 |
| (no suffix) | human float | bid: 73984.57 |
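Each conversion in the table is a single multiply or divide. A standalone sketch — the helper names are illustrative, not SDK API — which also shows why the wire format is integer fixed-point rather than floats:

```rust
// Scale helpers matching the suffix table. Names are mine, not part of
// the SDK surface; the Qty/Px wrappers do this for you where available.
fn human_1e8(raw: i64) -> f64 { raw as f64 / 1e8 }
fn human_1e9(raw: i64) -> f64 { raw as f64 / 1e9 }
fn bps_to_frac(bps: f64) -> f64 { bps / 10_000.0 } // 10 bps -> 0.0010
fn ns_to_secs(ns: u64) -> f64 { ns as f64 / 1e9 }

fn main() {
    assert_eq!(human_1e8(100_000_000), 1.0);             // 1 BTC
    assert_eq!(human_1e9(50_000_000_000_000), 50_000.0); // $50,000
    assert_eq!(bps_to_frac(10.0), 0.001);                // 10 bps = 0.10%
    println!("{} s", ns_to_secs(1_705_406_400_000_000_000));

    // Why integers on the wire: fixed-point sums exactly where floats drift.
    let ten_cents_1e8: i64 = 10_000_000; // 0.1 at 1e8 scale
    assert_eq!((0..10).map(|_| ten_cents_1e8).sum::<i64>(), 100_000_000);
    assert!((0..10).map(|_| 0.1f64).sum::<f64>() != 1.0); // classic f64 drift
}
```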
Sandbox
Two ways in:
- Per-call — .sandbox() on any OrderBuilder or GraphBuilder. Lives fine alongside live orders.
- Session-wide — log in with a seq_test_* key. Every order routes through the sandbox adapter regardless of the builder flag.
Sandbox fills settle against the live NBBO with realistic slippage + fees. Balances and positions stay isolated from your live account.