
Streaming

One unified WebSocket endpoint at /v1/stream carries every real-time channel — NBBO updates, L2 books, order lifecycle events, fills, routing decisions, TCA reports, algo callback traces, and funding rates. Two dedicated lifecycle streams additionally cover prediction-market new-mint and resolution events. Every stream requires a Bearer token; the connection auto-PONGs server PINGs.


Connect & subscribe

rust
use futures_util::StreamExt;
use sequence_sdk::stream::StreamEvent;
 
let mut s = seq.stream(&["BTC-USD", "ETH-USD"])
    .orders(true).fills(true).book(true)
    .connect().await?;
 
while let Some(evt) = s.next().await {
    match evt? {
        StreamEvent::Price { symbol, data }   => println!("{symbol}: {data}"),
        StreamEvent::Book  { symbol, data }   => println!("{symbol} L2: {data}"),
        StreamEvent::Fill (fill)              => println!("fill:  {fill}"),
        StreamEvent::Order(order)             => println!("order: {order}"),
        StreamEvent::Routing(r)               => println!("routing: {r}"),
        StreamEvent::Trace { deployment_id, data } =>
            println!("{deployment_id}: {data}"),
        StreamEvent::Raw   { channel, data }  => println!("[{channel}] {data}"),
    }
}

StreamBuilder methods: .orders(b), .fills(b), .routing(b), .book(b), .connect(). The returned MarketDataStream also implements futures::Stream — combinator-friendly (.take(100), .filter_map(...), .chunks_timeout(...)).
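
Because the stream is a plain futures::Stream, standard combinators compose directly. A minimal sketch, assuming price updates are delivered by default for the subscribed symbols and that each item is a Result<StreamEvent, _> as in the loop above:

rust
use futures_util::StreamExt;
use sequence_sdk::stream::StreamEvent;

// Collect the first 100 NBBO updates for BTC-USD and ignore everything else.
let prices: Vec<_> = seq.stream(&["BTC-USD"])
    .connect().await?
    .filter_map(|evt| async move {
        match evt {
            Ok(StreamEvent::Price { symbol, data }) => Some((symbol, data)),
            _ => None, // drop errors and non-price events in this sketch
        }
    })
    .take(100)
    .collect()
    .await;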

Close explicitly with .close().await or just drop it — the WS task cleans up on close_timeout.


Available channels

Channel                     Emits
prices:{symbol}             NBBO updates ({bid, ask, mid, spread_bps, ts_ns})
book:{symbol}               L2 depth updates
orders                      Your order lifecycle events
fills                       Your fills
routing                     SOR routing decisions — {legs, est_cost_bps, chosen}
tca                         Post-completion TCA reports — one per order with full cost decomp
traces:{deployment_id}      Algo callback traces
funding                     Every funding-rate update across every venue
funding:{venue}             One venue only
funding:{venue}:{symbol}    One instrument only

The tca channel carries achieved_vwap_1e9, benchmark_vwap_1e9, total_fees_1e9, venues_used, execution_time_ms, fee_cost_bps, spread_cost_bps, market_impact_bps, implementation_shortfall_bps, and savings_vs_benchmark_bps. For prediction-venue fills, four extra fields are populated: is_prediction, achieved_implied_prob_bps, arrival_implied_prob_bps, probability_slippage_bps — branch on is_prediction before interpreting slippage (price-bps for spot, probability-bps for prediction).
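
A minimal sketch of that branch, assuming tca reports surface through the StreamEvent::Raw variant of the event loop above with a JSON payload (the exact variant is an assumption):

rust
// Assumption: `tca` reports arrive as StreamEvent::Raw { channel, data }
// with `data` exposing the fields listed above as JSON.
if let StreamEvent::Raw { channel, data } = evt {
    if channel == "tca" {
        if data["is_prediction"].as_bool().unwrap_or(false) {
            // Prediction venue: slippage is expressed in probability bps.
            println!("prob slippage: {} bps", data["probability_slippage_bps"]);
        } else {
            // Spot venue: interpret the usual price-bps cost decomposition.
            println!("shortfall: {} bps", data["implementation_shortfall_bps"]);
        }
    }
}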


Lifecycle streams (prediction markets)

Two specialized streams for prediction-market discovery and settlement. On connect, each flushes the last ~128 events from the server's replay ring, then streams live. Deduped across multi-region edges.
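
If you only care about live events, one option is to drop the replayed backlog by comparing observed_at_ns against your connect time. A minimal sketch using stream_new_markets (introduced below), assuming events are JSON values as in the examples that follow:

rust
use futures_util::StreamExt;
use std::time::{SystemTime, UNIX_EPOCH};

// Record the connect time in nanoseconds, then skip anything older.
let connected_at_ns = SystemTime::now()
    .duration_since(UNIX_EPOCH)
    .expect("system clock before epoch")
    .as_nanos() as i64;

let mut s = seq.stream_new_markets().await?;
while let Some(ev) = s.next().await {
    let ev = ev?;
    if ev["observed_at_ns"].as_i64().unwrap_or(0) < connected_at_ns {
        continue; // replay-ring backlog
    }
    // handle live event...
}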

New markets

Every freshly minted prediction market — each event already carries tick_1e9 and fee_schedule, so you can size your first quote without an extra fees() call.

rust
let mut s = seq.stream_new_markets().await?;
while let Some(ev) = s.next().await {
    let ev = ev?;
    if ev["slug"].as_str().unwrap_or("").contains("btc-updown") {
        let tok = ev["outcome_token_ids"][0].as_str().unwrap();
        seq.buy(tok, 5.0).venue("polymarket").submit().await?;
    }
}

Each event:

json
{
  "kind": "new_market",
  "venue": "polymarket",
  "slug": "btc-updown-5m-…",
  "condition_id": "0x…",
  "question": "Bitcoin up or down…",
  "outcomes": ["Up", "Down"],
  "outcome_token_ids": ["…", "…"],
  "neg_risk": false,
  "tick_1e9":  10000000,
  "fee_schedule": {
    "exponent": 1, "rate_1e8": 7200000,
    "taker_only": true, "rebate_rate_1e8": 20000000
  },
  "start_ts_s": 1776657000,
  "end_ts_s":   1776657300,
  "observed_at_ns": 1776657000000000000
}
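
As an illustration of sizing that first quote straight from the event, here is a sketch that snaps a target price onto the advertised tick grid and reads the taker fee rate. The 1e9/1e8 fixed-point scaling is inferred from the field names, and the exact fee formula (exponent, rebates) isn't specified here:

rust
// Fixed-point fields (interpretation inferred from the names): tick_1e9 is
// the price tick scaled by 1e9, rate_1e8 the fee rate scaled by 1e8.
let tick_1e9 = ev["tick_1e9"].as_i64().unwrap_or(0);
let taker_rate = ev["fee_schedule"]["rate_1e8"].as_i64().unwrap_or(0) as f64 / 1e8;

// Snap a target implied probability of 0.48 onto the tick grid.
let target_1e9 = (0.48_f64 * 1e9) as i64;
let snapped_1e9 = if tick_1e9 > 0 {
    (target_1e9 / tick_1e9) * tick_1e9
} else {
    target_1e9
};

println!("quote at {snapped_1e9} (tick {tick_1e9}), taker fee rate {taker_rate}");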

Resolved markets

Every resolution — the winning token pays out $1, the losing token pays out $0. Pair with stream_new_markets to see the full lifecycle.

rust
let mut s = seq.stream_resolved_markets().await?;
while let Some(ev) = s.next().await { let ev = ev?; /* ... */ }

Each event:

json
{
  "kind": "market_resolved",
  "venue": "polymarket",
  "condition_id": "0x…",
  "winning_outcome": "Up",
  "winning_token_id": "…",
  "losing_token_id":  "…",
  "observed_at_ns": 
}
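
A sketch of acting on a resolution, assuming you maintain a map of open outcome-token positions keyed by token id (the positions map here is hypothetical, not part of the SDK):

rust
use std::collections::HashMap;
use futures_util::StreamExt;

// Hypothetical: outcome-token id -> held size, maintained by your own code.
let positions: HashMap<String, f64> = HashMap::new();

let mut s = seq.stream_resolved_markets().await?;
while let Some(ev) = s.next().await {
    let ev = ev?;
    let winner = ev["winning_token_id"].as_str().unwrap_or("");
    let loser  = ev["losing_token_id"].as_str().unwrap_or("");
    if let Some(size) = positions.get(winner) {
        println!("winning position pays $1/token: {size} tokens");
    }
    if let Some(size) = positions.get(loser) {
        println!("losing position pays $0/token: {size} tokens");
    }
}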

Operational notes

  • Auto-PONG. The SDK responds to server PINGs without exposing the heartbeat — the stream stays alive as long as you're consuming it.
  • Backpressure. If the consumer falls behind, server-side buffering drops the oldest events first per channel. Catch up via REST snapshots after a long pause; don't expect gap-fill from the stream.
  • Replay ring. Lifecycle streams flush the last ~128 events on connect. The generic /v1/stream does not — subscribe before taking the action you want to observe (see the sketch after these notes).
  • Multi-region dedupe. New-market and resolved-market events are deduped across the three regional CCs at the publish boundary. Fills + orders are not deduped (they're per-client traffic, only one CC sees them).
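
A sketch of the ordering the replay-ring note implies: open the generic stream and subscribe first, then submit the order whose lifecycle you want to watch. The buy call here is hypothetical and only mirrors the builder style shown earlier:

rust
use futures_util::StreamExt;
use sequence_sdk::stream::StreamEvent;

// Subscribe to order + fill events *before* acting: /v1/stream has no
// replay ring, so events emitted before the subscription are never delivered.
let mut s = seq.stream(&["BTC-USD"])
    .orders(true).fills(true)
    .connect().await?;

// Hypothetical order call, mirroring the builder style used above.
seq.buy("BTC-USD", 0.1).submit().await?;

while let Some(evt) = s.next().await {
    match evt? {
        StreamEvent::Order(order) => println!("order: {order}"),
        StreamEvent::Fill(fill)   => println!("fill:  {fill}"),
        _ => {}
    }
}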

Next steps