Nice perspective on precision and data capture...what the models do with it is the real secret sauce.
@MrUpgradable · 4 months ago
Impressive... if only we could start trading at the zeptosecond ;) Thank you for sharing. Just curious: why don't you use hash wheel timers?
@nsd1169 · 4 years ago
Could you please explain why you need such precise [nano, pico] timestamping, given that your decision-making loop runs well above that timescale?
@dennisfleurbaaij · 4 years ago
Hi ns, as Zuotian noted in the presentation, this is used at Deutsche Börse. It might be enlightening to read up on their insights into their infrastructure and latency to put things in perspective: www.deutsche-boerse.com/resource/blob/1637232/da0ae611905acda0d7502260903a0835/data/Open-Day-2019_T7-Latency-Roadmap_Andreas-Lohr_final.pdf
@simhendra2377 · 2 years ago
The main low-latency race happens when someone submits a limit order (an order that states a price someone is willing to trade at) to the exchange that all firms think is a good deal. Every firm then tries to trade against that limit order. Whichever firm's order arrives first, network variance aside, gets all the profit. What Optiver wants to observe is how long it takes them to see a feed packet containing a juicy limit order and send their own order out, because nano/pico improvements to that time can make the difference between lots of profit and 0.
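A toy sketch of that "see the packet, send the order" measurement (sometimes called wire-to-wire or tick-to-trade latency). All numbers and the pair layout are invented for illustration; in a real setup these timestamps come from hardware capture at the NIC or switch, not from software:

```python
# Hypothetical capture pairs, in integer nanoseconds:
# (feed_packet_seen_on_wire, our_order_seen_on_wire)
captures = [
    (1_000_000_000, 1_000_000_812),
    (2_000_000_000, 2_000_000_795),
    (3_000_000_000, 3_000_000_830),
]

# Wire-to-wire latency for each event is just the difference of the
# two hardware timestamps; the analysis needs their resolution to be
# much finer than the ~tens-of-ns differences being compared.
latencies_ns = [order - feed for feed, order in captures]

print("per-event latency (ns):", latencies_ns)
print("best:", min(latencies_ns), "ns  worst:", max(latencies_ns), "ns")
```

The point of the fine-grained clock is visible here: the question "which FPGA is faster, by how much" is answered by comparing differences like 795 ns vs 830 ns, which a millisecond clock cannot resolve.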
@nsd1169 · 2 years ago
@@simhendra2377 I do understand the concepts of market making and the motivation to snipe orders. My question, though, was more of a technicality. A very crude way [for the context of my original question] to describe their system is as two components: an event sequencer (timestamps events) and an event loop (event processing and signal generation). The event loop can process events at frequency X; the sequencer can timestamp incoming events at frequency Y, where Y is orders of magnitude more frequent than X (i.e. picoseconds vs milliseconds). My question is: what's the point of such granular timestamping (all that matters is the order of events), given that the consumer of these events (aka the event loop) won't be able to process at such high frequencies?
@simhendra2377 · 2 years ago
@@nsd1169 I can't say this for certain re Optiver, but pico timestamping won't be for the event loop; it will be for later analysis. The event loop needs to see the events as soon as possible, but the timestamps themselves would be for asking questions later, like:
- we have two FPGAs trying to snipe orders, with different hardware features: which is faster, by how much, how often?
- we have one new FPGA and can connect to 1 of 2 exchange servers: which sends feed packets sooner, by how much, how often?

As for real-time processing, there are use cases like this: you have 4 connections to the exchange which publish the same data, but your event loop doesn't want to see 4 copies of the same event. Maybe you use the timestamps to decide which messages describe the same exchange event, and get rid of three of them. Also, a single feed line on Eurex can produce packets with just 100 nanos between them; imagine having 1500 connections and having to produce an ordering of events. For that, picos is probably overkill, but millis is useless.
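A toy sketch of that de-duplication idea: several feed lines carry copies of the same exchange events, and we keep only the first-arriving (earliest-timestamped) copy of each. Message IDs, line names, and timestamps are all invented; a real feed handler would key on the exchange's own sequence numbers:

```python
# (hardware_timestamp_ns, line_id, exchange_msg_id) as captured on
# four redundant connections; every exchange event appears four times.
packets = [
    (100, "A", 1), (103, "B", 1), (105, "C", 1), (109, "D", 1),
    (205, "B", 2), (207, "A", 2), (211, "D", 2), (214, "C", 2),
]

seen = set()
ordered = []
for ts, line, msg_id in sorted(packets):  # order all copies by capture time
    if msg_id in seen:                    # later copies of an event we
        continue                          # already delivered are dropped
    seen.add(msg_id)
    ordered.append((ts, line, msg_id))

# ordered now holds one copy per exchange event, in arrival order,
# and also tells you which line won the race for each event.
print(ordered)
```

Note the dependence on timestamp resolution: if distinct events can be only ~100 ns apart, the clock ordering the copies has to resolve well below that, which is exactly why millisecond timestamps are useless for this job.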