Trading at light speed: designing low latency systems in C++ - David Gross - Meeting C++ 2022

155,926 views

Meeting Cpp

Slides: slides.meetingcpp.com
Survey: survey.meetingcpp.com
Making a trading system "fast" cannot be an afterthought. While low latency programming is sometimes seen under the umbrella of "code optimization", the truth is that most of the work needed to achieve such latency is done upfront, at the design phase. How to translate our knowledge about the CPU and hardware into C++? How to use multiple CPU cores, handle concurrency issues and cost, and stay fast?
In this talk, David Gross, Auto-Trading Tech Lead at global trading firm Optiver, shares industry insights on how to design a low-latency trading system from scratch.

Comments: 68
@pranaypallavtripathi2460 a year ago
If this man writes a book, something like "Introduction to High-Performance Trading", I am buying it!
@payamism a year ago
Do you know of any material, or anyone who publishes on this subject?
@workingaccount1562 a year ago
@payamism Quant Galore
@boohoo5419 3 months ago
This guy is totally clueless, and you are even more clueless.
@draked8953 3 months ago
@boohoo5419 How so?
@statebased a year ago
Array-oriented designs are at the core of the low-level model of a trading system. While this array view is much of what this talk is about, it is important enough to re-emphasize. Template-based objects are also handy for gluing your arrays together so you can fully optimize the result.
@sui-chan.wa.kyou.mo.chiisai a year ago
Is this similar to data-oriented programming in games?
@santmat007 8 months ago
@sui-chan.wa.kyou.mo.chiisai Yes... DOP rules over all... OOP to the trash 😋
@edubmf a year ago
Interesting, and I always love speakers who give "further reading".
@nguyenkien5558 a year ago
Same thought
@IonGaztanaga a year ago
At 23:00, when stable_vector is explained (built using boost::container::static_vector), just adding some additional info for viewers: boost::container::deque has an option to configure the chunk size (called block size in Boost).
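For reference, a sketch of that Boost option, assuming a reasonably recent Boost.Container where deque_options / block_size are available (the chosen block size of 512 is arbitrary; check the Boost documentation for your version):

    #include <boost/container/deque.hpp>
    #include <boost/container/new_allocator.hpp>
    #include <boost/container/options.hpp>

    namespace bc = boost::container;

    // Ask the deque for chunks of 512 elements instead of the default block size.
    using BlockOptions = bc::deque_options_t<bc::block_size<512u>>;
    using PriceDeque   = bc::deque<double, bc::new_allocator<double>, BlockOptions>;

    int main() {
        PriceDeque prices;
        for (int i = 0; i < 10'000; ++i)
            prices.push_back(i * 0.5);   // references to existing elements remain valid
    }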
@khatdubell a year ago
"It's hard to crack the S&P 500." Explain that to Congress.
@hhlavacs a year ago
Excellent talk, I learned a lot!
@wolpumba4099 a year ago
Nice! Some good examples and discussion of queues for few producers and many consumers.
@nguonbonnit a year ago
Wow! So great. You helped me a lot.
@melodiessim2570 a year ago
Where is the link to the code for the SeqLock and SPMC queue shared in the talk?
@kolbstar a year ago
For the SPMC Queue V2 at 45:00, why does he have an mVersion at all? If the block isn't valid until mBlockCounter has been incremented, then readers don't risk reading during a write, no? Or, if you are reading while it's writing, it's because you've lagged so hard that the writer is lapping you.
@aniketbisht2823 3 months ago
std::memcpy is not data-race safe as per the standard. You could use std::atomic_ref to read/write individual bytes of the object.
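A sketch of that suggestion, assuming C++20 std::atomic_ref (the function is illustrative; the talk itself sticks with memcpy plus the version check, trading formal data-race freedom for speed):

    #include <atomic>
    #include <cstddef>

    // Read n bytes from a shared buffer with per-byte relaxed atomic loads.
    // Unlike a plain memcpy of concurrently written memory, this is not a data
    // race in the C++ memory model; a SeqLock-style version check still has to
    // decide whether the copied bytes form a consistent message.
    inline void load_bytes_atomically(std::byte* dst, std::byte* src, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            dst[i] = std::atomic_ref<std::byte>(src[i]).load(std::memory_order_relaxed);
    }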
@thisisnotchaotic1988 a month ago
I think there is a flaw in this design. Since the SPMC queue supports variable-length messages, if a consumer is lapped by the producer, the mVersion field the consumer thinks it is spinning on is probably not a version counter field at all; it may well be spinning on some random bytes in the middle of mData. If those random bytes happen to match the version the consumer is expecting (although the probability is very low), it could be disastrous: the consumer does not know it has been lapped at all and continues processing meaningless data.
@firstIndian-ez9tt 4 months ago
Love you sir from India Bihar ❤❤❤
@sb-zn4um a year ago
Can anyone explain how the write is setting the lowest bit to 1? Is this a design feature of std::atomic? 34:23
@Alex-zq1yy a year ago
Note that the write increments a counter by one, copies, then increments by one again. So if the consumer reads in the middle of writing, the counter is odd (i.e. the lowest bit is 1). Only when the write is done is it even again.
@kolbstar a year ago
Remember, his logic is that if mVersion is odd, then it's currently being written. (int & 1) == 0 is just an ugly version of an "is even" function.
@gabrielsegatti8017 7 months ago
@Alex-zq1yy What happens in the scenario where we have two writers? Writer A increments the counter by one and is now writing. Then, while that write is in progress, Writer B increments the counter by one as well (to then start writing). Now, before Writer A increments the counter again, the consumer reads and the counter is even, despite neither write being completed. Couldn't that happen? Perhaps the full implementation also checks preemptively whether the lowest bit is 1, in which case this problem wouldn't exist.
@dareback 6 months ago
@gabrielsegatti8017 The code comment says one producer, multiple consumers, so there can't be two or more writers.
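A minimal single-writer SeqLock sketch of the odd/even protocol this thread describes (the member name mVersion follows the talk; the memory orders and everything else are assumptions, and, as another commenter notes, the concurrent memcpy is formally UB in standard C++ even though it is relied on in practice):

    #include <atomic>
    #include <cstdint>
    #include <cstring>
    #include <type_traits>

    template <class T>
    class SeqLock {
        static_assert(std::is_trivially_copyable_v<T>);

    public:
        // Single writer: an odd version means "write in progress".
        void store(const T& value) {
            const std::uint64_t v = mVersion.load(std::memory_order_relaxed);
            mVersion.store(v + 1, std::memory_order_relaxed);      // now odd
            std::atomic_thread_fence(std::memory_order_release);   // data writes stay below
            std::memcpy(&mData, &value, sizeof(T));
            mVersion.store(v + 2, std::memory_order_release);      // even again: stable
        }

        // Readers retry until they see the same even version before and after the copy.
        T load() const {
            T out;
            std::uint64_t before, after;
            do {
                before = mVersion.load(std::memory_order_acquire);
                std::memcpy(&out, &mData, sizeof(T));
                std::atomic_thread_fence(std::memory_order_acquire);
                after = mVersion.load(std::memory_order_relaxed);
            } while (before != after || (before & 1) != 0);        // odd => torn read
            return out;
        }

    private:
        std::atomic<std::uint64_t> mVersion{0};
        T mData{};
    };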
@yihan4835 8 months ago
My question is: isn't std::unordered_map still not very efficient, because the value itself lives on the heap and you get at least one extra indirection, since the buckets effectively store a pointer to a pointer? Am I mistaken somehow?
@dinocoder 2 months ago
I was wondering the same thing. I have three theories. One, most instruments are added to the store at construction time (or in one large chunk) and the memory for the pointers happens to be allocated sequentially/contiguously, which is easier because the pointer is significantly smaller than the Instrument struct. Two, they know how the allocator they're using works or have implemented their own (they do say they don't include all the details), and know it will most likely allocate into contiguous addresses, again helped by the pointer being much smaller than the Instrument struct. Three, they could reserve space for the map at construction time (again, they say they don't include all the details). Reserving space for this seems pretty straightforward, and I imagine they could be doing something like that. It would be easier to tell if we knew how dynamic the number of instruments is, but for a given application it is probably fairly consistent and something that would be configurable or deducible. There's a good chance I'm missing something too; these are just my thoughts.
@sidasdf a month ago
Yes, you are right that it is a couple of jumps, but this misses the bigger picture of what this design choice accomplishes: better locality. You want the data in your program to be close together. Everything on your computer wants the data to be close together. Your hardware, if it sees you make consecutive memory accesses, WANTS to preload a big chunk of memory. Your TLB wants you to be playing in the same few pages so you don't have to do an expensive page table walk. Your L2/L3 caches don't want to constantly be cleaning themselves out. So part of the game is the tiny optimizations, the instruction-level battle (such as avoiding the indirection you mention). But individual instructions are so fast anyway; all your latency in a single-threaded program like this really comes from TLB lookups and trips to RAM.
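A sketch of the kind of layout this thread is circling around: keep the Instrument objects themselves in storage that never relocates them, and let the hash map hold only small pointers into it (this is a guess for illustration, not the talk's actual container):

    #include <cstdint>
    #include <deque>
    #include <unordered_map>

    struct Instrument {
        std::uint64_t id;
        double bid;
        double ask;
        // ... more per-instrument state
    };

    class InstrumentStore {
    public:
        Instrument& add(std::uint64_t id) {
            // std::deque never relocates existing elements, so the pointers
            // stored in the index stay valid as instruments are added.
            Instrument& inst = mInstruments.emplace_back(Instrument{id, 0.0, 0.0});
            mById.emplace(id, &inst);
            return inst;
        }

        Instrument* find(std::uint64_t id) {
            auto it = mById.find(id);
            return it == mById.end() ? nullptr : it->second;
        }

    private:
        std::deque<Instrument> mInstruments;                    // hot data, chunked but dense
        std::unordered_map<std::uint64_t, Instrument*> mById;   // cold lookup index
    };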
@broken_abi6973 a year ago
At 33:00, why does it use memcpy instead of a copy assignment?
@manit77 a year ago
Copying large blocks of memory or large nested structs is more efficient using memcpy.
@_RMSG_ a year ago
@manit77 Couldn't someone overload assignment for structs like those to ensure the use of memcpy?
@shakooosk a year ago
Because a copy assignment might have control flow and branches. Imagine this: while the copy assignment is executing in the reader, a write is taking place on another thread. At first glance that might seem OK, since the value will be discarded when the version check fails in the reader. However, it is dangerous because it might leave the logic in an unpredictable state. For example: if (member_ptr != nullptr) { use_member(*member_ptr); } You can see how the check can pass, and before the body of the if-statement executes, the writer assigns nullptr to member_ptr and boom, you crash. So the solution is to either do memcpy and hope it works at all (if not, it will crash spectacularly most of the time, which should be a good indication you're doing something wrong), or, better, constrain the template parameter to be trivially copyable.
@shakooosk a year ago
@manit77 No, this has nothing to do with efficiency; it's about correctness. Check my reply to the OP.
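A minimal sketch of that constraint (the class name is made up; the idea is just to reject types whose copy involves logic that could observe a torn state):

    #include <type_traits>

    template <class T>
    class SeqLockedValue {
        // memcpy'ing a possibly torn value is only sane for trivially copyable
        // types: no user-defined copy logic runs, no pointers are followed mid-copy.
        static_assert(std::is_trivially_copyable_v<T>,
                      "SeqLockedValue<T> requires a trivially copyable T");
        // ... version counter plus load/store as in the SeqLock sketch above
    };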
@pouet843 a year ago
Very nice. I'm curious, how do you log in production without sacrificing performance?
@JoJo-fy2vb a year ago
Only memcpy the raw args in the main thread, and let the logging thread do the string formatting and create the logs.
@Michael_19056 a year ago
Record args in binary form, and record the format string only once. Use thread-local buffers to avoid contention. NEVER rely on delegating work to another thread, except for handing off full instrumentation buffers. View logs offline by reconstituting the data back into readable format. I've been using a system like this for 10-15 years. Logging overhead, if done wisely, can easily reach single-digit nanoseconds per entry. Even lower if you consider the concurrency of logging from many threads simultaneously.
@mnfchen a year ago
He mentioned this: all log events are produced to a shared-memory queue, which is then drained by a consumer that publishes them to, say, a time-series DB. Using the SeqLock idea, the publisher can't be blocked by consumers, and the consumers are isolated from each other.
@_RMSG_ a year ago
@Michael_19056 Hi, why is using another thread for logging bad? Let's say theoretically that we could guarantee the logging thread will never thrash the same cache as the main function; would it still interfere? And if the added instructions required to save that data "in the same breath" are so light that they only have an impact on the nanosecond scale, does it become complicated to implement?
@Michael_19056 a year ago
@_RMSG_ Sorry, I only saw your reply just now. In my experience, it would take longer to delegate the data to another thread than to simply record the data with the current thread. Again, the most efficient approach is to copy the arguments into a thread_local buffer, so there is no locking or synchronization required for the thread to log its own args.
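A sketch of the binary, thread-local logging approach described in this thread (the record layout, the use of the format-string pointer as a call-site ID, and the offline decoding step are all assumptions; a real system would also hand full buffers off for draining and persistence):

    #include <chrono>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // One append-only byte buffer per thread: no locks, no contention.
    thread_local std::vector<std::byte> tlsLogBuffer;

    // Record a timestamp, a call-site identifier and the raw argument bytes.
    // The format string is never formatted on the hot path; an offline tool
    // reconstitutes readable text from the binary records later.
    template <class... Args>
    inline void log_binary(const char* formatId, const Args&... args) {
        auto append = [](const void* p, std::size_t n) {
            const auto* b = static_cast<const std::byte*>(p);
            tlsLogBuffer.insert(tlsLogBuffer.end(), b, b + n);
        };
        const std::uint64_t ts =
            std::chrono::steady_clock::now().time_since_epoch().count();
        append(&ts, sizeof ts);
        append(&formatId, sizeof formatId);   // pointer value doubles as a call-site ID
        (append(&args, sizeof args), ...);    // raw bytes; assumes trivially copyable args
    }

    int main() {
        log_binary("order sent: id=%llu px=%f", std::uint64_t{42}, 101.25);
    }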
@var3180 a year ago
How does Rust compare to this?
@joelwillis2043 a year ago
trash
@isodoublet 5 months ago
I imagine it would be tricky to write the instrument container in (safe) Rust, since it must hold a bunch of stable references. The concurrent data structure would probably be challenging as well, since the same borrowing rules prevent the kind of "optimistic" lock-free operation (though note that, as written, the SeqLock & friends code is UB in C++).
@AndrewPletta a year ago
What advantage does stable_vector provide that std::array does not?
@BenGalehouse a year ago
The ability to add elements without starting over and invalidating existing references.
@JG-mx7xf 10 months ago
@BenGalehouse Just allocate an array large enough. If you know you have 100 instruments and on average 100 new ones created intraday... just use a normal vector preallocated with a capacity of 1k... that way you are sure you don't invalidate anything.
@thomasziereis330 9 months ago
The stable_vector shown here has constant lookup time, if I'm not mistaken, so that's a big advantage.
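A minimal sketch of such a stable vector: fixed-capacity chunks that never move, so indexing stays O(1) and references stay valid as elements are appended (a guess at the shape of the talk's structure, not its actual code; the chunk size and the use of std::unique_ptr are assumptions):

    #include <cstddef>
    #include <memory>
    #include <vector>
    #include <boost/container/static_vector.hpp>

    template <class T, std::size_t ChunkSize = 1024>
    class StableVector {
    public:
        T& push_back(const T& value) {
            if (mChunks.empty() || mChunks.back()->size() == ChunkSize)
                mChunks.push_back(std::make_unique<Chunk>());
            mChunks.back()->push_back(value);   // never reallocates within a chunk
            return mChunks.back()->back();
        }

        // O(1) lookup: one division/modulo plus two dereferences.
        T& operator[](std::size_t i) { return (*mChunks[i / ChunkSize])[i % ChunkSize]; }

        std::size_t size() const {
            return mChunks.empty() ? 0
                 : (mChunks.size() - 1) * ChunkSize + mChunks.back()->size();
        }

    private:
        using Chunk = boost::container::static_vector<T, ChunkSize>;
        std::vector<std::unique_ptr<Chunk>> mChunks;   // chunk addresses never change
    };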
@guangleifu5384 a year ago
Which exchange can give you trigger-to-trade at 10 ns? You probably don't mean the exchange timestamp, but rather your capture timestamp on your own wire.
@BlueCyy a year ago
Haha, I see you are here as well.
@BadgerStyler a year ago
I was wondering about that too. If the wire between the exchange server and the clients' machines is more than 1.5 m long, then it's not even physically possible. He has to mean the wire-to-wire latency.
@andrewcampbell9926 a year ago
I work at a trading firm similar to Optiver, and when we measure trigger-to-trade, the trigger is the time at which we see the exchange's packet on our switch. I think it's standard in the business to refer to it that way, as no client of the exchange can see the packet before it reaches the client's switch.
@davejensen5443 a year ago
The secret to low network latency is to be co-located in the exchange's data center. Even ten years ago it was worth it.
@Lorendrawn 6 months ago
@davejensen5443 Occam's razor.
@gastropodahimsa a year ago
Undamped systems ALWAYS devolve to chaos...
@myhouse-yourhouse a year ago
Optiver's competitors beware!
@stavb9400 3 months ago
Optiver is a market maker, so the requirements are a bit different, but generally speaking, trading at these time scales is just noise.
@JamieVegas a year ago
The slides don't exist at the URL.
@MeetingCPP a year ago
Seems like the speaker didn't share them. :/
@user-qh2le5dz3s a year ago
I want to know your tick-to-order latency and jitter.
@sisrood 4 months ago
I really didn't understand the 10 nanosecond latency. Could anyone here help?
@dinocoder 2 months ago
It says on the diagram that they have a trigger price at the FPGA... so I'm assuming they have something ready to send back to the exchange as soon as they receive a message, as long as the incoming message fits certain criteria. So most of the 10 nanoseconds is probably just the physical time it takes for the message to get to the FPGA, compare bits, and send something back.
@dinocoder 2 months ago
Either that, or a commenter below is correct and the 10 ns just represents the time at the FPGA.
@doctorshadow2482 a year ago
What is this "auto _" at kzbin.info/www/bejne/bqakiGh8htmWrKc? Is the underscore just a way to say "unneeded variable", or is there something new in C++ syntax?
@MeetingCPP a year ago
_ is the variable name; in this case it is simply '_', likely because it's not even used.
@doctorshadow2482 a year ago
@MeetingCPP Thanks for the clarification. I remember that some years ago even using a '_' prefix in a variable name in C/C++ was reserved for the implementation; now even '_' alone is used. Funny usage, though.
@MeetingCPP a year ago
@doctorshadow2482 Well, it's not a C++ invention; I've seen '_' used as a popular placeholder variable (because it needs a name) in code snippets in other programming languages.
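A small example of that placeholder-style name (ordinary C++, not a snippet from the talk); note that C++26 additionally gives '_' special treatment as a name-independent placeholder, but here it is just a regular identifier:

    #include <mutex>

    std::mutex m;
    int counter = 0;

    void bump() {
        // '_' is an ordinary variable name, used only for its side effect of
        // holding the lock until the end of the scope.
        auto _ = std::scoped_lock{m};
        ++counter;
    }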
@mohammadghasemi2402 10 months ago
He was very knowledgeable, but his presentation was not very good. He should have slowed down his thought process for people like me who are not familiar with the subject matter, so that we could follow him. I should thank him anyway for the things I picked up from his talk, like the stable vector data structure.
@BunmJyo 9 months ago
❤😂🎉😅 Judging by the amount of hair, you can tell he's an expert 👍
@dallasrieck7753 7 months ago
Who can print money the fastest, same thing 😉