Getting the Most out of the Server Price Flow
Today I noticed that my previous efforts to speed up the tick flow through the server had helped, but there were still times when the CPU usage climbed to levels that suggested a large look-up was being done. I checked the logs, and at about the same time those spikes were happening, I was seeing large ClientProxy queue sizes. This makes sense - if the outgoing queues to the clients start to grow, the search time for a duplicate (the entries must be unique) goes up, which slows down the injection rate, which in turn makes the queues grow even more.
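To make that concrete, the shape of what a ClientProxy had to do looked roughly like this - the names here are stand-ins and not the real code, but the point is that every tick injected paid a lookup whose cost grew with the size of that client's backlog:

```cpp
#include <set>
#include <mutex>

class Instrument;

class ClientProxy {
public:
    bool enqueue(Instrument *inst) {
        std::lock_guard<std::mutex> lock(mMutex);
        // std::set is a red-black tree, so this insert does an
        // O(log n) walk in the size of the outgoing queue - fast,
        // but not free when the backlog is large and every tick
        // has to do it.
        return mQueue.insert(inst).second;
    }

private:
    std::mutex            mMutex;
    std::set<Instrument*> mQueue;   // outgoing ticks, unique by construction
};
```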
So... I needed to make sure I was using storage that was as efficient as possible. I did a little reading and confirmed that std::set is a red-black tree, and find() walks that tree to get to an element as quickly as possible. No room for improvement there, unfortunately. But I couldn't give up - there had to be a solution.
And while it wasn't earth-shaking, the solution was fun. Rather than try to find more efficient data storage, I realized that maybe I was asking the storage to do the wrong thing. All I needed was to make sure that each Instrument appears only once on each ClientProxy queue. Why not leave that up to the Instrument, then? Rather than a few large lists in the ClientProxys, have a lot of small lists in the Instruments.
Elegant. Simple. Re-phrase the question.
So what I did was put a std::set on each Instrument, with a protecting mutex, and then have the ClientProxy tell each Instrument when it's going on the queue and when it's coming off. That way we get the same effect, but the ClientProxy can have a simple queue and never has to search for duplicates. A sketch of the idea follows.
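Again, the names and member functions here (onQueue, offQueue, and friends) are assumptions for the sake of the sketch - the real code differs in the details - but the shape is the same: the Instrument owns a small set of the proxies it is currently queued on, guarded by its own mutex, and the ClientProxy keeps a plain FIFO with no duplicate search at all.

```cpp
#include <deque>
#include <set>
#include <mutex>

class ClientProxy;

class Instrument {
public:
    // Called by a ClientProxy that wants to queue this instrument.
    // Returns false if we're already on that proxy's queue.
    bool onQueue(ClientProxy *proxy) {
        std::lock_guard<std::mutex> lock(mMutex);
        return mQueuedOn.insert(proxy).second;
    }

    // Called by a ClientProxy when this instrument comes off its queue.
    void offQueue(ClientProxy *proxy) {
        std::lock_guard<std::mutex> lock(mMutex);
        mQueuedOn.erase(proxy);
    }

private:
    std::mutex             mMutex;
    std::set<ClientProxy*> mQueuedOn;  // small: at most one entry per proxy
};

class ClientProxy {
public:
    void enqueue(Instrument *inst) {
        // The Instrument decides whether it's a duplicate - its set is
        // tiny, so the check is cheap no matter how long our queue gets.
        if (!inst->onQueue(this)) {
            return;
        }
        std::lock_guard<std::mutex> lock(mMutex);
        mQueue.push_back(inst);
    }

    Instrument *dequeue() {
        Instrument *inst = nullptr;
        {
            std::lock_guard<std::mutex> lock(mMutex);
            if (mQueue.empty()) {
                return nullptr;
            }
            inst = mQueue.front();
            mQueue.pop_front();
        }
        // Tell the instrument it's off the queue, outside our own lock.
        inst->offQueue(this);
        return inst;
    }

private:
    std::mutex              mMutex;
    std::deque<Instrument*> mQueue;   // plain FIFO - no duplicate search
};
```

The nice part of this shape is that the duplicate check scales with the number of clients an Instrument is queued on, not with the depth of any one client's backlog.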
This change keeps the server from getting into those periods of large queue sizes, which means the entire system responds better to the prices. It also means we don't hit those conditions where the CPU usage rises and slows down the tick processing. It's a great solution all the way around.