Built Templated Conflation Queue in DKit
This morning I finished up work on a conflation queue for DKit. The idea is pretty simple: take a queue and a trie, and use the key value of each element in the trie to allow updates to values still in the queue while keeping their relative placement, so that when an element is popped off, the latest value is pulled and the element is considered removed from the queue. It's a very common need when processing market data: you don't need every tick, but you do need the very latest information whenever you can get it. That makes it great for risk systems, but not so great for execution systems - depending on the strategy.
Anyway, the key to all this was that I already had all the components of the conflation queue in DKit - I just needed to bring them together. For instance, since it's a FIFO queue, I based it off the FIFO class:
namespace dkit {
/**
 * This is the main class definition. The parameters are as follows:
 *   T  = the type of data to store in the conflation queue
 *   N  = power of 2 for the size of the queue (2^N)
 *   Q  = the type of access the queue has to have (SP/MP & SC/MC)
 *   KS = the size of the key for the value 'T'
 *   PN = the power of two of pooled keys to have in reserve (default: 2^17)
 */
template <class T, uint8_t N, queue_type Q, trie_key_size KS, uint8_t PN = 17>
class cqueue : public FIFO<T>
{
};
}      // end of namespace dkit
The idea is simple: you have to know:
- What to store - this is the type of data you're going to place in the queue, and if it's a pointer, then the conflation queue is going to automatically destroy the old copies when new values come in and overwrite the old
- How many to store - this is a maximum number, as we're using the efficient circular queues. In practice, this isn't a real limitation: memory is pretty cheap, and a queue meant to hold 2^17 elements is not all that expensive
- The type of access - this is so you can control how many producers and how many consumers there are going to be. This is important, as you can get higher performance by limiting the producers and consumers.
- The key size of the trie - this is really what you're going to use to uniquely identify the values you are putting in the queue. If you know how you'll identify them, then you can choose the correct sized key to make that as efficient as possible.
- (Optionally) The size of the pool of keys - this implementation allows for a set of pooled keys for the queue. This is nice in that you don't have to worry about getting keys into or out of the queue, but in order to be as efficient as possible, it makes sense to have a pool of them around. This optional parameter allows you to specify how many to hold onto at any one time.
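As an aside on the "power of 2" sizing of N and PN above: a power-of-two capacity lets a circular queue wrap its indices with a single bitwise AND instead of a modulo. Here's a small sketch of that indexing trick - my own illustration, not DKit's code:

```cpp
#include <cstdint>
#include <cstddef>

// Sketch of power-of-two ring indexing: with 2^N slots, the index mask is
// (2^N - 1), so wrapping an index is a cheap AND rather than a '%'.
template <uint8_t N>
struct ring_index {
    static const size_t eSize = (size_t)1 << N;   // 2^N slots
    static const size_t eMask = eSize - 1;        // low-bit wrap mask

    static size_t wrap(size_t i) { return i & eMask; }
};
```

With N = 17, that's 131,072 slots, and any monotonically increasing head/tail counter maps into the buffer with one AND.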
Things went together fairly well because I had all the components: the different queues, even how to use the different access types in the pool. I had the pool, and so it was just a matter of putting things together and testing them out.
One thing I did find out was that when I call key_value(), I'm not sure exactly what I'm going to get back. If we assume that we're using a key structure like this:
struct key_t {
    uint8_t bytes[KS];
};
(the reason being that we want to be able to create and destroy these without having to use the array form of the delete operator), then we can't simply do this:
key_t *key = _pool.next();
if (key == NULL) {
    throw std::runtime_error("Unable to pull a key from the pool!");
}
// copy in the value for this element
*(key->bytes) = key_value(anElem);
because the compiler is going to treat the LHS as a single uint8_t, not a series of bytes capable of holding whatever is returned from key_value(). We also can't do this:
// copy in the value for this element
memcpy(key->bytes, key_value(anElem), eKeyBytes);
because the return value of key_value() is a value, not a pointer. So we have to be a little more involved. What I decided was to lean on the fact that the compiler will choose the right form of the method and function, so I added setters to the key:
struct key_t {
    uint8_t bytes[KS];

    // these are the different setters by size
    void set( uint16_t aValue ) { memcpy(bytes, &aValue, 2); }
    void set( uint32_t aValue ) { memcpy(bytes, &aValue, 4); }
    void set( uint64_t aValue ) { memcpy(bytes, &aValue, 8); }
    void set( uint8_t aValue[] ) { memcpy(bytes, aValue, eKeyBytes); }
};
and with this, I can say:
key_t *key = _pool.next();
if (key == NULL) {
    throw std::runtime_error("Unable to pull a key from the pool!");
}
// copy in the value for this element
key->set(key_value(anElem));
and the compiler will pick the right set() method based on the key_value() function the user provides. This is not as nice as simply copying bytes - there's a call here - but it's not horrible, either, and it keeps the code simple and working.
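Here's a self-contained sketch of that overload-resolution trick - my own simplified version, with the key size fixed at 8 bytes and a made-up tick element type, not DKit's actual code:

```cpp
#include <cstdint>
#include <cstring>

// key size fixed at 8 bytes for this illustration (in DKit it comes from
// the trie_key_size template parameter)
enum { eKeyBytes = 8 };

struct key_t {
    uint8_t bytes[eKeyBytes];

    // the compiler picks the overload matching key_value()'s return type
    void set( uint16_t aValue ) { memcpy(bytes, &aValue, 2); }
    void set( uint32_t aValue ) { memcpy(bytes, &aValue, 4); }
    void set( uint64_t aValue ) { memcpy(bytes, &aValue, 8); }
};

// a hypothetical element type whose identity is a 32-bit instrument id
struct tick {
    uint32_t instrument_id;
    double   price;
};

// the user supplies key_value() for their own element type...
inline uint32_t key_value(const tick &t) {
    return t.instrument_id;
}
```

With this in place, key.set(key_value(aTick)) resolves to the uint32_t overload at compile time - no casts at the call site, and no need for the caller to know the key's width.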
Other than that, things went together very well. The tests are nice, and it's all ready to be hammered to death in a real setting. I'm happy to have gotten to this point in DKit. These components took me a long time to get right before, and they weren't nearly as nice as what I've got now - and these will be out there for me to use no matter what. That's a very nice feeling.