Finished the Good, Fast Message Cache


This morning I was able to finish up the message cache that I started work on yesterday afternoon. It was pretty straightforward right up until I got to the part about being thread-safe and fast at the same time. That meant lockless data structures, and that meant compare-and-swap operations.

Since I'm only going to have one producer and many consumers, I thought maybe I wouldn't have to worry too much about this. Well... that was a mistake. I do. But thankfully, there's only one place I need to worry about the complexities of this map - in the "setter" of the individual value.

The code I ended up with looks like this:

  #include <stdint.h>
  #include <boost/unordered_map.hpp>

  boost::unordered_map<uint64_t, Message *>   mCache;

  // grab whatever is in the slot now, then loop on the compare-and-swap
  // until we win the race to put newMsg in its place
  Message   *oldMsg = mCache[key];
  while (!__sync_bool_compare_and_swap(&(mCache[key]), oldMsg, newMsg)) {
    oldMsg = mCache[key];
  }
  // at this point we own whatever was in the slot before us - clean it up
  if (oldMsg != NULL) {
    delete oldMsg;
    oldMsg = NULL;
  }

where newMsg is the new value to store in the map under the key key.

Sure, this is greatly simplified, but the core issue is there - we have to check whether the value has been changed out from under us, and if it has, grab the new value and try the swap again. The point is that the update is thread-safe - not that any particular value wins, just that whatever ends up in the map is the last one that was put there.
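
The consumer side isn't shown above, but just to round out the picture, here's a minimal sketch of what a read could look like - assuming (as in my setup) that the full key set is loaded into the map up front, so the map structure itself never changes while readers are hitting it. The getMessage() name is just for illustration:

  // a sketch of the read path - the readers just look up the pointer,
  // no locks and no CAS needed on their side
  Message *getMessage(uint64_t key)
  {
    boost::unordered_map<uint64_t, Message *>::const_iterator  it = mCache.find(key);
    // hand back whatever pointer is in the slot right now, or NULL
    return (it != mCache.end() ? it->second : NULL);
  }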

This is working great for me, and while I haven't had the chance to run tests against it, it's about as fast as I'm going to be able to get it, whether or not that turns out to be fast enough. I think it will be. I've used boost::unordered_map, which should be faster than the STL std::map, since std::map uses a red/black tree for the key space. And since my key space is a uint64_t, it's pretty easy to hash that guy - it's just the value.
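
Just to make that last point concrete, here's what an explicit identity hash for the uint64_t key space could look like. To be clear, boost::hash already handles integer keys just fine on its own, so this IdentityHash functor is purely illustrative:

  #include <cstddef>

  // purely illustrative - the key is already a perfectly good hash value,
  // so the "hash" is just the value itself
  struct IdentityHash {
    std::size_t operator()(uint64_t key) const {
      return static_cast<std::size_t>(key);
    }
  };

  // plugged in as the hash functor on the cache
  boost::unordered_map<uint64_t, Message *, IdentityHash>   mCache;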

I'm going to have to test this guy with the real data feeds, but I think I'm going to be OK. I feel good about it.