Working Under the Gun

Today I've been working a little under the gun. OK... a lot under the gun. Another group is looking to use the ticker plant I'm building and they want to be using it "right now". Well... it's not exactly built right now, but I'm doing my best to provide them with something they can test with. What ensues is a lot of pressure of the kind I usually don't like working under. But... I know these guys, and they'd love to have an excuse not to use it, so I have to suck it up and get things done as fast as possible.

Today was spent resolving a few issues with the ticker plant. The first, a hold-over from yesterday, was that the FAST (FIX Adapted to STreaming) decoder for strings was returning more than it was supposed to. I was seeing a trailing asterisk (*) in some of the strings, in the position right after the last character I was supposed to see. So if the string was supposed to be a maximum of 5 characters, I'd see an asterisk in the sixth position.

In order to fix it, I went into my wrapper class and did a little more defensive coding:

  std::string codec::decode_string( fast_tag_t aTag, uint32_t aSize )
  {
    // let's give the decoder a little safe room - it needs it at times
    char    buff[aSize + 8];
    decode(aTag, buff, aSize);
    // make sure they didn't run long (and they do often)
    char  *ptr = &buff[aSize];
    *(ptr--) = '\0';
    // trim off the excess spaces on the right-hand side
    while ((ptr >= buff) && (*ptr == ' ')) {
      *(ptr--) = '\0';
    }
    return std::string(buff);
  }

I needed to give the decoder a little headroom, then truncate the result at the maximum length and do a simple right trim of the data. Not hard, but it's amazing that their own decoder has these problems. Yikes.
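
Just to make the behavior concrete, here's a little standalone sketch of that clamp-and-trim step - the buffer contents and the 5-character field width are made up, purely to mimic a decoder that ran long:

  #include <cstdint>
  #include <cstring>
  #include <iostream>
  #include <string>

  // standalone version of the clamp-and-trim step from decode_string()
  std::string clamp_and_trim( char *buff, uint32_t aSize )
  {
    // chop the buffer at the maximum length
    char  *ptr = &buff[aSize];
    *(ptr--) = '\0';
    // trim off the excess spaces on the right-hand side
    while ((ptr >= buff) && (*ptr == ' ')) {
      *(ptr--) = '\0';
    }
    return std::string(buff);
  }

  int main()
  {
    // pretend the decoder was asked for 5 chars but scribbled "IBM  *"
    char  buff[5 + 8];
    std::strcpy(buff, "IBM  *");
    std::cout << "[" << clamp_and_trim(buff, 5) << "]" << std::endl;   // prints [IBM]
    return 0;
  }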

The next problem with the ticker plant was the CPU usage. When it's just the ticker plant - without the cached ticks - it idles around 10%-20%. With the cache, it was up around 70%. I started playing with the cache (it's lockless, but uses boost's unordered_map) and saw that we were in another pickle. The general operation of this guy is to replace the old message with the new, and delete the old. But if we had people looking at the old one, we'd mess them up something fierce.

I've written this up in another post, but the idea is to allow the old messages to remain valid for as long as the client needs them, and then trash them. It's not hard, and I don't think it slows things down much, but it's absolutely vital for proper operation.
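
For flavor, here's a minimal sketch of that lifetime rule. This is not the lockless boost::unordered_map cache in the ticker plant - it just uses a plain mutex and std::shared_ptr, with a made-up Message struct, to show that a replaced message stays alive until its last reader lets go of it:

  #include <memory>
  #include <mutex>
  #include <string>
  #include <unordered_map>

  // the cached message - just a stand-in for the real tick structure
  struct Message {
    std::string   symbol;
    double        price;
  };

  class TickCache {
  public:
    // swap in the new message - the old one is deleted only when the
    // last client holding a reference to it lets go
    void update( const std::string & aSymbol, std::shared_ptr<const Message> aMsg )
    {
      std::lock_guard<std::mutex>   lock(mMutex);
      mCache[aSymbol] = std::move(aMsg);
    }

    // hand the caller a reference that stays valid as long as they keep it
    std::shared_ptr<const Message> get( const std::string & aSymbol ) const
    {
      std::lock_guard<std::mutex>   lock(mMutex);
      auto it = mCache.find(aSymbol);
      return (it != mCache.end() ? it->second : std::shared_ptr<const Message>());
    }

  private:
    mutable std::mutex    mMutex;
    std::unordered_map<std::string, std::shared_ptr<const Message>>   mCache;
  };

With this kind of reference-counted hand-off, the cache can keep replacing messages at full speed while a slow client finishes reading the one it grabbed.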

Unfortunately, with all this work it was 3:15 before I could get to much more. At that point there are no ticks coming in, and I'm stuck. Kind of a drag to have to depend on other systems like this. But so it goes. Tomorrow I'll hit the CPU usage harder and see what I can find out for certain.