Archive for the ‘Coding’ Category

Exposing UDP Exchange Recorders to The Broker

Wednesday, January 11th, 2012

Ringmaster

This afternoon one of the big things I needed to get done was to add to my UDP Exchange Recorders the ability to publish themselves as a service on our Broker system. A little background here, as I haven't written about this yet… we need to record the exchange data feeds in order to see what happened in the middle of the day - in a post-mortem sense. To do that as simply as possible, I wrote a UDP feed recorder. It's a lot like our UDP exchange feeds, but instead of decoding the datagrams, we simply write them out to a disk file in 10MB blocks.
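
The core of the recorder is conceptually tiny. Here's a minimal sketch of the receive-and-append loop - recordFeed and the BlockWriter class are invented for illustration, not our actual code:

  #include <sys/socket.h>

  static const size_t MAX_BLOCK = 10 * 1024 * 1024;   // 10MB blocks

  // hypothetical sketch - BlockWriter stands in for whatever
  // handles the block files on disk
  void recordFeed(int aSocket, BlockWriter & aWriter)
  {
    char   buff[65536];   // big enough for any UDP datagram
    while (true) {
      ssize_t  cnt = recv(aSocket, buff, sizeof(buff), 0);
      if (cnt < 0) {
        break;   // socket error… time to bail out
      }
      // no decoding at all - just append the raw datagram
      aWriter.append(buff, (size_t)cnt);
      if (aWriter.size() >= MAX_BLOCK) {
        aWriter.roll();   // close this block file, start the next
      }
    }
  }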

Depending on the feed, these blocks represent anywhere from a minute or two, all the way up to hours, of real-time data. We archive these for a few days, and then drop them, as the disk space they consume is non-trivial. So… we wanted to be able to get at this same data a little easier for the Greek Engines, so that a restart of a Greek Engine will not miss any ticks. By far, the easiest way to do this is to make it a service on our Broker system.

Thankfully, I've done this a lot, and it's a simple matter of creating a new class, filling in some code, and tying in the initialization and the periodic checks to make sure it's still connected - and then we're ready for the meat of the request system. In all, about an hour of work, which isn't bad at all.
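
Obviously, the Broker plumbing is all internal code I can't show, but the shape of the work is roughly this - a minimal sketch where BrokerService and all its virtuals are hypothetical stand-ins for the real API:

  // hypothetical sketch - BrokerService, registerService(), and
  // friends are stand-ins for our internal Broker API
  class UDPRecorderSvc : public BrokerService
  {
  public:
    // tie in the initialization - publish ourselves by name
    virtual bool init()
    {
      return registerService("udp-recorder");
    }

    // the periodic check to make sure we're still connected
    virtual void checkConnection()
    {
      if (!isConnected()) {
        reconnect();
      }
    }

    // the meat of the request system - a client asks for data
    virtual void handleRequest(const Request & aReq, Response & aResp);
  };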

Now I need to figure out how these UDP exchange recorders are going to work with my archive server to get the latest data from the feeds in order to catch up a Greek Engine on restart.

Simple Least-Recently-Used Cache

Wednesday, January 11th, 2012

Building Great Code

Today I needed to add in an LRU cache to my archive server because we have some number of files loaded - each with about 10MB of messages from the exchange feeds. We don't want to have an infinite number, but we don't want to dump them prematurely. After all, they are immutable, so as long as we can keep them in memory, we should keep them in memory - just in case the next client asks for the data that happens to be in that chunk.

So I decided on a two-step memory management scheme - simple aging, and an LRU for times of very high load. The simple aging is just that: simple. Every minute or so, I'll look at all the chunks loaded into the cache and see the last time each was accessed by a client. If that time ever exceeds, say, an hour, we'll drop it. That just says that at the end of the day there's no reason to be a pig about the machine.
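
Just to make the aging concrete, here's a minimal sketch of what that sweep might look like - ageOutChunks_nl() and now_msec() are hypothetical names, and I'm assuming cache is the same map used below, with lastAccessTime in msec since the epoch:

  void ArchiveSvr::ageOutChunks_nl()
  {
    // anything not touched by a client in the last hour is fair game
    uint64_t   cutoff = now_msec() - 60 * 60 * 1000;
    for (cmap::iterator it = cache.begin(); it != cache.end(); ) {
      if (it->second.lastAccessTime < cutoff) {
        // erase-then-advance… safe for a std::map iterator
        cache.erase(it++);
      } else {
        ++it;
      }
    }
  }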

The LRU is pretty simple, in theory: every time we go to add a new chunk, we look to see how many we have, and if we have too many, we drop the one with the oldest access time, and continue until we get below the threshold so we can add the new one.

The question is how to determine the oldest. You could keep a list of keys and, each time a chunk is accessed, move its key to the front of the list, then take the last element on the list - but that's a lot of list accessing (adds, finds, deletes) when we might not be hitting the cache limit very often at all. So I took a simpler approach: just find the oldest one when I need to delete it.

  bool ArchiveSvr::dropOldestChunk_nl()
  {
    bool      error = false;

    // first, find the one to delete… scan for the oldest access time
    uint64_t    when = 0;
    std::string what;
    for (cmap::iterator it = cache.begin(); it != cache.end(); ++it) {
      if ((when == 0) || (when > it->second.lastAccessTime)) {
        when = it->second.lastAccessTime;
        what = it->second.filename;
      }
    }

    // if we have anything to delete - do it
    if (!what.empty()) {
      cache.erase(what);
    } else {
      // we were supposed to delete SOMETHING…
      error = true;
    }

    return !error;
  }

Now, in the code, I can simply do:

  while (cache.size() >= MAX_SIZE) {
    if (!dropOldestChunk_nl()) {
      error = true;
      cLog.error("can't delete what we need to… Bad!");
      break;
    }
  }

and we're good to go! Not bad at all. Simple, but effective.

Google Chrome dev 18.0.1003.1 is Out

Wednesday, January 11th, 2012

Google Chrome

Yes, today we jump the 1,000 mark! The Google Chrome dev release 18.0.1003.1 marks the first release of the 18.x.x.x branch, and answers a question I've had for a while about the third number - would they go past 1000? And the answer is: Certainly! They are engineers!

So we have several things in this release - a new version of the V8 javascript engine (3.8.4.1), as well as several under-the-covers items, better zooming, and fixes for several crashing bugs. All in all, I'd say it's nice to see that they are still working on major issues. Good to know.

Reading a GZip File in C++ – Boost Wins

Tuesday, January 10th, 2012

Boost C++ Libraries

Today I needed to be able to read compressed files for a service I was writing. Sure, I could have shelled out and run gunzip on the file and then gzip-ed it up after reading it, but I wanted something that would allow me to read the gzipped file in-place and uncompress it into a simple std::string for processing.

Enter boost to the rescue.

This is one of the more difficult things to get right in boost… OK, I take that back - it's pretty easy compared to serialization and ASIO, but it's not something that's simple to see from their docs. Also, some of the more obvious attempts to use the boost filtering iostreams yielded some pretty bad results. Still… as I kept with it, success emerged.

Here's what finally worked for me:

  #include <fstream>
  #include <sstream>
  #include <string>
  #include <zlib.h>
  #include <boost/iostreams/filtering_streambuf.hpp>
  #include <boost/iostreams/filter/zlib.hpp>
  #include <boost/iostreams/copy.hpp>


  std::string     contents;
  std::ifstream   file(aFilename.c_str(),
                       std::ios_base::in | std::ios_base::binary);
  if (!file.is_open()) {
    error = true;
    cLog.error("can't open the file %s", aFilename.c_str());
  } else {
    using namespace boost::iostreams;
    // make the zlib decompressor with the right args… adding 16 to
    // the window bits tells zlib to expect a gzip header
    zlib_params   p;
    p.window_bits = 16 + MAX_WBITS;
    filtering_streambuf<input> gin;
    gin.push(zlib_decompressor(p));
    gin.push(file);
    // now let's get a string stream for a destination
    std::stringstream  ss;
    // copy the source to the dest and trap errors
    try {
      copy(gin, ss);
      contents = ss.str();
      cLog.info("read in %u bytes from %s", (unsigned) contents.size(),
                aFilename.c_str());
    } catch (zlib_error & err) {
      error = true;
      cLog.error("decompression error on %s: %s (%d)",
                 aFilename.c_str(), err.what(), err.error());
    }
  }

What's the point of all this? Well, it turns out that boost doesn't give you a general decompressing file stream - it gives you pipelined filters, and one of those filters is a zlib (and gzip) compressor/decompressor. Setting window_bits to 16 + MAX_WBITS is the standard zlib trick that tells the decompressor to expect a gzip header rather than a raw zlib stream. It's more flexible, yes, but it's a little harder to do, and it ends up with an intermediate std::stringstream that we don't really need. But in the end, this is only about 100msec slower than reading the file uncompressed. That's a reasonable performance hit for the fact that I didn't have to do any messing with the zlib libraries myself.
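
As an aside, boost also ships a dedicated gzip_decompressor filter (in boost/iostreams/filter/gzip.hpp) that wraps up that window-bits business for you. Here's a minimal sketch of the same read using it - readGzipFile is just an illustrative name, not anything in our code:

  #include <fstream>
  #include <sstream>
  #include <string>
  #include <boost/iostreams/filtering_streambuf.hpp>
  #include <boost/iostreams/filter/gzip.hpp>
  #include <boost/iostreams/copy.hpp>

  // read a gzipped file into a std::string… throws
  // boost::iostreams::gzip_error on a bad stream
  std::string readGzipFile(const std::string & aFilename)
  {
    std::ifstream   file(aFilename.c_str(),
                         std::ios_base::in | std::ios_base::binary);
    boost::iostreams::filtering_streambuf<boost::iostreams::input>  gin;
    gin.push(boost::iostreams::gzip_decompressor());
    gin.push(file);
    std::stringstream  ss;
    boost::iostreams::copy(gin, ss);
    return ss.str();
  }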

Yeah boost!

No More Excuses

Monday, January 9th, 2012

It's been a very long time since I wrote a post. Work has been as difficult as I can ever remember it being. I've written at times about how it's killing my ability to write anything, and while the same is true today, I'm not going to let that crush the life out of me. Not now. And shame on me for letting it get to me before.

Life is a tragedy for those who feel and a comedy for those who think - Fortune Cookie

I'm tired of being in the midst of a tragedy. I don't have to be, and if I simply refuse to be, I won't be.

Oh, my circumstances won't change because of this, but the way in which I interpret them certainly will. Again, tragedy or comedy - the choice is ultimately mine. And starting today I'm choosing comedy. And there's a very good reason for it.

I've been at The Shop for more than 18 months, and I've completed a few really solid products. It's nothing that someone else couldn't have done, but I was here, I did it, and they know it. Good work so far.

I was originally hired to work with a really sharp, funny guy. I've done all kinds of things in my day, so I know it's not the what you're doing, it's the who you're doing it with that matters. My previous place had been a really nice job with some decent people but a really difficult manager. The great work environment, good perks, and nice company didn't even come close to offsetting the bad manager, so I tried to learn from my mistakes, and this job was about the guy I'd be working with.

So fast forward to the present day. My boss/partner is now heading up IT for The Shop; I'm lucky if I talk to him once a week, and it's been months since I really worked with him. I have been lucky to find someone else to work with for the last few months, but now it seems I'm being reassigned, and he's not coming with me. So the reason for me taking this job is really gone. What's left is just coding - and I can do that anyplace.

So the reason for me coming to work at The Shop is gone, and my attempts to resurrect it have been failures. I've found someone good to work with, but all attempts to continue that working relationship seem to have failed as well. Now we get to the kicker - I'm being re-assigned to the one group I didn't want to work in. And I'm being given no choice in the matter.

It seems there's always one, no matter where you go: the group hand-picked by someone high up in the organization to be some kind of Untouchables squad. They're smart folks - to be sure. But probably not as smart as they think they are. Certainly not as smart as they're telling others. But because they were hand-picked, their egos go pretty much unchecked, and that becomes a significant problem.

We're a trading company, but some of the Untouchables come in at 10:30 am - two hours after the open. I'm sure there's a type of job where that works, but being one of the key developers in a trading organization isn't it. Yet nothing is done, because he's one of the special few. And that's just one of the problems.

My concern, when I was offered a position in this group a few weeks ago, was that my style of work would not mesh with theirs at all. I was reading rands' tweets the other day and came across this article that very much defines how I see myself and how I work: meaningful and mindful work. It is a very nice little statement of the principles of some individuals - but I can see it's not for everyone.

Yet for me, it's gospel. I don't need this particular job. I'm fortunate to have found myself in an industry, in a time, with a skill set that is currently in high demand. Another time, another place, another set of circumstances that brought me here, and I'd be in trouble. But I'm lucky that I'm not. And thankful for that.

This group I'm slotted to go into has picked on one of its own to the point of bringing a nice, reasonably happy man to sobbing tears in the middle of a workday. That's not the kind of group I'm going to fit into easily, as I'm not the type that's going to sit and take it. I'm as likely to lash out and inflict wounds on those intending to do the same to me. It's not a good plan, in my book. Which is exactly why I asked not to be placed in that group.

But my request went unheeded.

So today, in a few hours, in fact, I'm going to have my first meeting with the leaders of the Untouchables, and we'll see how things go. I'm not looking for a fight, but I also know that I simply cannot trust any one of them. They may turn out to be decent folks to work with, and everything I've seen from the outside might be explained logically by the different view from the inside. But the coming in at 10:30 am and making a grown man cry are going to take a lot of explaining, and I'm not holding my breath that I'm going to buy their explanations. But they deserve a chance.

If I am not convinced that this is a good thing for me, it'll be time to talk to a few partners - my original boss, and my new boss. I will certainly give them a chance to explain the logic of this move, and their plans moving forward. But given that I've been through a lot in my professional life, and only recently has it become so depressing that I've been unable to write in my journal, I'm not interested in the same old stories that I've heard from them in the past.

After all… I don't really need this particular job.

Google Chrome dev 17.0.963.26 is Out

Thursday, January 5th, 2012

This morning Google Chrome dev 17.0.963.26 was released, and the release notes indicate that the biggies this time were a new V8 Javascript engine - 3.7.12.12 - and a few Mac-specific Lion bugs that were dealt with. In all, a nice update, but nothing major, which is nice to see. Again, stability either means it's nearing a big release, or we're getting to the point that browsers are feature complete. It'll be interesting to see what the Google Guys come up with next.

Google Chrome dev 17.0.963.2 is Out

Monday, December 12th, 2011

This morning I noticed that Google Chrome dev 17.0.963.2 was out. There were just a few updates, and it appears to be a standard bug-fix release. It'll be interesting to see if this means we're nearing the end of this release cycle and about to jump major versions again - it's not been that long for this cycle, and there haven't been that many significant changes. If so, it might mean that we're nearing the end of the real development for Chrome. That would be interesting to see.

Google Chrome dev 17.0.963.0 is Out

Wednesday, December 7th, 2011

Google Chrome

This morning I saw that after a significant period of silence the Google Chrome team released dev 17.0.963.0 with some interesting release notes. I like that the V8 Javascript Engine has been updated to 3.7.12.6, that several WebKit rendering bugs were fixed, and that PDF docs even get better fonts. Nice update.

It's been an interesting lapse in releases - not that it's bad, per se, but it seems they were either doing other things, or these additions were more significant. If it's other things, that's certainly understandable, and if these were especially difficult, or there was a change in the people involved, that's easy to understand as well. It'll be interesting to see if the releases come more frequently, or continue to slide.

Once Again, Work Stress Kills my Posts

Tuesday, November 15th, 2011


This morning I noticed that once again, work, not laziness, has killed my posting here. I really hate that. One of the real joys of what I do is being able to write a little bit about it every day, but the way things have been going at The Shop, that's been nearly impossible these last few weeks. It's been non-stop work on finishing up the Greek Engine project, and dealing with the fact that ultimately, we are powerless to have the kind of system we'd like because the people maintaining the instrument data just don't seem to have it together.

I'm trying to believe that they are doing their best, but it's getting increasingly hard to. I don't want to think badly of people, but when I come in to find a database with five instruments in it - not the 400,000+ I was expecting - it's really hard to think that they did any testing at all on the data. I know it's a matter of expectations and abilities, and for the longest time the expectations have been crazy low, but at some point you can't fall back on that and have to take some level of responsibility for your actions.

I can write the greatest code in the world, but what my users are going to remember is that the data coming out of it was horrible, and therefore my app was horrible, all because of bad data. There's no way around it. The Team depends on every single person doing their job. There just are no unimportant roles.

It's just heartbreaking to me to see this. Having put in the work I have, to then see the complete and total lack of personal responsibility for the quality of the data I'm getting… it really is just heartbreaking.

But then, to add insult to injury, I'm unable to write about it all. Unable to vent about it. I've tried and tried to make this a priority, but in the end, I know myself. When there is code to write and I have time before the train, I'm going to write it. That's my work ethic, and there's very little I can do about it - even if I wanted to. Which I don't.

I'm going to have to try harder. I'm afraid that in the end, this is just the wrong job. Maybe it's a matter of timing. Maybe it's more fundamental than that, I don't know. I'll try to give it all the time I can to turn around, and I know there are people here really trying to turn this around. But if things don't change, I know there will come a point where I simply have to disconnect myself from this place in order to save myself.

I hope I can hold out.

Google Chrome dev 17.0.938.0 is Out

Tuesday, November 15th, 2011

Google Chrome

This morning I noticed that once again, Google Chrome dev was updated - this time to 17.0.938.0 - and this time, it looks like the big change for me is the new V8 Javascript engine, 3.7.6.0, which includes the new garbage collector. The release notes post indicates that downloads might be broken for some, but thankfully, that's not the main use I have for Chrome, so I'm safe - for now. I'm happy to see Google keep moving forward on Chrome; it's about the only thing I'm positive about when it comes to Google. The engineers aren't running the show anymore, and it pains me to see the "Don't be evil" company do such horribly bad things.

Sigh.

But at least Chrome is going well.