Archive for June, 2011

Adding More New Features to Greek Engine

Monday, June 20th, 2011

High-Tech Greek Engine

Today I spent a good bit of time adding a significant new feature to the Greek Engine - one that's needed for the next big phase of its rollout. This part takes the option trades, calculates the implied values, and then generates a "sales" message with a lot of interesting data. That message feeds the systems the traders use to see what's happening at the time of each option sale, so it's really quite useful.
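
To give a rough sense of the shape of that calculation and message, here's a minimal sketch - the field names, the bisection solver, and the use of Black-Scholes are illustrative, not the engine's actual code:

  #include <cmath>
  #include <string>

  // Black-Scholes European call price (no dividends) - a standard model,
  // used here purely for illustration.
  static double callPrice(double S, double K, double r, double T, double vol)
  {
    double sqT = std::sqrt(T);
    double d1  = (std::log(S/K) + (r + 0.5*vol*vol)*T) / (vol*sqT);
    double d2  = d1 - vol*sqT;
    auto   N   = [](double x) { return 0.5*std::erfc(-x/std::sqrt(2.0)); };
    return S*N(d1) - K*std::exp(-r*T)*N(d2);
  }

  // Back an implied vol out of a trade price by simple bisection.
  static double impliedVol(double price, double S, double K, double r, double T)
  {
    double lo = 0.001, hi = 5.0;
    for (int i = 0; i < 64; ++i) {
      double mid = 0.5*(lo + hi);
      if (callPrice(S, K, r, T, mid) < price) lo = mid;
      else                                    hi = mid;
    }
    return 0.5*(lo + hi);
  }

  // The kind of "sales" message the post describes - fields are my guess.
  struct OptionSale {
    std::string  symbol;      // the option
    double       tradePrice;  // what it sold for
    int          size;        // how many
    double       impliedVol;  // implied vol at the trade price
  };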

There were a few issues with the code - things that needed to be cleaned up - but for the most part it was in decent shape. The problems started when I fired up the server and saw the load this addition generated. It was too heavyweight for the threads feeding it, and it was causing speed problems it didn't need to.

So I decided it was time to spin the processing off to a new thread - specifically, the thread doing the sending downstream just needed to take on a larger portion of the work. I had to fix up a few things and refactor a few classes, but in the end I have something that's far better than before - far cleaner and clearer, and at the same time a lot faster.
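
The real code is lockless, but a mutex-based sketch shows the shape of the handoff well enough - the feed thread only enqueues, and the downstream sender thread does the heavy message-building before it sends (all names here are hypothetical):

  #include <condition_variable>
  #include <deque>
  #include <mutex>

  struct Trade { /* raw option trade fields */ };

  static std::deque<Trade>        sQueue;
  static std::mutex               sMutex;
  static std::condition_variable  sCond;

  // Hot path - the feed thread does nothing but enqueue.
  void onTrade(const Trade &t)
  {
    { std::lock_guard<std::mutex> lk(sMutex); sQueue.push_back(t); }
    sCond.notify_one();
  }

  // The sender thread pulls trades and does the expensive work here,
  // off the feed threads entirely.
  void senderLoop()
  {
    for (;;) {
      Trade t;
      {
        std::unique_lock<std::mutex> lk(sMutex);
        sCond.wait(lk, []{ return !sQueue.empty(); });
        t = sQueue.front();
        sQueue.pop_front();
      }
      // build the "sales" message from 't' and send it downstream, e.g.:
      // sendDownstream(buildSalesMessage(t));
    }
  }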

Good work indeed.

Skitch 1.0.6 is Out – On the Mac App Store

Monday, June 20th, 2011


This morning I saw that Skitch 1.0.6 was out, so I went to my Skitch app and hit "Check for Updates...". I was then told "You're on the latest version - 1.0.4". OK, something's not right. So I went and did a little digging. It seems the free version is crippled now, and the good version I had is no longer available in that format, so I had to purchase it again for a "mere" $25 - even though the web site said it'd be $10. Hmmm...

OK, I like Skitch, and I'd probably have bought it without a grumble if I'd read about it in some email from the Skitch crew, but that's not how it happened, and I can't help but feel a little miffed at the way it was done. No warning, I'm stuck on an old version and not getting updates. No upgrade path, no continued support. Kinda bites.

I'm sure I'll get over it. I'm still a huge fan, but this is liable to prompt me to send them a letter and say "Hey guys... not nice. Try harder next time."

Happy Birthday, Marie!

Monday, June 20th, 2011

Marie Portrait

Hard to believe my second kid is getting a learner's permit. Yikes! I'm sure there are plenty of 49-year-olds that have grandkids, but I'm not rushing anything, thank you very much. She's a teenager for certain now... frustrating and amazing all at the same time. I'm lucky to know her.

Tough Day of Adding Features

Friday, June 17th, 2011

It's been a long, tough day of adding several features to my Greek Engine for my clients, and then adding in the next big phase - the time and sales data, which is really just the option data at every sales event on the market. It's quite a bit of data, but it's not a quote feed. Still, it needed to be hooked in, and in doing that I found a bunch of little problems - most of them honest mistakes by the original coder, but I blame myself for not being more hands-on in the coding phase.

Well... I paid for it today.

I had to rewrite significant parts of the code - thankfully, there wasn't all that much to rewrite, but I still needed to make real structural changes. It took all day, but it's finally in and stable, and now I need to monitor it for a while to see how it goes.

I hope I'm nearing the end of these kinds of sessions with this codebase. I'd rather not have to undo someone else's problems just to make my own updates.

Google Chrome dev 14.0.794.0 is Out

Friday, June 17th, 2011

Google Chrome

This morning they 'jumped the version' - the Google Chrome team moved 13.x into 'beta' and started the dev series with Google Chrome dev 14.0.794.0. This guy is supposed to have the latest V8 javascript engine - 3.4.3.0 - and quite a few fixes on different platforms. It's the inevitable march of progress for Chrome, and it's getting a little boring, to be honest. There's nothing really new coming out of that group, and in a way, that's OK. Browsers are OK to be boring - they're supposed to get out of the way and let the user do their thing. So OK... I'll give them boring.

Added New Calc Mode for my Greek Engine

Thursday, June 16th, 2011

High-Tech Greek Engine

This afternoon I got a request from the GUI guy for a couple of one-shot calculations - specifically, price-to-implied-vol and implied-vol-to-price, where they just want a single calculation done and then forgotten. I hadn't planned on this in the code, so I had to think about it for a few minutes. Then it really hit me - I have the instruments getting the current prices and all the supporting data all the time. If I just copy that out and don't hook up the ticker plants, then I can do the calc and toss the copy away. It's pretty clean: it's the same workspace and environment, it's just not ticking because it's not hooked up - and it doesn't need to be. It's only one calculation, done right then.

Sweet.
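
A minimal sketch of the pattern - the class and method names here are mine, and the solver bodies are elided:

  // A live instrument subscribes to the ticker plants and keeps itself
  // current. For a one-shot calc we take a detached copy instead.
  class Instrument {
  public:
    // The copy carries all the current prices and supporting data, but
    // it's never subscribed, so it never ticks - and it doesn't need to.
    Instrument snapshot() const { return *this; }

    double impliedVolForPrice(double price) const { /* solver elided */ return 0.0; }
    double priceForImpliedVol(double vol) const   { /* model elided */  return 0.0; }

  private:
    double  mUnderlying;   // last underlying price
    double  mRate;         // current rate
    // ... the rest of the supporting data ...
  };

  // One-shot usage: copy, calculate, and let the copy go out of scope.
  double oneShotImpliedVol(const Instrument &live, double price)
  {
    Instrument scratch = live.snapshot();
    return scratch.impliedVolForPrice(price);
  }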

I still needed to spend quite a bit of time refactoring the code to make it simpler to deal with these two new ways of getting the data. Pretty much the entire afternoon was spent doing this - but I got it. And it works perfectly. Very nice to see.

An added bonus is that I was able to clean up the code in the refactoring so it's easier to follow now than before. And that's always a good thing.

Added Start-of-Day Clearing Tools for Greek Engine

Thursday, June 16th, 2011

High-Tech Greek Engine

As my Greek Engine is progressing, it's time to think about running it 24x7 - and that means we need to handle the start-of-day issues with the caches and data. Basically, we need some notion of a "clear and reload" of the data, but we don't want to do it all at once, and we certainly don't want to do it if we're not going to get data back into the system quickly. So we refresh the instrument data at a good time - say, in the early hours of the morning - and then we clear out the tick data right before the open.
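
A little scheduler loop along these lines captures the idea - the times and hook names are placeholders, not what's actually in the engine:

  #include <chrono>
  #include <ctime>
  #include <thread>

  // Hypothetical clearing hooks - stand-ins for the real cache methods.
  static void reloadInstrumentData() { /* clear and reload instruments */ }
  static void clearTickData()        { /* clear out the tick cache */ }

  void startOfDayLoop()
  {
    bool didInstruments = false, didTicks = false;
    for (;;) {
      std::time_t now = std::time(nullptr);
      std::tm     local = *std::localtime(&now);
      // Reload instruments in the quiet early hours (3am is a guess).
      if (local.tm_hour == 3 && !didInstruments) {
        reloadInstrumentData();
        didInstruments = true;
      }
      // Clear the ticks just before the open (8:25 for an 8:30 open).
      if (local.tm_hour == 8 && local.tm_min >= 25 && !didTicks) {
        clearTickData();
        didTicks = true;
      }
      // Re-arm after the close so tomorrow's clears fire again.
      if (local.tm_hour == 17) {
        didInstruments = didTicks = false;
      }
      std::this_thread::sleep_for(std::chrono::seconds(30));
    }
  }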

It's a lot of little housekeeping like this that needs to get done to have things ready for production. After putting these into the code, I needed to add them to the IRC client so I could test them and manually control the apps. It's all pretty standard stuff, but it's this stuff that makes a system really easy to use and enjoyable.
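
The IRC hook-up amounts to a command dispatch table, roughly this shape (the command names here are illustrative):

  #include <functional>
  #include <map>
  #include <string>

  static std::map<std::string, std::function<void()>> sCommands;

  void registerClearingCommands()
  {
    sCommands["clearInstruments"] = []{ /* reloadInstrumentData() */ };
    sCommands["clearTicks"]       = []{ /* clearTickData() */ };
  }

  // Called with whatever command arrives on the IRC channel.
  void onIRCCommand(const std::string &cmd)
  {
    auto it = sCommands.find(cmd);
    if (it != sCommands.end()) {
      it->second();
    }
  }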

The Importance of Comments

Wednesday, June 15th, 2011

Today I've once again been shown the critical importance of good comments. To set up the problem and its resolution, let's look at what the code was before the problem, and see how a good set of comments (by me) could have avoided a complete day of frustration.

We start with the statement of the problem:

Each exchange feed comes in on both an A and a B line, where the same data is on both lines, but they arrive at the datacenter through different paths so that should one be lost, the other has the same data to carry on.

The problem comes in when you try to arbitrate these two lines in your code. You need to look at key sequence numbers and see which one has arrived first for the next number in the sequence, and then there are always the special cases where the exchange sends a reset sequence number message and you have to deal with that. So we need to carefully arbitrate between these two streams of messages to make sure we get the first copy of each message as soon as it's available.

The problem is that I want all this to be multi-threaded and lockless. It's that last part, combined with the fact that I don't want to waste threads, that brings us to the solution I arrived at. Basically, I had the 'primary' channel (A channel) processing thread doing all the arbitration, and the 'secondary' channel (B channel) processing thread sending its results to a queue that would be read by the 'primary' thread. This merging of the two data streams was a pretty nice idea - I just needed to work out how the primary processing thread would handle the 'non-primary' feed's queue.
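
For context, the arbitration itself is just sequence-number bookkeeping - something like this sketch, where the names are mine and the reset and gap handling are simplified:

  #include <cstdint>

  struct Message {
    uint32_t  seq;     // exchange sequence number
    bool      reset;   // exchange sent a sequence reset
    // ... payload ...
  };

  class Arbiter {
  public:
    // Returns true if this message is the next one we need - i.e. the
    // first copy to arrive on either line. Duplicates are dropped.
    bool accept(const Message &msg)
    {
      if (msg.reset) {           // exchange restarted its numbering
        mNextSeq = msg.seq;
      }
      if (msg.seq < mNextSeq) {  // already seen on the other line
        return false;
      }
      mNextSeq = msg.seq + 1;    // gaps are a recovery problem, not shown
      return true;
    }

  private:
    uint32_t  mNextSeq = 0;
  };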

What I had done was to make a method to process all the pending messages:

  bool UDPExchangeFeed::processPendingMessages()
  {
    bool       error = false;
    Message    *msg = NULL;
    while (mQueue.pop(msg)) {
      if (msg != NULL) {
        if (!processMessage(msg)) {
          error = true;
        }
      }
    }
    return !error;
  }

This seemed fine and worked great for quite a while. Then another developer looked at the code and said "Hey, this is going to starve the primary channel!", and because I didn't have a really good comment as to why I chose to do it this way, I said "Wow... yeah, I can see that." and so we changed it to look like this:

  bool UDPExchangeFeed::processPendingMessages()
  {
    bool       error = false;
    Message    *msg = NULL;
    if (mQueue.pop(msg)) {
      if (msg != NULL) {
        if (!processMessage(msg)) {
          error = true;
        }
      }
    }
    return !error;
  }

so that we're doing a 1:1 between the primary and the secondary. Which sounds fair, and like a better solution, but the problem is that it really isn't - and it takes a little real-world thinking to see why.

The two feeds decode messages in about the same time - but then one transmits its message downstream, while the other just pushes its message onto a queue. Clearly, the second one is going to finish each message a little faster than the first. This means it may take 10 or 20 messages, but sooner or later there will be two messages in the queue to process, and only one of them is going to be processed. Repeat.

Pretty soon, the queue overflows. After all, we're talking upwards of 50,000 msgs/sec - even if the backlog grows by only one message in a thousand, that's 50 leftover messages every second. It doesn't take long to get out of hand. So how to fix it?

Always empty the secondary's queue. It's not unfair, it's the equalizer. Most times it's only going to have one message in it in the first place. But when it has two (or three), it's best to clear them all out right then rather than let them sit there and build up. So my initial implementation was really the better one - I just hadn't documented why, which made it easy to see as wrong.

Now there's a four paragraph comment on that little method, just so we're clear about why it's doing what it's doing, and that it's not a bug.
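
Boiled down, the comment (paraphrased here, not the verbatim text) makes the method read like this:

  /**
   * Drain *all* pending messages from the secondary's queue, not just
   * one. The secondary thread only enqueues - it never transmits - so
   * it runs slightly faster than this (primary) thread. A 1:1 drain
   * therefore falls behind a little at a time, and at 50,000 msgs/sec
   * the queue overflows in short order. Emptying the queue on every
   * call is the equalizer: most calls find a single message, and the
   * occasional two or three get cleared before they can build up.
   * The 'while' is deliberate - do NOT change it to an 'if'.
   */
  bool UDPExchangeFeed::processPendingMessages()
  {
    bool       error = false;
    Message    *msg = NULL;
    while (mQueue.pop(msg)) {
      if (msg != NULL) {
        if (!processMessage(msg)) {
          error = true;
        }
      }
    }
    return !error;
  }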

Setting SQLAPI++/iODBC/FreeTDS for Minimal Impact

Wednesday, June 15th, 2011


This morning I spent a little time in the SQLAPI++ manuals looking for a way to minimize the impact on the SQL Server I'm hitting. I was hoping to find a way to set a timeout on a SQL statement's execution. What I'm seeing now is that every so often the act of reading from the database hangs the reading thread - and it stays hung long enough that I end up restarting the process.

This isn't good.

So I wanted to put in a timeout without resorting to a boost ASIO timer. What I found was that there isn't a timeout in the SQLAPI++ code, and there isn't really one in the iODBC layer, either. There is one in the server configuration for FreeTDS, but I'm not really keen on putting a timeout value there for all connections and queries to a database - I just wanted to be able to put one on this set of queries.

What I did find was that I could make the SQLAPI++ command quite a bit nicer to the database with a few options on the command:

  cmd.setCommandText(aSQL.c_str());
  // fetch rows in batches of 200 instead of the default one-at-a-time
  cmd.setOption("PreFetchRows") = "200";
  // read-only, forward-only cursor - no locking for updates we'll never do
  cmd.setOption("SQL_ATTR_CONCURRENCY") = "SQL_CONCUR_READONLY";
  cmd.setOption("SQL_ATTR_CURSOR_TYPE") = "SQL_CURSOR_FORWARD_ONLY";
  cmd.Execute();

where the middle three lines are new this morning. The default for the command is to fetch only one row at a time - very bad for a big result set - and to allow a fairly liberal reading/updating policy with the cursor. I don't need any of that, and these options make sure I'm about as lightweight on the database as possible.

With no timeout to fall back on, I'll have to just see if these changes are enough to make sure I don't get the lock-up again. Sure hope so...

Creating an Exchange Feed Recorder (cont.)

Tuesday, June 14th, 2011

A few days ago I started creating an exchange feed recorder, and today (finally) I got the time to finish things off. The outstanding issues were that I hadn't tested anything, and I hadn't really worked out the start/stop scripts, etc. So today it was a simple matter of testing the code, fixing a few issues, and then setting up the infrastructure so it'd be easy to start/stop the recorder for daily use.

It wasn't all that hard, but it took an hour or so to get it all sorted out. I then spent a little time writing a reader app to show people how to read from the file, find a datagram, and process it. Not bad, but it needed to be done to show that the recording process wasn't corrupting the data.
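
The reader just walks the file record by record. I won't reproduce the actual record layout here, so this sketch assumes a simple length-prefixed format - a receive timestamp and a datagram length, then the raw bytes:

  #include <cstdint>
  #include <cstdio>
  #include <vector>

  // Assumed record layout: [uint64 recv-time (usec)][uint32 length][bytes]
  void readRecordedFeed(const char *path)
  {
    FILE *fp = std::fopen(path, "rb");
    if (fp == NULL) return;

    uint64_t  when = 0;
    uint32_t  len = 0;
    while (std::fread(&when, sizeof(when), 1, fp) == 1 &&
           std::fread(&len,  sizeof(len),  1, fp) == 1) {
      std::vector<uint8_t> datagram(len);
      if (std::fread(datagram.data(), 1, len, fp) != len) {
        break;                       // truncated final record
      }
      // processDatagram(when, datagram) would go here in the real reader
    }
    std::fclose(fp);
  }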

Nice to finally get this all nailed down.