Archive for September, 2010

Running Some Initial Tests on Ticker Plant

Monday, September 27th, 2010

Today I needed to run some tests on my ticker plant code to get a decent scale for ordering hardware and network taps. The new machines are going to need dual 10Gb ethernet cards - one for the incoming feeds from the exchanges and the other for the outgoing packets. Right now I don't split it up like that, but I know I'll need to in the real UAT testing. But today I just wanted to get as close to "real" as possible - given that I don't have a lot of the supporting data sources I'm going to need before UAT.

The big missing data source is the mapping of the exchange symbol to "security ID" - an internal unsigned integer that is generated in the database and used for all references to an instrument. I'm expecting that it'll be a simple data service where I'll open up a subscription channel to the data service and issue calls to map the symbols to security IDs. I have the code to map these (in both directions) so it's only necessary to get this data once, but I need a source of this data.

Well... not really. For these tests all I need is a unique ID for these guys. So let's make them up. Easy. I'll implement the "lookup" method on the class to generate a uuid_t, which is a random 128-bit number, and use the first 64 bits as the "security ID". I'll pass this back to the mapping method and it'll think it's real. For the sake of these tests, it's good enough, as we just need these IDs in order to check the conflation of the data stream.
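In sketch form, the fake lookup is something like the following. The class and member names are made up for illustration - and where the real code generates a uuid_t and keeps the first 64 bits, this sketch uses a 64-bit PRNG from `<random>` so it stays self-contained. The cache makes repeated lookups of the same symbol return the same ID, which is all the conflation test cares about:

```cpp
#include <cstdint>
#include <map>
#include <random>
#include <string>

// hypothetical stand-in for the real security ID data service client
class FakeSecurityIDMapper {
public:
  FakeSecurityIDMapper() : mGen(std::random_device()()) {}

  // return a stable, made-up "security ID" for an exchange symbol
  uint64_t lookup( const std::string & aSymbol )
  {
    std::map<std::string, uint64_t>::iterator it = mCache.find(aSymbol);
    if (it != mCache.end()) {
      return it->second;
    }
    // no cached ID - make one up and remember it
    uint64_t id = mGen();
    mCache[aSymbol] = id;
    return id;
  }

private:
  std::mt19937_64                 mGen;
  std::map<std::string, uint64_t> mCache;
};
```

The downstream mapping code never knows the difference - it just sees a 64-bit ID per symbol.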

When I fired it up running on 1/24th of the OPRA feed it used under 10% of one CPU. A full-tilt feed using less than 10%! You gotta be kidding! I watched it for a while and if you toss out the CPU usage of the terminal that is streaming the log data, it's well under 10%. From time to time it spikes to 40% - not quite sure what that's all about, but it's not even 1% of the time.

If we assume we can get 4 channels on one CPU - OPRA being 24 channels, that's 6 CPUs for the whole feed - and factor up the memory, it looks like we can get the complete OPRA feed on one 8 CPU box with 32 GB RAM, with headroom. If we add another box for all the remaining feeds, which is a reasonable estimate, we're fitting the complete ticker plant into two small boxes. Pretty wild.

Past wild... that's better than I'd have ever believed. Sweet.

Acorn 2.5.1 is Out

Monday, September 27th, 2010

I got a tweet today saying that Acorn 2.5.1 was out with a nice list of fixes and additions. Very nice. Still my preference over Elements and Gimp. Simple, powerful, clean. Great Mac software.

Lots of Progress on Conversational Data Service

Friday, September 24th, 2010

Today I got a lot of good work done on the conversational data service. I decided to go with an instance variable approach to integration. Basically, I looked at the inheritance scheme but in the end I wasn't really happy with the way that was going to work. It wasn't easy to handle multiple services... it was going to require more code to be written by the user... it was going to be difficult to work it into the exchange feed's cache... lots of little things. But if I think of it as an instance variable to a class, then it's almost a client to the MMD that allows me to "publish" data to it. It might even be possible to make a single, unified client for the MMD - one that can both request and provide. Might be interesting.

Anyway... today was a lot of good progress on the design and getting the headers written. Lots to do, but at least I have a nice, clear path now.

Google Chrome dev 7.0.517.13 is Out

Friday, September 24th, 2010

I was updating the configuration of WP Super Cache on my WordPress installs at HostMonster this afternoon and noticed that Google Chrome was saying there was an update - I said 'yes', and saw that it's up to 7.0.517.13. OK... nothing in the release notes on this, maybe I'm just a little early.

As for the "Don't be Evil"... the problem is that Chrome is a really good browser. It's a few of the people in management that I have issues with. So that's the rationalization I'm going with. Yup. It'll work.

Adding Ant, JUnit to Ticker Plant Project

Friday, September 24th, 2010

I had to take a little time-out this afternoon to install Ant and JUnit on the development machines we're using so that the Team can start working on the Java side of things. I have to say that while I'm not a huge fan of Ant, it's adequate, and the rules that people are building are pretty impressive. Of course, there's very little that you can't do in GNU make, as I've learned, but I can see that the Java/XML generation doesn't want to have to learn what I've learned in order to have a decent, stable build system for Java. So be it. It's not horrible.

JUnit is in the same boat, in my book: it's adequate, and everything it does could be done by your own test framework, but this one already exists and everyone knows it, so it makes sense to use it.

It took me a little time to get things all set up and configured properly - about 30 mins. Yeah, that's about as fast as I've ever done it. It helped to have used both a lot, and to know how to lay out the project to get it all working easily. Once I had it all in I just needed to run a few tests and check it all in.
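For reference, the build file ended up being roughly this shape - a minimal sketch, not our actual layout (the directory names are assumptions, and the junit task needs junit.jar on Ant's classpath):

```xml
<project name="ticker-client" default="test" basedir=".">
  <property name="src.dir"   value="src"/>
  <property name="build.dir" value="build/classes"/>

  <target name="compile">
    <mkdir dir="${build.dir}"/>
    <javac srcdir="${src.dir}" destdir="${build.dir}" includeantruntime="false"/>
  </target>

  <!-- runs every *Test class it finds in the source tree -->
  <target name="test" depends="compile">
    <junit printsummary="yes" haltonfailure="yes">
      <classpath>
        <pathelement location="${build.dir}"/>
      </classpath>
      <batchtest>
        <fileset dir="${src.dir}" includes="**/*Test.java"/>
      </batchtest>
    </junit>
  </target>
</project>
```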

Now they can get started on the Java client.

Working on the Simple Data Service

Thursday, September 23rd, 2010

The first data service I wanted to create was the simplest as well - something that could take a simple C++ variant and expose it to the Broker under a provided service name for query and update. There are two basic ways the Broker's clients interact with the data services: 'call' and 'subscribe'. The first is a "give it to me" idea, and the second is a "give it to me and keep it up to date" plan. At least as far as the simple data service goes.

What I wanted to build was a very simple API for the service so that a user would be able to simply provide the variable and a service name and the rest of the "magic" would be done by the base service class. It's a nice idea - one application could then "publish" a bunch of values and services by simply gathering the data and publishing it with one of these "one-line" calls. Sweet.

Well... today was the day to build it. It's the easiest form of the service as it's a single variant and I just need to be able to send it back to the caller and then track updates. Thankfully, I've talked a lot with the designer of the broker and know just what I want to do for tracking the changes. It's going to be a very simple transaction system: the user starts a transaction on the object, makes changes, and then commits them - and when that happens, the changes are sent to all the registered listeners for that variant.

One of these registered listeners will, of course, be the service so that when the changes are sent, it can package them up and send them to all the subscribers. Pretty simple.
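In sketch form, the transaction scheme looks something like this - a toy value holder with begin/commit and registered listeners. All the names here are hypothetical stand-ins (the real thing works on the variant), but the shape is the same: staged changes are invisible until commit, and commit fires every listener once:

```cpp
#include <functional>
#include <string>
#include <vector>

// toy stand-in for the variant: a string value with transactional updates
class TxnValue {
public:
  typedef std::function<void(const std::string &)> Listener;

  // register a listener to be told of committed changes
  void addListener( Listener aListener ) { mListeners.push_back(aListener); }

  // start a transaction - changes are staged, not sent
  void beginTransaction() { mPending = mValue; mInTxn = true; }

  // stage a change; listeners see nothing until commit
  void set( const std::string & aValue )
  {
    if (mInTxn) mPending = aValue; else mValue = aValue;
  }

  // commit: apply the staged value and fire all the listeners once
  void commit()
  {
    mValue = mPending;
    mInTxn = false;
    for (size_t i = 0; i < mListeners.size(); ++i) mListeners[i](mValue);
  }

  const std::string & value() const { return mValue; }

private:
  std::string           mValue;
  std::string           mPending;
  bool                  mInTxn = false;
  std::vector<Listener> mListeners;
};
```

The service itself is just one of these listeners - on commit it packages up the change and pushes it out to the subscribers.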

There were a lot of little things that needed to be worked out - especially with the transactions. But in the end, I have something that's working just fine for the simple case.

Tomorrow: Start on the conversational mode.

Flash Player 10.1.85.3 is Out

Thursday, September 23rd, 2010

I'm not a big fan of Flash, but I've had to develop in it, and I know that every little bit helps. So when I saw this morning that they had an upgrade - even the hardware accelerated version that should decode H.264 in hardware - I took a look. Hardware decoding isn't a lot of what I do, so maybe it's not all that exciting, but it's at least fixing security holes... So I got it.

Building the Other Side of the Broker – The Service

Wednesday, September 22nd, 2010

Today has been spent putting together the design and initial coding of the other side of the broker - the service. Now that I've got the client working, and working pretty well, actually, it's time to be able to offer up data to the clients by becoming a service. There's really nothing new here - other than I can't control the multiplexing of the different requests. They are all going to come into one socket and that's controlled by the Broker. If I want to have multiple socket connections to the Broker, then I can split up the load, but I have no real control over the multiplexing. So I have to account for that in the design.

The problem with the design of the service is that there are two kinds of services I'd like to have, and they really need totally different ways of interacting with them. For one use case, I'd like to just give something a value and a name to publish it under. The service will then take this value (as a pointer or reference) and talk to the Broker, register this name, handle all the requests, and when and if I happen to change the value, it will send updates to those clients registered to receive updates.

The second one is more conversational in nature. This time, I'd like to be able to handle all the interaction with the clients. It's much more client-specific as we have to keep track of the state of the conversation with each client, but it allows me the greatest flexibility, and I have need of this guy as soon as I can write it.

So it's been a day of writing headers, working out the details, thinking again, fixing up the headers, and then, when it all seems solid, starting to implement the first class.
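The conversational flavor, in toy form, came out looking roughly like this - all the names are hypothetical, and `sendTo()` here just records outbound messages in memory instead of going through the Broker, but the shape is the point: the subclass owns the per-client conversation, one callback per inbound message:

```cpp
#include <map>
#include <string>
#include <vector>

// toy, in-memory sketch of the conversational service flavor
class ConversationalService {
public:
  virtual ~ConversationalService() {}

  // the broker side would call this for each inbound client message
  virtual void onMessage( int aClientID, const std::string & aMsg ) = 0;

  // what this toy "sent" to a given client - for inspection
  const std::vector<std::string> & sentTo( int aClientID ) { return mOut[aClientID]; }

protected:
  // in the real thing, this would go back out through the Broker
  void sendTo( int aClientID, const std::string & aMsg ) { mOut[aClientID].push_back(aMsg); }

private:
  std::map<int, std::vector<std::string>> mOut;
};

// a trivial service: answers each "ping" with a "pong", anything else with "?"
class PingService : public ConversationalService {
public:
  virtual void onMessage( int aClientID, const std::string & aMsg )
  {
    sendTo(aClientID, (aMsg == "ping") ? "pong" : "?");
  }
};
```

The simple flavor is the opposite: the base class owns the whole conversation and the user just hands it a value and a name.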

Transmit 4.1.2 is Out

Wednesday, September 22nd, 2010

I got a tweet today saying that Transmit 4.1.2 is out, with a nifty URL that started Transmit and initiated the auto-update. Pretty slick. The big change in this release is a fix to the new Transmit Disk feature - sans MacFUSE - for 64-bit kernel machines. Every time I use it, I'm amazed at the way it works. Seamless, graceful, amazing.

Updating the Broker Client to Handle Ping-Pong Protocol

Tuesday, September 21st, 2010

This afternoon has been spent upgrading the features of my broker client so that it can handle a greater variety of messages and conditions. The first thing I had to realize was that there are more connection types than the two I had originally thought. There is the 'call' and the 'subscribe', but those names (I've come to learn) are a little misleading.

The 'call' is really a one-time pull of a specific answer from the data service on the other side of the broker. This is RPC. It can be thought of as a lot more than that, but that's what it really is. The protocol implements it as a 'call' request to the service, and the service responds with a serialized data structure that the client decodes. No problem.

The 'subscribe' is really a request for an open channel to the service, and for that channel to remain open until it's asked to be closed. In the simplest case, the 'subscription' can be for a data item, like a table of values, that then updates asynchronously at the client. Each time the data on the data service updates, it sends "deltas" to the subscribed clients, and they receive it and update their copy of the same data structure.
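The delta idea, in sketch form: the client holds a local copy of the table and applies each delta as it arrives. The types and names below are made up (the real wire format is a serialized variant), with the convention - also an assumption for this sketch - that an empty value means the key was removed on the service side:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// a "delta" here is just a list of (key, new value) pairs;
// an empty value means the key was removed on the service side
typedef std::vector<std::pair<std::string, std::string>> Delta;

// apply one delta to the client's local copy of the table
void applyDelta( std::map<std::string, std::string> & aTable, const Delta & aDelta )
{
  for (size_t i = 0; i < aDelta.size(); ++i) {
    if (aDelta[i].second.empty()) {
      aTable.erase(aDelta[i].first);
    } else {
      aTable[aDelta[i].first] = aDelta[i].second;
    }
  }
}
```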

But it can be so much more than that. Unfortunately, that's as far as my C++ client took it. What I learned was that it can be a general communication channel with as varied and complex a conversation between the client and the data service as the service builder wants. It can be a series of questions and replies, setting of values, just about anything you can imagine, and in the end, a 'close channel' command will close the channel and return the socket to the pool for reuse.

What I needed to do was to add in the ability to carry on this more general communication with the data service. It wasn't too hard - I just had to add the ability to send a message to the data service and not worry about what's returned or when. Then I had to implement a listening mechanism on the variant so that when the data service updates it, something can register to act on those updates.

With that, I then had a way to register for the updates (or errors) from the data service so I'd know when the 'pongs' arrived from the service, and I could now send all the 'pings' I needed. I still needed one more thing: an asynchronous read timeout.

When issuing the first request to a data service, I needed some idea of a timeout so that I don't hang there forever waiting on a dead or locked service. With boost asio, it was a lot easier than I thought. My initial entry point for the async read looked like:

  void BEClient::startAsyncMessageRead()
  {
    bool       error = false;
 
    // first, make sure we have something to do
    if (!error && ((mSocket == NULL) || (!mSocket->is_open()))) {
      error = true;
      cLog.warn("[startAsyncMessageRead] the socket hasn't been connected"
                " to a service");
    }
 
    // if all is well, start the receive process for the "header"
    if (!error) {
      using namespace boost::asio;
      async_read(*mSocket,
                 buffer(mInbound.body(), mInbound.size()),
                 boost::bind(&BEClient::asyncHeaderRead, this,
                             placeholders::error,
                             placeholders::bytes_transferred)
                );
    }
  }

and to add in the async read timeout, I only needed to create a boost::asio::deadline_timer as an ivar with the same boost io_service as the socket itself, add a method to act as the target of the timeout, and then change the method to read:

  void BEClient::startAsyncMessageRead( uint32_t aTimeoutInMillis )
  {
    bool       error = false;
 
    // first, make sure we have something to do
    if (!error && ((mSocket == NULL) || (!mSocket->is_open()))) {
      error = true;
      cLog.warn("[startAsyncMessageRead] the socket hasn't been connected"
                " to a service");
    }
 
    // if all is well, start the receive process for the "header"
    if (!error) {
      using namespace boost::asio;
      async_read(*mSocket,
                 buffer(mInbound.body(), mInbound.size()),
                 boost::bind(&BEClient::asyncHeaderRead, this,
                             placeholders::error,
                             placeholders::bytes_transferred)
                );
    }
 
    // if we have a non-zero timeout, then start it now
    if (!error && (aTimeoutInMillis > 0) && (mTimer != NULL)) {
      // set the timeout in millis as a time in the future...
      mTimer->expires_from_now(boost::posix_time::milliseconds(aTimeoutInMillis));
      // if it expires, call the right method
      mTimer->async_wait(boost::bind(&BEClient::asyncReadTimeout, this,
                                     boost::asio::placeholders::error)
                        );
    }
  }

and then to capture the timeout, we only need to have a very simple method:

  void BEClient::asyncReadTimeout( const boost::system::error_code & anError )
  {
    // if the timer was simply cancelled (the read completed), do nothing
    if (anError == boost::asio::error::operation_aborted) {
      return;
    }

    // if we have a target, alert them of the error
    if (mTarget != NULL) {
      mTarget->fireUpdateFailed("asynchronous read timeout occurred");
    }

    // let's recycle this socket now... it's all used up
    mBoss->recycle(this);
  }

With this, we can make sure that the client has the ability to specify a timeout (in milliseconds) if they wish, and if that timeout occurs, the async operation will be cancelled and all will be returned to its starting place. It's not ideal, but hey... it's a timeout.

With this all in and tested, I could call it a day. What a day.