Archive for the ‘Coding’ Category

Google Chrome dev 19.0.1061.1 is Out

Wednesday, March 7th, 2012

V8 JavaScript Engine

This morning I noticed that Google Chrome dev 19.0.1061.1 is out, and it's got a few nice things in it: a new V8 JavaScript engine (3.9.13.0) and support for remote file systems - could be interesting stuff. Glad to see they're still making improvements in V8 - that's something I think I'm going to end up working in sooner rather than later, and it'll be nice to have some improved performance there.

Converting CVS Repos to Git Repos

Friday, March 2nd, 2012

This morning I wanted to try to get some of my CVS repos converted to Git so that I could have the complete repo on my laptop and not have to worry about internet connectivity. I'm a big fan of CVS and its simplicity, but Git is the clear next generation of CVS, and it doesn't need the connection to the server that CVS does.

Recently, I'd read that there was a simple git command - git cvsimport - that would convert a repo, and I just had to try it. The first thing I found was that I needed a program called cvsps. This is a separate tool - not part of CVS, not part of Git - that I needed to get. I realized this when I tried to convert my first repo and the command failed, saying it couldn't find the app. So first things first: get the tools I need.

Getting cvsps

A simple Google search revealed that the source code for cvsps lives on a very simple web site: http://www.cobite.com/cvsps/. I downloaded the latest stable code and read the README. It's a simple make; make install, and I'm on my way:

  $ cd cvsps-2.1
  $ make
  $ sudo make install

The cvsps executable is now in /usr/local/bin, with its man page in /usr/local/share/man. That's all we need - over and above Xcode 4.3 and its command line tools (cvs, git).

Get a Local Copy of the CVSROOT

While I've read that this can be done using a remote pserver $CVSROOT, it's a good idea to just get a local copy of the complete CVSROOT to work from. Since the repository is in a stable state, it was pretty easy to copy it to my Time Machine external drive:

  $ cd /Volumes/Reststop
  $ scp -r frosty:/usr/local/CVSroot .

It only took a few minutes, and now I've got all the "source material" I need.

Migrate a Single CVS Repo

The process is pretty simple - you act as if you're starting a new Git repo, but instead of the git init command, you issue the git cvsimport command. It's got a few arguments, but it's easy to use.

For this example, I'm giving the new Git repo the same name as the old CVS repo, but I'm guessing you can change the name if you want.

  $ cd git
  $ mkdir MyProj
  $ cd MyProj
  $ git cvsimport -p x -v -d /Volumes/Reststop/CVSroot MyProj

At this point, you have a new git repo but the origin is not set, so it's not "going" anywhere if you try to push it. Since I'm using gitosis on my home git server, it's a simple process to update its config to add the project(s) I'm migrating to the proper groups, and then push those changes up to the server before I try to set the origin of the new git repo.
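
For reference, that gitosis update is just an edit to gitosis.conf in the gitosis-admin repo - the group and key names below are made up, but it's roughly this shape:

  [group projects]
  members = bob@laptop
  writable = MyProj

Commit that, push gitosis-admin up to the server, and the server will accept the new repo when it's pushed.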

Setting the Origin

Assuming you have the server set up - it could be that you're using GitHub and not gitosis - all you need to do is set the origin and push it up:

  $ git remote add origin git@git.myplace.com:MyProj.git
  $ git push origin master:refs/heads/master

Final Steps

It's possible to now go in and mess with the git config to set the master right, but I've found it's just as easy to remove this new repo and clone it again from the server. If I got it right, the history will be there and I'll know it works. If not, I can start over. Simple.

It's been a lot of fun getting these guys over to Git. Now I can apply all the tools and tricks I've picked up with Git in the last year to these projects. Very nice!

Added Postgres Failover Code to Greek Engine

Wednesday, February 29th, 2012

High-Tech Greek Engine

One of my favorite things is to work with databases in code. Persistence and database hits are a blast because they give you a place to save stuff that, if you design it right, you can view from just about any tool on the planet. Can't say the same for redis or mongoDB. My Greek Engine gets its instrument data from a local replica of a master postgres database, and should the local copy fail - or be down - it should auto-reconnect to the master and just function off that one. If the master is dead… well… that's when it's time to get serious about getting things working.

The first thing I needed to do was to consolidate all the database activity into as few places as possible. Thankfully, I had a simple execute() method that did about 90% of what I needed. It took just a few minutes to make that the only way to hit the database, and then I could focus on making it a little more fault-tolerant.

The idea is simple, really: put it in a retry loop, limit the number of retries, and then for each retry, hit The Broker for the correct database connection parameters to use. If the Broker is wrong, then I'm in real trouble, but it's not, so I'm OK. (Famous last words.)
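
In rough strokes, the retry loop looks something like this - a minimal sketch assuming libpq, with a hypothetical Broker::connectionInfo() standing in for the real Broker lookup (none of these names are the actual Greek Engine code):

  #include <string>
  #include <libpq-fe.h>

  // hypothetical stand-in for the real Broker lookup -- returns a libpq
  // conninfo string for the database we should be hitting right now
  namespace Broker {
    std::string connectionInfo();
  }

  static const int  kMaxRetries = 3;

  bool execute( const std::string & aSQL )
  {
    for (int attempt = 0; attempt < kMaxRetries; ++attempt) {
      // ask The Broker which database to use -- the local replica if
      // it's up, the master if it isn't
      std::string  conninfo = Broker::connectionInfo();
      PGconn      *conn = PQconnectdb(conninfo.c_str());
      if (PQstatus(conn) != CONNECTION_OK) {
        PQfinish(conn);
        continue;
      }
      PGresult        *res = PQexec(conn, aSQL.c_str());
      ExecStatusType   st = PQresultStatus(res);
      bool             ok = (st == PGRES_COMMAND_OK || st == PGRES_TUPLES_OK);
      PQclear(res);
      PQfinish(conn);
      if (ok) {
        return true;
      }
    }
    // out of retries -- time to get serious about getting things working
    return false;
  }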

Add a little logging, remove some error codes, and we're ready to go. It really didn't take me all that long, and the results are much better. When, and if, the local database goes down, we'll fail over to the master. When we get the local copy back up, we can issue an IRC command to repeat the process, and the local one will again be used. Simple. Clean.

Great.

Refactoring Out the TBB concurrent_vector

Wednesday, February 29th, 2012

This morning I came in to see that some of the exchange feeds on one of the staging boxes of mine hadn't shut down properly. When the exchange test data flooded in, it made a mess, and that was no good at all. The only code that seemed to matter was a simple iterator on the TBB concurrent_vector. I've had issues with this code before - and always moved away from it in favor of a simple std::vector and a mutex of some sort. Here was another case of the exact same thing.

Now I'm not saying that the concurrent_vector is a mess, but I think that it, along with the concurrent_hash_map, is a little trickier than normal to work with. The iterators have built-in locks, and that makes it very easy to write dodgy code. I think that's what happened, but I can't prove it.

Far easier to use a simple std::vector and then a TBB spin_rw_mutex_v3 to protect it. Virtually all the access to the vector is read-only; there's really only one method that adds to it and another that removes from it. Those are easy write locks, and they happen at startup and shutdown. Easy.

The rest of the time, the r/w mutex will be essentially a no-op, and that's fine with me. The refactoring was easy because the vector operations stay the same, and most (say 80%) of the use cases are simple iterations over the vector's contents. All I needed to do was put the scoped locks in the right places, and we're ready to go.
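
As a rough sketch of that pattern - TBB's spin_rw_mutex (the spin_rw_mutex_v3 class behind the typedef) guarding a plain std::vector, with purely illustrative names rather than the real feed classes - it looks something like this:

  #include <vector>
  #include <algorithm>
  #include <tbb/spin_rw_mutex.h>

  class FeedList {
    public:
      // writer lock -- only taken at startup
      void add( int aFeedID )
      {
        tbb::spin_rw_mutex::scoped_lock  lock(mMutex, true);
        mFeeds.push_back(aFeedID);
      }

      // writer lock -- only taken at shutdown
      void remove( int aFeedID )
      {
        tbb::spin_rw_mutex::scoped_lock  lock(mMutex, true);
        mFeeds.erase(std::remove(mFeeds.begin(), mFeeds.end(), aFeedID),
                     mFeeds.end());
      }

      // reader lock -- the common case, and essentially free when uncontended
      bool contains( int aFeedID ) const
      {
        tbb::spin_rw_mutex::scoped_lock  lock(mMutex, false);
        return (std::find(mFeeds.begin(), mFeeds.end(), aFeedID) != mFeeds.end());
      }

    private:
      std::vector<int>            mFeeds;
      mutable tbb::spin_rw_mutex  mMutex;
  };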

In the end, this is just as clean, probably faster, and a lot better understood. Good move.

Tracking Down a Tricky Problem

Tuesday, February 28th, 2012

I just finished spending a good hour tracking down a nasty little problem in the logic I had for creating new instruments on the fly. The problem turned out to really be me and my preconceived notions about what the problem was, but that's typically the case. I was convinced that the first message for a new instrument wasn't creating the underlying, but in fact, it was. That explained why I was seeing no errors.

No… the real problem was that I wasn't properly handling the case where I found it. The instrument was created, but the next time I looked it up, it appeared to be missing - or so the code thought. In reality, I had failed to detect that I'd found it, and to act accordingly.

It's almost a coding standard in my mind now - for every 'if' statement, there had better be an 'else' clause. It would have saved me this headache: when I saw it, it was clear that I was missing the else, and what to put in it when the value wasn't NULL.
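
Here's a tiny sketch of that rule, with hypothetical names rather than the actual instrument code; the shape is the point - the "found it" case gets its own branch instead of silently falling through:

  #include <map>
  #include <string>

  struct Instrument { std::string symbol; };

  Instrument *lookupOrCreate( std::map<std::string, Instrument *> & aCache,
                              const std::string & aSymbol )
  {
    std::map<std::string, Instrument *>::iterator  it = aCache.find(aSymbol);
    if (it == aCache.end()) {
      // really not there -- make it and remember it for next time
      Instrument  *inst = new Instrument();
      inst->symbol = aSymbol;
      aCache[aSymbol] = inst;
      return inst;
    } else {
      // found it -- the branch I'd skipped, and the source of the headache
      return it->second;
    }
  }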

Glad that's over. It was painful.

Refactoring Like a Bandit to Fix a Bug

Tuesday, February 28th, 2012

This morning I noticed that I had a problem with the initial volatilities for the options in my Greek Engine. Because the users want me to carry over the calculated values from yesterday's close to this morning, you can end up with a really odd situation: the job that computes the volatilities could have changed their values overnight, and now the new volatilities are different from the old. We can't replace the old with the new, as that would make the calculated results look bad. We can't ignore the new and stick with the old (but that's just what we were doing).

What we needed to do was load up the new values and leave the old ones as an output value of the calculations - just like the quote and spot values. This meant I needed to refactor a good chunk of code and add a new ivar to the Instrument - the volatility, right next to the historical volatility. I then converted them from double values to uint32_t so they're a lot easier to handle, and put in the setters and getters that allow me to update them as needed - even from the StaticData object that reads in updates from the database.

All told, it was a good chunk of code in three major classes, but when I built it all and ran it, everything worked as you'd expect. Now there's an "output vol" and the instrument vol, so you can see when they differ, but the clients get the old value until the new value goes "active" with a calculation.
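
The shape of the Instrument change is roughly this - illustrative names only, not the actual class:

  #include <stdint.h>

  class Instrument {
    public:
      Instrument() : mVolatility(0), mOutputVolatility(0) { }

      // the vol loaded from the overnight job (updated via StaticData)
      void      setVolatility( uint32_t aVol )       { mVolatility = aVol; }
      uint32_t  getVolatility() const                { return mVolatility; }

      // the vol the calculations produce -- what the clients see until the
      // newly loaded value goes "active" with a calculation
      void      setOutputVolatility( uint32_t aVol ) { mOutputVolatility = aVol; }
      uint32_t  getOutputVolatility() const          { return mOutputVolatility; }

    private:
      uint32_t  mVolatility;
      uint32_t  mOutputVolatility;
  };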

It's clean, and I like it a lot more than what I had. I'm just sad it took me this long to find it.

It’s Hard for Me to Know When to Draw the Line

Monday, February 27th, 2012

Today has been a really hectic day with a lot of issues in testing brought up by someone who's a decent guy - kinda like a beer-drinking frat boy: lovable, but you'd never want him dating your sister, and ultimately pretty useless. I'm getting partial sentences from him about bugs, he's clearly very frustrated with the process, and I believe he's closed himself off from learning another thing about this system. It's funny… the same thing that made him a useful tester - being able to find bugs because he gave no thought to what he was doing - is really his personal undoing. He's really frustrated. It shows.

I'm trying to cut him slack. I know he's capable of doing more than he is, but at the same time, every time he acts like an angry frat boy, it's hard to have patience for him. Really hard.

I have said many times - "This is hard. I know it. It's hard, but you can do it." - only to pump him up enough to get through the next 15 minutes and then have him come crashing back to earth even harder than before. He seems to have no patience for the learning process, or at least no interest in what it takes to learn in a place like this. There isn't time to spend several hours with him and take him back to programming basics. He's got a little of the basics, but not enough; he wants to know more, but he's got no foundation to base it on.

It's not easy. This is clearly over his head, and he's being given an opportunity to move out of the simple QA role, but it's up to him. And the way I see it, he's not making it. But it's not because of his ability - or lack of it - it's his attitude. He gets angry as I try to explain something to him. I can see he's angry, and I ask him if he's interested in listening. He says "No, I hate this", and walks off.

OK, choice made, ignorance retained. It's his choice.

But at some point, I simply have no more patience for this. I just don't. But it's hard for me to know when to draw the line. I know people who would have had stern words with him already - for them, it's a zero-tolerance policy when it comes to willful ignorance. But me, I don't want to make it harder on him than it already is. I'm hoping that when he has the patience, he'll listen, and it'll sink in. But I'm beginning to have my doubts.

In the end, I don't know that it'll matter. In the end, I think he'll self-select and that will be that. It's his choice, after all.

Boost Shared Pointers to the Rescue!

Monday, February 27th, 2012

Boost C++ Libraries

Once again, I have found a perfect use for the boost::shared_ptr, and it's saved me a ton of grief. I've been working to refactor the exchange feed recorders, and as I've been doing this, I started getting stability problems in my StringPool class. Basically, it's a simple class with an alloc() method that returns a (std::string *) and lets you recycle the strings when you're done with them. It's used in the exchange feeds, but I've been having issues with it when moving to the new append-style writing in the recorders.

So what to do?

Well… really, the problem is simple. I have a buffer that I fill, and rather than passing that to a write thread and getting another, why don't we create a copy of what we have, clear out what we're using, and just start over? The copy operation isn't bad, and if we use a boost::shared_ptr, we don't have to worry about it going out of scope, and it's easy to pass into the thread.

It's just about as clean as I can imagine. Simple. Clean. Get rid of the StringPool, have just a std::string and then when ready to fire off the write, make a new string smart pointer and use it. Sweet.

  block->pack(buff);
  if ((buff.size() >= HIGH_WATER_MARK) ||
      ((block->when > (lastSaved + saveInt)) && (buff.size() > 0))) {
    // grab the last saved time for the next interval
    lastSaved = block->when;
    // get the timestamp for the Beginning Of Buffer…
    uint64_t    bob;
    memcpy(&bob, buff.data(), sizeof(bob));
    // now let's fire off the thread and write this out…
    boost::shared_ptr<std::string>  done(new std::string(buff));
    boost::thread  go = boost::thread(&UDPExchangeRecorder::write, this,
                                      bob, block->when, done,
                                      isPreferred(aConnection));
    go.detach();
    // clear out the buffer that we're using…
    buff.clear();
  }

and then in the write method, it's very easy to use:

  void UDPExchangeRecorder::write( uint64_t aStartTime, uint64_t anEndTime,
                                   boost::shared_ptr<std::string> aBuffer,
                                   bool aMaster )
  {
    // make sure we have something to do…
    if (aBuffer->empty()) {
      return;
    }
    ...
  }

When the write method is done, the shared pointers will be dropped, and the memory freed. Easy, clean, and very stable. This cleared up all my issues.

Did a Lot of Code Cleanup Today

Friday, February 24th, 2012

Code Clean Up

Today I spent a good bit of time going through a co-worker's code and cleaning it up to be something that I'm OK with in the code base of the project. It's something that I'm used to doing, and while some will think it's the ultimate in micro-management, it's really not. I'm not asking him to do it - I'm the one doing all the work. I hope that he takes just a minute or two to look at what I've done and learn from it, but that's totally optional on his part. I can hope he'll do it, but I'm not planning on him doing it.

But I simply cannot leave this code in as-is. It's just starting to go into production, and to leave in poorly designed, poorly commented code that misses the coding standards at this point is just giving in to the worst of the entropy in this project. I have to hold it together as long as I can, because there will come a day when I have to leave it, and then I can do nothing to prevent this kind of slide.

It's not a bad job; it's just something you have to get into the right frame of mind to do.

Design By Committee Never Works

Friday, February 24th, 2012

It's sad that I don't have a lot of nice things to say about work these days. Very sad. And one of the very saddest things is that I find myself in this current mode of Design by Committee, and it's just crazy. The problem really originates with the idea that the Big Boss wants to build a group of highly-skilled, high-powered developers who can work together and get things done. This model is the opposite of a committee of any kind. It's almost the best of the Cowboy coder. It's good people making good decisions, communicating when they need to, for what they need, but not wasting any time.

It's a dream job, to me. And they sold me on it.

But it's not come to pass. For a while it was close, but we've drifted so far away from that in such a short period of time that it's like a distant memory. And what I'm living in now is as bad a case of micromanagement as I can remember.

So we have the users - several different groups. And they all are competing with each other to get things done. This was, and is, very inefficient, and so to solve that, the business put one guy in charge, and all business requests go through him. It's his job to make sure that the different business groups are on-board. He's the one guy we need to go to to get answers. And unfortunately, he's not checking in with some of the groups.

This is brought to my attention by my manager, who used to run the tech for one of these other groups. He's a nice guy, but he's got some views on how to run projects that I find more than a little stifling, and while I've tried to talk to him, I've given up of late, as it's just not doing any good.

So we have communication problems. We have misrepresentation of users' needs because of that. We have poor management styles. We have bad testing procedures. In short, the only thing I can think of that we're doing right is… OK… give me a sec… Hmmm… well… I can't think of a thing we're doing right. All that's going right is being done in spite of this place.

And if I had to point to one thing - it's the communication. It's so bad, nothing really has a chance. Holy Cow!