Archive for May, 2008

Growl Updated Their Web Site – Nice Look

Monday, May 12th, 2008

growlicon.png

I just happened to be checking on a few things this morning, and one of the little unsung heroes of my day is Growl. I really think it might be the next thing Apple pulls into 10.6. It's just an exceptionally handy tool to have at the system level - like tabbed terminal sessions (from iTerm) and virtual desktops, it's something that a lot of people like me use every day.

Anyway, the point was that for the longest time, Growl had the same web site, and it didn't look all that great in Safari - at least the way I saw it. Today I noticed that they totally revamped the site and it's looking much better. No new releases, but now they show off the different notification styles, along with better docs, more screen shots, and a better look to the whole thing.

Nice update.

Acorn 1.2 is Out!

Monday, May 12th, 2008

acorn.png

Acorn 1.2 is out, and it fixes the one bug I had found: using multiple magic wand selections to build up the exact selection you wanted. In the previous version, each additional selection seemed to shift the image down a bit, so you really couldn't do a proper multiple selection, and I had to resort to other techniques - which worked just fine. In the latest release, I'm glad to say, that bug has been fixed - along with a ton of others - and quite a few new features have been added.

I once read that Acorn is a programmer's image editor, and I agree. I'm not a graphic artist, but I know what I want from my images, and Acorn makes it exceptionally easy to get what I need in and out. Wonderful app.

Twitter’s (Un)Reliability is Really Pretty Bad

Friday, May 9th, 2008

Twitterrific.jpg

Given the state of web services and protocols these days, it's really pretty amazing that Twitter is as unreliable as it is. Before I even started using it, I had read many reports praising the idea and cursing the implementation for its unreliability. New things can start out on shaky ground, but as time goes on they should stabilize and get more reliable. I don't see that happening with Twitter. I'm using Twitterrific, and while I like most parts of the app - and understand they're working on an iPhone version for release in a few months - I don't for a second think the problems are Twitterrific's fault.

So why is Twitter so unreliable? Can it be the load? Possible, but that's something that can be scaled up, and given all the attention it's received, I'm guessing that they aren't hurting for money to buy additional servers. So I don't think it's a matter of hardware.

Could it be the idea behind it? Doubtful - IM clients handle much larger packets of information and they aren't this bad. And given that it's not a 'push' technology, it's hard to understand what the issues could be. Maybe it's the SMS or IM integration... I don't know.

I've heard that it's all about being first. I disagree. Being first is a big factor, but it's not the end-game. Look at VisiCalc. Can't? Exactly my point. They created the spreadsheet, and Lotus and then Microsoft beat them so badly that now most folks don't even know their name. Twitter? Twitter who? That's what might happen if they don't get the reliability up. People will like the idea, and the barrier to entry is nothing for a Google, Yahoo, or Microsoft.

Ideas are great, but eventually you have to deliver. Twitter needs to get busy and deliver on the reliability.

Upgraded to WordPress 2.5

Thursday, May 8th, 2008

wordpress.gif

While it might not seem like much, being able to upgrade WordPress from 2.3.3 to 2.5 was nice to get done today. It takes a little longer for Fantastico at HostMonster to pick up the updates, but they work like a charm, and I haven't lost a thing. Can't beat that. Unfortunately, 2.5.1 is already out, so I'm still behind. Not to worry, it'll catch up... and if the Google Summer of Code project pans out, we'll get caching in the default WordPress install, which would be really nice because the existing caching schemes feel a little non-standard.

I'm not about to get Slashdotted anytime soon, so I'm safe. When caching is built into the WordPress release, I'll get it. But I do wish they had better facilities for privacy. Right now there's really only the ability to mask out the search engines. I'd like an opt-in policy where I give people accounts on the install and only then can they read it. That would be nice.

Got My Kindle – Amazing Gadget for a Reader like Me

Wednesday, May 7th, 2008

Kindle.jpg

When I got home today my Kindle was waiting for me. I'm currently in the middle of A Game of Thrones by George R. R. Martin - about 120 pages in - so I didn't think I'd be using the Kindle until I finished off this 800+ page book. Amazon, and luck, proved me wrong.

One of the nicest parts of the Kindle, I think, is that I can get recent releases at paperback prices. I'm not one to buy a hardback book for myself. I just don't. I wait until books come out in paperback and then get them, which means I'm about a year and a half behind 'current'. For about the price of a paperback, I can get new releases on the Kindle. That's really nice.

So I open up my Kindle and find that it's already been set up for me. That's probably nothing for Amazon to do, but it's an amazing feeling to have it there, out of the box, ready for me. Very cool. I read the Kindle's introduction - on the Kindle itself - and then decided to check out a few books I've wanted to have 'on hand' to read. So I go to the Kindle store.

Very nice experience. Clean, decent. All I could have expected in a wireless delivery system.

So I look up Clive Cussler and get his latest two, both still only out in hardback. It's next to nothing to get them, and $20 later, I have them. It's simple, fast, and enjoyable. I can see why people are calling this the 'iPod of books'. It's not a perfect device, but it's darn close, and they have a very nice online store.

I then look up George R. R. Martin and find that I can get two eBooks - the one I'm reading and the next in the series - for the cost of the second one alone. I get them both, then use the Table of Contents in A Game of Thrones to get to where I am in the 'dead tree' version. My Kindle is ready to use.

Amazing experience. Really. About the only downside I've seen is the carrying case. I'm not sure I'm going to use it - I don't use one for my iPod Touch, I just take care of it - but it would be nice to be able to 'lock' the Kindle so accidental key presses didn't mess with things. Maybe they can add that in a software update.

In any case, I can see that my reading will be changed forever. I'll pick up a 4GB SD card this weekend, plop it in the Kindle, and be set for life. Really - I can't imagine ending up with more than 4GB of books on this thing, and that's a staggering thought. I only wish there were a way to get my physical books onto the Kindle without re-buying them. That's the one beauty of iTunes and music: I can convert what I own by ripping it. Can't do that easily with books. Shucks.

Debugging Socket Problems on Vendor Software

Wednesday, May 7th, 2008

SwissJupiter.jpg

I've spent most of today debugging a problem I saw in a vendor's API to a messaging system. It's not the best vendor I've ever worked with - in fact, I don't think they're even in the top 80% - but they're the vendor I have to work with at the moment, and I've had to try to make the best of it. So here's the problem we ran into today.

We have a price-injecting system. It gets price data over a custom socket interface (which I wrote) from a price feed server, reformats it into the format needed by the vendor's product, and sends it on its way. It's a simple transformation system. Nothing fancy. But the vendor's API is socket-based as well, and as we were about to learn, not done nearly as well as mine.

When the transformer/feeder app was running in Chicago, with the price source in Chicago and the database for the vendor's product in New York, everything worked fine. When the feeder was in London and everything else stayed the same, my feeder missed the messages coming from the vendor's messaging system whenever I was injecting prices.

Inject prices - miss messages. Stop injecting prices - get messages. Make another test app subscribed to the same messages and it always gets them - because it injects no prices. This was getting crazy. So on a whim I decided to use a price source in London for the London feeder - bingo! Now it worked. It appears that the vendor's API can't handle delays on a completely different socket: the long-haul link to the Chicago price source was stalling my process, and that alone was enough to break the vendor's message delivery. It's got nothing to do with how I use the vendor's API - it's the activity on an entirely separate socket that's affecting it.
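To make the failure mode concrete, here's a minimal sketch - hosts, ports, and the message handling are all invented for illustration, not the vendor's actual API - of why a single-threaded feeder that blocks on a slow transatlantic price socket can leave the other socket unserviced, and how multiplexing with select() keeps both drained:

    import select
    import socket

    # Hypothetical endpoints - stand-ins for the price feed and the vendor's bus.
    price = socket.create_connection(("price-feed.example.com", 9100))
    vendor = socket.create_connection(("vendor-bus.example.com", 9200))

    # The naive loop blocks on the slow WAN socket, starving the vendor socket:
    #
    #     while True:
    #         tick = price.recv(4096)   # can stall for ages, London to Chicago
    #         msg = vendor.recv(4096)   # unread all that while; deliveries back up
    #
    # Multiplexed, neither socket has to wait on the other:
    while True:
        readable, _, _ = select.select([price, vendor], [], [], 1.0)
        for s in readable:
            data = s.recv(4096)
            if not data:
                raise ConnectionError("peer closed the connection")
            if s is price:
                pass   # reformat the tick and inject it into the vendor's product
            else:
                pass   # process the vendor's message promptly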

Note that in all of this, my code is working fine. It's the vendor's that stopped working properly. Nowhere in the documentation do they say that excessive waiting on other socket communications will invalidate the delivery of messages - why would they? They probably never tested it, as they probably never had reason to. But when you charge $20 mil for something, you really ought to take a more proactive view of things. For example, don't use a home-grown messaging bus when there are so many commercial ones you could fold into that $20 mil price tag without affecting the bottom line much.

In the end, I think I'm stable now, but there's really no way to know. They aren't going to fix this - I didn't expect them to. They took almost two months to fix the last bug we pointed out, and that was a simple recompile with the right data type for the 64-bit version of the API. This would require real changes in how they do things, and change is not a word I'd use with this vendor.

I just have to say I really hate that it's expected we figure this out. I'm not getting any part of that $20 mil, and yet I've saved their bacon by figuring out how to make their product work in our environment. Crummy vendor.

Amazing the Decisions that Get Made Every Day

Tuesday, May 6th, 2008

GottaWonder.jpg

I was talking to a friend today, and he was fuming over a decision one of the other guys he works with had made. This other person really isn't in a position to make these kinds of unilateral changes - changes that force other systems to change as well - without at least some discussion with the lead developer (my friend), but that's what he did. It's the "better to ask forgiveness than permission" style of work.

My friend tells me this guy needed to add a field to an RPC call, and while it was meant to be an integer, he decided to make it a string because he didn't want to get into parsing problems. The "fear of the integer" so consumed this guy that he told the people downstream of his system to take the string and parse it as necessary. There were problems - because this is clearly a dumb solution - and they had to change it to what it should have been all along: an integer. But the guy still didn't want to change the RPC interface - 'leave it a string', he says. My friend was fuming.
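To show the difference, here's a quick sketch of the two choices - the names are invented, this isn't their actual interface. The typed field enforces the contract once, at the boundary; the stringly-typed field makes every downstream consumer re-implement the parsing, along with all of its failure modes:

    from dataclasses import dataclass

    @dataclass
    class QuoteTyped:
        size: int    # the interface states the type once, up front

    @dataclass
    class QuoteStringly:
        size: str    # "1000"? " 1000 "? "1,000"? every caller must guess

    def downstream_consumer(q: QuoteStringly) -> int:
        # This parsing - and its error handling - gets duplicated in every
        # system downstream of the RPC call, which is exactly the mess the
        # lead developer now has to unwind.
        return int(q.size.strip().replace(",", ""))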

Since this isn't happening to me, I can giggle about it to no end. Here's a guy who shouldn't be developing, because he's not really a developer and doesn't understand why you'd want the interface to match the data type - he's one of those guys who would model every database table as two varchars, one for the name and one for the value, and put everything in there. It would almost be comical if you didn't have to deal with the fallout.

So my friend gets to spend the next several days unwinding all these changes and getting the right data type in there - in multiple systems - all the while cursing under his breath. Yeah... it's funny as long as you don't have to be involved in it.

Once Again, MacVim Amazes Me

Tuesday, May 6th, 2008

MacVim.jpg

There were a few things I had wished were different about MacVim - specifically, the filenames in the tabs showed the complete path, nicely abbreviated, but still there. I asked the support group about it and they came back with the most incredible answers. Now, it's true I've been using vi/vim for about 23 years, and I'm by no means a Vim wizard - I use it, and I'm quick with it, but there's a lot about it I'm not aware of. Today was an education.

The Current Directory of the File

The first thing that struck me was that when you opened a new tab with the File Open dialog, you'd get the complete path to the file as part of the name in the GUI tab. Yes, the path was nicely abbreviated, but it was still there - and tabs I'd opened from the command line didn't have it. When I read the answer, it was obvious: the current working directory of the command-line tabs was well known; that of the ones from the File Open dialog wasn't. So what to do? The answer was to set the working directory for those files as well.

The brilliant answer came from one of the contributors on the mailing list: have an autocommand change the directory for each buffer you enter. The lines in my .vimrc that control this are:

    if (has("gui_macvim"))
        " On entering any buffer, cd to that file's directory;
        " %:p:h expands to the head (directory) of the buffer's full path.
        :autocmd BufEnter * :cd %:p:h
    endif

The Format of the Tab's Filename

The second fix is to force the tab to show just the filename rather than the complete abbreviated path. This is done with a simple set command:

    if (has("gui_macvim"))
        " %t in guitablabel is the tail (filename only) of the buffer's path.
        :set guitablabel=%t
    endif

With these two additions, the two things I really didn't like are fixed. When they get the ATSUI renderer working with mouse support, this is going to be one incredible editor. It's pretty awesome already.

That Fine Line Between Scripts and Applications

Monday, May 5th, 2008

GeneralDev.jpg

I've been working on fixing up a script that takes a Bloomberg field definition file and generates a bunch of SQL statements to populate a database with the field definition data, and it's right on the edge of really needing to be an application. It's all in bash now, but it really ought to be in Perl, or maybe even Java or C++. The file is thousands of lines long, so an application that can easily detect the existing data and update only the relevant records would go a long way toward making this better. Perl could do it; this bash scripting is just awfully limited.

Oh, I'm going to finish it, because I'm nearly done and there's nothing in the requirements of the task that can't be done with bash and the other unix tools. It's just that had I known up front about the issues with this updated file from Bloomberg, I'd probably have opted for a Perl script from the get-go.

There are spurious backslashes in the file, so I have to sed them out into a temporary file. Then there are the problems with the intended primary key: this time around, the file has duplicates on that column, because from Bloomberg's point of view it was never meant to be unique - I was only using it as a primary key because it happened to be unique in the first version of the file. Silly me.
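Just to show how little code those two cleanup steps would be in a scripting language - this is a sketch in Python rather than the Perl I mentioned, and the file names, delimiter, and key column are all assumptions, since the real Bloomberg format isn't shown here:

    # Sketch only: assumes a pipe-delimited file whose first field was
    # serving as the "primary key". Both file names are made up.
    seen = set()
    with open("fields.def") as src, open("fields.clean", "w") as dst:
        for line in src:
            line = line.replace("\\", "")   # strip the spurious backslashes
            key = line.split("|", 1)[0]     # assumed key: the first field
            if key in seen:
                continue                    # skip duplicate "primary key" rows
            seen.add(key)
            dst.write(line)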

So something that should have taken five minutes is now in its second hour as I find each of these issues in the 9000+ line file. The box it's running on isn't slow; it's just that there's a lot to do, and I'm not being very efficient because each time through I'm blowing away all the data and regenerating it. Again, I was thinking this was 'easy'. Silly me.

The lesson in all this is that no one is right all the time, and even if you're right today, tomorrow will bring facts and circumstances totally unknown today that make the decision wrong. We have to be flexible, willing to see what's right and what's wrong, and fix it - even if that means rewriting the entire process.

One of My Favorite Databases – PostgreSQL

Monday, May 5th, 2008

PostgreSQL.jpg

About eight years ago I was doing a project at the place I used to work, and I needed a Linux-compatible database with C/C++ and Java bindings. At the time, MySQL was very popular, but when I looked at it I was struck by its lack of foreign keys and stored procedures. PostgreSQL had both, and while it wasn't as fast as MySQL, speed mattered less to me then than the completeness of the database's features. Time has passed, MySQL has gotten better, new databases for Linux have come into being, and PostgreSQL is still going strong. I've never regretted the decision I made all those years ago.
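For anyone who hasn't seen them, those two features look something like this - a minimal sketch, with the table names and the 'demo' database invented, using the psycopg2 driver from Python rather than the C/C++ or Java bindings I was using back then:

    import psycopg2  # assumes a reachable PostgreSQL server and a 'demo' database

    conn = psycopg2.connect("dbname=demo")
    cur = conn.cursor()

    # Foreign keys: the database itself refuses a book whose author doesn't exist.
    cur.execute("""
        CREATE TABLE authors (id SERIAL PRIMARY KEY, name TEXT NOT NULL);
        CREATE TABLE books (
            id        SERIAL PRIMARY KEY,
            title     TEXT NOT NULL,
            author_id INTEGER NOT NULL REFERENCES authors(id)
        );
    """)

    # Stored procedures: in PostgreSQL these are server-side functions.
    cur.execute("""
        CREATE FUNCTION book_count(a INTEGER) RETURNS BIGINT AS $$
            SELECT count(*) FROM books WHERE author_id = a;
        $$ LANGUAGE sql;
    """)
    conn.commit()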

Both MySQL and PostgreSQL have corporate backing now, and I'm glad for each - the competition is good for the users. Most of the time my PostgreSQL databases just hum along in the background, but today I'm messing with one, and it's as much a joy to use today as I ever remember it being.

Everything I need is there. And it's fast. Sure, it may not beat a clustered Oracle installation at 22.5 million rows, but then again, when I have a database that big, I'm sure I'll have the money for something equally expensive - and it might well be PostgreSQL with its own clustering solution. For the databases I'm using - under 100,000 rows per table - this is more than enough. It's fast, it configures itself dynamically, and it simply just works.

If only more commercial products did that. PostgreSQL... get it. You'll be glad you did.