Archive for January, 2010

Apple Security Update 2010-001 on Software Updates

Wednesday, January 20th, 2010

This morning Apple released a new Security Update for Snow Leopard via Software Update. It's got about a half-dozen fixes to core components: CoreAudio, CUPS, ImageIO, etc. You have to stay up to date or risk getting hit by one of the script kiddies. Sad, but true.

Coda 1.6.10 is Out

Wednesday, January 20th, 2010

This morning I noticed that the guys at Panic have released Coda 1.6.10 - which is great. It's still the all-in-one editor I want to use for my next project on the Mac; I just haven't had time to get to it. The CSS, JavaScript, and HTML editing, along with the preview and publishing, make it just what I want to use.

TextWrangler 3.1 is Out

Wednesday, January 20th, 2010

The guys at Bare Bones have updated their free editor, TextWrangler, to version 3.1. It's the little brother to BBEdit, and it's a great alternative for those times when you don't need everything BBEdit has, or for those who just can't see the need to pay that much for a text editor. In any case, it's nice to have as a backup.

Unison 2.0.2 is Out

Wednesday, January 20th, 2010

The guys at Panic have been busy... they also released Unison 2.0.2 this morning, with a lovely new feature: the unread article count is now shown in the display. Very nice. There are still a few visual issues - most notably, the window width seems to be set up such that it can't be as narrow as I'd like and still display things properly - but I can send them a note about that.

Still a terrific newsgroups client.

Finished the AJAX for the Fusion Page

Tuesday, January 19th, 2010

I spent the latter part of today getting all the AJAX and client-side (JavaScript) post-processing working on the 'Fusion' page with the graph and table I started the other day. Getting the data wasn't bad, but unlike the previous table I created with collapsing groups, these groups aren't defined up front - they're contained in the table data itself. That means I don't have to maintain separate group definitions, but it also meant changing everything about the old scheme, because I no longer had those definitions to work from.

It wasn't horrible, but it took a little bit of thinking to get the summations, indentation, sorting, tagging, and collapsing all working. It should have been a couple of hours of quiet work which, of course, turned into an entire afternoon because I don't have quiet, and making my own (with my fingers in my ears) doesn't make for easy typing.

Yes... I wish I had quiet. But I don't.

In the end, I've got the code I needed, and it's off to the London users for them to kick around and send me any changes they might have.

Bad Code Just Keeps on Giving…

Tuesday, January 19th, 2010


I know it's been a recurring theme of late, but I just can't help commenting on the fact that a Good Coder can take Bad Code and keep it running far, far longer than is probably good for the organization. It also keeps on giving to the coder long after he's tried to fix things up, primarily because you can't really re-write the bad code - then it'd be a new project. No, management wants you to fix it, and quickly.

Well, today I came across another lovely little "gotcha" in this application I've inherited. The designer tried to build a nice inheritance system for Stocks, Futures, and Options, and to his credit the ideas are basically sound, but the implementation is... well... let's just say not what I'd have done. But it's there. That's not the problem.

No, today I realized that a new portfolio holds basically a bunch of stocks, but in order to get the pricing and calculated data for the instruments, I had to do a special type of query for futures that have no options on them. It's as if the data source was written to supply data for stocks, futures, and options - but only if you had options on the underlyings. If you just had a stock - too bad.

So they made this hack where a future was treated as basically a stock. Not really, of course - the whole expiration thing gets swept under the rug... but it's what I'm stuck with. I had to write code to make stocks from futures, put it into the code base, and write all the JUnit tests, etc., in order to get this one portfolio evaluating properly.
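
Just to give a flavor of the kind of shim this forced on me, here's a rough sketch - purely illustrative, with made-up class and method names, not the actual code from this codebase:

    // Purely illustrative - none of these names come from the real codebase.
    class Future {
        private final String symbol;
        private final double price;

        Future(String symbol, double price) {
            this.symbol = symbol;
            this.price = price;
        }

        String symbol() { return symbol; }
        double price()  { return price; }
    }

    class Stock {
        private final String symbol;
        private final double price;

        Stock(String symbol, double price) {
            this.symbol = symbol;
            this.price = price;
        }

        String symbol() { return symbol; }
        double price()  { return price; }
    }

    public class FutureToStock {
        // The hack in a nutshell: present a future that has no options on it
        // as if it were a stock, so the stock-only pricing path will accept
        // it. Expiration is simply ignored - which is exactly the problem.
        static Stock asStock(Future f) {
            return new Stock(f.symbol(), f.price());
        }

        public static void main(String[] args) {
            Stock s = asStock(new Future("ESH0", 1136.25));
            System.out.println(s.symbol() + " @ " + s.price());
        }
    }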

Not my idea of fun. And I'm sure it's not going to be my last realization about this codebase.

Dense Visualizations in the Finance Industry

Friday, January 15th, 2010


I think one of the things I really like about the Finance Industry - certainly about creating applications and visualizations for it - is the density of those visualizations and applications. Most finance applications run on machines that have a ton of other things running, and the users want to see as much data as possible. I've seen traders with dual 30" monitors and an additional three 19" monitors, all tied to a raft of machines - all just to get the data they need in front of their eyes for the trades. It's pretty impressive.

I have to say I'm the same way. I love the visual density of information in graphs and well-designed tables. So when the users in London asked me to essentially fuse two pages together - and two of the more complex, active pages at that - I had to rise to the challenge.

This fusion page has the Google AnnotatedTimeLine widget on it for the intraday values of several portfolios, and it also has a Google Table widget to contain the product-level values for the components of those portfolios. There are a few other things, but these two are going to update independently of each other, triggered by the same timer event, so they will be close, but not really in sync.

Today I did most of the HTML/CSS layout to get the components on the page. This is necessary because the AnnotatedTimeLine is really two widgets stacked on top of each other, functioning as a double-buffered system: we draw to the 'back' one, then flip them. Pretty simple, but needed because of the delays in updating the ATL with new data.

I'll be able to put the AJAX behind this when I get back on Tuesday. Nice to have a three-day weekend!

Optimizing jTDS packetSize for MS SQL Server

Thursday, January 14th, 2010

While doing some network testing/optimization recently, one of the network guys suggested I look at the jTDS parameter packetSize. He thought it might be worth trying if all else failed.

Since I had pretty much gotten to that point, I decided this morning to do those tests, and at the same time take a look at what H2 might say about performance tuning - as that was the destination of the data, after all.

The first step was to change the datatype in the database. According to the H2 docs:

Each data type has different storage and performance characteristics:

  • The DECIMAL/NUMERIC type is slower and requires more storage than the REAL and DOUBLE types.
  • Text types are slower to read, write, and compare than numeric types and generally require more storage.
  • See Large Objects for information on BINARY vs. BLOB and VARCHAR vs. CLOB performance.
  • Parsing and formatting takes longer for the TIME, DATE, and TIMESTAMP types than the numeric types.
  • SMALLINT/TINYINT/BOOLEAN are not significantly smaller or faster to work with than INTEGER in most modes.

The DBA I'd worked with to set up the back-end database that I read from didn't like using the double datatype, primarily due to rounding. I thought double was fine, but relented when he pressed. I then used the same DECIMAL(19,6) in the H2 in-memory database as existed in the MS SQL Server database. That seemed reasonable, but it flies in the face of the suggestion in the H2 docs.

Since it's all Java, and a Java Double is fine with me, I decided to change all the DECIMAL(19,6) columns in the in-memory database to DOUBLE. The results were amazing: more than a 50% increase in the rows/sec processed, plus a significant reduction in the memory used by the web app - all from this one simple change.
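
For anyone curious, the change itself is just the column type in the H2 DDL. Roughly like this - the table and column names here are placeholders, not my real schema:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateCacheTable {
        public static void main(String[] args) throws Exception {
            Class.forName("org.h2.Driver");
            // In-memory H2 database; DB_CLOSE_DELAY=-1 keeps it alive for the life of the VM
            Connection conn = DriverManager.getConnection(
                "jdbc:h2:mem:cache;DB_CLOSE_DELAY=-1", "sa", "");
            Statement stmt = conn.createStatement();
            // Before: last_price DECIMAL(19,6)  -- mirrored the MS SQL Server schema
            // After:  last_price DOUBLE         -- matches the Java Double the app uses anyway
            stmt.execute("CREATE TABLE positions (" +
                         "portfolio  VARCHAR(32) NOT NULL, " +
                         "product    VARCHAR(32) NOT NULL, " +
                         "last_price DOUBLE      NOT NULL)");
            stmt.close();
            conn.close();
        }
    }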

All told, a wonderful suggestion.

Then I took to running tests with different values of packetSize. I got:

  packetSize   Portfolio   Product
         512      11,496    11,904
        1024      11,636    13,650
        2048      11,902    13,941
        4096      11,571    12,703
        8192      12,560    14,774
       16384      12,447    14,744
       32768      12,753    14,017
       65536      12,680    15,038

where the data is the rows/sec processed from the back-end database into the in-memory database. Higher is clearly better.

What I found was that 8192 was the smallest packetSize that gave good performance, so that's what I went with. With these changes, my 7-minute restart is down to about 2:20 - an impressive improvement.
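
For reference, packetSize is just a connection property tacked onto the jTDS URL, so switching it is a one-line change - something like this, where the host, database, and credentials are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class JtdsConnect {
        public static void main(String[] args) throws Exception {
            // Explicitly load the jTDS driver (pre-JDBC-4 style)
            Class.forName("net.sourceforge.jtds.jdbc.Driver");

            // 8192 was the smallest packetSize that performed well in my tests.
            String url = "jdbc:jtds:sqlserver://dbhost:1433/riskdb;packetSize=8192";
            Connection conn = DriverManager.getConnection(url, "user", "secret");
            System.out.println("connected with packetSize=8192");
            conn.close();
        }
    }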

Skitch 1.0b8.5 is Out

Thursday, January 14th, 2010

This morning I noticed that Skitch 1.0b8.5 was out with a few Snow Leopard fixes and yet another extension of the beta period. I'm guessing this will continue for a while, but eventually they may have to charge for the storage. If they do, I'll sign up, as it's just amazingly valuable.

Nothing else like it in my opinion.

Performance Tuning jTDS Hitting MS SQL Server

Wednesday, January 13th, 2010

Today I've spent a lot of the day trying to get the restart time of my web app down to something reasonable. The problem is that I need to load upwards of two million rows from a back-end MS SQL Server database into an H2 in-memory database on restart, using the fastest JDBC driver for MS SQL Server I know of - jTDS. The reason for this is the access speed of the web app: there's just so much data that needs to be available to the servlets that if I were to fetch it from the back-end database directly, I'd have access times roughly 10x what I have now.
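
Conceptually, that load is nothing fancy: stream a big SELECT out of SQL Server over jTDS and batch-insert the rows into H2. Roughly along these lines - the table and column names are placeholders, not my real schema:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class CacheLoader {
        // Copy rows from the back-end MS SQL Server database (via jTDS) into
        // the H2 in-memory cache. Table and column names are placeholders.
        public static void copyPositions(Connection mssql, Connection h2) throws SQLException {
            Statement sel = mssql.createStatement();
            sel.setFetchSize(10000);  // hint to stream rows rather than buffer them all
            ResultSet rs = sel.executeQuery(
                "SELECT portfolio, product, last_price FROM positions");

            PreparedStatement ins = h2.prepareStatement(
                "INSERT INTO positions (portfolio, product, last_price) VALUES (?, ?, ?)");

            int count = 0;
            while (rs.next()) {
                ins.setString(1, rs.getString(1));
                ins.setString(2, rs.getString(2));
                ins.setDouble(3, rs.getDouble(3));
                ins.addBatch();
                if (++count % 10000 == 0) {
                    ins.executeBatch();   // flush every 10,000 rows
                }
            }
            ins.executeBatch();           // flush the remainder
            rs.close();
            sel.close();
            ins.close();
        }
    }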

So I need to load a lot of rows from the back-end database. In the past, I had a restart time that was about a minute. Not horrible. Today I realized that I'm looking at something more like seven minutes. That's too long.

So I pulled in the network guys to see if they could find anything in the wiring in the server room, or in the settings on the box, because there were other machines where the SELECT statements executed significantly faster than on my box. The question was: why?

To their credit, the network guys did an impressive job of digging into the problem - really amazing analysis. Unfortunately, in the end, they didn't find anything that was going to significantly change the query performance for me. But I wasn't too surprised, either. There had to be things I could do to clean things up, and I had always suspected it was going to be up to me in the end.