Archive for May, 2008

I Ordered a Kindle – Just too Many Books

Saturday, May 3rd, 2008


Today I ordered a Kindle from Amazon. I tossed and turned over this for months. When I travel to work each day on the train, I read. A lot. So I have tons of books sitting on the shelves in my home office. I have been looking for an alternative, and the Sony feels nice, but it requires a PC - and I'm not about to go down that road. I was also very interested in the "shop and read" functionality in the Kindle. When I finish reading a book on the train, I have to sit idle for the rest of the trip because it's too bulky to carry two books.

I'm not sure that the Kindle is going to be the last word in eBooks, but I think it's got a decent chance. I was waiting for the second revision of the Kindle, but when Amazon came back and had them in stock, it was clear to me that the next version wasn't coming any time soon. So I decided to go ahead and get it.

It'll probably be like my iPod - I got a 3G and used it for years before getting the Touch, which is exactly what I want. I'm not convinced that Apple is coming out with a book reader - but it'd be nice if they did. I think this is something Amazon is in a unique position to produce and provide. They have the bookseller reputation. So I'm going to give it a go.

I'll post updates as I get it and use it. Should arrive sometime this week.

Complex Systems can be… Well… Complex

Friday, May 2nd, 2008


Today I spent most of the day working on a problem we ran into this morning with the FX conversion of the dividend curves in my server. I had to dig into it for quite a while to figure out what was going on. At first, it seemed pretty clear - the FX rate for USD/CNY was 1.0 and it should have been a tenth of that. The problem was: why?

The first thing I did was make sure the FX rate was no longer 1.0. It wasn't, but this is where the complexity comes into play - I failed to realize that the server is a multi-machine, multi-process entity, and while the FX rate I was looking at was, indeed, not 1.0, it wasn't the rate being used to convert the dividend curves. The rate used for that was still 1.0.

After looking at this for about 10 minutes, I realized what the problem was: by restarting one of the components I could have it reload its FX rates, and that component was the one doing the dividend curve FX mapping. In the end, it was an easy fix, but it made me realize that I needed a better way to have these dividend curves mapped, and so that's what I set out to do.

Interestingly enough, one of the strengths of the server is that the components are very loosely coupled. This means that they can be independently restarted and the 'whole' will not suffer. Things are re-tried, re-sent, and life goes on. Very resilient. The problem is, this means you need an exceptionally good communications system to make sure that what you want done to one component will be done to all affected components. Case in point: telling the system to reload an individual FX rate.

Clearly, restarting the components is an option, but that's not very user-friendly. What I was looking for was a way to notify the components when something changed. The problem was, this would represent a significant addition to the protocol already in place between some of the components. Not something to do lightly - especially if there's an easier way to accomplish the same thing.

So I kept digging. It turned out that the only reason this one component had the FX rates at all was this dividend curve mapping, and if the data coming back from the database didn't need to be mapped, this would no longer be an issue. Idea: do the FX conversions in the database calls. Problem: I can't slow things down, so I can't add a lot of processing, and I have to be careful about which FX rates I use.

It turns out the first worry wasn't too bad. I simply treated the original data as a 'rough cut'. If any FX adjustments needed to be done, I did them en masse. This meant that most requests didn't experience any slowdown, and for those that did, the curve was usually converted all at once rather than a point at a time. Nice.
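The en masse idea can be sketched out in a few lines - all names and data shapes here are hypothetical, just to illustrate converting a whole curve in one pass instead of doing a per-point FX lookup:

```python
# Hypothetical sketch: convert a dividend curve en masse rather than
# point-by-point. The curve layout and rate are illustrative only,
# not the actual server's API or real market data.

def convert_curve(curve, fx_rate):
    """Apply a single FX rate to every (date, amount) point at once."""
    return [(date, amount * fx_rate) for (date, amount) in curve]

# Rough cut: the curve as loaded from the database, in the local currency.
raw_curve = [("2008-06-01", 1.25), ("2008-09-01", 1.30), ("2008-12-01", 1.40)]

# One conversion for the whole curve - no per-point rate lookups.
converted = convert_curve(raw_curve, 0.5)
```

The point of the design is that the rate is looked up once per curve, so the common case (no conversion needed) pays nothing, and the rare case pays a single bulk pass.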

The last problem was a little more difficult. I worked through it with only two assumptions: that the 'latest data' for both legs of the FX conversion had to come from the same date (a good idea anyway), and that it's always the 'posted' source, so that we have the best marks in use. I thought these two facts should be true, but I wanted to run them by someone who's had a few more years on the data side of things than I have. After I explained this to him, he was convinced they were valid assumptions/rules, and I then knew I could complete the FX conversion within the database.
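Those two rules - same date for both legs, posted source only - can be sketched as a small selection routine. This is purely illustrative: the table layout, field names, and rate convention are assumptions, not the actual database schema:

```python
# Hypothetical sketch of the two rate-selection rules: use only 'posted'
# marks, and require both legs of the conversion to come from the same
# (latest common) date. Data layout is illustrative.

def pick_fx_rate(rates, base, quote):
    """rates: list of (date, ccy, source, mark) tuples.
    Returns the ratio of the two marks on the latest date where
    both currencies have a posted mark, or None if no such date."""
    posted = [r for r in rates if r[2] == "posted"]
    base_dates = {d for (d, c, s, m) in posted if c == base}
    quote_dates = {d for (d, c, s, m) in posted if c == quote}
    common = base_dates & quote_dates
    if not common:
        return None  # no date where both legs were posted
    latest = max(common)  # ISO date strings sort chronologically
    b = next(m for (d, c, s, m) in posted if d == latest and c == base)
    q = next(m for (d, c, s, m) in posted if d == latest and c == quote)
    return b / q
```

Note how an indicative mark on a newer date is simply ignored - the routine falls back to the latest date where both legs are posted, which is the behavior the two rules demand.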

What's the point? Well... complex systems are complex. It's in the name. Even though I've worked with this system for years, there are parts and interactions you might not realize are there, and that plays tricks on what you think should happen. Then you try it, get a different result, start to think about it, and then in a flash it comes to you that you were wrong and the system was right.

Kind of like those simulations where you don't program in certain behavior, but the higher-order behavior is a direct result of the low-level rules, and so the complex behavior emerges in your simulation. Wild stuff. Cool, but when you're trying to make a simple change and the complexity of the system is staring you right in the face, it's giggling at you. You are the one that needs to adapt.

Some Days I Wish My Life Were a Documentary

Thursday, May 1st, 2008


Today I had a disagreement with my manager - he said we never agreed to a certain set of pricing rules in the risk server, and I said we did - specifically at the request of users. These things happen all the time for a ton of different reasons, and I'm sure he remembers it the way he does because he thinks it was implied that other conditions would also be involved. Specifically, when you filter out events based on the data in those events, you have to be careful not to filter out too much or too little. Today we were filtering out too little, and his opinion was that he never would have agreed to rules that allowed that data to get into the system.

I begged to differ. I recounted for him the meetings with the users that started the revision of the rules, what the subsequent meetings discussed, and the modifications made as that series of meetings continued - until we arrived at the rules we now use.

However, it never pays to remind someone that they misremembered, so I added yet another facet to the rules to check for the data that slipped in today. It took me about two hours to get it into all the systems where it was needed, but in the end, it's a better set of rules and will give the users a higher-quality price feed - which should please everyone.

Had my life been a documentary, I'd have been able to rewind to the date in question and play him the relevant clip(s), and there would have been no conflict because there would have been no reason to disagree. Facts make arguments moot. And yet when you start to record things - like meetings - it generally makes people very uncomfortable, because they instantly know it's possible they'll be shown their poor memories at a later time. That tends to really put a damper on the process.

So while it'd be great, it's not likely to happen. Too bad.