Archive for April, 2008

The All-Time Slowest Messaging Middleware

Wednesday, April 30th, 2008

SwissJupiter.jpg

At the Shop, I've been part of an integration team that's trying to get this large system into place and running. There's a ton of connections to our other systems, and it's a big job, but the part that I'm working with most is the price injection - because I'm the guy that wrote the price feed, after all. Makes sense.

Part of this product is a messaging system - which makes perfect sense to me - build a strong backbone and then hang things off it and let the middleware take care of the delivery and communication. Good plan. Horrible execution. I mean the worst I've seen in ages. And I've got data to back it up.

The entire messaging system is single-threaded. In this day and age of multi-core, multi-CPU machines, that alone makes this messaging system a problem. The entire thing is a bottleneck: instead of enabling multiple messages to flow between multiple senders and receivers, this guy acts like he's the 93-year-old mail clerk in the post office. He's only going to do one thing at a time, and it's a government job, so he can't be replaced.

Case in point: I sent a message - a simple message to be sure, probably less than 1kB - and it took 23 minutes to arrive. This was going from one box in the server room to another box in the same server room. Delivery should be measured in milliseconds - not 20+ minutes. While I understand that the speed of delivery depends, in part, on the messaging database and the other traffic it has to deliver, there's still no excuse for delivering to a customer a messaging solution that takes 20 minutes to move a 1kB message.

The funny thing is, I could read all the relevant data from the database (about 3000 records) in about 15 seconds, so while it's clear that something is holding it up, it's not the underlying database that's the problem.

I've had problems with this product before, and as such I'm not about to name names, but if you're doing anything with custom-built, vendor-supplied middleware, it pays to look under the hood and make sure you can plug in something with known good performance like TIBCO or IBM MQ. Make sure you're not stuck with this company's horrible middleware causing you problems for years to come.

[5/1/08] UPDATE: after a restart of the subscription connections, things seem to be working better today. I'm not sure what yesterday's problem might have been, but I'm going to watch it for several days. It's now delivering messages in under 7 sec., and while that's not the fastest messaging system in the world, it's certainly acceptable for this project. I hope it keeps up.

Updating CPU Timing Functions Using gfortran

Tuesday, April 29th, 2008

fortran.jpg

I've been working on my old simulation code a bit in the evenings and weekends now, just seeing if I could get the GaAs simulations to predict the oscillations of the 1D code. One thing that hadn't bothered me until now is the timing methods used in the code. After all, I can time it on the wall clock and see how long it's taking, but I put in a decent level of effort all those many years ago to get the CPU split times for each phase of the simulation, and I thought it'd be nice to get them correct again.

I say 'again' because in the old f77 days, I'm sure dtime() was about as good as you could get. But in these new days of gfortran, the values returned from dtime() are not in keeping with reality. I've looked at the GNU Fortran docs, and they say it's meant to return the elapsed seconds, but it's not doing that. Maybe it's the build of gfortran I'm using, but I think it's more likely that dtime() is a legacy extension rather than part of the current standard, and I needed to move on.

So I did some digging. It turns out Fortran 95 defined cpu_time(), which returns the elapsed CPU seconds (as a real) for the execution of the app. This means putting it into the code in a 'difference' mode: take a reading at the top of the loop and another at the bottom, and difference the two to get the incremental time I was used to getting from dtime().
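The refit looks roughly like this - a minimal sketch of the 'difference' mode, where the loop body is just a stand-in for one phase of the simulation, not the actual code:

```fortran
program phase_timing
  implicit none
  real :: t0, t1, dt
  real :: x
  integer :: i

  call cpu_time(t0)            ! reading at the top of the phase
  x = 0.0
  do i = 1, 5000000            ! stand-in for one simulation phase
     x = x + sin(real(i))
  end do
  call cpu_time(t1)            ! reading at the bottom
  dt = t1 - t0                 ! incremental CPU seconds, like dtime() gave

  print *, 'phase CPU time (s):', dt
end program phase_timing
```

Same pattern at every phase boundary: carry the last reading forward and the differences give you the per-phase split times.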

This wasn't all that hard, and in about an hour I had all the code re-fitted for the calls to cpu_time(). Thankfully, this is a much better timer and I get results that are making sense with the wall clock time I'm seeing for the runs. It's not like it's running any faster, or getting better answers, but it is at least more consistent, and I can look at the numbers and see what's more costly and from that see what I might need to do to alter the bias stepping, etc. Not amazing, but nice.

Twitter versus Instant Messaging – A Quick Update

Tuesday, April 29th, 2008

Twitterrific.jpg

I've been trying to use Twitter (via Twitterrific) for the last week or so, and I have to say that I agree with the majority of the comments I've read about Twitter: it's a nice idea, but the reliability is so bad you can't really depend on it. The nicest thing about Twitter is that it's less real-time than IM - a downside if you try to use it as a replacement for IM, but a good thing if you use it for a different kind of communication.

For example, if I'm trying to communicate with a friend about a coding project, then IM is going to be as close to ideal as possible. It's like a phone call, but you can cut-n-paste code clips to one another and it gets the job done very nicely. But if the conversation is about scheduling something like a dinner or a movie, and the person isn't online at that instant, IM is no use. There's no really good delayed-message system for IM. But that's what Twitter is really good at.

Adium.jpg

Then there's the 'broadcast' versus 'point-to-point' difference between the two. In IM you have the ability to chat one-on-one with someone. Sure, you can fire up a chat client and join a message room, etc., but that's not the same thing - the IM client is intended for one-to-one messaging. Twitter is based on the broadcast idea, and in that, there are times when it is significantly better than the point-to-point of IM. Think about a group of friends: if one is going to a coffee shop, he can tweet that he'll be there in 10 mins, and if others want to join in, that's great. It's less formal because there's no guarantee that anyone will be listening at the time, but if they are, many people might be listening at once.

It's clever and unique. I like the idea. I just wish they could keep it working more. I know it's not going to be supported by the likes of me - using Twitterrific, it's got to be ad-based, or maybe selling the tweets or the contact information about me to others. I don't mind... if I tweet it, it's in the public domain - you have to realize anyone could be listening. I just wish they'd do something to make it more reliable, even if it meant subscriptions.

Adium 1.2.5 is Out

Tuesday, April 29th, 2008

Adium.jpg

With all the fanfare of a little green duck, Adium 1.2.5 was released and I picked it up from the internal update system. I read a few of the release notes; it looks like a lot of little bug fixes, maybe a few interactions with Spaces in Leopard, plus a few localizations. I haven't had a problem with the sounds for a few releases, and I just could not be happier with Adium. Incredible code. If you need an IM client on Mac OS X, get it.

One thing that would be nice is for Adium to remember the window you had open and the people (tabs) you were communicating with at the time, and then on restart just open up that same configuration. It wouldn't save a ton of time, but given that some of my conversations are with people who aren't online all day long, if they aren't online they don't show up in my buddy list and I can't easily pull them up. A minor issue, but a usability one that could be fixed if it maintained state across a restart.

Twitterrific and the Hope of Optional ToolTips

Monday, April 28th, 2008

Twitterrific.jpg

I was chatting with a friend of mine on Twitter - through Twitterrific - and he mentioned that he had gone into the NIB for Twitterrific and removed the tooltips for the mouse-overs of the name and the image. This makes the UI a lot cleaner: the window already says who it is and when the tweet came in, so the tooltips are redundant, and if you have the mouse over the window you're almost guaranteed to get a tooltip pop-up. So he cleared it out. I don't blame him, but I thought it'd be nicer to let the developers know what we wanted, and maybe they could fit in an option on the tooltips. Worth a try.

So I emailed them and got a reply back that was not surprising, but a touch disappointing. They are focused on the iPhone version, and when that's released in June they'll go back to the Mac version and they plan a significant upgrade to the UI.

So it's going to get addressed, but it's not going to be ready until the fall. Which is itself an interesting statement on the speed of their development - or at least the time they can devote to it. Then again, maybe they have significant changes in store: a little option would only take a few days (tops), but totally re-coding the app might take months. Hard to say, but it's possible.

In any case, if I want to strip out the tooltips, I'm going to have to fire up Interface Builder and hack at it myself. Not sure that's what I want to do, but it's something to think about.

New iMacs!

Monday, April 28th, 2008

iMac-20in.jpg

I'd heard about this on the rumor sites for the last few days, and now Apple has updated the iMac line with faster processors and such. Nothing major, but it's nice to see that the iMac can now contain a 3.06GHz processor - a powerful little beast for a 'non-professional' machine. That's pretty impressive.

I'm not in the market for a new desktop - not since the death of my iMac G5 and its replacement by an Intel iMac - but it's nice to see things moving on the hardware front. I'm still waiting for the quad-core, 8GB RAM, 500GB HDD MacBook Pro... that'd be something to have.

Knowing One’s Limits… and Staying Within Them

Friday, April 25th, 2008

cubeLifeView.gif

I was chatting with a friend today and he was telling me the story of a guy he worked with. This guy was holding up the project they were on because he refused to see that he really didn't have the SQL skills to complete one part of the project, and wouldn't hand it off to someone with far better skills. He insisted that it would be better for the team if he learned how to do this, and in a sense, he's right: it would be nice for him to be able to do more for the team. But he shouldn't get that knowledge at the expense of the project. Pick it up on something with a nice, long timeline, not on a project that's already behind and threatening to harm the business.

It's about understanding one's limits and staying within them. Oh, there are times to stretch your comfort envelope, but when other people are counting on you, and you have the option of handing something off or keeping it for yourself and trying to get it done, you need to think of the team a bit. Otherwise, people aren't going to want to work with you, and you'll end up getting only those things other people already know you can do.

I don't know how this particular story is going to end. I'll check back with my friend in a few days, but it's something I really feel sorry for the guy about. I mean, he probably doesn't even recognize how he is perceived. He probably thinks this is normal behavior. But it's going to get him into a load of hot water when management asks him why it's not done and he's still saying "...I'll have it done tonight."

After 20 Years, the Simulation Runs are Complete

Thursday, April 24th, 2008

shark.png

The final test of my thesis simulator code was a simple Gaussian pulse traveling down the empty channel of a Si MOSFET. Simulating the first picosecond took close to a month on the school's Gould NP1 supercomputer. Today I finally tried running the simulation on my MacBook Pro to see if I could get the same results, as well as see how long it'd take.

I had typed in the input deck this past weekend, but hadn't had a lot of time to run it and see what the results were. Today I just fired it off in the background and came back to it every so often to see if it had completed. What I found was that I hadn't typed everything in correctly, and I needed to increase the maximum number of allowed Newton iterations to reach the convergence criteria I had in the code. No biggie, but it would have been especially nice to have gotten it right the first time.

When I finally got the upper limit set right, the complete 10 psec simulation ran in about 72 mins. It's all single-threaded code, so there's a possibility of optimization there if I spend the time to make the matrix solution multi-threaded, but the work to do that would be significant. Still... under 2 hours versus a month for a tenth of the work. Back-of-the-envelope, that's on the order of 43,000 minutes per picosecond then versus about 7 minutes per picosecond now - a speedup of several thousand times. Yeah, computers certainly have changed in the last 20 years, but it's times like these that really accentuate it.

I'll probably try to do some of the simulations that I simply could not do at the time - like the GaAs channel, because in those results are the real questions of the thesis work: Was it a 1D simulation effect? and What will the frequency be when taking into account the 2D v(E) vector? In the end, I know what the results should say... it's real and it's reasonable, as after I left, the next student actually got one of my designs working. But still... it's nice to be able to finally close the book on that chapter of my life.

Getting Faster Rendering Speed on MacVim

Wednesday, April 23rd, 2008

MacVim.jpg

I have been playing with MacVim quite a bit recently, and while reading the Google Groups mailing list (vim_mac) I noticed some folks saying there was a noticeable update delay in MacVim on their boxes. They didn't give details on the machines they were using, but they did say that the faster update scheme - the ATSUI renderer, called MMAtsuiRenderer in MacVim - is what the other Vim for Mac OS X uses. So I decided to give it a go; you never know, it could be amazingly fast.

So I set the preference, which isn't available in the GUI:

    defaults write org.vim.MacVim MMAtsuiRenderer -bool YES

and then restarted MacVim. It turns out it is faster, but it wasn't really slow on my MacBook Pro to begin with. Maybe the others are using slower MacBooks, or even G4 iBooks - I don't know. But it doesn't hurt to have the faster renderer in use.

I'm still amazed by the job they have done. Really exceptional.

UPDATE: it turns out that if you turn the ATSUI renderer ON (at least for version 7.1/26), mouse clicks don't move the cursor to the clicked location. This is not a "good thing", and I've posted a message on the vim_mac message board to see if I can get this fixed in the next release, or find out if there's something else I need to set to get the click/moves working again. We'll have to wait and see, but for now I'm turning it off - the updating wasn't bad at all, and I need to be able to click-n-move.
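Turning it back off is the same defaults dance, assuming the same org.vim.MacVim domain and MMAtsuiRenderer key shown above:

```shell
# Switch back to the default NSTextView renderer
defaults write org.vim.MacVim MMAtsuiRenderer -bool NO

# Or remove the override entirely
defaults delete org.vim.MacVim MMAtsuiRenderer

# Check what's currently set (errors if the key has been deleted)
defaults read org.vim.MacVim MMAtsuiRenderer
```

Either way, restart MacVim for the change to take effect.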

[4/25/08] UPDATE: I got news back from the vim_mac mailing list. It seems the mouse support for the ATSUI renderer is not in the code. It's a known issue that they are working on when they have the time. For now, I'll have to just use the NSTextView renderer which is OK with me.

In Fast-Moving Environments, You have to Make Your Own Solutions

Wednesday, April 23rd, 2008

SwissJupiter.jpg

I realize it's hard to be a vendor to a lot of big companies. Some want stability and verification of every little point release, and others want to see new features and fixes turned around as quickly as possible. Heck, I run into that every day here. Some groups want stability - which implies very few changes - and others want to see the new things they've asked for as soon as possible, even if it means a little loss of stability. I get it... you can't please everyone.

But this can't be an excuse to do nothing. And by nothing I mean you can't have people on site helping to work on your multi-million-dollar product and have them (along with the users) identify bugs and needed features, and then just sit on them. That's not going to be 'OK' with anybody.

But it happens. Can't be helped. If a vendor is unresponsive and you still have to make it work, you have to exhaust all possible options and avenues before throwing up your hands and saying "Well... we tried everything and it's still no-go. We have to wait for them to fix this."

I've had experience in both cases, and neither is more comforting than the other. You can't stop at the first sign of a vendor problem - inform them, to be sure, but don't stop there. You have to dig and dig, because you can't be sure they'll look at the problem with the same intensity you will. Plus, you're another pair of eyes - look, and maybe you'll see something they don't because they're too close to the problem.

At the same time, when you hit the end of the road, you have to accept that it is the end of the road and not try to change the facts just because you want them to be different. But you can't give up too quickly either.

I've run into guys that have simply given up too quickly. Maybe that's all they had... maybe they couldn't see that there was another way to approach the problem. In any case, giving up is not going to make things work. You have to be more determined than that. If the answer can't be found, that's one thing, but you have to be sure that it's not that you're giving up too soon.

After all, you might be dealing with a vendor that's not going to be as responsive as you'd like, and you may have only a few weeks to get things working before you lose the management backing to continue with the project. You have to take things into your own hands a lot of the time. There's simply no other way to get things working.