Refactoring the Feed Recorders
This morning my manager stopped by to talk about the problems I've been having with transferring data from the feed recorders to the archive server. He had talked to my ex-manager about the issue, and the idea that came out of that conversation was that we could simply increase the frequency at which the recorders write to the filesystem, and keep using the filesystem, where things look to be stable.
It's a pretty good idea. I needed to work out a few things, like the filenames and how to handle them, but in general the idea is sound: if there's an existing file being "filled", append to it; if not, create a new one and it becomes the current file. Once the file exceeds a certain size, don't write to it again, and let the next block of data create a new file.
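Something like this, in rough Python form; the directory, the size cap, and the naming scheme here are just placeholders for the sketch, not the real values from the recorders:

    import os
    import time

    DATA_DIR = "/data/feeds"            # placeholder directory
    MAX_FILE_SIZE = 16 * 1024 * 1024    # placeholder rollover size

    current_path = None                 # the file currently being "filled"

    def write_block(data: bytes) -> None:
        """Append one block of data, rolling over to a new file as needed."""
        global current_path
        # No current file, or the current one is full: start a new one,
        # named by its starting timestamp.
        if current_path is None or os.path.getsize(current_path) >= MAX_FILE_SIZE:
            os.makedirs(DATA_DIR, exist_ok=True)
            start = time.strftime("%Y%m%d-%H%M%S")
            current_path = os.path.join(DATA_DIR, f"feed-{start}.dat")
        with open(current_path, "ab") as f:
            f.write(data)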
The last trick I added was to have each append also rename the file to include the ending timestamp. This keeps the files consistent at all times, and if the recorders crash, we're not losing much, as they only hold about 5 sec of data. The rest is already written to the filesystem, with an up-to-date filename, so it's easy to use at any point in time.
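In rough form, the append-plus-rename step looks something like this; the "feed-<start>.<end>.dat" naming is just an assumed scheme for the sketch:

    import os
    import time

    def append_block(path: str, data: bytes) -> str:
        """Append a block, then rename the file so the name carries the new
        ending timestamp. Assumes names like feed-<start>.<end>.dat, where
        <start> is fixed when the file is created."""
        with open(path, "ab") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())        # make sure the data is on disk first
        stem = os.path.basename(path).split(".")[0]     # "feed-<start>"
        end = time.strftime("%Y%m%d-%H%M%S")
        new_path = os.path.join(os.path.dirname(path), f"{stem}.{end}.dat")
        os.replace(path, new_path)      # atomic rename on POSIX
        return new_path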
Empty blocks will naturally take care of themselves, and we're looking pretty good. It's a solid plan.
So I took out all the Broker-related code from the feed recorders, cleaned things up on the archive server so that I got back to a nearly neutral state, checked that in, and then started updating the recorders to write out their data in this manner. The archive server itself was pretty much untouched, as it now runs entirely off the filesystem.
I started the tests, and sure enough, about every 5 sec I get an update: the file gets a little bigger and the name changes. The CPU load is a little higher, but it's not bad, and the payoff should be significant: the archive server should just work.
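The quick check itself is nothing fancy, just a little polling loop along these lines (again, the directory and pattern are placeholders):

    import glob
    import os
    import time

    DATA_DIR = "/data/feeds"    # placeholder, same directory as above

    # Report the current file's name and size every few seconds, to
    # eyeball the ~5 sec update cadence and the renames.
    while True:
        files = glob.glob(os.path.join(DATA_DIR, "feed-*.dat"))
        if files:
            newest = max(files, key=os.path.getmtime)
            print(f"{time.strftime('%H:%M:%S')}  "
                  f"{os.path.basename(newest)}  {os.path.getsize(newest)} bytes")
        time.sleep(5)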
I need to let it run for a bit and then hit it with my tests to see how it holds up, but I'm very optimistic about the possibilities.