Scraping Logs vs. Exposed Stats APIs
I spent the morning today exposing another Broker service for my greeks engine - this one for stats on the running process. In the last few days, the operations folks, who have had months to decide what support tools they need, have halted the deployment of my greeks engine to production because they now need these stats for monitoring. Currently, they run a script on the output of an IRC bot that hits the engine, but that parser depends on getting data in a specific format, which is brittle and keeps us from expanding what gets logged to IRC. So this morning I built the better solution.
It's all based on maps of maps, and I organized the data in what seemed most natural: the feeds on one side and the general engine on the other, with the feeds broken down into stock feeds, option feeds, and so on, until all the actual data sits as values at the leaf nodes of the maps. It's pretty simple. The only real issue was that several of the metrics they wanted weren't accessible in the code - whoever wrote it hadn't made proper getters for the data - so I had to write those before I could get at it.
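To make the shape concrete, here's a minimal sketch of that maps-of-maps layout. Everything in it is a hypothetical illustration - the section names, feed names, and metric keys are invented for the example, not the engine's actual identifiers - but it shows the idea: nested maps down to leaf values, plus a small walker that flattens the tree into dotted paths the way a monitoring script might want to consume it.

```python
# Hypothetical sketch of the nested-map stats structure described above.
# None of these names come from the actual engine; they just illustrate
# the feeds-vs-engine organization with data at the leaf nodes.

def make_stats():
    """Build the stats tree: top-level sections, then feeds, then leaves."""
    return {
        "feeds": {
            "stocks": {"msgs_received": 125_000, "last_update_ms": 12},
            "options": {"msgs_received": 890_000, "last_update_ms": 8},
        },
        "engine": {
            "greeks_calced": 45_210,
            "uptime_sec": 3600,
        },
    }

def walk(stats, path=()):
    """Flatten the tree into (dotted.path, value) pairs for easy scraping."""
    for key, val in stats.items():
        if isinstance(val, dict):
            # Interior node: recurse, extending the path.
            yield from walk(val, path + (key,))
        else:
            # Leaf node: this is where the actual data lives.
            yield ".".join(path + (key,)), val
```

With a structure like this, `dict(walk(make_stats()))` yields entries such as `"feeds.stocks.msgs_received"`, so the operations side can pick out whichever metrics they care about without parsing any fixed text format.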
Not bad, but it took time.
The testing went really well, and they should be able to gather the stats they want at their convenience. Not bad.
As a personal aside, it really makes me wonder why this is coming up right now, and why it's a show-stopper. If it's a show-stopper, why wasn't it raised months ago at the beginning of testing? I think the reality is that it's not that critical; the folks are starting to panic a bit, reaching for the usual ways to slow things down, or trying to make this new system fit the same mold as the previous one.
It's kinda disappointing.