Archive for the ‘Coding’ Category

Updated Dash/Hashie Properties – Sore Subject

Friday, December 14th, 2012

Code Monkeys

I'm pretty sure I've ranted about this already, but once again today, it has been thrust in my face by the Code Monkeys that I work with. We start out the application design with a Hash. It's a wonderfully simple, flexible, data storage tool, and since we don't know all that we need, it makes far more sense to use it than enumerated ivars. So far, so good.

Then we pull in Hashie. This gives us "dotted methods" for all the keys in the hash. This includes the key? methods - one per key - that we can use instead of writing m['key'].nil?. It's a win, and I love it. It does all this without any restrictions or set-up on our part. So far, even better.

Then some in the group decide that we need to switch to the declarative version of Hashie - Hashie::Dash. Now if you want to use a value in the hash, you have to enumerate it in the class with the property keyword. All of a sudden, I'm not happy, and I say why I think this is a bad idea.
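The difference, roughly, using stripped-down stand-ins (these are not the real Hashie classes - just enough to show the failure mode):

```ruby
# A Mash-style wrapper: any key in the hash becomes a reader method,
# no declarations needed. New fields from the data source just work.
class LooseHash
  def initialize(data)
    @data = data
  end

  def method_missing(name, *args)
    key = name.to_s
    return !@data[key.chomp('?')].nil? if key.end_with?('?')
    return @data[key] if @data.key?(key)
    super
  end

  def respond_to_missing?(name, include_private = false)
    @data.key?(name.to_s.chomp('?')) || super
  end
end

# A Dash-style wrapper: every key must be declared up front.
# An undeclared key in the incoming data raises - the production error.
class StrictHash
  def self.property(name)
    (@properties ||= []) << name.to_s
    attr_reader name
  end

  def self.properties
    @properties || []
  end

  def initialize(data)
    data.each do |k, v|
      unless self.class.properties.include?(k.to_s)
        raise ArgumentError, "The property '#{k}' is not defined"
      end
      instance_variable_set(:"@#{k}", v)
    end
  end
end

class Merchant < StrictHash
  property :name
end

loose = LooseHash.new('name' => 'Acme', 'region' => 'midwest')
puts loose.region      # prints: midwest (a new field, no code change needed)

begin
  Merchant.new('name' => 'Acme', 'region' => 'midwest')
rescue ArgumentError => e
  puts e.message       # the strict version blows up on the new field
end
```

The loose version absorbs a new field from the data source; the strict version falls over until someone adds another property line.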

Up comes the ugly specter of weak management, strong-willed co-workers, and Agile. They decide to re-write it even though we didn't all agree to it. Kinda bites, in my book. But I guess someday that'll work for me when I make unilateral decisions - and since I'm willing to work more hours, I'll get more of those opportunities. If I were a jerk.

So now we have error messages, and the code doesn't work, because the incoming data source added a field and the code didn't declare it. Had we left it alone, it'd have been fine and we'd be working just as you'd expect. But because of this decision, we have production issues.

Where's the win, Boys?!

So I had to add some properties to the class just to make it not error. Yeah, this is good work guys.

Added Direct Deployment of Couch Design Docs

Thursday, December 13th, 2012

CouchDB

One of the problems with Couch is that when you change or create a view, it has to rebuild/reindex the entire view by running the map function on all the documents in the database. This sounds very reasonable because, like any database, it needs to maintain its indexes, and this is how it does it.

The problem is that while this is happening, the view is completely unavailable unless you're willing to take stale data. Not really ideal, but again, you can see why it's implemented this way. You can read the old (stale) view, or wait for the new one to finish building. Your pick.

In order to make this easier on our environments, one of my co-workers came up with the idea that if you deploy the new view in a different document, and then after it's done being built, you rename it to the one you want, there's no second rebuild. The rename is nearly instant, and everything is OK. He built something so that when we deploy to the UAT and Production Couch DBs, we deploy in these "temp" spaces, and then there's a crontab job that sees if the rebuilds are done, and moves things in.

Well… that's great for UAT and Prod, but for dev, I don't want the cron job - I just want to have a direct-deploy scheme where I can wait the two minutes to rebuild my (much smaller) database. So I added that into the rake task, and was then able to deploy my changes to dev first, and see that they were working just fine, and then to deploy them to UAT and Prod and let them wait.
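A minimal sketch of what that direct-deploy step amounts to - the host, database, and view names here are hypothetical, and this isn't the actual rake task:

```ruby
require 'json'
require 'uri'
require 'net/http'

# Build the design-document body CouchDB expects. Pure function, so the
# deploy step below is just an HTTP PUT of this JSON.
def design_doc(name, views)
  { '_id'      => "_design/#{name}",
    'language' => 'javascript',
    'views'    => views }
end

# Direct deploy: PUT the design doc straight into the target database and
# let Couch rebuild the views on the spot - fine for a small dev database.
def deploy_design_doc(base_url, db, name, views)
  uri = URI("#{base_url}/#{db}/_design/#{name}")
  doc = design_doc(name, views)

  # If the design doc already exists, include its _rev or the PUT conflicts.
  existing = Net::HTTP.get_response(uri)
  doc['_rev'] = JSON.parse(existing.body)['_rev'] if existing.is_a?(Net::HTTPOK)

  req = Net::HTTP::Put.new(uri, 'Content-Type' => 'application/json')
  req.body = JSON.generate(doc)
  Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(req) }
end

views = {
  'by_merchant' => {
    'map' => 'function(doc) { if (doc.merchant_id) emit(doc.merchant_id, null); }'
  }
}
# deploy_design_doc('http://localhost:5984', 'dev_db', 'pinnings', views)
```

The UAT/Prod path still goes through the "temp" document and the crontab rename; this just skips all that for dev.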

The reason for all this was that the views in the Pinnings design document were out of date - people had changed the code and not updated the views, so they weren't picking up the documents they were supposed to. Just not disciplined about what they were doing, I suppose.

Simplified Test and Fixed Bug at the Same Time

Wednesday, December 12th, 2012

bug.gif

One of my co-workers brought up a bug that I hadn't noticed until now. One of the test methods - that is, a method that returns a boolean about the Merchant argument - wasn't working, and there was a far simpler way to implement it. Back when we were dealing with raw Hashes in the data structures, the old implementation was about as efficient a way to handle the problem - modulo the bug - as we could have managed. But now that we have the Merchant, and within it the Opportunity objects with their own boolean methods, the implementation really became: ask the Merchant to ask its Opportunity if it's a live deal.
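Roughly, with hypothetical method and state names (the real classes are Merchant and Opportunity, but the details here are illustrative):

```ruby
class Opportunity
  def initialize(state)
    @state = state
  end

  # The Opportunity already knows whether it represents a live deal.
  def live_deal?
    @state == 'open'
  end
end

class Merchant
  def initialize(opportunity)
    @opportunity = opportunity
  end

  # The old hash-walking test collapses to a one-line delegation.
  def live_deal?
    @opportunity.live_deal?
  end
end

Merchant.new(Opportunity.new('open')).live_deal?   # => true
```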

Far simpler in the code.

Plus, at the same time, we're doing the right test because when I coded that one up, I did it right. Go figure.

In any case, a simple fix, and we reduced the lines of code. Not bad.

Fixed More Production Problems

Wednesday, December 12th, 2012

bug.gif

For the second morning in a row, I had issues with production. Thankfully, this bug didn't affect any of the production data - only the data that was supposed to be written to Couch. So in a sense it was bad, but not worthy of a re-run. The bug was pretty simple, and for once, I'm glad it wasn't me. The original code was:

  def to_augmented_hash
    to_hash.delete('accounts').merge({
      :category_counts        => category_counts,
      :number_of_new_accounts => number_of_new_accounts
    })
  end

where we were getting a NilClass error on the merge call. I had remembered reading that the delete() method could return nil in some cases, so I looked it up again, and sure enough - it returns the value of the removed key (or nil if the key isn't there), not the hash itself. What needed to happen was:

  def to_augmented_hash
    to_hash.delete_if { |k,v| k == 'accounts' }.merge({
      :category_counts        => category_counts,
      :number_of_new_accounts => number_of_new_accounts
    })
  end

With this in, it all worked just fine. When I went to check it in, I saw that a co-worker had also fixed this, but split it out over several lines and mutated the value with delete(). I decided to leave mine in as it was cleaner, and closer to how it would have been rewritten by someone else anyway.

Working on Demand Service in Clojure

Tuesday, December 11th, 2012

Ubuntu Tux

Today I've spent a good deal of time reading up on Clojure and getting up to speed with my co-worker who's got several years of Clojure experience on me. I knew going in it was going to be like this, and the learning curve is about what I expected. It's Java under the hood, and that's got all kinds of pros and cons, but there are decent build tools, and I've been able to spend a lot of time getting the build/deployment environment up.

Today I also got the init.d scripts going so that we can start and stop it from a Makefile. Basically, all the things we can do from the rake files of the sister project in Ruby.

There were also issues with building PostgreSQL 9.2.2, but those weren't hard to solve. Lots of nice progress today.

Fixed Issues with Production

Tuesday, December 11th, 2012

bug.gif

This morning we had a lot of warnings in the summaries with regard to the Merchant Status Reasons (MSRs). Salesforce runs on string fields, and while they make drop-downs that limit the applicable values, it's still a mess if something goes amiss and we end up with unusual string values in fields. Such was the case this morning.

It turns out that our Salesforce team added several new merchant status reasons to the drop-down, so we had no way of knowing where in the sales cycle those merchants were. That's not good, because it means the only thing we can assume is that they're at the beginning, and that's clearly not right for all of them.

The solution was to learn what additions had been made, how they translate into our internal status codes, and put in the mappings so that tomorrow we will not fail. So that's what I had to do this morning.
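The shape of that fix, with hypothetical reason strings and status codes - the real mapping comes from Salesforce and our internal sales-cycle model:

```ruby
# Hypothetical MSR strings and internal status codes, for illustration only.
MSR_TO_STATUS = {
  'Contract Sent'    => :in_negotiation,
  'Awaiting Signoff' => :in_negotiation,
  'Deal Closed'      => :closed
}.freeze

# Unknown reasons fall back to the start of the cycle - and get logged,
# so the next Salesforce addition shows up as a warning, not a mystery.
def status_for(reason)
  MSR_TO_STATUS.fetch(reason) do
    warn "unknown merchant status reason: #{reason.inspect}"
    :new_lead
  end
end
```

The fallback keeps tomorrow's run from failing outright, but the warning makes sure an unmapped reason doesn't go unnoticed.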

Google Chrome dev 25.0.1354.0 is Out

Monday, December 10th, 2012

This afternoon the Google Chrome team released 25.0.1354.0 to the dev track. Unfortunately, the release notes are more than terse - they're of no help to anyone at all. I wonder why they even bother?

Listing Active Requests in PostgreSQL

Monday, December 10th, 2012

PostgreSQL.jpg

I needed to find out how to list all the active queries in PostgreSQL, so I did a little googling, and found the following:

  SELECT * FROM pg_stat_activity;

and the response comes back with all the activity you can imagine. I'm pretty sure it's one row per connection, as a lot of my connections are 'idle', but that's great! This gives me a view of what's going on in the database in case a query is taking forever.

Working Through The Possible vs. The Pretty

Monday, December 10th, 2012

Clojure.jpg

I know this about myself - I'm not afraid to get dirty. I don't mind it, and as long as I'm in for a little, I'm in all the way. Mud up to my elbows - no problem. So it came as no shock to me today that Archibald, one of my co-workers, who's a little fussy and loves Clojure, is a bit unsettled by some developments in the project.

Basically, we need to have something that conditionally formats objects into JSON from clojure so that the data flowing out of the system we're building is easily read by the downstream system without modification. That's the key.

Now it's certainly possible that we could modify the downstream system to take the data we'll be sending, but earlier in the day Archie didn't want to do that either - for obvious coupling reasons (another big Clojure design point) - so we discarded that idea and went full steam ahead. Then we got to the "dirty" part of the code: we were going to need a very customized output formatter that sends only the fields that apply, based on the data in each record.

For example, we can have two different location data points. One that's based on a zip code:

  {
    "name": "Downtown",
    "zip": "47664"
  }

and one that's based on a division:

  {
    "name": "Downtown",
    "division": "indianapolis"
  }

The idea is that a zip code is very targeted, geographically, but there are certain demands that are much larger in scope, and for those we want to make it clear that the entire area is "fair game". The problem is that the table this data comes from has both columns:

  CREATE TABLE locations (
    id          uuid PRIMARY KEY,
    demand_id   uuid,
    latitude    DOUBLE PRECISION,
    longitude   DOUBLE PRECISION,
    name        VARCHAR,
    zip         VARCHAR,
    division    VARCHAR
  )

so if we sent data directly from the table, we get:

  {
    "latitude": null,
    "longitude": null,
    "name": "Downtown",
    "zip": null,
    "division": "indianapolis"
  }

and if the reader of this data looks at which keys exist to decide what to do, it's going to get confused. We need something that intelligently outputs the data so that null fields are only sent when they are required null fields.
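A sketch of that kind of conditional formatter - in Ruby for illustration, since the project itself is in Clojure - using the field names from the table above; the required/optional split here is an assumption:

```ruby
require 'json'

# Fields we always send, even when null, vs. fields we drop when null.
# (Which fields belong in which bucket is hypothetical.)
REQUIRED = %w[name].freeze
OPTIONAL = %w[latitude longitude zip division].freeze

# Keep required keys unconditionally; keep optional keys only when present.
def location_json(row)
  out = row.select do |k, v|
    REQUIRED.include?(k) || (OPTIONAL.include?(k) && !v.nil?)
  end
  JSON.generate(out)
end

row = { 'latitude' => nil, 'longitude' => nil,
        'name' => 'Downtown', 'zip' => nil, 'division' => 'indianapolis' }
location_json(row)   # => {"name":"Downtown","division":"indianapolis"}
```

The raw table row goes in with all five columns; only the keys the downstream reader should see come out.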

This is getting "messy", and I could see it on his face. It was something he was finding quite distasteful. It wasn't clean, like math. And I could tell it was grating on him.

For me, it's just part of the job. Very reasonable, and something we should be doing for our clients. We need to send them what they need, everything that they need, and nothing they don't need. I'd like that if I was a client of the service, so why shouldn't I want to do that for the clients of my service? Seems only reasonable.

I can tell this is going to be one of those projects that I'm going to wish was over before it ever really got started. Archie is a nice guy. He's funny and personable, and smart. But all too often he decides what he's willing to do, and many times that's not what really needs to be done, because it's "messy". Please just do the job, or move on… there really is very little room in life for people who only do what they want.

Fixed Issues with Production

Monday, December 10th, 2012

bug.gif

This morning I had a production problem that I'm sad to say I probably should have seen coming. I added some new data from Teradata into the system - a bunch of metrics for a bunch of deals, all aggregated up to the Merchant level and thrown into a JSON file for parsing. The original code didn't allow for there being no data for a division, and it could have easily been handled with something like this:

  @division_cache = source_data[division] || {}

but I forgot the "|| {}", so it was possible to get back a nil. That caused a nil-pointer problem, and that hurt.

The solution was simple, and thankfully, I had time to re-run everything, but it was something that again I've strayed from - good, solid, defensive coding. I miss it.

I wish I had more control over this project and could enforce this without the knowledge that it'll get ripped out in a few days by someone looking to "improve" the code.