Archive for the ‘Coding’ Category

Struggling With a Lot of Things Lately

Sunday, January 6th, 2013

This last week at home has been a real struggle for me. I'm trying to figure out why I'm working as hard as I am in a job with pretty much no bonus structure and co-workers who think that coming in at 8:00 am is too early. I'm not saying they're wrong - I'm wondering why I'm working as hard as I am if their hours are seen as OK.

There's similar stuff going on at home. I'm working harder than ever, but it appears to be a completely thankless job - even to those who should know better.

So the latest is that the project manager for the project at The Shop said that he had to have certain transparency features that he has now when we move to the new demand system - the system we've already moved to, by the way. He said that he needed two features that I had specifically told him he was not going to have if we moved forward with this project, and he said he was "OK" with that. I've come to learn that he's the one individual I've talked to at work that consistently never really listens to me.

Oh, I don't think I'm singled out in that regard - I'm sure there are lots of people he doesn't listen to, but when I take a lot of time to lay things out clearly and carefully, and he buys off on them, it's more than a little annoying to see him do a complete 180 once it's deployed. It shouldn't be a surprise, but it's incredibly frustrating, and it's really gotten me thinking.

Like why do I even try? Why not just give him the crap he's asking for, and then when he gets it, tell him he didn't listen to me, show him the emails, and stare him in the face. If he thinks he doesn't have to listen to me, maybe he doesn't? Who am I to say?

At the same time, I'm ready to quit. Of course I won't, but this is exactly what I hate most about the places I've been recently - crappy management. Really crappy management. If they want me to do something weird, or talk to some legacy system, or whatever, at least I can understand the reason and move on. But when the manager of the project is countermanding his own orders, it means there's nothing I can count on at all. There's no reason to say anything, and in short, everything is a waste of time.

That's what's so frustrating. Feeling like I'm wasting my time. I hate that.

Anyway, tomorrow is the first day back, and I'm going to present my case, point out that he was told about these features, and then let him pick. After he chooses wrongly, I'll just talk to my management (other management - this place is like Office Space) and tell them that this is the one thing I hate, and the thing that has caused me to leave other places. I'm not being mean or nasty, but if they understand what total crap this is, I think they'll at least understand my position - even if they don't applaud my response to it.

They can't like wasting their time any more than I like wasting mine.

I've got a lot of anger towards a lot of people right now. I wanna just throw it all away, but I have to deal with these people, and that makes it very, very hard.

Moving Forward with Low-Priority Work

Wednesday, January 2nd, 2013

Building Great Code

This evening I did a little work on a few low-priority tasks in the project I'm on. Normally, I wouldn't bother with stuff like this, but a nice guy in the group in Palo Alto really wanted this stuff added to the code, so I took the time to finish the work I started several weeks ago that he needed.

Normally, I agree with the priorities we have. It keeps my work at a manageable level. But there are times - like this - when I feel kinda bad that I don't have a few more hours in the day/week/month to push a few tangential things forward a bit. Yeah, it's gotten me scolded in the past, but my argument this time is simple - I'm at home, I'd already done a lot of work for the day on the stuff that's meant to be my high priority, and it was after 5:00 pm.

Basically, I did it on my own time.

Thankfully, it wasn't that hard - took me a few hours, and now it's in a pull request to the main codebase, and we should have something ready to test in a day or so. I feel like I've been a "nice guy" today. Makes me feel good.

Refactored the Closed Deal Code

Wednesday, January 2nd, 2013

Dark Magic Demand

Today I spent a good chunk of the day refactoring the clojure code that handles the Closed Deals out of Salesforce. Specifically, after I got all the original work done, I realized that as it stood, it didn't have a chance of working. Why? Because the deals would change over time. The moment a deal popped up, it'd have a fill count of zero, and then over time, it'd increase to some final value.

This means we'd have to have mutable database records, and that's totally against the immutable concept that my co-worker had for the entire clojure-based project. Making the records mutable would make it impossible to re-run the code for any point in time - and that's no good. So it meant that I needed to re-do the code to work with sets of closed deals, and then compare the sets I load in from Salesforce to the sets available in the database.
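To make that concrete, the heart of it is just a set difference - this is a minimal sketch, where sf-deals and db-deals are hypothetical names for the deals pulled from Salesforce and the deals already loaded:

  (require '[clojure.set :as set])

  ;; The deals present in the Salesforce pull but not yet in the
  ;; database - these are the only ones that need inserting, and
  ;; nothing already in the database ever has to change.
  (defn new-deals
    [sf-deals db-deals]
    (set/difference (set sf-deals) (set db-deals)))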

It wasn't horrific, but it wasn't trivial. I'm getting better at clojure, and that's nice, but it's also the korma library and all the other supporting tools that I need to get up to speed on to really be productive. In the end, there were a few issues, but I was able to get them resolved pretty easily.

In the end, I was able to get the imports running again in UAT and that was a great feeling. It was (albeit tiny) progress for the day.

Added Closed Deals to Dark Magic

Sunday, December 30th, 2012

Dark Magic Demand

This morning I finally finished up something I've been working on, on and off, for several days now - adding Closed Deals from Salesforce into the Demand Service that we're building in clojure. At some point, I'll probably drop the statement about what it's written in, but it's still too soon, as I think it was picked for all the wrong reasons. But that's neither here nor there this morning - it is the tool for now, and the future will bring what it brings.

The reason for needing the Closed Deals from Salesforce is that we are still totally dependent on Salesforce for holding all the actual bookable deals and merchant data. If we want to adjust the demand forecast by what's in inventory, then we need to get the data from Salesforce on at least a nightly basis, and use that data to update the demand forecast and "back it off" by the deals that have been closed since the demand was generated.

So if there's a demand forecast point of 1000 units, generated three days ago, and the sales reps closed a deal for 500 units yesterday, we really only need to show a demand of 500 units today. The problem with all this is that Salesforce is not exactly known for useful data and effective schemas. It's all there, but it's by no means easy to get to, or easy to use.

The first thing to do was to spend a day or two just getting the data from Salesforce. Not as easy as I'd have hoped, as everything seems to be a REST interface - what… have these people never heard of sockets? Anyway… I had a lot of grief with the paging that you have to do with Salesforce, as it can't (won't) send you all the data at once. And it's not a size-limit thing, though they may advertise that as the reason - I've gotten "pages" with three small elements in them - so it's more than that, and for whatever reason, it's there and I have to deal with it.

I thought I had it all figured out, but I was slightly mistaken about the functionality of the take-while function in clojure. It seems that it continues as long as the value returned by the predicate is "truthy" in some sense of the word. Meaning, it automatically stops on hitting a nil - but I had made a separate function to test for that. Simple mistake, and my version worked, but it wasn't the "clojure way", and when in Rome…
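The upshot is that the predicate can be as simple as identity. Here's a minimal sketch of the paging loop, where fetch-page is a hypothetical stand-in for the real Salesforce call - one that returns a page of records, or nil when there's nothing more to send:

  ;; take-while stops at the first falsey value, so the paging
  ;; ends itself the moment a page comes back nil - no separate
  ;; nil-testing function needed.
  (defn all-records
    [fetch-page]
    (->> (iterate inc 0)          ; page numbers 0, 1, 2, ...
         (map fetch-page)         ; fetch each page in turn
         (take-while identity)    ; stop on the first nil page
         (apply concat)))         ; flatten pages into records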

After I was able to get the data, I spent a couple of days just figuring out the PostgreSQL database schema so that we can load up the data easily and then get it out of the database just as easily. We also needed to make sure that we created the clojure entities for these tables, and that they are related to one another in the proper way. It's a usable, but manual, ORM for clojure, and when in Rome…
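Roughly, the entity side of that looks like this - a sketch, with deals and line_items as hypothetical names for the real tables:

  (require '[korma.core :refer [defentity pk has-many belongs-to]])

  (declare line_items)

  ;; One entity per table; the relationships tell korma how the
  ;; tables join when we pull related rows.
  (defentity deals
    (pk :id)
    (has-many line_items))

  (defentity line_items
    (pk :id)
    (belongs-to deals))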

With the schema working, I then had to try to load the data into the tables. This started out OK, but then as soon as I tried to read it back out, I ran into problems. The way the code is structured, we read out the potentially matching data, compare it to the incoming record, and then based on the results of that comparison, we either stop what we're doing (it's already there) or we insert the new data.

My code was failing on pulling the data out, as the comparisons weren't working as planned. What I saw was a nice opportunity to change the logic a bit, so I did. I created a function that simply looked in the database to see if the deal I had in hand was already in the database. If so, it returned the ID of that deal. If not, it returned nil. This was really nice in that I don't care to read it all out and then compare it - I just want to know if it's already there!
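The function itself is tiny - here's a sketch, where deals is a hypothetical entity and :sf_id stands in for whatever the real Salesforce key column is:

  (require '[korma.core :refer [defentity select fields where limit]])

  (defentity deals)   ; hypothetical entity for the deals table

  ;; Return the id of the matching deal, or nil if we haven't
  ;; loaded it yet - no reading everything out and comparing.
  (defn deal-id
    [deal]
    (:id (first (select deals
                        (fields :id)
                        (where {:sf_id (:sf_id deal)})
                        (limit 1)))))

The loading step then just skips the insert whenever deal-id comes back non-nil.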

This made things a lot nicer, and then things really started working. Very nice. No duplicates are loaded, so we can run this script over a historical two-week period every day and be assured that we're missing nothing. Very sweet.

Took a while, but I learned a lot, and it's working well now.

Ruby Matrix Tools – Could be Something Interesting

Saturday, December 29th, 2012

This afternoon I saw a re-tweet from the JRuby Dev Team about a new set of Ruby matrix tools, so I looked at it, and it seems he's started something pretty nice.

While the real purpose of using clojure on the most recent project is to make it appear that we're cutting-edge, and to retain some talent that's not interested in doing anything that isn't clojure work, the published reason is that it's ideal for this type of computationally intensive work, and all other languages don't even come close.

As an old modeler who has dealt with a lot of data in my day, this is complete hogwash. But it's how they are selling it - with a wink - to management. I think it's still a horrible mistake, but I understand that there are political reasons for it, and those are the real meat of the issue.

While I'd love to see something really powerful for dealing with large data sets and matrices in clojure - and there might be - the first tip I got was a real loser, as I found out that it wasn't really complete - not even a skeleton at this point. So again, I think it's the hope and promise of clojure that got these guys to pick this tech, and not the cold, hard facts.

This, it seems, is the starting point for a really viable alternative. Sure, it's all object-oriented, which in the functional space is taboo, but when dealing with problems like these, it's very practical. The java code is probably reasonably fast, certainly when compared to standard Ruby, but I'm not sure it's going to hold a candle to C++ code based on really solid toolkits like BLAS and LAPACK. But again, he's just starting, and that's the important first step.

I'm going to keep an eye on this, as I think it's likely that we'll get the order to switch back to jruby from clojure just because it's so hard to find people, and clojure is so bloody cryptic that it makes ruby code look absolutely verbose. But hey… I'm easy - if they want to do this in clojure, I'll go along. If they decide to switch, I'm OK there too.

If they do, this will be something I'll be looking at using. Very interesting.

Added Checks for Database Version in Code

Friday, December 21st, 2012


In the clojure project I'm on, we have adopted the idea of migrations for the database. Basically, there's no single script that directly builds the current schema - you build the first version of the schema and then migrate it through all the different changes until you get to the current, and final, version. The idea here being that any version of the database can be brought up to snuff with the same code.

I look at it as kinda silly if we have to go through too many of these, because it's unlikely that the databases will be out of sync for more than a day or two - they're most likely updated at nearly the same time. But hey, what do I know? Right?

The issue with this migration scheme is that it's possible to have the code running against an old version of the schema. I suppose the reverse is possible as well, but here it seems much more likely, with all the changes this migration strategy seems to empower. So I needed to make something that would look at the database we're about to run against, then look at the migration path, and see if this database was up to date. If not, stop right there. Why? Because it makes sure we don't run the code against a database schema that the code wasn't intended to run against.
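The check itself doesn't need much. Here's a sketch of the idea, where schema_migrations and its version column are hypothetical names for wherever the applied migrations get recorded:

  (require '[korma.core :refer [defentity select aggregate]])

  (defentity schema_migrations)

  ;; Compare the newest migration recorded in the database with
  ;; the newest one the code was built against, and refuse to
  ;; start if they disagree.
  (defn check-db-version!
    [expected]
    (let [applied (:latest (first (select schema_migrations
                                          (aggregate (max :version) :latest))))]
      (when (not= applied expected)
        (throw (Exception. (str "database is at migration " applied
                                " but the code expects " expected))))))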

As an aside, this totally goes against the idea that the code should be more adaptive, but that idea seems to be not as well received in the group. They seem to want to know where the database schema is, and that it's where it should be for all the code - as opposed to using views and stored procedures to insulate the code from such schema changes. It's even possible to add another layer in the code to provide further insulation, but this works, and I'm certainly not going to change their minds on this. Pick your battles.

Now that we have this, it's safer to know that there's little chance of deploying code and running it with a mismatched database underneath it. That's reassuring, however it's done.

Updated Demand Service to Time-Series Demand

Thursday, December 20th, 2012

Dark Magic Demand

Today I spent most of the day adding time-series demand to the existing system - server and client. This meant that I needed to migrate the database tables a bit - probably in a way that was a little more industrial-strength than I needed - but it was a nice way to get back into the swing of things. While I could have gotten away with renaming the old column in the table, adding the new, correctly structured column, migrating the data from the old column to the new, and then dropping the old column, I chose to rename the table, drop all the foreign keys, make a new table, migrate all the data, and then add back the keys as needed.

It took a little more time than the simpler approach would have, but it wasn't bad, and in the end, I'm glad I did it, as it got me back into the swing of things with SQL and PostgreSQL. When it comes time to do a much grander migration, and these steps are required, I'll be ready.

Other than that, it was pretty straightforward. I needed to make sure that the old version of the API was unchanged, and that wasn't too hard, but then the new version had to be handled in the client (ruby) code, and that proved to be a little more challenging than I had thought.

The scheme we have is that if the demand is a time series of points, they will be in an array in the output, and the size of that array will determine the interval of the points - but always spanning a year. Twelve points in the array means the first point is the demand for this month, and the second point is for next month, etc. If there are 52 points in the array, then the first point is for this week, the next is 7 days out, etc.

Pretty simple, but then we needed to know the starting date for the series. After all, if this data is served up a week from now, how is the client to know which points to use? It makes sense to add a generated_at field in the API, which is the starting point for the data in the time series. Once I had that in the ruby code, it was a matter of seeing what kind of data I was getting, its length (if it's an array), and then looking forward from the generated_at time to the point in time that I'm interested in.
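The real lookup lives in the ruby client, but the logic is simple enough to sketch in clojure - the names here are made up, and the monthly case is bucketed in 30-day steps as an approximation:

  ;; points spans one year: 12 entries means monthly buckets,
  ;; 52 means weekly. generated-at and date are java.util.Dates.
  (defn demand-at
    [points generated-at date]
    (let [days (quot (- (.getTime date) (.getTime generated-at))
                     (* 1000 60 60 24))
          idx  (case (count points)
                 12 (quot days 30)    ; rough monthly bucket
                 52 (quot days 7))]   ; weekly bucket
      (nth points idx)))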

In all, not bad, and I'm glad I put this code into the main app now, as I want to pin this stuff down, and it's quite often the case that the guys in the group are kinda waffly about things like this. Get it in, get it done, and then their natural laziness keeps them from messing with it too much.

Simplifying SQL Arrays in Clojure

Wednesday, December 19th, 2012


I've just spent several hours working with a co-worker on simplifying the way we handle SQL Array values in the code. Previously, we had to build the INSERT statements and execute them directly - thus avoiding korma's generation of the PreparedStatement and the setting of its values. This was OK, but it'd be nicer to update korma to have it do all this properly.
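For the record, the old workaround looked roughly like this - a sketch using korma's exec-raw, with the seasonality table and its integer[] column as the example. The ?::integer[] cast lets the array ride along in its text form, so nothing ever has to set a java.sql.Array on the statement:

  (require '[clojure.string :as string]
           '[korma.core :refer [exec-raw]])

  ;; Hand-built INSERT: the factors go over as "{1,2,3}" and
  ;; PostgreSQL casts that text to integer[] on the way in.
  (defn insert-seasonality!
    [service factors]
    (exec-raw ["INSERT INTO seasonality (service, factor_by_month)
                VALUES (?, ?::integer[])"
               [service (str "{" (string/join "," factors) "}")]]))

Getting korma to do this for us properly is what the afternoon was about.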

So we did. It was a little frustrating because we got off-track several times, but in the end, we have something that will make it very nice to use for the other times we need arrays in the code.

So time well spent.

Dealing with PostgreSQL Arrays in Clojure

Wednesday, December 19th, 2012


One of the things I want to use a little more in PostgreSQL is the ARRAY datatype. It's pretty simple to imagine why it's a good thing -- face it, I can have the data flattened out:

  CREATE TABLE seasonality (
    id              uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    service         VARCHAR,
    jan             INTEGER,
    feb             INTEGER,
    mar             INTEGER,
    apr             INTEGER,
    may             INTEGER,
    jun             INTEGER,
    jul             INTEGER,
    aug             INTEGER,
    sep             INTEGER,
    oct             INTEGER,
    nov             INTEGER,
    dec             INTEGER
  )

or we can put it in an array, and it's so much cleaner to deal with:

  CREATE TABLE seasonality (
    id              uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    service         VARCHAR,
    factor_by_month INTEGER[12]
  )

The problem is that when dealing with Korma, it doesn't like the array datatype as it's coming back as a Jdbc4Array and not something clojure can easily compare to a standard array of integers.

So what's a guy to do? Well… it's a pretty simple conversion:

  (defn extract-factors
    "Pull the Jdbc4Array out of the :factor_by_month field and
     convert it to a clojure sequence."
    [record]
    (-> record :factor_by_month .getArray seq))

and the output of this is easily comparable to a simple array of integers.

Not bad. I spent a little time on this, and StackOverflow helped, but clojure isn't too bad at this - it feels a lot more like erlang.

UPDATE: a co-worker pointed out that I could really make this a lot nicer, and I have to agree. The first thing is to make the function just convert the Jdbc4Array to a sequence:

  (defn array->seq
    "Convert a Jdbc4Array into a clojure sequence."
    [array]
    (-> array .getArray seq))

then we can make use of Korma's transform method on the entity to transform the data on the select calls:

  (defentity seasonality
    (transform (fn [rec] (update-in rec [:factor_by_month] array->seq))))

Then, as soon as we see the data from the select, it's already converted to a sequence, and we don't have to worry about it. Works great, and really makes things cleaner.
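For example - with made-up data - ordinary equality against a vector of integers now just works, since the transform has already turned the column into a sequence:

  ;; Rows from a plain select arrive pre-converted.
  (let [row (first (select seasonality))]
    (= (:factor_by_month row) [5 6 8 10 12 14 14 12 10 8 6 5]))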

Google Chrome dev 25.0.1364.2 is Out

Wednesday, December 19th, 2012

This morning I noticed that the Google Chrome team released dev 25.0.1364.2, and once again put out completely useless release notes. It's really amazing how little effort some people will put into their craft. But then again, I've worked with a lot of folks like this, and thankfully, I'm still totally baffled by their attitude.

I hope I never understand them.