Archive for the ‘Coding’ Category

First Day at The Shop

Monday, May 21st, 2018

Well... it's the first day at the new job, and I've got a lot of good feelings about this place. First, it's about giving back - at least a little - to the folks that grow all the food we eat. After spending a lot of time making "business", it's nice to be a part of something that is really making a thing. Nice.

Then there's the idea of possibly staying with Clojure - there's a good bit of it here, and it's supported at the highest levels, and that's both a good sign, and a nice chance to keep doing some of the really good work I've been doing for several years now.

I have no idea what will happen… it’s only the first day, but I’d really like to think that this could be a place to stay for a while. I’m getting tired of the stress of the last year.

I Love the Sound of the Train

Monday, June 6th, 2016


I was just sitting in my office at home and heard the sound of a train rolling by my house. I can't be more than 50 yards from the tracks, and I just love it. When I was a little kid, I'd visit my grandparents in a little town in upstate Indiana, and from the room we'd stay in, we could hear the trains go by. They were a lot further away than the trains that run by my house, but I loved the sound then, and maybe I love the sound now because of what it meant to me then.

Memories are powerful things. I'd like to hold onto the good ones, and let go of the bad, but life isn't like that. You have to accept the bad ones, and enjoy the good. That's what life is about.

Pulling Query Params from URL in Javascript

Wednesday, June 24th, 2015


As part of this continuing project at The Shop, one of the things we're going to need is the ability to put URLs in emails so that when the user clicks on one, it takes them right to the document in question. This means the index.html page needs to accept a query param with the ID of the thing to load. Then, if that query param is present when we get to the page, we load the requested data; if not, we put up a blank page and let the user search for the document they want.

Seems simple - if we can get the query params from the URL in Javascript.

Thankfully, someone posted this lovely little function:

  /*
   * This function extracts the provided query string from the URL for
   * the page we're on, and it's a nice and simple way to get the parts
   * of the URL that we're looking to see if they provided.
   */
  function qs(key) {
    // escape RegEx meta chars
    key = key.replace(/[*+?^$.\[\]{}()|\\\/]/g, "\\$&");
    var match = location.search.match(new RegExp("[?&]"+key+"=([^&]+)(&|$)"));
    return match && decodeURIComponent(match[1].replace(/\+/g, " "));
  }
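To see how this behaves, here's a quick stand-alone run of the same function with a faked-up location object (in the browser, location comes from the page itself); the loan value here is just an example:

```javascript
// Simulate the browser environment for illustration - in a real page,
// `location` is supplied by the browser.
const location = { search: "?loan=12345&view=full" };

function qs(key) {
  // escape RegEx meta chars
  key = key.replace(/[*+?^$.\[\]{}()|\\\/]/g, "\\$&");
  var match = location.search.match(new RegExp("[?&]" + key + "=([^&]+)(&|$)"));
  return match && decodeURIComponent(match[1].replace(/\+/g, " "));
}

console.log(qs("loan"));    // "12345"
console.log(qs("view"));    // "full"
console.log(qs("missing")); // null - the param isn't in the URL
```

Note that a missing parameter comes back as null rather than an empty string, which is what makes the numeric check in the next snippet work cleanly.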

Then we can have a little script block at the bottom of the index.html page, after loading all the javascript libraries, that checks to see if this parameter is being provided:

  // when the page is done loading, then load up the form
  var lid = qs('loan');
  $( document ).ready(function() {
    loadLoan($.isNumeric(lid) ? lid : null);
  });

This snippet does everything we want - it checks for a numeric value and uses it if it's there; if not, it shows the blank page. Very nice.

Nice Bootstrap Trick for Clean Search Box

Wednesday, June 24th, 2015


I've been working to build a little collaboration tool at The Shop with a Clojure back-end and a simple Bootstrap front-end. I have to admit that Bootstrap and Handsontable are some amazing tools, and I could not imagine doing the project without them.

Part of this was to have a simple 'Search' feature in the top menu bar where the users could input the known ID of a document to pull up, and the system would do it. Thankfully, Bootstrap supports this capability nicely:

[screenshot of the search box in the navbar]

But the problem with this is that it's put in the page as an HTML form:

  <form id="search_loan" class="navbar-form navbar-right" role="search">
    <div class="form-group">
      <input id="loan_id" type="text" class="form-control" placeholder="Search">
    </div>
    <button id="find_loan" class="btn btn-default">Go!</button>
  </form>

so that when you hit 'Enter' in the text box, or click the 'Go!' button, it performs a POST, and your code either has to be ready for the POST, or you have to refresh the data - and that's not simple or clean. The solution is to intercept the form submission and hijack the event to do your bidding.

At the bottom of your HTML page, where you load all the javascript libraries, you can put a little script block, last of all, and it'll do some cool stuff:

  // set up the form components to work as we need them
  $("#search_loan").submit( function(e) {
    e.preventDefault();
    loadLoan(document.getElementById('loan_id').value);
  });

This little bit of code will capture the form submission event, prevent its default behavior from occurring, and then call the loadLoan function with the contents of the text box in the search field.

Given that this function is what you want to have happen, this will make the search box work just like you want. All from one page, no redirections, no calls to refresh the page. Just load up the data on the search. Very cool.

Twitter Ad Service API

Thursday, May 14th, 2015


Today I spent a good bit of the day trying to figure out how to authenticate with Twitter's OAuth 1.0 system, and I think I'm getting close, but I'm still a bit away because I don't control these accounts, and the sheer volume of ways to authenticate on Twitter is daunting. Let alone the different APIs.

There is the client-facing Tweets API, and then there's the Ad Server API, and it's not at all clear why there need to be different authentication schemes for these APIs. But it should be clear that access to one set of APIs probably should not guarantee access to another set - maybe they handle that in the authorization, but it's not clear from the docs I'm reading.

And speaking of docs, wow... these are really something else. There are at least four ways to authenticate, but they ask people to use libraries - that they don't provide. Sadly, I don't see one that does 100% of what I need. I do see an OAuth 1.0 library, but the Client ID and Secret are nowhere to be found on their site.

So clearly, I'm missing something.

What I believe is that you have to create an App, which then gets you the redirect URL and the ID and Secret. There were none already defined to base a new one on, so I sent off an email to the Twitter representative to see if this was, indeed, the preferred way.

While I was waiting, I decided to try to make an app. Yet in order to do that, you need to assign a mobile phone number to the Twitter account, and I can't really do that because the account is not mine. So I sent another email to the relationship folks at The Shop about that.

In short, it's just a waiting game. But it's also so much more of a mess than the other systems I've been integrating with. Wow...

Heroku Adds Redis

Tuesday, May 12th, 2015


This afternoon I saw a tweet from Heroku about them adding Redis to the add-ons for their service. This comes just a few days after their announcement that Postgres was available for the free tier, and the new "Free" tier for apps. They are getting aggressive with the services they are providing. This makes a ton of good sense to me, as I'm a huge fan of Redis from Clojure, and it makes all the experience I've got in building apps directly transferable.

While I know the Free tier isn't all that great, the idea that there is a Free tier is amazing, and it means that I can write something and throw it up there, and as it's needed, I can scale it up. Very cool. They also have a hobbyist tier that's only something like $8/mo. - similar to GitHub.

If I needed to be firing up a web service, it'd be clojure, redis, and postgres - all on Heroku. What an amazing service.

Wild Bug in ZooKeeper

Friday, May 8th, 2015


I read on Twitter about a bug in ZooKeeper found by the folks at PagerDuty. The story is quite remarkable, and reminds me that some companies still invest the time to get to the bottom of things - as opposed to just putting it off once a work-around is found. The level of detail and investigation they did is simply... inspiring. I'm stunned.

I'd like to work at a place where that kind of stuff is done. Not all the time, of course, as I'm sure they were all glad that it was over when it was over, but to be able to devote the time to solving the problem as opposed to stopping it - that's nice.

I don't imagine anything I'd do would hit this series of bugs. Too many components that simply aren't something I'd ever want to use. But it's nice to know someone is there digging deep.

Some days things work out…

Friday, May 8th, 2015


This morning I saw on HipChat messages from two folks at The Shop:

Chris M. said that you helped him to setup our new hardware. Guess what? I just ran a test ETL in new hardware, it is 5 times faster. The full MMS ETL cycle takes about 2-2.5 hours. In new server, it takes 0.5 hour. THANK YOU for whatever you helped Chris M. 🙂
-- Okji

and from Chris:

So, you were 100% right on the hardware specs for the Pentaho stuff.

Okji is running initial stuff now and it's insanely fast.

thank you for dealing with a stubborn asshat me through the ordeal and lighting a fire under my rear.

tbh I'd probably be flogging the dead virtualization horse at this point w/o that back and forth we had.

So yeah, thanks 🙂
-- Chris

I don't often have people sending me these kinds of notes for work I did for them. The problem was simple and obvious - to me, but if you have never seen the other way, you often think your way is the only way. I've seen it a million times. The point is to get them started, let them see, and then be very gracious when they thank you.

That last part is key.

You want to build up everyone - not just yourself. Help others feel good about what they did, and they will want to work with - or for - you again. It's simple. Who wants to be around someone that makes them feel bad about themselves? No one I know.

Interestingly enough, this is going to make things work a lot better for the short-term goals. There's a consultant at The Shop, and this is going to make his Uber Plan much less attractive - and much less necessary.

Postgres has Added UPSERT

Friday, May 8th, 2015


One of the things I've always wanted in Postgres is the UPSERT - an INSERT that updates certain fields if the row (defined by the primary key) already exists. In the past, I've had to implement this as a custom function (stored procedure) in PL/pgSQL, checking for the existence of the row and then doing an UPDATE or, failing that, an INSERT. It's workable, and it's not horrible, but it's also something that's in several other databases, and I wanted it in my database. 🙂

This morning I read a tweet that said:

[screenshot of the tweet announcing the UPSERT commit]

and read the commit log message that described it. I love it! This is exactly what I've been hoping for.
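From what I've read of the commit, the new syntax looks roughly like this - just a sketch with a made-up table, and subject to change before it's actually released:

```sql
-- hypothetical table; the new INSERT ... ON CONFLICT form does the
-- "update if the primary key already exists" dance in one statement
INSERT INTO loans (id, status, updated_at)
VALUES (42, 'open', now())
ON CONFLICT (id)
DO UPDATE SET status = EXCLUDED.status,
              updated_at = EXCLUDED.updated_at;
```

The EXCLUDED pseudo-table refers to the row that was proposed for insertion, so the UPDATE half can reuse the incoming values. No more race-prone check-then-write functions.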

The only question now is - When is it released?

LinkedIn API for a Recruiting Tool

Thursday, May 7th, 2015


I was asked today to look at the LinkedIn API to see if I could access the data at LinkedIn to make an advanced recruiting tool for the Recruiters here at The Shop. The idea was to take a resume we received, match it to a LinkedIn profile (I'd venture that 90%+ of them are there), and then use advanced analytics to rate the prospective resumes for potential success at this job.

It's an interesting idea. The real advantage for LinkedIn is that companies like ours pay several thousand dollars a month to have access. With this kind of tool, that same data could be used to classify candidates by the data they have entered, and then a nice predictive model could say who is most likely to succeed. It's simple feedback.

We take everyone who's been successful at this company, reference their LinkedIn profiles for the training data, and then use any and all reviews to say which of these people are likely to be the successful ones - based on all the classified data that's on LinkedIn.

It's kinda neat. We don't have to wonder what factor(s) matter most - we can get all that LinkedIn has in their API, use the success factor - say, a 1-5 rating - as the outcome, and then train away. After that, every submitted resume can be run through the trained net and come up with a score and a confidence number. Pretty simple.
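Just to make the shape of the idea concrete, here's a toy sketch of the scoring step. The feature names, numbers, and ratings are all invented for illustration, and a real version would use a properly trained model - this just averages the ratings of the most similar past hires, nearest-neighbor style:

```javascript
// Toy candidate scorer: score = mean rating of the k most similar past
// hires, confidence = a crude 1 / (1 + mean distance). All data is made up.

function distance(a, b) {
  // Euclidean distance between two feature vectors
  return Math.sqrt(a.reduce((s, x, i) => s + (x - b[i]) ** 2, 0));
}

function scoreCandidate(candidate, training, k) {
  const neighbors = training
    .map(t => ({ rating: t.rating, d: distance(candidate, t.features) }))
    .sort((a, b) => a.d - b.d)
    .slice(0, k);
  const score = neighbors.reduce((s, n) => s + n.rating, 0) / neighbors.length;
  const meanD = neighbors.reduce((s, n) => s + n.d, 0) / neighbors.length;
  return { score: score, confidence: 1 / (1 + meanD) };
}

// hypothetical features: [yearsExperience, endorsements, connections/100]
// with a 1-5 success rating as the outcome
const training = [
  { features: [5, 30, 4], rating: 5 },
  { features: [2, 10, 1], rating: 2 },
  { features: [8, 50, 6], rating: 4 },
  { features: [1,  5, 1], rating: 1 },
];

console.log(scoreCandidate([4, 25, 3], training, 2));
```

The two nearest training rows to that candidate are rated 5 and 2, so the score comes out 3.5 - the point being just that profile data plus a labeled outcome is enough to rank a pile of resumes.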

It's not meant to be fool-proof, but when you have a ton of openings, it's nice to be able to have something that whittles down the list of thousands to hundreds, or less - so that you can really focus on these people.

We'll see where it goes - if it goes anywhere.