I'm not a fan of OAuth2... not in the least. It's excessively complicated, it requires call-backs, and in general it's no more secure than anything else - it's just more complicated. Add to that there's no really good library for it, as the Google folks keep changing things, and you have something that's always going to require hacks... always going to require fixing, and never going to provide a seamless way to authenticate on a remote system.
But that's just an opinion. I have had to make it work at The Shop, and when I finally got it to work, I wasn't about to let this evaporate into the ether... I needed to make a gist of it, and document what I was doing so that I could come back to this and be able to remember it all at a later date.
The State of OAuth2 Clojure Libraries
The first really depressing thing was that there seemed to be no decent OAuth2 libraries for Clojure. There were a lot of forks of the clj-oauth2 library, but many, like the original, were years old - and they didn't work. Not even close. Now I'm not silly enough to think that the spec changed, but I do believe that Google changed things on its end to make it more secure, and in so doing, broke all the clj-oauth2 work and its derivatives.
Still, there is the code to look at, and some forks have pulled in the features that are needed, so it's not impossible to make this thing work... though it's likely to take a lot of time.
The project.clj File
When I was able to get something working, I made a gist of all the important files, so that I could include them here as reference. I also wanted to post the link to the #clojure room in IRC because one of the guys there gave me a hint as to which library to use. He wasn't right, but he was close, and that's all I needed.
The project.clj file has all the versions of the libraries I used:
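Since the gist is where the real details live, here's just a sketch of the shape of that file - the fork coordinates and version numbers below are illustrative stand-ins, not gospel:

(defproject dcm "0.1.0"
  :description "Stand-alone back-end collector for Google ad messaging data"
  :dependencies [[org.clojure/clojure "1.4.0"]
                 ;; the fork of clj-oauth2 with the most complete mapping
                 ;; of the data coming back from Google (incl. expires-in)
                 [clj-oauth2 "0.2.0"]
                 [compojure "1.1.3"]
                 [ring/ring-jetty-adapter "1.1.6"]
                 [clj-http "0.6.3"]
                 [cheshire "5.0.1"]])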
What I found was that this version of clj-oauth2 had the most complete mapping of the data coming from Google, including the expires-in time - something I think I may still be able to put to really good use soon. While it didn't have the functions to renew the access-token, that turns out not to be hard to write, and I pulled it from another fork of the master project.
The server.clj File
OAuth2 still requires that the user go to Google on a redirect, and then the call-back from them is where we get the first bit of the authentication data. I'm not convinced that this is at all necessary, but it's how things are. Given that, we needed server.clj to have an endpoint /google that redirects to the right place at Google for the user to log in and accept the app.
There is also the callback endpoint, and then a few that return the token data and renew the token. Nothing special, really, but the targets for the OAuth2 are really important, and it's just sad that we have to have them in the first place.
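To capture the shape of it, here's a sketch of those routes - google-oauth2-config, handle-google-callback, token-data, and renew-access-token are placeholder names for what's actually in the gist:

(ns server
  (:require [compojure.core :refer [defroutes GET]]
            [ring.util.response :refer [redirect response]]
            [clj-oauth2.client :as oauth2]))

(defroutes app-routes
  ;; bounce the user over to Google to log in and accept the app
  (GET "/google" []
    (redirect (:uri (oauth2/make-auth-request google-oauth2-config))))
  ;; Google redirects back here with the authorization code
  (GET "/oauth2callback" req (handle-google-callback req))
  ;; peek at the current token data, or force a renewal
  (GET "/token" [] (response (pr-str @token-data)))
  (GET "/token/renew" [] (response (pr-str (swap! token-data renew-access-token)))))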
The dcm.clj File
The final piece is really the meat of the problem.
We start off with the Missing Functions in the clj-oauth2 library, and then jump right into the static config for our application. Those values are all generated by Google, and you get them when you register your project/client.
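The config is just a map in the shape clj-oauth2 expects - every value below is a placeholder for the real ones Google hands you:

(def google-oauth2-config
  {:authorization-uri  "https://accounts.google.com/o/oauth2/auth"
   :access-token-uri   "https://accounts.google.com/o/oauth2/token"
   :redirect-uri       "http://localhost:8080/oauth2callback"
   :client-id          "1234567890.apps.googleusercontent.com"
   :client-secret      "not-the-real-secret"
   :scope              ["https://www.googleapis.com/auth/userinfo.profile"]
   :grant-type         "authorization_code"
   :access-query-param :access_token})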
We then have the authentication and re-authentication functions, which took an enormous amount of time to get right, but don't look overly complex in the least. Lovely.
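The re-authentication is the piece the library was missing, and it really is just POSTing the refresh_token back to Google for a fresh access token. A sketch of that idea, assuming clj-http and cheshire are on the classpath:

(require '[clj-http.client :as http]
         '[cheshire.core :as json])

(defn renew-access-token
  "POST the refresh_token back to Google and merge the fresh
  access-token (and expires-in) into the token map."
  [{:keys [refresh-token] :as token}]
  (let [resp (http/post (:access-token-uri google-oauth2-config)
                        {:form-params {:client_id     (:client-id google-oauth2-config)
                                       :client_secret (:client-secret google-oauth2-config)
                                       :refresh_token refresh-token
                                       :grant_type    "refresh_token"}})
        body (json/parse-string (:body resp) true)]
    (assoc token
           :access-token (:access_token body)
           :expires-in   (:expires_in body))))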
Finally, we have a few calls to test that we got the user profile information properly, and that we can make subsequent calls to Google and get the data requested. It's not a lot, but it works, and it proves that things are working up to that point.
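The profile check is nothing more than a GET against Google's userinfo endpoint with the token attached - assuming this fork kept clj-oauth2's request wrappers, it looks something like:

(defn user-profile
  "Fetch the logged-in user's profile - a quick sanity check
  that the access token actually works."
  [token]
  (-> (oauth2/get "https://www.googleapis.com/oauth2/v1/userinfo"
                  {:oauth2 token})
      :body
      (json/parse-string true)))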
In the end, I'm glad I have it all done, and I'll be integrating the Custos Server in as a secret store for the credentials soon. Then I'll use redis as the back-end, pulling data from Google and loading it there. All this makes a complete, stand-alone, back-end data collector for a client's ad messaging data from Google.
Today my daughter and I went to Best Buy to get a few things - and among them was the Sonos system for the house. I've been looking at this for a long time, and had even gone to the store on at least three occasions to buy them, but then walked out without a thing. I just didn't see it as a need. Then I saw her putting together her IKEA furniture listening to tunes from her phone. I knew this was pretty standard for my kids, but still... it seemed a little silly when we could fill the house with sound from that same phone.
So we got them. She's got a Play:1 for her room, and a Play:3 for the living area downstairs. I got a Play:5 for the living room upstairs, and a Play:3 for the kitchen, and finally a Play:1 for the office. She gets to control all the sound downstairs, and I can control it upstairs. It's really amazingly impressive.
The sound fills the house, and as you walk around from room to room the sound appears to already be there - well... because it is! There's no "walking away" from the music, and it's not too loud in any room - as you can control the speakers independently. This is just exactly what I was hoping for.
The set-up could not have been easier. Connect the base unit - a "Boost" unit - to the router, then just go around plugging in one speaker at a time, hitting a few buttons on the iPhone (or Android) app and on the speaker to register each one, and then it's all done. I was a little worried that this would require something on my Mac, but not so - just an iPhone will do nicely.
Now I am able to stream music and it's honestly a little better than the AirPlay from Apple, as I've had drop-outs there, but have experienced nothing like that with Sonos. Very cool. And it's just in time to be playing all my Christmas favorites!
This morning I've been fussing with MacVim and Sublime Text 2, trying to come to terms with the fact that Sublime Text 2 has crashed on me a few times, and I have no desire to go through that again. I have downloaded the beta of Sublime Text 3, and it appears to be decent - though because it's using Python 3, virtually none of the packages I use in Sublime Text 2 are going to work properly. At least I get the default behavior, and that's about 95% of what I use day-to-day anyway.
So I decided that it's time to give it a shot, and I switched from Sublime Text 2 to Sublime Text 3 on my main MacBook Pro. This isn't my work machine, but my machine, and as such it's not getting the same number of keystrokes as the work machine, but still… as I work on it I'm hammering on ST3. Hopefully they will get it out of beta soon, and the package maintainers will upgrade their code to be compliant with Python 3, and I'll be able to get back all the functionality I had in the older version.
Among the new features in ST3 is the extensive use of C++11, including move semantics, which should make for a much more efficient app. I'm hoping the crashes I've had go away, and if so, I'll be more than happy to foot the bill for the upgrade price.
This isn't to say I've given up on MacVim - I just read a message that the maintainer is looking to add the window/file/tab restore feature to it, but that it's "hard", and so I'm not expecting it anytime soon. Should that come to pass, I think I might have a much harder time choosing an editor. Yeah, it'd be tough not to use MacVim if it had that feature. It's such a powerful editor.
This morning I noticed that Apple had a Java 1.6 update for the Java exploit that's been going around, and Oracle had another for 1.7. It's interesting how vulnerable Java is these days - maybe because the other, historically more likely back-door (open ports on boxes) has dried up of late. So the jerks turn to Java, and find problems and exploits there.
Seems reasonable to stay up to date, given that some Facebook and Apple engineers have had their machines compromised just by visiting a hacked web site. Amazing that a drive-by visit can do that, but it makes sense in the context of Java browser plugins.
Crazy what poorly written software combined with kids with too much time on their hands can lead to. Kinda annoying…
Well… I've really held off about as long as I can, and even then, it's probably a bit too long for what I'm doing. This morning I downloaded the Oracle JDK 1.7.0_13 with the latest security fixes for my main laptop. This is a big switch as I'd been holding out for Apple to step back in and really do Java right, but I think they are past that now, and it's up to Oracle to make Java work on Macs, and I pray that they do.
I'm doing more clojure coding, and with that, I really need to have the 1.7 JDK underneath, as there are optimizations done in the compiling when it's on the 1.7 platform. Since we're using that on the linux boxes at The Shop, and I have 1.7 on my work laptop, it makes sense to give it up and switch.
I guess it's like my 17" MacBook Pro… legacy hardware. I'll move to the 15" with retina display for my next machine, and it's a beauty, and it's nice that it's smaller and lighter with the same pixel count, so it's a win - really, but it's a big change from all the years I've had these 17" machines.
Face the future, Bob… it might not be what you want, but it's all you have.
Today I worked on hitting Teradata from within clojure using clojure.java.jdbc, and I have to say it wasn't that bad. There are plenty of places that a few paragraphs of documentation could have saved me 30 mins or so, but all told, the delays due to googling weren't all that bad, and in the end I was able to get the requests working, and that's the most important part. I wanted to write it down because it's hard enough that it's not something I'll keep in memory, but it's not horrible.
First, set up the config parameters for the Teradata JDBC connection. I have a resources/ directory with a config.clj file in it that's read on startup. The contents of it are (at least in part):
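Something along these lines - the host and credentials here are made up, and the keys are the standard clojure.java.jdbc db-spec keys:

{:teradata {:classname   "com.teradata.jdbc.TeraDriver"
            :subprotocol "teradata"
            :subname     "//tdprod.example.com/CHARSET=UTF8"
            :user        "me"
            :password    "secret"}}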
The Teradata JDBC driver jars also need to be referenced in the project.clj, so that the next time we run lein, we'll get the jars, and they will know how to connect to the datasource.
Then I can simply make a function that hits the source:
(defn hit-teradata
  "Run a simple parameterized query against the configured Teradata source."
  [arg1 arg2]
  (let [info (cfg/config :teradata)]
    (sql/with-connection info
      (sql/with-query-results rows
        ["select one, two from table where arg1 = ? and arg2 = ?" arg1 arg2]
        ;; realize the rows before the connection closes
        (doall rows)))))
Sure, the example is simplistic, but it works, and you get the idea. It's really in the config and jar referencing that I spent the most time. Once I had that working, the rest was simple JDBC within clojure.
This afternoon I realized that we really need a few additional fields about the closed deals for the demand adjustment. Specifically, we have no idea of the start date for the deal, and while we have the close_date, that's not much use if it's empty - as many are, until they have an idea of when they really want to shut down the deal. Additionally, one of the sales reps pointed out that there are projected sales figures on each 'option' in a deal, and rather than looking at the total projected sales and dividing it up, as we have in the past, we should be finding those individual projected sales figures and using them - if no sales have been made.
Seems reasonable, so I added those to the Salesforce APEX class, and ran it to make sure it was all OK. There were no code changes to the ruby code because we had (smartly) left it as a hash, so additional fields aren't going to mess things up… but in our code we can now take advantage of them.
Surprisingly time-consuming, because I had to drop tables, add properties, and get things in line - but that's what happens when you add fields to a schema… you have to mess with it. Still, it's better than using a document database for this stuff. Relational with a simple structure beats document every time.
This morning I went to create a new Gist, and saw that GitHub has once again upped the ante on source control tools with a complete facelift of the Gist pages. This is really amazing. It now allows for tabs or spaces, and even lets you set the size of the indentation. This is just superlatively cool!
I'm amazed that these guys are able to keep raising the bar. It's so unusual for me to see things that I love, yet had no idea I needed. These changes are just more of the same from GitHub. What an amazing group of people.
This morning I was tracking down a bug reported by our project manager related to the prioritization phase. A particular sales rep wasn't getting a good call list, and I needed to dig into why.
After I added a bunch of logging, I was able to see that it was all a data problem. The fields in Salesforce are often just free-form strings, and that leads to sets of values that aren't easily enumerable. It's not necessarily Salesforce's fault - it's the way in which it's used - and we seem to be having a little problem with consistency here. But be that as it may, it's still our problem, and we need to figure out the proper way to get at these sales reps regardless of how they seem to be classified.
Sigh… these pseudo-business decisions are always the worst. They are made for "today", and change "tomorrow", and we're always going to be correcting for problems in the mappings.
Today I've been having real trouble trying to get our system scaled up to new hardware in the datacenter. Moving from hosts in Amazon EC2 to machines in our own datacenter is a big step in the right direction, and going from iffy bandwidth in EC2 to solid switches in the datacenter, and from 2 cores to 24, is something we simply have to do in order to scale up to handling the global data load that we're going to need.
The problems I've run into today are all about loading. I've already added a queueing system to the CouchDB interface in order to minimize the connections to the Couch server, so as not to overload it - there were times when the Couch server would simply shut down its socket listener, and therefore refuse all updates sent from the process. Not good.
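The queueing itself is nothing fancy - the whole idea is just to cap the number of in-flight updates to Couch. A minimal sketch of that idea in clojure, with couch-put! standing in for the real update call:

(def ^:private couch-permits
  "Cap on the simultaneous connections to the Couch server."
  (java.util.concurrent.Semaphore. 10))

(defn queued-update!
  [doc]
  (.acquire couch-permits)
  (try
    (couch-put! doc)   ; the real CouchDB update goes here
    (finally
      (.release couch-permits))))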
There are also a lot of problems today with Salesforce. I don't think they expected the kind of loads we're delivering. This morning, at 3:00 am, Salesforce called the support guys at The Shop and told them that a process was bringing one of their sandbox clusters to its knees - and that, it turns out, is but one of four boxes we need to bring online. They're having a hard time handling this much; I can't imagine what's going to happen when we try to really ramp it up.
I'm starting to have real concerns about both these endpoints of the project. I know there's a lot of data getting moved around, and while we're able to handle it, it's these endpoints that are having the hardest time. I've talked with the project manager and the technical manager about this, and I think we need to start thinking about potential bail-out scenarios.
It's certainly possible to read from Salesforce. We're planning on re-doing the complete demand system, so there shouldn't be an issue there. Persistence? Go back to MySQL or PostgreSQL and store it all in tables. The data is getting pretty nicely finalized, so a nice schema should be easy to put together. Save it all in a SQL database, make a simple service that reads/caches this data and offers it up to the clients, and the pages already built on the existing data sources are pretty easily modified.
Odd to think that I was looking at MySQL before Couch popped up. Funny thing is, I have a strong feeling that Salesforce can come up with hardware that makes the grade, but it's the bugs in Couch that worry me the most. You just gotta wonder if it's in the Erlang code, or on the boxes, or what.
So many unknowns, but it's clear that we can't just scale up onto one nice box - there's no way we're going to make it work globally without some serious changes.