Archive for the ‘Vendors’ Category

Finally Figured Out New Relic and JRuby

Monday, November 5th, 2012

JRuby

This afternoon, I decided to dig into the New Relic Ruby API and see if I couldn't figure out the problem that's been bugging me for a while. Basically, it's all about putting New Relic in a jruby-1.7.0 jar and getting the instrumentation to load.

I knew that New Relic had their code on GitHub, so I went and forked the code in hopes of fixing the problem.

I started digging in the first place I've come to suspect in jruby-1.7.0 - Dir.glob.

Sure enough, they had code that was using this in much the same way that the Configulations gem was: it was scanning directories, in this case, their own code, and loading the scripts it found. These scripts, it turns out, make up their "instrumentation" directory.

Makes perfect sense.

The call to Dir.glob returned an empty array, so there was no error - but nothing was loaded, either. Simple. I changed the code from:

  def load_instrumentation_files pattern
    Dir.glob(pattern) do |file|
      begin
        log.debug "Processing instrumentation file '#{file}'"
        require file.to_s

to:

  def load_instrumentation_files pattern
    Dir.glob(pattern.gsub(/^jar:/, '')) do |file|
      begin
        log.debug "Processing instrumentation file '#{file}'"
        require file.to_s

and, as expected, everything started to work just fine!
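To see why that one-line change matters, here's a minimal sketch (the jar path below is made up, not the real one): inside a JRuby self-executable jar, glob patterns derived from the running script can carry a leading `jar:` scheme, and jruby-1.7.0's Dir.glob silently matches nothing for that form.

```ruby
# Inside a jruby-1.7.0 self-executable jar, a glob pattern built from the
# script's own location can look like this (path is hypothetical):
pattern = 'jar:file:/app/service.jar!/new_relic/agent/instrumentation/*.rb'

# jruby-1.7.0's Dir.glob returns [] for the "jar:" form -- no error raised,
# just nothing matched -- so none of the instrumentation files get required.
# Stripping the leading "jar:" scheme restores a form it can expand:
stripped = pattern.gsub(/^jar:/, '')
puts stripped   # file:/app/service.jar!/new_relic/agent/instrumentation/*.rb
```

That's why there was never an error in the logs: an empty glob result loops zero times, and the failure is completely silent.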

I sent a pull request to the New Relic folks so they could fix their code, and I also sent them an email, since I was already in touch with their support guys.

Got a nice note back from one of the support guys thanking me for the fix. I know JRuby will get it fixed eventually, but this is really an issue that needs fixing sooner rather than later.

Maybe I'll fork that and fix it? 🙂

Testing NewRelic with JRuby

Sunday, November 4th, 2012

I've had a few questions about what's happening at The Shop with New Relic and JRuby 1.7.0, so today I decided to take a little time and really prove to myself what's up and what's broken with the two. My initial tests were with jruby-1.7.0, but then I backed off and started from what I remember worked - jruby-1.6.7.2. Because I needed to see this interact with New Relic, I thought that it'd be easy to just sign up for a free New Relic account.

This is really kinda neat. I've liked what New Relic can do, and they have a pretty decent free tier that makes it easy to try things out. So I signed up, got things going, and started getting some results:

So it looks like jruby-1.6.7.2 works both as a script and as a self-executable jar, and on the same graph I was able to get jruby-1.7.0 working as a script. So far, so good.

The last test was to run this under jruby-1.7.0 as a self-executable jar:

With these tests out of the way, I know that I can run New Relic with jruby-1.7.0 -- I just need to make sure it's getting started and that the Net::HTTP instrumentation is being loaded at the right time. I have high hopes for the runs that will kick off in a few hours!

[11/5] UPDATE: OK… bad news. Yes, for my simple tests, New Relic worked fine from a jruby-1.7.0 jar. However, with a more complex project - one with dozens of classes - the instrumentation isn't loaded, and we have a problem. I've sent another support request to New Relic, and we'll have to see what they say. If I could manually load the instrumentation, I'd be OK, but I don't know of a way to do that.

[11/5 3:00pm] UPDATE: Found the problem! It's the old JRuby Dir.glob issue and I fixed the New Relic Ruby API and sent them a pull request with the fix. I also sent them an email, so I'm hoping they take it to heart, but we have the fix we need, and that's the main issue.

Finally Solved a Nagging NewRelic Problem

Friday, November 2nd, 2012

JRuby

Ever since we moved to JRuby 1.7.0, we've had a problem with our New Relic stats, and it's really been quite annoying. New Relic makes all these nice graphs from the instrumentation you place in your apps, and then makes it dead simple to view them and drill down into any performance issues you may be having.

In the last few weeks we've been battling quite a few Salesforce performance issues, and without the New Relic graphs, it was essentially impossible to know the details of the calls we were making to Salesforce. This was nasty. I knew it was all about JRuby 1.7.0, but why? Thankfully, the New Relic support guys gave me a good idea.

Basically, it boiled down to my app not loading the net/http instrumentation at the right time. It needed to be loaded very early in the startup cycle - it was supposed to load automatically, and it clearly wasn't. I had added DEBUG logging - nothing seemed to help. Then I looked at the code again:

  require 'ext/rpm/lib/new_relic/agent/error_collector'
  require 'net/http'
  require 'newrelic_rpm'

and I started to wonder: What if there's a problem with that agent? So I changed the code to read:

  require 'net/http'
  require 'ext/rpm/lib/new_relic/agent/error_collector'
  require 'newrelic_rpm'

BINGO! Works!
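The pattern at work here is a common one: instrumentation gems patch a class at require time, so if the target class isn't loaded yet, there's nothing to patch and the hook is silently skipped. A minimal sketch of that behavior (this is an illustrative module, not New Relic's actual code):

```ruby
# Instrumentation that can only install itself if Net::HTTP is already
# defined when it runs -- mirroring why net/http has to be required
# before newrelic_rpm. (Illustrative only; not New Relic's code.)
module HttpInstrumentation
  def self.install!
    return false unless defined?(Net::HTTP)   # target not loaded: skip silently
    # a real agent would wrap Net::HTTP#request with timing hooks here
    true
  end
end

HttpInstrumentation.install!   # false if net/http hasn't been required yet
require 'net/http'
HttpInstrumentation.install!   # true once the target class exists
```

Which is exactly why swapping the require order fixed it: by the time `newrelic_rpm` loads, `Net::HTTP` already exists and the hooks have something to attach to.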

I'm so glad! This has been a real annoyance, and now I have a work-around. Don't really care if it's something I have to manually do, I'm just glad it's working.

More Salesforce APEX Code

Thursday, November 1st, 2012

Salesforce.com

Based on a few rules we're trying to implement in the system, I needed to dive back into the Salesforce APEX code for our endpoint(s) and add a few things that weren't there yet. Specifically, I needed to take one more step toward getting the two endpoints that deal with Merchants a little bit closer together. This wasn't all that easy: APEX is like Java, but not quite, testing the code is hard, and even editing the code isn't all that easy.

Salesforce.com is a web site. So to edit a class, you need to copy it out of a text field, paste it into some local editor, make the changes, copy it back into the text field (assuming the session hasn't timed out), and then save it. To test it, you need to go to a different web page, put in your URL with parameters, and see what results you get.
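For that URL-with-parameters testing step, building the test URLs programmatically at least saves some typing. A rough sketch - the endpoint path and parameter names here are hypothetical, not our real ones:

```ruby
require 'uri'

# Build a test URL for an APEX REST endpoint. The base host, path, and
# parameter names are all placeholders -- substitute your org's actual
# endpoint and params.
def apex_test_url(base, path, params)
  uri = URI.join(base, path)
  uri.query = URI.encode_www_form(params)
  uri
end

url = apex_test_url('https://na1.salesforce.com',
                    '/services/apexrest/merchant',
                    'id' => '12345')
puts url   # https://na1.salesforce.com/services/apexrest/merchant?id=12345
```

Paste the generated URL into the browser (authenticated session and all) and see what comes back - it doesn't fix the workflow, but it makes the test loop a little less manual.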

It's workable, but it's nowhere near "efficient". I know from a friend who works there that things are changing, and that would be great. Something to make the development process with Salesforce easier would go a long way toward making the entire experience, and therefore the impression of Salesforce, a lot better.

Still… it's possible to write code and test it. But in a group, it means you have to be very good about communication, because you can easily step on other people's changes if you aren't careful. And if someone is changing the mapping code while you're trying to change what you get out of the API, then it's going to get ugly.

And it did.

Very.

It was a pretty stressful time, but in the end, I got what I needed done. Just not very happy with the overall experience. Certainly something to be avoided in the future, if at all possible.

Doing timing comparisons for Salesforce

Wednesday, October 31st, 2012

Salesforce.com

This morning I need to do some simple timing comparisons for the Salesforce Team here at The Shop so they can push this back to the Salesforce.com guys and ask what's going on. I'm taking the overnight runs of Production and UAT hitting the production and staging sandboxes, respectively, and comparing that to running the UAT hardware against the new API sandbox, and simply comparing times for actual runs.

  Hardware           Sandbox      Time
  Production (EC2)   Production   20:03
  UAT (EC2)          Staging      20:03
  UAT (EC2)          API          29:07

At the same time, I'm gathering some DEBUG logging information for NewRelic, as their newrelic_rpm 3.5.0.1 gem, under jruby-1.7.0, is not sending back the method-call instrumentation data that it does under jruby-1.6.7.2. I know this because our NewRelic graphs went silent the instant we moved from 1.6.7.2 to 1.7.0. However, we do get the CPU usage, the memory usage, and even the deployment markers - so I know we're sending something; it's just not the right instrumentation data.

They asked me to send them a DEBUG log, which seems really reasonable, so I'm capturing one as I do the final UAT-to-API timing test.

UPDATE: they have sped up API significantly from yesterday. Whatever they did - it's working and that's OK with me.

Horrible Performance from Salesforce

Tuesday, October 30th, 2012

This morning I come in to see that we're horribly behind in processing the UAT divisions! The new Salesforce sandbox is a joke! I've got to trim back what we're doing in UAT and get the Salesforce Team on this because we can't let this go on… it's simply unusable.

UPDATE: it took 16 hours to finish this run! That's up from about 3.5 hrs last night - all due to the new Salesforce sandbox. This can't stay this way… I'm moving UAT back to the Staging sandbox as it had decent performance.

Moved to Another New Salesforce Sandbox

Monday, October 29th, 2012

This afternoon I had to move our development and UAT environments to a new Salesforce sandbox. This one is supposed to be the place we'll be doing our development and releasing code to the Salesforce Team's promotion process, so it made sense to get it right and just do it.

The process wasn't hard - I've got it all documented from the last time I had to do this, so it went pretty smoothly. Just needed to get it done.

Skitch is Dead – Long Live Skitch!

Thursday, October 25th, 2012

Skitch.jpg

Today I was using Skitch to add a simple screen grab to a web page that I was building, and when I tried to save it to the Skitch.com server, it told me that Skitch had transitioned to Evernote. In short - the server shutdown they had talked about was upon us.

Skitch was dead.

But I was ready. I had signed up with PinkHearted.com, and that gives me everything I need from Skitch's back-end. Even if that weren't enough, I have a CloudApp account, and can use that as well.

So Skitch is alive and well. Just not with the guys that brought it to life.

It's sad to see something like this, but the market works in sometimes sad and gloomy ways. I'm just hoping that someone picks up the mantle of the original Skitch team, and makes a worthy successor. With a reliable back-end.

Struggling with Salesforce Sandbox Refresh

Wednesday, October 17th, 2012

The Magic School Bus

I knew as soon as it happened, it was going to eat up hours of my time. I first noticed it as a failure of one of the tests I was running this morning. Nothing back from Salesforce, and when I looked at the log files, everything was failing. I had heard that they would be refreshing the staging sandbox from production, but I had no idea it'd be this morning.

So first things first… Wait.

Let them finish the refresh and then try to get the Remote Access going again - given that I've never really gotten it going in the first place. You have to create an application in Salesforce, and then make sure that's all OK to get the first two pieces of the authentication tokens. Then you have to have the new user send its security token back to you, and then you just need to battle with the uncompiled or mis-permissioned classes.
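Those pieces - the app's consumer key and secret, plus the user's password with the security token appended - eventually feed the OAuth token request. A sketch of what that request looks like, with every value a placeholder:

```ruby
require 'net/http'
require 'uri'

# Build (but don't send) the username-password OAuth token request that a
# Salesforce Remote Access application enables. Every value passed in
# here is hypothetical.
def sf_token_request(host, client_id, client_secret, username, password, security_token)
  uri = URI("https://#{host}/services/oauth2/token")
  req = Net::HTTP::Post.new(uri.request_uri)
  req.set_form_data(
    'grant_type'    => 'password',
    'client_id'     => client_id,                 # consumer key from the app
    'client_secret' => client_secret,             # consumer secret from the app
    'username'      => username,
    'password'      => password + security_token  # password with token appended
  )
  req
end

req = sf_token_request('test.salesforce.com', 'KEY', 'SECRET',
                       'user@example.com', 'pass', 'TOKEN123')
# Net::HTTP.start('test.salesforce.com', 443, use_ssl: true) { |h| h.request(req) }
```

A sandbox refresh invalidates some of these pieces (and resets security tokens), which is why every refresh means walking the whole dance again.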

It's enough to make you want to drink. Heavily.

I had some help, and that's nice, but it was still an amazing waste of time to get things back to where they were. But hey… I'm sure someone had a good reason for this. I just can't possibly imagine what it is.

Signing up for Amazon S3 – Skitch Storage

Wednesday, October 10th, 2012

I saw a tweet from a friend the other day:

Mike's Tweet

and it got me thinking about a better back-end for Skitch 1.x. I'd planned to use CloudApp - just drag things into it and get URLs - and that would work, but it's not as nice as having a great back-end that works with Skitch as it's meant to. That means being able to pull up the history, etc. That would be nice!

PinkHearted uses Amazon's S3 datastore to save your images, so it doesn't really cost them anything to store the images - it costs me. That's nicer than CloudApp, which has to mark up its storage costs to make a profit; with PinkHearted I can pay Amazon directly. So I decided to give it a try.

Signing up with Amazon is simple, but there are a few steps to it, so it's not as immediate as you might think. No problem, I just needed to slog through it. Then there was the issue of creating the first bucket. In short - an S3 bucket is a top-level directory in your S3 storage account. I simply used Transmit to create one, and it was very simple.

Now I had a simple bucket on S3:

My S3

At this point, I could sign up with PinkHearted and then configure my Skitch to use the WebDAV service on PinkHearted as the source. I've got it all ready to go, and if they shut down the Skitch.com service, that's where I'm going to go.
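Under the hood, a WebDAV upload is just an HTTP PUT with authentication. A rough sketch of what a client like Skitch does on save - the URL, credentials, and filename are all placeholders; PinkHearted's actual endpoint will differ:

```ruby
require 'net/http'
require 'uri'

# Build a WebDAV-style upload request for a screen grab. All values are
# placeholders; this illustrates the protocol, not PinkHearted's API.
def webdav_put(url, user, pass, content_type, data)
  uri = URI(url)
  req = Net::HTTP::Put.new(uri.request_uri)
  req.basic_auth(user, pass)          # WebDAV servers commonly use Basic auth
  req['Content-Type'] = content_type
  req.body = data                     # the raw image bytes
  req
end

req = webdav_put('https://dav.example.com/skitch/grab-001.png',
                 'me', 'secret', 'image/png', 'PNG-bytes-go-here')
# Net::HTTP.start('dav.example.com', 443, use_ssl: true) { |h| h.request(req) }
```

Since the files land as plain objects behind the WebDAV endpoint, they end up in my own S3 bucket - which is the whole appeal.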

I like that it's all me now, and on S3 it's redundantly stored and should be fine.