Archive for the ‘Coding’ Category

Gravatar Mix-Up and Solution

Thursday, June 28th, 2012

I have been wishing that GitHub for Mac would properly show my Gravatar since the first release so long ago. The odd part is that on GitHub itself, my Gravatar shows up just fine:

GitHub Gravatar

but on GitHub for Mac, it's all the default "I have no idea who you are…" image:

GitHub for Mac

The problem stems from the fact that Gravatar expects the email address to be all lower-case before it's hashed and sent to Gravatar for the image lookup. This is compounded by the fact that Gravatar won't let you register two emails for the same account that differ only by case. This means that drbob@themanfromspud.com and drbob@TheManFromSPUD.com are totally different Gravatars, and the latter will never be found.

This is highlighted by the PHP code snippet that I found for successfully getting a Gravatar:

  $out = "http://www.gravatar.com/avatar.php?gravatar_id=".md5(strlower($email_addr));

I'm willing to bet that GitHub for Mac is not lower-casing the email address before taking the MD5 hash and sending it to Gravatar. To test this, I changed the user.email parameter on my MacBook Pro and did a few checkins:

GitHub

The commits I did today use the lower-case version of the user.email setting, but the older ones don't. This is a clear indication that I'm right. Now I suppose it's possible to go back and completely rewrite all the repos I'm using, but I think that's a bit excessive. I'd like Gravatar to allow for mixed-case, but they just take the gravatar_id, and by that point it's already an MD5 hash.
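
For reference, re-running this experiment is just a matter of setting the committer email and making a commit - this is the lower-cased form of my address from above:

  $ git config --global user.email "drbob@themanfromspud.com"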

Nope, this is a GitHub for Mac issue, and they need to lower-case the email address before they send it to Gravatar and then all will be well. I've sent in the request to GitHub, and we'll see what they have to say. I'm hoping they fix this, I do hate to see the little grey shadow for all my checkins.

UPDATE: Great News! I heard back from the GitHub for Mac guys:

  From: Josh Abernathy (GitHub Staff) 
  Subject: GitHub for Mac - Gravatar Mixup

  Hi Bob,

  Ahh, thanks for tracking that down for me. I've created an issue for it.

  Thanks!

so it looks like it's going to get fixed in an upcoming release of GitHub for Mac. That's such wonderful news as it means that all my avatars will be me and not the gray shadow. Super nice!

Setting Git Diff Tab Size

Thursday, June 28th, 2012

Git Logo

This morning I was putting together a GitHub pull request for the XMLRPC library I'm using in SyncKit, and I realized that I'm really tired of the tab size on the git diff being 8 when all my code uses a tab size of 4. So I finally started googling for an answer, and it turns out that the presentation of git diff is really nothing more than the Unix pager less. Interesting. Makes sense.

So you can make the tab size 4 by simply running:

  $ git config --global core.pager 'less -x4'

and if you omit the --global argument, it'll set it just for the repo you are in. This will be great when I have a repo with a tab size of 2, for instance. I can set the global to 4, and then override it as needed in the repo. Very nice.
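
For example, in a repo that uses two-space tabs, the override is the same command without --global (the path here is hypothetical):

  $ cd ~/Developer/some-two-space-repo    # hypothetical repo path
  $ git config core.pager 'less -x2'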

Once again, Git rules.

AppleScript to Resize Safari Windows

Wednesday, June 27th, 2012

Safari

I have taken to using Safari for my GitHub views, and the problem there is that I'm typically looking at landscape pages in Safari, not wide screen pages. But for GitHub, it's much nicer to look at things wide screen so that you get the entire width. What I'm left with is two different preferred sizes, and since Safari remembers the last size you had, it made sense to look into a way to make setting these sizes a little easier. Enter AppleScript.

It's actually pretty simple, once you know what you can ask for and set. The script to put the front-most Safari window into landscape mode is:

  tell application "Safari"
    activate
    set myPos to bounds of front window
    set x to item 1 of myPos
    set y to item 2 of myPos
    set bounds of front window to {x, y, x + 601, y + 629}
  end tell

placed into your ~/Library/Scripts/Applications/Safari/ directory, so it shows up in the script menu whenever Safari is the front-most application.

To set the front-most Safari window to wide screen mode, I simply used a different geometry:

  tell application "Safari"
    activate
    set myPos to bounds of front window
    set x to item 1 of myPos
    set y to item 2 of myPos
    set bounds of front window to {x, y, x + 678, y + 468}
  end tell

With these in the aforementioned directory, I can resize the front Safari window very nicely. Sweet.

I'm not sure if I'm going to be doing a lot of AppleScript, but it's nice to have when you need to throw simple things like this together.

Google Chrome dev 21.0.1180.11 is Out

Tuesday, June 26th, 2012

Google Chrome

This morning I noticed that yesterday, while I was at the interviews, the Google Chrome team released 21.0.1180.11 to the dev channel. The changes are sounding pretty routine these days: the new V8 javascript engine 3.11.10.12, more Retina (HiDPI) enhancements for the new MacBook Pros, and several crash fixes. Not bad for an update. I'm pleased that they are keeping the speed up after those few sluggish releases, so we'll see what they have planned for the 22.x series.

Getting ZeroMQ 3.2.0 Compiling on Mac OS X 10.7 Lion

Wednesday, June 20th, 2012

ZeroMQ

This afternoon I decided that maybe it was time to see if I could get ZeroMQ built and running on my MacBook Pro running OS X 10.7 as well as on my Ubuntu 12.04 laptop. I'm thinking it might be nice to write a few little test apps between the machines using the latest ZeroMQ APIs, to make sure that I have everything I need - should it come to that and I need to implement a little ZeroMQ action in DKit, or some other library.

The first step is downloading it from the ZeroMQ site. I picked the POSIX tarball, as it's the one that ships with a generated ./configure script, and I needed that in order to get things kicked off.

Next, we try to build it on OS X 10.7 and Ubuntu 12.04. There are a few changes that have to be made to the OpenPGM code in order for it to compile on OS X - they basically come down to adding the needed includes and not allowing duplicate definitions of structures the system headers already provide.
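
For context, here's how I kick things off before touching anything - a minimal sketch, assuming the tarball name matches the 3.2.0 POSIX download, with ZeroMQ's --with-pgm configure flag pulling in the OpenPGM code we're about to patch:

  $ tar xzf zeromq-3.2.0.tar.gz
  $ cd zeromq-3.2.0
  $ ./configure --with-pgm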

In ./foreign/openpgm/build-staging/openpgm/pgm/include/pgm/in.h :

Replace:

  /* sections 5 and 8.2 of RFC 3768: Multicast group request */
  struct group_req
  {
    uint32_t                 gr_interface;  /* interface index */
    struct sockaddr_storage  gr_group;      /* group address */
  };

  struct group_source_req
  {
    uint32_t                 gsr_interface; /* interface index */
    struct sockaddr_storage  gsr_group;     /* group address */
    struct sockaddr_storage  gsr_source;    /* group source */
  };

with:

  #ifndef __APPLE__
  /* sections 5 and 8.2 of RFC 3768: Multicast group request */
  struct group_req
  {
    uint32_t                 gr_interface;  /* interface index */
    struct sockaddr_storage  gr_group;      /* group address */
  };

  struct group_source_req
  {
    uint32_t                 gsr_interface; /* interface index */
    struct sockaddr_storage  gsr_group;     /* group address */
    struct sockaddr_storage  gsr_source;    /* group source */
  };
  #endif // __APPLE__

In ./foreign/openpgm/build-staging/openpgm/pgm/sockaddr.c :

Replace:

  #include <errno.h>
  #ifndef _WIN32
  # include <sys/socket.h>
  # include <netdb.h>
  #endif

with:

  #include <errno.h>
  /* Mac OS X 10.7 differences */
  #ifdef __APPLE__
  # define __APPLE_USE_RFC_3542
  # include <netinet/in.h>
  #endif
  #ifndef _WIN32
  # include <sys/socket.h>
  # include <netdb.h>
  #endif

In ./foreign/openpgm/build-staging/openpgm/pgm/recv.c :

Replace:

  #include <errno.h>
  #ifndef _WIN32

with:

  #include <errno.h>
  /* Mac OS X 10.7 differences */
  #ifdef __APPLE__
  # define __APPLE_USE_RFC_3542
  # include <netinet/in.h>
  #endif
  #ifndef _WIN32

The final change is to the ZeroMQ source itself:

In ./src/pgm_socket.cpp :

Remove lines 88-92, making this:

  pgm_error_t *pgm_error = NULL;
  struct pgm_addrinfo_t hints, *res = NULL;
  sa_family_t sa_family;

  memset (&hints, 0, sizeof (hints));
  hints.ai_family = AF_UNSPEC;
  if (!pgm_getaddrinfo (network, NULL, &res, &pgm_error)) {

look like this:

  pgm_error_t *pgm_error = NULL;
  if (!pgm_getaddrinfo (network, NULL, addr, &pgm_error)) {

At this point, we can get ZeroMQ to compile on Mac OS X 10.7 as well as Ubuntu 12.04. But there's a slight wrinkle… while I'm fine with the linux library being a 64-bit only architecture:

  drbob@mao:~/Developer/zeromq-3.2.0$ file src/.libs/libzmq.so.3.0.0
  src/.libs/libzmq.so.3.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (GNU/
  Linux), dynamically linked, BuildID[sha1]=0x71a160f17833128c864811b25942cdacdb54
  f6d0, not stripped

I'd really like the Mac OS X 10.7 dynamic library to be Universal, with both 32-bit and 64-bit architectures in it. Currently, it's 64-bit only:

  peabody{drbob}82: file src/.libs/libzmq.3.dylib
  src/.libs/libzmq.3.dylib: Mach-O 64-bit dynamically linked shared library x86_64

Hmmm… it's really amazing how some folks choose to write Makefiles in a way that doesn't allow you to use multiple -arch arguments. There really aren't all that many ways to mess this up, but it seems the OpenPGM and ZeroMQ guys have managed it quite thoroughly. I can't simply remove the offending compiler flags and add in the necessary -arch i386 -arch x86_64. So I have to build it twice: once for i386 and again for x86_64, and then use lipo to stitch the two together.

In the Makefiles, I added the -arch i386 flag to the CFLAGS and CPPFLAGS variables after successfully building the 64-bit version of the libraries - first copying the 64-bit dylibs out of .libs/ so the clean wouldn't wipe them. I then did a simple:

  $ make clean
  $ make

and when I looked at the resulting libraries, they were 32-bit. I then created the Universal binaries with the lipo commands:

  $ lipo -create libzmq.3.dylib .libs/libzmq.3.dylib -output .libs/libzmq.3.dylib
  $ lipo -create libpgm-5.1.0.dylib .libs/libpgm-5.1.0.dylib -output \
    .libs/libpgm-5.1.0.dylib
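
To double-check the stitching, lipo can list the architectures in the fat files - both should now report i386 and x86_64:

  $ lipo -info .libs/libzmq.3.dylib
  $ lipo -info .libs/libpgm-5.1.0.dylib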

Now I can copy these away and install them in the right location for use. Typically, I'll do a make install to place everything into /usr/local/, and then copy these Universal binaries over the installed single-architecture ones and be ready to roll.
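
Spelled out, that last step is something like the following - a sketch only, assuming the default /usr/local prefix and that both dylibs land in /usr/local/lib:

  $ sudo make install
  $ sudo cp .libs/libzmq.3.dylib /usr/local/lib/
  $ sudo cp .libs/libpgm-5.1.0.dylib /usr/local/lib/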

UPDATE: I can get ZeroMQ to work with the simple transports, but the OpenPGM encapsulated transport fails miserably. I'm looking at the code, and I'm becoming convinced that no one is really using or testing it. It's a mess, and there's no way some of these method calls are going to work. It's all in the PGM portion, so if you stay away from that, it's fine. But that's not good enough for me, so I'm giving up for now. I have asked a few things in IRC, but there aren't any responses (yet), and I don't expect to get any about the OpenPGM part.

I think it's dead, and I need to just move on. Sad to see, but it happens.

[6/21] UPDATE: After a lot of work trying to get the OpenPGM 5.1.118 code to work, I see that the developer knows about the problems and is just not ready to support Mac OS X 10.7 at this time. I can respect that, but the things he had to have done to make his code this non-standard must be pretty wild. So there's nothing I can do at this point. OpenPGM is the deal-breaker - and it's the entire reason I wanted ZeroMQ.

Appreciating Ubuntu 12.04

Wednesday, June 20th, 2012

Ubuntu Tux

This morning I was once again updating my Ubuntu 12.04 laptop and realized that were it not for the crummy hardware platform - the trackpad is horrible, and the display is very cheap - this would be a really nice laptop/development box. It's got all the tools I could ask for, it's got some wonderful fonts for the Terminal and such, it's got Google Chrome for all the web stuff… it's really pretty slick.

I gotta tip my hat to the Ubuntu folks. This is a much better distro than RedHat/Fedora was. I'm guessing Fedora is getting better, and it's probably as nice as you'd need, but the ability to do updates easily - and have them on by default - is really nice. It's great to have it all right there.

Certainly, I'm not giving up my MacBook Pro anytime soon, but I've looked at Best Buy, and you can get nice Wintel hardware for $700 to run this on… all of a sudden, it's something I might actually carry from time to time. It would certainly be nicer to work with than the crummy trackpad and display I've got now.

It's a great compliment to the MacBook Pro. Nice to have.

Google Chrome dev 21.0.1180.0 is Out

Wednesday, June 20th, 2012

Google Chrome

This morning I noticed that Google Chrome dev 21.0.1180.0 was out with several changes for the Retina MacBook Pro as well as the latest V8 javascript engine. These are really nice updates, but what I noticed right off concerned the rendering mistake in the previous version: it left a slight tell-tale horizontal line in the background every "page" or so. It wasn't horrible, and wasn't even directly repeatable all the time, but it was always somewhere on the page, and it was enough that it made me wonder if my machine was bad.

So it seems to be gone, and that's great news - and so are all the other fixes. These HiDPI changes are nice for a lot of people getting the new MacBook Pros, and one day I'm sure I'll have one too… but it's not where I'd hoped Chrome would be adding features right now. But that's OK… it'll all work out, I'm sure.

UPDATE: Spoke too soon:

Chrome Render Bug II

Interestingly enough, these disappear if I scroll this section out of view, but they return at another location on the page if I just keep scrolling. Nasty bug, but it's in Chrome, and I'm not going to worry too much about when they fix it. They'll get to it - just like they did with the rendering of the Finance page on zoom.

[6/22] UPDATE: This morning I see 21.0.1180.4 is out, and this time they say they fixed several "alignment issues" - I'm hoping they mean these lines. They also put in the V8 javascript engine 3.11.10.10, which is nice. We'll see if the lines are finally gone.

Installing JDK 6 on Ubuntu 12.04

Monday, June 18th, 2012

Java Logo

This afternoon I wanted to get a few things going on the new Rackspace Cloud Server Joel had reconfigured to run Ubuntu 12.04. Specifically, I wanted to get the latest JDK 1.6.0 installed on the box as we need that for Mingle, which is the point of all this reconfiguration and such.

As it turns out, it's not that bad - you just have to know where to go, what to get, and what to do with it. Isn't that the same with all new software for linux? Yes.

OK, first, get the latest JDK 6 download from Oracle's download site. Then it's a simple matter of unpacking the self-extracting binary:

  $ chmod +x jdk-6u32-linux-x64.bin
  $ ./jdk-6u32-linux-x64.bin

Then you need to move that to someplace useful. For me, that's /usr/local and then make a few symlinks to make it easy to upgrade:

  $ sudo mv jdk1.6.0_32 /usr/local/
  $ cd /usr/local
  $ sudo ln -s jdk1.6.0_32 jdk1.6
  $ sudo ln -s jdk1.6.0_32 jdk
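
The payoff of the extra symlinks comes at upgrade time - a future drop (the 6u33 version here is made up) just gets moved into place and the link repointed, and anything pointing at the jdk1.6 link keeps working:

  $ sudo mv jdk1.6.0_33 /usr/local/        # hypothetical future version
  $ cd /usr/local
  $ sudo ln -sfn jdk1.6.0_33 jdk1.6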

Then we can put it into the path using the alternatives system on Ubuntu 12.04:

  $ sudo update-alternatives --install /usr/bin/javac javac /usr/local/jdk1.6/bin/javac 1
  $ sudo update-alternatives --install /usr/bin/java java /usr/local/jdk1.6/bin/java 1
  $ sudo update-alternatives --install /usr/bin/javaws javaws /usr/local/jdk1.6/bin/javaws 1

and then set the default JDK (if needed):

  $ sudo update-alternatives --config javac
  $ sudo update-alternatives --config java
  $ sudo update-alternatives --config javaws

Finally, we can verify that the JDK was installed properly:

  $ java -version
  java version "1.6.0_32"
  Java(TM) SE Runtime Environment (build 1.6.0_32-b05)
  Java HotSpot(TM) 64-Bit Server VM (build 20.7-b02, mixed mode)

Added Lots of Docs to SyncKit Project at GitHub

Monday, June 18th, 2012

GitHub Source Hosting

Today I moved my SyncKit project from Codesion (wanting more than $1000/yr) to GitHub (less than $100/yr), and with that move it made a lot of sense to add the standard GitHub README.md file so that the main page of the repo has some nice introductory documentation. While I was at it, I wrote a lot more documentation - how to set up the server-side box, how to verify the set-up, what the organization of the project is, and how it's all wired up to work. It was a lot of docs for a single day, but it's important to make sure we're ready for any kind of documentation review.

I'm glad to have it on the private GitHub side as well - there are just so many nice things about GitHub that I like to support it and use it when I can.

Built Templated Conflation Queue in DKit

Tuesday, June 12th, 2012

DKit Laboratory

This morning I finished up work on a conflation queue for DKit. The idea is pretty simple - take a queue and a trie, and, based on the key value of each element in the trie, allow updates to values still in the queue while keeping their relative placement, so that when an element is popped off, the latest value is pulled and the element is considered removed from the queue. It's a very common thing to have in processing market data when you don't need every tick, but you do need the very latest information whenever you look. Such a queue is great for risk systems, but not so great for execution systems - depending on the strategy.

Anyway, the key to all this was that I already had all the elements of the conflation queue in DKit - I just needed to bring them together. For instance, since it's a FIFO queue, it's based off the FIFO superclass, and its template signature looks like this:

  namespace dkit {
  /**
   * This is the main class definition. The parameters are as follows:
   *   T = the type of data to store in the conflation queue
   *   N = power of 2 for the size of the queue (2^N)
   *   Q = the type of access the queue has to have (SP/MP & SC/MC)
   *   KS = the size of the key for the value 'T'
   *   PN = the power of two of pooled keys to have in reserve (default: 2^17)
   */
  template <class T, uint8_t N, queue_type Q, trie_key_size KS,
            uint8_t PN = 17> class cqueue :
    public FIFO<T>
  {
  };
  }  // end of namespace dkit

The idea is simple: you have to know the following (there's a usage sketch after the list):

  • What to store - this is the type of data you're going to place in the queue, and if it's a pointer, then the conflation queue is going to automatically destroy the old copies when new values come in and overwrite the old
  • How many to store - this is a maximum number, as we're using the efficient circular queues. In practice, this isn't a real limitation, as memory is pretty cheap and a queue meant to hold 2^17 elements is not all that expensive
  • The type of access - this is so you can control how many producers and how many consumers there are going to be. This is important, as you can get higher performance by limiting the producers and consumers.
  • The key size of the trie - this is really what you're going to use to uniquely identify the values you are putting in the queue. If you know how you'll identify them, then you can choose the correct sized key to make that as efficient as possible.
  • (Optionally) The size of the pool of keys - this implementation allows for a set of pooled keys for the queue. This is nice in that you don't have to worry about getting keys into or out of the queue, but in order to be as efficient as possible, it makes sense to have a pool of them around. This optional parameter allows you to specify how many to hold onto at any one time.
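
And here's that usage sketch. To be clear: the Tick type, the enumerator spellings (sp_sc for the access type, four_byte for the key size), and the exact push()/pop() signatures are my illustrative guesses, not lifted from the DKit headers - but the shape of the usage is the point:

  #include <stdint.h>
  #include "cqueue.h"        // assumed DKit header name

  struct Tick {
    uint32_t   instrument_id;
    double     price;
  };

  // the user-supplied key extractor - the queue conflates on this value
  uint32_t key_value( Tick *aTick )
  {
    return aTick->instrument_id;
  }

  int main()
  {
    // 2^10 slots, single-producer/single-consumer, 4-byte keys
    // (the enumerator spellings are guesses - see above)
    dkit::cqueue<Tick *, 10, dkit::sp_sc, dkit::four_byte>  feed;

    feed.push(new Tick());   // a newer Tick with the same key replaces
                             // this one, keeping its place in line
    Tick   *latest = NULL;
    while (feed.pop(latest)) {
      delete latest;         // always the freshest value for each key
    }
    return 0;
  }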

Things went together fairly well because I had all the components: the different queues, the pool, even how to use the different access types in the pool - so it was just a matter of putting things together and testing them out.

One thing I did find out was that when I call key_value() I'm not sure what exactly I'm going to be getting back. If we assume that we're using a key structure like this:

  struct key_t {
    uint8_t    bytes[KS];
  };

(the reason being that we want to be able to create and destroy these keys without needing the array form of the delete operator), then we can't simply do this:

  key_t      *key = _pool.next();
  if (key == NULL) {
    throw std::runtime_error("Unable to pull a key from the pool!");
  }
  // copy in the value for this element
  *(key->bytes) = key_value(anElem);

because the compiler is going to think that the LHS is just a uint8_t and not a series of bytes, capable of holding whatever is returned from key_value(). We also can't do this:

  // copy in the value for this element
  memcpy(key->bytes, key_value(anElem), eKeyBytes);

because the return value of key_value() is a value and not a pointer. So we have to be a little more involved than this. What I decided on was to use the fact that the compiler will choose the right overload for the method, so I added setters to the key, one per size:

  struct key_t {
    uint8_t    bytes[KS];
    // these are the different setters by size
    void set( uint16_t aValue ) { memcpy(bytes, &aValue, 2); }
    void set( uint32_t aValue ) { memcpy(bytes, &aValue, 4); }
    void set( uint64_t aValue ) { memcpy(bytes, &aValue, 8); }
    void set( uint8_t aValue[] ) { memcpy(bytes, aValue, eKeyBytes); }
  };

and with this, I can say:

  key_t      *key = _pool.next();
  if (key == NULL) {
    throw std::runtime_error("Unable to pull a key from the pool!");
  }
  // copy in the value for this element
  key->set(key_value(anElem));

and the compiler will pick the right set() method based on the key_value() function the user provides. This is not as nice as simply copying bytes, as there's a call involved, but it's not horrible, either. I need it to keep the code simple and make it work.

Other than that, things went together very well. The tests are nice, and it's all ready to be hammered to death in a real setting. I'm so happy to have gotten to this point in DKit. These things took me a long time to get right before, and those versions weren't nearly as nice as what I've got now - and these will be out there for me to use no matter what. That's a very nice feeling.