Archive for the ‘Coding’ Category

iPhoto, iMovie, Thunderbolt – Lots of Updates from Apple

Tuesday, June 12th, 2012

Software Update

This morning I saw that there were updates to iPhoto, iMovie, Thunderbolt drivers, and even the AirPort stuff… so of course I had to update all that. This is nice, but I'm not really using iPhoto all that much, and I haven't had the time or material to really get into iMovie, so the updates are really just so-so to me. But what wasn't so-so was the reboot.

For the first time ever, when I logged back in, BBEdit was back in the right place on Spaces. Xcode was too. The only exception was MacVim. Everything else was back in place and just where I left it. This was awesome!

The updates in OS X Lion that started this "clean reboot" were really nice, but a few apps never really worked with it, and BBEdit was one of them. It'd restart, but it wouldn't put the windows back where they "came from". It wasn't a horrible issue, but there were just a few apps that didn't do it right. Well… no more.

Now BBEdit and Xcode are back where they were, and the only thing I have to do is get back into the directories I was in with my Terminal.app sessions. Too bad they don't save that, but who knows? Maybe they will in 10.8?

Google Chrome dev 21.0.1171.0 is Out – Fixes Finance Rendering

Tuesday, June 12th, 2012

Google Chrome

This morning I saw that there was an update to Google Chrome dev to 21.0.1171.0, and I needed to grab it and check whether they had gotten my bug report on the rendering of the Google Finance page. As I pulled it up, I was very happy to see that they had, indeed, fixed the problem. It seemed to be related to the zoom-out level, but I can't be sure. In any case, it's nice to see the rendering back to the way it was.

The release notes don't say a lot, but that's OK. The proof is in the rendering.

Adding a Flexible Sized Key Templated Trie to DKit

Saturday, June 9th, 2012

DKit Laboratory

Today I spent a good chunk of time writing a 64-bit key trie as the next basic component in re-writing the exchange feeds into a far better component library than the one I had previously written. The purpose of the trie is to have a lockless storage mechanism with very fast insertion and access methods. The last version I'd written was based on a specific element being stored - a pointer to a Message. In this version I wanted to be able to store any template type - including pointers - and handle the disposal as needed.

I've made adjustments like this for the dkit::pool where I handled plain-old-datatypes as well as pointers, and disposed of the pointer values properly. I needed to be able to do the same thing for the trie.

The second improvement this time around is to stop trying to make the trie do everything associated with the processing of its contents. In the past, I'd had the trie scan the Messages for symbol limits, values, etc. This was very inefficient, as it was all in the trie, and therefore very specific. I needed to move away from this and use functors that could be passed into the trie and then operated on each of the valid elements in it. This is a far better scheme, as it allows the user to perform any processing on the Nodes in the trie - far more flexible than trying to implement the methods in the trie itself.

The solution to this was to define a class within the templated trie so that I could create functors like this:

  class counter : public dkit::trie<blob *, dkit::uint64_key>::functor
  {
    public:
      counter() : _cnt(0) { }
      virtual ~counter() { }
      virtual bool process(
          volatile dkit::trie<blob *, dkit::uint64_key>::Node & aNode )
      {
        ++_cnt;
        return true;
      }
      uint64_t getCount() { return _cnt; }
    private:
      uint64_t     _cnt;
  };

They are easy to create, and you can apply them to all the elements in the trie with a simple call:

    counter    worker;
    m.apply(worker);

The final improvement was to switch from loop-based indexing into the trie to recursion-based accessing. There are a few reasons for this - but the biggest are flexibility and code simplicity. When you have a fixed key length and you use looping to traverse the trie, you have a fixed series of nested for loops. This seems all well and good, but it makes for a very rigid structure, and the code isn't all that easy to read.

By going with a better class structure for building the trie, we're able to take advantage of recursion, and then the only thing controlling the "depth" of the trie - and the corresponding size of the key - is the number of branch levels used in the construction of the tree in the trie. If we put that parameter in the creation method, then there's a single point where we stop creating branches and instead make the leaf nodes. From that point on, the same code that traverses a 64-bit key trie works for a 128-bit trie.

At least insofar as the movement in the trie.

Once I got it all working, the next thing was to look at the recursion vs. looping design. I wanted to make sure that I had the most performant design. What I found was that the recursion was about 25% faster than the looping structure. I'm guessing it's due to tail-recursion optimization by the compiler, but I'm not positive. I repeated the tests, and the difference is real - that's good enough for me.

Once it was all done, I wondered how hard it might be to make the key size another parameter in the template. Well… since we're using recursion, the size of the key space is only used in one place, and so the solution was pretty simple. Start by defining the different sizes we'll accept:

  namespace dkit {
  enum trie_key_size {
    uint16_key = 2,
    uint32_key = 4,
    uint64_key = 8,
    uint128_key = 16
  };
  }      // end of namespace dkit

and then we convert this to the test we need in the branch creation:

  enum {
    eLastBranch = (N - 2)
  };

In the code, we then have:

  virtual volatile Node *getOrCreateNodeForKey( const uint8_t aKey[],
                                                uint16_t aStep )
  {
    volatile Node   *n = NULL;
 
    // get the index we're working on (re-used a few times)
    uint8_t     idx = aKey[aStep];
    Component   *curr = __sync_or_and_fetch(&kids[idx], 0x0);
    if (curr == NULL) {
      // create a new Branch or Leaf for this part of the trie
      bool    createBranch = true;
      if (aStep < eLastBranch) {
        curr = new Branch();
      } else {
        curr = new Leaf();
        createBranch = false;
      }
      // throw a runtime exception if we couldn't make it
      if (curr == NULL) {
        if (createBranch) {
          throw std::runtime_error("[Branch::getOrCreateNodeForKey] "
                   "Unable to create new Branch for the trie!");
        } else {
          throw std::runtime_error("[Branch::getOrCreateNodeForKey] "
                   "Unable to create new Leaf for the trie!");
        }
      }
      // see if we can put this new one in the right place
      if (!__sync_bool_compare_and_swap(&kids[idx], NULL, curr)) {
        // someone beat us to it! Delete what we just made...
        delete curr;
        // ...and get what is there now
        curr = __sync_or_and_fetch(&kids[idx], 0x0);
      }
    }
 
    // now pass down to that next branch the request to fill
    if (curr != NULL) {
      n = curr->getOrCreateNodeForKey(aKey, (aStep + 1));
    }
 
    // return what we have dug out of the tree
    return n;
  }

This little test allows us to place the key size as a parameter to the template, and that makes it very easy to make different sized keys. I also added convenience methods to deal with the different sized keys from the contained values. It's not strictly necessary, but it'll make using the template class a lot nicer.

Created SyncKitTouch for Better Code Sharing to iOS

Monday, June 4th, 2012

xcode.jpg

This morning I finally put the finishing touches on the first good cut of SyncKitTouch - the iOS library based on the AppKit Library, SyncKit that I had built for a project Joel and I are working on. It's a simple cloud synchronization scheme where we know the data that's getting synced, and the database in the cloud is a peer to the clients on iOS and OS X. The interesting part of this is that I didn't initially expect to have to build a second version of the library - I thought that all the NSDataSources and ComboBox DataSources were going to be used on both platforms. Oh no, no, no… That would make too much sense.

It seems that the data sources, while pretty much functionally the same, aren't actually the same, and since the headers for one platform don't exist on the other, the best way I found to make the code as reusable as possible is to have the complete library in one project/target, and then put the AppKit-specific DataSources into categories on the appropriate classes. Then in the UIKit-based 'touch' library, I simply included all the classes - and re-implemented the categories, this time using UIKit DataSources and Controllers.

It wasn't too bad, as most of the code was in the original library, and while it's recompiled, it's not duplicated in the project - that means I change it in one place, and that's it. Much better than making a complete duplicate of the library - or making three: one for the base functionality, one for the AppKit extensions, and another for the UIKit extensions. That's too much - better to have two: one that's "Touch", and one that's not.

So I got all the code in and compiling - but I haven't really created a SyncKitTickle app to try it out. I'm just not sure how this will look and function. I'm hoping Joel can give me a few ideas there. If he does, the next thing will be to make SyncKitTickle and start trying this out on the simulator, etc. Just to make sure that what I'm building is really working as it should.

Still… it's a nice milestone to get done and send back to Joel.

Bug in Google Chrome dev 21.0.1155.2

Thursday, May 31st, 2012

I was a little shocked to see that Google Chrome dev has a serious rendering bug in it. I think it's most obvious (to me) on the Google Finance page, but I'll bet it shows up in a lot of places. The blocking for the graphs is way off for the page - huge blank spaces in the layout that should have been filled with content:

Re: skitched-20120531-164934

I'm sure they have people working on it now, as I can't have been the first person to notice this, but it's a little surprising still… I find it hard to believe this got through Q/A. Or even simple checks of common web sites - like their own!

Installing Ubuntu 12.04 on Spare Laptop

Thursday, May 31st, 2012

iMac-G5-20.jpg

I have been thinking about decommissioning my wonderful G5 iMac, as I just don't use it any more, and pulling a spare Gateway laptop out of the closet and installing the latest Ubuntu release for a platform to test DKit on in a more generic C++/linux environment. My iMac has been a blast. It's been with me for a long time, and I've really done a lot of great things with it. But the truth is that with Mac OS X 10.7 or even 10.6 running on it, it's long in the tooth, and there's no real need to keep it around. It's not hitting the dumpster, but I've got very limited desk space these days, and it really makes more sense to have the Ubuntu machine on the desk than the iMac.

So today's the change.

Last night I downloaded the Ubuntu 12.04 32-bit and AMD64 distributions and burned them onto DVDs. I think CDs would have worked, but I didn't have any, and a spindle of DVDs is something we almost never go through, so it was easy to burn off two copies. I'm glad I got both, as I remembered that the laptop in question had an AMD64 CPU in it, so I'll be able to make both the 64-bit and 32-bit libraries for DKit. Also, I'm sure I'll be able to fire up WindowMaker on Ubuntu 12.04, as I had it working on Ubuntu 10 at my last job. So all in all, it should be just what I wanted to test out the builds and compiles of DKit.

Ubuntu Tux

Booting directly into Ubuntu 12.04 is nice, but it would have been nicer to be given the boot-or-install option a little earlier in the process. Neither here nor there - it's up, and this is not fast hardware, so I'm willing to cut Ubuntu a lot of slack here. It's nice that it has the run/install option, and it's so easy to get going.

I've been doing a lot of linux installs in the last decade, and they've all come to the point where a nice GUI is standard. Ubuntu's installer is no different. I do think the use of the Twitter feed on the install screen is a clever touch - it's an easy way to see what's happening, and what people are saying about Ubuntu, as you're waiting for the files to copy and download.

Interesting Installation Challenges

The first thing I ran into is that sshd is not installed by default on Ubuntu 12.04 - so I had to google how to install it:

  $ sudo apt-get install openssh-server

Once I had sshd installed on mao, I was able to log in from my MacBook Pro and move over my ~/.ssh directory so I could have my keys. After that, I needed to make sure that I had Vim:

  $ sudo apt-get install vim vim-gtk

then I moved over my .vimrc, and ~/.vim directory. In order to use the gists from Vim, I needed to have curl:

  $ sudo apt-get install curl

and then to have nice bitmapped fonts, I needed to remove the "don't" condition on them and regenerate the font table:

  $ sudo rm /etc/fonts/conf.d/70-no-bitmaps.conf
  $ sudo dpkg-reconfigure fontconfig

I certainly needed to have the compilers and source control tools:

  $ sudo apt-get install g++
  $ sudo apt-get install libboost-all-dev
  $ sudo apt-get install git

And then finally, I wanted to have the 'dev' channel of Google Chrome:

  $ sudo dpkg -i Downloads/google-chrome-unstable_current_amd64.deb
  $ sudo apt-get install libxss1

One thing I noticed is that the motd was a little verbose, and I wanted to trim it up. Turns out in Ubuntu 12.04, it's a symlink to /var/run/motd, so I needed to do a little to set things up right. First, copy the motd to motd.static, then relink the /etc/motd to the static version, and finally edit the static version for content:

  $ sudo cp /etc/motd /etc/motd.static
  $ sudo rm /etc/motd
  $ sudo ln -s /etc/motd.static /etc/motd
  $ sudo vi /etc/motd.static

With this, I get the leaner and cleaner motd and things are good again. I just wanted to put all this here so I had something to work off of, if I decided to do this all again.

After I got all this down and installed, I was able to go to work on making sure DKit compiled and ran on Ubuntu 12.04. It wasn't all that hard, but there was one thing I was a little surprised about. In LLVM gcc 4.2.1 I was able to have the following struct and use it in the boost::unordered_set container:

  typedef struct {
    boost::thread   thread;
    auint32_t       use_count;
  } thread_info;

But gcc 4.6.3 on Ubuntu 12.04 didn't like this at all. It didn't like the default constructor or the default copy constructor. What I needed to do was to explicitly include them in order to make the compiler happy:

  typedef struct base_thread_info {
    mutable boost::thread   thread;
    auint32_t               use_count;
    /**
     * We should throw down the default constructor and copy constructor
     * as the boost container is going to need these, and we can't allow
     * the thread itself to be copied. That's just not right. So we mask
     * that and just let the count be copied. This isn't great, but we
     * have to have this capability, and this is the simplest way to
     * make it work.
     */
    base_thread_info() :
      thread(),
      use_count(0)
    { }
    base_thread_info( const base_thread_info &anOther ) :
      thread(),
      use_count(anOther.use_count)
    { }
    virtual ~base_thread_info()
    { }
  } thread_info;

The problem appeared to be that the copy constructor for boost::thread was private, which meant that we couldn't just "use it". In this case, we didn't even try, and that was OK. We aren't really trying to copy these anyway, but the container needed a copy constructor so that it could create elements and copy into them as needed.

Interesting, but it's all cleared up now. Good.

[6/6] UPDATE: I forgot about PostgreSQL, PHP and Apache on the box. In order to get these guys going, I needed to install the relevant packages:

  $ sudo apt-get install postgresql-9.1
  $ sudo -u postgres createuser --superuser $USER

This gets me PostgreSQL installed, and me as a super user on the database instance - as long as I log in from the box. It also starts up the instance with the default parameters that are good enough for what I need at the moment. If I need to mess with the security permissions later, that's OK. But I think it's pretty much default behavior for now.

Then I needed to get Apache, PHP, and include the integration pieces for these so that they all work together:

  $ sudo apt-get install apache2
  $ sudo apt-get install php5 libapache2-mod-php5 php5-pgsql
  $ sudo /etc/init.d/apache2 restart

In order to get rid of an annoying warning, I needed to make a config file with the fully qualified domain name of the server:

  $ cd /etc/apache2/conf.d
  $ sudo vi fqdn

and include the single line:

  ServerName mao.themanfromspud.com

and then restart apache again. At this point, a simple info.php file shows that I have everything up and running and it's all wired up so that it all works from within apache web pages and PHP. Nice. Now I'm ready to do the other XML-RPC work I've got going right now.

Added UDP Transmitter to DKit to Round Out I/O Package

Wednesday, May 30th, 2012

DKit Laboratory

This morning I wrote a simple, yet effective, UDP transmitter for DKit. The idea is simple - if we have a UDP receiver for datagrams, then it makes sense to have a transmitter of the same. Simply make it a subclass of the sink and we should be ready to go. We can have it share io_service threads like the receivers, but since the transmitters have a slightly different usage of the io_service threads, it's probably a good idea to leave them separate for now.

We can certainly make a simple io_service base class for both the receiver and transmitter and pull the common code into it if we need to, but the transmitters need to have a boost::asio::io_service::work defined for them - lest the io_service's run() method return too quickly due to having nothing to do. That makes them different enough from the receivers, and it's simple enough to pool the transmitters separately, as needed.

The one trick was the virtual template classes problem that I solved previously. I do not want to make the inclusion of the UDP transmitter require that the user know what type to use, so I made a very simple, and very clean subclass that then becomes the basis of the UDP transmitter:

  namespace detail {
  template <class T> class basic_udp_transmitter :
    public sink<T>
  {
    public:
      basic_udp_transmitter() { }
      virtual ~basic_udp_transmitter() { }
 
      /**
       * This is the main receiver method that we need to call out to
       * a concrete method for the type we're using. It's what we have
       * to do to really get a virtual template class working for us.
       */
      virtual bool recv( const T anItem )
      {
        return onMessage(anItem);
      }
 
      /**
       * This method is called when we get a new datagram, and because
       * we are expecting to instantiate this template class with the
       * type 'T' being a <datagram *>, this is the method we're expecting
       * to get hit. It's just that simple.
       */
      virtual bool onMessage( const datagram *dg )
      {
        /**
         * This method will be overridden by the specialization of
         * this template class.
         */
        return true;
      }
  };
  }      // end of namespace detail

and then the main class looks like this:

  class udp_transmitter :
    public detail::basic_udp_transmitter<datagram*>
  {
 
    // …
 
    /********************************************************
     *
     *                Processing Methods
     *
     ********************************************************/
    /**
     * This is the specialized template method that will be called from
     * the generalized recv() method, above, and will, in fact, be the
     * place where we do all the work of sending out the datagram to the
     * UDP multicast channel.
     */
    virtual bool onMessage( const datagram *aDatagram );
 
    // …
 
  };

At this point, it's easy to use the udp_transmitter because I've wired it all up to send the datagrams out via boost ASIO:

  udp_transmitter  xmit(multicast_channel("udp://239.255.1.1:30001"));
  rcvr.addToListeners(&xmit);

The threads are all handled in the same manner as the UDP receiver, so we know they are clean with the reference counting and the clearing of the sockets. It's a nice, clean implementation. You just need to wire it up to a source and it's going to send out those datagrams as they arrive.

Sweet.

It's nice to get this done and really have the next cut of the library complete. It's now got receivers and transmitters, and I can build on these or add in more (ZeroMQ, etc.) as needed.

Google Chrome dev 21.0.1155.2 is Out

Wednesday, May 30th, 2012

Google Chrome

This morning I noticed that Google Chrome dev 21.0.1155.2 was out, which marks the first major version number update in a while. It's got a new V8 javascript engine, as well as a few more experimental features enabled by default. It's nice to see it move along, and the updates that disappoint me - primarily due to screen redraw problems - are getting fewer and farther between. That's a great thing.

Glad to see the progress.

Added Adapter Class to DKit

Tuesday, May 29th, 2012

DKit Laboratory

Today I spent the time to finish up the addition of the adapter class to DKit. The idea is pretty simple: place a source and sink back-to-back, keep their template types independent, and there you go! Getting all the templates working in the multiple-inheritance virtual template class is a little tricky, but I got it - eventually.

The original implementation was designed to simply use the API of the source and sink, but then I ended up getting into trouble accessing protected methods in the source and sink, so I needed to lean a little more heavily on the actual implementations of the two. This works well, too, as we then only have one codebase to maintain - and that's always a good thing.

The implementation is really just a combination of the source and sink, but it affords me the flexibility of making a base class that takes a certain type, and generates a certain type, and with a little template specialization, it all comes together very simply and easily for the user.

I worked up a simple test adapter, and it's pretty clean:

  template <class TIN, class TOUT> class MyAdapter :
    public dkit::adapter<TIN, TOUT>
  {
    public:
      MyAdapter() { }
 
      /**
       * This is the main receiver method that we need to call out to
       * a concrete method for the type we're using. It's what we have
       * to do to really get a virtual template class working for us.
       */
      virtual bool recv( const TIN anItem )
      {
        return dkit::adapter<TIN, TOUT>::send(convert(anItem));
      }
 
      /**
       * This method is called when we get a new datagram, and because
       * we are expecting to instantiate this template class with the
       * type 'T' being a <datagram *>, this is the method we're expecting
       * to get hit. It's just that simple.
       */
      std::string convert( const datagram *dg ) {
        std::string     out = "<null>";
        if (dg != NULL) {
          std::cout << "converting: " << dg->contents() << std::endl;
          out.assign(dg->what, dg->size);
        }
        return out;
      }
  };

As with the sink, we need to include the recv() method, and then call the specialized method that will do the actual work. In many cases, this specialized method can be a virtual method, making the implementation of the subclasses even simpler. I think it's a pretty solid design, and I'm happy with it.

I'd still like to add a UDP transmitter, and I hope to get to that tomorrow - just to round out the library for completeness.

Completed the UDP Receiver for DKit

Friday, May 25th, 2012

DKit Laboratory

This afternoon I finally finished the UDP receiver for DKit - and it's everything I'd hoped for in this incarnation. When I started all this work, I wanted to make something that was easy to use, free, and something I could build on easily should I ever find myself in the same place where I needed to create an exchange feed system. Some of the limitations that I've run into in the past are:

  • Too Much in the Receiver Class - this happens all the time at a job: you want to do it "right", but there's no time. So as soon as you get something going, you add to it, as opposed to taking the time to see where it's appropriate to cut the design off and leave the component alone. In the past, this has really hurt the complexity of my UDP receivers.
  • Too Much Copy-n-Paste Reuse - as with the lockless containers and pools, I ended up not working hard enough to make the important template classes, and rather just hammered out code that worked - regardless of the line count. That's a horrible mistake to make.
  • Queueing in the Wrong Places - because of these other two mistakes, I have ended up putting in queueing in the wrong places. This makes things far less reusable as there is a built-in limitation that can't easily be overcome. In the past, this has even further introduced complexity as I've tried to make these queues "switchable".

So this time I was trying to really get it right. And I think I'm a lot closer now than I ever have been in the past. This new implementation in DKit fixes all the things I didn't like about my previous versions: better templates, fewer lines of code, and most importantly: it's the right tool for the job.

Specifically, there are several features in this version that make many of the limitations and problems of the last no longer issues at all:

  • The ability to have any number of receivers on the same service thread - this was a problem of context switching with all the threads doing a lot of waiting. In the old version, I had one thread for each pair of UDP sockets, and that was far too few sockets per thread. I could have easily gone up to 15 or 20, but the architecture of the receivers didn't allow the sharing of the boost::asio::io_service instances like DKit allows. This will make it much easier to effectively load up a box and not pay a high price for context switching.
  • The ability to have any type for a source and sink - the old implementation had a single type of message moving between all sources and sinks, and while this may seem OK, it really meant that I had to have a source generate a message - it couldn't generate a datagram. The ability to have the source generate a datagram means that I can leave that component alone and now make components that sink datagrams and source messages. These are then universal in what they do, and we don't have to worry about putting too much logic into any one component.
  • The minimal feature set helps efficiency and performance - there's a lot to be said for a simple component that does one thing very well, and the templates make the compile time a little long, but in the end, it's incredibly flexible. I've always said that the best designs are the simplest designs, and this is certainly true of the receiver as it exists now. There will be no need for subclassing this guy - he's done. I now just need to make the combined sink/source to act as the subclass for those components that will mutate/translate/convert the output of one source to the input of the necessary sink. This design is far better than what we had.

The usage of this is pretty simple. You first need to make a subclass of the sink and have that call out to a method of the right type. Then you just need to create one, start it listening, and you're there:

  /**
   * This is the tests for the udp_receiver
   */
  //  System Headers
  #include <iostream>
  #include <string>
 
  //  Third-Party Headers
 
  //  Other Headers
  #include "io/udp_receiver.h"
  #include "sink.h"
  #include "util/timer.h"
 
  using namespace dkit::io;
 
  /**
   * I need to have a subclass of sink<T> that's going to handle my messages.
   * Because it's still a template class, I need to call out to a specialized
   * method. It's all still in the class, but it's required for the virtual
   * template class to actually work.
   */
  template <class T> class MySink :
    public dkit::sink<T>
  {
    public:
      MySink() :
        _cnt(0),
        _last_time(0)
      { }
 
      /**
       * This is the main receiver method that we need to call out to
       * a concrete method for the type we're using. It's what we have
       * to do to really get a virtual template class working for us.
       */
      virtual bool recv( const T anItem )
      {
        return onMessage(anItem);
      }
 
      /**
       * This method is called when we get a new datagram, and because
       * we are expecting to instantiate this template class with the
       * type 'T' being a <datagram *>, this is the method we're
       * expecting to get hit. It's just that simple.
       */
      bool onMessage( const datagram *dg ) {
        if (dg == NULL) {
          std::cout << "got a NULL" << std::endl;
        } else {
          std::cout << "got: " << dg->contents() << std::endl;
          _last_time = dkit::util::timer::usecStamp();
          ++_cnt;
        }
        return true;
      }
 
      /**
       * This method will return 'true' if we've received ANY datagrams
       * and if the last one was more than 5 sec ago. That's the timeout
       * for "no more data is coming our way."
       */
      bool allDone()
      {
        using namespace dkit::util;
        static uint32_t __limit = 5 * 1000000L;
        return ((_cnt > 0) && ((timer::usecStamp() - _last_time) > __limit));
      }
 
    private:
      uint32_t    _cnt;
      uint64_t    _last_time;
  };
 
 
  /**
   * This is the main testing app where we'll listen on a specific URL for
   * UDP multicast data, and then process it until there's a timeout. It's
   * going to also use the "shared io_service" capabilities just to make
   * sure that the reference counting in the udp_receiver is working right.
   */
  int main(int argc, char *argv[]) {
    bool  error = false;
 
    /**
     * To wire up as a listener to the udp_receiver, we need to be a
     * subclass of sink<datagram*>… so now that it's made, construct
     * one with its own io_service.
     */
    MySink<datagram*> dump;
    udp_receiver  rcvr(multicast_channel("udp://239.255.0.1:30001"));
    rcvr.addToListeners(&dump);
    rcvr.listen();
    /**
     * At this point, make a new udp_receiver but share the io_service
     * thread from the one we just made. This will mean that both these
     * sockets are serviced on the same thread. Just a nice way to prove
     * that the reference counting is working.
     */
    udp_receiver  hold;
    hold.shareService(rcvr);
    hold.init();
    /**
     * Now let's stay in this loop as long as we need to...
     */
    while (rcvr.isListening() && !dump.allDone()) {
      sleep(1);
    }
    std::cout << "shutting down due to inactivity..." << std::endl;
    rcvr.shutdown();
 
    std::cout << (error ? "FAILED!" : "SUCCESS") << std::endl;
    return (error ? 1 : 0);
  }

The next thing I need to add to DKit is the adapter, which combines the sink and source in a single template class to be able to do all the steps in between. Once I have that, it's a simple matter of making the exchange codecs into adapters, and then I'm out the door with a ZeroMQ transmitter that can take datagrams or payloads of some kind and send them out with a reliable multicast transport.

It's looking pretty good.