Archive for the ‘Coding’ Category

Working Around C++ Virtual Template Issues

Thursday, May 24th, 2012

DKit Laboratory

Today I had a real problem, and I didn't even know it. It took me until I had it all written up to realize that C++ virtual methods in template classes just don't work the way I'd hoped. I ended up doing a big search on Google, and there were enough examples of people trying to do this that I was convinced it really isn't possible. And that left me in the lurch.

The original design I was working with in DKit was a source and a sink, as I've written about in the past. I made these template classes - having just recently realized that all their method implementations needed to live in the header - with the idea that the source and sink templates would be the base classes for a series of sources and sinks moving different kinds of values around.

In order to do this, the sources would allow the sinks to be registered as listeners, and the sinks would then be messaged when a source needed to send something. The source template class method for adding a listener looked like this:

  namespace dkit {
  template <class T> class source
  {
    public:
 
      virtual bool addToListeners( sink<T> *aSink )
      {
        bool        added = false;
        // first, make sure there's something to do
        if (aSink != NULL) {
          // next, see if we can add us as a source to him
          if (aSink->addToSources((const source<T>*)this)) {
            // finally, make sure we can add him to us
            added = addToSinks(aSink);
          }
        }
        return added;
      }
 
  };
  }   // end of namespace dkit

and then the code for sending a 'T' to all the registered listeners looked like this:

      virtual bool send( const T anItem )
      {
        bool       ok = true;
        if (_online) {
          // lock this up for running the sends
          boost::detail::spinlock::scoped_lock  lock(_mutex);
          // for each sink, send them the item and let them use it
          BOOST_FOREACH( sink<T> *s, _sinks ) {
            if (s != NULL) {
              if (!s->recv(anItem)) {
                ok = false;
              }
            }
          }
        }
        return ok;
      }

Notice that I've still got the virtual keyword in the template - I've learned that this is OK, so long as we're still dealing with the template class definitions themselves. So those stay, but the problem was fast approaching.

When I want to make a sink for the particular source I've created (we'll get to that in a minute), I need to subclass the sink, which has the one important method:

  namespace dkit {
  template <class T> class sink
  {
    public:
 
      virtual bool recv( const T anItem )
      {
        /**
         * Here's where we need to process the item and do what we
         * need to with it. We need to be quick about it, though,
         * as the source is waiting on us to finish, and send the
         * same item to all the other registered listeners.
         */
        return true;
      }
 
  };
  }   // end of namespace dkit

This method is called in the source's send() method, and is the way we'll be able to get what we need. My original plan was to implement the listener like this:

  template <class T> class MySink :
    public dkit::sink<datagram*>
  {
    public:
      MySink() { }
 
      virtual bool recv( const datagram *anItem )
      {
        // do something with 'anItem' - the datagram pointer
        return true;
      }
  };

but while it compiled, it didn't work. Why not? Because there is no support for overriding virtual methods in a subclass that pins down the template type like this. None.

No matter what I did, there was no getting around it. My test app was simple:

  int main(int argc, char *argv[]) {
    bool   error = false;
 
    MySink     dump;
    udp_receiver   rcvr(multicast_channel("udp://239.255.0.1:30001"));
    rcvr.addToListeners(&dump);
    rcvr.listen();
    while (rcvr.isListening()) {
      sleep(1);
    }
 
    std::cout << (error ? "FAILED!" : "SUCCESS") << std::endl;
    return (error ? 1 : 0);
  }

I created the sink - called MySink - and then wired it up to the UDP receiver (a source), and started listening. All the datagrams were going to the base class implementation of sink::recv(). Nothing was coming to the subclass's method.

OK, it's just not in the language. Boy, I sure wish it were, but it's not. So how to fix things? Well… I have a FIFO template class in DKit, and it's got working virtual methods. But the trick there is that the subclasses are still template classes. There's no attempt at specialization. So maybe the answer is just that simple: Don't specialize!

So if I don't specialize, then my MySink class is still a template class:

  template <class T> class MySink :
    public dkit::sink<T>
  {
    public:
      MySink() { }
 
      virtual bool recv( const T anItem )
      {
        return onMessage(anItem);
      }
  };

and I've introduced a new function: onMessage(T). This I just need to define for the kind of data I'm actually moving:

  bool onMessage( const datagram *dg ) {
    if (dg == NULL) {
      std::cout << "got a NULL" << std::endl;
    } else {
      std::cout << "got: " << dg->contents() << std::endl;
    }
    return true;
  }

Now my test application has to include the type in the definition:

  int main(int argc, char *argv[]) {
    bool   error = false;
 
    MySink<datagram*>   dump;
    udp_receiver   rcvr(multicast_channel("udp://239.255.0.1:30001"));
    rcvr.addToListeners(&dump);
    rcvr.listen();
    while (rcvr.isListening()) {
      sleep(1);
    }
 
    std::cout << (error ? "FAILED!" : "SUCCESS") << std::endl;
    return (error ? 1 : 0);
  }

What I'm doing is keeping the subclass a template, just like its parent, so that the virtual methods still work for us. Then I take advantage of the fact that templates are resolved at compile time, so I only need to write the one onMessage() overload for the type I'm actually moving. If I need a new subclass later, then so be it. I can make these implementations very easily and very quickly, and it's nearly as good as fully virtual template methods - just one small step in there to glue things together.

I'm pleased with the progress. I'll be checking all this into GitHub soon, and then it'll be up there for good, but for now, I'm happy that I've figured out a clean way to make this work for me.

[5/25] UPDATE: I was working on this this morning, and there's a slightly cleaner way to do the polymorphic behavior with template classes:

  template <class T> class MySink :
    public dkit::sink<T>
  {
    public:
      MySink() { }
 
      virtual bool recv( const T anItem )
      {
        return onMessage(anItem);
      }
 
      bool onMessage( const datagram *dg ) {
        if (dg == NULL) {
          std::cout << "got a NULL" << std::endl;
        } else {
          std::cout << "got: " << dg->contents() << std::endl;
        }
        return true;
      }
  };

and at this point, it's all contained in the subclass of sink.

Minor point, but it's contained in one class, and that makes things a little easier to read and deal with. I tried to put the call in the sink superclass, but it's the same problem all over again - virtual methods in template classes are just not very well supported. But this at least is not too big of a hack to get things working properly.

Building Solid Template Classes – Stick to Headers

Tuesday, May 22nd, 2012

DKit Laboratory

Today I was starting to compile my latest addition to DKit - the UDP receiver that's a source of datagrams in the DKit sense. This is really just the first step of many to get the exchange feeds re-written into DKit in a far better way than I had recently done. The second time is always better, and the third better still.

So I had built the source and sink template classes with separate headers and implementation files. This, it turns out, really isn't such a hot idea. Because the templates are expanded at compile time, it's not possible to compile the implementation once and then reuse it for every type. I should have known this, but it wasn't smacking me in the face until I started trying to compile the UDP receiver and the linker complained about a missing implementation of source::~source() for the (datagram *) specialization.

Then it was instantly clear - I needed to keep everything in headers so that they could be expanded and built as needed, at compile time. Thankfully, there wasn't a lot of work to do - just move the code from the implementation file to the header and drop the implementation files from the directory and Makefile. It didn't take long, but it was clear that this was the right thing to do. The compiles started working, and things progressed nicely.
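
To make that concrete, here's a minimal sketch - not the actual DKit classes, just an illustration - of what "keep it all in the header" looks like. The method bodies sit right in the header, so the compiler can expand them for whatever type the template gets instantiated with:

  // source.h - everything, including the method bodies, lives in the header
  #include <string>

  namespace dkit {
  template <class T> class source
  {
    public:
      source() : mName() { }
      virtual ~source() { }

      // defined inline, right here - there's no source.cpp to build separately
      void setName( const std::string & aName ) { mName = aName; }
      const std::string & getName() const { return mName; }

    private:
      std::string     mName;
  };
  }   // end of namespace dkit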

Not done, that's for sure, as I still need to throw in the async reader method to the UDP receiver, and then I can pull in that pool of datagrams and we should be in good shape. But it's still a little bit of work.

But it's nice to know that template programming is not only tricky and convoluted, it's also like Java code - all in one file. OK… that's a stretch, even for me. 🙂

General Template Usage with Pointers

Monday, May 14th, 2012

DKit Laboratory

This evening I finished up adding pools to DKit, and there are a few things that make it so bloody cool I had to write about it. In short, the problem is one that I've had to deal with before: how to make a simple pool of things such that I only create what I need, don't over-create, and can use a manageable set of things from a pool.

Say I wanted to have a messaging system. I might want to have a bunch of std::string values that are created with a minimum size to make it easy to move things around. Then, all I would need to do is to get one from the pool, clear it out, add in the data, and then when I'm done, return it to the pool. The place in the code that gets one from the pool is totally different from the place that recycles these instances, and it's possible that due to threading or queueing, we may need to have several in play all at once. But eventually they will all come back to be recycled.

In the past, I had one class for a std::string pool, another for datagrams, and so on. Each of these was almost identical to the others, differing only in how the instance was created. Typically, I'd have some sense of the 'default' size of the container I was creating. I based each on a single type of FIFO queue, so that it would always be a SP/SC pool, etc. This was necessary because I hadn't superclassed the FIFO queues as I have in DKit.

Still… this was a lot of copy-n-paste reuse, and it was clear that it wasn't anywhere near as flexible as it could be. So today I decided to try and see what I could get away with if I tried to make a complete template pool class. The challenges were pretty clear:

  • Include the Type of the Queue in the Template - I knew that now that I had the FIFO abstract template class in DKit, it was going to be possible to have the constructor make a queue of the right type, and then just "use it" via the FIFO abstract template class and be able to allow the user to define what access type they wanted for the pool.
  • Include the Max Size of the Pool as a Power of Two - this was in keeping with the queues I'd be creating, and so shouldn't prove to be too hard.
  • Allow Pointers and Non-Pointers to be Pooled - this was the biggest challenge I faced, to be sure. In the pool, I wanted to have two basic methods: next() to get a new one from the pool, and recycle(T) to return it to the pool. The problem is what if the type 'T' is a pointer versus a non-pointer? How do we make sure we can delete a pointer, but simply let a uint64_t fall on the floor and be cleaned up?

Thankfully, I was able to look on Stack Overflow and see this interesting question, which led me to the answer I needed. What it really boiled down to was that I could use boost::is_pointer<T>, but that only really tells me whether I need to clean up the contents of the queue in the clear() method. The trick to constructing and destructing was to realize that template functions work outside the scope of the class definition, so after the class definition I added in these functions:

  namespace dkit {
  namespace pool_util {
  /**
   * In order to handle both pointers and non-pointers as data
   * types 'T' in the pool, we need to take advantage of
   * function template overloading and make create() and destroy()
   * functions for pointers and non-pointers.
   *
   * For create(), it's pretty easy - we do nothing for the
   * non-pointer, and a standard 'new' for the pointer.
   * For destroy(), it's similar - we delete it and then NULL it
   * out if it's a pointer; if it's not, we do nothing.
   */
  template <typename T> void create( T t ) { }
  template <typename T> void create( T * & t )
  {
    t = new T();
  }
 
  template <typename T> void destroy( T t ) { }
  template <typename T> void destroy( T * & t )
  {
    if (t != NULL) {
      delete t;
      t = NULL;
    }
  }
  }      // end of namespace pool_util
  }      // end of namespace dkit

Then, in the critical next() and recycle() methods, I simply used these functions:

  /**
   * This method is called to pull another item from the pool, or
   * create a new one if nothing is in the pool. This is the classic
   * way of getting the "next" item to work with.
   */
  T next()
  {
    T      n;
    // see if we can pop one off the queue. If not, make one
    if ((mQueue == NULL) || !mQueue->pop(n)) {
      pool_util::create(n);
    }
    // return what we have - new or used
    return n;
  }
 
 
  /**
   * This method is called when the user wants to recycle one of
   * the items to the pool. If the pool is full, then we'll simply
   * delete it. Otherwise, we'll put it back in the pool for use
   * the next time.
   */
  void recycle( T anItem )
  {
    if ((mQueue == NULL) || !mQueue->push(anItem)) {
      pool_util::destroy(anItem);
    }
  }

By using the create(n) and destroy(n) functions, I let the compiler see the template overloads and pick which one to use. For a pointer 'T', it chooses the pointer-reference overload; for a non-pointer, the pass-by-value overload. This selectivity lets me specialize these functions for a specific type whenever I want, making the actual construction and destruction as complicated as I need, without requiring anything beyond the default constructor and destructor in the simple case.
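
Just to convince myself the compiler picks the right overload, a quick hypothetical check - the 'widget' struct here is just a stand-in type, not anything in DKit - would look like this:

  struct widget { };       // stand-in type with a default constructor

  int      counter = 0;
  widget   *w = NULL;
  dkit::pool_util::create(counter);    // non-pointer overload - does nothing
  dkit::pool_util::create(w);          // pointer overload - w = new widget()
  dkit::pool_util::destroy(counter);   // does nothing
  dkit::pool_util::destroy(w);         // delete w; w = NULL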

Once I had this, the remaining problems weren't too bad at all.

I created an enum for the type of access to use:

  /**
   * We need to have a simple enum for the different "types" of queues that
   * we can use for the pool - all based on the complexity of the access. This
   * is meant to allow the user to have complete flexibility in how to ask for,
   * and recycle items from the pool.
   */
  namespace dkit {
  enum queue_type {
    sp_sc = 0,
    mp_sc,
    sp_mc,
  };
  }       // end of namespace dkit

and then it was pretty simple to make the template and the constructor:

  template <class T, uint8_t N, queue_type Q> class pool
  {
    public:
      pool() :
        mQueue(NULL)
      {
        switch (Q) {
          case sp_sc:
            mQueue = new spsc::CircularFIFO<T, N>();
            break;
          case mp_sc:
            mQueue = new mpsc::CircularFIFO<T, N>();
            break;
          case sp_mc:
            mQueue = new spmc::CircularFIFO<T, N>();
            break;
        }
      }
 
  …
  };

In the end, the code worked wonderfully. I built a test app that made sure the destructor was properly being called when recycle() was being called and the queue was full - check. I also made sure that when the pool was destructed, any remaining elements were properly destructed - if they were pointers.
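
As a rough usage sketch - assuming the pool template above, and a default-constructible datagram class - the whole thing ends up reading like this:

  // 'datagram' here is whatever default-constructible class you're pooling;
  // this is a single-producer/single-consumer pool of up to 2^10 pointers
  dkit::pool<datagram*, 10, dkit::sp_sc>   recycler;

  datagram   *dg = recycler.next();    // a new or recycled instance
  // ... fill it in, hand it off downstream ...
  recycler.recycle(dg);                // back to the pool, or deleted if it's full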

It was really an amazing little bit of code. This is far more flexible and better than the previous single-purpose pools I've written. Less code is always better.

Added Source and Sink Template Classes to DKit

Friday, May 11th, 2012

DKit Laboratory

Today I wanted to add a little bit more to DKit, so I decided that the next best thing to add were the concepts of a template source and sink. When I built the MessageSource and MessageSink back at PEAK6, I did it in the context of a Message object. That was fine, because that's all we needed, but this time I wanted to make it a little better - no, a lot better - so I made the source and sink template classes.

This will allow me to use the pointers like I did before, but it will also make it very easy to use integers, or doubles - or anything. This will be a far better solution to the problem than a fixed class, or pointer to a class.

The only real problem I ran into - if there were any real problems at all - was the syntax for specifying the template class methods in the implementation file. The header was pretty clear and straightforward, but the implementation was a bit trickier.

Thankfully, I was able to find some examples on the web, but the syntax was pretty bad. I mean really bad. For example, let's say I had the following template class:

  namespace dkit {
  template <class T> class source
  {
    public:
 
      virtual void setName( const std::string & aName );
      virtual const std::string & getName() const;
      virtual bool addToListeners( sink<T> *aSink );
 
  };
  }     // end of namespace dkit

then the implementation file would have to look something like this:

  namespace dkit {
 
  template <class T> void source<T>::setName( const std::string & aName )
  {
    boost::detail::spinlock::scoped_lock  lock(mMutex);
    mName = aName;
  }
 
 
  template <class T> const std::string & source<T>::getName() const
  {
    return mName;
  }
 
 
  template <class T> bool source<T>::addToListeners( sink<T> *aSink )
  {
    bool       added = false;
    // first, make sure there's something to do
    if (aSink != NULL) {
      // next, see if we can add us as a source to him
      if (aSink->addToSources(this)) {
        // finally, make sure we can add him to us
        added = addToSinks(aSink);
      }
    }
    return added;
  }
 
  }     // end of namespace dkit

Now I understand the need to clearly identify the class - hence the template <class T> on the front, but then it seems to really go overboard there. While I'm sure there's a wonderful reason for all this, it seems to have not undergone any simplification over the years. That's probably part of the problem that folks have with C++ - it's complicated looking. And if you don't get it just right, you're not going to even get it to compile.

But for me, it's just what has to be done in order to have template programming. It's a pain, yes, but it's so incredibly worth it.

Looking at C++ Unit Testing Frameworks

Thursday, May 10th, 2012

bug.gif

I've used JUnit and the Unit Testing Framework in Xcode 4, and both are pretty nice - they allow you to write tests, not headers and a lot of boilerplate code. Just the tests. Lots of simple assertions and tests, and it's really not bad at all to use these guys. But with C++ I'm finding it a lot harder to get going with a testing framework. Maybe it's me, maybe it's just the language, but it doesn't save a lot over writing test apps with explicit return codes.

I'm not saying that testing isn't useful; what I'm wondering is whether C++ as a language is really set up to have a nice, simple unit testing framework like JUnit or the SenTesting framework. After all, there's a lot of flexibility in Java and Obj-C that simply isn't in C++. You can add methods to a Java or Obj-C class in the implementation file and run with it. Reflection (introspection) allows the language and framework to see what's available to run, and then run it. Not so for C++.

So I'm wondering if there's really any better solution than a series of good testing apps with proper return codes, and then you just run one after the other until you have what you need. Not ideal, to be sure, and it takes some effort and discipline to make sure the test apps are done properly, but I just don't see a way to make it happen in a significantly easier way.
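
Just as a sketch of what I mean by a testing app with a proper return code - nothing framework-specific, just hand-rolled checks and an exit code that a script or make target can act on:

  #include <iostream>

  int main(int argc, char *argv[]) {
    bool   error = false;

    // hand-rolled "assertion" - set the error flag on any failure
    if ((2 + 2) != 4) {
      std::cout << "simple math is broken!" << std::endl;
      error = true;
    }

    std::cout << (error ? "FAILED!" : "SUCCESS") << std::endl;
    return (error ? 1 : 0);
  }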

I'll keep looking, though. Maybe someone is going to crack this nut soon.

Fun with Boost and DKit

Thursday, May 10th, 2012

DKit Laboratory

This morning I finished up a little coding I was doing on DKit and was really happy about the outcome. The thing that really set the stage was the building of boost for OS X 10.7. It was really pretty simple, and it allows me the flexibility to use boost in DKit. Now there was nothing really stopping me before, but without it running on my MacBook Pro, it was a little hard to know that it was going to work.

I suppose I could have pulled out my old Intel laptop, downloaded Ubuntu 12, put it on there, and run everything in a terminal, but I didn't really feel like bringing that guy out of the closet. Maybe it's time, though - I could really stand to have Ubuntu 12 working now.

In any case, boost is built, and I was able to make the "hammer" and "drain" thread tests on the LinkedFIFO queues. The idea is that I have a "hammer" that can place items on a queue, and then a drain that can empty a queue. By putting these in different combinations, I can test a multi-producer/single-consumer queue as well as a single-producer/multi-consumer queue.

Starting with the Right Base Class

One of the neat things I did this morning was to make a base abstract template class for all the FIFO queues, as they all (on purpose) have the same basic API. This then allows me to treat all the different implementations as just that - implementations and not as something of significant difference.

It's not too exciting, but I was pretty pleased after I had created it and then used it as the base class for all the FIFO implementations I had in DKit. The really important part was to make sure that the core API methods were pure virtual methods - a.k.a. virtual abstract methods:

  /*******************************************************************
   *
   *                        Accessor Methods
   *
   *******************************************************************/
  /**
   * This method takes an item and places it in the queue - if it can.
   * If so, then it will return 'true', otherwise, it'll return 'false'.
   */
  virtual bool push( const T & anElem ) = 0;
  /**
   * This method updates the passed-in reference with the value on the
   * top of the queue - if it can. If so, it'll return the value and
   * 'true', but if it can't, as in the queue is empty, then the method
   * will return 'false' and the value will be untouched.
   */
  virtual bool pop( T & anElem ) = 0;
  /**
   * This form of the pop() method will throw a std::exception
   * if there is nothing to pop, but otherwise, will return
   * the first element on the queue. This is a slightly different
   * form that fits a different use-case, and so it's a handy
   * thing to have around at times.
   */
  virtual T pop() = 0;
  /**
   * If there is an item on the queue, this method will return a look
   * at that item without updating the queue. The return value will be
   * 'true' if there is something, but 'false' if the queue is empty.
   */
  virtual bool peek( T & anElem ) = 0;
  /**
   * This form of the peek() method is very much like the non-argument
   * version of the pop() method. If there is something on the top of
   * the queue, this method will return a COPY of it. If not, it will
   * throw a std::exception, that needs to be caught.
   */
  virtual T peek() = 0;
  /**
   * This method will clear out the contents of the queue so if
   * you're storing pointers, then you need to be careful as this
   * could leak.
   */
  virtual void clear() = 0;
  /**
   * This method will return 'true' if there are no items in the
   * queue. Simple.
   */
  virtual bool empty() = 0;
  /**
   * This method will return the total number of items in the
   * queue. Since it's possible that multiple threads are adding
   * to this queue at any one point in time, it's really at BEST
   * a snapshot of the size, and is only completely accurate
   * when the queue is stable.
   */
  virtual size_t size() const = 0;

Then it was easy to use these methods in place of the actual methods in the subclasses. It's just a better way to define the API. With that done, I was able to make the Hammer and Drain because when I needed to reference a Queue, I just used dkit::FIFO and that was good enough. Pretty nice.
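
As a rough sketch of how the hammer and drain fit together - simplified, not the actual test code, and assuming boost::thread plus an MP/SC linked FIFO living at dkit::mpsc::LinkedFIFO:

  #include <boost/thread.hpp>

  // blindly push a known number of items onto the queue
  void hammer( dkit::FIFO<int> *aQueue, int aCount )
  {
    for (int i = 0; i < aCount; ++i) {
      aQueue->push(i);
    }
  }

  // keep popping until we've seen everything the hammers pushed
  void drain( dkit::FIFO<int> *aQueue, int aTarget )
  {
    int    v = 0;
    int    seen = 0;
    while (seen < aTarget) {
      if (aQueue->pop(v)) {
        ++seen;
      }
    }
  }

  int main(int argc, char *argv[]) {
    // assumed location/name of the MP/SC linked FIFO
    dkit::mpsc::LinkedFIFO<int>   queue;
    // two hammers and one drain exercise the multi-producer side
    boost::thread   h1(hammer, &queue, 10000);
    boost::thread   h2(hammer, &queue, 10000);
    boost::thread   d(drain, &queue, 20000);
    h1.join();
    h2.join();
    d.join();
    return 0;
  }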

Handling the Logging

One of the problems with multi-threaded testing is that the output tends to garble itself pretty badly at times, making it nearly impossible to tell what the code was really trying to say. For example, I had the following in the Drain:

  std::cout << "[Drain::doIt(" << mID << ")] - popped "
            << mCount << " items off the queue" << std::endl;

but because it was handled as six separate stream insertions, there was more than enough time for another thread to jump into the output stream and intermingle its output with the message I was trying to send.

So what's a better way? Well… have all the components in one place, like this:

  std::ostringstream msg;
  msg << "[Drain::doIt(" << mID << ")] - popped "
      << mCount << " items off the queue";
  std::cout << msg.str() << std::endl;

What I realized was that this is still two insertions - the message and the std::endl - and the newlines were getting interleaved and making the output look bad. The final result I went with was the "all-in-one" idea:

  std::ostringstream msg;
  msg << "[Drain::doIt(" << mID << ")] - popped "
      << mCount << " items off the queue" << std::endl;
  std::cout << msg.str();

At this point, things looked a lot better, and I wasn't getting the scrambled messages. Now this is only as good as a single write to std::cout being atomic, but that's a far better bet than hoping an entire chain of streaming operations would be atomic. That's just not happening.

Adding a Little Polish

Once I had the LinkedFIFO tests done and the classes all ready to go, I checked it all in and then updated the docs to reflect the addition of the base class, as well as the new dependency on boost and the rationale for it. I'm not sure many are going to care - the boost usage is pretty isolated, so if you really don't want to use it, it's not hard to remove, even though boost is on just about every platform you could imagine. But it makes life and portability much easier.

It's been a productive morning. I need to go back and put the tests in for the CircularFIFO implementations, but that's not going to be too hard - after all, the same code will work on them that did on the LinkedFIFOs. I just need to move it over and let it run.

Apple Released Mac OS X 10.7.4 on Software Updates

Wednesday, May 9th, 2012

Software Update

Just saw on Twitter that 10.7.4 was released and includes a few bug fixes that even affect Acorn, my favorite image editing application. I'm not sure how much it'll affect me, as I'm not loading Photoshop images with more than 200 layers, but it's nice to see that it's getting fixed, and all these fixes will be going into 10.8 Mountain Lion this Summer.

I'm glad that it's all coming together today - a few updates, a building of boost, and then I'm starting to feel like the day hasn't been a waste.

Building Boost 1.49.0 on Mac OS X 10.7

Wednesday, May 9th, 2012

Boost C++ Libraries

I wanted to add threading to DKit, but in order to do that I needed a threading model, and I had no desire to use straight pthreads, nor to write everything needed to encapsulate pthreads into a decent threading library myself. So I decided to give boost on OS X a try. I didn't want to use the Homebrew stuff as it's an entire package maintenance system, and I didn't want to even go near MacPorts. So I decided to do a simple boost install myself.

Turns out, it's exceptionally easy.

First, simply get the latest package from the Boost web site. Then put it in a directory - any directory, and then run the following:

  $ cd path/to/boost
  $ ./bootstrap.sh
  … some config output …
  $ sudo ./b2 architecture=x86 address-model=32_64 install

And what you'll get is everything built as 32- and 64-bit universal binary libraries and deposited in /usr/local/include/boost and /usr/local/lib. It's all there, and it's trivial to uninstall:

  $ cd /usr/local/include
  $ sudo rm -rf boost
  $ cd /usr/local/lib
  $ sudo rm -rf libboost_*

What could be simpler?

At this point, you can write simple little apps that use boost:

  #include <string>
  #include <boost/unordered_map.hpp>
 
  int main(int argc, char *argv[]) {
    boost::unordered_map<int, std::string>   a;
    a[4] = "yoyo";
    return 0;
  }

and then simply compile them without any unusual flags:

  $ g++ boost.cpp

Because it's all in /usr/local/include and /usr/local/lib - GCC automatically finds them. Sweet!
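
One thing to note: this works because unordered_map is one of the header-only parts of boost. If I pull in one of the separately compiled libraries - boost::thread, say - I'd expect to need the link flag as well, something along these lines (the exact library name depends on how the build laid things out):

  $ g++ threaded.cpp -lboost_thread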

Now I can get to adding those threading ideas to DKit.

[5/23] UPDATE: if you plan to do any debugging, you need to make sure that the built shared, debug, versions of the libraries are available to you. This is easily done with the following after you build:

  $ cd path/to/boost
  $ sudo chown -R your_login:staff bin.v2

When the build is done as 'sudo', the directories created are all owned by root. You just need to revert them to you, and then gdb works wonderfully.

Google Chrome dev 20.0.1130.1 is Out

Wednesday, May 9th, 2012

Google Chrome

I just noticed that the Google Chrome team has released a new dev version: 20.0.1130.1, and the release notes say it's focused on a new version of the V8 JavaScript engine (3.10.8.4) and a raft of stability fixes. Glad that they are still working on things, but it's been a while since I've seen a really new feature or capability released that even remotely impacts me. Still… it's to be expected, really. The browser has become stable. There's HTML5, and it's widely supported in WebKit, and that's in Chrome, Safari, and many others. It's just not that dynamic a platform any more. And that is good news. Stable platforms are nice to write for.

Google Chrome dev 20.0.1123.1 is Out

Wednesday, May 2nd, 2012

Just noticed that the Google Chrome team has updated the dev pipeline to 20.0.1123.1 which is supposed to include the V8 javascript engine 3.10.6.0 as well as a few low-level mouse over fixes and a few Windows-specific updates. Nice to see, but it's really running amazingly well these days, so I can't imagine what they will be doing next.