Archive for March, 2004

Java Threads

Wednesday, March 31st, 2004

OK... I've decided that I like pthreads in C++ a lot more than the threading model in Java - at least when you're not totally familiar with the Java model. One of the classes I've been dealing with this morning is a FIFO queue. I thought it was thread-safe: the methods that need synchronization to ensure single-threaded access to the data are synchronized, and those that might require multiple-thread access are left unsynchronized. This seemed reasonable because, where necessary, the methods had a synchronized block on the variable of import. I was so very wrong.

The problem was that I had created two mutexes, and therein lay my trouble. My confusion was thinking that a synchronized method with a synchronized block containing a wait() would block all further access to that instance by other threads. In fact, a synchronized method is really no more than the entire method body wrapped in a synchronized block on this. So if the wait() was on this, then I could simply synchronize the method, and the wait() would release the lock and let other threads message this instance.

This made the code a lot easier, as the push() method then simply did a notifyAll() when placing something of interest on the queue. Again, if the push() method was synchronized, then control would pass to the waiting threads and then back to the push() thread, and things just worked out.

So really, it's when I didn't understand the actual mutexes in place in the Java code and tried to work up something to do what was necessary that I got into trouble. Now I just lock on the queue instance itself and the synchronized methods do all the hard work for me.
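Putting those pieces together, here's a minimal sketch of the kind of thread-safe FIFO described above (class and method names are mine, not the original's): every synchronized method locks on this, so the wait() in pop() releases exactly the lock that push() needs, and push()'s notifyAll() wakes the waiters.

```java
import java.util.LinkedList;

// Minimal sketch of a thread-safe FIFO: synchronized methods lock on `this`,
// so a wait() inside one releases exactly the lock other callers need.
class FIFOQueue {
    private final LinkedList items = new LinkedList();

    // A synchronized method is just the body wrapped in synchronized (this).
    public synchronized void push(Object item) {
        items.addLast(item);
        notifyAll();            // wake any thread blocked in pop()
    }

    // Blocks until an item arrives; wait() releases the lock on `this`
    // so push() can get in while we sleep.
    public synchronized Object pop() {
        while (items.isEmpty()) {
            try {
                wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return null;    // interrupted: give up
            }
        }
        return items.removeFirst();
    }

    public synchronized int size() {
        return items.size();
    }
}
```

The while loop around wait() matters: it guards against spurious wakeups and against another consumer grabbing the item between the notifyAll() and this thread re-acquiring the lock.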

Passing Data to Threads

Tuesday, March 30th, 2004

One of the things that I seem to do on a somewhat frequent basis is re-inventing the wheel. Case in point: I had a section of code in a project that was single-threaded but could easily have been multi-threaded; all I really had to do was handle getting data into the threads and the processed data out.

I thought about it and built something that should have worked, but had totally forgotten that I'd built a thread-safe FIFO queue that would be a wonderful tool in this situation. In the original version, I tried to handle the passing of each bit of data from the controller thread to each worker thread and that turned out to be a major hassle. When I changed focus to the FIFO queue, it was exceptionally easy and fast.

Next time, I'm really going to try to look at the components that work well and build around them. The difference was amazing.
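The shape of the fix can be sketched like this, with the JDK's LinkedBlockingQueue (from java.util.concurrent, which arrived in J2SE 5.0) standing in for the home-grown thread-safe FIFO; the worker count, job payloads, and sentinel convention are all my own illustration:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the controller/worker layout: the controller pushes jobs onto an
// input queue; each worker pops, processes, and pushes results onto an
// output queue. A negative job is a "no more work" sentinel.
class WorkerPool {
    static int runJobs(int nWorkers, int nJobs) throws InterruptedException {
        final BlockingQueue<Integer> input = new LinkedBlockingQueue<Integer>();
        final BlockingQueue<Integer> output = new LinkedBlockingQueue<Integer>();

        for (int i = 0; i < nWorkers; i++) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            int job = input.take();
                            if (job < 0) return;     // sentinel: no more work
                            output.put(job * job);   // "process" the job
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }).start();
        }

        for (int j = 1; j <= nJobs; j++) input.put(j);     // hand out work
        for (int i = 0; i < nWorkers; i++) input.put(-1);  // one sentinel each

        int sum = 0;                  // gather the processed results
        for (int j = 0; j < nJobs; j++) sum += output.take();
        return sum;
    }
}
```

The nice part is exactly what the post says: the controller never talks to an individual worker - it just feeds one queue and drains the other.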

Great CLASSPATH Trick

Thursday, March 18th, 2004

A lot of Java projects have several JARs you need to put into the CLASSPATH in order to get things to compile properly. Whether you're using jikes or javac, you need to build up the CLASSPATH so that the user's environment isn't expected to provide the correct CLASSPATH for building.

Using GNU make, and assuming that all the JARs are in a single directory - as they usually are - you can do the following:

empty:=
space:= $(empty) $(empty)

JARS:= $(shell find ../libs -name '*.jar')
CLASSPATH:= $(subst $(space),:,$(JARS))
list:= find src -name '*.java' | grep -v /testers/
SRC:= $(shell $(list))
CLASSES:= $(SRC:%.java=%.class)

my.jar: $(SRC)
        javac -classpath $(CLASSPATH) $?
        rm -f my.jar
        jar cf my.jar `find src -name '*.class' | grep -v /testers/`

This allows you to simply place JARs in the libs directory and they will automatically be picked up into the CLASSPATH. Also, this compiles only the Java source files that have changed with respect to the jar, and then creates a new version of that jar.

UPDATE

I have completely updated the makefile for BKit and here it is:

#
#  This is a very simple makefile for BKit.
#

#	If we're on a platform with jikes, use it as it's faster
ifeq ($(shell uname),Darwin)
JAVAC:=jikes -bootclasspath /System/Library/Frameworks/JavaVM.framework/Classes/classes.jar:/System/Library/Frameworks/JavaVM.framework/Classes/ui.jar \
	-extdirs /Library/Java/Home/lib/ext:/Library/Java/Extensions:/System/Library/Java/Extensions \
	-nowarn
RMIC:=rmic
else
JAVAC:=javac -J-ms32m -J-mx32m
RMIC:=rmic
endif

#	get the CLASSPATH we'll need
CLASSPATH:=/usr/local/MQClientV2:/usr/local/VantagePoint/Jars/vpJava2JFC.jar:/usr/local/jConnect/4.2/classes:/usr/local/jep/jep.jar

#	get all the Java files that need building
list:= find one -name '*.java' | grep -v /ado/
SRC:= $(shell $(list))
CLASSES:= $(SRC:%.java=%.class)
#	get all the classes that need rmic-ed
rmics:= find one -name '*.java' | xargs -J % grep -l UnicastRemoteObject % | \
	sed -e 's/.java//' -e 's:/:.:g'
REMOTE:= $(shell $(rmics))

all: .compile

jar: .compile
	@ rm -f bkit.jar
	@ jar cf bkit.jar `find one -name '*.class' | grep -v /testers/ | \
	grep -v /ado/`

.compile: $(SRC)
	@ $(JAVAC) -classpath .':'$(CLASSPATH) $?
	@ $(RMIC) -classpath .':'$(CLASSPATH) $(REMOTE)
	@ touch .compile

install-applet: jar
	@ cp bkit.jar $(HOME)/Sites/applets/classes/

clean:
	@ rm -f bkit.jar
	@ rm -f `find one -name '*.class'`

We pick up all the rmic-able classes automatically, and build just what we need. Not bad.

Disappointment in HippoDraw

Sunday, March 14th, 2004

OK, clearly I spent a lot of time trying to get HippoDraw working on my Powerbook. It wasn't obvious, nor easy, but there was a certain satisfaction up to the point of hacking at the Python includes. Yet when I got it all compiled - and it did all compile - it didn't run for spit. Now I don't think of myself as a pig, or an extremist, but when I spend a lot of time building something that's supposed to work - and has been working for a long, long time - I get a little disappointed.

I'm sorry... I can't believe that the guys who say they got this working on Mac OS X really have. At least not with the versions of the libraries that I have. Of course, the mailing list is down, so I can't find anything about what I might do differently to get this darn thing to work.

So I trashed it. Bummer, but Qt is running fine so it's not that, and Boost and Python seem to be OK, but with no Python experience I'm really not able to debug what's going on there. It's a bummer that Apple broke NXHost-ing as well; otherwise I'd use the NeXTSTEP box for HippoDraw - the one that works!

Well... maybe in a few versions I'll try it again.

Building HippoDraw 1.51

Saturday, March 13th, 2004

OK, I've loved HippoDraw on NeXTSTEP as a wonderful plotting package. It was Open Source and that's great. I believe that CERN did a lot of work on it - they still may, I'm not positive. Anyway, I wanted something for my Powerbook and so I decided to go through the troubles of building HippoDraw for Mac OS X.

  1. First, get the sources for Boost-Jam. This is a build tool that the Boost folks have put together. Interesting aside on this point - the Boost folks are putting together what they believe to be the missing object library for C++. Interesting that they went the library route when the STL group (obviously) took the templates route. But having read the docs for HippoDraw, it's clear that I'm going to need to get the Boost.Python library going, so I'm going to need Boost.

    So... go to the Boost web site and pick up the sources for boost-jam as well as Boost itself. There are pre-built binaries for Linux, etc. but nothing for Mac OS X. I got boost-jam 3.1.9 and Boost 1.31.

  2. Now we need to build boost-jam. After unpacking it into a directory, build it with:

    % ./build.sh darwin
    

    and then put the results of the build into /usr/local/bin with:

    % sudo cp bin.macosxppc/* /usr/local/bin/
    
  3. Next, unpack and build Boost for Panther with Python 2.3:

    % cd ~/Developer/boost_1_31_0/
    % setenv PYTHON_ROOT /System/Library/Frameworks/Python.framework/Versions/2.3
    % setenv PYTHON_VERSION 2.3
    % sudo bjam "-sTOOLS=darwin" install
    

    and then clean up the installation with:

    % cd /usr/local/include/boost-1_31
    % sudo mv boost ..
    % cd ..
    % sudo rm -rf boost-1_31
    % cd /usr/local/lib
    % ln -s libboost_python-1_31.dylib libboost_python.dylib
    
  4. Now build Qt/Mac. First, download the source to /usr/local and then rename the downloaded version (3.3.1 in my case) to /usr/local/qt - or link it if you want to keep the version information. Then:

    % configure -thread
    % cd /usr/lib
    % sudo ln -sf /usr/local/qt/lib/libqt-mt.3.3.1.dylib libqt-mt.dylib
    % sudo ln -sf /usr/local/qt/lib/libgui.1.0.0.dylib libgui.dylib
    

    This is really all in the Qt/Mac docs.

  5. On Panther, in the Python includes, I've found a little problem that I haven't really dug into as I'm not a major Python fan. In the Python include directory, $PYTHON_ROOT/include/python2.3, there's a file, object.h. It seems that there's a compile-time parsing problem with line 343:

    PyObject *name, *slots;
    

    that needs to be changed to:

    PyObject *name, *user_slots;
    

    in order for things to compile correctly. I've tried to find references to 'slots' but to no avail. (My guess is a macro clash - Qt, for one, #defines slots for its signal/slot machinery, which would break any Python header included after a Qt header.) I'm not worried about it as I'm not sure of the impact, but it's something to consider - and certainly back up object.h if you make the change.

  6. On Panther, I've found that isnan(x) is not properly defined, but it's easy enough to correct. In HippoDraw 1.5.1 it turns out that you need to add the lines:

    #ifdef __MACH__
    #ifndef isnan
    #define  isnan(x) ((sizeof(x) == sizeof(double)) ? __isnand(x) : \
                      (sizeof(x) == sizeof(float)) ? __isnanf(x) : __isnan(x))
    #endif
    #endif
    

    at the top of minimizers/BFGSFitter.h to get it to compile. Also, the libtool needs to have a modification. At line 2889 the original libtool reads:

    # Add a -L argument.
    newdeplibs="$newdeplibs $a_deplib" ;;
    

    which needs to be changed to:

    # Add *only* a -L argument
    case $a_deplib in
      -L*) newdeplibs="$newdeplibs $a_deplib" ;;
    esac
    

    Also in libtool you need to add the lines:

    -framework | Carbon | QuickTime | System | OpenGL | AGL)
      deplibs="$deplib $deplibs"
      continue
      ;;
    

    around line 1785 right before the line:

    %DEPLIBS%)
    

    After that, it's a standard build with a few arguments:

    % configure --with-Qt-dir=/usr/local/qt --with-Qt-lib=qt-mt \
                --with-boost-includes=/usr/local/include/boost \
                --with-boost-lib=/usr/local/lib \
                --with-python-include=$PYTHON_ROOT/include/python2.3
    % make
    

Java Serialization Loops

Friday, March 12th, 2004

In general, I appreciate the work the Java folks have done to make remote procedure calls easy, and for the most part they have done a pretty good job. The single weakness that I can find is their misunderstanding of their own data structures.

For example, say I have two HashMaps in JDK 1.3.1. Each has a single key/value pair: the String "other" as the key and a HashMap (the other one) as the value. It seems pretty obvious that if you're walking the structure you'd see one, move to the other, see the first again, and stop. But Java isn't that smart. It gets into an infinite loop trying to evaluate the HashMaps as opposed to seeing that they are in fact one and the same.
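The structure in question takes only a couple of lines to build (a sketch; the class name is mine). Note that even walking it naively - say, calling hashCode() or toString() on either map - recurses until the stack blows, since each map's value is the other map:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the circular structure described above: two HashMaps, each
// holding the other under the key "other". Anything that walks the values
// naively (hashCode(), toString(), ...) will recurse forever on this.
class CircularMaps {
    static Map[] build() {
        Map a = new HashMap();
        Map b = new HashMap();
        a.put("other", b);
        b.put("other", a);
        return new Map[] { a, b };
    }
}
```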

This means that to serialize the example structure I have to create four objects - two of each, with the "other" link disabled in two of the four. Now, no matter where I am, I can "see" the relationship, but it's not circular.

This is a pain, and there's no reason for it, as the serialization should be able to identify the objects and provide placeholders for de-serialization. But that's life with Java...

Code Rot

Thursday, March 11th, 2004

I'd like to think that I do good design. But even so, things change. The needs of the system change, and the system responds. So it's really unavoidable that you get into situations where code is old and, while it's working, it's not working well. You've got Code Rot.

Yesterday it became clear that the large system I'm working on had a fundamental limitation due to the size of the dataset it's working with. I couldn't let it remain like it was. So I knew it was time to dive into the code that was at least a year old and fix it. But it wasn't going to be a simple fix - it was going to be a massive re-write.

Amazingly, I re-wrote the entire system in less than 24 hrs. The reason it was so easy was that I knew there were a lot of simplifications that could be made to the way in which events were handled in the code. These simplifications meant about a 30% memory savings as well as lots of speed gains as things weren't done over and over again for similar data. In the end it's amazing how nice and clean it is.

It'll probably last another year! 🙂

Cross-Platform #defines

Friday, March 5th, 2004

I've been doing a lot of cross-platform work - Linux, Solaris, and Mac OS X - and I've figured out the important machine-dependent #defines used on each. Thankfully, all three platforms are using gcc, with Linux being the only one on a 2.95.x-based compiler.

Anyway, here they are:

Machine   Vendor      CPU
Mac       __APPLE__   __MACH__
Solaris   __sun__     __sparc__
Linux     __linux__   __i386__

So now that they are in the journal, I won't have to go hunting for them on the scrap of paper I used to use. That was just silly.

GKrellM and perfmeter

Thursday, March 4th, 2004

For the production machines that I use - and the development ones, for that matter - I like to run gkrellm, a GTK+-based monitoring app. It's nice and informative, and it's got a very small footprint compared to, say, top. I was looking for a version for Solaris 8, but realized that there was already something on Solaris that's almost exactly what I wanted - perfmeter.

OK, perfmeter doesn't do all the skins, and it hasn't got the extensive list of modules for monitoring email, temperature, etc. But that's not what I use it for anyway. I'm primarily interested in CPU and memory usage, and while perfmeter isn't really great at the latter, it is decent with swap and context-switch monitoring. And it's on every Solaris box. Nice.

Just goes to show - there's no reason for new tools if you've got what you need. Go with it and get on with solving the real problems of the day.

Subtle Earthquakes

Wednesday, March 3rd, 2004

It's amazing how complex systems can really bite you in the rear. I'm reading Prey as well as trying to get a few fixes out that have been impacted by a seemingly minor change in one of the data sources I use. Seemingly, I say, because the ripple effects were far wider than I had originally thought, and to fix each one has taken more time than I had originally planned. Oh sure, the changes I knew about have been put into place just fine... it's the ones that I didn't know I'd be facing that are real pains.

For example, I'm using the JEP Java Expression Parser, and while it's a nice piece of work, it's got some structures based on Hashtables as opposed to HashMaps. This means that I can't ask it to represent null values properly, as the old Hashtable can't deal with them. Since the error reporting is less than "stellar", I don't even get good error messages when I give it a null. So that brings up the problem of testing for a null in an expression - clearly, you can't. But what a pain when HashMap is right there!
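The Hashtable-vs-HashMap difference is easy to demonstrate (a sketch; the helper name is mine, nothing to do with JEP's internals): Hashtable rejects a null value outright with a NullPointerException, while HashMap stores it happily.

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

// Demonstrates the limitation above: Hashtable throws NullPointerException
// on a null value, while HashMap accepts it.
class NullValues {
    static boolean allowsNullValue(Map m) {
        try {
            m.put("x", null);  // Hashtable throws NPE here; HashMap doesn't
            return true;
        } catch (NullPointerException e) {
            return false;
        }
    }
}
```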

I've now taken the route that any null variable value is immediately bad news, and the expression can never evaluate to true - which is the point of the expression: to filter results. I'm hoping that this takes care of the problems I've been having, but it's never over till it's over, and I don't hear any fat lady singing.

Not yet, anyway.