Archive for the ‘Open Source Software’ Category

Upgraded to Java 21 LTS

Tuesday, April 8th, 2025

java-logo-thumb.png

Today seemed like a good time to add Java 21.0.6 to the mix, as I have been noticing a few performance improvements in the latest versions of the JDK that matter for my Clojure work, so why not? It's as simple as:

  $ brew install --cask temurin@21

and then to make use of it, I just need to use my trusty shell function:

  $ setjdk 21
  $ java -version
  openjdk version "21.0.6" 2025-01-21 LTS
  OpenJDK Runtime Environment Temurin-21.0.6+7 (build 21.0.6+7-LTS)
  OpenJDK 64-Bit Server VM Temurin-21.0.6+7 (build 21.0.6+7-LTS, mixed mode, sharing)
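That setjdk shell function isn't shown in this post, so everything below - the flag handling, the error message - is my guess at a minimal version, assuming macOS's stock /usr/libexec/java_home tool:

```shell
# Hypothetical sketch of a 'setjdk' helper for macOS - the real function
# isn't shown in the post. Uses /usr/libexec/java_home to locate the JDK.
setjdk() {
  if [ -x /usr/libexec/java_home ]; then
    JAVA_HOME="$(/usr/libexec/java_home -v "$1")" || return 1
    export JAVA_HOME
    export PATH="$JAVA_HOME/bin:$PATH"
  else
    echo "setjdk: /usr/libexec/java_home not found" >&2
    return 1
  fi
}

# usage: setjdk 21   (then 'java -version' should report the 21.x JDK)
```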

At this point, I can run the REPL and my Clojure app in JDK 21.0.6, and get all the little updates and bug fixes in the latest LTS JDK. 🙂

Converting Clojure Logging to Timbre

Monday, April 7th, 2025

Clojure.jpg

I have been looking into the timbre logging system for Clojure, as it's written by one of the very best Clojure devs I have known, and it has a lot going for it: async logging and loads of appenders, including ones that are log aggregators, which is really convenient. But I've been having some issues the last few days with configuration and errors.

Specifically, the configuration is not in a file, as it is with log4j and slf4j - it's in the Clojure code, and that was a bit of a wrinkle. But once I figured that out, and started to dig into the code for the details I needed, it got a lot easier.

So let's go through my conversion to timbre from log4j and slf4j, and see what it took.

Dependencies

Since I am using jetty as the backbone of the web server, I needed to give slf4j a way to send logging to timbre, and that meant just a few dependencies:

  :dependencies [...
                [com.taoensso/timbre "6.6.1"]
                [org.slf4j/slf4j-api "2.0.17"]
                [com.taoensso/timbre-slf4j "6.6.1"]
                ...]

where the last two provide that conduit free of charge by their simple inclusion. That's one less headache right off the bat. 🙂

Calling Changes

Here, the changes are pretty simple... where I would have had:

  (ns my-app
    (:require [clojure.tools.logging :refer [error info infof]]
               ...))

I simply change the namespace I'm requiring the functions from to be:

  (ns my-app
    (:require [taoensso.timbre :refer [error info infof]]
               ...))

and everything stays the same. Pretty nice.

Configuration

Here is where it can be done in a lot of ways, but I chose to have a single function to set up the logging based on the configuration of the instance - and have it all in one place. In the :main namespace of the app, I added:

  (ns my-app
    (:require [taoensso.timbre :refer [merge-config! error info infof]]
              [taoensso.timbre.appenders.community.rotor :refer [rotor-appender]]
               ...))
 
  (defn init-logs!
    "Function to initialize the Timbre logging system, which can be based on the
    config of this running instance. It will basically disable the default things
    we do not want, and add those things that we do, and will be called when we
    start a REPL, or the app itself. This will modify the Timbre *config*, and so
    we have the bang at the end of the name."
    []
    (merge-config! {:min-level :info
                    :appenders {:println {:enabled? false}
                                :rotor (merge (rotor-appender {:path "log/my-app.log"})
                                         {:async? true})}}))

This does a few things for me:

  • Turn off the console appender - we don't need the :println appender, so merge in the "off" for that guy.
  • Add rotate file appender - have this do asynchronous calls as well, and we shouldn't have to worry about the shutdown for now.
  • Point to the file(s) location - we really need to tell it where to dump the log files.

At this point, we need to add this to the start of the app, and for me that's in the handle-args function of the same namespace:

  (defn handle-args
    "Function to parse the arguments to the main entry point of this project and
    do what it's asking. By the time we return, it's all done and over."
    [args app]
    (init-logs!)            ;; initialize the logging from the config
    (let [[params [action]] (cli args
               ["-p" "--port" "Listen on this port" :default 8080 :parse-fn #(Integer. %)]
               ["-v" "--verbose" :flag true])
          quiet? (:quiet params)
          ignore? (:ignore params)
          reports? (:reports params)]
      ...))

And in order to fire init-logs! off when a REPL is started, we just need to update our project.clj file, as we're using Leiningen:

  :repl-options {:init (init-logs!)}

and Leiningen will know to look in the :main namespace for that function. But it could have been anywhere in the codebase, if properly called.

Possible Issues

In doing the conversion, I had one problem with the log function from clojure.tools.logging. The original code was:

  (let [logf (fn [s & args]
               (set-mdc!)
               (log ns level nil (apply format s args)))]
    ...)

and the log function in timbre doesn't have a matching one with the same arity, so I had to collapse it back to:

  (let [logf (fn [s & args]
               (set-mdc!)
               (log level (apply format s args)))]
    ...)

and the timbre function worked just fine.

All tests worked just fine, the logging is solid and stable, and I'm sure I'll enjoy the flexibility of the additional appenders I can add in init-logs! for testing and production when we get to that point. Very nice update! 🙂

iTerm2 is Great Code

Tuesday, April 1st, 2025

iTerm2

This morning I was working on a project, and after pushing my latest commit up to GitHub, I checked to see if iTerm2 had a new release - and it did. Excellent! I read the release notes, and it was a few fixes for a handful of things, but I recognized one as something that I'd run into in the past, and so I updated.

This is when the magic happened. 🙂

iTerm2 updated - that's no surprise, but on the restart, all the terminal sessions had been maintained. Every single one. My psql sessions - right where they were before the restart. My running processes: Leiningen, etc. - right where they were before.

This just made me smile. 🙂 It's not magic, it's just attention to detail, and developers that knew what was important to the users. Bravo.

Very Nice Simplicity in Javascript

Tuesday, March 25th, 2025

One of the things I really like about coding is when a simple solution presents itself. A lot of the time, a language and toolkit really give you only one preferred method to do something. But every now and then, a language will really lend itself to solving the same problem in several different ways, and a toolkit will embrace that philosophy.

This morning I ran into the situation with a simple Bootstrap modal dialog.

This is what I had originally:

  <a id="changeOwner" data-toggle="modal" data-target="#contact_modal">Owner:</a>
  ...
  <script>
    $('#contact_modal').on('show.bs.modal', function(e) {
      console.log('loading contacts from service...');
      loadContacts();
    });
  </script>

and for the most part, this was working just fine. But occasionally, I would notice that clicking on the link didn't show the modal window, and no amount of clicking would solve the issue. It's as if something on the page load that time left the modal in a state where it just wasn't able to be raised. No errors. Just nothing happened.

So I thought: Why not be more overt about it?, and so I changed the structure to be:

  <a id="changeOwner" onClick="pullUpOwners();">Owner:</a>
  ...
  <script>
    function pullUpOwners() {
      console.log('loading contacts from service...');
      $('#contact_modal').modal('show');
      loadContacts();
    }
  </script>

where we now just have the link directly call the function, and the function directly tell the modal to show itself. The state of the modal is then unimportant, and the actions are far more overt - and in the end, simpler.

The great news is that this new method works every time. There are no mis-loadings or possible bad modal states. This makes the click reliable, and that's really the point of this... to gain the reliability the previous method didn't have.

It's really nice to see that such a simple change yields the exact results I was looking for. 🙂

Parsing JSON with wrap-json-body

Wednesday, March 19th, 2025

Clojure.jpg

I have been making good progress on the Clojure app, and then got to handling a PUT call, and realized that the current scheme I was using for parsing the :body of the compojure request really was not what I wanted. After all, the Content-type is set to application/json in the call, so there should be a way for the ring middleware to detect this, and parse the JSON body so that the :body of the request is a Clojure map, and not something that has to be parsed.

So I went digging...

As it turns out, there is a set of ring middleware for JSON parsing, and the middleware I needed was already written: wrap-json-body - and using it really is quite simple:

  (:require [camel-snake-kebab.core :refer [->kebab-case-keyword]]
            [ring.middleware.json :refer [wrap-json-body]])
 
  (def app
    "The actual ring handler that is run -- this is the routes above
     wrapped in various middlewares."
    (let [backend (session-backend {:unauthorized-handler unauthorized-handler})]
      (-> app-routes
          (wrap-access-rules {:rules rules :on-error unauthorized-handler})
          (wrap-authorization backend)
          (wrap-authentication backend)
          wrap-user
          (wrap-json-body {:key-fn ->kebab-case-keyword})
          wrap-json-with-padding
          wrap-cors
          (wrap-defaults site-defaults*)
          wrap-logging
          wrap-gzip)))

where the key middleware is:

          (wrap-json-body {:key-fn ->kebab-case-keyword})

and it takes the ->kebab-case-keyword function to apply to each of the keys of the parsed JSON, and for me, that makes them keywords, and kebab-cased. This means I only have to have the right spelling in the client code, and I don't care a whit about the casing - very nice. 🙂
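To make that key transformation concrete, here's a rough, dependency-free sketch of what it does to each parsed key - the real implementation is camel-snake-kebab's ->kebab-case-keyword, which handles many more cases; the sketch name below is mine:

```clojure
(require '[clojure.string :as str])

;; A rough stand-in for ->kebab-case-keyword, just to show the effect:
;; split camelCase humps, map snake_case to dashes, lower-case, keywordize.
(defn ->kebab-keyword-sketch [s]
  (-> s
      (str/replace #"([a-z0-9])([A-Z])" "$1-$2")
      (str/replace "_" "-")
      str/lower-case
      keyword))

(->kebab-keyword-sketch "firstName")   ;; => :first-name
(->kebab-keyword-sketch "ZIP_CODE")    ;; => :zip-code
```

So a JSON body of {"firstName": "Bob"} arrives in the handler as {:first-name "Bob"}.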

With this, an element of a defroutes can look like:

    (POST "/login" [:as {session :session body :body}]
      (do-login session body))

and the body will be parsed JSON with keywords for keys, and the kebab case. You can't get much nicer than that. Clean, simple, code. Beautiful.

Writing buddy-auth Authorization Handlers

Friday, March 14th, 2025

Clojure.jpg

As I'm building out the core functionality of my current application, the next thing I really wanted to add were a few Authorization handlers for the buddy-auth system I started using with WebAuthN Authentication. The WebAuthN work started with a simple Is this user logged in? handler:

  (defn is-authenticated?
    "Function to check the provided request data for a ':user' key, and if it's
    there, then we can assume that the user is valid, and authenticated with the
    passkey. This :user is on the request because the wrap-user middleware put
    it there based on the :session data containing the :identity element and we
    looked up the user from that."
    [{user :user :as req}]
    (uuid? (:id user)))

In order for this to work properly, we needed to make a wrap-user middleware so that if we had a logged in user in the session data, then we would place it in the request for compojure to pass along to all the other middleware, and the routes themselves. This wasn't too hard:

  (defn wrap-user
    "This middleware is for looking at the :identity in the session data, and
    picking up the complete user from their :email and placing it on the request
    as :user so that it can be used by all the other endpoints in the system."
    [handler]
    (fn [{session :session :as req}]
      (handler (assoc req :user (get-user (:email (:identity session)))))))

and this middleware used a function, get-user to load the complete user object from the database based on the email of the user. It's not all that hard, but there are some tricks about the persistence of the Passkey Authenticator object that have to be serialized, and I've already written about that a bit.

And this works perfectly, because the WebAuthN workflow deposits the :identity data in the session, and since it's stored server-side, it's safe - and with ring session state persisted in redis, it survives restarts and is shared amongst instances behind a load balancer. But what about something a little more specific? Like, say we have an endpoint that returns the details of an order, but only if the user has been given permission to see the order?

This combines Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC) - and while some will say you only need one, that's really not the best way to build a system, because there are times when you need some of both to make the solution as simple as possible.

In any case, this is what we need to add:

  • For a user, can they actually see the order in the database? This can be a question of the schema and data model, but there will likely be a way to determine if the user was associated with the order, and if so, then they can see it, and if not, then they can't.
  • Most endpoint conventions have the identifier as the last part of the URL - the Path, as it is referred to. We will need to be able to easily extract the Path from the URL, or URI, in the request, and then use that as the identifier of the order.
  • Put these together into an authorization handler for buddy-auth.

For the first, I made a simple function to see if the user can see the order:

  (defn get-user-order
    "Function to take a user id and order id and return the user/order
    info, if any exists, for this user and this order. We then need to
    look up the user-order, and return the appropriate map - if one exists."
    [uid oid]
    (if (and (uuid? uid) (uuid? oid))
      (db/query ["select * from users_orders
                   where user_id = ? and order_id = ?" uid oid]
        :row-fn kebab-keys-deep :result-set-fn first)))

For the second, we can simply look at the :uri in the request, and split it up on the /, and then take the last one:

  (defn uri-path
    "When dealing with Buddy Authorization handlers, it's often very useful
    to be able to get the 'path' from the request's uri and return it. The
    'path' is the last segment of the url *before* the query params, e.g.:
       https://google.com/route/to/path   =>  'path'
    This is very often a uuid of an object that we need to get, as it's the
    one being requested by the caller."
    [{uri :uri :as req}]
    (if (not-empty uri)
      (last (split uri #"/"))))
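To see it in action at the REPL - the UUID below is just an illustrative value - the whole thing is self-contained with clojure.string:

```clojure
(require '[clojure.string :refer [split]])

;; same shape as uri-path in the post, shown standalone for the REPL
(defn uri-path [{uri :uri}]
  (if (not-empty uri)
    (last (split uri #"/"))))

(uri-path {:uri "/orders/123e4567-e89b-12d3-a456-426614174000"})
;; => "123e4567-e89b-12d3-a456-426614174000"
```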

For the last part, we put these together, and we have a buddy-auth authorization handler:

  (defn can-see-order?
    "Function to take a request, and pull out the :user from the wrapping
    middleware, and pick the last part of the :uri as that will be the
    :order-id from the URL. We then need to look up the user-order, and
    see if this user can see this order."
    [{user :user :as req}]
    (if-let [hit (get-user-order (:id user) (->uuid (uri-path req)))]
      (not-nil? (some #{"OPERATOR"} (:roles hit)))))

In this function, we see that we are referring to :roles on the user-order, and that's because we have built the cross-reference table in the database to look like:

  CREATE TABLE IF NOT EXISTS users_orders (
    id              uuid NOT NULL,
    version         INTEGER NOT NULL,
    as_of           TIMESTAMP WITH TIME ZONE NOT NULL,
    by_user         uuid,
    user_id         uuid NOT NULL,
    roles           jsonb NOT NULL DEFAULT '[]'::jsonb,
    title           VARCHAR,
    description     VARCHAR,
    order_id        uuid NOT NULL,
    created_at      TIMESTAMP WITH TIME ZONE NOT NULL,
    PRIMARY KEY (id, version, as_of)
  );

The key parts are the user_id and order_id - the mapping is many-to-many, so we have to have a cross-reference table to handle that association. Along with these, we have some metadata about the reference: the title of the User with regard to this order, the description of the relationship, and even the roles the User will have with regards to this order.

The convention we have set up is that if the roles array contains the string OPERATOR, then they can see the order. The Postgres JSONB field is ideal for this, as it allows for a simple array of strings, and it fits right into the data model.
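If I ever want to push that role check down into the database instead of doing it in Clojure, Postgres' jsonb containment operator could express the same test in the query itself - a sketch, assuming the users_orders table above:

```sql
SELECT *
  FROM users_orders
 WHERE user_id = ?
   AND order_id = ?
   AND roles @> '["OPERATOR"]'::jsonb;
```

The @> operator asks whether the right-hand jsonb array is contained in the roles column, which for an array of strings is exactly the membership test.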

With all this, we can then make a buddy-auth access rule that looks like:

   {:pattern #"^/orders/[-0-9a-fA-F]{36}"
    :request-method :get
    :handler {:and [is-authenticated? can-see-order?]}}

and the endpoints that match that pattern will have to pass both the handlers and we have exactly what we wanted without having to place any code in the actual routes or functions to handle the authorization. Nice. 🙂

Working with java.time in Clojure

Tuesday, March 11th, 2025

java-logo-thumb.png

I've been working on a new Clojure project, and since I last did production Clojure work, the Joda Time library has been deprecated, and the move has been to the Java 8 java.time classes. The functionality is basically the same, but the conversion isn't, and one of the issues is that the JDBC Postgres library will return Date and Timestamp objects - all based on java.util.Date.

As it turns out, the conversion isn't as easy as I might have hoped. 🙂

For the most part, it's just a simple matter of using different functions, but the capabilities are all there in the Clojure clojure.java-time library. The one key is the conversion with Postgres. There, we have the protocol set up to help with conversions:

  (:require [java-time.api :as jt])
 
  (extend-protocol IResultSetReadColumn
    PGobject
    (result-set-read-column [pgobj metadata idx]
      (let [type  (.getType pgobj)
            value (.getValue pgobj)]
        (case type
          "json" (json/parse-string-strict value true)
          "jsonb" (json/parse-string-strict value true)
          value)))
 
    java.sql.Timestamp
    (result-set-read-column [ts _ _]
      (jt/zoned-date-time (.toLocalDateTime ts) (jt/zone-id)))
 
    java.sql.Date
    (result-set-read-column [ts _ _]
      (.toLocalDate ts)))

and the key features are the last two. These are the conversions of the SQL Timestamp into java.time.ZonedDateTime and Date into java.time.LocalDate values.

As it turns out, the SQL values have LocalDateTime and LocalDate accessors, and those interpret the value in the JVM's default time zone - so converting to a zoned timestamp just means pairing the local value with that same zone. Using the system default is as good as any, and keeps things nicely consistent.
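In plain interop terms - no helper library - the Timestamp conversion amounts to the following; the date below is just an illustrative value:

```clojure
(import 'java.sql.Timestamp 'java.time.ZoneId)

;; java.sql.Timestamp -> java.time.ZonedDateTime, pairing the local
;; date-time with the system default zone - the same thing the
;; jt/zoned-date-time call in the protocol extension is doing.
(let [ts (Timestamp/valueOf "2025-03-11 09:30:00")]
  (.atZone (.toLocalDateTime ts) (ZoneId/systemDefault)))
```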

With these additions, the data coming from Postgres 16 timestamp and date columns is properly massaged into something that can be used in Clojure with the rest of the clojure.java-time library. Very nice!

UPDATE: Oh, I missed a few things, so let's get it all cleared up here now. The protocol extensions, above, are great for reading out of the Postgres database. But what about inserting values into the Postgres database? This needs a slightly different protocol to be extended:

  (defn value-to-jsonb-pgobject
    "Function to take a _complex_ clojure data element and convert it into
    JSONB for inserting into postgresql 9.4+. This is the core of the mapping
    **into** the postgres database."
    [value]
    (doto (PGobject.)
          (.setType "jsonb")
          (.setValue (json/generate-string value))))
 
  (extend-protocol ISQLValue
    clojure.lang.IPersistentMap
    (sql-value [value] (value-to-jsonb-pgobject value))
 
    clojure.lang.IPersistentVector
    (sql-value [value] (value-to-jsonb-pgobject value))
 
    clojure.lang.IPersistentList
    (sql-value [value] (value-to-jsonb-pgobject value))
 
    flatland.ordered.map.OrderedMap
    (sql-value [value] (value-to-jsonb-pgobject value))
 
    clojure.lang.LazySeq
    (sql-value [value] (value-to-jsonb-pgobject value))
 
    java.time.ZonedDateTime
    (sql-value [value] (jt/format :iso-offset-date-time value))
 
    java.time.LocalDate
    (sql-value [value] (jt/format :iso-local-date value)))

Basically, we need to tell the Clojure JDBC code how to map the objects, Java or Clojure, into the SQL values that the JDBC driver is expecting. In the case of the date and timestamp, that's not too bad, as Postgres will cast from strings to the proper values for the right formats.

But there remains a third set of key values - the parameters to PreparedStatement objects. This is key as well, but here the casting isn't done by Postgres - it's done in the JDBC driver, and that needs proper Java SQL objects. For this, we need to add:

  (extend-protocol ISQLParameter
    java.time.ZonedDateTime
    (set-parameter [value ^PreparedStatement stmt idx]
      (.setTimestamp stmt idx (jt/instant->sql-timestamp (jt/instant value))))
 
    java.time.LocalDate
    (set-parameter [value ^PreparedStatement stmt idx]
      (.setDate stmt idx (jt/sql-date value))))

Here, the Clojure java-time library handles the date easily enough, and I just need to take the ZonedDateTime into a java.time.Instant, and then the library again takes it from there.

These last two bits are very important for the full-featured use of the new Java Time objects and Postgres SQL. But it's very worth it.

Installing Redis on macOS

Saturday, March 8th, 2025

Redis Database

I have always liked redis as a wonderful example of very targeted software done very well. It's a single-threaded C app that does one thing, very well, and in the years since I started using it, it's only gotten better. As I've been building a new Clojure web app, one of the things I wanted to take advantage of was the stored session state - specifically so that when I have multiple boxes running the code, they can all share the one session state, and quickly.

I've been getting my WebAuthN authentication going, and that is using session state as the place to store the :identity of the logged in user. After I worked out the serialization of the Authenticator for WebAuthN, I then turned my attention to persisting the session state with Ring's SessionStore.

I've started using Ring's wrap-defaults middleware, and the Carmine-backed session store is easily added to the options for wrap-defaults with:

  ; pull in the wrapper and base defaults
  (:require [ring.middleware.defaults :refer [wrap-defaults site-defaults]])
 
  ; augment the site defaults with the SessionStore from Carmine
  (def my-site-defaults
    (assoc site-defaults
      :session {:flash true
                :store (carmine-store)}))
 
  ; put it in the stack of the app routes...
  (-> app-routes
      (wrap-access-rules {:rules rules :on-error unauthorized-handler})
      (wrap-authorization backend)
      (wrap-authentication backend)
      wrap-user
      wrap-json-with-padding
      (wrap-defaults my-site-defaults)
      wrap-logging
      wrap-gzip)

I've been a huge fan of carmine for many years, as it's a pure Clojure library for redis, and it's exceptionally fast.

But first, I needed to install redis locally so that I can do laptop development and not have to have a connection to a hosted service. Simply:

  $ brew install redis

and then to make sure it's started through launchd, simply:

  $ brew services start redis

just like with Postgres. It's really very simple.

At this point, restarting the web server will automatically store the session data in the locally running redis, and for production deployments, it's easy enough to use redis cloud, or redis at AWS, Google Cloud, etc. It's super simple, and it's exceptionally reliable.

The web server can now be restarted without impact to the customers as redis has their session state, and postgres has their Authenticator for WebAuthN.

Persisting Java Objs within Clojure

Thursday, March 6th, 2025

Clojure.jpg

For the last day or so I've been wrestling with a problem using WebAuthN on a Clojure web app based on compojure and ring. There were a few helpful posts that got me headed in the right direction, and using Buddy helps, as it handles a lot of the route handling, but getting the actual WebAuthN handshake going was a bit of a pain.

The problem was that after the Registration step, you end up with a Java Object, an instance of com.webauthn4j.authenticator.AuthenticatorImpl and there is no simple way to serialize it out for storage in a database, so it was time to get creative.

I did a lot of digging, and I was able to find a nice way to deserialize the object, and return a JSON object, but there was no way to reconstitute it into an AuthenticatorImpl, so that had to be scrapped.

Then I found a reference to an Apache Commons lang object that supposedly was exactly what I wanted... it would serialize to a byte[], and then deserialize from that byte[] into the object. Sounds good... but I needed to save it in a Postgres database. Fair enough... let's Base64 encode it into a string, and then decode it on the way out.

The two key functions are very simple:

  (:import org.apache.commons.lang3.SerializationUtils
           java.util.Base64)
 
  (def not-nil? (complement nil?))
 
  (defn obj->b64s
    "This is a very useful function for odd Java Objects as it is an Apache tool
    to serialize the Object into a byte[], and then convert that into a Base64
    string. This is going to be very helpful with the persistence of objects to
    the database, as for some of the WebAuthN objects, it's important to save
    them, as opposed to re-creating them each time."
    [o]
    (if (not-nil? o)
      (.encodeToString (Base64/getEncoder) (SerializationUtils/serialize o))))
 
  (defn b64s->obj
    "This is a very useful function for odd Java Objects as it is an Apache tool
    to deserialize a byte[] into the original Object that was serialized with
    obj->b64s. This is going to be very helpful with the persistence of objects
    to the database, as for some of the WebAuthN objects, it's important to save
    them, as opposed to re-creating them each time."
    [s]
    (if (not-nil? s)
      (SerializationUtils/deserialize (.decode (Base64/getDecoder) s))))
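To see the round trip in action without pulling in Apache Commons, here's an equivalent sketch using only JDK classes - the *-suffixed names are mine, not from the post, and ObjectOutputStream stands in for SerializationUtils:

```clojure
(import '(java.io ByteArrayOutputStream ObjectOutputStream
                  ByteArrayInputStream ObjectInputStream)
        'java.util.Base64)

(defn obj->b64s*
  "Serialize a java.io.Serializable object to a Base64 string - the same
  idea as obj->b64s above, but with the JDK's ObjectOutputStream."
  [o]
  (when o
    (let [bos (ByteArrayOutputStream.)]
      (with-open [oos (ObjectOutputStream. bos)]
        (.writeObject oos o))
      (.encodeToString (Base64/getEncoder) (.toByteArray bos)))))

(defn b64s->obj*
  "Decode the Base64 string and deserialize it back into the object."
  [s]
  (when s
    (with-open [ois (ObjectInputStream.
                      (ByteArrayInputStream.
                        (.decode (Base64/getDecoder) ^String s)))]
      (.readObject ois))))

;; Clojure's collections are java.io.Serializable, so they round-trip:
(b64s->obj* (obj->b64s* {:user "bob" :id 42}))
;; => {:user "bob", :id 42}
```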

These then fit into the saving and querying very simply, and it all works out just dandy. 🙂 I will admit, I was getting worried because I was about to regenerate the AuthenticatorImpl on each call, and that would have been a waste for sure.

The complete WebAuthN workflow is the point of another post, and a much longer one at that. But this really made all the difference.

Vim Text File Specific Settings

Thursday, February 20th, 2025

vim.jpg

I have been trying to set Vim settings, specific for text files, in my .vimrc for the longest of time, and I really wanted to get this figured out this morning. So here we go.

It's all about the autocmd, or au, command, and you can set it up pretty easily. What I had in the past was:

  autocmd FileType md
  \    set ai sw=2 sts=2 et
  autocmd FileType txt
  \    set ai sw=2 sts=2 et

where, in these two examples, I'm trying to set the autoindent to ON, the shiftwidth to 2, the softtabstop to 2, and expandtab to ON. And it wasn't working for files like goof.txt, and for the life of me I could not figure this out.

And then it hit me - I had a FileType of javascript that was working, so what if Vim needed to have text for the FileType? So let's try:

  autocmd FileType markdown
  \    set ai sw=2 sts=2 et
  autocmd FileType text
  \    set ai sw=2 sts=2 et

And all of a sudden, it worked like a charm - and markdown worked as well, using the same reasoning. I guess that goes to show me that Vim maps the file extension to a FileType name, rather than literally using the extension as the FileType. Smart. Appreciated.
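By the way, a quick way to check which FileType Vim actually detected for the current buffer is:

  :set filetype?

which, for a .txt buffer, should echo something like filetype=text - handy for verifying which name the autocmd needs to match.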