Archive for the ‘Clojure Coding’ Category

Upgraded to Java 21 LTS

Tuesday, April 8th, 2025


Today seemed like a good time to add Java 21.0.6 to the mix, as I have been noticing some promising performance numbers for the latest versions of the JDK with my Clojure work, so why not? It's as simple as:

  $ brew install --cask temurin@21

and then to make use of it, I just need to use my trusty shell function:

  $ setjdk 21
  $ java -version
  openjdk version "21.0.6" 2025-01-21 LTS
  OpenJDK Runtime Environment Temurin-21.0.6+7 (build 21.0.6+7-LTS)
  OpenJDK 64-Bit Server VM Temurin-21.0.6+7 (build 21.0.6+7-LTS, mixed mode, sharing)

At this point, I can run the REPL and my Clojure app on JDK 21.0.6, and get all the little updates and bug fixes in the latest LTS JDK. 🙂
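
For reference, setjdk is just a small shell function; a minimal sketch of the idea, assuming macOS's /usr/libexec/java_home utility (my real one does a bit more):

  # minimal sketch -- picks the requested JDK via macOS's java_home
  function setjdk() {
    export JAVA_HOME=$(/usr/libexec/java_home -v "$1")
    export PATH="$JAVA_HOME/bin:$PATH"
  }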

Converting Clojure Logging to Timbre

Monday, April 7th, 2025


I have been looking into the timbre logging system for Clojure, as it's written by one of the very best Clojure devs I have known, and it has a lot going for it: async logging, and loads of appenders - including ones that are log aggregators - which is really convenient. But I've been having some issues the last few days with configuration and errors.

Specifically, the configuration is not in a file, like it is with log4j and slf4j - it's in the Clojure code, and that was a bit of a wrinkle. But once I figured that out, and started to dig into the code for the details I needed, it got a lot easier.

So let's go through my conversion to timbre from log4j and slf4j, and see what it took.

Dependencies

Since I am using jetty as the backbone of the web server, I needed to give slf4j a way to send logging to timbre, and that meant just a few dependencies:

  :dependencies [...
                [com.taoensso/timbre "6.6.1"]
                [org.slf4j/slf4j-api "2.0.17"]
                [com.taoensso/timbre-slf4j "6.6.1"]
                ...]

where the last two provide that conduit free of charge by their simple inclusion. That's one less headache right off the bat. 🙂

Calling Changes

Here, the changes are pretty simple... where I would have had:

  (ns my-app
    (:require [clojure.tools.logging :refer [error info infof]]
               ...))

I simply change the namespace I'm requiring the functions from, so it becomes:

  (ns my-app
    (:require [taoensso.timbre :refer [error info infof]]
               ...))

and everything stays the same. Pretty nice.
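
For example, a call site like this (the names are hypothetical) compiles and logs the same either way:

  (infof "processed %d records for %s" cnt email)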

Configuration

The configuration can be done in a lot of ways, but I chose to have a single function set up the logging based on the configuration of the instance - and have it all in one place. In the :main namespace of the app, I added:

  (ns my-app
    (:require [taoensso.timbre :refer [merge-config! error info infof]]
              [taoensso.timbre.appenders.community.rotor :refer [rotor-appender]]
               ...))
 
  (defn init-logs!
    "Function to initialize the Timbre logging system, which can be based on the
    config of this running instance. It will basically disable the default things
    we do not want, and add those things that we do, and will be called when we
    start a REPL, or the app itself. This will modify the Timbre *config*, and so
    we have the bang at the end of the name."
    []
    (merge-config! {:min-level :info
                    :appenders {:println {:enabled? false}
                                :rotor (merge (rotor-appender {:path "log/my-app.log"})
                                         {:async? true})}}))

This does a few things for me:

  • Turn off the console appender - we don't need the :println appender, so merge in the "off" for that guy.
  • Add rotate file appender - have this do asynchronous calls as well, and we shouldn't have to worry about the shutdown for now.
  • Point to the file(s) location - we really need to tell it where to dump the log files.
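
And because init-logs! is just code, it can branch on the instance's configuration. A minimal sketch of that idea - the APP_ENV environment variable is purely an assumption for illustration:

  (defn init-logs!
    "Variant sketch: keep console logging in dev, rotor-only in production."
    []
    (let [prod? (= "production" (System/getenv "APP_ENV"))]   ;; assumed env var
      (merge-config! {:min-level (if prod? :info :debug)
                      :appenders {:println {:enabled? (not prod?)}
                                  :rotor (merge (rotor-appender {:path "log/my-app.log"})
                                                {:async? true})}})))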

At this point, we need to add this to the start of the app, and for me that's in the handle-args function of the same namespace:

  (defn handle-args
    "Function to parse the arguments to the main entry point of this project and
    do what it's asking. By the time we return, it's all done and over."
    [args app]
    (init-logs!)            ;; initialize the logging from the config
    (let [[params [action]] (cli args
               ["-p" "--port" "Listen on this port" :default 8080 :parse-fn #(Integer. %)]
               ["-v" "--verbose" :flag true])
          quiet? (:quiet params)
          ignore? (:ignore params)
          reports? (:reports params)]
      ...))

And in order to fire init-logs! off when a REPL is started, we just need to update our project.clj file, as we're using Leiningen:

  :repl-options {:init (init-logs!)}

and Leiningen will know to look in the :main namespace for that function. But it could have been anywhere in the codebase, if properly called.

Possible Issues

In doing the conversion, I had one problem with the log function from clojure.tools.logging. The original code was:

  (let [logf (fn [s & args]
               (set-mdc!)
               (log ns level nil (apply format s args)))]
    ...)

and the log function in timbre doesn't have a matching one with the same arity, so I had to collapse it back to:

  (let [logf (fn [s & args]
               (set-mdc!)
               (log level (apply format s args)))]
    ...)

and the timbre function worked just fine.

All tests worked just fine, the logging is solid and stable, and I'm sure I'll enjoy the flexibility that I can get with the additional appenders that I can add in init-logs! for testing and production when we get to that point. Very nice update! 🙂

Nice File Upload in Clojure and Javascript

Friday, March 28th, 2025


Today I wanted to get a simplified File Upload scheme working on an app, where the backend was a Clojure app, and the front-end was simple HTML/jQuery - nothing fancy. I knew it had to be multi-part MIME REST calls, but I needed to work out the details about extracting the file, as well as the JSON metadata from the call.

Let's start with how it's shipped up to the server. We have a Bootstrap-based front-end, and we wanted to allow for the user to enter the Description of the file, and then browse their local filesystem for the file to upload. It's about as fast and efficient as I can imagine, so we had the following:

  1. <div class="row" align="center" style="margin: 5px 0px 0px 0px;">
  2.   <div class="col-sm-1" style="margin-top:5px;">
  3.   </div>
  4.   <div class="col-sm-10" style="margin-top:5px;">
  5.     <div id="addDocsDiv" style="margin: 0px 0px 0px 0px;">
  6.       <form id="post_site_doc">
  7.         <div class="input-group">
  8.           <label class="control-label col-sm-2"
  9.                  style="margin-top: 7px; padding-right: 0px;"
  10.                  align="right">Description:</label>
  11.           <div class="col-sm-10">
  12.             <input id="doc_desc" class="form-control" type="text"
  13.                    aria-label="Description of File"/>
  14.           </div>
  15.           <span class="input-group-btn">
  16.             <label for="site_file" class="btn btn-default">Browse</label>
  17.             <input id="site_file" type="file" style="visibility:hidden;"
  18.                    onChange="sendSiteDoc(this);"/>
  19.           </span>
  20.         </div>
  21.       </form>
  22.     </div>
  23.   </div>
  24.   <div class="col-sm-1" style="margin-top:5px;">
  25.   </div>
  26. </div>

this gets rendered as:

[Screenshot: the rendered file-upload form]

which is about as clean as I can make it.

The point is that there are two key components:

  • The site_file file input tag with the onChange action of calling sendSiteDoc(this) - on line 17.
  • The doc_desc text input tag - on line 12.

these are where we will be pulling the data to send to the service.

Then the sendSiteDoc() function looks like:

  1.  /*
  2.   * Function to take the 'this' from an 'input' file uploader, that was
  3.   * triggered off the 'onChange' event, and will use the description field
  4.   * on the page to send the whole thing up to the server, and then reload
  5.   * the site from the server to make it all look nice.
  6.   *
  7.   * We are using fetch() as opposed to jQuery as it's just easier in this
  8.   * case.
  9.   */
  10. function sendSiteDoc(inp) {
  11.   const email = sessionStorage.getItem('login');
  12.   if (email) {
  13.     const sid = $("#siteId").val();
  14.     if (!isUuid(sid)) {
  15.       let cont = '<div class="alert alert-danger" role="alert">';
  16.       cont += '<strong>Sorry!</strong>';
  17.       cont += " You have to be viewing a site to upload a document!</div>";
  18.       $("#status").html(cont).show().fadeOut(5000);
  19.       return null;
  20.     }
  21.     // pull the data from the form and make a FormData
  22.     const file = inp.files[0];
  23.     const description = $("#doc_desc").val();
  24.     $("#doc_desc").val('');
  25.     // ...we want to have the metadata as a second part of the multi-part
  26.     let meta = {};
  27.     if (nullIfEmpty(description)) {
  28.       meta.description = description;
  29.     }
  30.     // now let's package this into a FormData() - two parts, please
  31.     let fdata = new FormData();
  32.     fdata.append("file", file);
  33.     fdata.append("meta", JSON.stringify(meta));
  34.     fetch('/sites/' + sid + '/doc', { method: 'POST', body: fdata })
  35.       .then(resp => {
  36.         if (!resp.ok) {
  37.           let cont = '<div class="alert alert-danger" role="alert">';
  38.           cont += '<strong>Sorry!</strong>';
  39.           cont += " We could not upload this document to the system!</div>";
  40.           $("#status").html(cont).show().fadeOut(10000);
  41.           return null;
  42.         }
  43.         return resp.json();
  44.       })
  45.       .then(data => {
  46.         if (data) {
  47.           // ...and refresh the data on this page, as we're on the page...
  48.           loadSite(sid);
  49.           // ...and drop a nice status message
  50.           console.log('success: ', data);
  51.           let cont = '<div class="alert alert-success" role="alert">';
  52.           cont += '<strong>Success!</strong>';
  53.           cont += ' The document was uploaded to the service.</div>';
  54.           $("#status").html(cont).show().fadeOut(5000);
  55.         }
  56.       })
  57.       .catch(err => {
  58.         let cont = '<div class="alert alert-danger" role="alert">';
  59.         cont += '<strong>Sorry!</strong>';
  60.         cont += " We had a problem while uploading this document to the system!</div>";
  61.         $("#status").html(cont).show().fadeOut(10000);
  62.       });
  63.   } else {
  64.     let cont = '<div class="alert alert-danger" role="alert">';
  65.     cont += '<strong>Error!</strong>';
  66.     cont += " You must be logged in to make changes!</div>";
  67.     $("#status").html(cont).show().fadeOut(5000);
  68.   }
  69. }

Where the key section is really lines 21-34:

  1. // pull the data from the form and make a FormData
  2. const file = inp.files[0];
  3. const description = $("#doc_desc").val();
  4. $("#doc_desc").val('');
  5. // ...we want to have the metadata as a second part of the multi-part
  6. let meta = {};
  7. if (nullIfEmpty(description)) {
  8.   meta.description = description;
  9. }
  10. // now let's package this into a FormData() - two parts, please
  11. let fdata = new FormData();
  12. fdata.append("file", file);
  13. fdata.append("meta", JSON.stringify(meta));
  14. fetch('/sites/' + sid + '/doc', { method: 'POST', body: fdata })

and we take the first file from the input selection, and the description from that input field, and then create the meta Object to hold the description for sending to the server. Finally, we create a FormData and then append the two parts to it, making sure to stringify() the JSON to make it shippable to the server.

The rest of the code is about verifying that we are logged into the service - the server will check that anyway - and handling the error messages and such, but the key lines are just the few above.

So what does the Clojure back-end look like? Well... we are going to use the ring wrap-defaults as it includes so many of the useful middleware for ring, and the key inclusion here is the params handling:

  (def site-defaults*
    "Ring defaults allows us to package a lot of the standard middleware out
    there into one wrap-defaults step, and then we can configure it all here
    by simply augmenting the site-defaults 'starter pack'. This allows us to
    minimize the dependencies and control a lot of things very simply with this
    one augmentation map."
    (assoc site-defaults
      :cookies true
      :params {:keywordize true      ;; this converts the string keys to keywords
               :multipart true       ;; this handles the multi-part MIME
               :nested true
               :urlencoded true}
      :proxy true
      :security false
      :session {:flash true
                :store (carmine-store :bedrock)}))
 
  (def app
    "The actual ring handler that is run -- this is the routes above
     wrapped in various middlewares."
    (let [backend (session-backend {:unauthorized-handler unauthorized-handler})]
      (-> app-routes
          (wrap-access-rules {:rules rules :on-error unauthorized-handler})
          (wrap-authorization backend)
          (wrap-authentication backend)
          wrap-user
          (wrap-json-body {:key-fn ->kebab-case-keyword})
          wrap-json-with-padding
          wrap-cors
          (wrap-defaults site-defaults*)   ;; this does the main work
          wrap-logging
          wrap-gzip)))

We then needed to have a route to handle the file upload, and we have a subset of the routes here:

  (defn sites-routes
    "These are the routes for the sites, and should all be placed under the
    '/sites' context off the main defroutes for the server. It's a simple
    way to isolate these endpoints here, and then not have to worry about them
    interfering with anything later."
    []
    (routes
      (GET "/" [:as {user :user}]
        (pull-sites user))
      (POST ["/:id/doc" :id uuid-pattern] [id :as {user :user params :params}]
        (post-site-doc user (->uuid id) (:file params)
                       (parse-string (:meta params) true)))))

where the POST is the key: the multi-part FormData from the Javascript comes into the call in the params map, where the keys are the append()-ed names of the parts in the FormData.

As a final step, we need to parse the :meta value from JSON, and then we are good to go. The :file and :meta arguments to post-site-doc will then look like:

    {
      :filename "words.txt"
      :content-type "text/plain"
      :tempfile #object[java.io.File ...]
      :size 1021
    }

and:

    {
      :description "this is a file description"
    }

At this point, we can treat the :tempfile like any file because it is a simple temp file that will be retained for about an hour, and then dropped. There is nothing magic about it, and the only real question is having enough temp filesystem space to handle any upload, and that's a very simple thing to arrange.
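
So inside post-site-doc, the upload can be handled with plain clojure.java.io calls. A minimal sketch - the docs/ destination and the response shape are assumptions here, not the real function:

  (:require [clojure.java.io :as io])

  (defn post-site-doc
    "Sketch: copy the multipart tempfile somewhere durable before it's reaped."
    [user sid file meta]
    (let [dest (io/file "docs" (str sid "-" (:filename file)))]  ;; assumed destination
      (io/copy (:tempfile file) dest)
      {:status 200
       :body {:filename (:filename file) :size (:size file)}}))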

With this, we can handle all kinds of file uploads, and the MIME type will be sent from the client so that we can retain that, if we want (and we do), and then be able to serve this file back up to the caller at any time. Even in a streaming format, if that's what they want.

It was really nice to get this all nailed down so easily. 🙂

Parsing JSON with wrap-json-body

Wednesday, March 19th, 2025


I have been making good progress on the Clojure app, and then got to handling a PUT call, and realized that the current scheme I was using for parsing the :body of the compojure request really was not what I wanted. After all, the Content-Type is set to application/json in the call, so there should be a way for the ring middleware to detect this, and parse the JSON body so that the :body of the request is a Clojure map, and not something that has to be parsed in every handler.

So I went digging...

As it turns out, there is a set of ring middleware for JSON parsing, and the middleware I needed was already written: wrap-json-body. Using it really is quite simple:

  (:require [camel-snake-kebab.core :refer [->kebab-case-keyword]]
            [ring.middleware.json :refer [wrap-json-body]])
 
  (def app
    "The actual ring handler that is run -- this is the routes above
     wrapped in various middlewares."
    (let [backend (session-backend {:unauthorized-handler unauthorized-handler})]
      (-> app-routes
          (wrap-access-rules {:rules rules :on-error unauthorized-handler})
          (wrap-authorization backend)
          (wrap-authentication backend)
          wrap-user
          (wrap-json-body {:key-fn ->kebab-case-keyword})
          wrap-json-with-padding
          wrap-cors
          (wrap-defaults site-defaults*)
          wrap-logging
          wrap-gzip)))

where the key middleware is:

          (wrap-json-body {:key-fn ->kebab-case-keyword})

and it takes the ->kebab-case-keyword function to apply to each of the keys of the parsed JSON, and for me, that makes them keywords, and kebab-cased. This means I only have to have the right spelling in the client code, and I don't care a whit about the casing - very nice. 🙂

With this, an element of a defroutes can look like:

    (POST "/login" [:as {session :session body :body}]
      (do-login session body))

and the body will be parsed JSON with keyword keys, in kebab case. You can't get much nicer than that. Clean, simple code. Beautiful.
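
To make the transformation concrete, with a hypothetical login payload:

  ;; client sends:           {"userName": "pat", "passKey": "..."}
  ;; do-login receives body: {:user-name "pat", :pass-key "..."}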

Writing buddy-auth Authorization Handlers

Friday, March 14th, 2025


As I'm building out the core functionality of my current application, the next thing I really wanted to add was a few Authorization handlers for the buddy-auth system I started using with WebAuthN Authentication. The WebAuthN work started with a simple Is this user logged in? handler:

  (defn is-authenticated?
    "Function to check the provided request data for a ':user' key, and if it's
    there, then we can assume that the user is valid, and authenticated with the
    passkey. This :user is on the request because the wrap-user middleware put
    it there based on the :session data containing the :identity element and we
    looked up the user from that."
    [{user :user :as req}]
    (uuid? (:id user)))

In order for this to work properly, we needed to make a wrap-user middleware so that if we had a logged in user in the session data, then we would place it in the request for compojure to pass along to all the other middleware, and the routes themselves. This wasn't too hard:

  (defn wrap-user
    "This middleware is for looking at the :identity in the session data, and
    picking up the complete user from their :email and placing it on the request
    as :user so that it can be used by all the other endpoints in the system."
    [handler]
    (fn [{session :session :as req}]
      (handler (assoc req :user (get-user (:email (:identity session)))))))

and this middleware used a function, get-user, to load the complete user object from the database based on the email of the user. It's not all that hard, but there are some tricks about the persistence of the Passkey Authenticator object that has to be serialized, and I've already written about that a bit.

And this works perfectly, because the WebAuthN workflow deposits the :identity data in the session, and since it's stored server-side, it's safe - and with ring session state persisted in redis, it survives restarts and is shared amongst instances behind a load balancer. But what about something a little more specific? Like, say we have an endpoint that returns the details of an order, but only if the user has been given permission to see the order?

This combines Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC) - and while some will say you only need one, that's really not the best way to build a system, because there are times when you need some of both to make the solution as simple as possible.

In any case, this is what we need to add:

  • For a user, can they actually see the order in the database? This can be a question of the schema and data model, but there will likely be a way to determine if the user was associated with the order, and if so, then they can see it, and if not, then they can't.
  • Most endpoint conventions have the identifier as the last part of the URL - the Path, as it is referred to. We will need to be able to easily extract the Path from the URL, or URI, in the request, and then use that as the identifier of the order.
  • Put these into an authorization handler for buddy-auth.

For the first, I made a simple function to see if the user can see the order:

  (defn get-user-order
    "Function to take a user id and order id and return the user/order
    info, if any exists, for this user and this order. We then need to
    look up the user-order, and return the appropriate map - if one exists."
    [uid oid]
    (if (and (uuid? uid) (uuid? oid))
      (db/query ["select * from users_orders
                   where user_id = ? and order_id = ?" uid oid]
        :row-fn kebab-keys-deep :result-set-fn first)))

For the second, we can simply look at the :uri in the request, and split it up on the /, and then take the last one:

  (defn uri-path
    "When dealing with Buddy Authentication handlers, it's often very useful
    to be able to get the 'path' from the request's uri and return it. The
    'path' is defined to be:
       https://google.com/route/to/path
    and is the last part of the url *before* the query params. This is very
    often a uuid of an object that we need to get, as it's the one being
    requested by the caller."
    [{uri :uri :as req}]
    (if (not-empty uri)
      (last (split uri #"/"))))
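
For instance, with a made-up request map:

  (uri-path {:uri "/orders/8e4fab1e-0aa2-4b3c-9b6f-5d3a9c1e2f4d"})
  ;; => "8e4fab1e-0aa2-4b3c-9b6f-5d3a9c1e2f4d"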

For the last part, we put these together, and we have a buddy-auth authorization handler:

  (defn can-see-order?
    "Function to take a request, and pull out the :user from the wrapping
    middleware, and pick the last part of the :uri as that will be the
    :order-id from the URL. We then need to look up the user-order, and
    see if this user can see this order."
    [{user :user :as req}]
    (if-let [hit (get-user-order (:id user) (->uuid (uri-path req)))]
      (not-nil? (some #{"OPERATOR"} (:roles hit)))))

in this function we see that we are referring to :roles on the user-order, and that's because we have built up the cross-reference table in the database to look like:

  CREATE TABLE IF NOT EXISTS users_orders (
    id              uuid NOT NULL,
    version         INTEGER NOT NULL,
    as_of           TIMESTAMP WITH TIME ZONE NOT NULL,
    by_user         uuid,
    user_id         uuid NOT NULL,
    roles           jsonb NOT NULL DEFAULT '[]'::jsonb,
    title           VARCHAR,
    description     VARCHAR,
    order_id        uuid NOT NULL,
    created_at      TIMESTAMP WITH TIME ZONE NOT NULL,
    PRIMARY KEY (id, version, as_of)
  );

The key parts are the user_id and order_id - the mapping is many-to-many, so we have to have a cross-reference table to handle that association. Along with these, we have some metadata about the reference: the title of the User with regard to this order, the description of the relationship, and even the roles the User will have with regards to this order.

The convention we have set up is that if the roles contains the string OPERATOR, then they can see the order. The Postgres JSONB field is ideal for this, as it allows for a simple array of strings, and it fits right into the data model.

With all this, we can then make a buddy-auth access rule that looks like:

   {:pattern #"^/orders/[-0-9a-fA-F]{36}"
    :request-method :get
    :handler {:and [is-authenticated? can-see-order?]}}

and the endpoints that match that pattern will have to pass both handlers - and we have exactly what we wanted, without having to place any code in the actual routes or functions to handle the authorization. Nice. 🙂
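
For completeness, that rule slots into the rules vector handed to wrap-access-rules in the app definition; a sketch:

  (def rules
    [{:pattern #"^/orders/[-0-9a-fA-F]{36}"
      :request-method :get
      :handler {:and [is-authenticated? can-see-order?]}}])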

Working with java.time in Clojure

Tuesday, March 11th, 2025


I've been working on a new Clojure project, and since I last did production Clojure work, the Joda Time library has been deprecated, and the move has been to the Java 8 java.time classes. The functionality is basically the same, but the conversion isn't, and one of the issues is that the JDBC Postgres library will return Date and Timestamp objects - all based on java.util.Date.

As it turns out, the conversion isn't as easy as I might have hoped. 🙂

For the most part, it's just a simple matter of using different functions, but the capabilities are all there in the Clojure clojure.java-time library. The one key issue is the conversion with Postgres. There, we have the protocol set up to help with conversions:

  (:require [cheshire.core :as json]     ;; assumed alias for the json/... calls
            [clojure.java.jdbc :refer [IResultSetReadColumn]]
            [java-time.api :as jt])
  (:import org.postgresql.util.PGobject)
 
  (extend-protocol IResultSetReadColumn
    PGobject
    (result-set-read-column [pgobj metadata idx]
      (let [type  (.getType pgobj)
            value (.getValue pgobj)]
        (case type
          "json" (json/parse-string-strict value true)
          "jsonb" (json/parse-string-strict value true)
          value)))
 
    java.sql.Timestamp
    (result-set-read-column [ts _ _]
      (jt/zoned-date-time (.toLocalDateTime ts) (jt/zone-id)))
 
    java.sql.Date
    (result-set-read-column [ts _ _]
      (.toLocalDate ts)))

and the key features are the last two. These are the conversions of the SQL Timestamp into java.time.ZonedDateTime and Date into java.time.LocalDate values.

As it turns out, the SQL values have local date and date/time accessors, so converting to a zoned timestamp just means picking a convenient zone. Since .toLocalDateTime renders the timestamp in the JVM's default zone, pairing it with (jt/zone-id) - the system default - is the right match, and keeps things nicely consistent.

With these additions, the data coming from Postgres 16 timestamp and date columns is properly massaged into something that can be used in Clojure with the rest of the clojure.java-time library. Very nice!

UPDATE: Oh, I missed a few things, so let's get it all cleared up here now. The protocol extensions, above, are great for reading out of the Postgres database. But what about inserting values into the Postgres database? This needs a slightly different protocol to be extended:

  (defn value-to-jsonb-pgobject
    "Function to take a _complex_ clojure data element and convert it into
    JSONB for inserting into postgresql 9.4+. This is the core of the mapping
    **into** the postgres database."
    [value]
    (doto (PGobject.)
          (.setType "jsonb")
          (.setValue (json/generate-string value))))
 
  (extend-protocol ISQLValue
    clojure.lang.IPersistentMap
    (sql-value [value] (value-to-jsonb-pgobject value))
 
    clojure.lang.IPersistentVector
    (sql-value [value] (value-to-jsonb-pgobject value))
 
    clojure.lang.IPersistentList
    (sql-value [value] (value-to-jsonb-pgobject value))
 
    flatland.ordered.map.OrderedMap
    (sql-value [value] (value-to-jsonb-pgobject value))
 
    clojure.lang.LazySeq
    (sql-value [value] (value-to-jsonb-pgobject value))
 
    java.time.ZonedDateTime
    (sql-value [value] (jt/format :iso-offset-date-time value))
 
    java.time.LocalDate
    (sql-value [value] (jt/format :iso-local-date value)))

basically, we need to tell the Clojure JDBC code how to map the objects, Java or Clojure, into the SQL values that the JDBC driver is expecting. In the case of the date and timestamp, that's not too bad as Postgres will cast from strings to the proper values for the right formats.
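
With ISQLValue extended, a plain Clojure vector can be handed straight to an insert and land in a jsonb column. A sketch - db/insert! and the id bindings are assumptions here:

  ;; `db/insert!`, `uid`, and `oid` are assumed for this sketch:
  (db/insert! :users_orders {:user_id uid
                             :order_id oid
                             :roles ["OPERATOR"]})   ;; the vector becomes jsonb via sql-value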

But there remains a third set of key values - the parameters to PreparedStatement objects. This is key as well: here the casting isn't done by Postgres, it's done in the JDBC driver, and that needs proper Java SQL objects. For this, we need to add:

  (extend-protocol ISQLParameter
    java.time.ZonedDateTime
    (set-parameter [value ^PreparedStatement stmt idx]
      (.setTimestamp stmt idx (jt/instant->sql-timestamp (jt/instant value))))
 
    java.time.LocalDate
    (set-parameter [value ^PreparedStatement stmt idx]
      (.setDate stmt idx (jt/sql-date value))))

Here, the Clojure java-time library handles the date easily enough, and I just need to take the ZonedDateTime into a java.time.Instant, and then the library again takes it from there.
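
With ISQLParameter in place, java.time values can be passed straight in as query parameters; a sketch using the same assumed db wrapper:

  (db/query ["select * from orders where created_at > ?"
             (jt/minus (jt/zoned-date-time) (jt/days 7))])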

These last two bits are very important for the full-featured use of the new Java Time objects and Postgres SQL - a little extra setup, but it's very worth it.

Persisting Java Objs within Clojure

Thursday, March 6th, 2025


For the last day or so I've been wrestling with a problem using WebAuthN on a Clojure web app based on compojure and ring. There were a few helpful posts that got me headed in the right direction, and using Buddy helps, as it handles a lot of the route handling, but getting the actual WebAuthN handshake going was a bit of a pain.

The problem was that after the Registration step, you end up with a Java Object, an instance of com.webauthn4j.authenticator.AuthenticatorImpl and there is no simple way to serialize it out for storage in a database, so it was time to get creative.

I did a lot of digging, and I was able to find a nice way to serialize the object out to JSON, but there was no way to reconstitute that JSON back into an AuthenticatorImpl, so that had to be scrapped.

Then I found a reference to an Apache Commons lang object that supposedly was exactly what I wanted... it would serialize to a byte[], and then deserialize from that byte[] into the object. Sounds good... but I needed to save it in a Postgres database. Fair enough... let's Base64 encode it into a string, and then decode it on the way out.

The two key functions are very simple:

  (:import org.apache.commons.lang3.SerializationUtils
           java.util.Base64)
 
  (def not-nil? (complement nil?))
 
  (defn obj->b64s
    "This is a very useful function for odd Java Objects as it is an Apache tool
    to serialize the Object into a byte[], and then convert that into a Base64
    string. This is going to be very helpful with the persistence of objects to
    the database, as for some of the WebAuthN objects, it's important to save
    them, as opposed to re-creating them each time."
    [o]
    (if (not-nil? o)
      (.encodeToString (Base64/getEncoder) (SerializationUtils/serialize o))))
 
  (defn b64s->obj
    "This is a very useful function for odd Java Objects as it is an Apache tool
    to deserialize a byte[] into the original Object that was serialized with
    obj->b64s. This is going to be very helpful with the persistence of objects
    to the database, as for some of the WebAuthN objects, it's important to save
    them, as opposed to re-creating them each time."
    [s]
    (if (not-nil? s)
      (SerializationUtils/deserialize (.decode (Base64/getDecoder) s))))

These then fit into the saving and querying very simply, and it all works out just dandy. 🙂 I will admit, I was getting worried because I was about to regenerate the AuthenticatorImpl on each call, and that would have been a waste for sure.
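
Usage is a straightforward round trip for any Serializable object - authenticator here being a stand-in for the real AuthenticatorImpl instance:

  (let [s (obj->b64s authenticator)]   ;; store `s` in a text/varchar column
    (b64s->obj s))                     ;; => the reconstituted AuthenticatorImpl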

The complete WebAuthN workflow is the point of another post, and a much longer one at that. But this really made all the difference.

Upgraded to Java 17.0.14 and 11.0.26

Thursday, February 20th, 2025


I have been looking at starting some projects in Clojure for work, and I thought it would be good for me to get the latest JDK 17 from Homebrew and Temurin. As it turns out, the latest for JDK 17 is now JDK 17.0.14, and since I had a slightly older version, and the Homebrew name changed, I had to:

  $ brew tap homebrew/cask

and then to actually update it:

  $ brew install --cask temurin@17

When I checked:

  $ java -version
  openjdk version "17.0.14" 2025-01-21
  OpenJDK Runtime Environment Temurin-17.0.14+7 (build 17.0.14+7)
  OpenJDK 64-Bit Server VM Temurin-17.0.14+7 (build 17.0.14+7, mixed mode, sharing)

which is exactly what I was hoping for.

As an interesting note, the re-tapping of the cask updated the name of temurin11 to temurin@11, and I updated JDK 11 as well - why not? It's at 11.0.26, and I might use it... you never know.

Now I'm up-to-date with both versions, and I can easily switch from one to the other with the shell function I wrote. Excellent! 🙂

Advent of Code 2022 is On!

Thursday, December 1st, 2022


This morning I did the first day of the 2022 Advent of Code. What fun it is to get back into Clojure - for a month. If I thought it the least bit reasonable, I'd be doing back-end Clojurescript, as it compiles to Javascript in the same way that Typescript does, so it would run on the same stack with the same speed, etc. But it's just too big a leap for most folks, and it's not worth the education cycles.

But still... the simplicity of the language, and its ability to run in highly multi-threaded environments, is a huge win, and so it will remain one of my very favorite languages.

Temurin 11 is on Homebrew

Tuesday, May 24th, 2022


This morning I decided to see if I could get the AdoptOpenJDK 11, now called Temurin 11, going on my M1Max MacBook Pro. In the past, they have had the AdoptOpenJDK 11 on Homebrew, but it was Intel, and I didn't want to bring in Rosetta 2 for any Clojure work, so I was willing to wait. I read on Twitter from the Eclipse Foundation that they had placed Java 11 and 8 for Apple Silicon on Homebrew, so why not?

It turns out, it's not bad at all:

  $ brew tap homebrew/cask-versions
  $ brew install --cask temurin11

and it's ready to go. Then I can use my setjdk function to switch between JDK 11 and 17 on my laptop, which is nice for some of the issues I've had to deal with in the past. Don't know when I'll need this again.

Sadly, the architecture for JDK 8 is still Intel:

  $ brew info temurin8
  temurin8: 8,332:09
  https://adoptium.net/
  Not installed
  From: https://github.com/Homebrew/homebrew-cask-versions/blob/HEAD/Casks/temurin8.rb
  ==> Name
  Eclipse Temurin 8
  ==> Description
  JDK from the Eclipse Foundation (Adoptium)
  ==> Artifacts
  OpenJDK8U-jdk_x64_mac_hotspot_8u332b09.pkg (Pkg)

that last line is the kicker: x64... so it goes. Still... I have JDK 11 for Apple Silicon now, that's good. 🙂