Archive for the ‘Open Source Software’ Category

Magic with Clojure Macros

Monday, May 12th, 2025

This morning I wanted to tackle a little problem I ran into when I moved from clojure.tools.logging to timbre, and while it wasn't a huge deal, it was something I wanted to fix, and it makes for a much better logging experience. Let's take a look at the problem.

After moving to timbre, each message logs the namespace and line of the log macro invocation, like:

  2025-05-12T14:14:38.101Z peabody.local INFO [bedrock.logging:55] - Finished one-contact
    [email: bob.beaty@gmail.com] in 1ms.

and the problem is that the execution being timed isn't happening at line 55 of bedrock.logging - that's just where the log macro is invoked from within the execution logging hook. Timbre is simply looking at the location of the direct call to its macro, and then using that namespace and line for the contents of the log.

So... how do we fix it?

Well... first, we need to look at the execution logging macro that's in our code, and see what we have:

  (defmacro log-execution-time!
    "A macro for adding execution time logging to a named
    function. Simply call at the top level with the name of the function
    you want to wrap. As a second argument you may provide an options
    map with possible values:
 
      {
       :level  ;; defaults to :info
       :msg    ;; some string that is printed with the log messages
       :msg-fn ;; a function that will be called with the return value
               ;; and the arguments, and should return a message for
               ;; inclusion in the log
      }"
    ([var-name] `(log-execution-time! ~var-name {}))
    ([var-name opts]
       `(add-hook (var ~var-name)
                  ::execution-time
                  (execution-time-logging-hook
                   (assoc ~opts
                     :func-name '~var-name
                     ;; pass in the namespace so the log messages
                     ;; can have the appropriate namespace instead
                     ;; of bedrock.logging
                     :ns ~*ns*)))))

so we can see that we are taking the name of the calling namespace, *ns*, and passing it into the execution-time-logging-hook function, but we aren't passing in the line number. Thankfully, it's not too far away:

  (defmacro log-execution-time!
    "A macro for adding execution time logging to a named
    function. Simply call at the top level with the name of the function
    you want to wrap. As a second argument you may provide an options
    map with possible values:
 
      {
       :level  ;; defaults to :info
       :msg    ;; some string that is printed with the log messages
       :msg-fn ;; a function that will be called with the return value
               ;; and the arguments, and should return a message for
               ;; inclusion in the log
      }"
    ([var-name] `(log-execution-time! ~var-name {}))
    ([var-name opts]
       (let [l (:line (meta &form))]
         `(add-hook (var ~var-name)
                    ::execution-time
                    (execution-time-logging-hook
                     (assoc ~opts
                       :func-name '~var-name
                       ;; pass in the namespace so the log messages
                       ;; can have the appropriate namespace instead
                       ;; of bedrock.logging
                       :ns ~*ns*
                       :line ~l))))))

Every Clojure macro has &form bound to the form being expanded, and its metadata carries keys like :line and :column. We just need to extract the :line before we start the expansion code for the macro, and then pass it into the hook function along with the namespace.
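
To see this in isolation, here's a tiny sketch (the macro name is hypothetical):

  (defmacro where-am-i
    "Illustration only: the metadata on &form carries the call site's
    :line and :column, which we capture at expansion time."
    []
    (let [{:keys [line column]} (meta &form)]
      `[~line ~column]))
 
  ;; if called from line 12, column 3 of a source file:
  ;; (where-am-i)  =>  [12 3]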

So now we're able to know the real location of the execution call log message by namespace and line number. Halfway there.

The next step is to change the timbre call in the hook function to use these values as opposed to the actual values of the namespace and line number of the hook function. Thankfully, timbre supports that, if we are a little careful.

The original code looks like:

  (defn execution-time-logging-hook
    "Given a config map, returns a hook function that logs execution time."
    [{:keys [level func-name msg msg-fn ns line] :or {level :info}}]
    (let [labeler (fn [msg]
                    (str func-name (if msg (str " [" msg "]"))))
          logf (fn [s & args]
                 (set-mdc!)
                 (log level (apply format s args)))]

where timbre's log macro is being used in a very simple way, and this is where the namespace and line number are being picked up.

If we go one level deeper into the timbre call stack, we can use the more universal log!, and that takes a few more parameters:

  (defn execution-time-logging-hook
    "Given a config map, returns a hook function that logs execution time."
    [{:keys [level func-name msg msg-fn ns line] :or {level :info}}]
    (let [labeler (fn [msg]
                    (str func-name (if msg (str " [" msg "]"))))
          logf (fn [& args]
                 (set-mdc!)
                 (log! level :f args {:loc {:ns ns :line line}}))]

So... we have left all the args to logf as a single sequence, as that's how timbre's log! wants them, and the :f argument says to treat that sequence as a format call - which is ideal for what we need. Finally, the :loc option is the location of the call, and that's where we put the extracted namespace and line number.
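
For illustration, the same call shape standalone looks like this (the namespace and line values here are made up):

  (log! :info :f ["Finished %s in %dms" "one-contact" 1]
        {:loc {:ns "my-app.core" :line 42}})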

With this, we now have execution logging messages that report the namespace and line number where they are actually called, and not the one spot in the hook where the log call lives. This makes the logs far more useful. 🙂

Moved to Postgres 17

Friday, May 9th, 2025

This morning I noticed that not only was Postgres 17 out, they had already released 17.5. I try to keep up with the latest major version, and I seem to have been slipping recently, so I corrected that this morning. 🙂

So based on the steps to move to 16, start by saving everything in all the databases:

  $ pg_dumpall > dump.sql
  $ brew services stop postgresql@16

Now we can wipe out the old install and its data:

  $ brew uninstall postgresql@16
  $ rm -rf /opt/homebrew/var/postgresql@16

Now we install the new version, start it, and load back up the data:

  $ brew install postgresql@17
  $ brew services start postgresql@17
  $ brew link postgresql@17
  $ psql -d postgres -f dump.sql
  $ psql -l

At this point, it's all loaded up and you can ditch the dump.sql file, as it's no longer needed, and the new version is active:

  $ psql --version
  psql (PostgreSQL) 17.5 (Homebrew)

Excellent. 🙂

As a minor point, I fired up my Clojure code, and it hit the database perfectly. The dump and load worked, the JDBC calls worked, and everything was great.
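
A sanity check of that sort, sketched with clojure.java.jdbc (the connection map is hypothetical):

  (require '[clojure.java.jdbc :as jdbc])
 
  ;; a minimal local connection map
  (def db {:dbtype "postgresql" :dbname "postgres"})
 
  (jdbc/query db ["select version()"])
  ;; => ({:version "PostgreSQL 17.5 (Homebrew) ..."})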

Upgraded to Java 21 LTS

Tuesday, April 8th, 2025

Today seemed like a good time to add Java 21.0.6 to the mix, as I have been noticing some nice performance numbers reported for the latest versions of the JDK with my Clojure work, so why not? It's as simple as:

  $ brew install --cask temurin@21

and then to make use of it, I just need to use my trusty shell function:

  $ setjdk 21
  $ java -version
  openjdk version "21.0.6" 2025-01-21 LTS
  OpenJDK Runtime Environment Temurin-21.0.6+7 (build 21.0.6+7-LTS)
  OpenJDK 64-Bit Server VM Temurin-21.0.6+7 (build 21.0.6+7-LTS, mixed mode, sharing)

At this point, I can run the REPL and my Clojure app on JDK 21.0.6, and get all the little updates and bug fixes in the latest LTS JDK. 🙂

Converting Clojure Logging to Timbre

Monday, April 7th, 2025

I have been looking into the timbre logging system for Clojure, as it's written by one of the very best Clojure devs I have known, and it has a lot going for it: async logging and loads of appenders, including ones for log aggregators, which is really convenient. But I've been having some issues the last few days with configuration and errors.

Specifically, the configuration is not in a file, as it is with log4j and slf4j - it's in the Clojure code, and that was a bit of a wrinkle. But once I figured that out, and started to dig into the code for the details I needed, it got a lot easier.

So let's go through my conversion to timbre from log4j and slf4j, and see what it took.

Dependencies

Since I am using jetty as the backbone of the web server, I needed to give slf4j a way to send logging to timbre, and that meant just a few dependencies:

  :dependencies [...
                [com.taoensso/timbre "6.6.1"]
                [org.slf4j/slf4j-api "2.0.17"]
                [com.taoensso/timbre-slf4j "6.6.1"]
                ...]

where the last two provide that conduit free of charge by their simple inclusion. That's one less headache right off the bat. 🙂

Calling Changes

Here, the changes are pretty simple... where I would have had:

  (ns my-app
    (:require [clojure.tools.logging :refer [error info infof]]
               ...))

I simply change the namespace I'm :refer-ing the functions from to be:

  (ns my-app
    (:require [taoensso.timbre :refer [error info infof]]
               ...))

and everything stays the same. Pretty nice.

Configuration

Here is where it can be done in a lot of ways, but I chose to have a single function to set up the logging based on the configuration of the instance - and have it all in one place. In the :main namespace of the app, I added:

  (ns my-app
    (:require [taoensso.timbre :refer [merge-config! error info infof]]
              [taoensso.timbre.appenders.community.rotor :refer [rotor-appender]]
               ...))
 
  (defn init-logs!
    "Function to initialize the Timbre logging system, which can be based on the
    config of this running instance. It will basically disable the default things
    we do not want, and add those things that we do, and will be called when we
    start a REPL, or the app itself. This will modify the Timbre *config*, and so
    we have the bang at the end of the name."
    []
    (merge-config! {:min-level :info
                    :appenders {:println {:enabled? false}
                                :rotor (merge (rotor-appender {:path "log/my-app.log"})
                                         {:async? true})}}))

This does a few things for me:

  • Turn off the console appender - we don't need the :println appender, so merge in the "off" for that guy.
  • Add a rotating file appender - have this do asynchronous writes as well, and we shouldn't have to worry about the shutdown for now.
  • Point to the file(s) location - we really need to tell it where to dump the log files.

At this point, we need to add this to the start of the app, and for me that's in the handle-args function of the same namespace:

  (defn handle-args
    "Function to parse the arguments to the main entry point of this project and
    do what it's asking. By the time we return, it's all done and over."
    [args app]
    (init-logs!)            ;; initialize the logging from the config
    (let [[params [action]] (cli args
               ["-p" "--port" "Listen on this port" :default 8080 :parse-fn #(Integer. %)]
               ["-v" "--verbose" :flag true])
          quiet? (:quiet params)
          ignore? (:ignore params)
          reports? (:reports params)]
      ...))

And in order to fire init-logs! off when a REPL is started, we just need to update our project.clj file, as we're using Leiningen:

  :repl-options {:init (init-logs!)}

and Leiningen will know to look in the :main namespace for that function. But it could have been anywhere in the codebase, if properly called.
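
For context, the relevant project.clj entries might look something like this (project name hypothetical):

  (defproject my-app "0.1.0"
    ;; with :main set, `lein repl` starts in the my-app namespace,
    ;; which is where init-logs! lives
    :main my-app
    :repl-options {:init (init-logs!)})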

Possible Issues

In doing the conversion, I had one problem with the log function from clojure.tools.logging. The original code was:

  (let [logf (fn [s & args]
               (set-mdc!)
               (log ns level nil (apply format s args)))]
    ...)

and the log macro in timbre doesn't have a matching arity, so I had to collapse it back to:

  (let [logf (fn [s & args]
               (set-mdc!)
               (log level (apply format s args)))]
    ...)

and the timbre function worked just fine.

All tests worked just fine, the logging is solid and stable, and I'm sure I'll enjoy the flexibility that I can get with the additional appenders that I can add in init-logs! for testing and production when we get to that point. Very nice update! 🙂

iTerm2 is Great Code

Tuesday, April 1st, 2025

This morning I was working on a project, and after pushing my latest commit up to GitHub, I checked to see if iTerm2 had a new release - and it did. Excellent! I read the release notes, and it was a few fixes for a handful of things, but I recognized one as something I'd run into in the past, and so I updated.

This is when the magic happened. 🙂

iTerm2 updated - that's no surprise, but on the restart, all the terminal sessions had been maintained. Every single one. My psql sessions - right where they were before the restart. My running processes: Leiningen, etc. - right where they were before.

This just made me smile. 🙂 It's not magic, it's just attention to detail, and developers that knew what was important to the users. Bravo.

Very Nice Simplicity in Javascript

Tuesday, March 25th, 2025

One of the things I really like about coding is when a simple solution presents itself. There are a lot of times when a language and toolkit really give you only one preferred method to do something. But every now and then, a language will really lend itself to solving the same problem in several different ways, and a toolkit will embrace that philosophy.

This morning I ran into one of those situations with a simple Bootstrap modal dialog.

This is what I had originally:

  <a id="changeOwner" data-toggle="modal" data-target="#contact_modal">Owner:</a>
  ...
  <script>
    $('#contact_modal').on('show.bs.modal', function(e) {
      console.log('loading contacts from service...');
      loadContacts();
    });
  </script>

and for the most part, this was working just fine. But occasionally, I would notice that clicking on the link didn't show the modal window, and no amount of clicking would solve the issue. It's as if something during that page load had left the modal in a state where it just couldn't be raised. No errors. Just nothing happened.

So I thought: Why not be more overt about it?, and so I changed the structure to be:

  <a id="changeOwner" onClick="pullUpOwners();">Owner:</a>
  ...
  <script>
    function pullUpOwners() {
      console.log('loading contacts from service...');
      $('#contact_modal').modal('show');
      loadContacts();
    }
  </script>

where we now just have the link directly call the function, and the function directly tell the modal to show itself. The state of the modal no longer matters, and the actions are far more overt - and in the end, simpler.

The great news is that this new method works every time. There are no mis-loads or bad modal states. This makes the click reliable, and that's really the point of this... to gain the reliability the previous method didn't have.

It's really nice to see that such a simple change yields the exact results I was looking for. 🙂

Parsing JSON with wrap-json-body

Wednesday, March 19th, 2025

I have been making good progress on the Clojure app, and then got to handling a PUT call, and realized that the current scheme I was using for parsing the :body of the compojure request really was not what I wanted. After all, the Content-type is set to application/json in the call, so there should be a way for the ring middleware to detect this, and parse the JSON body so that the :body of the request is a Clojure map, and not something that has to be parsed.

So I went digging...

As it turns out, there is a set of ring middleware for JSON parsing, and the middleware I needed was already written: wrap-json-body - and using it really is quite simple:

  (:require [camel-snake-kebab.core :refer [->kebab-case-keyword]]
            [ring.middleware.json :refer [wrap-json-body]])
 
  (def app
    "The actual ring handler that is run -- this is the routes above
     wrapped in various middlewares."
    (let [backend (session-backend {:unauthorized-handler unauthorized-handler})]
      (-> app-routes
          (wrap-access-rules {:rules rules :on-error unauthorized-handler})
          (wrap-authorization backend)
          (wrap-authentication backend)
          wrap-user
          (wrap-json-body {:key-fn ->kebab-case-keyword})
          wrap-json-with-padding
          wrap-cors
          (wrap-defaults site-defaults*)
          wrap-logging
          wrap-gzip)))

where the key middleware is:

          (wrap-json-body {:key-fn ->kebab-case-keyword})

and it takes the ->kebab-case-keyword function and applies it to each of the keys of the parsed JSON - for me, that makes them kebab-cased keywords. This means I only have to have the right spelling in the client code, and I don't care a whit about the casing - very nice. 🙂
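
To see what that key function does, a couple of REPL calls:

  (->kebab-case-keyword "firstName")   ;; => :first-name
  (->kebab-case-keyword "Last_Name")   ;; => :last-name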

With this, an element of a defroutes can look like:

    (POST "/login" [:as {session :session body :body}]
      (do-login session body))

and the body will be parsed JSON with kebab-cased keywords for keys. You can't get much nicer than that. Clean, simple code. Beautiful.

Writing buddy-auth Authorization Handlers

Friday, March 14th, 2025

As I'm building out the core functionality of my current application, the next thing I really wanted to add was a few authorization handlers for the buddy-auth system I started using with WebAuthN Authentication. The WebAuthN work started with a simple Is this user logged in? handler:

  (defn is-authenticated?
    "Function to check the provided request data for a ':user' key, and if it's
    there, then we can assume that the user is valid, and authenticated with the
    passkey. This :user is on the request because the wrap-user middleware put
    it there based on the :session data containing the :identity element and we
    looked up the user from that."
    [{user :user :as req}]
    (uuid? (:id user)))

In order for this to work properly, we needed to make a wrap-user middleware so that if we had a logged-in user in the session data, we would place it on the request for compojure to pass along to all the other middleware, and the routes themselves. This wasn't too hard:

  (defn wrap-user
    "This middleware is for looking at the :identity in the session data, and
    picking up the complete user from their :email and placing it on the request
    as :user so that it can be used by all the other endpoints in the system."
    [handler]
    (fn [{session :session :as req}]
      (handler (assoc req :user (get-user (:email (:identity session)))))))

and this middleware used a function, get-user, to load the complete user object from the database based on the email of the user. It's not all that hard, but there are some tricks about the persistence of the Passkey Authenticator object that has to be serialized, and I've already written about that a bit.

And this works perfectly, because the WebAuthN workflow deposits the :identity data in the session, and since it's stored server-side, it's safe; with ring session state persisted in redis, it survives restarts and is shared amongst instances behind a load balancer. But what about something a little more specific? Say we have an endpoint that returns the details of an order, but only if the user has been given permission to see the order?

This combines Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC) - and while some will say you only need one, that's really not the best way to build a system, because there are times when you need some of both to make the solution as simple as possible.

In any case, this is what we need to add:

  • For a user, can they actually see the order in the database? This can be a question of the schema and data model, but there will likely be a way to determine if the user was associated with the order, and if so, then they can see it, and if not, then they can't.
  • Most endpoint conventions have the identifier as the last part of the URL - the Path, as it is referred to. We will need to be able to easily extract the Path from the URL, or URI, in the request, and then use that as the identifier of the order.
  • Put these together into an authorization handler for buddy-auth.

For the first, I made a simple function to see if the user can see the order:

  (defn get-user-order
    "Function to take a user id and order id and return the user/order
    info, if any exists, for this user and this order. We then need to
    look up the user-order, and return the appropriate map - if one exists."
    [uid oid]
    (if (and (uuid? uid) (uuid? oid))
      (db/query ["select * from users_orders
                   where user_id = ? and order_id = ?" uid oid]
        :row-fn kebab-keys-deep :result-set-fn first)))

For the second, we can simply look at the :uri in the request, and split it up on the /, and then take the last one:

  (defn uri-path
    "When dealing with Buddy Authentication handlers, it's often very useful
    to be able to get the 'path' from the request's uri and return it. The
    'path' is defined to be:
       https://google.com/route/to/path
    and is the last part of the url *before* the query params. This is very
    often a uuid of an object that we need to get, as it's the one being
    requested by the caller."
    [{uri :uri :as req}]
    (if (not-empty uri)
      (last (split uri #"/"))))
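
For example, with a hypothetical order URL:

  (uri-path {:uri "/orders/0f8fad5b-d9cb-469f-a165-70867728950e"})
  ;; => "0f8fad5b-d9cb-469f-a165-70867728950e"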

For the last part, we put these together, and we have a buddy-auth authorization handler:

  (defn can-see-order?
    "Function to take a request, and pull out the :user from the wrapping
    middleware, and pick the last part of the :uri as that will be the
    :order-id from the URL. We then need to look up the user-order, and
    see if this user can see this order."
    [{user :user :as req}]
    (if-let [hit (get-user-order (:id user) (->uuid (uri-path req)))]
      (not-nil? (some #{"OPERATOR"} (:roles hit)))))

In this function we see that we are referring to :roles on the user-order, and that's because we have built up the cross-reference table in the database to look like:

  CREATE TABLE IF NOT EXISTS users_orders (
    id              uuid NOT NULL,
    version         INTEGER NOT NULL,
    as_of           TIMESTAMP WITH TIME ZONE NOT NULL,
    by_user         uuid,
    user_id         uuid NOT NULL,
    roles           jsonb NOT NULL DEFAULT '[]'::jsonb,
    title           VARCHAR,
    description     VARCHAR,
    order_id        uuid NOT NULL,
    created_at      TIMESTAMP WITH TIME ZONE NOT NULL,
    PRIMARY KEY (id, version, as_of)
  );

The key parts are the user_id and order_id - the mapping is many-to-many, so we have to have a cross-reference table to handle that association. Along with these, we have some metadata about the reference: the title of the User with regard to this order, the description of the relationship, and even the roles the User will have with regards to this order.

The convention we have set up is that if the roles contain the string OPERATOR, then the user can see the order. The Postgres JSONB field is ideal for this, as it allows a simple array of strings, and it fits right into the data model.
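
The role check in can-see-order? is then easy to sanity-check at the REPL:

  (some #{"OPERATOR"} ["VIEWER" "OPERATOR"])   ;; => "OPERATOR" (truthy)
  (some #{"OPERATOR"} ["VIEWER"])              ;; => nil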

With all this, we can then make a buddy-auth access rule that looks like:

   {:pattern #"^/orders/[-0-9a-fA-F]{36}"
    :request-method :get
    :handler {:and [is-authenticated? can-see-order?]}}

and the endpoints that match that pattern will have to pass both handlers - and we have exactly what we wanted, without having to place any code in the actual routes or functions to handle the authorization. Nice. 🙂

Working with java.time in Clojure

Tuesday, March 11th, 2025

I've been working on a new Clojure project, and since I last did production Clojure work, the Joda Time library has been deprecated, and the move has been to the Java 8 java.time classes. The functionality is basically the same, but the conversion isn't, and one of the issues is that the JDBC Postgres library will return Date and Timestamp objects - all based on java.util.Date.

As it turns out, the conversion isn't as easy as I might have hoped. 🙂

For the most part, it's just a simple matter of using different functions, and the capabilities are all there in the clojure.java-time library. The one key piece is the conversion with Postgres. There, we have a protocol set up to help with the conversions:

  (:require [java-time.api :as jt])
 
  (extend-protocol IResultSetReadColumn
    PGobject
    (result-set-read-column [pgobj metadata idx]
      (let [type  (.getType pgobj)
            value (.getValue pgobj)]
        (case type
          "json" (json/parse-string-strict value true)
          "jsonb" (json/parse-string-strict value true)
          value)))
 
    java.sql.Timestamp
    (result-set-read-column [ts _ _]
      (jt/zoned-date-time (.toLocalDateTime ts) (jt/zone-id)))
 
    java.sql.Date
    (result-set-read-column [ts _ _]
      (.toLocalDate ts)))

and the key features are the last two. These are the conversions of the SQL Timestamp into java.time.ZonedDateTime and Date into java.time.LocalDate values.

As it turns out, the SQL values have local date and date/time accessors, and since .toLocalDateTime renders the timestamp in the JVM's default time zone, converting to a zoned timestamp just means pairing it with that same zone. Using the system default is as good as any, and keeps things nicely consistent.
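
For instance (the zone in the result will be whatever the system default happens to be):

  (jt/zoned-date-time (jt/local-date-time 2025 3 11 9 30) (jt/zone-id))
  ;; => a java.time.ZonedDateTime like "2025-03-11T09:30-05:00[America/Chicago]"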

With these additions, the data coming from Postgres 16 timestamp and date columns is properly massaged into something that can be used in Clojure with the rest of the clojure.java-time library. Very nice!

UPDATE: Oh, I missed a few things, so let's get it all cleared up here now. The protocol extensions, above, are great for reading out of the Postgres database. But what about inserting values into the Postgres database? This needs a slightly different protocol to be extended:

  (defn value-to-jsonb-pgobject
    "Function to take a _complex_ clojure data element and convert it into
    JSONB for inserting into postgresql 9.4+. This is the core of the mapping
    **into** the postgres database."
    [value]
    (doto (PGobject.)
          (.setType "jsonb")
          (.setValue (json/generate-string value))))
 
  (extend-protocol ISQLValue
    clojure.lang.IPersistentMap
    (sql-value [value] (value-to-jsonb-pgobject value))
 
    clojure.lang.IPersistentVector
    (sql-value [value] (value-to-jsonb-pgobject value))
 
    clojure.lang.IPersistentList
    (sql-value [value] (value-to-jsonb-pgobject value))
 
    flatland.ordered.map.OrderedMap
    (sql-value [value] (value-to-jsonb-pgobject value))
 
    clojure.lang.LazySeq
    (sql-value [value] (value-to-jsonb-pgobject value))
 
    java.time.ZonedDateTime
    (sql-value [value] (jt/format :iso-offset-date-time value))
 
    java.time.LocalDate
    (sql-value [value] (jt/format :iso-local-date value)))

Basically, we need to tell the Clojure JDBC code how to map the objects, Java or Clojure, into the SQL values that the JDBC driver is expecting. In the case of the date and timestamp values, that's not too bad, as Postgres will cast strings in the right formats to the proper values.
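
With those extensions in place, inserts can take Clojure and java.time values directly - sketched here with clojure.java.jdbc (the db, table, and columns are hypothetical):

  (require '[clojure.java.jdbc :as jdbc])
 
  (jdbc/insert! db :events
    {:payload     {:type "login" :ok true}   ;; map -> jsonb via ISQLValue
     :happened-at (jt/zoned-date-time)})     ;; ZonedDateTime -> ISO string, cast by Postgres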

But there remains a third set of key values - the parameters to PreparedStatement objects. These are key as well, and they need to be SQL objects; here the casting isn't done by Postgres but by the JDBC driver, and that needs proper Java SQL objects. For this, we need to add:

  (extend-protocol ISQLParameter
    java.time.ZonedDateTime
    (set-parameter [value ^PreparedStatement stmt idx]
      (.setTimestamp stmt idx (jt/instant->sql-timestamp (jt/instant value))))
 
    java.time.LocalDate
    (set-parameter [value ^PreparedStatement stmt idx]
      (.setDate stmt idx (jt/sql-date value))))

Here, the Clojure java-time library handles the date easily enough, and I just need to take the ZonedDateTime into a java.time.Instant, and then the library again takes it from there.
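
In isolation, that conversion chain is just:

  (jt/instant->sql-timestamp (jt/instant (jt/zoned-date-time)))
  ;; => a java.sql.Timestamp for the current moment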

These last two bits are very important for the full-featured use of the new Java Time objects and Postgres SQL. But it's very worth it.

Installing Redis on macOS

Saturday, March 8th, 2025

I have always liked redis as a wonderful example of very targeted software done very well. It's a single-threaded C app that does one thing, very well, and in the years since I started using it, it's only gotten better. As I've been building a new Clojure web app, one of the things I wanted to take advantage of was the stored session state - specifically so that when I have multiple boxes running the code, they can all share the one session state, and quickly.

I've been getting my WebAuthN authentication going, and that is using session state as the place to store the :identity of the logged in user. After I worked out the serialization of the Authenticator for WebAuthN, I then turned my attention to persisting the session state with Ring's SessionStore.

I've started using Ring's wrap-defaults middleware, and the carmine session store is easily added to the options for wrap-defaults with:

  ; pull in the wrapper, base defaults, and the carmine session store
  (:require [ring.middleware.defaults :refer [wrap-defaults site-defaults]]
            [taoensso.carmine.ring :refer [carmine-store]])
 
  ; augment the site defaults with the SessionStore from Carmine
  (def my-site-defaults
    (assoc site-defaults
      :session {:flash true
                :store (carmine-store)}))
 
  ; put it in the stack of the app routes...
  (-> app-routes
      (wrap-access-rules {:rules rules :on-error unauthorized-handler})
      (wrap-authorization backend)
      (wrap-authentication backend)
      wrap-user
      wrap-json-with-padding
      (wrap-defaults my-site-defaults)
      wrap-logging
      wrap-gzip)

I've been a huge fan of carmine for many years, as it's a pure Clojure library for redis, and it's exceptionally fast.

But first, I needed to install redis locally so that I can do laptop development without needing a connection to a hosted service. Simply:

  $ brew install redis

and then to make sure it's started through launchd, simply:

  $ brew services start redis

just like with Postgres. It's really very simple.
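
With redis running, a quick smoke test from the REPL with carmine (assuming the default local port):

  (require '[taoensso.carmine :as car])
 
  (def conn {:pool {} :spec {:host "127.0.0.1" :port 6379}})
 
  (car/wcar conn (car/ping))   ;; => "PONG"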

At this point, restarting the web server will automatically store the session data in the locally running redis, and for production deployments, it's easy enough to use redis cloud, or redis at AWS, Google Cloud, etc. It's super simple, and it's exceptionally reliable.

The web server can now be restarted without impact to the customers as redis has their session state, and postgres has their Authenticator for WebAuthN.