Nice Updates to GitHub Codespaces

June 25th, 2021

GitHub Source Hosting

When last I looked at GitHub Codespaces, it had a few issues that meant I couldn't really do all the development I needed because it couldn't yet forward ports on the web-based UI. I found a way to run Postgres on a Codespace, so I'd have a built-in database, and it was persistent across restarts - which was marvelous. And I could customize the UI to be pretty much exactly what I needed to get work done.

But it was that nagging port forwarding that really meant I couldn't get real work done - not like I could on my laptop. And then I decided to give it another look... and they have not been sitting idly by. 🙂

The latest update to Codespaces has a much improved UI in the browser. It seems nearly native on my iPad Pro, and handles both the touch and trackpad gestures. Very nicely done. It also mounts the workspace slightly differently, so I had to update the cleanup() script in my .bashrc file:

  #
  # This is a simple function to cleanup the GitHub Codespace once it's been
  # created. Basically, I need to remove the left-overs of the dotfiles setup
  # and clean up the permissions on all the files.
  #
  function cleanup () {
    pushd "$HOME"
    echo "cleaning up the dotfiles..."
    rm -rf dotfiles install README.md
    echo "resetting the ownership of the /workspaces..."
    sudo chown -R drbob:drbob /workspaces
    echo "cleaning up the permissions on the /workspaces..."
    sudo chmod -R g-w /workspaces
    sudo chmod -R o-w /workspaces
    sudo setfacl -R -bn /workspaces
    echo "done"
    popd
  }

and with this, all my new Codespaces will have the right permissions, and the terminal will look and act like it should. Not too bad a change. But the real news is in the forwarded ports.

It appears that what GitHub has done is open the port(s) on the running Docker image so that you can easily open a browser to, say, a Jetty service running on port 8080. It's really just as good as port forwarding, and it completes the last capability needed to use Codespaces for real development.

If there was one more thing I'd wish for - it's that this would be included in the GitHub iPad app - so that the files are held locally, and edited locally, and the connection to the Docker instance is remote, but you can work locally.

Maybe soon. 🙂

GitHub Actions are Very Impressive

June 16th, 2021

GitHub Source Hosting

Several weeks ago, The Shop made the decision to implement CI/CD on the code repositories at GitHub using GitHub Actions. And it has been an amazing success. Being able to set up individual responses to repository events like push - and so on - is powerful, and the jobs all run in parallel, which is great for speed.
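For reference, a workflow is just a YAML file checked into .github/workflows/ in the repository. Here's a minimal sketch of what one might look like for a Node project - the file name, job name, and Node version here are hypothetical, not taken from The Shop's actual setup:

```yaml
# .github/workflows/ci.yml - minimal sketch of a push-triggered workflow
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: npm ci
      - run: npm test
```

Each job runs on its own fresh runner, which is how the parallelism comes for free.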

One of the things I have really enjoyed about the Actions is that GitHub gives each project quite a lot of free compute minutes for the execution of the Actions. This means that if you have a relatively static project, this is likely something you will be able to use for free. And if it's a business, you could prove out that this will work for you before having to invest in more tooling.

When you do run up against the limits of the free plan, the only thing that will happen is that the Actions will all fail. This is completely understandable, and a very reasonable fall-back position for projects. Add a billing source, and you're back in business. Very nicely done.

Enjoying play.js on the iPad Pro

June 16th, 2021

NodeJS

This morning I pulled up play.js on my iPad Pro to run a simple project I built that hits an MLB stats site, extracts some data, and formats it into a simple JSON output. It's nothing, really... a simple Express/NodeJS site that I used in learning Express... but play.js is just an amazing tool for writing Node services - with front-ends, or not.

It's really pretty nice - includes a full git client, and complete dependency searching and incorporation... it's all you'd really need if you had a Node service back-end, and a static assets front-end. I know it can do even more on the front-end, but I'm quite happy with the ability to use HTML/CSS/JavaScript to build the front-end - I typically don't build elaborate front-ends to validate the back-end service.

The one wrinkle I've seen is with Node dependencies that include non-JavaScript components - like downloaded commands. These are not going to run in play.js's environment. It has to be 100% Node and JavaScript. So... there are some limitations on the projects it can handle... but not many.

First Trip Out of Town

June 13th, 2021

Microbe

Yesterday, I had my first trip out of town since the start of the pandemic. I went to Indy for a family get-together for birthdays, and I have to say, it was nice to travel out of town, get there... see folks... and enjoy a nice summer day.

I was still plenty surprised by the lack of mask-wearing by folks out-and-about. I know what the CDC guidelines state for mask-wearing, and the Illinois and Indiana rules as well, but it seems like such a harmless thing - something we've been doing for months - to make sure we really shut down this pandemic before we start taking chances.

I look at what's happened in India - they thought it was over, and within 2 weeks, they were completely overwhelmed. It could happen here... a new variant... unvaccinated folks getting sick, and all of a sudden we're in another surge that isn't so easily contained.

I'll be wearing my mask in public until the new case count is under 100 a day for the nation... at that point, we're out of the woods. But we aren't there yet.

Published Notarize Node Client

May 24th, 2021

TypeScript

Today I was able to publish my first Open Source TypeScript npm library for using the Notarize service. Their docs are good, but the only "client" they offer is really just the docs on the REST endpoints for the service - which are nice, but it's really nice to have a good client that makes accessing the functions of the service easy. So at The Shop, we decided to make a Client, and then give it to the Notarize folks so that they can share it with other customers looking for a simpler access interface.

This was a nice foray into TypeScript, because the interfaces are easy to define, the domain components of the service are nicely separable, and things generally worked out quite nicely. The tests didn't seem to fit into a simple CI/CD pipeline, but that's something that we can work on - if needed, and now that it's out in the wild, we will see how it's used, and if we get requests for additions.

All in all, it was fun to get this out. And it made working with the service much nicer. 🙂

Second Vaccine Shot Done!

May 16th, 2021

Microbe

I just got back from Walgreens, after my second COVID-19 vaccine shot. I was a little surprised that I had to fill out paperwork for the second time - after all... I had filled it all out before the first, and they had to have it on-hand. But it wasn't too bad, and I was able to fill out the form in a few minutes, and then a very short wait, and I got the second Moderna shot.

This is supposed to be the more significant of the two, in terms of side-effects, so I'm going to take it easy today and tomorrow - just to be safe. But I know this is the right thing to do, and it may feel bad for a bit, but that's because it's doing its job, and I know it'll be gone in a few days.

Now it's just waiting a few weeks to have the immunity built up, and I can feel much safer about my interactions with friends and family. 🙂

First Vaccine Shot Done!

April 18th, 2021

Microbe

I just got back from Walgreens, where I had signed up on Friday for an appointment to get the COVID-19 vaccine. Going in, I had no idea what they would have in the way of lines, or which vaccine they would provide, but I showed up at 9:10 am today to Get My Shot.

As it turns out, it's the Moderna vaccine, and it didn't take long because I had downloaded the Vaccine Waiver form from the Walgreens website, and filled it out before I arrived. This made it a lot easier - I just sat there, after checking in, and when it was my turn, I got the run-down from the pharmacist, and got my shot.

Simple and easy.

I giggled to myself about the band-aid, and how every kid goes through that phase of loving to have a band-aid on... 🙂 So not bad at all.

My arm felt a little stiff, as if I'd over-exercised, but not to the touch... just like a deeper, muscular ache. But it was fine.

In a few weeks, I'll go back and get my second shot. I already have my appointment, and it'll be just as quick and easy as this one was. It'll be nice to be covered.

Nice OWASP Update Tools for Node/JS

April 7th, 2021

NodeJS

This morning I did a little security updating on a project at The Shop - a few OWASP issues for dependencies. One had risen to a high level, so it seemed like a good time to dig into the updating process.

In the past, for Java and Clojure projects, I've had to go and look up the recent versions of each library to see if they correct the security issue - and if they do, whether there are other updates I have to make to handle any changes from these security-related updates. It was oftentimes a very tedious process, and doing it for Java Spring projects was something akin to Black Magic.

Imagine my surprise when I found that Node/JS already has this covered. Simply run:

  $ npm audit

and not only will it list all the OWASP security vulnerabilities, but it will also provide you with the specific npm commands to update (aka install) the specific package, and how far down the nesting tree that package sits.

Run the commands specified by npm audit, and you'll update just what's needed, without having to go through the process manually. What a refreshing change from my previous encounters with fixing OWASP vulnerabilities. 🙂

Interesting Proxy of Javascript Objects

March 27th, 2021

Javascript

Ran into something this week, and I wanted to distill it to an understandable post and put it here for those that might find a need, and run across it while searching. There are a lot of posts about the use of the Javascript Proxy object. In short, its goal is to allow a user to wrap an Object - or function - with a similar object, and intercept (aka trap) the calls to that Object, modifying the behavior of the result.

The examples are all very nice... how to override getting a property value... how to add default values for undefined properties... how to add validation to setting of properties... and all these are good things... but they are simplistic. What if you have an Object that's an interface to a service? Like Stripe... or HelloSign... or Plaid... and you want to be able to augment or modify function calls? What if they are returning Promises? Now we're getting tricky.

The problem is that what's needed is a little more general example of a Proxy, and so we come to this post. 🙂 Let's start with an Object that's actually an API into a remote service. For this, I'll use Platter, but it could have as easily been Stripe, HelloSign, Plaid, or any of the SaaS providers that have a Node Client.

We create an access Object simply:

  const baseDb = new Postgres({
    key: process.env.PLATTER_API_KEY,
  })

but Postgres will have lower-case column names, and we really want camel case - where first_name in the table becomes firstName in the objects returned.

So for that, we need to Proxy this access Object, and change the query function to run camelCaseKeys from the camelcase-keys Node library. So let's start by recognizing that the function call is really accessed with the get trap on the Proxy, so we can say:

  const db = new Proxy(baseDb, {
    get: (target, prop) => {
      if (prop === 'query') {
        return (async (sql, args) => {
          const rows = await target.query(sql, args)
          return rows.map(camelCaseKeys)
        })
      }
    }
  })

The query() function on the access Object returns a Promise, so the get trap, when prop equals query, needs to return a function with a similar signature - same inputs and output - and that's just what:

        return (async (sql, args) => {
          const rows = await target.query(sql, args)
          return rows.map(camelCaseKeys)
        })

does. It takes the two arguments: a SQL string that will become a prepared statement, and a list of replacement values for the prepared statement.

This isn't too bad, and it works great. But what about all the other functions that we want to leave as-is? How do we let them pass through unaltered? Well... from the docs, you might be led to believe that something like this will work:

        return Reflect.get(...arguments)

But that really doesn't work well for functions - async or not - since the returned method ends up being invoked with the Proxy, not the target, as this. So how to handle it?

The solution I came to involved making a few predicate functions:

  function isFunction(arg) {
    return arg !== null &&
      typeof arg === 'function'
  }
 
  function isAsyncFunction(arg) {
    return arg !== null &&
      isFunction(arg) &&
      Object.prototype.toString.call(arg) === '[object AsyncFunction]'
  }

which simply test whether the argument is a function, or an async function. So let's use these to expand the code above and add an else to the if:

  const db = new Proxy(baseDb, {
    get: (target, prop) => {
      if (prop === 'query') {
        return (async (sql, args) => {
          const rows = await target.query(sql, args)
          return rows.map(camelCaseKeys)
        })
      } else {
        const value = target[prop]
        if (isAsyncFunction(value)) {
          return (async (...args) => {
            return await value.apply(target, args)
          })
        } else if (isFunction(value)) {
          return (...args) => {
            return value.apply(target, args)
          }
        } else {
          return value
        }
      }
    }
  })

In this addition, we get the value of the access Object at that property. This could be an Object, an Array, a String, a function... anything. But now we have it, and now we can use the predicate functions to see how to treat it.

If it's an async function, create a new async function - taking any number of arguments, thereby matching any input signature - and apply the function to the target with those arguments. If it's a simple synchronous function, do the same thing, but make it a direct call.

If it's not a function at all, then it's a simple data accessor - and return that value to the caller.

With this, you can augment the behavior of the SaaS client Object, and add in things like the mapping of keys... or logging... or whatever you need - and pass the rest through without any concerns.
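To see the whole pattern in one runnable piece, here is a self-contained sketch. The mockDb object and the inlined camelCaseKeys helper are hypothetical stand-ins - mockDb for a real SaaS client like Platter's, and the helper for the camelcase-keys library - but the Proxy itself is the same shape as the one above:

```javascript
function isFunction(arg) {
  return typeof arg === 'function'
}

function isAsyncFunction(arg) {
  return isFunction(arg) &&
    Object.prototype.toString.call(arg) === '[object AsyncFunction]'
}

// hypothetical stand-in for a real SaaS client: one async method,
// one sync method, and one plain data property
const mockDb = {
  label: 'mock',
  async query(sql, args) {
    // pretend these rows came back from Postgres, with lower-case keys
    return [{ first_name: 'Ada', last_name: 'Lovelace' }]
  },
  version() {
    return '1.0'
  },
}

// tiny stand-in for the camelcase-keys library, just for this sketch
const camelCaseKeys = (row) => Object.fromEntries(
  Object.entries(row).map(([k, v]) =>
    [k.replace(/_([a-z])/g, (_, c) => c.toUpperCase()), v])
)

const db = new Proxy(mockDb, {
  get: (target, prop) => {
    if (prop === 'query') {
      // wrap query() so every returned row gets camel-cased keys
      return async (sql, args) => {
        const rows = await target.query(sql, args)
        return rows.map(camelCaseKeys)
      }
    }
    const value = target[prop]
    if (isAsyncFunction(value)) {
      return async (...args) => value.apply(target, args)
    } else if (isFunction(value)) {
      return (...args) => value.apply(target, args)
    }
    return value
  },
})

;(async () => {
  console.log(await db.query('select * from people', []))
  // [ { firstName: 'Ada', lastName: 'Lovelace' } ]
  console.log(db.version())  // '1.0' - passed through untouched
  console.log(db.label)      // 'mock' - plain data, returned as-is
})()
```

The wrapped query() is altered, while version() and label pass through unchanged - exactly the behavior described above.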

Putting async at the Top Level of Node

March 25th, 2021

NodeJS

The use of async/await in Javascript is a nice way to make traditional Promise-based code more linear, and yet for the top-level code in a Node script, await can't easily be used, because it's not within an async function. Looking at the traditional top-level script for a Node/Express project, you would look at bin/www and see:

  #!/usr/bin/env node
 
  // dotenv is only installed in local dev; in prod environment variables will be
  // injected through Google Secrets Manager
  try {
    const dotenv = require('dotenv')
    dotenv.config()
  } catch {
    // Swallow expected error in prod.
  }
 
  // load up all the dependencies we need
  const app = require('../app')
  const debug = require('debug')('api:server')
  const http = require('http')

which starts off by loading the dotenv function to read the environment variables into the Node process, and then starts loading up the application. But you can't just toss in an await if you need to make some network calls... or a database call.

Sure, you can use a .then() and .catch(), and put the rest of the startup script into the body of the .then()... but that's a little harder to reason through, and if you need another Promise call, it only nests deeper, or chains on another .then().

Possible, but not clean.

If we wrap the entire script in an async function, like:

  #!/usr/bin/env node
  (async () => {
    // normal startup code
  })();

then the bulk of the bin/www script is now within an async function, and so we can use await without any problems:

  #!/usr/bin/env node
 
  (async () => {
 
    // dotenv is only installed in local dev; in prod environment variables will be
    // injected through Google Secrets Manager
    try {
      const dotenv = require('dotenv')
      dotenv.config()
    } catch {
      // Swallow expected error in prod.
    }
 
    // augment the environment from the Cloud Secrets
    try {
      const { addSecretsToEnv } = require('../secrets')
      await addSecretsToEnv()
    } catch (err) {
      console.error(err)
    }
 
    // load up all the dependencies we need
    const app = require('../app')
    const debug = require('debug')('api:server')
    const http = require('http')

While this indents the bulk of the bin/www script - which, stylistically, isn't as clean as no indentation - it allows the remainder of the script to use await without any problem.

Not a bad solution to the problem.
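The pattern boils down to something very small. In this stripped-down sketch, loadConfig is a hypothetical stand-in for any async startup step - reading secrets, pinging a database, and so on:

```javascript
// hypothetical async startup step, standing in for addSecretsToEnv(), etc.
const loadConfig = () => Promise.resolve({ port: 8080 })

const ready = (async () => {
  // inside the async IIFE, await works just like in any async function
  const config = await loadConfig()
  return `listening on ${config.port}`
})()

ready.then((msg) => console.log(msg))  // prints "listening on 8080"
```

Because the IIFE itself returns a Promise, you can also hang a .catch() off it to handle any startup failure in one place.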