First Vaccine Shot Done!

April 18th, 2021

Microbe

I just got back from Walgreens, where I had signed up on Friday for an appointment to get the COVID-19 vaccine. Going in, I had no idea what they would have in the way of lines, or which vaccine they would provide, but I showed up at 9:10 am today to Get My Shot.

As it turns out, it's the Moderna vaccine, and it didn't take long because I had downloaded the Vaccine Waiver form from the Walgreens website, and filled it out before I arrived. This made it a lot easier - I just sat there, after checking in, and when it was my turn, I got the run-down from the pharmacist, and got my shot.

Simple and easy.

I giggled to myself about the band-aid, and how every kid goes through that phase of loving to have a band-aid on... 🙂 So not bad at all.

My arm felt a little stiff, as if I'd over-exercised, but not to the touch... just like a deeper, muscular ache. But it was fine.

In a few weeks, I'll go back and get my second shot. I already have my appointment, and it'll be just as quick and easy as this one was. It'll be nice to be covered.

Nice OWASP Update Tools for Node/JS

April 7th, 2021

NodeJS

This morning I did a little security updating on a project at The Shop - a few OWASP issues for dependencies. One had risen to a high level, so it seemed like a good time to dig into the updating process.

In the past, for Java and Clojure projects, I've had to go look up the recent versions of each library and see if they correct the security issue - and if they do, are there other updates I have to make to handle any changes from these security-related updates? It was oftentimes a very tedious process, and doing it for Java Spring projects was almost something like Black Magic.

Imagine my surprise when I found that Node/JS already has this covered. Simply run:

  $ npm audit

and not only will it list all the OWASP security vulnerabilities, but it will also provide you with the specific npm commands to update (aka install) each package, and how far down the nesting tree that package sits.

Run the commands specified by the npm audit command, and you'll update just what's needed, and not have to go through the process manually. What a refreshing change to my previous encounters with fixing OWASP vulnerabilities. 🙂
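A typical session, then, is just a couple of commands (the fix variants here are standard npm subcommands):

```shell
# list the known vulnerabilities in the dependency tree
npm audit

# apply the compatible (semver-safe) updates npm suggests
npm audit fix

# also allow updates that cross major versions - use with care
npm audit fix --force
```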

Interesting Proxy of Javascript Objects

March 27th, 2021

Javascript

Ran into something this week, and I wanted to distill it to an understandable post and put it here for those that might find a need, and run across it while searching. There are a lot of posts about the use of the Javascript Proxy object. In short, its goal is to allow a user to wrap an Object - or function - with a similar object, and intercept (aka trap) the calls to that Object, and modify the behavior of the result.

The examples are all very nice... how to override getting a property value... how to add default values for undefined properties... how to add validation to setting of properties... and all these are good things... but they are simplistic. What if you have an Object that's an interface to a service? Like Stripe... or HelloSign... or Plaid... and you want to be able to augment or modify function calls? What if they are returning Promises? Now we're getting tricky.

The problem is that what's needed is a little more general example of a Proxy, and so we come to this post. 🙂 Let's start with an Object that's actually an API into a remote service. For this, I'll use Platter, but it could have as easily been Stripe, HelloSign, Plaid, or any of the SaaS providers that have a Node Client.

We create an access Object simply:

  const baseDb = new Postgres({
    key: process.env.PLATTER_API_KEY,
  })

but Postgres will have lower-case column names, and we really want camel case, where first_name in the table becomes firstName in the objects returned.

So for that, we need to Proxy this access Object, and change the query function to run camelCaseKeys from the camelcase-keys Node library. So let's start by recognizing that the function call is really accessed with the get trap on the Proxy, so we can say:

  const db = new Proxy(baseDb, {
    get: (target, prop) => {
      if (prop === 'query') {
        return (async (sql, args) => {
          const rows = await target.query(sql, args)
          return rows.map(camelCaseKeys)
        })
      }
    }
  })

The signature of the query() function on the access Object is that it returns a Promise, so for the prop equal to query, the get trap needs to return a function with a similar signature - inputs and output - and that's just what:

        return (async (sql, args) => {
          const rows = await target.query(sql, args)
          return rows.map(camelCaseKeys)
        })

does. It takes the two arguments: a SQL string that will become a prepared statement, and a list of replacement values for the prepared statement.

This isn't too bad, and it works great. But what about all the other functions that we want to leave as-is? How do we let them pass through unaltered? Well... from the docs, you might be led to believe that something like this will work:

        return Reflect.get(...arguments)

But that really doesn't work for functions - async or not. So how to handle it?

The solution I came to involved making a few predicate functions:

  function isFunction(arg) {
    return arg !== null &&
      typeof arg === 'function'
  }
 
  function isAsyncFunction(arg) {
    return arg !== null &&
      isFunction(arg) &&
      Object.prototype.toString.call(arg) === '[object AsyncFunction]'
  }

which simply test whether the argument is a function, or an async function. So let's use these to expand the code above, adding an else to the if:

  const db = new Proxy(baseDb, {
    get: (target, prop) => {
      if (prop === 'query') {
        return (async (sql, args) => {
          const rows = await target.query(sql, args)
          return rows.map(camelCaseKeys)
        })
      } else {
        const value = target[prop]
        if (isAsyncFunction(value)) {
          return (async (...args) => {
            return await value.apply(target, args)
          })
        } else if (isFunction(value)) {
          return (...args) => {
            return value.apply(target, args)
          }
        } else {
          return value
        }
      }
    }
  })

In this addition, we get the value of the access Object at that property. This could be an Object, an Array, a String, a function... anything. But now we have it, and now we can use the predicate functions to see how to treat it.

If it's an async function, create a new async function - taking any number of arguments, thereby matching any input signature - and apply the function to the target with those arguments. If it's a simple synchronous function, do a similar thing, but make it a direct call.

If it's not a function at all, then it's a simple data accessor - and return that value to the caller.

With this, you can augment the behavior of the SaaS client Object, and add in things like the mapping of keys... or logging... or whatever you need - and pass the rest through without any concerns.

Putting async at the Top Level of Node

March 25th, 2021

NodeJS

The use of async/await in Javascript is a nice way to make traditional Promise-based code more linear, and yet for the top-level code in a Node script, await can't easily be used, because it's not within an async function. Looking at the traditional top-level script for a Node/Express project, you would look at bin/www and see:

  #!/usr/bin/env node
 
  // dotenv is only installed in local dev; in prod environment variables will be
  // injected through Google Secrets Manager
  try {
    const dotenv = require('dotenv')
    dotenv.config()
  } catch {
    // Swallow expected error in prod.
  }
 
  // load up all the dependencies we need
  const app = require('../app')
  const debug = require('debug')('api:server')
  const http = require('http')

which starts off by loading the dotenv function to read the environment variables into the Node process, and then starts loading up the application. But you can't just toss in an await if you need to make some network calls... or a database call.

Sure, you can use a .then() and .catch(), and put the rest of the startup script into the body of the .then()... but that's a little harder to reason through, and if you need another Promise call, it only nests further, or chains another .then().

Possible, but not clean.

If we wrap the entire script in an async function, like:

  #!/usr/bin/env node
  (async () => {
    // normal startup code
  })();

then the bulk of the bin/www script is now within an async function, and so we can use await without any problems:

  #!/usr/bin/env node
 
  (async () => {
 
    // dotenv is only installed in local dev; in prod environment variables will be
    // injected through Google Secrets Manager
    try {
      const dotenv = require('dotenv')
      dotenv.config()
    } catch {
      // Swallow expected error in prod.
    }
 
    // augment the environment from the Cloud Secrets
    try {
      const { addSecretsToEnv } = require('../secrets')
      await addSecretsToEnv()
    } catch (err) {
      console.error(err)
    }
 
    // load up all the dependencies we need
    const app = require('../app')
    const debug = require('debug')('api:server')
    const http = require('http')

While this indents the bulk of the bin/www script - which, stylistically, isn't as clean as no indentation - it allows the remainder of the script to use await without any problem.

Not a bad solution to the problem.

Wishing for new Apple Silicon MacBook Pros

March 24th, 2021

Apple Computers

Like many folks that are developers, I'm very interested in what the new 16" MacBook Pros with Apple Silicon will be like. Right now, I'm really focusing on a few key features I've read, and seen, on the 13" MacBook Pros with M1 chips: the Display handling, and the Operating temperatures.

Right now, I've got a couple of LG 5K monitors, and my 16" MacBook Pro can drive them, but the temperature of the laptop starts to rise, and then the kernel_task rises, and the box essentially becomes unusable. I understand all the reasons for the kernel_task, and how it keeps the machine from overheating. And I've blown out my recent-vintage 16" MacBook Pro, so it's not an obvious issue, but I get it... I'm driving big 5K monitors, and that heats up the laptop.

From what I've read, the new Apple Silicon MacBook Pro can drive one 6K monitor, or two 4K monitors - just like the new Mac mini. Which is nice, and I'm just betting that the new 16" MacBook Pros will be able to drive two LG 5Ks - like the current lot can. Which will be nice. And to have no fan noise - that will be the real treat.

Which comes to the point of the thermal environment for the new laptop. I get that they need to be able to cool the components, but I'm to the point that I don't Zoom or run Google Meet on the laptop as it just makes it too hot. I can run Zoom and Meet on my iPad Pro, and it's faster, better, quieter, and I don't have to worry about a long meeting contributing to my machine slowing down.

I know it'll be the end of this year to get the new machines, but at least the new iPad Pros will be out sooner than that - and that too, will be a very nice upgrade. 🙂

Google Cloud has some Nice Tools

March 13th, 2021

Google Cloud

Today I've been working on some code for The Shop, and one of the things I've come to learn is that for about every feature, or service, of AWS, Google Cloud has a mirror image. It's not a perfect mirror, but it's pretty complete. Cloud Storage vs S3... Tasks vs. SQS... it's all there, and in fact, today, I really saw the beauty of Google Cloud Tasks over AWS SNS/SQS in getting asynchronous processing going smoothly on this project.

The problem is simple - a service like Stripe has webhooks, or callbacks, and we need to accept them and return as quickly as possible, but we have significant processing we'd like to do on that event. There's just no time, or Stripe will think we're down, and that's no good. So we need to make a note of the event, and start a series of other events that will do the more costly work.

This is now a simple design problem. How to partition the follow-on tasks to make use of an efficient load balancer, and at the same time, make sure that everything is done in as atomic a way as possible. For this project, it wasn't too hard, and it turned out to actually be quite fun.

The idea with Cloud Tasks is that you essentially give it a payload and a URL, and it will call that URL with that payload until it gets a successful response (status of 200). It will back off a bit each time, so if there is a contention issue, it'll automatically handle that, and it won't flood your service - so it's really doing all the hard work... the user just needs to implement the endpoints that are called.

What turned out to be interesting was that the docs for Cloud Tasks didn't say how to set the content-type of the POST. It assumes that the content-type is application/octet-stream, which is a fine default, but given the Node library, it's certainly possible to imagine that they could see that the body being passed in was an Object, and then make the content-type application/json. But they don't.

Instead, they leave an undocumented feature on the creation of the task:

  // build up the argument for Cloud Task creation
  const task = {
    httpRequest: {
      httpMethod: method || 'POST',
      url,
    },
  }
  // ...add in the body if we have been given it - based on the type
  if (body) {
    if (Buffer.isBuffer(body)) {
      task.httpRequest.body = body.toString('base64')
    } else if (typeof body === 'string') {
      task.httpRequest.body = Buffer.from(body).toString('base64')
    } else if (typeof body === 'object') {
      task.httpRequest.body = Buffer.from(JSON.stringify(body)).toString('base64')
      task.httpRequest.headers = { 'content-type': 'application/json' }
    } else {
      // we don't know how to handle whatever it is they passed us
      log.error(errorMessages.badTaskBodyType)
      return { success: false, error: errorMessages.badTaskBodyType, body }
    }
  }
  // ...add in the delay, in sec, if we have been given it
  if (delaySec) {
    task.scheduleTime = {
      seconds: Math.floor(Date.now() / 1000) + Number(delaySec),
    }
  }

The ability to set the headers for the call is really very nice, as it opens up a lot of functionality if you wanted to add in a Bearer token, for instance. But you'll have to be careful about the time... the same data will be used for retries, so you would have to give it sufficient time on the token to enable it to be used for any retry.

With this, I was able to put together the sequence of Tasks that would quickly dispatch the processing, and return the original webhook response back to Stripe. Quite nice to have it all done by Cloud Tasks... AWS would have required that I process the events off an SQS queue, and while I've done that, it's not as simple as firing off a Task and forgetting about it.

Nice tools. 🙂

Sometimes I’m Really Wrong

February 6th, 2021

Path

I just got off the phone with a very good friend that helped me see something in a way that was always there, but I wasn't extending myself to see it from that point of view - and it really got me to thinking: How wrong was I really?... and the answer was: A lot. 🙂

We all live our little lives, and it's unusual to meet someone that can really see life from a few completely different perspectives. The most common way I know of is profound loss - someone recovering from the loss of a close loved one will have the ability to see life with that person, and without that person, and their perspectives will be entirely different. Grief changes most people. But that's not the only way people can have different perspectives.

I was talking to my friend, and mentioning that I was going through a tough time with some folks, and she suggested that I had it all wrong. And proceeded to tell me how wrong I was.

She pointed out that life really is what we make of it, and that I could choose to see things as how they affected me, or I could see it from a different perspective, and see that there were other ways of handling the exact same thing - and in a different way, not make it an us-vs-them situation.

I'm thankful for my friend, because it was what I needed to hear - even if I didn't want to. I need to change how I approach things... life doesn't have to be kind... there's no rule about that. But we can choose to insert kindness in what we do, and not let the kill-or-be-killed be the way we live our lives.

Sometimes I am really wrong. By a lot.

And I'm glad my friend was there to help me see it.

What a Bad Day for America

January 7th, 2021

PotUS

I'm looking at the stories this morning about the actions in Washington by the pro-Trump mob that breached the Capitol for the first time since the War of 1812... and I feel certain the outgoing administration doesn't care - we've seen four years of policies, speeches, actions, and inactions that show all who are paying attention that a day like yesterday was virtually inevitable.

Sure, it could have been 4 years from now... but it was now, and we have seen it. And this morning, it's now clear that those that were objecting to the Electoral College vote count have changed their minds, and have decided that they would object no more. Enough was enough - finally... for them. And we have some sort of closure for this election.

We also have a 50-50 Senate, and I have to smile at the idea that the Founders expected this might happen, so they planned for it, and work will get done, compromises will be made, people will be unhappy, and some will be happy. Life will go on.

I wonder what will come in the days and weeks ahead... will we really have seen enough? Will we really change how we treat the less-well-off among us? I hope so. I really do.

COVID-19 Hits Close to Home

January 4th, 2021

Microbe

A close friend, and his family, have all come down with COVID-19 as a result of some Christmas family time. I am so sorry for them. I know the intentions were good - and the precautions the family took were meant to keep everyone safe - but this is a nasty virus, and it's indiscriminate about whom it hits. Closed spaces for several hours - like a Thanksgiving or Christmas celebration - are just what it likes.

I think my household got it back in February 2020, after some trips to Seattle and San Francisco I had to take for work. I was in a packed airplane, and those cities at that time were hot spots. I got sick, for a few days, and then got better. My daughter was sick for a few weeks, and got better. While we haven't been tested, we survived without major complications.

I hope my friend's family does as well. This is a scary thing to have happen...

Happy Birthday to me!

December 31st, 2020

Cake

Another trip around the sun... I hope for just a nice, quiet day, and then Pizza tonight for dinner. It's my special day, so why not treat myself to pizza? 🙂

It's been a heck of a year, with so many things happening like never before... the pandemic, the election, the working at home, the rationing of paper towels... it's been a year of exceptional events. And not just for me... for everyone. But we keep going.

So I hope that everyone has a little breather today... a little rest... so that we can get back at it on Monday.