Archive for the ‘Vendors’ Category

Ivory for Mastodon for macOS

Tuesday, May 23rd, 2023


I just downloaded the Ivory for macOS app from the App Store, and upgraded my subscription to the Universal Subscription so that it covers all my devices. It's just an amazing app. The iOS/iPadOS version has been working really well for me since it was released, and now that I have the macOS version, I won't have to worry about Ice Cubes and reading two streams.

I don't know that Mastodon will ever be popular enough to have some of the feeds from Twitter on it, but that's OK... I'm at about 95% complete, and I don't really need the last 5%. Plus, you just can't beat that it's not under the thumb of a capricious individual.

All in all... a nice bit of news for today. 🙂

Interesting Issues with Clearbit

Monday, April 24th, 2023


This weekend we had an interesting issue with Clearbit's Logo Search interface - a free service they provide as part of their Business Information Service. You basically hit their endpoint with a query param containing the name of a company, and they respond with something that looks like:

  {
    name: 'Flexbase',
    domain: 'flexbase.app',
    logo: 'https://logo.clearbit.com/flexbase.app'
  }

which includes a nice little thumbnail logo of the company. It's a very nice tool, and for the most part works flawlessly.

Until it doesn't.

The real issue was the open source Node client we were using to hit Clearbit's endpoint. It started with:

  topSuggestion(name){
    return new Promise((resolve, reject) => {
      resolve(getTopSuggestion(name));
    });
  }

which called:

  let getTopSuggestion = (query) => {
    return new Promise((resolve, reject) => {
      request(config.api.autocomplete + '?query=' + query, (err, response, body) => {
        resolve(JSON.parse(body)[0]);
      });
    });
  }

Now when everything is working as it should, this code is just fine. But over the weekend, the endpoint called in getTopSuggestion() was returning:

  Welcome to the Company API. For docs see https://clearbit.com/docs#company-api

which, of course, isn't JSON, so JSON.parse() was throwing an exception. But that throw happens inside the request() callback - after the Promise executor has already returned - so it never becomes a rejection of the Promise, and the caller has no way to catch it. This was bad news.
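To make the failure mode concrete, here is a minimal sketch - not the library's code, just the same shape - of why a throw inside an asynchronous callback never turns into a rejection of the surrounding Promise:

  // A minimal illustration. The Promise executor returns immediately; by the
  // time the callback fires, a throw inside it is no longer "inside" the
  // executor, so it surfaces as an uncaught exception instead of a rejection.
  const lookup = () => new Promise((resolve, reject) => {
    setTimeout(() => {
      // this throws before resolve() is ever called...
      resolve(JSON.parse('Welcome to the Company API.'))
    }, 10)
  })

  lookup()
    .then(top => console.log('got', top))
    .catch(err => console.log('never reached:', err.message))  // never runs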

As it turned out, a coworker found that Clearbit was doing some maintenance, and that might have been the genesis of the issue. But it was made much worse by the fact that when several of us debugged this on our own machines, the issue didn't present itself - only in production.

Still, it was clear this wasn't the right code to use - the library hadn't been updated in six years, and the code was small enough to replace. So a coworker suggested we just make the one call ourselves:

    // ...this is the body of our replacement async lookup function - it takes
    // the company 'name' and returns a success/error result object
    let res = {}
    try {
      // Get the top URL Suggestion for a store name
      const url = new URL(config.api.autocomplete)
      url.searchParams.append('query', name)
      // ...now make the call, and parse the JSON payload
      const payload = await fetch(url).then(response => response.json())
      if (Array.isArray(payload) && payload.length > 0) {
        // ...pick off the top suggestion
        res = payload[0]
      }
    } catch (err) {
      log.error(`[logoFinder] ...message... Name: '${name}', Details: "${err.message}"`)
      return {
        success: false,
        error: errorMessages.badClearBitRequest,
        exception: err,
      }
    }
    return {
      success: true,
      ...res,
    }
  }

The error message is really up to you, but the point is that this version handles the plain text being returned by the endpoint: the JSON parsing still throws, but now the exception is caught cleanly instead of escaping the way it did in the library we were using.

There were a few things I liked about the new implementation we came up with:

  • Explicitly setting the query param on the URL - while it's possible that 90% of all name values would never cause an encoding issue, it's always nice to be safe and let the URL class handle the proper encoding of the query params. It's two lines of code, but it ensures everything is handled properly.
  • The chaining of fetch() and then() - both fetch() and response.json() return Promises, so you might expect to see two awaits on this line, but there's only one. That's a nice feature of the then() chaining: it flattens the two Promises into one, so a single await delivers the parsed payload to the caller (see the sketch below).
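To make that concrete, here is a quick sketch - just a generic JSON endpoint, nothing from our code - of the two equivalent forms:

  // Two equivalent ways to get the parsed body; the chained .then() flattens
  // the two Promises, so a single await is enough in the second form.
  const twoAwaits = async (url) => {
    const response = await fetch(url)   // Promise<Response>
    return await response.json()        // Promise<parsed body>
  }

  const oneAwait = async (url) => {
    return await fetch(url).then(response => response.json())
  }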

Sure, we still need to get the first element of the Array, but we also test that it's actually an array, and that there's something to get. It's just a lot more defensive than the original client. When we made this change, we still got good results on the dev machines, and at the same time we got proper exception catching on the production instances.

Thankfully, the issue subsided about the time we got the fix written, tested, and into production, so it wasn't long-lived. And because of queues and retries, we were able to recover from the errors - another saving grace that I was very thankful for.

Nothing like a little production outage to make the day exciting. 🙂

Node, Docker, Google Cloud, and Environment Variables

Monday, November 14th, 2022


At The Shop, we're using Google Cloud Run for a containerized API written in Node, and it's a fine solution - really. But one of the issues we have run into is that of environment variables. We have a lot of them. The configuration for dev versus prod versus local development is all held in environment variables, and the standard way to pass them in is through the cloudbuild.yaml file, in the Build step:


steps:
  - name: gcr.io/cloud-builders/docker
    entrypoint: '/bin/bash'
    args:
      - '-c'
      - >-
        docker build --no-cache
        --build-arg BRANCH_NAME=$BRANCH_NAME
        --build-arg THESHOP_ENV=$_THESHOP_ENV
        --build-arg BASE_API_URL=$_BASE_API_URL
        -t $_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA
        . -f Dockerfile
    id: Build

and then in the Dockerfile, you have:

ARG BRANCH_NAME
RUN test -n "$BRANCH_NAME" || (echo 'please pass in --build-arg BRANCH_NAME' && exit 1)
ENV BRANCH_NAME=${BRANCH_NAME}
 
ARG THESHOP_ENV
RUN test -n "$THESHOP_ENV" || (echo 'please pass in --build-arg THESHOP_ENV' && exit 1)
ENV THESHOP_ENV=${THESHOP_ENV}
 
ARG BASE_API_URL
RUN test -n "$BASE_API_URL" || (echo 'please pass in --build-arg BASE_API_URL' && exit 1)
ENV BASE_API_URL=${BASE_API_URL}

which will place them in the environment of the built container. And all this is fine, until you start to hit the limits.

The cloudbuild.yaml command has a limit of 4,000 characters, and if you have large values, or a sufficient number of environment variables, you can exceed this - and we have. There is also a limit of 20 arguments to the docker build command, so again, we run into trouble if the number of environment variables grows past that. So what can be done?

Well... since we are using Google Cloud Secrets, we could write something to scan those secrets, pull them all into the running process, and stuff them into the process.env map for Node. But therein lies another problem: loading those secrets is asynchronous, while top-level definitions that use these environment variables - like, say, clients for Vendor services - are evaluated as soon as their module is loaded. So it's quite possible they will need those variables before we have had the chance to load them.
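To see why that ordering matters, here is a hypothetical example - the VendorClient name and the some-vendor-sdk package are made up for illustration - of the kind of top-level definition that is evaluated the moment the file is required, long before any asynchronous secret-loading could finish:

  // clients/vendor.js - a hypothetical module, for illustration only.
  // This constructor runs as soon as the file is require()'d, so if the
  // secrets are still being loaded asynchronously, VENDOR_API_KEY is
  // undefined here and the client is built with a bad configuration.
  const { VendorClient } = require('some-vendor-sdk')

  const vendor = new VendorClient({
    apiKey: process.env.VENDOR_API_KEY,   // not yet set if secrets load late
  })

  module.exports = { vendor }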

So what can we do?

The solution that seems to work is to have a separate script that is run in the Dockerfile and generates a .env file - one that resides only in the container, is created at the time the container is built, and contains all the environment variables we need. Then the Node app can just read them with the dotenv library.

To make this file, we have the end of the Dockerfile look like:

# now copy everything over to the container to be made...
COPY . .
# run the node script to generate the .env file
RUN THESHOP_ENV=${THESHOP_ENV} \
  GCP_SECRETS_API_EMAIL=${GCP_SECRETS_API_EMAIL} \
  GCP_SECRETS_API_KEY=${GCP_SECRETS_API_KEY} \
  GCP_BUILD_PROJECT=${GCP_BUILD_PROJECT} \
  npm run create-env
# run the migrations for the database to keep things up to date
RUN npx migrate up --store='@platter/migrate-store'
EXPOSE 8080
CMD [ "node", "-r", "dotenv/config", "./bin/www" ]

This way we give the create-env script the few key environment variables it needs to read the Google Cloud Secrets, and it generates the file. The create-env script is defined in the package.json as:

{
  "scripts": {
    "create-env": "node -r dotenv/config tools/make-env"
  }
}

and then the script itself is:

const arg = require('arg')
const { execSync } = require('child_process')
const { addSecretsToEnv } = require('../secrets')
const { log } = require('../logging')
 
const _help = `Help on command usage:
  npm run create-env -- --help         - show this message
  npm run create-env -- --file <name>  - where to write the env [.env]
  npm run create-env -- --verbose      - be noisy about it
 
  Nothing is required other than the THESHOP_ENV and some GCP env variables
  that can be specified on the command line.`;
 
/*
 * This is the main entry point for the script. We will simply read in all
 * the secrets for the THESHOP_ENV defined environment from the Cloud
 * Secrets, and then write them all to the '.env' file, as the default.
 * This will allow us to set up this environment nicely in a Dockerfile.
 */
(async () => {
  // only do this if we are run directly from 'npm run'...
  if (!module.parent) {
    // let's process the arguments and then do what they are asking
    const args = arg({
      '--help': Boolean,
      '--verbose': Boolean,
      '--file': String,
    })
    // break it into what we need
    const verbose = args['--verbose']
    const where = args['--file'] ?? '.env'
 
    // ... now let's pull in all the appropriate Secrets to the local env...
    log.info(`[makeEnv] loading the Secrets for ${process.env.THESHOP_ENV} into this environment...`)
    const resp = await addSecretsToEnv()
    if (verbose) {
      console.log(resp)
    }
    // ...and now we can write them out to a suitable file
    log.info(`[makeEnv] writing the environment to ${where}...`)
    const ans = execSync(`printenv > ${where}`).toString()
    if (verbose) {
      console.log(ans)
    }
    return
  }
})()

The addSecretsToEnv() function is where we use the Google Secrets Node client to read all the Secrets in our account and, one by one, pull them down and put them into process.env. The fact that this runs before the app starts is how we get around the asynchronous nature of the loading, and by having the result be a .env file, we can use all the normal tools to read and process it. We no longer need to worry about the top-level Vendor clients trying to configure themselves with environment variables that haven't been defined.
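The post doesn't show addSecretsToEnv(), but a minimal sketch of that kind of loader - assuming the @google-cloud/secret-manager client and the GCP_* variables passed in from the Dockerfile above, and not the actual implementation - might look something like this:

  // A hypothetical sketch of addSecretsToEnv() - not the real implementation.
  const { SecretManagerServiceClient } = require('@google-cloud/secret-manager')

  const addSecretsToEnv = async () => {
    const client = new SecretManagerServiceClient({
      credentials: {
        client_email: process.env.GCP_SECRETS_API_EMAIL,
        private_key: process.env.GCP_SECRETS_API_KEY,
      },
    })
    const parent = `projects/${process.env.GCP_BUILD_PROJECT}`
    // pull down every secret in the project, one at a time...
    const [secrets] = await client.listSecrets({ parent })
    const loaded = []
    for (const secret of secrets) {
      const [version] = await client.accessSecretVersion({
        name: `${secret.name}/versions/latest`,
      })
      // ...and drop each value into process.env under the secret's short name
      const key = secret.name.split('/').pop()
      process.env[key] = version.payload.data.toString('utf8')
      loaded.push(key)
    }
    return loaded
  }

  module.exports = { addSecretsToEnv }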

Now if Node had a way to force an async function to finish before moving on, then this wouldn't be necessary, as we'd simply call the addSecretsToEnv() in the Node start-up script, well ahead of the loading of the other files. But alas... that's not how it works.

This has turned out to be a very workable solution, and we get past the limitations of the cloudbuild.yaml file, which is a great relief.

Minor Bug in VoodooPad 6 – Text Colors

Monday, March 21st, 2022


Ever since moving to my new M1 Max MacBook Pro, and moving to Apple Silicon apps, I've had one bug in VoodooPad 6 that's a touch annoying. I mean, it's not horrible, but it was something that I hoped would be fixed. So I decided to let them know about it.

With the pandemic, and VoodooPad now being owned by Primate Labs, they haven't had someone to do all the upgrades and improvements they'd planned, and I understand that completely. Times are challenging for everyone, and they want to update the app - they just don't have the people to do it right now, so it has to wait a little.

Thankfully, they responded, and seem to know what the issue is - it's related to adding support for Dark Mode, which is kinda odd. But I don't know the inner workings of the codebase, or how Dark Mode support is handled, so I'll just have to be patient and wait for them to update the app as they have time.

But it would be really nice to be able to save text colors... 🙂

Interesting Node/JS Cloud Presence

Wednesday, September 8th, 2021


A friend pointed out Cloudflare Workers to me as a near-Heroku style deployment and run-time platform for Node/JS projects. It's really more turn-key than Heroku - it's targeted at the folks that don't want to maintain servers, or worry about global data centers, or anything like that. Just make a Node/JS project, deploy it, and pay for what you use.
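For a sense of just how turn-key it is, a "hello world" Worker really is about this small - a sketch in the classic service-worker syntax, before any routing or logic of your own:

  // A minimal Cloudflare Worker sketch - deploy it with wrangler, and
  // everything else (routing, storage, etc.) gets layered on from here.
  addEventListener('fetch', (event) => {
    event.respondWith(new Response('Hello from the edge!', { status: 200 }))
  })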

There are a lot of reasons to like something like this - it's what someone would need if they just wanted to write the code and leave all the other details to someone else. It's got a nice "Free Plan" for little hobby projects - much like Heroku - but it doesn't provide all the services that Heroku does in its store... then again, if this is a simple Node/JS app, you are probably set up with your own database and cloud services, so that's not necessarily a bad thing.

I haven't really used it on a project - yet, but I can imagine trying one - from my iPad and play.js... it might be interesting to see how it all might work in a non-laptop mode... 🙂

Unexpected Crash of PDFpenPro 13.0.1

Tuesday, July 27th, 2021


Back in June, I wrote to the folks at Smile about an issue I was having with PDFpenPro 13.0.1 on my laptop. It was annoying because I could copy and paste some Form Fields into a document, but then when I tried to save it, it'd lock up and go non-responsive - the "spinning beachball", with Finder reporting it as not responding. It was very repeatable, but depended on the file: some had this issue, some didn't.

They were able to reproduce this issue, and said they'd get on it. I was thrilled that it would soon be resolved, as it was a great PDF authoring tool for Forms and Text Tags - which I've been doing a lot for a project at The Shop.

Then I read that Smile was selling PDFpenPro and PDFpen for iOS/iPadOS to Nitro, and I got a little concerned that the bug report, and the corresponding fix, might fall through the cracks. So today I sent an email asking for a status on the issue, and we'll see what they have to say. I really do hope they fix this crashing bug, because other than that, PDFpenPro is an excellent tool.

UPDATE: they wrote back, and for the time being, support for PDFpen(Pro) is being handled by the Smile folks, and Jeff at Smile mentioned that it was still an open issue, and that he'd pass along my question about a status. We will see.

Getting Ready for Apple Silicon – CleanShot X

Thursday, July 22nd, 2021


This morning I was thinking about the move to the upcoming M1/M2 MacBook Pros that are supposed to be coming out later this year, and I decided it was time to move off my old screen capture and annotation tool, Annotate, and move to something that's: 1) Supported... 2) Going to be built for Apple Silicon. And when I read a review about CleanShot and SnagIt, I decided to look into both - SnagIt first.

The reviewer had it right - SnagIt has way more than what I need, and the increased feature set means complexity I just don't need. Annotate was great... simple, easy, it did all I needed. SnagIt is just too much. But CleanShot is right in line with what I was looking for!

I needed something to make nice screenshots - both area- and window-based. I also wanted to be able to draw arrows from the head to the tail, as several of the screen annotation tools I've used have worked that way, and my arrows are so much more precise because of it. And I wanted white-outlined text so that it's easy to read, regardless of the image below it.

CleanShot does all of those. It's exactly what I was looking for. So I got the basic app, with the 1GB of storage, and we'll see how it goes. If I want more updates in a year, I'll renew then. But I didn't need the "Pro" features like unlimited storage, and I really don't want to pay a monthly fee for software like this - it's not critical to what I do.

So here we go... and we'll see how this works out. I have high hopes. 🙂

Nice Updates to GitHub Codespaces

Friday, June 25th, 2021


When last I looked at GitHub Codespaces, it had a few issues that meant I couldn't really do all the development I needed because it couldn't yet forward ports on the web-based UI. I found a way to run Postgres on a Codespace, so I'd have a built-in database, and it was persistent across restarts - which was marvelous. And I could customize the UI to be pretty much exactly what I needed to get work done.

But it was that nagging port forwarding that really meant I couldn't get real work done - not like I could on my laptop. And then I decided to give it another look... and they have not been sitting idly by. 🙂

The latest update to Codespaces has a much improved UI in the browser. It seems nearly native on my iPad Pro, and handles both the touch and trackpad gestures. Very nicely done. It also has a slight difference in how the workspace is mounted, so I had to update the cleanup() script in my .bashrc file:

  #
  # This is a simple function to cleanup the GitHub Codespace once it's been
  # created. Basically, I need to remove the left-overs of the dotfiles setup
  # and clean up the permissions on all the files.
  #
  function cleanup () {
    pushd $HOME
    echo "cleaning up the dotfiles..."
    rm -rf dotfiles install README.md
    echo "resetting the ownership of the /workspaces..."
    sudo chown -R drbob:drbob /workspaces
    echo "cleaning up the permissions on the /workspaces..."
    sudo chmod -R g-w /workspaces
    sudo chmod -R o-w /workspaces
    sudo setfacl -R -bn /workspaces
    echo "done"
    popd
  }

and with this, all my new Codespaces will have the right permissions, and the terminal will look and act like it should. Not too bad a change. But the real news is in the forwarded ports.

It appears that what GitHub has done is to open the port(s) on the running Docker image so that you can easily open a browser to the Jetty service that's running on port 8080. It's really just as good as the port forwarding, and it completes the last needed capability to use Codespaces for real development.

If there were one more thing I'd wish for, it's that this would be included in the GitHub iPad app - so that the files are held and edited locally, while the connection to the Docker instance stays remote.

Maybe soon. 🙂

GitHub Actions are Very Impressive

Wednesday, June 16th, 2021


Several weeks ago, The Shop made the decision to implement CI/CD on the code repositories at GitHub using GitHub Actions. And it has been an amazing success. You can set up individual workflows that respond to repository events like push, and the jobs all run in parallel, which is great for speed.
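As a rough illustration - the names and commands here are placeholders, not The Shop's actual setup - a minimal workflow that runs the tests on every push looks something like this:

  # .github/workflows/ci.yaml - a minimal, hypothetical example
  name: CI
  on: [push]

  jobs:
    test:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v3
        - uses: actions/setup-node@v3
          with:
            node-version: 18
        - run: npm ci
        - run: npm test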

One of the things I have really enjoyed about the Actions is that GitHub gives each project quite a lot of free compute minutes for the execution of the Actions. This means that if you have a relatively static project, this is likely something you will be able to use for free. And if it's a business, you could prove out that this will work for you before having to invest in more tooling.

When you do run up against the limits of the free plan, the only thing that will happen is that the Actions will all fail. This is completely understandable, and a very reasonable fall-back position for projects. Add a billing source, and you're back in business. Very nicely done.

Interesting Proxy of Javascript Objects

Saturday, March 27th, 2021


Ran into something this week, and I wanted to distill it to an understandable post and put it here for those that might find a need and run across it while searching. There are a lot of posts about the use of the Javascript Proxy object. In short, its goal is to allow a user to wrap an Object - or function - with a similar object, intercept (aka trap) the calls to that Object, and modify the behavior of the result.

The examples are all very nice... how to override getting a property value... how to add default values for undefined properties... how to add validation to setting of properties... and all these are good things... but they are simplistic. What if you have an Object that's an interface to a service? Like Stripe... or HelloSign... or Plaid... and you want to be able to augment or modify function calls? What if they are returning Promises? Now we're getting tricky.

The problem is that what's needed is a little more general example of a Proxy, and so we come to this post. 🙂 Let's start with an Object that's actually an API into a remote service. For this, I'll use Platter, but it could have as easily been Stripe, HelloSign, Plaid, or any of the SaaS providers that have a Node Client.

We create an access Object simply:

  const baseDb = new Postgres({
    key: process.env.PLATTER_API_KEY,
  })

but Postgres will return lower-cased, snake_case column names, and we really want camel case - where first_name in the table becomes firstName in the objects returned.
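That mapping is exactly what the camelcase-keys library does - a quick sketch, assuming a CommonJS-compatible version of the package:

  // camelcase-keys turns snake_case keys into camelCase keys on plain objects
  const camelCaseKeys = require('camelcase-keys')

  camelCaseKeys({ first_name: 'Bob', last_login: '2021-03-27' })
  // => { firstName: 'Bob', lastLogin: '2021-03-27' }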

So for that, we need to Proxy this access Object, and change the query function to run camelCaseKeys from the camelcase-keys Node library. So let's start by recognizing that the function call is really accessed with the get trap on the Proxy, so we can say:

  const db = new Proxy(baseDb, {
    get: (target, prop) => {
      if (prop === 'query') {
        return (async (sql, args) => {
          const rows = await target.query(sql, args)
          return rows.map(camelCaseKeys)
        })
      }
    }
  })

The query() function on the access Object returns a Promise, so when prop equals query, the get trap needs to return a function with a similar signature - same inputs and output. And that's just what:

        return (async (sql, args) => {
          const rows = await target.query(sql, args)
          return rows.map(camelCaseKeys)
        })

does. It takes the two arguments: a SQL string that will become a prepared statement, and a list of replacement values for the prepared statement.

This isn't too bad, and it works great. But what about all the other functions that we want to leave as-is? How do we let them pass through unaltered? Well... from the docs, you might be led to believe that something like this will work:

        return Reflect.get(...arguments)

But that really doesn't work for functions - async or not. So how to handle it?

The solution I came to involved making a few predicate functions:

  function isFunction(arg) {
    return arg !== null &&
      typeof arg === 'function'
  }
 
  function isAsyncFunction(arg) {
    return arg !== null &&
      isFunction(arg) &&
      Object.prototype.toString.call(arg) === '[object AsyncFunction]'
  }

which simply test whether the argument is a function, or an async function. So let's use these to expand the code above and add an else to the if:

  const db = new Proxy(baseDb, {
    get: (target, prop) => {
      if (prop === 'query') {
        return (async (sql, args) => {
          const rows = await target.query(sql, args)
          return rows.map(camelCaseKeys)
        })
      } else {
        const value = target[prop]
        if (isAsyncFunction(value)) {
          return (async (...args) => {
            return await value.apply(target, args)
          })
        } else if (isFunction(value)) {
          return (...args) => {
            return value.apply(target, args)
          }
        } else {
          return value
        }
      }
    }
  })

In this addition, we get the value of the access Object at that property. This could be an Object, an Array, a String, a function... anything. But now we have it, and now we can use the predicate functions to see how to treat it.

If it's an async function, we create a new async function - taking any number of arguments, thereby matching any input signature - and apply the original function to the target with those arguments. If it's a simple synchronous function, we do the same thing, but as a direct call.

If it's not a function at all, then it's a simple data accessor - and return that value to the caller.

With this, you can augment the behavior of the SaaS client Object, and add in things like the mapping of keys... or logging... or whatever you need - and pass the rest through without any concerns.
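And using the proxied client looks exactly like using the original - a hypothetical query (inside some async function) just to show the key mapping and the pass-through in action:

  // query() now returns camelCased rows, and any other method or property
  // on the client passes straight through to baseDb untouched.
  const rows = await db.query('SELECT first_name FROM users WHERE id = $1', [42])
  console.log(rows[0].firstName)   // the first_name column, camelCased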