Node, Docker, Google Cloud, and Environment Variables
Monday, November 14th, 2022

At The Shop, we're using Google Cloud Run for a containerized API written in Node, and it's a fine solution - really. But one of the issues we have run into is that of environment variables. We have a lot of them. The configuration for dev versus prod versus local development is all held in environment variables, and the standard way for these to be passed is in the cloudbuild.yaml file, in the Build step:
```yaml
steps:
  - name: gcr.io/cloud-builders/docker
    entrypoint: '/bin/bash'
    args:
      - '-c'
      - >-
        docker build --no-cache
        --build-arg BRANCH_NAME=$BRANCH_NAME
        --build-arg THESHOP_ENV=$_THESHOP_ENV
        --build-arg BASE_API_URL=$_BASE_API_URL
        -t $_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA
        . -f Dockerfile
    id: Build
```
and then in the Dockerfile, you have:
```dockerfile
ARG BRANCH_NAME
RUN test -n "$BRANCH_NAME" || (echo 'please pass in --build-arg BRANCH_NAME' && exit 1)
ENV BRANCH_NAME=${BRANCH_NAME}

ARG THESHOP_ENV
RUN test -n "$THESHOP_ENV" || (echo 'please pass in --build-arg THESHOP_ENV' && exit 1)
ENV THESHOP_ENV=${THESHOP_ENV}

ARG BASE_API_URL
RUN test -n "$BASE_API_URL" || (echo 'please pass in --build-arg BASE_API_URL' && exit 1)
ENV BASE_API_URL=${BASE_API_URL}
```
which will place them in the environment of the built container. And all this is fine, until you start to hit the limits.
The command in the cloudbuild.yaml file has a limit of 4,000 characters, and if you have large values, or a sufficient number of environment variables, you can exceed this - and we have. There is also a limit of 20 arguments to the docker build command, so again, we run into trouble once the number of environment variables grows beyond that. So what can be done?
Well... since we are using Google Cloud Secrets, we could write something to scan those secrets, pull them all into the running process, and stuff them into the process.env map for Node. But therein lies another problem: Node is asynchronous, so if we have top-level definitions that use these environment variables - say, clients for Vendor services - then it's quite possible that they will need those variables before we have had the chance to load them.
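To make that hazard concrete, here's a minimal sketch of the kind of top-level client that bites you - the vendor SDK and variable names here are hypothetical, not our actual code:

```javascript
// vendor-client.js - a hypothetical module, evaluated at require() time.
// The client is constructed the moment this file is loaded, so if the
// Secrets haven't been copied into process.env yet, apiKey is undefined
// and the client is built with bad credentials.
const { VendorClient } = require('some-vendor-sdk')

module.exports = new VendorClient({
  apiKey: process.env.VENDOR_API_KEY,
})
```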
So what can we do?
The solution that seems to work is to have a separate app that is run in the Dockerfile and generates a .env file that resides only in the container, is built at the time the container is built, and contains all the environment variables we need. Then the Node app can just read these with the dotenv library.
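For reference, the runtime side of that is just standard dotenv usage - a sketch, with BASE_API_URL standing in for any of our variables:

```javascript
// at the very top of the app's entry point - or skip this line entirely
// and preload it with 'node -r dotenv/config', as the Dockerfile below does
require('dotenv').config()

// everything written to the .env file at build time is now on process.env
const baseUrl = process.env.BASE_API_URL
```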
To make this file, we have the end of the Dockerfile look like:
```dockerfile
# now copy everything over to the container to be made...
COPY . .

# run the node script to generate the .env file
RUN THESHOP_ENV=${THESHOP_ENV} \
    GCP_SECRETS_API_EMAIL=${GCP_SECRETS_API_EMAIL} \
    GCP_SECRETS_API_KEY=${GCP_SECRETS_API_KEY} \
    GCP_BUILD_PROJECT=${GCP_BUILD_PROJECT} \
    npm run create-env

# run the migrations for the database to keep things up to date
RUN npx migrate up --store='@platter/migrate-store'

EXPOSE 8080
CMD [ "node", "-r", "dotenv/config", "./bin/www" ]
```
This gives the create-env script the few key environment variables it needs to read the Google Cloud Secrets, and it then generates the file. The create-env script is defined in the package.json as:
{ "scripts": { "create-env": "node -r dotenv/config tools/make-env" } }
and then the script itself is:
```javascript
const arg = require('arg')
const { execSync } = require('child_process')

const { addSecretsToEnv } = require('../secrets')
const { log } = require('../logging')

const _help = `Help on command usage:
  npm run create-env -- --help        - show this message
  npm run create-env -- --file <name> - where to write the env [.env]
  npm run create-env -- --verbose     - be noisy about it
Nothing is required other than the THESHOP_ENV and some GCP env variables
that can be specified on the command line.`;

/*
 * This is the main entry point for the script. We will simply read in all
 * the secrets for the THESHOP_ENV defined environment from the Cloud
 * Secrets, and then write them all to the '.env' file, as the default.
 * This will allow us to set up this environment nicely in a Dockerfile.
 */
(async () => {
  // only do this if we are run directly from 'npm run'...
  if (!module.parent) {
    // let's process the arguments and then do what they are asking
    const args = arg({
      '--help': Boolean,
      '--verbose': Boolean,
      '--file': String,
    })
    // if they just want help, print it and we are done
    if (args['--help']) {
      console.log(_help)
      return
    }
    // break it into what we need
    const verbose = args['--verbose']
    const where = args['--file'] ?? '.env'
    // ... now let's pull in all the appropriate Secrets to the local env...
    log.info(`[makeEnv] loading the Secrets for ${process.env.THESHOP_ENV} into this environment...`)
    const resp = await addSecretsToEnv()
    if (verbose) {
      console.log(resp)
    }
    // ...and now we can write them out to a suitable file
    log.info(`[makeEnv] writing the environment to ${where}...`)
    const ans = execSync(`printenv > ${where}`).toString()
    if (verbose) {
      console.log(ans)
    }
    return
  }
})()
```
The addSecretsToEnv() function is where we use the Google Secrets Node Client to read all the Secrets in our account and, one by one, pull them down and put them into process.env. The fact that this runs before the app starts is how we get around the asynchronous nature of Node, and by having the result land in a .env file, we can use all the normal dotenv tooling to read and process it - we no longer need to worry about the top-level Vendor clients trying to define themselves with environment variables that haven't been set.
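We won't reproduce the whole module here, but a minimal sketch of what addSecretsToEnv() might look like, assuming the official @google-cloud/secret-manager client, the GCP_BUILD_PROJECT variable from the Dockerfile above, and that we always want the 'latest' version of every Secret, would be:

```javascript
// secrets/index.js - a sketch, not the actual implementation from the post
const { SecretManagerServiceClient } = require('@google-cloud/secret-manager')

async function addSecretsToEnv() {
  const client = new SecretManagerServiceClient()
  const parent = `projects/${process.env.GCP_BUILD_PROJECT}`
  const loaded = []
  // list every Secret in the project...
  const [secrets] = await client.listSecrets({ parent })
  for (const secret of secrets) {
    // ...pull down the latest version of each one...
    const [version] = await client.accessSecretVersion({
      name: `${secret.name}/versions/latest`,
    })
    // ...and drop it into the environment under its short name
    const key = secret.name.split('/').pop()
    process.env[key] = Buffer.from(version.payload.data).toString('utf8')
    loaded.push(key)
  }
  return loaded
}

module.exports = { addSecretsToEnv }
```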
Now if Node had a way to force an async function to finish before moving on, then this wouldn't be necessary, as we'd simply call addSecretsToEnv() in the Node start-up script, well ahead of the loading of the other files. But alas... that's not how CommonJS works - top-level await only exists in ES modules.
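For what it's worth, if the codebase were ES modules, top-level await would do exactly this - a sketch, with hypothetical file names:

```javascript
// index.mjs - ES modules only; CommonJS has no top-level await
import { addSecretsToEnv } from './secrets/index.mjs'

// block until every Secret is in process.env...
await addSecretsToEnv()

// ...then load the rest of the app, whose top-level Vendor clients
// can now safely read the environment
await import('./app.mjs')
```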
This has turned out to be a very workable solution, and we get past the limitations of the cloudbuild.yaml file, which is a great relief.