Archive for the ‘Open Source Software’ Category

Interesting Messaging Client – Telegram

Tuesday, August 10th, 2021

chat.jpg

I was chatting with a friend this morning - he has a new job at a blockchain company, and reached out to me on GTalk, which he can access from his messaging client: Telegram. Now I'd never heard of Telegram, even though it's been around for ages, and my friend says it's pretty much the de facto standard in the crypto space. It makes sense - the feature list is exactly what you'd expect from that space: Simple, Private, Fast, Open.

Also, they have clients for all platforms, and the clients all stay secure and in sync. It's a nice idea, and while I think the clients aren't minimal enough, maybe that's something that you can change - after all, the code is all open source, and stripping out is usually simpler than adding in. πŸ™‚

It's something to keep in mind... Interesting space, and challenges...

Excited about iTerm2 Window Restoration

Friday, August 6th, 2021

iTerm2

This morning, I was wondering if iTerm2 had yet added the feature to restore all the window positions on restart. In the past, I used the Open Default Window Arrangement - making sure to save any changes before a restart. But there were issues with that: one, I'd forget... two, on restart, all the windows would end up on the first screen, and I'd have to move them to the six (or so) screens they needed to be on. While it's not horrible, it's time-consuming.

This morning, I did a quick search to see if there was any status update on that... and I was very happy to see that when I wasn't looking, they seemed to have added that option in the Settings of iTerm2.

Go to the General -> Startup settings in iTerm2, select Use System Window Restoration Setting, and I should be good to go. I haven't had the chance to test it, but I'm hoping it's going to be exactly what I want - right down to putting the windows on the correct screens.

UPDATE: when updating to macOS 11.5.2 this morning, this worked perfectly. The windows are all in the right places, and the contents of each tab (session) are still there to review. It's just exactly what I'd hoped for. πŸ™‚

Published a PostGrid Node Client

Tuesday, July 27th, 2021

TypeScript

On the heels of the Notarize Node Client, we took the time to create a Node Client for the PostGrid service - where they handle regular Postal Delivery of PDFs and HTML pages - and we needed that at The Shop. It was easy enough to build on the previous client, just updating the different domain elements and handling the data interfaces. Not bad at all.

One thing I did have a few issues with was the handling of the Form Data for the posts to the service. There were endpoints that could accept application/json data, and some that required multipart MIME data from a FormData element. Thankfully, I'd had to work with this for some additions we made to the HelloSign Node Client, but that was a lot easier because the basic client was written by the HelloSign engineers, and we just had to add the ability to post PDF documents provided as Buffer objects.

In all, it wasn't all that bad, and now I have a core TypeScript library for building almost any client for a RESTful service with either JSON or Form Data. That's a nice place to be. πŸ™‚

Published Notarize Node Client

Monday, May 24th, 2021

TypeScript

Today I was able to publish my first Open Source TypeScript npm library for using the Notarize service. Their docs are good, but all they really offer is documentation of the REST endpoints for the service - which is nice, but it's much nicer to have a good client that makes accessing the functions of the service easy. So at The Shop, we decided to make a Client, and then give it to the Notarize folks so that they can offer it to other customers looking for a simpler access interface.

This was a nice foray into TypeScript, because the interfaces are easy to define, the domain components of the service are nicely separable, and things generally worked out quite nicely. The tests didn't seem to fit into a simple CI/CD pipeline, but that's something that we can work on - if needed, and now that it's out in the wild, we will see how it's used, and if we get requests for additions.

All in all, it was fun to get this out. And it made working with the service much nicer. πŸ™‚

Nice OWASP Update Tools for Node/JS

Wednesday, April 7th, 2021

NodeJS

This morning I did a little security updating on a project at The Shop - a few OWASP issues for dependencies. One had risen to a high level, so it seemed like a good time to dig into the updating process.

In the past, for Java and Clojure projects, I've had to go look up the recent versions of each library to see if they correct the security issue - and if they do, are there other updates I have to make in order to handle any changes from these security-related updates? It was oftentimes a very tedious process, and doing it for Java Spring projects was almost like Black Magic.

Imagine my surprise when I found that Node/JS already has this covered. Simply run:

  $ npm audit

and not only will it list all the OWASP security vulnerabilities, but it will also provide you with the specific npm commands to update (aka install) the specific package, and how far down the nesting tree that package sits.

Run the commands specified by the npm audit command, and you'll update just what's needed, and not have to go through the process manually. What a refreshing change to my previous encounters with fixing OWASP vulnerabilities. πŸ™‚

Putting async at the Top Level of Node

Thursday, March 25th, 2021

NodeJS

The use of async/await in JavaScript is a nice way to make traditional Promise-based code more linear, and yet for the top-level code in a Node script, await can't easily be used, because it's not within an async function. Looking at the traditional top-level script for a Node/Express project, you would look at bin/www and see:

  #!/usr/bin/env node
 
  // dotenv is only installed in local dev; in prod environment variables will be
  // injected through Google Secrets Manager
  try {
    const dotenv = require('dotenv')
    dotenv.config()
  } catch {
    // Swallow expected error in prod.
  }
 
  // load up all the dependencies we need
  const app = require('../app')
  const debug = require('debug')('api:server')
  const http = require('http')

which starts off by loading the dotenv function to read the environment variables into the Node process, and then starts loading up the application. But you can't just toss in an await if you need to make a network call... or a database call.

Sure, you can use a .then() and a .catch(), and put the rest of the startup script into the body of the .then()... but that's a little harder to reason through, and if you need another Promise call, it only nests deeper, or adds another .then().

Possible, but not clean.
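To make that concrete, here's a sketch of what the chained style looks like - the loadConfig step is purely illustrative, standing in for any async startup work:

```javascript
// startup as a Promise chain: workable, but every additional async
// step has to nest inside, or chain onto, the .then() calls
function loadConfig () {
  // stand-in for an async startup step, e.g. reading secrets
  return Promise.resolve({ port: 3000 })
}

loadConfig()
  .then((cfg) => {
    // any further async work has to chain on here...
    console.log(`starting on port ${cfg.port}`)
  })
  .catch((err) => {
    console.error(err)
  })
```

Each new async dependency means another link in the chain, which is exactly the friction the wrapper below removes.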

If we wrap the entire script in an async function, like:

  #!/usr/bin/env node
  (async () => {
    // normal startup code
  })();

then the bulk of the bin/www script is now within an async function, and so we can use await without any problems:

  #!/usr/bin/env node
 
  (async () => {
 
    // dotenv is only installed in local dev; in prod environment variables will be
    // injected through Google Secrets Manager
    try {
      const dotenv = require('dotenv')
      dotenv.config()
    } catch {
      // Swallow expected error in prod.
    }
 
    // augment the environment from the Cloud Secrets
    try {
      const { addSecretsToEnv } = require('../secrets')
      await addSecretsToEnv()
    } catch (err) {
      console.error(err)
    }
 
    // load up all the dependencies we need
    const app = require('../app')
    const debug = require('debug')('api:server')
    const http = require('http')

While this indents the bulk of the bin/www script, which stylistically isn't as clean as no-indent, it allows the remainder of the script to use await without any problem.

Not a bad solution to the problem.

Fun Feature Request for iTerm2

Wednesday, December 30th, 2020

iTerm2

A few days ago, I sent an email to the iTerm2 developer, and asked him the following question:

...and maybe this is a silly request, but I would really enjoy the option to put the Emoji Picker on the TouchBar of my MacBook Pro while in iTerm2… I know it’s not a Big Deal - so dragging it off/on in the customization makes sense, but there are a lot of times in my Git commit messages that I’d like to be able to toss in an emoji

and this morning I got a (surprise) response:

Thanks for pointing this out. There’s no good reason why it shouldn’t be allowed. Commit 1ae34d90f adds it. You can test in the next nightly build, due out in about an hour. In order to avoid breaking existing setups, it’s not in the default setup. You need to choose View > Customize Touch Bar to add it.

which is perfect for what I was hoping to have.

One of the best uses I've found for the TouchBar on the MacBook Pro is the Emoji Picker - as it's perfect for Instant Messaging, and Twitterrific, and at The Shop it's a big thing to have a nice, representative emoji as the first character of a pull request title. This is OK with LaunchBar, but it's not as convenient as the TouchBar Emoji Picker, and that's really what I was hoping to use it for. But until recently, iTerm2 just didn't allow it in the configuration of the TouchBar.

I am as pleased as I can be. Sounds silly, but it's nice to see that your thoughts aren't completely left-field to others. πŸ™‚

UPDATE: the v3.4.4beta2 release has the Emoji Picker. I'm just smiling. πŸ™‚

Working with Node/JS and Express for Services

Tuesday, December 29th, 2020

Javascript

At The Shop, we are using a completely different platform than I've used in the past - Node/JS and Express, as well as Platter for the back-end database. It's been a steep learning curve for me, but I have to say today was a nice day where I really started to feel like I was getting a handle on the tools. What has been really exciting to me with Express is the ease with which I can build middleware to insert into the calling stack.

For Ring middleware, in Clojure, it's not horrible, but it's not trivial to understand the calling order, and the passing of the handler to all the middleware. In Express, it's far simpler - you simply have a function that takes the request, the response, and the next function in the calling stack, and that's it. You can augment the request, and that's basically what a lot of middleware is about - adding authentication tokens, looking up permissions, etc. It's all adding to the request to be used in the simpler endpoint routes.
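As a sketch of that contract - the function and field names here are made up for illustration, not from the actual project:

```javascript
// a minimal Express-style middleware: a plain function of (req, res, next)
// that augments the request and passes control down the stack
function requestTimer (req, res, next) {
  req.startedAt = Date.now()   // downstream handlers can read this
  next()
}

// Express would wire it in with app.use(requestTimer); simulated here
// with plain objects just to show the calling convention
const req = {}
requestTimer(req, {}, () => {
  console.log(typeof req.startedAt) // prints "number"
})
```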

When working with Passport for the authentication framework, it's great that it fits in with Express so well, but one of the issues that I ran into today was that the middleware added to the top-level Express app would be executed before the Passport authentication middleware that was in place on each individual endpoint. It makes sense - not all endpoints need authentication, so adding that with Passport would naturally be done after the top-level middleware. But that made some of the middleware I'd written non-functional.

The Passport authentication scheme can be set up to easily add the user object to the Express request, and then for all endpoints, it's "Just There". I had expected to add middleware that would take that user and use it to look up other attributes and data to add to the request as well. But if the middleware I'd written was placed at the top-level, then it wouldn't have the user object on the request, and so it'd never work.
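What I'd written was along these lines - a sketch with hypothetical names, where the middleware only works if Passport has already attached user to the request:

```javascript
// hypothetical middleware that depends on Passport having set req.user;
// mounted at the top level, req.user is never there yet, so it always fails
function accountMiddleware (req, res, next) {
  if (!req.user) {
    res.statusCode = 401
    return next(new Error('no authenticated user on the request'))
  }
  // illustrative lookup - in a real app this might hit a database
  req.account = { owner: req.user.id, plan: 'standard' }
  next()
}

// simulated call, as if Passport had already run
const req = { user: { id: 'u42' } }
accountMiddleware(req, {}, () => {})
console.log(req.account.owner) // prints "u42"
```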

The solution was so elegant, I'm convinced that this had to be anticipated by the Express developers. πŸ™‚ Each of the routes wired into the Express app takes a path and a router:

  app.use('/login', loginRouter)
  app.use('/company', companyRouter)

and when you add in the Passport support for JWT authentication with a Bearer token, you get something like:

  app.use('/login', loginRouter)
  app.use('/company', passport.authenticate('jwt', { session: false }), companyRouter)

where the /login endpoint is not protected by the JWT, and the /company endpoint is. This seemed like a very late stage to put in the Passport middleware, but as it turns out, Express can handle an array, or a list of middleware in the use() function. So we can say:

  const authStack = [
    passport.authenticate('jwt', { session: false }),
    tenantMiddleware,
    accountMiddleware,
  ]
  app.use('/login', loginRouter)
  app.use('/company', authStack, companyRouter)

where the authStack is the additional middleware for the individual routes, and it's handled in the order it appears in the array.

And it works like a champ. It's just amazing that we can create different stacks of middleware and, as long as we layer them properly, set up a wide diversity of behavior. For this, it's great that we can group the authentication-focused middleware into an array, and then easily drop that on the endpoints that need it.

Very slick. πŸ™‚

Setting up Versioned Node Environment

Wednesday, November 25th, 2020

Javascript

Today I spent a little time with a good friend helping me get going on a good, versioned Node environment - a lot like RVM for Ruby. I wanted to do this because it looks like I might be doing some work for a company where the development is all based in Node, and I wanted to make sure I got it all set up right.

I just finished reading a nice book on ES5, ES6, Promises, async and await, and all the new features of JavaScript called Simplifying JavaScript from the Pragmatic Programmers. They are having a Thanksgiving Sale, and it seemed like a great time to pick up a book that I'd probably like on the subject. I did.

It's been a long time since I spent any real time on JavaScript, and if I'm going to be taking a bite out of this project, I wanted to make sure I had come up to speed on JavaScript, and Node as well. The book was laid out well, with all the ideas based on a decent understanding of JavaScript, but not the latest additions. It really read well to me, and I was able to finish it in two days.

So, here's what I needed to do on my 16" MacBook Pro to get things up and running... πŸ™‚

Start off by installing nodenv from Homebrew. This is the equivalent of rvm, and will manage the Node versions nicely for me.

  $ brew install nodenv

I then needed to add in the environmental set-up in my ~/.zlogin file by adding:

  # now do the nodenv stuff
  eval "$(nodenv init -)"

right after I set up my PATH and RVM environment things. It's very similar to the RVM approach, with directory-level controls, as well as system-wide defaults.

At that point, I can source my ~/.zlogin and then I'm ready to go. Next, is to install a good, long-term stable (LTS) version of Node:

  $ nodenv install 14.15.1
  $ nodenv global 14.15.1

where the second command sets that version as the global default for new projects, etc. You can always check the versions with:

  $ nodenv versions
  * 14.15.1 (set by /Users/drbob/.nodenv/version)

Next was to install a few global tools with npm that I'd need:

  $ npm install -g express-generator
  $ npm install -g nodemon
  $ nodenv rehash

where the first is the basic RESTful pattern for a service, and the latter is a way to run a Node app while monitoring the filesystem for changes to the files, and reloading them automatically. This will no doubt prove to be exceptionally handy. The rehash command is something he's found to be necessary when installing new global tools, as they don't seem to get picked up in the PATH properly without it. Fair enough.

At this point, we can make a new project, just to play with the new features in the book. Start by making a directory to put all this, and then use the express-generator to make the skeleton we need:

  $ mkdir playground
  $ cd playground
  $ express api --no-view

and now, in the api/ directory we have what we need to get started. Simply have npm pull everything down:

  $ cd api
  $ npm install

and we are ready to go.

There is an index.html file in the public/ directory, and we can use that... and running the Node server is as simple as:

  $ node bin/www
  ... this is the log output... 

or if we want to use the file-watching version, we can say:

  $ nodemon bin/www
  ... this is the log output... 

The port is set in the bin/www script, but I'm guessing the default is port 3000, so if you go to localhost:3000 you'll see the GET calls, and the page. Very slick... very easy. πŸ™‚
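For reference, the generated bin/www resolves its port with a pattern roughly like this - a sketch from memory, not the exact generated file:

```javascript
// express-generator's bin/www pattern: take the PORT environment
// variable if set, otherwise fall back to 3000
function normalizePort (val) {
  const port = parseInt(val, 10)
  if (isNaN(port)) return val   // a named pipe
  if (port >= 0) return port    // a valid port number
  return false
}

const port = normalizePort(process.env.PORT || '3000')
console.log(port)
```

So exporting PORT before running node bin/www should be enough to move the server off 3000.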

Once I get this into a git repo, or start working on a real project/git repo, I'll see if I can get it loaded up on my iPad Pro using play.js - as it appears to be able to run all this, and have a web page to hit it... so that would be very interesting to work with - given the power of the iPad Pro, and the perfect size.

UPDATE: Indeed... once I pushed the code to GitHub, and then went into play.js on my iPad Pro, I could Create a new project from a Git Clone, and after putting in the repo location, the SSH Keys, etc., it all came down. Then it was just resolving the dependencies with the UI, setting the "start" command to be the npm command in the package.json, and it ran.

Open up the play.js web browser, and it's there. On port 3000, just like it's supposed to be. And editing the file, refreshing the page - it's there. No saving, it's just there. Amazing. This is something I could get used to.

Updating Postgres to 13.1 with Homebrew

Monday, November 23rd, 2020

PostgreSQL.jpg

With the update to macOS 11 Big Sur, and the updates I keep doing to my linode box, I thought it would be a nice time to update Postgres to the latest that Homebrew had. I was expecting 12.5, as that's what's latest for Ubuntu 20.04 LTS at linode, and it was surprisingly easy - even with the major version update.

The standard upgrade for anything in Homebrew is:

  $ brew upgrade postgres

and it will upgrade the binaries as well as upgrade the database files - if it's a minor release change. But if it's a major release change - like it was for me from 12.x to 13.x, then you also have to run:

  $ brew postgresql-upgrade-database

and that will update the database files and place the old database files in /usr/local/var/postgres.old so after you're sure everything is running OK, you just need to:

  $ rm -rf /usr/local/var/postgres.old

and it's all cleaned up.

The one wrinkle is that I have set the environment variable to skip automatic cleanup of old package versions - because I wanted to have multiple versions of Leiningen hanging around - so I needed to clean up the old versions of postgres with:

  $ brew cleanup postgres
  Removing: /usr/local/Cellar/postgresql/11.1_1... (3,548 files, 40.3MB)
  Removing: /usr/local/Cellar/postgresql/12.1... (3,217 files, 37.7MB)  

and then the old versions are cleaned up as well.

I was expecting 12.5... but got all the way to 13.1 - nice. πŸ™‚