Archive for the ‘Vendors’ Category

Google Cloud has some Nice Tools

Saturday, March 13th, 2021

Google Cloud

Today I've been working on some code for The Shop, and one of the things I've come to learn is that for just about every feature, or service, of AWS, Google Cloud has a mirror image. It's not a perfect mirror, but it's pretty complete. Cloud Storage vs. S3... Tasks vs. SQS... it's all there, and in fact, today, I really saw the beauty of Google Cloud Tasks over AWS SNS/SQS in getting asynchronous processing going smoothly on this project.

The problem is simple - a service like Stripe has webhooks, or callbacks, and we need to accept them and return as quickly as possible, even though we have significant processing we'd like to do on each event. There's just no time to do that work inline, or Stripe will think we're down, and that's no good. So we need to make a note of the event, and start a series of other events that will do the more costly work.
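
In Express terms, the handler ends up looking something like this - a minimal sketch, where enqueueProcessing is a hypothetical stand-in for whatever kicks off the async work (for us, creating a Cloud Task, shown below):

  const express = require('express')
  const app = express()

  app.post('/webhooks/stripe', express.json(), async (req, res) => {
    // (a real Stripe handler would verify the signature on the raw body first)
    // make a note of the event and hand off the costly work...
    await enqueueProcessing(req.body)   // hypothetical - fires off a Cloud Task
    // ...and answer immediately so Stripe doesn't think we're down
    res.sendStatus(200)
  })

  app.listen(8080)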

This is now a simple design problem: how to partition the follow-on tasks to make use of an efficient load balancer, and at the same time make sure that everything is done in as atomic a way as possible. For this project, it wasn't too hard, and it turned out to actually be quite fun.

The idea with Cloud Tasks is that you essentially give it a payload and a URL, and it will call that URL with that payload until it gets a successful response (a status of 200). It will back off a bit on each retry, so if there is a contention issue, it'll automatically handle that, and it won't flood your service. It's really doing all the hard work... the user just needs to implement the endpoints that are called.
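
The retry behavior is tunable on the queue itself, too - something like this when creating the queue (a sketch; the queue name and values are just examples):

  $ gcloud tasks queues create webhook-work \
      --min-backoff=1s --max-backoff=60s --max-attempts=25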

What turned out to be interesting was that the docs for Cloud Tasks didn't say how to set the content-type of the POST. It assumes that the content-type is application/octet-stream, which is a fine default, but given the Node library, it's certainly possible to imagine that they could see that the body being passed in was an Object, and then make the content-type application/json. But they don't.

Instead, they leave an undocumented feature on the creation of the task:

  // build up the argument for Cloud Task creation
  // (method, url, body, and delaySec are arguments to the enclosing function;
  //  log and errorMessages come from the surrounding module)
  const task = {
    httpRequest: {
      httpMethod: method || 'POST',
      url,
    },
  }
  // ...add in the body if we have been given it - based on the type
  // (Cloud Tasks expects the body as a base64-encoded string in every case)
  if (body) {
    if (Buffer.isBuffer(body)) {
      task.httpRequest.body = body.toString('base64')
    } else if (typeof body === 'string') {
      task.httpRequest.body = Buffer.from(body).toString('base64')
    } else if (typeof body === 'object') {
      task.httpRequest.body = Buffer.from(JSON.stringify(body)).toString('base64')
      // this is the undocumented bit - set the content-type header yourself
      task.httpRequest.headers = { 'content-type': 'application/json' }
    } else {
      // we don't know how to handle whatever it is they passed us
      log.error(errorMessages.badTaskBodyType)
      return { success: false, error: errorMessages.badTaskBodyType, body }
    }
  }
  // ...add in the delay, in sec, if we have been given it
  // (scheduleTime is an absolute timestamp - epoch seconds - so the delay
  //  is added to 'now')
  if (delaySec) {
    task.scheduleTime = {
      seconds: Number(delaySec) + Date.now() / 1000,
    }
  }

The ability to set the headers for the call is really very nice, as it opens up a lot of functionality - adding a Bearer token, for instance. But you'll have to be careful about time... the same data will be used for retries, so the token would need a long enough lifetime to still be valid on any retry.
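
To round this out, once the task object is built, handing it to Cloud Tasks is a single call to the Node client library - a sketch, with the project, location, and queue names as placeholders, and assuming we're inside an async function:

  const { CloudTasksClient } = require('@google-cloud/tasks')
  const client = new CloudTasksClient()

  // the queue is identified by project, location, and queue name
  const parent = client.queuePath('my-project', 'us-central1', 'webhook-work')
  // Cloud Tasks takes it from here - calling, backing off, retrying
  const [response] = await client.createTask({ parent, task })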

With this, I was able to put together the sequence of Tasks that would quickly dispatch the processing, and return the original webhook call back to Stripe. Quite nice to have it all done by Cloud Tasks... AWS would have required that I process the events off an SQS queue, and while I've done that, it's not as simple as firing off a Task and forgetting about it.

Nice tools. 🙂

Upgraded to AmpliFi Alien

Friday, November 13th, 2020

Networked World

A few days ago, I was running some speed tests on my iPhone 12 Pro, and noticed that the WiFi speed when connected to my Apple TimeCapsule and AirPort Extreme was about half that of connecting directly to the Xfinity xFi gateway. Given that I wanted a little more security and cohesive networking, I didn't want to put everything on the Xfinity box, so it was time to upgrade my WiFi.

I've been looking at the AmpliFi Alien for quite a while, but hadn't had a great reason to change - given that my TimeCapsule was also holding my Time Machine backups. So first I had to move to Backblaze for backups, and that turned out to be a great move for me.

I wanted one place where all the versions of all my files would be stored, and with the "Forever" option at Backblaze, I get just that. It's a little more per month, but it's exactly what I wanted, as I now have one place for all the versions of all the files on this, my main machine. It's just wonderful.

With the iOS app, I now have access to these files - and the peace of mind that I'll be able to look back in time for those things I might have been foolish enough to delete. I honestly don't expect to have a major data loss, but that's just when things like that happen. 🙂

With my backup issue solved, the Alien mesh arrived and it was time to install it. First, it's a beautiful piece of tech - the display is amazing, and the iOS app is impressive in what it can do and measure... all the goodies that I'm sure a current Apple router would have - if they made them. But alas, they don't. But as easy as it was to set things up, I ran into a problem with my VPN to The Shop, and that was a real pickle.

Removing the DNS Cache on AmpliFi Alien

Everything was working great - the speed tests done at the router were showing me exactly the speeds I was expecting with my Xfinity Gigabit service - a bit too asymmetrical for me, but I'm working on that, and hope that Gigabit Pro, or AT&T Fiber, will be available with more symmetrical numbers, and maybe more speed. But that's another story.

The mesh was easy to set up... as was upgrading the cylinders to the latest firmware. Almost like the Sonos set-up and control... very simple, very clear. Nice. I had to make sure all my machines had the new access point in their lists, and all were talking and happy... one interesting point - I had to reboot my Apple TV 4K because it was holding onto its old (wired) DHCP address, and it wouldn't refresh normally. No big deal.

But the real issue was with the OpenVPN client for The Shop. Everything seemed fine with accessing almost all services, but names in the shop.com domain for work weren't being resolved. Wow... OK... let's dig into this. Turns out the Alien router caches DNS so that it can offer you the control address of http://amplifi.lan/ from your web browser.

That's a nice touch, but it means the DNS changes from the VPN didn't take... well... it was simple enough to change:

  • Go to http://amplifi.lan/ and log in with the password you just set up - this is pretty easy, and while it's not obvious, a simple Google search pulled this up.
  • Check Bypass DNS Cache in the list and save - the cache is really not a bad idea in today's DNS-hijacking environment, but it has to be a little smarter about the existence of VPNs in the world.
  • Shut down all networking - disconnect from the VPN, turn off WiFi on the box, unplug networking - make sure it gets to a clean state.
  • Plug in the network, turn on WiFi, connect to the VPN - in the logical order, start the networking back up so that things are rolling again.
  • Edit /etc/hosts to add amplifi.lan - this is just to get us back to the state where we can go to http://amplifi.lan/ for the control of the router (a quick sanity check follows below). It's as simple as adding a line to the /etc/hosts file on each local machine, using the Gateway (base router) address from any of the DHCP address blocks:
      192.168.153.1   amplifi.lan amplifi
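
As a quick sanity check that the entry took - ping goes through the system resolver, so it will see the /etc/hosts entry:

  $ ping -c 1 amplifi.lan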
      

At this point, it's all working as it should. The router is safe and secure, and very fast. It has great diagnostics built in, available from the iOS app... and it's silent. No spinning drives like the TimeCapsule.

There may come a time that I don't need to worry about the VPN issues, or maybe they will update the firmware to more intelligently cache DNS data... that would be nice... but until then, this is exactly what I'd hoped. 🙂

Update on iPad Development

Monday, November 2nd, 2020

Linode

This morning I spent a little time happily updating my Linode box with the updates to Ubuntu 20.04, and wanted to write down what I found in investigating the "held back" updates that Ubuntu does. First, the problem. 🙂

In doing the standard:

  $ sudo apt-get update
  $ sudo apt-get upgrade

I saw that all upgrades were applied, and after a restart (what a wonderful console at Linode), all was back... kinda. I could see that there were an additional 8 packages that needed to be upgraded, and yet the standard commands didn't seem to pick them up.

So I did a little searching, and it turns out that these packages couldn't be upgraded without installing additional packages. That's why they were being held back. Makes perfect sense to me, and thankfully, the way to fix this is very easy:

  $ sudo apt-get upgrade --with-new-pkgs

or, as I have read, you can say:

  $ sudo apt-get dist-upgrade

to do all the distribution upgrades - which will include adding packages, as needed.

And then, to clean up the old packages:

  $ sudo apt autoremove

And after a reboot, the system was completely up-to-date. Moving forward, I'll use dist-upgrade, as it's clearly the preferred mechanism. I usually do this on the weekend, just to make sure it's all updated on a regular basis.

At the same time, using tmux, Blink Shell, and Textastic on my iPad has really been quite fun, as I learn the extent to which these tools can be exactly what I wanted from an iPad development platform.

One of the biggest surprises was that when Blink Shell is updated from the App Store - it maintains the connections! I was completely blown away... I expected to have to fire up the connections to the host again - but nope... the display was in the same state as it was before the update, and it worked perfectly. This is really the "Mobile Shell", and the Blink Shell app is an amazing implementation of it on the iPad.

The next surprise was that Textastic can be pointed at the checked-out GitHub repo on the remote host (no surprise), but it remembers the location of the source file, so it's one keystroke to upload it back to the remote host (I already wrote about this). This means I would normally have to hop onto the remote host to commit the changes... but with Working Copy, I can simply split-screen Textastic and Working Copy, drag the changed files from Textastic to Working Copy, and then commit them there.

Why does this matter? Well... as of the current version, Blink Shell does not yet do SSH Key Forwarding, so I can't easily use my SSH key authentication into GitHub via Blink Shell. Yes, they know about this, and they say it's coming in v14, but as of today, I would have to use something like Prompt from Panic, which does enable SSH Key Forwarding. With Working Copy on my iPad, I don't have to do that... I can easily see the diffs, and make the commits, all from a nice UI on the iPad. Very nice. 🙂

Don't get me wrong... I'll be very excited about Blink Shell getting SSH Key Forwarding, but until it does, I'm OK... and this is just an amazingly nice platform to do the development work I really like to do. What a joy!

Another Approach for iPad Development

Thursday, October 15th, 2020

Building Great Code

While apps like Buffer Editor are a good all-in-one tool, the journey that I started on has yielded some truly remarkable iPad apps for doing the same things - just not all-in-one. More of a collection of tools, and they work together quite nicely.

The first is the editor - Textastic for the iPad. This is a great editor that can handle the SSH/SCP downloading of working directories on my Linode host, but the real plus here is that the downloads remember where they came from, and with the SSH key, it's a single keystroke to upload the changes to the remote host. This allows me to edit locally, with auto-save on my iPad, and then with one keystroke the file is up on the host, ready to be used there. Fantastic workflow.

At the same time, it integrates with Working Copy, a nice Git tool for the iPad that downloads from GitHub, GitLab, BitBucket, etc. and maintains local copies of the repos on the iPad, so that you can really work on the iPad as a secondary machine. Sure, you can't compile on it, but with Textastic you have a nice editor (when the built-in editor isn't enough), and then the Textastic upload, or the Working Copy commits, get the changes to the correct place. Very slick.

But without a doubt, the very best of the tools I found was Blink Shell for the iPad. This implements the mosh protocol - Mo(bile) Sh(ell) - and it's a fascinating read. This gives me the appearance of an "always on" SSH connection to the remote host, and all I need to do there is install the mosh server. It's simply:

  $ sudo apt-get install mosh

on my Ubuntu 20.04 box at Linode, and then I can just configure the connection parameters in Blink Shell to connect with mosh, and I'm good to go. I can quit the app, I can sleep my iPad and wake it back up, and when I start the app, it's there... just as I left it. Instantly back at the same point in the REPL, with a tailed log file (which I use tmux to set up). It's an amazing tool, and one that I'm stunned I didn't know about earlier... but in truth, I would not have needed it until the iPad.

What I am left with is similar to what Buffer Editor is doing - but it's decidedly faster to get moving, and the tools are really quite amazing in their own right. Working Copy is a more than adequate Git client, with previews for standard files, and all the configuration I would need. Using the GUI for commits, as opposed to my usual command-line, is nice, and the fact that it connects to GitHub to see what repos I have available to clone is an added bonus - I don't need to copy a bunch of URLs to clone them.

Textastic has been in the App Store for 10 years, and its remembering where the files came from, with one-keystroke upload, is so clever... it's honestly a feature I hadn't even imagined - but it's exactly what I was looking for. A true delight to use. And the integration with Working Copy is very nice, so I get the best of both.

Blink Shell with mosh and tmux are really the winners, though... the panes allow me to have a REPL in the top three-quarters, and a tailed log file in the bottom quarter, and I never have to worry about having enough space on the screen. The speed of returning to development after an hour, or a day, is just amazing. These tools have made the value of Linode servers jump up considerably in my mind. They would allow me to work on several projects, each on a small node that can talk to the others - with Postgres on each node. It's really quite amazing. 🙂
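
For reference, the tmux side of that layout is just a couple of commands - a sketch, where the log path is whatever the project uses:

  $ tmux new -s dev                                   # one named session to re-attach to over mosh
  $ tmux split-window -v -p 25 'tail -f log/app.log'  # bottom quarter tails the log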

Now I just need some time to work on these projects. Fear not, Advent of Code will be here sooner than you think!

Setting Up Linode and Buffer Editor

Monday, October 12th, 2020

Linode

It's been fun to get access to the beta of GitHub's Codespaces, but one of its shortfalls is that when you run an outward-facing service - like a Jetty server - the IDE understands the port needs forwarding, but on the iPad, in Safari, there's really no way to forward a port. Additionally, the IP address of the container on the back-end isn't opened up for answering those forwarded requests. So while it's a great idea for development on an iPad - you can't really do any RESTful service development.

Additionally, the nice thing about it being all browser-based is also a limitation: there's no local storage. This means that there is no offline mode for editing - and while we can't (yet) compile on the iPad, we can edit... if the files are local, and without local storage, you don't have that.

So I went looking and found a very close match to what I might write: Buffer Editor. It's on iOS and iPadOS, and it allows for local and remote editing from an incredible number of sources - Dropbox, iCloud, GitHub, BitBucket, etc. For example, you can clone a GitHub repo right on your iPad, and then edit it locally, and push up changes. You can also set up an SSH/SFTP connection and remote edit the files, and even have a terminal session.

This is a lot like Panic's Code Editor for iOS, but Buffer Editor handles general Git access, and Code Editor does not. Also, Buffer Editor handles Clojure syntax, and Code Editor doesn't.

I was able to write to the Buffer Editor folks and give them updated syntax rules for Clojure, and within a week they had an update out with the changes. That's some impressive support. I have done the same with Panic, but I haven't heard back yet - I know they know Git support is important, so I'm thinking they may not really be supporting Code Editor on iOS as much anymore... that would be a shame.

Still, Buffer Editor is working great - but I needed to have a host on the back-end to be able to do the work. I wasn't a huge fan of AWS, so I decided to try Linode, and I'm so very happy that I did! 🙂

Linode is a lot like AWS - with a somewhat limited feature set. You can get machines - either shared CPUs, or dedicated ones - and you can pick from a lot of different styles: compute, big memory, GPUs, etc. It's all most folks would need for most projects. They also have lots of SSD disk space - NFS-like volumes - and an S3-like object storage. Add in load balancers, and it's enough to do most of the things you need - if you roll your own services like databases, etc.

They also had a nice introductory offer so I decided to take it for a spin, and see what Buffer Editor could do with a nice Ubuntu 20.04 back-end with AdoptOpenJDK 11, and Clojure.

The basic instructions are very clear, and it was easy enough to set up the root account and get the box running. It was even a little faster than AWS. Once it was up, I logged in and updated everything on the box:

  (macbook) $ ssh root@123.45.67.88
  (linode) $ apt-get update && apt-get upgrade

With all that OK, I then set the hostname, updated the /etc/hosts file, made my user account, and got ready for moving my SSH key:

  (linode) $ vi /etc/hosts
             ... adding: 123.45.67.88   opus.myhome.com
  (linode) $ adduser drbob
  (linode) $ adduser drbob sudo
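
For the hostname itself - which the transcript above glosses over - hostnamectl is the tool on Ubuntu 20.04 (the name is just the one from my /etc/hosts line):

  (linode) $ hostnamectl set-hostname opus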

and then as me:

  (macbook) $ ssh drbob@123.45.67.88
  (linode) $ mkdir -p ~/.ssh && chmod -R 700 ~/.ssh

and then back on my laptop, I sent the keys over:

  (macbook) $ scp ~/.ssh/id_rsa.pub drbob@123.45.67.88:~/.ssh/authorized_keys

Then I could add any options to the /etc/sudoers file - the adduser command above put my new user in the sudo group, but there could be tweaks you might want to make there.
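
For example, one common tweak - dropping the password prompt for your own account - is a single line, added via visudo (a convenience/security trade-off, so use with care):

  (linode) $ visudo
             ... adding: drbob ALL=(ALL) NOPASSWD:ALL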

At this point, sudo was working on my account on the Linode box, and then it was time to lock down the machine a little:

  (linode) $ vi /etc/ssh/sshd_config
             ... PermitRootLogin no
                 PasswordAuthentication no
                 AddressFamily inet
  (linode) $ service ssh restart

At this point, I could install the other packages I needed:

  (linode) $ apt-get -y install --no-install-recommends openjdk-11-jdk leiningen grc mosh
  (linode) $ apt-get -y install --no-install-recommends postgresql postgresql-contrib

Then I can make a new user for Postgres with:

  (linode) $ sudo -u postgres createuser --superuser drbob

and then I can clone all the repos I need. The box is ready to go.

With this, I can now edit offline on my iPad, and then push, or copy, the files when I get to a network connection - and edit and debug as much as I'd like when I do have a connection. It's almost exactly what I was hoping for.

The one missing thing: panes for the terminals... I'd like to have a REPL, and a tailed log file, visible at the same time. I think I can accomplish that with the screen command, but I'll have to experiment with it a lot more to find out. But it's close... very close... 🙂

Reaching GitHub Codespaces Forwarded Ports

Sunday, September 27th, 2020

GitHub Source Hosting

I've tried a couple of times to run a Jetty web server in a Clojure project on a GitHub Codespace, and the directions for port forwarding are very clear, and very easily done in the IDE, but when I try "Open in Browser" - while in Safari on my iPad - I get nothing. And when I "Copy URL" I get 127.0.0.1:8080, which is "correct" as far as the documentation goes, but it's not working because there's no way a browser page can redirect to the localhost address of the remote container.

So the only way this is going to work is if the VS Code IDE can open up a web page to the back-end, and show it there. Or... if they can open up ports based on the URL used to get to the Codespace. They are all unique, so it's not impossible to imagine that... but it'd take work.

This is going to be important because as you develop things, you need to be able to interact with them, and being able to hit a web server, or a RESTful interface is kinda important. But hey... this is still in beta, and I noticed this morning that they have been making changes to the Codespaces page on GitHub, so I'll give them time.

Postgres and GitHub Codespaces

Saturday, September 26th, 2020

PostgreSQL

This morning I wanted to see if I could get a more convenient method of getting to the attached Docker postgres server on GitHub Codespaces. The defaults of localhost, on port 5432, are standard, but if the machine can do unix sockets, then that's the preferred default for the client tools. And there's the rub - the Codespaces images are Ubuntu, but they aren't set up for that, with Docker maintaining the independent images and mounts for the data. So I had to try something different.

I had previously tried using the pgpass file, but while that was easy to set up, it wasn't taking precedence over the unix sockets... so that was a bust. The next thing I tried was to use the environment variables: PGHOST, PGPORT, PGUSER - and once I set those:

  $ export PGHOST=localhost
  $ export PGPORT=5432
  $ export PGUSER=postgres

then I could use:

  $ psql postgres

and other postgres commands like:

  $ createdb advent

to make a new database that I could reach with:

  $ psql advent

This is exactly what I was looking for! At this point, I could use Codespaces for all the kinds of development I was looking to do. Just fantastic! 🙂

I then updated the .bashrc and .zshrc files in my dotfiles repo so that any new Codespaces I make will have these baked in. I just need to get the Codespaces support directory into each project, and that will spin up exactly what I need. Very nice indeed.

GitHub Codespaces Customization

Monday, September 21st, 2020

GitHub Source Hosting

This morning I was able to make a stripped-down dotfiles GitHub repo that Codespaces will use when a new Docker image is created. If there is an install file in the repo, it'll be executed after the repo is cloned for the new user - as specified by the Dockerfile's USER directive.
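
As a sketch of what that install file can look like - mine essentially just symlinks the repo's files into $HOME (the file names here are placeholders for whatever is in the repo):

  #!/usr/bin/env bash
  # run from the root of the cloned dotfiles repo
  cd "$(dirname "$0")"
  for f in bashrc zshrc gitconfig; do
    # force the link so any stock dotfile is replaced
    ln -sf "$PWD/$f" "$HOME/.$f"
  done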

So in my Dockerfile I now have:

  # Make my user for when we really get going, and pull in my env from GitHub
  ARG USERNAME=drbob
  ARG USER_UID=1001
  ARG USER_GID=$USER_UID
 
  # Create the user and add him to the sudoers so that he can install more stuff
  RUN groupadd --gid $USER_GID $USERNAME \
      && useradd --uid $USER_UID --gid $USER_GID -m $USERNAME \
      && apt-get update \
      && apt-get install -y sudo \
      && echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \
      && chmod 0440 /etc/sudoers.d/$USERNAME
 
  # [Optional] Set the default user. Omit if you want to keep the default as root.
  USER $USERNAME

and that lands me in the new user's home directory, but I still have to clean up the left-overs of the dotfiles repo, and all the permissions from the last post. To do that, I made a simple function in my ~/.bashrc that does it all:

  #
  # This is a simple function to cleanup the GitHub Codespace once it's been
  # created. Basically, I need to remove the left-overs of the dotfiles setup
  # and clean up the permissions on all the files.
  #
  function cleanup () {
    pushd $HOME
    echo "cleaning up the dotfiles..."
    rm -rf dotfiles install README.md
    echo "resetting the ownership of the workspace..."
    sudo chown -R drbob:drbob workspace
    echo "cleaning up the permissions on the workspace..."
    sudo chmod -R g-w workspace
    sudo chmod -R o-w workspace
    sudo setfacl -R -bn workspace
    echo "done"
    popd
  }

At this point, I can launch a Codespace and it'll have all my necessary environment, and I can continue to refine it in my dotfiles repo, as that's only being used for the set-up of these Codespaces. So it's easy to remove things I'll never need there, and make sure it's customized to the things that I do need.

Very nice. 🙂

GitHub Codespaces

Sunday, September 20th, 2020

GitHub Source Hosting

Late last week, I was accepted into the GitHub Codespaces beta program, and since I've been looking for a way to do coding on my iPad Pro, and was already doing a little work with GitPod, it was something I was really interested in seeing and comparing with GitPod. So I started with two of the projects I've put into GitPod: Advent of Code, and CQ.

First things first: Codespaces is similar in that it's all about Docker images of the repos, with enough of a system wrapped around them to enable the style of development needed. Java, Ruby, Node, Rust - if you can do it on Debian or Ubuntu, then you can do it in Codespaces. It's just that universal.

It's also possible to run other images, like Postgres, Mongo, MySQL, and then have them linked up to the repo's instance so that you can refer to the databases off of localhost and the default port. That's really nice.

It's not particularly easy - unless you really understand Dockerfiles... but if you keep at it, they have examples, and for the most part, you'll be able to get something going. And since it's all in the repo, it's simple enough to drop the image and create another. It just takes time.

What I did find interesting is that the default user for Codespaces is root. I was not at all interested in working as root - even on a Docker image... it's just too uncomfortable. Thankfully, there is a vscode user you can use, and the USER directive in the Dockerfile will drop you into that user when things get started.

The next thing is the permissions... it's wild... everything is open to the world, and again, while this may be "fine" - it's very uncomfortable for me. So I converted the ownership of the workspace to the vscode user, and then removed the additional ACL security with:

  $ cd /home/vscode
  $ sudo chown -R vscode:vscode workspace
  $ sudo chmod -R g-w workspace
  $ sudo chmod -R o-w workspace
  $ sudo setfacl -R -bn workspace

With these done, the directory is owned by vscode and it's respecting the normal umask values on the creation of new files.

In the future, I'm going to have to figure out how to personalize the Codespace by using the dotfiles repo in GitHub, and then installing my dotfiles and updating all these permissions, etc. as well. If I get to have several of these, it'll pay off to have it all done properly from the start...

Upgrading AdoptOpenJDK 1.8, 11, and 14

Thursday, July 16th, 2020

Java

Just saw a tweet from the AdoptOpenJDK folks, about new releases for JDK 1.8, 11, and 14, and thought I'd update what I had so that I could be assured of being able to upgrade when a serious bug shows up. It's good to be prepared. 🙂

Because I have several versions installed, I needed to upgrade each... and because they are delivered as casks, the upgrade commands are just a little different:

$ brew cask upgrade adoptopenjdk8    ;; 8u262
$ brew cask upgrade adoptopenjdk11   ;; 11.0.8
$ brew cask upgrade adoptopenjdk     ;; 14.0.8

Each one will take a bit to download, and they clean up the older version of the same package - so you only have the latest... not bad, and I can see the advantages.
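
And to double-check what's actually installed at any point, macOS has a handy built-in that lists every JDK it can find:

  $ /usr/libexec/java_home -V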

All up to date now! 🙂