Great Update to iTerm2 Today

September 23rd, 2020

iTerm2

Today I noticed that a new beta of iTerm2 was out, and as part of the update there is - of course - a restart of the app. But something I noticed, quite by accident, was that if you double-clicked on the tab title, iTerm2 would bring up a nice dialog box where you can enter a name for the tab, or even evaluate a function to generate one.

In the past, I had always used the ANSI escape codes to set the name to the current directory, and that was nice, but it also wasn't exactly what I wanted for a lot of my development work - because those tabs need to have fixed names for the repo, or the function of the terminal, etc. So I had made a few simple shell functions:

  #
  # These are simple functions that can't be expressed as aliases, but
  # are small enough that they can live here.
  #

  # Set the window (and icon) title
  function winname() {
    echo -ne "\033]0;$1\007"
  }

  # Set the tab title
  function tabname() {
    echo -ne "\033]1;$1\007"
  }

  # Stop PROMPT_COMMAND from rewriting the title, and optionally fix it
  function fixwt() {
    unset PROMPT_COMMAND
    if [ "$1" != "" ]; then
      winname "$1"
    fi
  }

so that I could easily override the PROMPT_COMMAND that was setting the title to the cwd, and the title would stay fixed. It worked, but it meant that every time I had to restart iTerm2, I had to update all the shells with a fixwt name, where name was the fixed name I wanted on that terminal tab.
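
For reference, PROMPT_COMMAND is just a hook Bash runs before printing each prompt. A minimal sketch of the cwd-in-the-title version I'm overriding - assuming Bash and the same OSC escape sequence as winname() above - looks something like this:

  # Put the current directory in the window/tab title before every prompt.
  # (A sketch - not necessarily the exact setting iTerm2's shell integration uses.)
  export PROMPT_COMMAND='echo -ne "\033]0;${PWD/#$HOME/~}\007"'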

But with this new iTerm2 feature, I don't have to do that.

I can simply set each one with the title I want, leave blank the ones that should default to the PROMPT_COMMAND setting, and then they survive restarts! Amazing. πŸ™‚

Yes, it's not really all that shocking, but for many years, I've hoped to have this feature, and now it's here, and I can restart my iTerm2 app, and not have to spend the next several minutes typing the same fixwt titles over and over. It's very nice.

Now if iTerm2 could remember the Spaces the windows were on... now that would be really nice! πŸ™‚

GitHub Codespaces Customization

September 21st, 2020

GitHub Source Hosting

This morning I was able to make a stripped-down dotfiles GitHub repo that Codespaces will use when a new Docker image is created. If there is an install file in the repo, it'll be executed after the repo is cloned for the new user - the one specified by the Dockerfile's USER directive.
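
The install file itself can be quite small - a minimal sketch (my real script does a bit more, and the filenames here are just examples) would simply symlink the dotfiles into the new user's home directory:

  #!/bin/bash
  #
  # Hypothetical dotfiles 'install' script - link the files from the cloned
  # repo into the home directory of the user the Dockerfile set up.
  #
  cd "$(dirname "$0")"
  for f in .bashrc .bash_profile .gitconfig; do
    ln -sf "$PWD/$f" "$HOME/$f"
  done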

So in my Dockerfile I now have:

  # Make my user for when we really get going, and pull in my env from GitHub
  ARG USERNAME=drbob
  ARG USER_UID=1001
  ARG USER_GID=$USER_UID
 
  # Create the user and add him to the sudoers so that he can install more stuff
  RUN groupadd --gid $USER_GID $USERNAME \
      && useradd --uid $USER_UID --gid $USER_GID -m $USERNAME \
      && apt-get update \
      && apt-get install -y sudo \
      && echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \
      && chmod 0440 /etc/sudoers.d/$USERNAME
 
  # [Optional] Set the default user. Omit if you want to keep the default as root.
  USER $USERNAME

and that lands me in that user's home directory, but I still have to clean up the dotfiles repo, and all the permissions from the last post. To do that, I made a simple function in my ~/.bashrc that does all of it:

  #
  # This is a simple function to cleanup the GitHub Codespace once it's been
  # created. Basically, I need to remove the left-overs of the dotfiles setup
  # and clean up the permissions on all the files.
  #
  function cleanup () {
    pushd $HOME
    echo "cleaning up the dotfiles..."
    rm -rf dotfiles install README.md
    echo "resetting the ownership of the workspace..."
    sudo chown -R drbob:drbob workspace
    echo "cleaning up the permissions on the workspace..."
    sudo chmod -R g-w workspace
    sudo chmod -R o-w workspace
    sudo setfacl -R -bn workspace
    echo "done"
    popd
  }

At this point, I can launch a Codespace, it'll have all my necessary environment, and I can continue to refine my dotfiles repo, since it's only being used for the set-up of these Codespaces. That makes it easy to remove things I'll never need there, and to make sure it's customized to the things I do need there.

Very nice. πŸ™‚

GitHub Codespaces

September 20th, 2020

GitHub Source Hosting

Late last week, I was accepted into the GitHub Codespaces beta program, and since I've been looking for a way to do coding on my iPad Pro, and had been doing a little work with GitPod, I was really interested in seeing how it compares with GitPod. So I started with two of the projects I've put into GitPod: Advent of Code, and CQ.

First things first, Codespaces is similar in that it's all about Docker images of the repos, with enough of a system wrapped around them to enable the style of development needed. Java, Ruby, Node, Rust - if you can do it on Debian or Ubuntu, then you can do it in Codespaces. It's just that universal.

It's also possible to run other images, like Postgres, Mongo, MySQL, and then have them linked up to the repo's instance so that you can refer to the databases off of localhost and the default port. That's really nice.
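
As a quick sketch of what that looks like - assuming a linked Postgres container publishing its default port (the user here is just a placeholder) - the database is reachable from the repo's container as if it were local:

  $ psql -h localhost -p 5432 -U postgres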

It's not particularly easy - unless you really understand Dockerfiles... but if you keep at it, they have examples, and for the most part, you'll be able to get something going, and it's all in the repo, so it's simple enough to drop the image, and create another. It just takes time.

What I did find interesting is that the default user for Codespaces is root. I was not at all interested in working as root - even on a Docker image... it's just too uncomfortable. Thankfully, there is a vscode user you can use, and the USER directive in the Dockerfile will drop you into that user when things get started.

The next thing is the permissions... it's wild... everything is open to the world, and again, while this may be "fine" - it's very uncomfortable for me, so I converted the ownership of the workspace to the vscode user, and then removed the additional ACL security with:

  $ cd /home/vscode
  $ sudo chown -R vscode:vscode workspace
  $ sudo chmod -R g-w workspace
  $ sudo chmod -R o-w workspace
  $ sudo setfacl -R -bn workspace

With these done, the directory is owned by vscode and it's respecting the normal umask values on the creation of new files.
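
A quick sanity check of that - just a throwaway example - is to look at the umask and the mode of a newly created file:

  $ umask                      # typically 0022
  $ touch workspace/check.txt
  $ ls -l workspace/check.txt  # should show -rw-r--r-- and vscode:vscode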

In the future, I'm going to have to figure out how to personalize the Codespace by using the dotfiles repo in GitHub, and then installing my dotfiles and updating all these permissions, etc. as well. If I get to have several of these, it'll pay off to have it all done properly from the start...

Copying Safari Bookmarks to Safari Technology Preview

August 17th, 2020

Safari.jpg

This morning I wanted to be able to copy my bookmarks from Safari to Safari Technology Preview and thought "This should be easy" - and Hoo Boy! I'm glad I was right. πŸ™‚ I will confess that I was a little surprised that the Bookmarks are implemented as a plist, but I guess that's as simple a storage scheme as macOS has, and there are system calls to make sure that updates are atomic... so I guess it's as good as any.

The key is that this Bookmarks.plist is in ~/Library/Safari and can be copied to the target directory:

  $ cp ~/Library/Safari/Bookmarks.plist ~/Library/SafariTechnologyPreview/

after that, there wasn't even the need to restart Safari Technology Preview - just the next time I looked at the Bookmarks, they were updated. Nice! πŸ™‚

Upgrading AdoptOpenJDK 1.8, 11, and 14

July 16th, 2020

java-logo-thumb.png

Just saw a tweet from the AdoptOpenJDK folks, about new releases for JDK 1.8, 11, and 14, and thought I'd update what I had so that I could be assured of being able to upgrade when a serious bug shows up. It's good to be prepared. πŸ™‚

Because I have several versions installed, I needed to upgrade each... and because they are delivered as casks, the upgrade commands are just a little different:

  $ brew cask upgrade adoptopenjdk8    ;; 8u262
  $ brew cask upgrade adoptopenjdk11   ;; 11.0.8
  $ brew cask upgrade adoptopenjdk     ;; 14.0.2

Each one will take a bit to download, and they clean up the older version of the same package - so you only have the latest... not bad, and I can see the advantages.
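
If you want to double-check what's installed after the upgrades - this is just a handy macOS command, not part of the upgrade itself - you can list the registered JDKs:

  $ /usr/libexec/java_home -V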

All up to date now! πŸ™‚

Odd Repl.it Editor Bug in Safari

July 14th, 2020

Clojure.jpg

I've been a big fan of Repl.it as it allows me to fire up a nice Clojure REPL without a lot of grief or overhead, and it's fast enough for small projects. While it's not perfect - you can't include a real project.clj, so you can't load other packages - it's still pretty nice.

A few weeks ago, I noticed an odd little bug in the editor on Repl.it - the cursor wasn't where the actual insertion point was in the editor:

Repl.it Editor Bug

The more you had on a line, the bigger the gap would be in the editor. And it didn't matter if I was using the Desktop browser or the Mobile browser on my iPad - it was off either way. I tried a lot of things... reported it to the Bugs List for Repl.it, and while others had seen it - there were no answers.

Finally, I thought about the zoom feature.

On my iPad Pro, for Repl.it, I like to zoom out a few steps to get more on the screen. I don't mind the smaller fonts - I can read them just fine, and it reduces the "dead space" on the screen quite nicely, so that I have a good editor window, and a nice REPL window.

So I went back to Repl.it, pulled up a saved REPL, and reset the zoom to "Original". Boom! The cursor and the insertion point lined up, and looked just fine. I then updated my Bug Report on Repl.it, and hoped that it was going to be a lot easier for the developers to reproduce - because I had a way to make it "Good", then "Bad", and back to "Good". Repeatable 100% of the time!

It's been a few weeks, and nothing, so today I offered to help work on this, as I'd really like to have this fixed, and I'm sure others would too... but I may have to wait for iPadOS 14, and hope that Safari on iPadOS 14 is going to fix this behavior.

I'd be happy to help... because I'd really like it fixed before the Fall.

Comcast XFi Goes Unlimited

June 30th, 2020

NetworkedWorld.jpg

Well, what a nice development... Today I got an email from Comcast saying that they are now removing the 1TB/month transfer limit and allowing accounts like mine to have unlimited transfer per month. This started a little earlier in the lockdown, but I guess they saw it was popular, and so they let it stick.

I, for one, am very glad that I don't have to pay extra for the "Unlimited Transfer" any more. πŸ™‚

UPDATE: I got an email from Comcast about this, and it turns out that they are not quite giving it away... what they are doing is "bundling" the XFi router rental and the Unlimited Bandwidth at roughly a $25/mo savings to me. Well... it's better than nothing, and it's not like I'm going to be getting rid of either anytime soon... so OK. I'll take the $25/mo. It's pizza money. πŸ™‚

Working with MongoDB Again

June 18th, 2020

MongoDB

As part of the onboarding process at The Shop, today was getting set up with the development tools to run all the Docker containers for an Airplane Mode development set-up on my laptop. Nice... I have always liked that mode - as it limits damage that can be done, and at the same time, allows everyone to develop without stepping on each other's toes.

I've used MongoDB before - back at PEAK6 with the MMD (Magic Middle Dude) - and there was a lot to like about it back in 2011, but they certainly have been making changes since then, and it was fun to get back to working with it. The first thing was to get a decent command-line client for mongo - and thankfully, Homebrew was there to help!

The steps are pretty easy - given that I didn't want to run mongo on my laptop - Docker was here for all that... I just needed the client:

  $ brew tap mongodb/brew
  $ brew install mongodb-community-shell

and then it's ready to use as mongo. Could not be simpler!

Running it against the Dockerized replica set, that was part of the set-up, wasn't bad:

  $ mongo --host rsetname/host1:27017,host2:27017,host3:27017 \
          --username dev-guy  --password dev-local \
          --authenticationDatabase dev-local

This is just an example of a replica set called rsetname running on three hosts - host1, host2, and host3 - all on the default mongo port of 27017. The username, password, and authentication database are all simple examples, and this is all easily made into an alias that makes it even easier to start.
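
As a sketch of that - using the same placeholder hosts and credentials as above, and written as a small shell function rather than a literal alias, since the multi-line command reads more cleanly that way - it's just the whole thing wrapped up in ~/.bashrc:

  #
  # Connect to the local (Dockerized) replica set with one short command.
  # (Same placeholder names as the example above.)
  #
  function devmongo() {
    mongo --host rsetname/host1:27017,host2:27017,host3:27017 \
          --username dev-guy --password dev-local \
          --authenticationDatabase dev-local "$@"
  }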

I'm looking forward to working with MongoDB again... it's been a while.

First Day at The Shop

June 15th, 2020

cubeLifeView.gif

Today is the first day at a new Shop, and I have to say... I'm really looking forward to getting back to creating again. Writing code... running it - seeing the results... this is what really fuels my soul. It's what I have always loved about working with electronics and computers. You don't have to wait for the paint to dry... or the adhesive to set... it's there - it's an expression of what's in your mind - and it can be run as soon as you can type it in. πŸ™‚

I learned a lot at the last Shop, but this new opportunity was just too good to pass up. So I didn't. I'm excited for all that this means, and it'll unfold as it should in the days to come.

The Illusion that Configuration isn't Code

June 3rd, 2020

Building Great Code

One of the Finance Shops I was at had a Post-Trade Management system, built in Java, with all kinds of interesting capabilities for aggregating, filtering, and plotting positions at different levels, and folks really seemed to like it. One of the key components of that system was a UI toolkit from a consulting shop that was entirely driven by XML configuration files.

I can remember adding a few lines of XML, getting a very complex dialog box in the app, and thinking - There is no way that's a 1:1 mapping of the config! And I was right. These were heavily built library modules, and adding a few lines really pulled in entire subsystems and connections for those UI elements.

I've also worked at a Dot-Com Shop where the complete build of a machine was described in a single YAML file. That included all the user accounts, all the software packages, and all the configuration for those packages - all sitting in one YAML file.

I'm currently looking at a CI/CD Pipeline that is completely specified by a YAML file - including shell commands, with options and variable expansion. It's understandable why the developers of each of these systems chose XML or YAML as their configuration format: there are loads of solid, reliable parsers, the files can hold simple data structures, and in those data structures you can embed shell commands and do variable expansion... so it makes sense.

What concerns me is that so many developers seem to feel that these configuration files are not nearly as important as the code backing them. That it's easy, safe, and simple to change the configuration and try it again... and maybe for some things it is... but chances are, if your configuration is really your code - then you need to treat it as such.

For example, it is very common to have multiple layers of configuration files. Maybe there's a company level, and overlaid on that is a project level, and overlaid on that is an environment level. These probably get parsed, and then merged one on another to allow certain things to be set at one level, and others at another, and the sum total of all the config data is what's used.

What could go wrong?

Well... imagine if one component of the config is a list of strings that have to be processed in a specific order - let's say top to bottom, as they appear in the file. But then another of the layered config files has another list - with the same name - maybe it's a typo, maybe it's not. How are these merged?

Does the later, stacked file overwrite the lower level? That's one way, but then it's hard to make sure that those lower-level commands are being run/parsed properly. It could lead to duplication in the upper-level files, and that's not really the point of the stacking, is it?

What if you simply append the upper entries to the lower entries? That could be just as bad because the writer of the lower ones may be making assumptions about the state left after the processing of the upper file.
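
To make the two strategies concrete - this is just a sketch with made-up file names, standing in for whatever the config system actually does - imagine a list of setup commands defined at both the company and project level:

  # Two layers of a hypothetical "setup commands" list:
  printf '%s\n' "apt-get update" "apt-get install -y git" > company.list
  printf '%s\n' "pip install -r requirements.txt"         > project.list

  # Strategy 1: overwrite - the project layer wins, and the company-level
  # commands silently never run.
  cp project.list merged.list

  # Strategy 2: append - both layers run, but now the project layer depends
  # on exactly what state the company-level commands leave behind.
  cat company.list project.list > merged.list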

In short - having configuration files store data structures is fine, and it's useful... but having them include what amounts to executable code means they need to be treated like code. And can you imagine writing a function that's layered together from multiple files, and then executed at runtime? The difficulty in tracking down errors would more than offset any gains in reuse.

So if you want to have layered configuration files - Great! Just leave it to data that's easily flattened, and tested... but if you're going to have it include executable code - make it simpler - a single layer. You'll be glad you did. πŸ™‚