Better Ruby Gem caching on CircleCI

I was grateful to Nick Charlton for his blog post on setting up CircleCI 2.0 for Rails, as understanding the new v2 configuration options was something I’d been avoiding (v1 has since been deprecated, so this is now a must). However, after using, loving, and getting so much value from Dependabot, I noticed that any change to my Gems caused them all to be installed from scratch – the cache wasn’t being used and minutes were being added to the build time. I took a little time to understand more about CircleCI’s caching and discovered that they actually already have this covered.

The restore_cache step can take an array of cache keys, which it will look up in the order declared. So the seemingly erroneous final bundler- key in the following is actually really important, as it allows CircleCI to fall back to the most recent cache with the prefix bundler- (I had seen it in a couple of examples but wrongly assumed it would pointlessly look for a cache named literally bundler-).

steps:
  - restore_cache:
      keys:
        - bundler-{{ checksum "Gemfile.lock" }}
        - bundler-

Now the restore_cache step will always fall back to the latest cache entry with the prefix bundler-, and the bundle install step will only have to install missing gems. You can verify that it’s working by expanding the “Restoring Cache” step in the CircleCI UI; when there’s a miss on the exact key you’ll see something like “Found a cache from build 513 at bundler-”:

No cache is found for key: bundler-7cHA+e+3dMj5o8KeEXzZWm_pWslivYO08S8xulWZ4gw=
Found a cache from build 513 at bundler-
Size: 66 MB
Cached paths:
  * /home/circleci/app/vendor/bundle

The next problem you’ll encounter is the cache growing with each change to your Gems. This is also easy to fix: run bundle install with the --clean option, which removes gems that are no longer in your Gemfile once the install has finished.

My full restore/install/cache section looks like this:

- restore_cache:
    keys:
      - bundler-{{ checksum "Gemfile.lock" }}
      - bundler-

- run: bundle install --clean --path vendor/bundle

- save_cache:
    key: bundler-{{ checksum "Gemfile.lock" }}
    paths:
      - vendor/bundle

Adding this caching has reduced my build time by a whopping 2 minutes – even if the change was bumping a patch version of a single Gem – and has helped reduce feedback time and CI utilisation. It’s also generally satisfying.

Dependabot

Have you tried Dependabot yet? I’ve been using it for some months now and I’m really impressed – it’s like adding a developer to your team.

In the past I’ve dismissed development bots as a bit of a fad, often more noise than help — sometimes even feeling that they increase my workload. I already had some automated security vulnerability detection running on CI but there’s a huge difference between waking up to a failing build and waking up to a detailed pull request that has updated the offending dependency, passed through your CI pipeline, and is available for verification via a review app — it may have even already been merged and deployed.

One piece of advice from GitHub’s recent work on upgrading Rails is to “upgrade early and upgrade often” and Dependabot lets you achieve this with hardly any effort at all. Beyond the initial setup, interacting with Dependabot is performed through your normal development flow — another thing that makes it feel like you’re working with another developer. It usually goes like this:

  • Receive a GitHub pull request notification for a dependency update.
  • Read the detailed description of the changes.
  • Tests pass / manually verify.
  • Merge.

However, there are times when bumping a dependency is just the start of a journey and, even if the tests pass, further changes may be required. That’s OK: remember it’s a normal branch/pull request, so you can git checkout the branch and carry on as usual.
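As a rough sketch (the branch name below is purely illustrative; Dependabot generates its own), picking up one of its pull requests locally looks something like this:

git fetch origin
git checkout dependabot/bundler/nokogiri-1.8.5   # illustrative branch name; use the one shown on the pull request
# make any follow-up changes, commit, then push to update the existing pull request
git push origin HEAD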

One of the little things that I think is an indicator of Dependabot’s quality is that it cleans up after itself:

  • It deletes branches when they’ve been merged/closed.
  • If a dependency is updated while there’s an existing pull request then it’ll be closed and a new one opened - with a reference between the two.
  • If you remove a dependency from the default branch then related pull requests will be closed.
  • If changes to the master branch cause a merge conflict then affected pull requests will be rebased.

GitHub security alerts have been around for a while; they’re nice but slightly hidden away and often seem to be some days behind. Here’s an example of how Dependabot deals with a security vulnerability:

Security vulnerability announced in Loofah < 2.2.3. We’ve submitted a PR to the RubySec Advisory Database with details and have triggered dependency updates for all Dependabot users. Thanks to @flavorjones for alerting us. https://github.com/flavorjones/loofah/issues/154

@dependabot

Then 90 minutes later:

In the 90 minutes since today’s Loofah vulnerability was announced we’ve opened PRs to patch it on 1,078 repos. 195 have already been merged. Stay safe out there🕵️‍♀️

@dependabot

To top it all off, it’s free for open source and private personal repositories, so what are you waiting for? Go and sign up to Dependabot now. (My one tip is to turn it on for only a couple of projects at a time, as you’ll likely receive a whole load of pull requests in the first few days.)

Testing an array of objects with RSpec have_attributes

After recently discovering RSpec’s --next-failure option, I’ve just happened upon the have_attributes matcher, which can help turn many expectations into a single, more readable statement.
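As an aside, --next-failure combines --fail-fast with --only-failures, so a typical loop while working through a batch of failures looks something like this (assuming example_status_persistence_file_path is configured so RSpec can remember which examples failed):

# Run the full suite once to record the failures, then keep re-running only the next failing example.
bundle exec rspec
bundle exec rspec --next-failure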

In the past when checking an array of objects I’ve manually written out each expectation, something like this:

expect(items[0].id).to eql(1)
expect(items[0].name).to eql('One')
expect(items[1].id).to eql(2)
expect(items[1].name).to eql('Two')

But have_attributes lets you check an object’s properties against a hash, so the above can be re-written as:

expect(items[0]).to have_attributes(id: 1, name: 'One')
expect(items[1]).to have_attributes(id: 2, name: 'Two')

Even better, have_attributes can be combined with match_array to get this:

expect(items).to match_array([
  have_attributes(id: 1, name: 'One'),
  have_attributes(id: 2, name: 'Two'),
])

In one particular case I also wanted to check that the correct class was being returned, which is simple as it’s just another method call:

expect(items).to match_array([
  have_attributes(class: Foo, id: 1, name: 'One'),
  have_attributes(class: Bar, id: 2, name: 'Two'),
])

The next thing I tried felt quite natural though I didn’t expect it to work:

expect(items).to match_array([
  have_attributes(
    class: Foo,
    id: 1,
    name: 'One',
    price: have_attributes(
      cents: 123,
      currency: 'GBP',
    ),
  ),
  have_attributes(
    class: Bar,
    id: 2,
    name: 'Two',
    price: have_attributes(
      cents: 456,
      currency: 'USD',
    ),
  ),
])

It turns out this almost exactly matches the examples from the docs — I guess I wasn’t paying much attention all those years ago when RSpec announced composable matchers.

Downgrading Kubectl with Homebrew

I started seeing the following error when attempting to list pods with kubectl get pods -n production:

No resources found.
Error from server (NotAcceptable): unknown (get pods)

A quick search led me to a GitHub issue which explained that my version of kubectl was incompatible with the server.
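You can confirm the mismatch with kubectl version, which prints both the client and the server version (kubectl is generally only supported within one minor version of the server):

# Shows the local client version alongside the cluster's server version.
kubectl version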

The fix is to revert to an older version, but I installed kubectl via Homebrew, which only maintains a single version of a formula. What I didn’t know is that it’s possible to install a Homebrew package from a URL, which makes downgrading easy.

First uninstall the newest version:

brew uninstall kubernetes-cli

Find a compatible version from the history of changes to the kubernetes-cli formula and install:

brew install https://raw.githubusercontent.com/Homebrew/homebrew-core/d09d97241b17a5e02a25fc51fc56e2a5de74501c/Formula/kubernetes-cli.rb

Now everything works again. You should probably pin the newly installed old version so that it won’t get upgraded the next time you run brew upgrade:

brew pin kubernetes-cli

Dynamically setting default_url_options in Capybara

If you’re developing a full-stack Rails app with a link-based hypermedia API then you may find incorrect URLs breaking your system/feature Capybara specs. What’s going on?

When running your tests, Capybara lazily boots the Rails app on a random port and, because this host and port are unknown to Rails, links generated in serializers (and emails) will point to the wrong URL – so following them within Capybara will fail. Just like dynamically setting Rails default_url_options in Heroku review apps, you must tell Rails the host and port on which to build these URLs.

To do this with RSpec, you can use RSpec.shared_context to update default_url_options before each example runs and reset it afterwards. Add a support file at spec/support/default_url_options.rb with the following:

original_host = Rails.application.routes.default_url_options[:host]
original_port = Rails.application.routes.default_url_options[:port]

RSpec.shared_context 'default_url_options' do
  before do
    Rails.application.routes.default_url_options[:host] = Capybara.current_session.server.host
    Rails.application.routes.default_url_options[:port] = Capybara.current_session.server.port
  end

  after do
    Rails.application.routes.default_url_options[:host] = original_host
    Rails.application.routes.default_url_options[:port] = original_port
  end
end

(For a reason unknown to me I had to use separate before/after blocks instead of an around block.)

Include it in your browser specs in spec/rails_helper.rb:

RSpec.configure do |config|
  # Traditional feature specs.
  config.include_context 'default_url_options', js: true, type: :feature

  # New fangled system tests.
  config.include_context 'default_url_options', type: :system
end

Now all your _links will point to the correct host/port and you can get on with consuming them in your hypermedia link-driven single page app Rails monolith.