Dependabot

Have you tried Dependabot yet? I’ve been using it for some months now and I’m really impressed; it’s like adding another developer to your team.

In the past I’ve dismissed development bots as a bit of a fad, often more noise than help, sometimes even feeling that they increased my workload. I already had some automated security vulnerability detection running on CI, but there’s a huge difference between waking up to a failing build and waking up to a detailed pull request that has updated the offending dependency, passed through your CI pipeline, and is available for verification via a review app — it may even have already been merged and deployed.

One piece of advice from GitHub’s recent work on upgrading Rails is to “upgrade early and upgrade often” and Dependabot lets you achieve this with hardly any effort at all. Beyond the initial setup, interacting with Dependabot is performed through your normal development flow — another thing that makes it feel like you’re working with another developer. It usually goes like this:

  • Receive a GitHub pull request notification for a dependency update.
  • Read the detailed description of the changes.
  • Check that the tests pass and/or verify manually.
  • Merge.

However, there are times when bumping a dependency is just the start of a journey and, even if the tests pass, further changes may be required. That’s OK: remember it’s a normal branch/pull request, so you can git checkout the branch and carry on as usual.
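
Dependabot’s branches follow a predictable naming scheme, so picking one up locally looks something like this (the branch name is illustrative):

git fetch origin
git checkout dependabot/bundler/rails-5.2.1
# Make the follow-up changes, commit, then push back to the same pull request.
git push origin dependabot/bundler/rails-5.2.1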

One of the little things that I think is an indicator of Dependabot’s quality is that it cleans up after itself:

  • It deletes branches when they’ve been merged/closed.
  • If a dependency is updated while there’s an existing pull request then it’ll be closed and a new one opened, with a reference between the two.
  • If you remove a dependency from the default branch then related pull requests will be closed.
  • If changes to the master branch cause a merge conflict then affected pull requests will be rebased.

GitHub security alerts have been around for a while; they’re nice but slightly hidden away and often seem to be a few days behind. Here’s an example of how Dependabot deals with a security vulnerability:

Security vulnerability announced in Loofah < 2.2.3. We’ve submitted a PR to the RubySec Advisory Database with details and have triggered dependency updates for all Dependabot users. Thanks to @flavorjones for alerting us. https://github.com/flavorjones/loofah/issues/154

@dependabot

Then 90 minutes later:

In the 90 minutes since today’s Loofah vulnerability was announced we’ve opened PRs to patch it on 1,078 repos. 195 have already been merged. Stay safe out there🕵️‍♀️

@dependabot

To top it all off, it’s free for open-source and private personal repositories, so what are you waiting for? Go and sign up to Dependabot now. (My one tip is to turn it on for only a couple of projects at a time, as you’ll likely receive a whole load of pull requests in the first few days.)

Testing an array of objects with RSpec have_attributes

After recently discovering RSpec’s --next-failure option I’ve just happened upon the have_attributes matcher, which can help turn many expectations into a single, more readable statement.

In the past when checking an array of objects I’ve manually written out each expectation, something like this:

expect(items[0].id).to eql(1)
expect(items[0].name).to eql('One')
expect(items[1].id).to eql(2)
expect(items[1].name).to eql('Two')

But have_attributes lets you check an object’s properties against a hash, so the above can be re-written as:

expect(items[0]).to have_attributes(id: 1, name: 'One')
expect(items[1]).to have_attributes(id: 2, name: 'Two')

Even better, have_attributes can be combined with match_array to get this:

expect(items).to match_array([
  have_attributes(id: 1, name: 'One'),
  have_attributes(id: 2, name: 'Two'),
])

In one particular case I also wanted to check that the correct class was being returned, which is simple as it’s just another method call:

expect(items).to match_array([
  have_attributes(class: Foo, id: 1, name: 'One'),
  have_attributes(class: Bar, id: 2, name: 'Two'),
])

The next thing I tried felt quite natural though I didn’t expect it to work:

expect(items).to match_array([
  have_attributes(
    class: Foo,
    id: 1,
    name: 'One',
    price: have_attributes(
      cents: 123,
      currency: 'GBP',
    ),
  ),
  have_attributes(
    class: Bar,
    id: 2,
    name: 'Two',
    price: have_attributes(
      cents: 456,
      currency: 'USD',
    ),
  ),
])

It turns out this almost exactly matches the examples from the docs — I guess I wasn’t paying much attention all those years ago when RSpec announced composable matchers.
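
These composable matchers also come with noun-phrase aliases, for example an_object_having_attributes for have_attributes, which can read a little more naturally when nested; using the same hypothetical items as above:

expect(items).to match_array([
  an_object_having_attributes(id: 1, name: 'One'),
  an_object_having_attributes(id: 2, name: 'Two'),
])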

Downgrading Kubectl with Homebrew

I started seeing the following error when attempting to list pods with kubectl get pods -n production:

No resources found.
Error from server (NotAcceptable): unknown (get pods)

A quick search led me to a GitHub issue which explained that my version of kubectl was incompatible with the server.
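
kubectl version prints both the client and the server version, so you can see the skew for yourself:

kubectl version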

The fix is to revert to an older version, but I installed kubectl via Homebrew, which only maintains a single version of each formula. What I didn’t know is that it’s possible to install a Homebrew package straight from a URL, which makes downgrading easy.

First uninstall the newest version:

brew uninstall kubernetes-cli

Find a compatible version from the history of changes to the kubernetes-cli formula and install:

brew install https://raw.githubusercontent.com/Homebrew/homebrew-core/d09d97241b17a5e02a25fc51fc56e2a5de74501c/Formula/kubernetes-cli.rb

Now everything works again. You should probably pin the newly-installed old version so that it won’t get upgraded the next time you run brew upgrade:

brew pin kubernetes-cli

Dynamically setting default_url_options in Capybara

If you’re developing a full-stack Rails app with a link-based hypermedia API then you may find incorrect URLs breaking your system/feature Capybara specs. What’s going on?

When running your tests Capybara lazily boots the Rails app on a random port and, because this host and port are unknown to Rails, links generated in serializers (and emails) will point to the wrong URL, so following them with Capybara in your tests will fail. Just like dynamically setting Rails default_url_options in Heroku review apps (covered below), you must tell Rails the host and port on which to build these URLs.
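
For example, a serializer that builds absolute URLs with the Rails URL helpers picks up whatever default_url_options holds at the time; here’s a hypothetical plain-Ruby sketch (ItemSerializer, the items resource and item_url are made up for illustration):

class ItemSerializer
  # Outside the request/response cycle these helpers take the host/port from
  # Rails.application.routes.default_url_options.
  include Rails.application.routes.url_helpers

  def initialize(item)
    @item = item
  end

  def as_json(*)
    {
      id: @item.id,
      name: @item.name,
      _links: { self: item_url(@item) },
    }
  end
end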

To do this with RSpec you can use RSpec.shared_context to update default_url_options before the example runs and reset it after. Add a support file in spec/support/default_url_options.rb with the following:

original_host = Rails.application.routes.default_url_options[:host]
original_port = Rails.application.routes.default_url_options[:port]

RSpec.shared_context 'default_url_options' do
  before do
    Rails.application.routes.default_url_options[:host] = Capybara.current_session.server.host
    Rails.application.routes.default_url_options[:port] = Capybara.current_session.server.port
  end

  after do
    Rails.application.routes.default_url_options[:host] = original_host
    Rails.application.routes.default_url_options[:port] = original_port
  end
end

(For a reason unknown to me I had to use separate before/after blocks instead of an around block.)
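
This assumes your rails_helper.rb requires files from spec/support; if yours doesn’t, the usual (often commented-out) glob looks something like this:

# spec/rails_helper.rb
Dir[Rails.root.join('spec', 'support', '**', '*.rb')].sort.each { |f| require f }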

Include it in your browser specs in spec/rails_helper.rb:

RSpec.configure do |config|
  # Traditional feature specs.
  config.include_context 'default_url_options', js: true, type: :feature

  # New fangled system tests.
  config.include_context 'default_url_options', type: :system
end

Now all your _links will point to the correct host/port and you can get on with consuming them in your hypermedia link-driven single page app Rails monolith.

Dynamically setting Rails default_url_options in Heroku review apps

I love Heroku review apps but not when URLs in emails point to the parent app – and if you have a hypermedia API its _links can end up totally broken. Why does this happen and how can it be fixed?

My usual approach used to be to configure Rails.application.routes.default_url_options[:host] from a DEFAULT_URL_HOST environment variable, but if the review app inherits this variable then all of its URLs generated outside the controller/view request/response cycle – including URLs in emails and serializers such as ActiveModel::Serializers – incorrectly point to the parent app. In the past I’ve been forced to manually update a review app’s config but that isn’t a solution.
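
For reference, that old approach was a one-line initializer along these lines:

# config/initializers/default_url_options.rb
# Fine for staging/production, but a review app that inherits DEFAULT_URL_HOST
# from its parent will generate URLs pointing at the wrong host.
Rails.application.routes.default_url_options[:host] = ENV.fetch('DEFAULT_URL_HOST')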

It isn’t possible to set an environment variable dynamically, but you can detect a review app at run time by enabling the HEROKU_APP_NAME environment variable and then using it to generate the correct host. To do this in Rails, create an initializer config/initializers/default_url_options.rb with the following contents:

# If a default host is specifically defined then it's used, otherwise the app
# is assumed to be a Heroku review app. Note that `ENV.fetch` is used
# defensively so the app will blow up at boot time if neither `DEFAULT_URL_HOST`
# nor `HEROKU_APP_NAME` is defined.
host = ENV['DEFAULT_URL_HOST'] ||
  "#{ENV.fetch('HEROKU_APP_NAME')}.herokuapp.com"

# Set the correct protocol as SSL isn't configured in development or test.
protocol = Rails.application.config.force_ssl ? 'https' : 'http'

Rails.application.routes.default_url_options.merge!(
  host: host,
  protocol: protocol,
)

In staging and production apps you should set DEFAULT_URL_HOST using heroku config:set, but in review apps tell Heroku to expose the HEROKU_APP_NAME variable by adding it to the app’s app.json (and remove any references to DEFAULT_URL_HOST):

{
  "env": {
    "HEROKU_APP_NAME": {
      "required": true
    }
  }
}

Note that this also means that in development/test/CI you’ll need to define the DEFAULT_URL_HOST environment variable or the app won’t boot – on my development machine I use direnv; dotenv is also popular.
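
With direnv that’s a one-line .envrc in the project root (the host and port here are just an example):

export DEFAULT_URL_HOST=localhost:3000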

And that’s it, now all URLs generated in emails and serializers will correctly point to the review app’s host.