Canary tests

Canary tests are minimal tests that quickly and automatically verify that everything you depend on is ready. You run canary tests before other, more time-consuming tests, and before wasting time investigating your code when the other tests are red. If a canary test fails, you know you have to fix something in the environment first.

This idea of a canary test is different from Canary Deployment, in which you deploy to a small fraction of your users to check that everything is fine before rolling out to more users.

Save time by checking what should always be OK

Canary tests check for the obvious and frequent sources of issues, such as the following (a couple of these checks are sketched in code after the list):

  • Network connectivity: firewall rules OK, ports open, proxy working fine, NAT, ping below a good threshold
  • Databases and middleware are up
  • The disk quota for logs is not almost full
  • Every needed login and password is valid
  • Installed software is available in the right version: DLLs installed, registry set up, environment variables set, user directories all exist, the framework and OS versions fit, timezone and locale are as expected
  • Reference data integrity and consistency (dates, valuations…) are OK
  • The database schema and the audit of applied scripts are as expected
  • Licences are not expired (there is usually a way to check that automatically)
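
As an illustration, here is a minimal sketch of what a couple of these checks could look like; the host names, ports and paths are hypothetical and would come from your own configuration:

```python
# A minimal sketch of environment canary checks; hosts, ports and paths
# are hypothetical and would come from your own configuration.
import shutil
import socket

def check_port(host: str, port: int, timeout: float = 2.0) -> None:
    """Fail fast if a dependency is unreachable (service down, firewall, proxy)."""
    with socket.create_connection((host, port), timeout=timeout):
        pass  # the connection succeeded, which is all we wanted to know

def check_disk_quota(path: str, min_free_ratio: float = 0.10) -> None:
    """Fail if the partition holding the logs is almost full."""
    usage = shutil.disk_usage(path)
    if usage.free / usage.total < min_free_ratio:
        raise AssertionError(f"{path} is almost full")

if __name__ == "__main__":
    check_port("db.internal.example.com", 5432)   # the database is up
    check_port("mq.internal.example.com", 5672)   # the middleware is up
    check_disk_quota("/var/log/myapp")            # enough room left for logs
    print("Canary checks passed: the environment looks ready")
```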

Canary tests should run regularly, ideally before any expensive tests like end-to-end tests. You also want to run them whenever there is trouble somewhere, before wasting time on manual investigations in your code when the expected environment is not fully available.

Even at the code level, a canary test is just a trivial test that verifies that the testing framework itself works correctly, as mentioned by Marcus on his blog:


Don’t forget to verify that your tests can fail too!
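
For instance, with a unit-testing framework such as Python's unittest, such a canary test could look like this sketch; the second test is the "tests can fail" part:

```python
import unittest

class CanaryTest(unittest.TestCase):
    def test_the_framework_itself_works(self):
        """If this fails, suspect the test setup, not the production code."""
        self.assertTrue(True)

    def test_a_failure_is_actually_reported(self):
        """Verify that a failing assertion really raises, i.e. tests can fail."""
        with self.assertRaises(AssertionError):
            self.assertTrue(False)

if __name__ == "__main__":
    unittest.main()
```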

Simple and low-maintenance

The canary test tools should not assume much about the application. They must be independent of new developments to stay as stable as possible, and they should require little to no maintenance.

One way to do that in practice is simply to scan the configuration files for every URL and ping each one against a predefined time threshold. Any log path mentioned in the configuration files can be checked for the required write permissions and for available disk space. Every login and password can be checked too, even though this may be more complicated.
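
A rough sketch of that idea, assuming a properties-style configuration file (the file name and the regular expression are illustrative):

```python
# Sketch: scan a configuration file for URLs and probe each one against a
# predefined time threshold (the config file name is hypothetical).
import re
import time
import urllib.request

URL_PATTERN = re.compile(r"https?://[^\s\"']+")
THRESHOLD_SECONDS = 2.0  # the predefined threshold; tune per environment

with open("app.properties") as config:
    urls = URL_PATTERN.findall(config.read())

for url in urls:
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=THRESHOLD_SECONDS):  # raises if unreachable
        elapsed = time.monotonic() - start
    if elapsed >= THRESHOLD_SECONDS:
        raise AssertionError(f"{url} answered too slowly ({elapsed:.2f}s)")
```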

Canary tests are documentation too

Doing canary tests may require explicit declarations of expectations, e.g. an annotation AssumedPermission(‘777’) to declare the permissions required on the files referenced in the configuration files. Alternatively you may rely on the Convention over Configuration principle: for example, every variable whose name follows an agreed convention is assumed to be a log path, to be checked against some predefined expectations like being writable and staying within its disk quota.
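
Here is a minimal sketch of the convention-based variant, assuming a hypothetical naming convention where any configuration key ending in ".log.path" denotes a log directory:

```python
# Sketch of the convention-over-configuration idea: any key ending in
# ".log.path" (a hypothetical naming convention) is assumed to be a log
# directory and checked for writability and remaining disk space.
import os
import shutil

config = {
    "app.log.path": "/var/log/myapp",   # matches the convention: checked
    "db.url": "jdbc:postgresql://...",  # does not match: ignored
}

for key, value in config.items():
    if key.endswith(".log.path"):
        if not os.access(value, os.W_OK):
            raise AssertionError(f"{key}: {value} is not writable")
        usage = shutil.disk_usage(value)
        if usage.free / usage.total < 0.10:
            raise AssertionError(f"{key}: {value} is almost out of disk space")
```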

When you add canary tests, this automation itself is a form of documentation that makes assumptions more explicit.

You could export a report of every canary test that has been run into a readable form that can become part of your Living Documentation.
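
For example, a small script could turn the results of a canary run into a Markdown page; the result data below is made up for illustration:

```python
# Sketch: export canary-test results as a Markdown report that can be
# published with the rest of the living documentation (the result data is
# hypothetical and would come from the actual test run).
results = [
    ("database reachable", True),
    ("log disk quota", True),
    ("licence still valid", False),
]

lines = ["# Canary Tests Report", ""]
for name, passed in results:
    lines.append(f"- {name}: {'OK' if passed else 'FAILED'}")

with open("canary-report.md", "w") as report:
    report.write("\n".join(lines) + "\n")
```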


Collaborative Artifacts as Code

A software development project is a collaborative endeavor. Several team members work together and produce artifacts that evolve continuously over time, a process that Alberto Brandolini (@ziobrando) calls Collaborative Construction. Regularly, these artifacts are taken in their current state and transformed into something that becomes a release. Typically, source code is compiled and packaged into some executable.

The idea of Collaborative Artifacts as Code is to acknowledge this collaborative construction phase and push it one step further, by promoting as many collaborative artifacts as possible into plain text files stored in the same source control, while everything else is generated, rendered and archived by the software factory.

Collaborative artifacts are the artifacts the team works on and maintains over time, thanks to the changes made by several people through a source control management system such as SVN, TFS or Git, with all its benefits like branching and versioning.

Keep together what varies together

The usual way of storing documentation is to put MS Office documents on a shared drive somewhere, or to write random stuff in a wiki that is hardly organized.

Either way, this documentation will quickly get out of sync because the code is continuously changing, independently of the documents stored somewhere else, and as you know, “Out of sight, out of mind”.

We now have better alternatives

Over the last few years, there have been changes in software development. GitHub has popularized the README overview file written in Markdown. DevOps brought the principle of Infrastructure as Code. The BDD approach introduced the idea of text scenarios as living documentation and as an alternative to both specifications and acceptance tests. New ways of planning what a piece of software is supposed to do have appeared, as in Impact Mapping.

All this suggests that we could replace many informal documents with their more structured alternatives, and that we could have all these files co-located with the source within the source control.

In any given branch in the source control we would then have something like this:

  • Source code (C#, Java, VB.Net, VB, C++)
  • Basic documentation through a plain README.md and perhaps other .md files wherever useful, to give a high-level overview of the code
  • SQL code as source code too, or through Liquibase-style configuration
  • Living Documentation: unit tests and BDD scenarios (SpecFlow/Cucumber/JBehave feature files)
  • Impact maps (and any other mind maps), which may be written as text and then rendered via tools like text2mindmap
  • Any other kind of diagram (UML or general-purpose graphs), ideally defined in a plain text format and then rendered through tools (Graphviz, yUml)
  • Dependency declarations as manifests (Maven, NuGet…) instead of documentation on how to set up and build manually
  • Deployment code as scripts or Puppet manifests for automated deployment, instead of documentation on how to deploy manually

Plain Text Obsession is a good thing!

Nobody creates software by directly editing the executable binary that the users will actually run, yet it is common to directly edit the MS Word document that will be shipped in a release.

Collaborative Artifacts as Code suggests that every collaborative artifact should be text-based to work nicely with source control, and to be easy to compare and merge between versions.

Text-based formats shall be preferred whenever possible, e.g. .csv over .xls, .rtf or .html over .doc; otherwise the usual big PPT files must go to another dedicated wiki, where they can be safely forgotten and become instantly deprecated…

Like a wiki, but generated and read-only

My colleague Thomas Pierrain summed up the benefits of this approach: the documentation will

  • always be up-to-date and versioned
  • be easily diff-able (text files, e.g. in Markdown format)
  • respect the DRY principle (with the SCM as its golden source)
  • be easily browsable by everyone (dev, QA, BA, support teams…) in a read-only and readable wiki-like web site (generated as sketched after this list)
  • be easily modifiable by team members in a well-known and official location (as easy as creating or modifying a text file in an SCM)
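
As a rough illustration of the "generated and read-only" part, the generation step could be as simple as a build script that renders every committed Markdown file into a static HTML site; this sketch assumes the third-party Python markdown package:

```python
# Sketch of the "software factory" rendering step: every committed .md file
# is rendered into a static, read-only HTML site.
import pathlib
import markdown  # third-party package: pip install markdown

SOURCE = pathlib.Path(".")
OUTPUT = pathlib.Path("site")

for md_file in SOURCE.rglob("*.md"):
    html = markdown.markdown(md_file.read_text())
    target = (OUTPUT / md_file).with_suffix(".html")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(html)
```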

What’s next?

This approach is nothing really new (think about LaTeX…), and many of the tools we need for it already exist (Markdown renderers, web sites to organize and display Gherkin scenarios…). However, I have never seen this approach fully applied on an actual project. Maybe your project is already doing that? Please share your feedback!

UPDATE: My colleague Thomas Pierrain wrote a post on this idea.
