
Automated Visual Testing with Sparkbox Wraith

07-06-15 Rob Tarr, Adam Simpson

How we use the BBC News team’s Wraith to track visual changes.

Mason Stewart gave a great talk at this year’s ConvergeSE about frontend developers becoming more comfortable with backend programming. One of the obstacles to feeling comfortable with backend environments is that most frontend feedback is visual: if something breaks in the frontend, it doesn’t look right. A JavaScript problem may at least produce a console error, but there’s no such thing as a CSS error message; the page just looks broken. Recently, this happened on the sparkbox.com homepage. A section of content was missing, and after digging into it, we realized it had been missing for months! Rob Tarr and I realized we needed better visual error reporting, so we started looking at our options.

Find Changes with Wraith

We quickly found Wraith, which the BBC released a couple of years ago. Their motivations for creating Wraith echoed our own, and they solved it in a really smart way:

“[Wraith] came about as we continued to see small changes from release to release, as more teams joined the project, small front end bugs cropped up more frequently. Our solution was to compare screenshots of pages, at the pull request level and when merged into master. This process produces fewer bugs and unintended changes, while also being able to ensure intentional changes appear correctly.”

Out of the box, Wraith does exactly what it says on the tin, and does it really well. It screenshots two URLs and then uses ImageMagick to diff the two images. If the difference exceeds the threshold (20% by default), it reports an error. Wraith stores the screenshots and the diff images and generates an HTML gallery for viewing all the pages and any differences. In essence, Wraith automates testing for visual changes. We aren’t using it to catch errors so much as to monitor for changes: some changes are purposeful, and some are bugs. Watching for changes would have alerted us to our bug on the Sparkbox site the minute our content went missing.
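As a rough illustration, a Wraith capture config for this kind of setup might look something like the following. The keys are based on our recollection of Wraith’s sample config, and the domains, paths, and widths are placeholders, so check the project’s README for the current format:

```yaml
# configs/config.yaml — illustrative values, not our actual config
browser: "phantomjs"          # headless browser used to take the screenshots
domains:
  production: "https://www.example.com"
  staging: "https://staging.example.com"
paths:                        # pages to capture on both domains
  home: /
  about: /about
screen_widths:                # capture each path at several viewport widths
  - 320
  - 768
  - 1280
threshold: 20                 # percent difference that counts as a failure
```

Each path is captured on both domains at each width, and any pair of screenshots that diffs above the threshold is flagged in the gallery.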

Circle the Automated Wagons

We use CircleCI as our deployment process for most of our work. CircleCI is a test automation service that we lean on for both testing and deployments. Whenever a new commit hits master on a GitHub repo, CircleCI runs the tasks specified in that repo’s circle.yml file. This behavior (deploy when master changes) presents a “chicken and egg” problem for testing with Wraith: when should it run, and what should it test? We decided to have Wraith compare our staging server (the latest deployed code) against our production (live) server. Here is the section of our circle.yml that handles this staging, Wraith, production configuration:

deployment:
  staging:
    branch: master
    commands:
      # Deploy the latest master build to our Divshot staging environment
      - ./node_modules/.bin/divshot deploy staging --token $DivshotToken
      # Run Wraith to diff staging against production
      - node wraith-init.js
      # Publish the Wraith gallery to its own Divshot site
      - "cd wraith/ && mv shots/gallery.html shots/index.html && ../node_modules/.bin/divshot deploy staging --token $WraithDivshot"

See Results on Divshot

We wanted the website generated by Wraith to be visible to the whole team, so once Wraith has run on CircleCI, we push the generated web content to a Divshot site. As the last line of our circle.yml shows, we move into the directory where Wraith saves the diff screenshots and run a second Divshot deploy to a different domain (project-wraith is the typical naming convention we’ve followed on Divshot). A dedicated domain for the Wraith results means anyone on the team can view them with a single click.

A few simple steps for setting up Divshot for Wraith:

  1. Sign up/log in to Divshot (using your GitHub account makes this easy)

  2. Click the Create App button

  3. Give a name to your Wraith site

Warnings for Slackers

Wraith exits with a non-zero code when it finds differences, but we don’t want our builds to fail for every difference. If they did, intentional changes like new designs and bug fixes would be flagged as failures right alongside actual bugs.

Instead of letting the build fail, we alert the team that Wraith has found changes by dropping a message into Slack. In the wraith-init.js file, we check Wraith’s exit code. If it’s anything other than 0 (meaning the change threshold was exceeded), we use the slack-client npm module to open a connection to Slack and post an @channel message with the link to the Divshot site, so the team knows to take a look at what changed.
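The exit-code check at the heart of this flow can be sketched as a couple of small functions. The names here are illustrative, not the actual contents of our wraith-init.js, and the real script posts over a slack-client connection rather than returning a string:

```javascript
// Minimal sketch of the alerting decision in a wraith-init-style script.
// Assumes a wrapper has already captured Wraith's exit code.

// Wraith exits non-zero when any page diff exceeds the configured threshold.
function shouldAlert(wraithExitCode) {
  return wraithExitCode !== 0;
}

// @channel so the whole team sees the link to the Divshot gallery.
function buildSlackMessage(galleryUrl) {
  return '@channel Wraith found visual changes: ' + galleryUrl;
}
```

When shouldAlert returns true, the real script opens a slack-client connection and posts buildSlackMessage’s output to the team channel; a zero exit code means no message is sent and the build proceeds quietly.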

Quality Websites

This has added another tool to our toolbox to help us track the quality of our websites, both as we’re developing them and maintaining them long term. Wraith was extremely easy to set up and integrated very nicely with our existing build flow. Thanks to the efforts of the BBC team, we’re excited to get this added to more projects and see what we can contribute back to help move it forward.
