Going Automative: Increasing Quality

To misquote Jane Austen: it is a truth universally acknowledged, that a QA team in possession of a rapidly growing product, must be in want of automated tests.

That semi-authentic sentiment neatly expresses where the NoRedInk QA team found ourselves at the start of 2019. Our incredible engineering teams were churning out a phenomenal amount of work, and new features and enhancements to the existing site were arriving almost faster than we could keep up with. We were rapidly approaching a point where the manual testing process we had in place would no longer be enough to cover the site to the level we wanted and needed.

To combat this less-than-perfect future, we decided it was time to start automating some of our manual tests so that we could know that key areas of the site were working even if we didn’t have time to walk through them all. As well as reassuring us that those critical paths were still working, this automation would help us carve out more time for exploratory testing of areas we don’t currently have much opportunity to look into, and for coming up with new and innovative ways of verifying the site was working exactly as we wanted it to.

Before we could get there, though, we had to choose a tool with which to implement the automation! To start on this journey, we came up with a number of criteria we wanted an automation tool to fulfil and considerations we had to bear in mind. These included:

  • The ability to run the test against any environment we wanted (development, staging, production, etc.)
  • No effect on people outside of QA — we didn’t want to negatively impact the rest of engineering whilst building our test suite
  • Integration with existing tools — namely Percy and Browserstack
  • Maintainability — whilst anything we write will require some maintenance, we don’t want to be spending hours a day maintaining flaky tests. We also need the tests to be self-sufficient enough to continue working through events like the back-to-school period, when a lot of data on the site is reset

Having defined what we were looking for, our next step was to look at what we already had: there was a set of tests, written by the engineers using RSpec & Capybara, which had the advantage of being tied closely to the application code and of making it easy to create and remove data. They also covered a good amount of the site already, but came with a few major negative points, namely:

  • QA don’t own them or know what they cover — going through them and learning what cases they include would be almost as big a project as writing our own tests
  • They only run against the test environment, either locally or within our CI system (which doesn’t have a lot of the 3rd-party systems, etc. running that an environment like staging does), so they aren’t an accurate representation of production
  • Some of the older ones are becoming classic legacy code and are only really understood by a few engineers
  • Any changes we make will affect all of engineering, especially as we’re learning and causing flakes and failures, which will reduce our ability to experiment and make mistakes
  • As a QA team, our flake tolerance can be higher than that of the engineering department as a whole, and we aren’t as concerned about the speed of each test run at the moment
  • The learning curve of adding to these tests isn’t aligned with current QA skills or the direction we’d like to go

With those existing tests in mind, we put in place a plan to evaluate a number of possible alternative tools to see if we could find one that better met our goals and needs. The strategy we came up with was to give each tool a trial run and create the same set of tests in all of them. The tests we chose to implement included some that we definitely wanted in our future test suite, others that would really push the tools’ abilities, and others still that we were fairly sure none of the tools would be able to achieve successfully. (We weren’t wrong!)

There are hundreds upon hundreds of automation tools available out there, all doing slightly different things in slightly different ways but achieving the same goal of testing a site end-to-end, and it would have been impossible to evaluate all of them. We settled on four tools, and once we’d created the proof-of-concept tests in all four, we had a pretty solid idea of their pros and cons:

  • Nightwatch
    • ✅ Percy integration
    • ✅ Browserstack integration
    • ✅ Good, active community support
    • ✅ Pretty good documentation
    • ❌ Heavy reliance on CSS selectors
  • Ghost Inspector
    • ✅ Record tests via Chrome extension
    • ✅ Easily schedule test runs
    • ✅ GUI showing test results
    • ✅ Simple Slack integration
    • ❌ Test recording isn’t 100% accurate; recorded tests often require manual changes
    • ❌ GUI for editing tests is confusing
    • ❌ Recording captures dynamic CSS selectors that we want to avoid
    • ❌ No integration with existing tools
    • ❌ Built-in image diff tool is very limited
  • Taiko
    • ✅ Simple REPL
    • ✅ Very easy to create simple tests
    • ✅ Doesn’t use CSS selectors at all
    • ❌ Doesn’t handle dropdown menus well
    • ❌ Chrome only
    • ❌ Very limited documentation
    • ❌ Feels very new and not fully developed
    • ❌ Struggled with a lot of our test cases
  • Cypress
    • ✅ Fantastic test runner
    • ✅ Don’t need to use as many CSS selectors
    • ✅ Excellent documentation
    • ✅ Good community support
    • ✅ Percy integration
    • ✅ Very active company in online discussions, webinars, etc.
    • ❌ Chrome only
    • ❌ Tests can only be recorded in the Electron browser
    • ❌ Built-in version of Electron is currently out of date

Following the evaluation of each tool, we gathered our thoughts to decide where we would go from here. Each member of the team provided their thoughts and selected the tool they thought we should move forwards with, and the overwhelming choice was to introduce Cypress! A big factor in its favor was simply how it felt to use: at no point was it frustrating, and, backed up by its fantastic documentation (which not only explains how to do things but also why to do them in a certain way), no problem seemed insurmountable and the answer always seemed to be available. As introducing automation was going to be a relatively steep learning curve for the team, having a tool which was straightforward to pick up and start using, whilst at the same time being able to do everything we wanted, was going to be key to the success of the project.
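
To give a flavor of how that plays out in practice, here’s a minimal sketch of the kind of smoke test we’d go on to write (the paths, selectors, and credentials are illustrative rather than our real ones; cy.percySnapshot comes from the @percy/cypress plugin):

    // A hypothetical Cypress spec; the names and selectors here are illustrative only
    describe('Teacher login smoke test', () => {
      it('signs a teacher in and lands on the dashboard', () => {
        cy.visit('/login'); // resolved against baseUrl, so the same test runs on any environment
        cy.get('input[name="email"]').type('teacher@example.com');
        cy.get('input[name="password"]').type('not-a-real-password', { log: false });
        cy.contains('button', 'Log in').click();

        cy.url().should('include', '/dashboard');
        cy.contains('Welcome back').should('be.visible');

        // Visual regression check provided by the @percy/cypress plugin
        cy.percySnapshot('Teacher dashboard');
      });
    });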

In terms of its functionality, Cypress simply seemed to do just about everything the other tools did, only better, and the things it didn’t do weren’t anything we considered critical or thought we’d miss. (The exception to this was Cypress’s lack of cross-browser testing, but as Percy allows us to capture snapshots in both Chrome & Firefox, and both Cypress & Percy have plans to introduce more browsers, we decided we could live with this for the time being. Further, around 70% of our users are using Chrome anyway.) Cypress was also a popular choice amongst the engineering department, who’d already been considering it themselves. Having us introduce it first is also a nice way to bring it into the company without affecting any pre-existing processes, and it puts us in a nice position, as there’s a lot of JavaScript knowledge amongst the engineers that we can lean on if we need to!

With a winner selected, it was then time to come up with an initial coverage plan and start implementing Cypress tests.
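
One of our original criteria, being able to run the same tests against any environment, maps neatly onto Cypress’s baseUrl setting, which can be overridden per run from the command line. A rough sketch of how we expect that to look (the hostnames below are placeholders, not our real ones):

    # cypress.json holds a default baseUrl for local development, e.g.
    #   { "baseUrl": "http://localhost:3000" }

    # The same spec files can then be pointed at staging or production at run time:
    npx cypress run --config baseUrl=https://staging.example.com
    npx cypress run --config baseUrl=https://www.example.com --spec "cypress/integration/smoke/*_spec.js"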

Check back in a few months to see how we’re getting on!



Matt Charlton, Product Quality Specialist at NoRedInk

(thanks to Alexander Roy, Brian Hicks, and Kristine Horn for reviewing drafts!)

