Why you should crowdsource your acceptance tests - and how
georg.oechsler
Fri, 23/02/2018 - 15:37
Behat tests are great. Why? The Gherkin language allows you to describe the setup, actions and expected results of your test scenarios in an easy-to-learn, human-readable and almost natural language. Basically anyone can do it. Sounds too good to be true? That’s because it is. What now? No worries, stay calm, we’ve got you covered: the answer is crowdsourcing. Just read on, we’ll fill you in on everything you need to know.
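To give you an idea, here is a minimal, made-up scenario (the page, field and button names are purely illustrative, not taken from a real project):

```gherkin
# A made-up example of what Gherkin reads like.
Feature: Newsletter signup
  Scenario: Subscribing with a valid email address
    Given I am on the homepage
    When I fill in "newsletter-email" with "jane@example.com"
    And I press "Subscribe"
    Then I should see "Thank you for subscribing"
```

No code in sight: the steps read almost like plain English, which is exactly why non-developers can write them.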
Reality-check: All that glitters is not gold
But first: Let’s rewind. Anyone can write Behat tests now, but when it comes to actually executing these tests, the situation changes drastically. What you expect: dealing with a well-documented, adaptable and versatile language for describing test scenarios. What you are facing: the intricate details of all the Selenium servers, Chrome drivers and headless X servers you may need to run your tests.
Crowdsourcing is the new outsourcing
That’s exactly where we came into play. For the Sportscheck.com website we implemented a nifty system that allows you to “crowdsource” the creation and execution of tests. Quite the game-changer, right? Various in-house experts can now leverage their knowledge of their respective fields of expertise by using this system. For them, having to deal with the “dark” side of testing (a.k.a. the technical details of test execution) is a thing of the past. Hooray!
Use cases: The when and the why
There are countless use cases for automated testing - as you can probably imagine. Let’s highlight the two most important ones:
Checking SEO settings
A dev team’s automated tests usually cover code and functionality. But what about all the content configuration settings? Totally different story. Generally, these are not in scope because, technically, they are content.
Our solution bridges this gap in test coverage. Now you are able to test parameters and settings of content items: SEO-relevant settings like meta-tags or redirects, for example. All the good stuff, basically.
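To make this concrete, such checks might look like the following sketch. The navigation and status-code steps are standard Behat/Mink steps; the meta-tag step stands in for a custom step definition and is purely illustrative:

```gherkin
# Illustrative SEO checks; the meta-tag step assumes a custom
# step definition, the remaining steps are standard Mink steps.
Feature: SEO settings of the category page
  Scenario: Meta tags are set
    Given I am on "/running-shoes"
    Then the "description" meta tag should contain "running shoes"

  Scenario: Legacy URL redirects to the new category page
    When I go to "/old-running-shoes"
    Then I should be on "/running-shoes"
    And the response status code should be 200
```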
Tracking
Setting up analytics and tracking can be tricky. Just like content, the tag manager setup is usually not in scope of a dev team’s automated tests. Our new test framework allows you to test this setup: by defining the steps to trigger a certain event and verifying the expected outcome, like emitting a tracking request or setting a cookie. Try it, it’s awesome.
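In practice, such a test could read like this sketch (the cookie and tracking-event steps represent custom step definitions; all names are made up):

```gherkin
# Illustrative tracking check; the cookie and tracking-event steps
# stand in for custom step definitions (names are made up).
Feature: Add-to-cart tracking
  Scenario: Adding a product fires the tracking event
    Given I am on "/some-product"
    When I press "Add to cart"
    Then the cookie "basket_tracking" should exist
    And a tracking request for the event "add_to_cart" should have been sent
```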
Let’s dig deep: The technical nitty-gritty
The system we built consists of two parts:
- The user-facing part, used for editing and organising tests as well as triggering execution and inspecting results, and
- a test runner, which dispatches the queued tests for execution and delivers the results.
UI? Drupal to the rescue!
We chose Drupal for the user-facing part. Why? As feature files written in Gherkin are just plain text, a decent content management system is the perfect place to create, edit and organize your test scenarios. Drupal’s excellent content modelling capabilities? They all come in handy here.
These are the custom entities we built:
- Features: the Gherkin code for the tests,
- Suites: an organisational container entity used to group features, whose name was borrowed from Behat itself, and
- Exercises: the executable collection of suites, combined with the test configuration of choice.
All of the above make straightforward use of the core fields and the usual CRUD capabilities of Drupal 8. Just add a JavaScript-enhanced textarea with syntax highlighting for Gherkin to the mix and you are good to go.
Editing form of a feature entity. This translates to a *.feature file in Behat.
Want to make things even more interesting? Allow templates. To enable similar SEO tests for hundreds of URLs without hassle, we allow the use of Twig placeholders and substitute them with the contents of an attached CSV file. Works like a charm and makes it very easy to maintain similar tests for a constantly changing list of URLs.
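A sketch of what such a template could look like, with made-up placeholder names and CSV columns; the title step stands in for a custom step definition:

```gherkin
# Feature template with Twig placeholders; the placeholders are
# substituted with the values from the attached CSV file
# (illustrative columns: url, title).
Feature: SEO checks for {{ url }}
  Scenario: Title tag is correct
    Given I am on "{{ url }}"
    Then the page title should be "{{ title }}"

# Attached CSV (made up):
#   url,title
#   /running-shoes,Running Shoes
#   /hiking-boots,Hiking Boots
```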
When execution of an exercise is triggered, either by visiting the “Run” tab of the respective exercise or by automatic execution, the system collects all the respective features and passes them to the test runner, where they wait to be dispatched.
Putting it to the test
Depending on the configuration profile chosen in the Drupal UI, the test runner dispatches the queued tests to the respective infrastructure.
What are the available options?
- A fast text-based browser,
- any browser available on BrowserStack, depending on the configuration settings, and
- a real Firefox.
The test runner does it all: it triggers test execution, collects all relevant data, such as the result, accompanying log files and possibly screenshots, and passes them back to the Drupal UI, where the results of the test run can be inspected.
The list of test runs after returning from the test runner, including the result and links to all the artefacts like screenshots and log files.
So why should you keep a real Firefox around locally instead of just using BrowserStack? To be able to test tracking requests when interacting with a website, we need to evaluate the network traffic. This is done by passing the traffic through BrowserMob Proxy, which allows recording and analyzing the traffic between website and browser. Neat.
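In Gherkin, a proxy-backed check might read like this sketch (the traffic steps are illustrative stand-ins for custom step definitions that inspect the proxy’s recorded traffic; all names and URLs are made up):

```gherkin
# Illustrative network-traffic assertions; the recording steps stand in
# for custom step definitions reading BrowserMob Proxy's captured traffic.
Scenario: Clicking a product teaser fires a tracking request
  Given I start recording network traffic
  And I am on "/running-shoes"
  When I follow "First product teaser"
  Then a request to "track.example.com/collect" should have been recorded
  And that request should contain the parameter "event=product_click"
```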
The verdict? Crowdsourcing acceptance tests is a keeper.
Our automated test framework has grown to be an invaluable tool for SportScheck. And the saga continues: they’re still discovering new ways to benefit from the power of automated testing. It’s a work in progress: time and again, users of the system come back to us and request new test step definitions or other features to improve its usefulness even further. In the end, it turned out to be quite the success: the people at SportScheck enjoy writing tests now - without having to deal with all the technical details.