Behind The Scenes: Software Testing at Evoluted
How do we test at Evoluted? It’s a question we ask ourselves a lot, and since I joined in 2019 we’ve massively expanded the types of testing we do and sharpened our playbook for thorough testing before release.
That question leads to others, though: do we test enough? Are we testing the right things?
Let’s step back with an example, though: you’re a client and we’re working together on a digital transformation project for your business. We’ve just demoed and handed over the first sprint’s worth of work, and you ask: how much has this been tested?
Automated testing
Developing software to achieve your business aspirations and solve complex technical challenges is a large part of what we do at Evoluted. Anyone on our team who writes code is also expected to write automated tests, ensuring their code functions as expected even when conditions are less than ideal.
These automated tests take many different forms: unit and feature tests are a staple and ensure that your business logic - when to split production between factories, what to export to your CRM - is captured and robust; whereas integration and UI tests ensure that all of the different components of a system function smoothly together and critical user journeys are always achievable.
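To make that concrete, here’s a minimal sketch of what one of those unit tests might look like, written in TypeScript with Vitest (one possible stack of many); splitProduction and its rules are hypothetical stand-ins for a client’s real business logic.

```typescript
// A minimal sketch, not our actual test suite: splitProduction and its
// rules are hypothetical stand-ins for a client's real business logic.
import { describe, expect, it } from 'vitest';
import { splitProduction } from './production';

describe('splitProduction', () => {
  it('routes overflow to the secondary factory', () => {
    // The primary factory is capped; anything beyond should spill over.
    const plan = splitProduction({ unitsOrdered: 1500, primaryCapacity: 1000 });
    expect(plan.primary).toBe(1000);
    expect(plan.secondary).toBe(500);
  });

  it('rejects less-than-ideal input', () => {
    // A negative order should never produce a silent, nonsense allocation.
    expect(() => splitProduction({ unitsOrdered: -5, primaryCapacity: 1000 })).toThrow();
  });
});
```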
As these tests are automated, our engineers can run them with a single command (UI tests are my favourite, as it’s like a ghost haunting your web browser 👻), and they’re also enforced through our CI/CD pipelines, running many times a day.
This automation makes sure that new code doesn’t break existing features, and increases the confidence and speed of our engineers.
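And for a flavour of that haunted browser, here’s a rough sketch of a UI test, using Playwright as an example tool; the page, labels and confirmation message are hypothetical.

```typescript
// A rough UI test sketch using Playwright (one browser-automation tool of
// several); the page, labels and confirmation message are hypothetical.
import { expect, test } from '@playwright/test';

test('a visitor can complete the enquiry journey', async ({ page }) => {
  await page.goto('https://example.com/contact');
  await page.getByLabel('Email').fill('visitor@example.com');
  await page.getByLabel('Message').fill('Tell me more about your services.');
  await page.getByRole('button', { name: 'Send' }).click();

  // The critical user journey ends with a visible confirmation.
  await expect(page.getByText("Thanks, we'll be in touch")).toBeVisible();
});
```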
Engineer testing
Speaking of engineers, we understand that a piece of code can be verified as working and still not be what you expected. Our engineers may have built the functionality they think is required, but if it isn’t what you want, then it fails.
We obviously try to make sure this never happens - that what you want and expect is what we deliver. We start early in a project’s lifecycle with a discovery phase that might produce wireframes, specifications and flow diagrams, then revisit them continually as the project progresses.
Features change as others are developed or stakeholder priorities shift; we try to capture these changes and make sure that our engineers understand not only the value in what they’re building, but also the evolving context in which they’re building it.
For example: you want a way to contact your customers, but there’s a canyon’s worth of difference between a fully featured notification and contact centre, and a newsletter sign-up form.
One of the most valuable parts of the testing process involves two other engineers undertaking a code review: they receive a snapshot of the code for a feature or bugfix, along with instructions on how to test it in an isolated environment. These reviews have many different benefits:
They safeguard code from unforeseen bugs and errors
Our engineers have a wealth of experience but always seek to improve, and reviews from other engineers expose them to ways of thinking and problem solving that they might not otherwise have encountered.
They validate that the code works
We encourage our testers not just to follow the instructions but to work outside them too; my trick is adding the honey bee emoji 🐝 to text fields to test Unicode handling (see the sketch after this list).
They give an opportunity for different approaches to be discussed - these might be more performant, less prone to invalid states, or just code that’s easier to maintain.
They ensure that the code does what the task needs
This is the critical one: the reviewers get the same information as the code’s author(s), and as part of the review they make sure that the code satisfies - and hopefully exceeds - the requirements, and that the interpretation of those requirements is correct.
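As promised, here’s the emoji trick as a quick automated check; saveComment is a hypothetical function standing in for any code that stores user-supplied text.

```typescript
// A tiny sketch of the 🐝 trick as a test (Vitest syntax); saveComment is
// a hypothetical stand-in for any code that stores user-supplied text.
import { expect, it } from 'vitest';
import { saveComment } from './comments';

it('round-trips multi-byte characters like 🐝', async () => {
  const saved = await saveComment({ body: 'Great service! 🐝' });
  // If an encoding or database layer mangles Unicode, this fails loudly.
  expect(saved.body).toBe('Great service! 🐝');
});
```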
I could easily write an entirely separate post on the ins and outs of code reviews. But what's most important is that while your project will have dedicated engineers, our entire team is involved in testing and sharing their expertise and insight, making sure you always get the best result.
Manual testing
Only once a feature, bugfix or epic has passed through engineer testing is it then tested by the project manager.
They’ve shepherded innumerable projects through to completion and are well-versed in topics like accessibility, performance, security and all the other facets of a project that make it successful.
At this stage the project manager will use their deep knowledge of your requirements to test the work, ensuring it delivers value to you and maintains our high standards.
From there the work will progress from an isolated, internal-only testing environment to a shared environment where we will usually demo the work to you - either in person or via a video call - then hand it over to you for your own testing and feedback.
Depending on the project structure that might happen at regular sprint planning and retrospectives or ad-hoc via any of the communication channels we support, from Slack to Asana to email.
Continuous testing
No project is ever “one and done”, and we pride ourselves on the long-term relationships we have with our clients.
Regardless of what that looks like - support credits or a maintenance contract - we continue to test. Part of that is the automated test suite mentioned earlier, but for many projects we also run regular Lighthouse audits to make sure user-facing metrics don’t degrade over time, alongside performance audits to check that our systems hold up under heavy use.
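As an illustration, a scripted audit using Lighthouse’s Node API might look something like this; the URL and score threshold below are placeholders rather than our real configuration.

```typescript
// A sketch of a scripted Lighthouse audit; the URL and 0.9 threshold are
// placeholders, not our real configuration.
import * as chromeLauncher from 'chrome-launcher';
import lighthouse from 'lighthouse';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
const result = await lighthouse('https://example.com', {
  port: chrome.port,
  onlyCategories: ['performance', 'accessibility'],
});
await chrome.kill();

// Lighthouse scores range from 0 to 1; fail the run below an agreed floor.
const performance = result?.lhr.categories.performance.score ?? 0;
if (performance < 0.9) {
  throw new Error(`Performance score ${performance} fell below the 0.9 floor`);
}
```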
We might also develop specialised tests to ensure that agreements with your customers are upheld - that might be a particular level of uptime or that an API’s implementation doesn’t deviate from its specification.
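For instance, a contract-style check against a documented API shape might look like the sketch below; the endpoint and fields are hypothetical.

```typescript
// A sketch of a contract check (Vitest syntax); the endpoint and expected
// fields are hypothetical, standing in for whatever the API spec promises.
import { expect, it } from 'vitest';

it('GET /orders/:id matches the documented shape', async () => {
  const response = await fetch('https://api.example.com/orders/123');
  expect(response.status).toBe(200);

  // The spec promises these fields; a renamed or missing one fails the build.
  expect(await response.json()).toMatchObject({
    id: expect.any(String),
    status: expect.any(String),
    total: expect.any(Number),
  });
});
```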
Value versus effort
With all this testing going on, the question has to be asked: when are we testing too much?
There is no bullet-proof way of ensuring that the software we produce is bug-free, blisteringly performant and flawlessly user-friendly. But as an agency, our worth is always in pragmatic solutions: we don’t have the luxury of spending hundreds of hours and tens of thousands of pounds on solutions that don’t move the needle for our clients.
So all of our testing practices are developed from introspection, research and, well, testing!
Code reviews were still relatively new to Evoluted when I joined, and that first iteration bears little resemblance to how we do them now. Automated UI tests are a recent addition, owing to the time it’s taken for them to prove their worth with some of our clients. Accessibility has been at the heart of Evoluted for over a decade, but finding the most effective place for it to be tested - and for feedback to be acted upon - has taken a lot of iteration. The list goes on, and if you asked me how we test in a few years’ time, the answer might look quite different.
Really though, it’s all about the value to you, the client.
Our services are bespoke, which means we’re not going to apply every testing methodology to every project. It’s not worthwhile for us to test Shopify’s order process or Stripe’s payment flow, but testing that we don’t interfere with them is.
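A sketch of what that kind of “don’t interfere” check could look like, again with a hypothetical page and button:

```typescript
// A sketch of a non-interference check (Playwright); the basket page and
// button are hypothetical. We assert a clean handover to Stripe, no more.
import { expect, test } from '@playwright/test';

test('checkout hands off to Stripe untouched', async ({ page }) => {
  await page.goto('https://example.com/basket');
  await page.getByRole('button', { name: 'Pay now' }).click();

  // Stripe's own flow is their responsibility; our test ends at the handover.
  await expect(page).toHaveURL(/checkout\.stripe\.com/);
});
```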
Inevitably we’ll slip up at some point: a bug will get through, we’ll misunderstand a request, or something entirely unexpected will happen - despite working with machines, we’re only human. We’ll learn from it though, whether through project retrospectives, incident post-mortems or new procedures to make sure it doesn’t happen again - after all, that kind of iterative process is at the heart of everything we do as engineers!
We can stop the role play now, though if any of this sounds like what you’re looking for, please get in touch.