I’ve been working with a client whose development team and test team are separate. There’s history involved, but to cut a long story short, the testers don’t trust the developers, and the developers think the testers waste a lot of time manually re-testing functionality that has already been tested automatically.
One of the root causes of this distrust is that the testers can’t see the automated tests the developers are writing. Sure, there’s the output from the build process. Sure, there are the tests themselves. But the testers can’t read Java, so they can’t work out what has already been tested and what remains untested. This is compounded by occasions when the developers thought a feature was fully tested, but defects were still uncovered during manual test.
A further problem is that the testers know what the system should do, but not how the system is built. They have an understanding of the large subsystems (e.g. database, print server, document repository), but not the implementation components (e.g. business rules, internal validations, shared error reporting). The upshot of this is that they express all their test scripts in an end-to-end format – manual scripts of how a user interacts with the system. This gives rise to a lot of duplication.
So, how can we address this lack of trust between the test team and the development team? For starters we need to make the technical tests that are of interest to the testers visible to the testers. This is equivalent to bringing component and/or unit tests above the visibility waterline (see my previous post The Testing Iceberg).
Using Cucumber, for example, this is a ‘simple’ matter of crafting the relevant scenarios and writing the step definitions that wire them up to the underlying code (which might be the unit tests that already exist). This will require collaboration between testers and developers to create the ubiquitous language that the scenarios are written in, which is itself a step towards a better relationship between the teams.
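As a sketch of what that wiring might look like – the feature, step wording and class names below are all hypothetical, not taken from the client’s system – a scenario could be written in the shared ubiquitous language, with a thin Java step definition delegating to the domain logic the developers already exercise in their unit tests:

```gherkin
Feature: Order validation
  Scenario: An order with no line items is rejected
    Given an order with no line items
    When the order is validated
    Then the order is rejected with the reason "order must contain at least one line item"
```

```java
// Hypothetical Cucumber-JVM step definitions (requires the cucumber-java
// dependency). Order, OrderValidator and ValidationResult are assumed names
// for existing domain classes already covered by unit tests.
import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import static org.junit.Assert.*;

public class OrderValidationSteps {
    private Order order;
    private ValidationResult result;

    @Given("an order with no line items")
    public void anOrderWithNoLineItems() {
        order = new Order();
    }

    @When("the order is validated")
    public void theOrderIsValidated() {
        result = new OrderValidator().validate(order);
    }

    @Then("the order is rejected with the reason {string}")
    public void theOrderIsRejected(String reason) {
        assertFalse(result.isValid());
        assertEquals(reason, result.reason());
    }
}
```

The testers only ever read the Gherkin; the Java stays below the waterline as plumbing.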
We will also need to address the tendency of the test team to express all their test scripts in a workflow format (workflow tests can be useful, but I’ll cover why we try not to overuse them in the next blog). I have adopted two parallel approaches to this.
The first is to ensure that the developers give the testers insight into the construction of the system under test. They provide a (very) high level schematic of the components that the system is built out of, along with the major interactions between them. This component map allows the testers to understand why and how more focussed, domain model testing can be of use. It acts as scaffolding within which developers and testers can communicate effectively.
The second approach, which I will cover in detail in a future blog, is more technical. The scenarios are still expressed in an end-to-end style, but we provide Cucumber tags that modify how the system under test is configured. Using this approach we start by testing a feature using only end-to-end tests. As confidence develops that the feature is stabilising we annotate some scenarios with the relevant tags and the setup then utilises stubs/mocks/canned data as appropriate, without changing the wording of the scenarios at all.
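To make this concrete – the tag name, hook and stub class below are illustrative assumptions, not details from the client’s system – a stabilised scenario might simply gain a tag, and a Cucumber-JVM hook keyed to that tag would reconfigure the system under test:

```gherkin
@stubbed
Scenario: A rejected order is reported to the user
  Given an order with no line items
  When the order is submitted
  Then the user sees the message "order must contain at least one line item"
```

```java
// Illustrative Cucumber-JVM hook. The @stubbed tag and StubPaymentGateway
// are hypothetical; the point is that the scenario wording never changes.
import io.cucumber.java.Before;

public class TestConfigurationHooks {
    @Before("@stubbed")
    public void useStubs() {
        // Before any step in a @stubbed scenario runs, swap the real
        // downstream dependency for a canned-response stub.
        SystemUnderTest.configure(new StubPaymentGateway());
    }
}
```

Removing the tag (or running without it) exercises the same scenario end-to-end against the real components.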
Breakdown of trust is crippling in any organisation. It takes time, skill and effort to overcome the issues and rebuild trust. My client has made good progress, but there’s more to do. In many respects the problem is rooted in the separation of development and test in the first place, but even if test and dev were in a single team, similar communication issues would still need to be tackled. High fidelity communication between everyone involved in a development project is essential for effective delivery, which is why BDD is such a useful technique.