At DLT we have seen many examples of software development processes across numerous agencies, and unfortunately, most of them suffer from a lack of automated testing. As a developer at heart, I find this particularly concerning: defect-free code is a key measure of development success, and a point of personal and professional pride, like a doctor’s Hippocratic Oath to do no harm.
Writing and running automated tests not only proves that your code change is correct but also allows future developers to keep that oath by not harming the intent of your code when making their own changes. And, contrary to popular opinion, users are not testers; they should be the last people to find an issue with an application. So, let’s talk about the elephant in the room … let’s talk about testing.
With waterfall project management, testing is given its own phase with its own specialized team whose chief job seems to be telling the prior development team that they didn’t do their job correctly. This naturally causes some friction but is somewhat karmic in nature, as development teams often make the same complaints about the prior design or requirements-gathering teams.
Agile project management remedies this infighting by merging the development and testing responsibilities into the same effort, while a DevOps mindset aspires to automate these processes into continuous integration and delivery (CI/CD) practices. The only way to reach this continuous nirvana state is through automation. Manual testing will always have a place for aesthetics and usability, but automation is required to achieve these continuous processes and to reap the maximum benefit.
But it’s not as easy as just coding a few tests or everyone would be doing it. First, it’s a unique skill set. Programming by its nature is a creative act, building functional software out of concepts, syntax, and thousands of tiny decisions. Testing is the act of attacking and systematically tearing down a software product to reveal its deficiencies, especially when conducting security testing. Being good at one capability does not necessarily translate well to the other, but developers need to grow both skill sets.
Like a metalsmith, they need to form the shape of their code and then harden and temper it by hammering it with the right amount of testing heat at the right points throughout the process or risk ruining the material.
To that end, testing tools need to be in the hands of developers, so that they can run any test in their individual environment, as well as automatically in the CI/CD pipelines. A failing test that halts a pipeline should be caught before it affects other efforts downstream. But you can only hold a developer responsible for these failing tests if you have given them complete access in their own environment, so that the developer can validate everything prior to committing their changes.
Inappropriate testing affects the time to create, operate, and maintain an application. Extreme Programming (XP) championed the concept of Test-Driven Development (TDD), which consists of creating a failing test case before writing a single line of working code, and then writing just enough code to make that failing test pass. With this test harness, a developer can then safely refactor the code, making it more efficient or elegant, while guaranteeing that the tests still pass. Yet developers, being very literal and extreme, can take this too far and write a hundred test cases proving that adding two numbers together “still works.”
Maybe that is important if you are writing a banking application, but running a hundred different test scenarios every time two numbers are added together would be lengthy to build and could take days to execute across the full suite. Worse, it makes your entire process brittle: if the functionality suddenly changes from addition to multiplication, thousands of tests must change because of one operational change in the code. This is far from a repudiation of TDD, but rather a recognition that not all tests are equal and that they need to be applied judiciously, with skill and with forethought about maintenance.
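To make the red-green-refactor rhythm concrete, here is a minimal sketch using Python’s built-in `unittest` module. The `add` function and test names are hypothetical stand-ins; the point is that one focused test written first is worth more than a hundred near-duplicates.

```python
import unittest

# Hypothetical unit under test. In TDD, the test below is written
# first and fails ("red"); this code is then written to make it
# pass ("green") before any refactoring.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    # One focused test is enough; a hundred variations of the same
    # assertion only slow the suite and resist change later.
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)
        self.assertEqual(add(-1, 1), 0)

# Run the test case explicitly so the sketch is self-contained.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
)
```

With this harness in place, `add` can be refactored freely; the suite, not the developer’s memory, guards the original intent.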
Nowhere is this brittleness depicted more clearly than when testing the application through its user interface (UI). UI tests are valuable as they are the validation of a user’s actual experience. However, they are the most brittle as UIs change frequently and can be affected by browser choices, devices, and screen sizes.
If you are not extremely careful, a simple UI change, such as moving a button, could suddenly cause every automated system test to fail. Thus, testing through a UI is often shunned in favor of testing the internal functionality, but that is to the detriment of the user. Automating a minimal set of UI tests is still advantageous to “smoke test” that an environment is stable and that critical functions are available for the user in the latest deployment.
The most obvious point of testing is validating that the code correctly implements the desired functionality, but even here we can run into issues. From a developer’s internal point of view, this effort is “unit testing”, or testing that ensures that a unit of code achieves the developer’s intent. But sometimes the developer and business owners are not aligned.
Acceptance Test-Driven Development (ATDD) extends TDD with an outside-in perspective, advocating that tests be written by business owners along with the developers. Acceptance tests are designed to prove that the user requirements are met by the developer’s code, which is a subtle but important distinction, placing the emphasis on the user versus the developer. ATDD is often used synonymously with the Behavior-Driven Development (BDD) style, meaning that BDD is how ATDD tests are automated and achieved.
Acronyms and styles aside, unit testing is still valid and should be executed by developers when making changes to a unit of code or periodically when needed. But acceptance testing speaks more to the intent of the code and should be automated and validated in the CI/CD pipeline after every change as part of “regression testing”, or validating that all functionality is still correct and not just the unit that has recently changed.
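An acceptance-level test reads like a user story, typically in the BDD Given-When-Then idiom. Here is a hedged sketch; `ShoppingCart` is a hypothetical domain object invented for illustration, and real teams would often express the scenario in a tool like Cucumber or behave.

```python
# Acceptance-level sketch in Given-When-Then style, phrased
# around user intent rather than a code unit. ShoppingCart is a
# hypothetical domain object for illustration only.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_customer_sees_correct_total():
    # Given a customer with an empty cart
    cart = ShoppingCart()
    # When they add two items
    cart.add("notebook", 3.50)
    cart.add("pen", 1.25)
    # Then the total reflects both purchases
    assert cart.total() == 4.75

test_customer_sees_correct_total()
```

Because the scenario names the user’s behavior, a business owner can review it line by line, which is exactly the alignment ATDD is after.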
Another challenge to testing is having code that is easily “testable”, meaning code that is smaller and independent, making it easy to understand and therefore easier to test. Testability has been a driving force for architectural enhancements for many years moving toward developing ever-smaller components or microservices. Ensuring testability requires a fierce devotion to loose coupling by isolating and eliminating external dependencies. Any external dependency that cannot be eliminated must be mocked for testing, which is an additional effort to create and maintain, and is a potential source of errors when the external service itself changes behavior.
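The mocking effort described above can be sketched with Python’s `unittest.mock`. The `OrderService` and payment gateway here are hypothetical; the pattern to note is that the dependency is injected rather than hard-coded, which is the loose coupling that makes the unit testable at all.

```python
from unittest.mock import Mock

# Hypothetical service with its external dependency injected,
# so tests can substitute a mock for the real payment gateway.
class OrderService:
    def __init__(self, payment_gateway):
        self.gateway = payment_gateway

    def checkout(self, amount):
        receipt = self.gateway.charge(amount)
        return {"status": "paid", "receipt": receipt}

# The real gateway is replaced by a mock. Note the maintenance
# cost: this mock must be kept in step with the real service's
# behavior as it evolves.
gateway = Mock()
gateway.charge.return_value = "rcpt-001"

service = OrderService(gateway)
result = service.checkout(42.00)
gateway.charge.assert_called_once_with(42.00)
```

The test now runs in milliseconds with no network access, but the mock itself becomes code to maintain, which is the trade-off noted above.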
With an evolution into microservices, adopting Event-Driven Architecture (EDA) and Domain-Driven Design (DDD) patterns can help achieve these testability goals along with many additional benefits. Unfortunately, microservice architectures have created new problems with “integration testing”, or testing the system from end to end, as the flow from service to service is more adaptable and volatile. Testing each microservice should focus on the Application Programming Interface (API) layer, which is an operational contract with the “user”, and is, therefore, the outside-in, acceptance-level test equivalent.
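A contract test at that API boundary might look like the following sketch. The inventory handler is hypothetical and stands in for a real HTTP endpoint; the important point is that the test asserts only on the contract, status code and JSON shape, not on the service’s internals.

```python
import json

# Hypothetical request handler for a small "inventory"
# microservice. The contract tests below exercise only the API
# boundary: status code and agreed-upon JSON fields.
def handle_get_item(item_id):
    inventory = {"42": {"name": "widget", "stock": 7}}
    if item_id in inventory:
        return 200, json.dumps(inventory[item_id])
    return 404, json.dumps({"error": "not found"})

def test_contract_known_item():
    status, body = handle_get_item("42")
    payload = json.loads(body)
    # The contract: 200 plus exactly the agreed-upon fields.
    assert status == 200
    assert set(payload) == {"name", "stock"}

def test_contract_missing_item():
    status, _ = handle_get_item("999")
    assert status == 404

test_contract_known_item()
test_contract_missing_item()
```

Because the tests pin down the contract rather than the implementation, the service team can refactor internals freely while downstream consumers stay protected.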
That is a big elephant, fueled by many drivers, and it feels like I’ve only scratched its surface. There is still load and performance testing, infrastructure testing, and security testing, along with so many others. No matter where you are in your testing coverage, there are probably more scenarios to cover. Yet the effort must continue because testing is important and needs to be addressed to keep our oaths to our profession and to our users. So, remember:
- Continuously automate everything.
- Build skills within your developers to hammer at their own creations and give them access to use these tools in their own environments.
- Aim for the right amount of testing that can execute quickly, remembering that you need to maintain your tests along with your code.
- Judiciously use UI tests to validate your builds and environments.
- Co-develop acceptance level tests with your business owners to ensure that your user requirements are always met.
- Ensure that code functions are isolated and easily testable.
- Adapt testing to microservice environments with events and domain-driven patterns.
Luckily there have been advancements in continuous testing capabilities and new products have emerged to help meet these challenges. New continuous testing products can scan code and automatically generate standard unit test cases, reducing the time for creation and maintenance. Some utilities can monitor traffic patterns observed from existing APIs in production and create mock utilities automatically, eliminating that effort as well.
Other products use artificial intelligence to recognize tests that have failed only because of user interface changes, not genuine defects, and then work with developers to correct those tests automatically over time. This self-healing capability greatly diminishes the brittleness of UI tests, restoring the highly valuable benefit of testing through the user interface.
As with everything in the technological world, continuous testing products have advanced with the times, so investigate and be open to new alternatives to meet your testing needs.