All humans make mistakes. All untested code is flawed. Nobody wants to take responsibility for a system that is known to be broken. Good Automated Testing Practices are:
- Believe in tests
  - Only tests can verify that the system is in a shippable / deployable state
- Automate test execution
  - Only automated tests can assure constant quality
  - Only automated tests can prove the quality of added functionality
  - Only tests executed immediately provide feedback loops fast enough for continuous development
- Optimize test execution
  - Cheap tests must run first; the more expensive a test, the later it runs (see the staged test runner sketch after this list)
  - Parallelize tests to decrease time-to-response
- Test beyond bugs
  - Have performance tests with metrics, to identify slow build-up of performance degradation (see the performance test sketch after this list)
  - Have tests for non-functional requirements (availability, scalability, security, …)
  - Have tests for Infrastructure as Code (see the Terraform plan sketch after this list)
  - Use any appropriate tooling that supports quality (such as static analysis or code style checkers), provided it can be automated
- Act on failing tests
  - A failing test pulls the Andon Cord and triggers Swarm and Solve; aside from incidents, fixing a failed test is always the highest priority
- After tests: Build packages
  - Build package creation must be independent of tests, to ensure dependencies are exposed and known (see the build step sketch after this list)
  - Build packages are a mandatory prerequisite for “same deploy in every environment”
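
A minimal sketch of the “cheap first, parallel within a stage” idea, assuming pytest and hypothetical markers `unit`, `integration`, `contract`, and `e2e`; the commands and stage ordering are illustrative, not prescribed above.

```python
"""Staged test runner sketch: cheapest suites first, independent suites in parallel."""
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

# Stages ordered by cost so the cheapest feedback arrives first.
STAGES = [
    ["pytest -m unit"],                               # seconds
    ["pytest -m integration", "pytest -m contract"],  # minutes, run in parallel
    ["pytest -m e2e"],                                # most expensive, runs last
]

def run(cmd: str) -> int:
    """Run one test command and return its exit code."""
    return subprocess.run(cmd, shell=True).returncode

for stage in STAGES:
    # Parallelize independent suites within a stage to decrease time-to-response.
    with ThreadPoolExecutor(max_workers=len(stage)) as pool:
        results = list(pool.map(run, stage))
    if any(code != 0 for code in results):
        sys.exit(1)  # fail fast: do not pay for expensive stages after a cheap one fails
```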
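
A minimal performance-test sketch that records a latency metric for trend dashboards and fails once a hard budget is exceeded; `create_order`, the 50 ms budget, and the metrics file name are hypothetical placeholders.

```python
"""Performance regression test sketch: record a metric, enforce a budget."""
import json
import time

ITERATIONS = 50

def create_order() -> None:
    """Placeholder for the real operation under test."""
    time.sleep(0.005)

def test_create_order_stays_within_budget() -> None:
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        create_order()
    avg_ms = (time.perf_counter() - start) / ITERATIONS * 1000

    # Persist the metric so a dashboard can reveal slow build-up across builds ...
    with open("perf_metrics.json", "w") as fh:
        json.dump({"create_order_avg_ms": avg_ms}, fh)

    # ... and fail hard once the budget is exceeded.
    assert avg_ms < 50, f"create_order averaged {avg_ms:.1f} ms, budget is 50 ms"
```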
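
A minimal Infrastructure-as-Code test sketch, assuming a Terraform plan exported with `terraform show -json tfplan > plan.json`; the policy it checks (every S3 bucket declares server-side encryption) is an illustrative assumption.

```python
"""IaC test sketch: assert a policy over an exported Terraform plan."""
import json

def planned_resources(path: str = "plan.json"):
    """Return the resources Terraform plans to create or change."""
    with open(path) as fh:
        plan = json.load(fh)
    return plan.get("planned_values", {}).get("root_module", {}).get("resources", [])

def test_all_buckets_declare_encryption() -> None:
    buckets = [r for r in planned_resources() if r["type"] == "aws_s3_bucket"]
    for bucket in buckets:
        assert "server_side_encryption_configuration" in bucket["values"], (
            f"{bucket['address']} has no server-side encryption configuration"
        )
```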
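
A minimal sketch of a build step that does not depend on the test step, assuming a standard pyproject-based Python project and the `build` package (`pip install build`); the checksum file supports promoting the exact same artifact to every environment.

```python
"""Build step sketch: create the package independently of tests, checksum it."""
import hashlib
import pathlib
import subprocess

def build_package(out_dir: str = "dist") -> pathlib.Path:
    # Build in its own step, not as a side effect of the test run, so every
    # dependency the package needs must be declared explicitly.
    subprocess.run(["python", "-m", "build", "--wheel", "--outdir", out_dir], check=True)
    wheel = next(pathlib.Path(out_dir).glob("*.whl"))
    # Record a checksum: the same artifact is deployed to every environment.
    digest = hashlib.sha256(wheel.read_bytes()).hexdigest()
    pathlib.Path(f"{wheel}.sha256").write_text(f"{digest}  {wheel.name}\n")
    return wheel

if __name__ == "__main__":
    print(build_package())
```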
Noteworthy
- A decrease in test coverage metrics can signal excessive pressure on developers, who then prioritize feature delivery over quality to meet deadlines / release dates (see the coverage-trend sketch below)
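
A minimal sketch of such a coverage-trend signal, assuming each build reports its total coverage percentage and the history is kept in a local JSON file; the 2-point tolerance is an arbitrary illustrative threshold.

```python
"""Coverage-trend sketch: flag a drop in coverage against the previous build."""
import json
import sys
from pathlib import Path

HISTORY = Path("coverage_history.json")
TOLERANCE = 2.0  # allowed drop in percentage points before the check fails

def check(current_pct: float) -> int:
    history = json.loads(HISTORY.read_text()) if HISTORY.exists() else []
    exit_code = 0
    if history and current_pct < history[-1] - TOLERANCE:
        # A sustained drop is a signal to look at delivery pressure, not only at the code.
        print(f"Coverage fell from {history[-1]:.1f}% to {current_pct:.1f}%")
        exit_code = 1
    HISTORY.write_text(json.dumps(history + [current_pct]))
    return exit_code

if __name__ == "__main__":
    sys.exit(check(float(sys.argv[1])))  # e.g. `python coverage_trend.py 83.4`
```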