
Build quality in early and often

One of the most important aspects of building good software is to encourage the cycle of build, measure, and learn. For companies to innovate and be quick to market, they must foster a good engineering culture that sets teams up for success. In an ideal world, you should deliver to production daily. However, if you deliver software fast but it is full of bugs, your product has a lower chance of succeeding. As an agile tester, one of your focus points has to be speeding up the feedback loop while maintaining good quality. Over the years I have come across a few good practices that help teams build the product right and also build the right product.


Test Engineers are often treated as the last line of defense for finding problems before release, yet, like all software activities, their focus is shaped by the information available to them. To better understand the risk associated with changes and their potential impact, Test Engineers should be involved as early as possible.
Test Engineers should be engaged when work is about to start (kick-offs) and, ideally, even earlier, to help refine the thinking about the impact of upcoming changes.

Test Engineers know who is working on which stories and whom to contact if things start to go wrong. They sit down at the start of a story to agree on what needs to be done as part of it, and close the feedback loop when a developer shows the code working on their machine.

Acceptance criteria are a way of ensuring that everyone understands what, specifically, will be delivered as part of a work item. Writing them in too much detail too early risks lots of rework (and worse, resistance to important rework), yet capturing them too late means that code will need to be reworked.

Getting precise agreement between the developers, Test Engineers, and the product owner is best done just before the work begins. It is fine to start thinking about the criteria in advance, but watch out for the rework that early precision can cause.
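One way to keep that agreement honest is to capture each criterion as an executable check, so "the criteria pass" is something you can run rather than debate. The sketch below uses an invented "order discount" story; the rule, threshold, and function name are all hypothetical, purely for illustration.

```python
# Hypothetical story: "Orders of 50.00 or more receive a 10% discount."
# The rule and names below are invented for illustration only.

def apply_discount(order_total):
    """Apply a 10% discount to orders of 50.00 or more (hypothetical rule)."""
    if order_total >= 50.00:
        return round(order_total * 0.90, 2)
    return order_total

# Acceptance criterion 1: orders of 50.00 or more get 10% off
assert apply_discount(100.00) == 90.00

# Acceptance criterion 2: orders below 50.00 are unchanged
assert apply_discount(49.99) == 49.99

# Acceptance criterion 3: the boundary value itself is discounted
assert apply_discount(50.00) == 45.00
```

Because each criterion maps to one assertion, the developer, Test Engineer, and product owner are agreeing on the same artifact that will later gate the build.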

As systems continue to grow, there are simply more paths and emergent behaviors. Maintaining the regression test suite becomes more laborious as new work increments or changes the system, and it becomes difficult to balance repeating yourself too much, leaving room for test discovery, and still exercising the right number of paths.

As a system evolves and grows over time, its complexity also increases, leading to potentially more interesting behavior. Testing (even automated testing) still requires choosing where to apply the effort to get the best return on investment. Risk-Based Testing provides a way to de-scope tests in a logical manner and makes the risk coverage visible.
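A common way to make that de-scoping explicit is to score each test area by likelihood and impact and cut below a threshold. This is only a minimal sketch; the areas, scores, and cut-off are invented, and real teams would agree these numbers together rather than hard-code them.

```python
# Illustrative risk-based test selection: risk = likelihood * impact
# (both scored 1-5), with areas below a threshold de-scoped.
# All names and scores here are made up for the example.

test_areas = [
    {"name": "payment processing", "likelihood": 4, "impact": 5},
    {"name": "user profile page",  "likelihood": 2, "impact": 2},
    {"name": "order checkout",     "likelihood": 3, "impact": 5},
    {"name": "help page styling",  "likelihood": 1, "impact": 1},
]

RISK_THRESHOLD = 6  # arbitrary cut-off for this sketch

for area in test_areas:
    area["risk"] = area["likelihood"] * area["impact"]

# Highest-risk areas first; everything under the threshold is de-scoped
in_scope = sorted(
    (a for a in test_areas if a["risk"] >= RISK_THRESHOLD),
    key=lambda a: a["risk"],
    reverse=True,
)
de_scoped = [a for a in test_areas if a["risk"] < RISK_THRESHOLD]

for a in in_scope:
    print(f"TEST  {a['name']} (risk {a['risk']})")
for a in de_scoped:
    print(f"SKIP  {a['name']} (risk {a['risk']})")
```

The useful part is not the arithmetic but the visibility: the de-scoped list is a written record of which risks the team has consciously chosen not to cover.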

This Definition of Done is a starting point (it should be refined by the team through retrospectives). It serves as a reminder during estimation sessions and as an explicit guideline for what “Done” means. It should also be part of the onboarding process, so new team members understand the different elements that may need to be considered during development.
A starting Definition of Done is:
  • All code has been committed into the correct places
  • Acceptance criteria pass on an integration environment
  • Acceptance criteria have been automated through the appropriate level of unit, integration and end-to-end tests
  • Necessary documentation for that story has been completed
  • The Continuous Integration environment has assembled the final artifacts, run its quality gates, and all builds are green (including builds for dependent application components)
  • A demonstration of the functionality meets the expectations of the Product Owner and the Test Engineer
  • Any defects raised and agreed with the Test Engineer to be fixed have been fixed


