A lot has been written on the test-first approach in agile development.  While developers say they practice it, our analysis of over 5,000 completed software development efforts shows that most do not.  Yes, they might write test criteria into their user stories.  However, little seems to be done with those criteria once the story moves from specification into implementation.  Instead of carrying the criteria through their full cycle, developers tend to rely on an independent test group to put them into action as the software is developed.  This would not be bad if the testers used the test criteria properly.  Unfortunately, testers tend to view the product as a black box and test at the boundaries rather than looking inside at the details, where most of the defects occur.  Developers say this is okay because their toolsets catch most of these defects as the software is built.  They then argue that the defects that slip through can be backlogged and fixed as the software goes through its next iterations.
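To make the gap concrete, here is a minimal sketch of what carrying the criteria through their full cycle could look like: an acceptance criterion from a user story written as an executable test before the feature code exists.  The sketch uses Python with pytest; the story, the names, and the discount rules are all hypothetical illustrations, not taken from any real project.

```python
# A minimal test-first sketch (hypothetical story and names).
# Story: "As a customer, I can apply a discount code at checkout;
# expired codes are rejected."
# The tests below are written from the acceptance criteria first;
# the implementation at the bottom is added afterwards to make them pass.

from datetime import date

import pytest


def test_valid_code_reduces_total():
    # Criterion 1: a valid 10%-off code reduces the total.
    assert apply_discount(100.00, "SAVE10", today=date(2024, 6, 1)) == 90.00


def test_expired_code_is_rejected():
    # Criterion 2: an expired code is rejected, not silently ignored.
    with pytest.raises(ExpiredCodeError):
        apply_discount(100.00, "SPRING23", today=date(2024, 6, 1))


# --- implementation, written after the tests to make them pass ---

class ExpiredCodeError(Exception):
    """Raised when a discount code is past its expiry date."""


_CODES = {  # code -> (percent off, expiry date)
    "SAVE10": (10, date(2024, 12, 31)),
    "SPRING23": (15, date(2023, 5, 31)),
}


def apply_discount(total, code, today):
    percent, expires = _CODES[code]
    if today > expires:
        raise ExpiredCodeError(code)
    return round(total * (100 - percent) / 100, 2)
```

Run with `pytest` and both tests pass; delete the implementation and they fail, which is exactly the point of writing them first.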

As you would expect, such practices result in poor quality in terms of the number of defects found and fixed during the development cycle.  Quality declines by as much as 25 to 35 percent when such practices are used, because many of the tricky errors are not caught until after the package is released to the user.  In addition, the use of third-party testers is 20 to 40 times more expensive than testing by the developers because of the added costs of collaboration, communication, and management.  While time might be saved by having the package tested by third parties as it is developed, the additional costs and the decline in quality make such practices extremely wasteful.

We also looked at Independent Verification and Validation (IV&V) practices.  These add even more cost and effort.  We found that IV&V efforts were justified only when the cost of defects was so high that every effort had to be made to ensure that the software was defect-free, as in safety- and security-critical applications like flight safety and nuclear power certifications.

Many argue that a better approach than independent testing and IV&V would be to have the developers test their own products as they design, develop, and integrate them.  Such a practice is also much more in tune with the test-first dogma that fills the literature.  “How do you do this?” you are probably asking.  It is easier than you think.  For example, in Extreme Programming, you task one developer with writing and running the tests while the other designs, develops, and builds the product.  As another example, in Scrum, you would have the Scrum Master review the defect backlog with the team during the daily standup to make sure that the defects the Product Owner deems critical are fixed and the others are backlogged (a toy sketch of that triage follows).  For both of these examples, you would have your agile quality subject matter expert review the data to make sure that the team is performing to expectations.  If it is not, the expert would mentor the team members to correct the deficiencies noted.
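Here is that toy sketch of the standup triage, again in Python.  The Defect fields, the priority labels, and the standup_report helper are hypothetical illustrations of the idea, not a prescribed Scrum artifact.

```python
# A toy sketch of the standup triage described above.  All field
# names and priority labels are hypothetical.

from dataclasses import dataclass


@dataclass
class Defect:
    id: int
    summary: str
    po_priority: str  # the Product Owner's call: "critical" or "backlog"
    fixed: bool = False


def standup_report(defects):
    """Split the defect list the way the Scrum Master reviews it:
    critical items still open this sprint vs. items the Product
    Owner has agreed to backlog for a later iteration."""
    must_fix = [d for d in defects if d.po_priority == "critical" and not d.fixed]
    backlogged = [d for d in defects if d.po_priority == "backlog"]
    return must_fix, backlogged


if __name__ == "__main__":
    board = [
        Defect(101, "checkout total off by one cent", "critical"),
        Defect(102, "tooltip typo", "backlog"),
        Defect(103, "expired codes accepted", "critical", fixed=True),
    ]
    must_fix, backlogged = standup_report(board)
    print("Fix now:", [d.id for d in must_fix])       # [101]
    print("Backlogged:", [d.id for d in backlogged])  # [102]
```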

Do such test-first practices result in higher-quality products?  For sure.  Not only is the software better, it is cheaper to produce and gets delivered sooner.  Plus, developers acquire a core competency in testing.  This is a “win-win” for everyone involved in the development.

This blog is taken from Volume 2 of my new book, “Agile Software Quality: One Team, One Approach, Build Quality In,” which will be available on Amazon.