Many agile processes are just that: processes. However, one key to success that is often overlooked is the technical project infrastructure, along with the discipline and craftsmanship required of the team. While agile is lightweight, it also sets the bar for professional standards higher than many organizations are used to.
It is all about setting a standard for quality and craftsmanship with zero tolerance for defects.
First of all, we gain a big win by moving every artefact required to build the project into a source control system and creating a build script to assemble it all. Gone will be the days of “but it works on my machine”. It may seem obvious, but for many projects even this is lacking.
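To make the idea concrete, here is a minimal sketch of such a single-command build script in Python. The step names and the Java compile/package commands are purely illustrative; substitute whatever your project actually needs:

```python
import subprocess

def run_steps(steps):
    """Run each named build step in order; stop the line on the first failure."""
    for name, cmd in steps:
        print(f"[{name}]", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print(f"BUILD FAILED at step '{name}'")
            return False
    print("BUILD OK")
    return True

# Illustrative steps for a hypothetical Java project; use your own
# compiler and packaging commands here.
BUILD_STEPS = [
    ("clean",   ["rm", "-rf", "build"]),
    ("compile", ["javac", "-d", "build", "src/Main.java"]),
    ("package", ["jar", "cf", "build/app.jar", "-C", "build", "."]),
]
# run_steps(BUILD_STEPS) then assembles everything with one command.
```

The point is not the tool (make, Ant or a shell script work just as well) but that the entire build is captured as an executable, versioned artefact rather than tribal knowledge.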
The second step is building something that is installable. This means eliminating the waste of lengthy instructions on how to set up the system properly and replacing them with a script. That way it is easy to deploy the project frequently and without error, and the configuration problems that pop up in complex environments with a lot of manual configuration are curbed.
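An install script can be equally small. The sketch below copies the build artifact and generates its configuration file; the paths, the app.properties name and the settings format are invented for illustration:

```python
import shutil
from pathlib import Path

def install(artifact, target_dir, settings):
    """Scripted install: copy the build artifact and generate its config.

    The layout and the app.properties file are illustrative assumptions,
    not a real product structure."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    shutil.copy2(artifact, target / Path(artifact).name)
    # Generate the configuration instead of asking a human to edit it by hand.
    lines = [f"{key}={value}" for key, value in settings.items()]
    (target / "app.properties").write_text("\n".join(lines) + "\n")
```

Because the same script runs everywhere, deploying to a test server is the same one command as deploying locally, and no environment depends on someone remembering a setup step.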
Together, this enables us to know what we have, build it and deploy it in a controlled, repeatable manner with a single click or command.
So far, it does not take rocket science - just a bit of discipline.
The next level of professional decency is to apply automated unit testing. Taking a test-first approach gives us a massive quality boost and, as a secondary effect, a much better architecture: poorly designed systems are so hard to test that test-first forces us to create better designs.
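In miniature, test-first looks like this, using Python's standard unittest and a hypothetical shopping-cart example: the test is written first and fails, and then the simplest code that makes it pass is added.

```python
import unittest

# Step 1: write the test first. It fails until the code below exists.
class CartTest(unittest.TestCase):
    def test_total_sums_item_prices(self):
        cart = Cart()
        cart.add("apple", 3)
        cart.add("pear", 2)
        self.assertEqual(cart.total(), 5)

# Step 2: write the simplest code that makes the test pass.
class Cart:
    def __init__(self):
        self._items = {}

    def add(self, name, price):
        self._items[name] = price

    def total(self):
        return sum(self._items.values())
```

Notice that the test forces Cart to have a small, callable interface; code that can only be exercised through three other subsystems never survives this discipline.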
Needless to say, the tests should be run as part of the build and a failed test treated as a stop-the-line issue. There is no point in completing the build and deploying a system to a test environment when we already know that the application does not work. Instead, bugs should be fixed immediately when they are discovered.
All these steps can be done by the developers alone.
The next step is to introduce automated integration or acceptance testing. Here we engage the customer in defining the test cases. These tests are also run as part of the build. There are many levels of this, ranging from integration testing below the GUI level using tools like FIT (http://fitnesse.org/) to testing through the user interface using frameworks like Selenium (for web applications - http://www.openqa.org/selenium/). The secret is to use simple tools and avoid the kind of brittle tests that break when underlying data changes or someone rearranges a few controls in the UI.
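The essence of a FIT-style test is a table of customer-supplied examples checked directly against the code, below the GUI. Here is a tiny sketch of the idea; the discount rules and the numbers are invented, not taken from any real system:

```python
def discount(order_total, is_member):
    """System under test: hypothetical discount rules agreed with the customer."""
    if not is_member:
        return 0.0
    return order_total * (0.10 if order_total >= 500 else 0.05)

# Customer-readable examples, FIT-style: order total, member?, expected discount.
# Each row comes straight from a conversation with the customer.
ACCEPTANCE_TABLE = [
    (100.0, False, 0.0),
    (100.0, True,  5.0),
    (500.0, True,  50.0),
]

def run_acceptance_tests(table):
    """Check every customer example; return the rows that fail."""
    return [
        (total, member, expected, discount(total, member))
        for total, member, expected in table
        if abs(discount(total, member) - expected) > 1e-9
    ]
```

Because the table talks about business values rather than screens or database rows, it stays stable when the UI or the data layout changes.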
Applying automated acceptance testing is a long process. It is often practical to start with a small piece, like a “smoke test” covering a few central use cases that exercise the key elements of the system and provide a first-order estimate of its quality. For example, when we worked on a system with a distributed object database, we had a smoke test that validated that changes made on one workstation were replicated to its peers on the local network and to other nodes at remote sites. This allowed us to quickly discard a lot of bad builds before spending a lot of time and effort going through the more complex test scenarios.
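Such a replication smoke test could be sketched like this; the write/read node interface is entirely hypothetical, standing in for whatever API your system exposes:

```python
import time
import uuid

def smoke_test_replication(source, peers, timeout_s=30.0):
    """First-order quality check: a change written on one workstation
    must show up on every peer before the timeout.

    The node objects are assumed to offer write(key, value) and
    read(key); that interface is invented for this sketch."""
    marker = str(uuid.uuid4())  # unique value, so stale data cannot pass
    source.write("smoke-test", marker)
    deadline = time.monotonic() + timeout_s
    pending = list(peers)
    while pending and time.monotonic() < deadline:
        pending = [p for p in pending if p.read("smoke-test") != marker]
        if pending:
            time.sleep(0.1)
    return not pending  # False means: discard this build before deeper testing
```

A few lines like these are enough to reject a bad build in seconds instead of burning hours on the full test suite.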
The key is to reduce the cycle time by not allowing ourselves to build bugs into the code: the sooner we know that the software is broken, the sooner we can fix it. Fewer bugs mean more finished software, less work-in-progress, less risk, lower costs, shorter time-to-market and higher flexibility.
All in all that is not a bad result for applying a little professional discipline. It is the starting point for agile development.
(Updated 13. Dec 2006 - improved formatting.)