
Ative at Work

Agile software development

March 2007 - Posts

  • Iterative Development Gone Wrong - The Mini-Waterfall Anti-Pattern

    One of the frequent mistakes in transitioning to agile development is to implement iterative development by doing consecutive "mini-waterfalls".

    No matter how iterative it might be, if you are relying on the "W"-word, you are still doing something wrong.

    When we postpone testing and completion to the end of the iteration we are shooting ourselves in the foot. Once testing begins we start to uncover all the mistakes and defects, and with the clock running out there is no room left to maneuver - no time to descope, rescope, or even complete some of the work. Most often the result is an iteration that ends with a number of unresolved defects and incomplete features.

    From a Lean perspective the problem is that everything becomes work-in-progress - we produce loose ends everywhere at the same time rather than completing features one by one to proper production-ready quality.

    We have seen this symptom on several projects now, and the cause seems to be that testing is not properly integrated into the process. To truly transition to agile development we need to be test-driven at the acceptance/integration testing level as well.

    This means that testing and QA should be moved to the front and become an ongoing activity over the course of the iteration - not a frantically compressed activity at the end. In fact, testing is a first-class development activity that drives the whole project.

    Even when we are aware of this it is easy to get caught on the wrong foot.

    We often see experienced testers build their test cases around complex use case scenarios. This results in "big bang" testing where steps cannot be tested individually - the test hinges on a big set of deliverables rather than evolving incrementally with the application.

    The remedy is to plan the backlog in terms of small, testable slices. Even if you are working from Use Cases, break them into smaller "user stories" that describe a simple feature (usually one or two steps in the use case). Test the user stories individually and incrementally.

    The next step is to automate the acceptance tests so we get regression testing "for free". This allows us to sustain the quality at a known, high level.
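    To make this concrete, below is a minimal sketch of what such an automated, story-level acceptance test might look like. The ShoppingCart class and the story itself are hypothetical stand-ins for a real application and backlog item; the point is that the test exercises one small slice of a use case and can run unattended as part of a regression suite (JUnit is used here for illustration):

    ```java
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Acceptance test for one small user story sliced out of a larger use case:
    // "A customer can add an item to an empty cart and see the correct total."
    public class AddItemToCartAcceptanceTest {

        // Trivial in-memory stand-in so the sketch is self-contained;
        // a real test would drive the actual system through its public API.
        static class ShoppingCart {
            private int totalCents = 0;
            private int itemCount = 0;

            void addItem(String sku, int priceCents) {
                itemCount++;
                totalCents += priceCents;
            }

            int itemCount() { return itemCount; }
            int totalCents() { return totalCents; }
        }

        @Test
        public void customerAddsSingleItemToEmptyCart() {
            ShoppingCart cart = new ShoppingCart();

            // One or two steps of the use case - nothing more.
            cart.addItem("BOOK-42", 2500);

            assertEquals(1, cart.itemCount());
            assertEquals(2500, cart.totalCents());
        }
    }
    ```

    Because each story gets its own small test, the suite grows incrementally with the application instead of hinging on a big set of deliverables, and every run doubles as a regression check.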

    With this we are on our way to developing better software faster, and even when we get bogged down we have no need for excuses. Instead of saying "well, we are sort of 80% done with 100% of the application and no, we cannot deploy anything to production", we have earned the right to say, "Well, we suffered some setbacks, but we are 100% done with the 80% most valuable features. Let's put it into production and start reaping the benefits."

© Ative Consulting ApS