Ative at Work

Agile software development

October 2006 - Posts

  • Iterative Means Baby Steps

    I am working on an agile development project in tandem with a database team using a traditional somewhat iterative waterfall approach.

    The application has some metadata that is basically a bunch of enumerations in the database.

    In order to support multiple languages in the GUI we had added a table to the development database with translations of the language-specific text for one type of metadata. This way we have a language-neutral domain model and can ask for translations of various bits of the metadata (say, for populating dropdown lists with something humanly readable).
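    The idea can be sketched in a few lines - a minimal, hypothetical lookup keyed by metadata key and language (the class and method names are illustrative, not the project's actual schema or API):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a language-neutral domain model backed by a
// translation lookup, mirroring a translation table in the database.
public class Translations {
    // (metadata key, language) -> human-readable text
    private final Map<String, String> texts = new HashMap<>();

    public void add(String key, String language, String text) {
        texts.put(key + "|" + language, text);
    }

    // The domain model only knows the neutral key; the GUI asks for a
    // translation when populating e.g. a dropdown list. Fall back to
    // the neutral key if no translation exists.
    public String translate(String key, String language) {
        return texts.getOrDefault(key + "|" + language, key);
    }
}
```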

    This step taken, we asked the database team to incorporate the translation table in the database in the future.

    Then we continued working on the steps for fulfilling the requirements involving that particular dropdown list.

    Some days later the DBAs asked us about a general approach for translating all metadata. We discussed it briefly and they set off to work.

    When they delivered the database the week after, we discovered that generalizing from that single example had been a very poor idea. Meanwhile we had uncovered more requirements with the customer, and the generalized solution fit much of the other data badly.

    In other words they had implemented the wrong thing for every piece of metadata except that first table. Waterfall at its best.

    This of course forced the database group to undo a lot of work and come up with another solution, spending valuable resources along the way. We had tried to think too far ahead, making uninformed decisions up front instead of informed, on-demand decisions. And predictably, the waterfall approach once again proved itself a loser.

    Now, on the application side no rework was needed: by focusing on solving only the specific, known problem we had produced no code to solve the unknown requirements. No code needed to be changed.

    Agile development done right is a big timesaver and it underlines the fact that iterative development is not about week- or daylong iterations, but "baby step" iterations of minutes and seconds.

    Posted Oct 27 2006, 11:45 by Martin Jul
  • Great Quote on Iterative Development

    "A complex system that works is invariably found to have evolved from a simple system that worked…."

    "A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system."

    - Jason Fried, 37signals


  • Performance and Scalability Myths

    "When I hear the word performance I reach for my gun".

    In the fuzzy front end of a project when unknowns abound people need a sense of stable footing.

    Since it usually takes a long (calendar) time to understand the requirements and the domain, discussions tend to focus on more concrete things like the application architecture, and eventually everybody becomes obsessed with performance. It is a safe harbour for “productive discussions” when the waters are full of unknown monsters like fuzzy or non-existent requirements and a vague understanding of the domain.

    So performance discussions ensue. Many times the customer wants to know the hardware requirements up front so we order heaps of multi-CPU servers, SANs, middle-tier application servers, front-end application servers, load-balancers etc.

    The next natural step is to think up an intricate object or component distribution strategy to ensure “scalability”.

    I don’t know of anyone who ever got fired for “designing a system for high performance and scalability” and building complex, buzzword compliant distributed application architectures.

    But maybe someone should be. Just to set an example.

    When I worked with the outstanding architect Bjarne Hansen he would often mutter, “Fowler, page 87”.

    He was referring to Martin Fowler’s book, Patterns of Enterprise Application Architecture. Page 87 has a section title in big bold letters: “The allure of distributed objects” – and it addresses precisely that.

    We worked on a system designed to deliver high performance. It had a set of multi-CPU servers, a fiber SAN, a separate multi-CPU application server, hand-written O/R mappers using high-performance stored procedures, and a complex distributed caching object database on top to keep all the clients in sync and take the load off the database server.

    As for performance…

    It took some 80 people around two years to build the system. I worked with Martin Gildenpfenning for a total of about two man-weeks to optimize away the bottlenecks in the application. At that point it ran faster on a developer desktop with all the components deployed on a single machine including client, server, database server and GIS system than on the high-powered multi-server deployment set-up.

    It turned out network latency was the limiting factor.

    The system had been designed to optimize a scenario that did not happen in practice.

    As Donald Knuth put it, more often than not premature optimization is the root of all evil.

    The optimization work we did was quite simple. We did it as bottlenecks became evident near the end of the development cycle. At that point we had a real working application and knew the use cases to optimize. Armed with a set of profilers the task was quite easy.

    In fact, the big lesson was that once the application and the real data are there it is easy to find the bottlenecks – and in general most performance bottlenecks can be solved in a very local manner: most of them could be removed by refactoring a single module. Mostly it was a matter of replacing an algorithm with a better (non-quadratic!) one or introducing a bit of caching, for example to avoid loading the same object more than once from the database during the same transaction.
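    The caching mentioned above can be as simple as a per-transaction identity map. This is a generic sketch, not the project's actual code - the loader interface and names are assumptions:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of a per-transaction identity map: the first lookup for an id
// hits the database loader; later lookups return the cached object.
public class IdentityMap<T> {
    private final Map<Long, T> loaded = new HashMap<>();
    private final Function<Long, T> loader; // e.g. a database call
    private int loadCount = 0;              // for illustration only

    public IdentityMap(Function<Long, T> loader) {
        this.loader = loader;
    }

    public T get(long id) {
        return loaded.computeIfAbsent(id, key -> {
            loadCount++;
            return loader.apply(key);
        });
    }

    public int getLoadCount() {
        return loadCount;
    }
}
```

    Scoping the map to the transaction keeps it trivially correct: there is no invalidation problem, because the cache dies with the transaction.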

    Discussing performance early in a project embodies all the fallacies of the waterfall project model. It does not work. It is broken. Given a good design, optimizing performance is a simple, measurement-driven task that belongs at the end of every iteration in the project.

    1) “It is easier to optimize correct code than to correct optimized code.” (Bill Harlan)

    2) Don’t pretend you can spec the hardware for a system before you even know the requirements.

    Someone said on the Ruby on Rails IRC channel:
    > “PHP5 is faster than Rails”
    < “Not if you’re a programmer”

    Google and O’Reilly’s “Hacker of the Year” award recipient for 2005, David Heinemeier Hansson, related how they built and deployed the first major Rails application on an 800 MHz Intel Celeron server running the full web-server-app-server-database application stack. It maxed out when they had about 20,000 users on the application, and at that point it was easy to scale it out.

    At the same time Rails is widely claimed to “not be scalable” by members of the J2EE “enterprise app” community. These are the same people who designed our multi-tier architecture that only maxed out the CPUs’ idle time when it was put into production.

    So here is a checklist for your next project:

    1. Get some hard, measurable targets for performance and expected transaction volume. 
    2. If performance comes up, ask for the CPU, network, I/O utilization statistics for the current system (if it exists). At least this will provide some guidance to whether or not performance will be an issue.
    3. If you are asked to design the hardware, suggest building the app on a single server with plenty of RAM and doing some measurements later. Even if performance becomes an issue, you will benefit from some additional months of Moore’s law to get a better deal on those Itanium boxes.
    4. Create a simple design with no optimizations: use an off-the-shelf O/R-mapper; resist the urge to build complex caches; keep everything in the same process; fight the urge to write those “performance enhancing” stored procedures. Figure out the common use cases and implement a spike with them. Even if the tools and the simple design introduce a few milliseconds of overhead, the man-months saved in development will pay for an extra CPU to compensate.
    5. Scale the application out, not the components. The big lesson from the big web sites is that buying a load balancer and a rack of cheap servers each running the full application is good enough. In fact, for most applications you don’t even need more than one server.
    6. Measure early, measure often.
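    Item 6 can be as low-tech as a timing wrapper around the use cases you care about - a generic sketch with nothing project-specific assumed:

```java
// Minimal sketch of measurement-driven optimization: time a use case
// before reaching for distribution, caching or stored procedures.
public class Measure {
    // Runs the use case once and returns wall-clock time in milliseconds.
    public static long millis(Runnable useCase) {
        long start = System.nanoTime();
        useCase.run();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

    Run it against realistic data at the end of each iteration; a profiler only needs to come out when a number looks wrong.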
  • ScrumMaster Certifications

    As of today everybody in Ative is a certified ScrumMaster. It has been a great course taught by Jens Østergaard and Jeff Sutherland, "the father of Scrum". We are looking forward to bringing the full Scrum arsenal to work for helping our clients create better software faster. 


  • Refactoring The Physical Workspace

    Agile development is all about stacking the deck in favour of success.

    We continuously adapt the application and process to improve it. In code, this is called refactoring - rearranging the parts to a simpler, more manageable form.

    But it does not end here.

    You should refactor your workspace, too.

    We just did that on my current project.

    Success is the result of setting the frame for efficient focused communications about the project: making sure the developers are moving in the same direction as the customer’s expectations.

    If the “information cost” is low everybody will have more information and be more likely to do the right thing. The chances that the application and the customer’s needs diverge get smaller. The success expectancy goes up.

    So just as with programmatic interfaces the people interfaces should be clean and efficient. Hence the need to put people in a physical setting where it is easy to do the right thing.

    Realising that we were not moving as quickly as we wanted to, we refactored our team office layout. Instead of sitting in two separate rooms, we moved our project manager and the customer’s project manager into the development team office, putting them next to the GUI guys. This way they can easily talk about the requirements, the acceptance tests (we are using Selenium), and the code.

    Excellent communication ensued.

    In the first two hours after moving people around the PM - who is also implementing a lot of the requirements as tests - started having much better conversations with the developers. The customer stepped in to clarify some issues. Then more developers stepped up with more questions and eventually the requirements for the central part of the application became much clearer.

    In a few hours we unveiled more mistakes and incongruities between the application and the customer’s expectations than we had in the previous two weeks.

    It results in some rework now (rather than later), and it drove home the point that communication is a key success factor. Refactoring the physical environment is therefore a first-class activity in the agile development optimization toolkit.

  • Why Acceptance Tests Matter

    While consulting on a migration project I was refactoring the way we load country reference data when I noticed some oddities in the business rules for formatting phone and fax numbers. I had changed from loading based on hard-coded country codes to using an Enum and mapping it to the correct instance in the persistence layer.

    One of the methods to be refactored was a business rule saying something like: if the country code is "DNK" (Denmark), "FRO" (Faroe Islands) or "GRL" (Greenland) then apply this kind of formatting to the telephone number.

    The specification of the rule was probably reverse engineered from the legacy code.

    The implementation was thoroughly unit tested and it was all green lights.

    But the rule was wrong. Maybe it worked one time in the legacy system when country codes were different, but definitely not any more.

    The business rule should have been specified as an acceptance test instead. 

    If that had existed we would have a rule like this:

    1. Create a Danish company in the Copenhagen municipality
    2. Enter a valid telephone number such as "80 12 34 56".
    3. Verify that it is formatted correctly with no blanks, e.g. "80123456".

    And we would have done the same for a company based in Greenland and one in the Faroe Islands.

    Then the test would quickly have discovered that the rule was broken - as it turns out, the country code is "DNK" for all Danish companies, whether they are in the south, in Greenland or in the Faroe Islands. What varies is their region specifier.

    So had we written the business rule specification from a business perspective and implemented an acceptance test for it we would have caught the error early. Instead, we had all green lights in the unit test but the application was broken.
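    Captured as code, the acceptance test might have looked something like this sketch - the method and parameter names are hypothetical, since the post does not show the project's actual API:

```java
// Hypothetical sketch of the acceptance test described above.
public class PhoneFormattingTest {
    // The correct business rule keys on the country code alone; the
    // region ("Copenhagen", "Greenland", "Faroe Islands", ...) is
    // deliberately ignored - that is exactly what the legacy-derived
    // rule got wrong.
    static String formatPhone(String countryCode, String region, String number) {
        if (countryCode.equals("DNK")) {
            return number.replace(" ", ""); // strip blanks for Danish numbers
        }
        return number; // other countries: out of scope for this sketch
    }
}
```

    Running this for all three regions with country code "DNK" would have exposed the broken "DNK"/"FRO"/"GRL" rule immediately, because the legacy rule would never have matched "FRO" or "GRL" for a Danish company.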

    While unit tests are very valuable, the importance of complete end-to-end acceptance tests cannot be overstated: they are much more likely to catch errors in the specification. In fact, the acceptance test suite is the best way to ensure that what you deliver not only works, but also does the right thing.

  • Ugly Code - Exception Handling Anti-Pattern

    A message of despair from a consultant friend:

    I am working on a project and we have to deliver "the complete product" in 1 1/2 weeks. This morning I stumbled on this "pattern" in two places in one of our applications (WinApp) - now I am too afraid to look at the rest of the code base:

    catch (System.Exception ex)
    {
        // the catch-all: every possible exception, silently swallowed
    }

    ..... sigh

© Ative Consulting ApS