Ative at Work

Agile software development

September 2006 - Posts

  • Bug-by-Bug Compatibility

    When you are porting an old system you will often be confronted with the pragmatic decision to be bug-by-bug compatible with the old system at the integration points, since it is sometimes not feasible to fix the bugs in the clients of the system.

    On the other hand, we have the principle of "Don't propagate a bad decision."

    So the proper way to do it is two-fold, separating the two concerns:

    First, we write the correct version of the new system. Then, we add a layer that is capable of introducing the bugs at the proper places. This way it is easy to locate the errors later as we upgrade the clients.

    So you might have the correct IWidgetService implemented as WidgetService. Then, we also construct a BugCompatibleWidgetService that wraps WidgetService and implements IWidgetService. This, then, is the component we register in our application.

    Thus we don't propagate the bad decision and keep a separation of concerns: one module that works and one responsible for breaking the result in a predictable manner.
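
    A minimal sketch of this in C# (the IWidgetService interface, its single method and the specific "bug" are all hypothetical, invented for illustration):

        using System;

        // Hypothetical service interface - the real one depends on the system.
        public interface IWidgetService
        {
            decimal GetPrice(int widgetId);
        }

        // The correct implementation, free of legacy quirks.
        public class WidgetService : IWidgetService
        {
            public decimal GetPrice(int widgetId)
            {
                // The correct pricing logic goes here; a constant keeps the sketch short.
                return 10.128m;
            }
        }

        // Decorator that re-introduces the legacy bugs at the boundary.
        // Every quirk lives here, documented, so it is easy to find and
        // remove once the clients have been fixed.
        public class BugCompatibleWidgetService : IWidgetService
        {
            private readonly IWidgetService _inner;

            public BugCompatibleWidgetService(IWidgetService inner)
            {
                _inner = inner;
            }

            public decimal GetPrice(int widgetId)
            {
                decimal correct = _inner.GetPrice(widgetId);
                // Assumed legacy bug, purely for illustration: the old system
                // truncated prices to two decimals instead of rounding.
                return Math.Truncate(correct * 100m) / 100m;
            }
        }

    The application then registers new BugCompatibleWidgetService(new WidgetService()) as its IWidgetService, so removing the compatibility layer later is a one-line change.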

  • Don't Port the Dead Code

    Once upon a time, when Gildenpfennig and I were porting a mainframe system to .NET, we received the specifications for one of the integration points - a connection to another mainframe that would still be running.

    So we set forth and started implementing. More and more "protocol" ("services" in modern lingo) specifications came in - they took quite some time to dig out, since the mainframe was not very well documented. In fact, the existing documentation was so wrong that they had to have one of the mainframe hackers reverse-engineer the implementation.

    Anyway, the overall project structure was quite waterfall-ish, with the customer driving the specification process and the development team consuming it.

    Needless to say, that approach did not work.

    We found out soon enough.

    Basically, they got so hooked on reverse-engineering the mainframe that they spent quite some effort writing up all the possible query responses it implemented at the integration point.

    However, when we asked, "well, uh, about this latest LU62 protocol... which query triggers it?" there was no good answer... In fact, after some searching it turned out that a lot of the protocols that existed in the legacy system were no longer in use - the queries that used them no longer existed. It was just a lot of dead code left in the mainframe.

    So we learned an important lesson again:

    When refactoring - remove code that is no longer in use.

    And don't port the dead code.

  • Don't Propagate a Bad Decision

    If you get data in a weird format, don't let that mistake propagate into your app. 

    For example, I once worked on a legacy system that received data from an external source in a CSV file. Dates in this file were split into three columns: day, month and year.

    Surprisingly, this had led the system's designer to store dates in the database not as date values, but as three columns: day, month and year. The result, of course, propagated through the whole system, so that everywhere dates were used they were three separate values instead of a single, semantically meaningful DateTime value.

    Needless to say this caused all kinds of problems around the system.

    The lesson is very clear - and particularly something you need to observe at interfaces and system boundaries:

    Don't propagate a bad decision.

    Create a model that is semantically correct according to the business domain and implement a mapping at the interface points.
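
    As a minimal sketch, the boundary mapping could look like this in C# (the three-column CSV layout and the WidgetRecordParser name are assumptions for illustration):

        using System;
        using System.Globalization;

        public static class WidgetRecordParser
        {
            // Maps the legacy three-column representation (day, month, year)
            // to an atomic DateTime right at the interface point, so the
            // quirk never leaks into the domain model.
            public static DateTime ParseDate(string day, string month, string year)
            {
                return new DateTime(
                    int.Parse(year, CultureInfo.InvariantCulture),
                    int.Parse(month, CultureInfo.InvariantCulture),
                    int.Parse(day, CultureInfo.InvariantCulture));
            }
        }

    Everything inside the system then works with the DateTime; only the parser knows about the three columns.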

  • Notes on Mainframe Migration

    Migration projects are often initiated with the objective of saving on operational cost. And due to the ridiculous price of keeping a mainframe going, the business case is often very attractive. The savings from this alone lead to the decision to do a one-to-one conversion to a lower-cost platform such as .NET. From my experience this is the first mistake.

    One-to-one conversions do not exist.

    The temptation to extend the system as it is being rewritten is simply too great, and many times there will be ample opportunity to simplify it and remove bugs as well.

    The problem in migrating mainframes is knowing what the old system does and, more importantly, why.

    Many times it will include a number of business rules that may or may not still have any relevance. They may be hacks to compensate for a specific data quality issue 15 years ago. Or they may be of critical importance even today. You will be surprised how often someone will dig out the source code of the legacy system and tell you what it is doing without being able to say why.

    Knowing the current business requirements is the central issue.

    Additionally, since mainframe systems are usually thought of as a database, there is a tendency to build the new system from the database up. Often the old system will be described in terms of its database schema, and this will be used as the foundation for the new system. This is an anti-pattern. Instead we should focus on the business requirements and design a proper architecture and a new database to fit them.

    On top of this, the quality of data in legacy systems is often somewhat lacking. Therefore the ability to migrate data from the legacy system to the new system is paramount. If you have any DBAs on your team you will be tempted to do the database migration by writing to the database directly, without going through the new application. It is important to resist this urge, since it either gives you invalid data if the data quality is low, or forces you to implement the business-rule validation in both the data migration code and the mainline application, leading to duplicated effort and the possibility of inconsistencies.

    Data migration should be done through the new object model. And it should evolve concurrently with the rest of the application.
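
    A sketch of what this might look like in C#, with hypothetical LegacyCustomerRow, Customer and ICustomerRepository types standing in for the real model:

        using System;

        // Hypothetical legacy row, for illustration only.
        public class LegacyCustomerRow
        {
            public string Name;
            public int Day, Month, Year;
        }

        public class Customer
        {
            public string Name { get; }
            public DateTime RegisteredOn { get; }

            public Customer(string name, DateTime registeredOn)
            {
                // The business rules are enforced once, here, for live
                // data and migrated data alike.
                if (string.IsNullOrEmpty(name))
                    throw new ArgumentException("A customer must have a name.");
                Name = name;
                RegisteredOn = registeredOn;
            }
        }

        public interface ICustomerRepository
        {
            void Save(Customer customer);
        }

        // The migration goes through the domain model, not straight to the
        // database, so low-quality legacy data fails loudly instead of
        // sneaking into the new system.
        public class CustomerMigration
        {
            private readonly ICustomerRepository _repository;

            public CustomerMigration(ICustomerRepository repository)
            {
                _repository = repository;
            }

            public void Migrate(LegacyCustomerRow row)
            {
                var customer = new Customer(
                    row.Name,
                    new DateTime(row.Year, row.Month, row.Day));
                _repository.Save(customer);
            }
        }

    A direct SQL load would bypass the constructor's checks; going through the model means the migration and the application can never disagree about what valid data is.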

    To sum it up: when you do migration projects, remember these points:

    • DO allocate plenty of resources to uncover the requirements.
    • DO allocate some effort to business process reengineering to simplify things.
    • DO build it iteratively - you will be surprised by how much effort goes into figuring out the requirements, so it is better to learn this quickly than to miss a huge deliverable much later. By learning early you have more time to adjust scope and schedule.
    • DO design the system top-down, from the business perspective, not from the database up.
    • DO include the data migration effort in the main project and deliver it incrementally.
  • Agility is...

    "Agility is about discovering how wrong you are as quickly as possible"

    David Churchville in "3 ways to meet your software project deadline"

    Posted Sep 11 2006, 02:12 by Martin Jul
  • Managing Successful Software Projects

    Successful software development is all about managing the development process.

    Many projects fail or linger in the "80% completed" phase for years on end.

    The root cause is a tradition of measuring status by the wrong metrics.

    "The requirements are complete", "The design phase is 50%" completed and other statements like this tell nothing about how much business value have been delivered. They merely confirm that the bureaucracy is keeping itself busy.

    Despite this, many software projects measure their progress in terms of such abstract milestones. The result is predictable and catastrophic.

    First of all, since the status reports tell nothing about the distance to delivering a software system that provides actual business value, they are the cause of a lot of risk.

    What they do is rob the project sponsors of the ability to manage the project. Since they measure not what has been delivered but merely the bureaucratic artifacts produced, they usually paint a rosy picture until very late in the project, when it becomes impossible to hide the fact that the project is suffering.

    At this point, however, management has no time and budget left to react. In effect, they have been robbed of their tools of control and forced into a position where they can only accept budget and timeframe overruns or cancel the project.

    Therefore, this kind of status reporting should be considered sabotage.

    Unfortunately most software projects suffer from too many saboteurs - not out of ill will, but because the traditional best practices are nothing but sabotage.

    There is a way out, however:

    Measure status in terms of actual, delivered, tested, operational business value.

    This is the key metric for success. It is hard to game, since it only measures the actual results, not bureaucratic artifacts from the production process. And since it is aligned with the business, the risk exposure of the project is dramatically diminished.

    The production process itself has to be aligned with this goal. Therefore we produce software in short iterations - 2-week cycles of specification, production and testing that deliver actual running code into the production environment.

    At the end of each iteration the result is in a known good state: the automated acceptance test confirms that the system is sound and ready to deploy to production.
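
    As a minimal illustration, such an automated acceptance test might look like this in C# with NUnit (the OrderService and its in-memory behavior are invented for the sketch):

        using System.Collections.Generic;
        using NUnit.Framework;

        // Hypothetical system under test, reduced to an in-memory stub
        // so the sketch compiles on its own.
        public class OrderService
        {
            private readonly Dictionary<int, int> _reserved = new Dictionary<int, int>();

            public void PlaceOrder(int customerId, int widgetId, int quantity)
            {
                _reserved.TryGetValue(widgetId, out int current);
                _reserved[widgetId] = current + quantity;
            }

            public int ReservedStock(int widgetId)
            {
                _reserved.TryGetValue(widgetId, out int reserved);
                return reserved;
            }
        }

        [TestFixture]
        public class OrderAcceptanceTests
        {
            // Exercises a business scenario end to end, so "done" means
            // verified, working behavior rather than a completed document.
            [Test]
            public void Placing_an_order_reserves_the_stock()
            {
                var service = new OrderService();

                service.PlaceOrder(customerId: 42, widgetId: 7, quantity: 3);

                Assert.That(service.ReservedStock(widgetId: 7), Is.EqualTo(3));
            }
        }

    The point is the metric, not the stub: a feature counts as delivered only when a test like this passes against the real system.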

    Doing this requires a disciplined team, but the payoff is immediate: reliable status, early warning of deviations from the original plans, and the ability to react with due diligence.

    When we founded Ative we pledged to put an end to working stupidly. We pledged to do our best to create better software, faster.

    If you would like to join us give us a shout.

© Ative Consulting ApS