
Ative at Work

Agile software development

September 2007 - Posts

  • JAOO 2007 - Uncle Bob, Agile on the Mainframe, Jazz and Intentional Software

    Robert C. Martin gave the opening keynote, titled “Clean Code II – Craftsmanship and Ethics”.

    As always, Uncle Bob is an inspiring speaker, and the keynote was a good kick-start to the conference – a reminder of why we are here. Some of his well-known points to focus on and improve:
    • Discipline
    • Short iterations
    • Don’t wait for definition: The customer doesn’t know what they want anyway. The team should participate in the definition of requirements.
    • Abstract away volatility: Don’t place code together that changes for different reasons. E.g. don’t mix GUI with business logic.
    • Commitment > Omitment: Help find a solution to the problems instead of just blaming someone else.
    • Decouple from others: Build stubs and simulators for your external dependencies.
    • Never be blocked: This is the prime directive. If you can’t get a QA environment when you need it, use a dev machine.
    • Avoid turgid, viscous architecture: There is no single perfect architecture. Ivory tower architects are useless - architects must write code.
    • Incremental improvement: If you have bad code, improve your code incrementally. Every time you work on a part of the code you should improve its quality, gradually improving the entire system.
    • No grand redesign: Related to the above point. Grand redesigns always fail. And where are the requirements for the redesign? In the current system, which will be a moving target and the redesign will never catch up.
    • Don’t write bad code: The only way to go fast is to go well. As a development team our product is code. If the code is a mess, the product is a mess. Focus on code quality: you can change the development team, the management or the process – but bad code stays.
    • Clean code: Clean code is code with no surprises. Keep functions small (around 20 lines).
    • TDD
    • QA should find nothing:  Probably they will, but this should be the mindset.
    • 100% code coverage
    • Avoid debugging
    • Manual test scripts are immoral
    • Definition of done: Don’t have more than one definition of done; only done counts.
    • Test through the right interface
    • Apprenticeships
    • Use good tools

    Conclusion: as always – he is so right.
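    The “decouple from others” and “never be blocked” points lend themselves to a code sketch. Here is a minimal Python illustration under assumed names (PaymentGateway, checkout and everything else are hypothetical, not from the talk): the team codes against an interface it owns and substitutes a stub until the real external service is available.

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """The external dependency, expressed as an interface the team owns."""
    @abstractmethod
    def charge(self, account_id: str, amount: float) -> bool: ...

class StubPaymentGateway(PaymentGateway):
    """Stub: records calls and returns a canned answer, so the team is
    never blocked waiting for the real payment service."""
    def __init__(self, succeed: bool = True):
        self.succeed = succeed
        self.calls = []

    def charge(self, account_id: str, amount: float) -> bool:
        self.calls.append((account_id, amount))
        return self.succeed

def checkout(gateway: PaymentGateway, account_id: str, amount: float) -> str:
    """Business logic depends only on the interface, never the real system."""
    return "paid" if gateway.charge(account_id, amount) else "declined"

stub = StubPaymentGateway(succeed=True)
print(checkout(stub, "ACC-1", 99.0))  # -> paid
print(stub.calls)                     # -> [('ACC-1', 99.0)]
```

    When the real gateway arrives, it implements the same interface and the business logic is unchanged.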
     Experiences with Agility in the Mainframe Environment
    Dr. Jan Mütter, LSD NRW.

    Dr. Jan Mütter, from a German government mainframe project, told about his experiences with agile methods. For the first year they worked the “traditional way”: the customer (the government) sent specifications to the development team, and the team checked them. If a specification had errors, or if the team was held responsible for any issues in it, the specification was sent back to the customer. After one year and almost no progress, they decided to change the process:
    • Customer co-location, the whole team in one building.
      Findings: Remarkable improvement in team communication. Extensive rework on nearly all results (code and documents).
    • Planning game, weekly status meetings and adjustments. Detailed tracking of work packages.
      Findings: low lead time from requirement to implementation.
    • Weekly build and deployment.
      Findings: problems with configuration and release management.
    • Weekly test, full automation on all tests.
      Findings: Lot of rework because of requirement changes. Test scripting very costly.
    • Release and configuration management.
      Findings: necessary but very costly.
    • Refactoring
    • Change Request management
      Findings: Lot of rework. No differentiation between work package, change and defect. Large positive effect on culture and discipline.
    • Project organization and communication. Software design team makes FQA in relation to requirement analysis.
    • Pair programming. The developers were encouraged but not forced to pair program.
      Finding: Low acceptance.
    • Common goal for the team and the customer (working together).
    They used VisualAge as the development tool on the mainframe, along with its built-in ITF test tool. This, combined with the fact that they didn’t have a lot of “old” mainframe developers, made it possible to do weekly builds and automated tests.

    The conclusion: The environment (a government company with a government customer) was a big challenge. E.g. the “service” organization (system operation) did not share the team’s goals; it took one year to install and configure WebSphere on the mainframe. The transition gave the project a major improvement, but there are still a lot of things to improve.

    Developing Software like a band plays Jazz - From Eclipse to Jazz
    Erich Gamma, IBM.

    Erich Gamma is a Distinguished Engineer at IBM Rational Software's Zurich lab. He is one of the leaders of the Jazz project, was the original lead of the Eclipse Java development environment, and is on the Project Management Committee for the Eclipse project. Erich gave an introduction to Jazz and used the Eclipse project as reference (“the Eclipse way”). The Eclipse way (in short):
    • 6 week iterations.
    • Milestones first (time boxed). Make milestone results visible.
    • End game: Tight schedule. Process rules have high priority, bug rate and velocity rates go down.
    • Team first. Focus on the team. Small teams across different time zones (8 locations). Each team has its own plan. Many roles in the team -> everyone can “step in”.
    The Jazz product gave the team a lot of information about the project (current status etc.) and can ease many tasks. E.g. it can auto-generate a server environment as a copy of the last build (very useful if/when the build fails).

    Conclusion: Jazz has some interesting features, but it is targeting big organizations (like IBM) or control freaks. If you use the full feature set, it will not necessarily give you a more agile project team. A more lightweight product and process might give you a better result. 

    "Intentional Software - Democratizing Software Creation".
    Charles Simonyi, Intentional Software Corporation. Henk Kolk, VP and CTO, Capgemini

    Charles gave an introduction to how they try to accelerate innovation by integrating the business domain experts into the software production process. They used a case from Capgemini (Netherlands) where they used the Intentional Software tools to create pension systems.

    Their approach is to ask the business domain experts how they describe a new pension product. In this case they used a large Excel spreadsheet to describe the business rules. Capgemini made a customized view/editor for the customer that looked like the spreadsheet they already knew.

    Before this, the customer needed 5 very skilled developers and 3 years to create a new pension product. With the new approach (combined with the Intentional Software tools) it took one person 3 months to create a new pension product. They also converted approx. 40 products within 6 months.

    Charles used a CAD application as a metaphor: the designer draws a circle, which can be a cylinder or a ball. If you change the view you can see how the object looks. The designer doesn’t care about how it is stored in the database. The base system (the CAD system) can be used in many different types of design projects; it is not customized for one specific customer or project. It holds common information and properties for “objects” regardless of their usage.

    Conclusion: This session traced a common thread from the 1970s, when Charles created the first WYSIWYG editor, through his time at Microsoft, where he was the father of Excel and Word.

    The session opened our eyes for a “new world” of software development. The question is when will it be available to the general market, and not just selected partners?

  • The TDD Controversy - JAOO 2007

    TDD will ruin your architecture! When Jim Coplien said this in his presentation about Scrum and Architecture at the JAOO 2007 conference he really got people's attention. His statement sparked one of the most passionate debates at the conference.

    On Roy Osherove's initiative we did an open-space session discussing the topic.

    The controversy seemed to be related a lot to what TDD really means. It became quite clear that TDD means different things to different people.

    Starting with testing we did reach consensus on some key issues, however:

    • Automated testing is a key enabler for agile development.
    • Test-first is the most effective approach for this.
    • Test-first on the micro-level such as writing a "SaveModifiedInstanceShouldUpdateAuditTrail" test for a Save method seemed to be acknowledged as a Good Thing. Micro-level here means developer testing on a sufficiently low level of integration. There was no consensus on whether to call this unit testing or subsystem/component testing.
    • Testing only the return values of methods has limited benefit, and many of the people attending did not write tests for simple state like property getters and setters. Also, Jim mentioned some studies showing that only a small percentage of actual bugs are related to state.
    • Testing the behaviour - especially the interactions between objects - is where the real value is at. This is one of the reasons that mocking frameworks are so popular.
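    The micro-level, interaction-style test named above can be sketched in code. This is a hedged Python illustration using unittest.mock; the Document class, the repository and the audit trail are hypothetical stand-ins for the collaborators such a Save method would have, not anything discussed at the session.

```python
from unittest.mock import Mock

class Document:
    """A hypothetical persistable object with a modified flag."""
    def __init__(self, doc_id):
        self.id = doc_id
        self.modified = False

def save(document, repository, audit_trail):
    """Save a document; modified instances must leave an audit record."""
    repository.store(document)
    if document.modified:
        audit_trail.record("update", document.id)

# SaveModifiedInstanceShouldUpdateAuditTrail: verify the interactions,
# not just a return value - this is where mocking frameworks shine.
doc = Document("doc-42")
doc.modified = True
repo, audit = Mock(), Mock()
save(doc, repo, audit)
repo.store.assert_called_once_with(doc)
audit.record.assert_called_once_with("update", "doc-42")
print("SaveModifiedInstanceShouldUpdateAuditTrail passed")
```

    The asserts check the behaviour (which collaborators were called, and how) rather than state, matching the consensus points above.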

    There was no clear consensus on whether or not to use a test-first approach for testing the application from a business perspective, i.e. testing it on the macro level. One argument was that you need to have a little bit of the application in place before it makes sense to integrate it, so it would not be truly test-first. However, no one questioned the value of automated testing on the macro level (see the post on why acceptance tests matter), and we all recommend doing it at the earliest possible moment.

    The micro-level testing mentioned above is what I usually call unit testing (although it may not be what unit testing originally meant). It involves a class under test in a controlled environment. This means that we have mocked the interesting classes that it interacts with, but full domain classes may be used as well. For example, if we are testing a mortgage calculator that takes a house and a loan profile as inputs and interacts with a pricing service, I would usually recommend creating a semantically valid, complete house with the real domain classes (from a factory - the ObjectMother pattern). I would explicitly create the loan profile (amount, duration, interest rate etc.) in the test since it contains the "interesting" variables related to the calculation, and I would mock out the service that provides the current ticker price for loans by interest rate that the calculator uses. The point is that even though I call it a unit test, we are at a low but non-trivial level of integration.
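    The mortgage-calculator test described above might look like the sketch below. All domain names (House, LoanProfile, MortgageCalculator, ObjectMother) and the pricing rule are hypothetical; the point is the level of integration: real domain objects for the house, explicit "interesting" loan values in the test, and a mocked pricing service.

```python
from dataclasses import dataclass
from unittest.mock import Mock

@dataclass
class House:
    address: str
    value: float

@dataclass
class LoanProfile:
    amount: float
    years: int
    interest_rate: float

class ObjectMother:
    """Factory producing semantically valid, complete domain objects."""
    @staticmethod
    def valid_house() -> House:
        return House(address="1 Main St", value=500_000.0)

class MortgageCalculator:
    def __init__(self, pricing_service):
        self.pricing_service = pricing_service

    def monthly_payment(self, house: House, loan: LoanProfile) -> float:
        # Hypothetical rule: ticker price per 1000 borrowed, spread over the term.
        price = self.pricing_service.ticker_price(loan.interest_rate)
        return loan.amount / 1000.0 * price / (loan.years * 12)

# The test: real house from the ObjectMother, explicit loan values,
# mocked pricing service returning a canned ticker price.
pricing = Mock()
pricing.ticker_price.return_value = 1200.0
calc = MortgageCalculator(pricing)
payment = calc.monthly_payment(
    ObjectMother.valid_house(),
    LoanProfile(amount=300_000, years=25, interest_rate=0.05))
pricing.ticker_price.assert_called_once_with(0.05)
print(round(payment, 2))  # -> 1200.0
```

    Only the pricing service is mocked; the house and loan profile are real domain objects, which is what makes this a low but non-trivial level of integration.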

    Learning to be good at this takes some time since it is easy to write bad tests - in fact, Roy is currently writing a book on this called The Art of Unit Testing to help beginners climb this curve. Jim mentioned Behaviour Driven Development several times as an example of how to do this. From my personal perspective BDD looks like an elegant expression of where you can go with micro-level testing and experience.

    Now, what about TDD then? Jim quoted some studies of student programmers showing that TDD done strictly from the YAGNI principle leads to an architectural meltdown around iteration three. One of the examples he mentioned was that of a bank account where TDD would lead to modelling the account as the balance (possibly with an audit trail) whereas a domain analysis would lead to modelling it as a collection of transactions with the balance being a function of these transactions.
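    The domain-analysis model from Jim's example can be sketched briefly. This is my illustration, not his code: the account is a collection of transactions, and the balance is a derived function of them (so the audit trail comes for free).

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    amount: float  # positive = deposit, negative = withdrawal

@dataclass
class Account:
    transactions: list = field(default_factory=list)

    def deposit(self, amount: float):
        self.transactions.append(Transaction(amount))

    def withdraw(self, amount: float):
        self.transactions.append(Transaction(-amount))

    @property
    def balance(self) -> float:
        # The balance is computed from the transactions, not stored as
        # state; the transaction list doubles as the audit trail.
        return sum(t.amount for t in self.transactions)

acct = Account()
acct.deposit(100.0)
acct.withdraw(30.0)
print(acct.balance)            # -> 70.0
print(len(acct.transactions))  # -> 2
```

    A strictly YAGNI-driven TDD session might instead have produced a single stored balance field, which is exactly the contrast Jim was drawing.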

    However, let's also note that it is entirely possible to do bad architecture without TDD. From my perspective it is also very hard to do no architecture even with TDD if you have experienced people on the team. The reason is that they will know sound patterns and structural concepts that they will use, even unconsciously. For example, Jimmy Nilsson wrote a book to define a default architecture for .NET business applications, and Ruby on Rails provides a common architecture for web applications. Using these appropriately help keep your large-scale structures sound. This is one of the reasons why alpha architects working in known territory can avoid architectural meltdown in the large even with TDD. You can still make mistakes in the domain-specific modelleing. In fact I don't recall a single sizable project, big upfront design or not, where we did not learn something that would guide us to a better architecture if we had to do it again.

    But, alas, as someone said at the conference: Good judgement comes from experience - and experience comes from bad judgement. The goal is not necessarily to avoid failure entirely but rather to make it short-lived and inexpensive to correct. We are aiming for cost-efficiency rather than paralysis.

    I will leave the controversy at this. As you can see there is a consensus about most of the issues related to test automation and test first. If that's your definition of TDD then by all means, please keep doing it!

  • JAOO 2007

    We're off to JAOO 2007 next week.
    We are bringing some planning poker cards and some "I O Scrum" t-shirts; please give us a hint if you are interested.

    Every day we will post a summary of the sessions of the day.

  • Bye, bye, "Done, but" - The Flat Organisation and Enterprise Scrum

    Lean organisations tend to be very flat with few layers of management. They rely on self-organisation and leadership. In this context, then, what is the role of the middle layers in the organisation? This is one of the points Ken Schwaber addresses in "The Enterprise and Scrum". The answer is amazingly simple.

    The middle layers are Scrum teams that prioritize the work for the subteams and integrate the results.

    In other words, they prepare the product backlog for the subordinate teams and provide leadership, coordination, infrastructure and support for the teams below. This means that the integration teams are complete, cross-functional Scrum teams with full delivery capability. Compared to many traditional organisations this means that release engineering activities, QA, etc. move up in the organisational hierarchy, making the downstream steps in the value stream the integrators and bosses of the earlier steps. We are, in effect, creating a kanban system for the whole development process. Through this we create flow and remedy the common failure of deferring integration work to the end, only to learn too late that the parts do not fit well together.

    This organisation of Scrum teams builds on lean principles such as frequent integration and "jidoka" - stop-the-line bug fixing. Every team is responsible for integrating its own work and that of its subordinates. If a subteam delivers something that is not good enough, the work is pushed back down and not accepted. This way, we make sure the details are always correct and shippable. As a result, the whole product remains integrated and shippable.

    The ScrumMaster on the integration team facilitates the removal of impediments that cannot be removed by the front-line teams themselves. This creates a system for multi-layered kaizen where change is always initiated by the people who experience the problems and escalated to the proper level of authority. Through this the integration teams switch from command-and-control to serving the teams below.

    One of the benefits of this structure is that the Product Owner on the integration team defines the product backlog for the subteams based on the coarser-grained backlog and priorities from the next level up. In this way the integration team and its subteams are treated as a virtual, cross-functional team. Since we are not allowed to produce loose ends, it forces us to start only activities that can be completed in a sprint with the necessary coordination between subteams. We have to begin with the end in mind.

    For example, if a backlog item for adding a new report to a reporting application is selected, we have to make sure to request the whole feature - both the new GUI (a pretty diagram, maybe) and the necessary updates to the back-end server application to provide the data. If there are separate teams working on this, the feature is split across different team backlogs. In many cases this requires coordination between multiple teams when planning. The key point is to never commit anything to the integration sprint backlog unless all aspects of it are committed to the sprint backlogs of the subordinate teams.

    If this is not possible we have to break it down into smaller pieces or leave it in the product backlog. We never accept work that cannot be completed and demonstrated at the end of the iteration.
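    The commitment rule above can be expressed as a simple check. This is a minimal sketch with entirely hypothetical names: an integration backlog item qualifies for the sprint only when every aspect of it is committed to the owning subteam's sprint backlog.

```python
def can_commit(feature_aspects: dict, subteam_sprint_backlogs: dict) -> bool:
    """feature_aspects maps each aspect of the feature to the team that
    owns it; the item may enter the integration sprint backlog only if
    every aspect is in the owning subteam's sprint backlog."""
    return all(aspect in subteam_sprint_backlogs.get(team, set())
               for aspect, team in feature_aspects.items())

# The new-report example: GUI and back-end data feed owned by two teams.
report = {"report GUI": "frontend team", "report data feed": "backend team"}
backlogs = {"frontend team": {"report GUI"},
            "backend team": set()}       # data feed not committed yet
print(can_commit(report, backlogs))      # -> False: leave it in the product backlog

backlogs["backend team"].add("report data feed")
print(can_commit(report, backlogs))      # -> True: all aspects committed
```

    The first check fails, which in the process above means breaking the item down or leaving it in the product backlog rather than starting partial work.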

    When we implement Scrum in organisations, this is something that is usually perceived as quite frustrating in the beginning. Many people have a strong habit of delivering partial work. However, by only committing completable work we eliminate one of the great time-consuming activities in project management: managing dependencies between projects, trying to get features done in isolated, non-integrated layers, and the coordination and "expediting" work of lobbying different teams to change their plans to get it all correct and integrated on time.

    The end result is a system where business goals and priorities flow from the top down and demonstrable business results flow from the bottom up. Every iteration ends with completely integrated realisations of the top-level backlog items. It means goodbye to, "we're done with the new report, but we have to wait three months for the database team to expose the data for us", and hello to, "here's your new report, ready for use". That's a pretty good return on eliminating bureaucracy.

© Ative Consulting ApS