Ative at Work

Agile software development

  • JAOO 2007 - Uncle Bob, Agile on the Mainframe, Jazz and Intentional Software

    Robert C. Martin did the opening keynote with the title “Clean Code II – Craftsmanship and Ethics”.

    As always, Uncle Bob is an inspiring speaker and the keynote was a good kick-start for the conference – a reminder of why we are here. Some of his well-known issues to focus on and improve:
    • Discipline
    • Short iterations
    • Don’t wait for definition: The customer doesn’t know what they want anyway. The team should participate in the definition of requirements.
    • Abstract away volatility: Don’t place code together that changes for different reasons. E.g. don’t mix GUI with business logic.
    • Commitment > Omitment: Help find a solution to the problems instead of just blaming someone else.
    • Decouple from others: Build stubs and simulators for your external dependencies.
    • Never be blocked: This is the prime directive. If you can’t get a QA environment when you need it, use a dev machine.
    • Avoid turgid, viscous architecture: There is no single perfect architecture. Ivory tower architects are useless – architects must write code.
    • Incremental improvement: If you have bad code, improve your code incrementally. Every time you work on a part of the code you should improve its quality, gradually improving the entire system.
    • No grand redesign: Related to the above point. Grand redesigns always fail. And where are the requirements for the redesign? In the current system, which will be a moving target and the redesign will never catch up.
    • Don’t write bad code. The only way to go fast is to go well. As a development team our product is code. If the code is a mess the product is a mess. Focus on code quality, you can change the development team, management or the process – but bad code stays.
    • Clean code: clean code is code with no surprises.  Make small functions (20 lines).
    • TDD
    • QA should find nothing:  Probably they will, but this should be the mindset.
    • 100% code coverage
    • Avoid debugging
    • Manual test scripts are immoral
    • Definition of done: Have only one definition of done – only done counts.
    • Test through the right interface
    • Apprenticeships
    • Use good tools

    Conclusion: as always – he is so right.
     Experiences with Agility in the Mainframe Environment
    Dr. Jan Mütter, LSD NRW.

    Dr. Jan Mütter from a German government mainframe project told about his experiences with agile methods. The first year they worked the “traditional way”: the customer (the government) sent specifications to the development team, the team checked them, and if a specification contained errors or open issues it was sent back to the customer. After one year and almost no progress, they decided to change the process:
    • Customer co-location, the whole team in one building.
      Findings: Remarkable improvement in team communication. Extensive rework on nearly all results (code and documents).
    • Planning game, weekly status meetings and adjustments. Detailed tracking of work packages.
      Findings: low lead time from requirement to implementation.
    • Weekly build and deployment.
      Findings: problems with configuration and release management.
    • Weekly test, full automation on all tests.
      Findings: Lot of rework because of requirement changes. Test scripting very costly.
    • Release and configuration management.
      Findings: necessary but very costly.
    • Refactoring
    • Change Request management
      Findings: Lot of rework. No differentiation between work package, change and defect. Large positive effect on culture and discipline.
    • Project organization and communication. The software design team maintains an FAQ in relation to requirement analysis.
    • Pair programming. The developers were encouraged but not forced to pair program.
      Finding: Low acceptance.
    • Common goal for the team and the customer (working together).
    They used VisualAge as the development tool on the mainframe together with its built-in ITF test tool. This, combined with the fact that they didn’t have a lot of “old” mainframe developers, made it possible to do weekly builds and automated tests.

    The conclusion: The environment (government company and government customer) was a big challenge. E.g. the “service” organization (system operation) did not have the same goals as the team – it took one year to install and configure WebSphere on the mainframe. The transition gave the project a major improvement, but there are still a lot of things to improve.

    Developing Software like a band plays Jazz - From Eclipse to Jazz
    Erich Gamma, IBM.

    Erich Gamma is a Distinguished Engineer at IBM Rational Software's Zurich lab. He is one of the leaders of the Jazz project, was the original lead of the Eclipse Java development environment and is on the Project Management Committee for the Eclipse project.

    Erich gave an introduction to Jazz and used the Eclipse project as reference (“the Eclipse way”). The Eclipse way (in short):
    • 6 week iterations.
    • Milestones first (time boxed). Make milestone results visible.
    • End game: Tight schedule. Process rules have high priority, bug rate and velocity rates go down.
    • Team first. Focus on the team. Small teams. Different time zones (8 locations). Each team has their own plan. Many roles in the team -> everyone can “step in”.
    The Jazz product gave the team a lot of information about the project (current status etc.) and can ease many tasks. E.g. it can auto-generate a server environment as a copy of the last build (very useful if/when the build fails).

    Conclusion: Jazz has some interesting features, but it is targeting big organizations (like IBM) or control freaks. If you use the full feature set, it will not necessarily give you a more agile project team. A more lightweight product and process might give you a better result. 

    "Intentional Software - Democratizing Software Creation".
    Charles Simonyi, Intentional Software Corporation. Henk Kolk, VP and CTO, Capgemini

    Charles gave an introduction to how they try to accelerate innovation by integrating the business domain experts into the software production process. They used a case from Capgemini (Netherlands) where they used the Intentional Software tools to create pension systems.

    Their approach is to ask the business domain experts how they describe a new pension product. In this case they use a large Excel spreadsheet to describe the business rules. Capgemini made a customized view/ editor to the customer that looked like the spreadsheet they know.

    Before this the customer used 5 very skilled developers and 3 years to create a new pension product. With this new approach (combined with the Intentional Software tools) it took one person 3 months to create a new pension product. They also converted approx. 40 products within 6 months.

    Charles used a CAD application as a metaphor: the designer draws a circle; it can be a cylinder or a ball. If you change the view you can see how the object looks. The designer doesn’t care about how it is stored in the database. The base system (the CAD system) can be used in many different types of design projects; it is not customized for one specific customer or project. It holds common information and properties for “objects” regardless of their usage.

    Conclusion: This session showed a common thread from the 1970’s, when Charles created the first WYSIWYG editor, through his time at Microsoft where he was the father of Excel and Word.

    The session opened our eyes for a “new world” of software development. The question is when will it be available to the general market, and not just selected partners?

  • The TDD Controversy - JAOO 2007

    TDD will ruin your architecture! When Jim Coplien said this in his presentation about Scrum and Architecture at the JAOO 2007 conference he really got people's attention. His statement sparked one of the most passionate debates at the conference.

    On Roy Osherove's initiative we did an open-space session discussing the topic.

    The controversy seemed to be related a lot to what TDD really means. It became quite clear that TDD means different things to different people.

    Starting with testing we did reach consensus on some key issues, however:

    • Automated testing is a key enabler for agile development.
    • Test-first is the most effective approach for this.
    • Test-first on the micro-level such as writing a "SaveModifiedInstanceShouldUpdateAuditTrail" test for a Save method seemed to be acknowledged as a Good Thing. Micro-level here means developer testing on a sufficiently low level of integration. There was no consensus on whether to call this unit testing or subsystem/component testing.
    • Testing only the return values of methods has limited benefit, and many of the people attending did not write tests for simple state like property getters and setters. Also, Jim mentioned some studies showing that only a small percentage of actual bugs are related to state.
    • Testing the behaviour - especially the interactions between objects - is where the real value is at. This is one of the reasons that mocking frameworks are so popular.

    There was no clear consensus on whether or not to use a test-first approach for testing the application from a business perspective, i.e. testing it on the macro level. One argument was that you need to have a little bit of the application in place before it makes sense to integrate it, so it would not be truly test-first. However, no-one questioned the value of automated testing on the macro level (see the post on why acceptance tests matter) and we all recommend doing it at the earliest possible moment.

    The micro-level testing mentioned above is what I usually call unit testing (although it may not be what unit testing originally meant). It involves a class under test in a controlled environment. This means that we have mocked the interesting classes that it interacts with, but full domain classes may be used as well. For example, if we are testing a mortgage calculator that takes a house and a loan profile as inputs and interacts with a pricing service, I would usually recommend creating a semantically valid, complete house with the real domain classes (from a factory - the ObjectMother pattern). I would explicitly create the loan profile (amount, duration, interest rate etc.) in the test since it contains the "interesting" variables related to the calculation, and I would mock out the service that provides the current ticker price for loans by interest rate that the calculator uses. The point is that even though I call it a unit test we are at a low but non-trivial level of integration.

    Learning to be good at this takes some time since it is easy to write bad tests - in fact, Roy is currently writing a book on this called The Art of Unit Testing to help beginners climb this curve. Jim mentioned Behaviour Driven Development several times as an example of how to do this. From my personal perspective BDD looks like an elegant expression of where you can go with micro-level testing and experience.

    Now, what about TDD then? Jim quoted some studies of student programmers showing that TDD done strictly from the YAGNI principle leads to an architectural meltdown around iteration three. One of the examples he mentioned was that of a bank account where TDD would lead to modelling the account as the balance (possibly with an audit trail) whereas a domain analysis would lead to modelling it as a collection of transactions with the balance being a function of these transactions.

    However, let's also note that it is entirely possible to do bad architecture without TDD. From my perspective it is also very hard to do no architecture even with TDD if you have experienced people on the team. The reason is that they will know sound patterns and structural concepts that they will use, even unconsciously. For example, Jimmy Nilsson wrote a book to define a default architecture for .NET business applications, and Ruby on Rails provides a common architecture for web applications. Using these appropriately helps keep your large-scale structures sound. This is one of the reasons why alpha architects working in known territory can avoid architectural meltdown in the large even with TDD. You can still make mistakes in the domain-specific modelling. In fact I don't recall a single sizable project, big upfront design or not, where we did not learn something that would guide us to a better architecture if we had to do it again.

    But, alas, as someone said at the conference: Good judgement comes from experience - and experience comes from bad judgement. The goal is not necessarily to avoid failure entirely but rather to make it short-lived and inexpensive to correct. We are aiming for cost-efficiency rather than paralysis.

    I will leave the controversy at this. As you can see there is a consensus about most of the issues related to test automation and test first. If that's your definition of TDD then by all means, please keep doing it!

  • JAOO 2007

    We're off to JAOO 2007 next week.
    We are bringing some poker planning cards and some "I O Scrum" t-shirts, please give us a hint if you are interested.

    Every day we will post a summary from the sessions of the day.

  • Bye, bye, "Done, but" - The Flat Organisation and Enterprise Scrum

    Lean organisations tend to be very flat with few layers of management. They rely on self-organisation and leadership. In this context, then, what is the role of the middle layers in the organisation? This is one of the points Ken Schwaber addresses in "The Enterprise and Scrum". The answer is amazingly simple.

    The middle layers are Scrum teams that prioritize the work for the subteams and integrate the results.

    In other words they prepare the product backlog for the subordinate teams and provide leadership, coordination, infrastructure and support for the teams below. This means that the integration teams are complete, cross-functional Scrum teams with full delivery capability. Compared to many traditional organisations this means that release engineering activities, QA, etc. move up in the organisational hierarchy, making the downstream steps in the value stream the integrators and bosses of the earlier steps. We are, in effect, creating a kanban system for the whole development process. Through this we create flow and remedy the common failure of deferring integration work to the end only to learn too late that the parts do not fit well together.

    This organisation of Scrum teams builds on lean principles such as frequent integration and "jidoka" - stop-the-line bug fixing. Every team is responsible for integrating its own work and that of its subordinates. If a subteam delivers something that is not good enough, the work is pushed back down and not accepted. This way, we make sure the details are always correct and shippable. As a result, the whole product remains integrated and shippable.

    The ScrumMaster on the integration team facilitates the removal of impediments that cannot be removed by the front-line teams themselves. This creates a system for multi-layered kaizen where the change is always initiated by the people who experience the problems and escalated to the proper level of authority. Through this the integration teams switch from command-and-control to serving the teams below.

    One of the benefits from this structure is that the Product Owner on the integration team defines the product backlog for the subteams based on the coarser-grained backlog and priorities from the next level up. In this way the integration team and its subteams are treated as a virtual, cross-functional team. Since we are not allowed to produce loose ends, it forces us to only start activities that can actually be completed in a sprint with the necessary coordination between subteams. We have to begin with the end in mind.

    For example, if a backlog item for adding a new report to a reporting application is selected, we have to make sure to request the whole feature - both the new GUI (a pretty diagram, maybe), and the necessary updates to the back-end server application to provide the data. If there are separate teams working on this the feature is split across different team backlogs. In many cases this requires coordination between multiple teams when planning. The key point is to never commit anything to the integration sprint backlog unless all aspects of it are committed to the sprint backlogs of the subordinate teams.

    If this is not possible we have to break it down into smaller pieces or leave it in the product backlog. We never accept work that cannot be completed and demonstrated at the end of the iteration.

    When we implement Scrum in organisations this is usually perceived as quite frustrating in the beginning. Many people have a strong habit of delivering partial work. However, by only committing completable work we eliminate one of the great time-consuming activities in project management, namely managing dependencies between projects, trying to get features done in isolated, non-integrated layers, and the coordination and "expediting" work of lobbying different teams to change their plans to get it all correct and integrated on time.

    The end result is a system where business goals and priorities flow from the top down and demonstrable business results flow from the bottom up. Every iteration ends with completely integrated realisations of the top level backlog items. It means goodbye to, "we're done with the new report, but we have to wait three months for the database team to expose the data for us", and hello to, "here's your new report, ready for use". That's a pretty good return on eliminating bureaucracy.

  • Implementing Scrum in the Enterprise - Agile 2007 Day Four

    Ken Schwaber has been working on enterprise-wide Scrum implementations recently. In his session today he shared his experiences and offered a roadmap for the process.

    First of all there is no "Enterprise Scrum" - it is the same simple, empirical framework for developing complex systems that is applied everywhere rather than just in software development.

    This means that even senior management will be working iteratively with a backlog and showing demonstrable results at the end of every iteration. This, in turn, means radical transparency through the whole organisation at all levels.

    In Schwaber's experience the enterprise-wide roll-out of Scrum takes some time. He offered the case of a 1000-developer organisation. Here, they spent six months on the roll-out (training everybody in the Scrum practices) and after that 3-5 years to make it stick by iteratively removing the impediments to producing quality software in the organisation. As he said, "Culture eats strategy for breakfast", so this phase is all about changing people's habits and getting the waste out of the system. During this implementation a senior management executive Scrum team is in place, using Scrum to deal with the impediments and demonstrably solve the top issues as they become visible during the transition.

    There is no "Done" - once the Scrum framework is implemented the organisation is in a state of continuous improvement, reflecting on its changing environment and evolving to stay ahead of the game.

    He offered this road-map:

    First, do a pilot project to learn the Scrum basics, then try Scrum on the most challenging, critical project in the organisation to make a proven case for everybody that it works.

    Then do the enterprise roll-out. This is where it becomes top-down and is implemented at all levels.

    Here is how the executive team works:

    First, create a backlog of what is wrong today - what is in the way of developing quality software (e.g. "we have too many projects", "we don't integrate often so we have no visible status", "we produce too many loose ends"). Then commit to fixing it and use the Scrum process to fix it - first things first. Senior management has to demonstrably fix the top issue or issues in one iteration. Then attack the next top one, etc. etc.

    Meanwhile, the Scrum training for the entire organisation starts. This is the roll-out.

    For all the work, the Scrum team (max 9 people) is the basic building block, and Scrum-of-Scrums is used for the hierarchical breakdown of the organisation.

    The benefits are very visible status across the entire organisation, and that everyone is working from a single backlog aligned with creating business value rather than some arbitrary and sometimes detrimental performance metrics (Jim Highsmith did a talk on agile performance management at the conference with some very good findings on this topic). There will be a very simple and open control structure encouraging an open environment over command and control - no need for additional committees and boards to control things at an often too late time.

  • Corporate Judo - Guerilla Tactics For Agile Transformation - Agile 2007 Day Three

    Today, we did a Discovery Session at Agile 2007. The topic? How do we implement agile? We worked in five teams that each created a roadmap for implementing agile based on their experience and their role in the organisation: management, team and customer.

    A big thank you goes out to everyone who participated. Please add your comment and notes to this blog entry.

    Michelle K. Cole, VP of Operations at ENVISAGE Technologies kindly wrote up the findings from the five teams as they presented. Here they are.

    Where Should a Manager Start?

    • get executive sponsorship
    • create vision & trust
    • education about benefits of agile
    • create common language and measurements
    • show value to team (what’s in it for me), benefits to the projects
    • provide a supportive environment
    • figure out success factors
    • set up agile advocate group
    • identify if you need external coaches
    • establish team maturity model
    • staffing – hire those people that might be good in this environment (you might even need to fire some if there is too much resistance).
    • empower them to come up with a better solution
    • create a system of custom ownership
    • use peer pressure to bring other people along

    The other Management team offered this approach: 

    • do a big kick-off event
    • send key people to conferences like Agile 2007 etc.
    • provide readers/skeptics with books - some people learn well from books and many techies like to read a lot.
    • pilot a project & praise success
    • encourage culture of trying things
    • meet with individuals to understand pain points and processes to keep
    • Establish brown bags/training for people that like that style
    • Rearrange workspace so team sits together

    Guerilla Tactics for Teams - Developers/Project Managers/QA

    • how to overcome resistance to sitting in team room
      • have stand up in team room and don't let them leave afterwards!
    • how to get space for collocation
      • squatting long term
      • have stand up outside mgmt office
      • furniture on wheels
      • midnight runs
    • resistance to pairing
      • stealth pairing (ask people to help another)
      • set up a pair station
      • coolest equipment in the pairing machines
      • have appropriate hardware to pair (2 keyboards, two monitors, movable desks)
    • resistance to collaboration
      • make sure there are common goals
    • resistance to agile
      • convince the cool kid it is cool
      • explain there isn’t that much change
      • bust the myths
    • mandated process
      • ask forgiveness
    • resistance to TDD & automated tests
      • demonstrating the usefulness
      • pair with fans of TDD
    • involve QA
      • have the business analysts work with QA on acceptance tests
    • PM resistance
      • Show other non-software projects
      • Shadow existing projects
      • Send the PM on vacation

     

    Also from a Team perspective

    • Agile is not binary; it is a continuum.  You are someplace.  Where do you want to be tomorrow?
    • Change should not be inflicted upon people.  You should figure out what you can do and start to make that change.
    • As a developer, start doing TDD.
    • As a team, start doing stand up.
    • Agile provides many benefits.  Figure out which benefit you are trying to achieve most and start with the following:
      • Focus on feedback - fixed-length iterations with retrospectives
      • If you need flexibility - TDD is a good practice
      • Communication issues are addressed by daily stand-ups

    Customer team

    This was a very interesting team. Most of the people in the session had experienced resistance to implementing agile. The people from Yahoo, however, had not - for their two teams they just did a team meeting, introduced the agile practices and since the teams liked it they decided to do it. Unlike most of the other people in the session they were completely self-organising and met no resistance to the change from their team, the management or the rest of the organisation.

    The group presented some suggestions for going agile

    • Get trained together to improve buy in
    • Be where the developers are; wait for them to ask questions
    • Us v. them – start doing things together as a team, may want to get an outsider to help build trust
    • Define product backlog & prioritize

    Other Ideas

    Doing Agile with Outsourcing

    • Use Scrum of scrums to synch the teams and beware of time-zone issues.
    • Have people go to other locations for a while to build trust and improve collaboration.
    • Find a balance of documentation – increase to improve communication.

    Partial change - do it gradually (actually, other cases like the Salesforce.com experience report we mentioned the other day advocates going all-in).

    Survey people on what they need and fix it - it is often small things that make a big difference

    • If the rooms feel too sterile let people hang personal items from the ceiling (e.g. pictures)
    • Add plants

    Empowerment based on company directives

    Room inversion

    • Switch to working in conference rooms and use offices for personal time (e.g. phone calls).

  • Impressions From Agile 2007 - Day Two

    Dave Thomas told about his experience with large-scale agile projects. His take on plans:

    The Plan is really just The Best Wrong Answer

    Having said that however, he also noted that he had a very good track record with hitting the shipping dates - the worst project was Visual Age for Java which was three weeks late. His secret? Letting the team own the plan and doing estimates in ranges (wideband delphi) to prepare the planning session, then iteratively negotiating over the estimates until a consensus is reached.

     

    Release Engineering - Getting the Development Infrastructure Right

    Dave Thomas also named Release Engineering - the work to create the development infrastructure, automated builds and testing, and to provide dashboards with project status - as a key activity. In fact, his experience is that around 10% of the total effort should go into this area. The key points:

    • it is not something you outsource to the IT-department - instead, use real developers and co-locate them with the development team the first 3-4 months of the project as they work to set up configuration management, automated build, integration and testing.
    • Do focus on dependency management - you need to have thin slices of all the components with automated acceptance tests for the interfaces in place early so no-one is blocked by missing components.
    • All components should be tested independently - and provide acceptance tests for all interfaces.
    • Set up an automatic "Product Development Dashboard" with all the vital information - the goal is to eliminate the need for asking, "how's it going". This includes build status, test status, coverage etc. 

     

    Jim Highsmith did a nice presentation on agile from a management perspective, stating that the key to success with agile is to implement it throughout the enterprise. Doing it in silos is a suboptimisation.

    The key benefits of going agile:

    • it helps solve issues on developing features in a large, complex system.
    • it improves quality, and
    • it improves predictability

    The big value in agile is all the things you don't do.

    • not wasting time on the features you don't need (most features aren't used anyway).
    • the value is driven by the business needs, not by some arbitrary priorities.

    The essence - from John Wooden: "Be Quick, But Don't Hurry".

  • Impressions From Agile 2007 - Day 1

     

    Finally, the biggest agile event of the year is under way! We have a big Danish delegation here this year - besides Ative we have met people from Bankdata, BRFkredit, GoAgile, BestBrains, NDS and Systematics here.

    Martin Jul (left) and Jan Elbæk (right)

    Agile Metrics

    The first day of the conference was a half-day with only a Research-in-progress track in the morning. One session focused on metrics for agile.

    There was a big discussion about the usefulness of metrics. It was clear from people's experience that good metrics can be useful but using the wrong metrics can be devastating. To this William Krebs of IBM listed 5 Deadly Sins of metrics: metrics with a lot of reporting overhead, scorecards with traffic lights (people tend to game them to make everything "green"), not fixing the problems that were identified by the metrics, using an inconsistent format so that data could not be compared, and forcing a process on the team.

    Coni Tartaglia and Prasad Ramnath presented a list of metrics that their team at Primavera Systems had decided to use for their project. As part of their Retrospective they had decided on a number of metrics called their "Sprint Build Inspection".

    To measure "Coded to Standards" they used FxCop (for .NET) and FindBugs (for Java). These are static analysis tools that spot common problems in code such as naming conventions, performance and design issues etc.

    To measure the unit tests they used code coverage. It was interesting to see that they defined "good enough" as a coverage of 56% or more. However they mostly used this metric to track how they were progressing towards better coverage as they were developing from a legacy starting point.

    For "functionally tested" they mandated an automated test for each requirement and reported on the number of automated tests. They segmented these into areas - manual (they had gone from many to none), FIT tests, Silk tests and other tests.

    For measuring if the application was "Integrated" they reported on the percentage of unit tests passing, the percentage of FIT tests passing, the percentage of Silk tests, manual and other tests passing. This was used to gauge the quality of the application.

    Additionally, they measured "number of priority 1 defects". The team had made a working agreement and decided that at the end of each sprint no priority one defects were allowed, and during the sprint a maximum of 3 priority one defects were allowed per team and no more than 40 total. Also, no bugs older than 30 days were allowed.

    Finally they used an interesting qualitative approach to overall shippability, namely asking the team to assess whether they would be able to ship the application in 30 days or not. This would be the litmus test of the overall quality.

    After this we had a big discussion in groups and people also named a number of other useful metrics like not allowing code with higher cyclomatic complexity than 10 (to eliminate the worst design issues), measuring compliance with various internal and external standards (go/no-go) and to measure positive approval from the customer (some would say that would be the ultimate metric!), and also to inspect and adapt the set of metrics on the way, adding and dropping metrics as needed.

    Experience Reports - Implementing Agile in the Enterprise

    The really big case study here was Chris Fry and Steve Greene from Salesforce.com. Their case: converting a 200+ person development organisation from waterfall to agile in three months. Yes, three months. They didn't like the way their velocity and release frequency had dropped and bet everything on fixing it quickly.

    As Chris Fry noted - there's never a good time to convert to agile, so just pick a bad time and go.

    Here's how they did it: they created a cross-functional roll-out team with the leads of all the involved silos - development, QA etc. - to oversee the transformation project.

    Then, everybody got a copy of Ken Schwaber's Scrum book and over a two-week period, everybody received a two-hour crash course on agile development.

    Then they converted the organisation into 30 Scrum teams and let go. The company had a 30-day sprint cycle, but teams were allowed to subdivide these sprints into one- or two-week sprints if they liked.

    Additionally, they sent 30 people from the development organisation for ScrumMaster certification, and 35 product owners were also certified to ensure buy-in from the business side. One of their key findings was that they should have started training the product owners earlier. This is also our experience.

    Some of their key findings:

    • The Product Owners should be trained first (ScrumMasters can be trained later)
    • They would like to have used more open-space style events with the individual team members to get buy-in faster from the people doing the work.
    • They should have introduced outside coaching help earlier.
    • They gave key executives deliverables for the roll-out to assure their buy-in.
    • As they moved to self-organising teams they had very clear rules about "Done"-ness. This was not up to the individual teams. Instead they set a corporate standard for QA, documentation, test etc.

    They listed a number of keys to success:

    • Get executive commitment - this also means that shipping dates are never negotiable. The sprint length and shippability can never be compromised.
    • Focus on the principles - not the mechanics. For example, some teams did not like the Daily Scrum agenda, so they were allowed to make their own agenda as long as they kept meeting daily to communicate about the project and share important information.
    • Also, they focused a lot on automation. Continuous integration with daily build and test was put in place and bugs were treated as stop-the-line issues.
    • Finally, they advised providing radical transparency - in fact, they sent out a company-wide email with key daily metrics every day to provide a heartbeat for the transition.

    Finally they offered some advice for people embarking on a similar transformation project:

    • Set up a dedicated cross-functional roll-out team to guide the transition
    • Get professional help - outside coaches.
    • Focus on moving the average teams up - get several teams to excellence. Don't waste your time on the high and low performers but focus on getting the bulk to a high level.
    • Provide a heartbeat in the form of a daily metric to everyone.
    • Decide on proper tooling early - for their organisation using Excel for backlog/burndown did not quite scale.
    • Encourage visibility and over-communicate. People need to hear the agile principles again and again to avoid fallbacks.
    • Experiment, be patient and expect mistakes. Just fix them quickly as they arise.
    • And don't take the slow way - go all in, and convert the whole organisation in one shot.
  • Meet us at Agile 2007

    We're off to Agile 2007 next week where I will be hosting a Discovery Session on how to go about implementing agile practices. It is called "Corporate Judo - Guerilla Tactics for Agile Transition". If you are interested in this topic and would like to share your experience to identify patterns for going agile, please join us there. It is on Wednesday at 16:00 in Meeting Room 2.

    Here's the abstract:

    How do we become better change agents? In the agile community we have all "seen the light." We know all the things that are broken in traditional IT organizations. We also know all the solutions. Yet for some reason we are not very efficient at making the transition happen - and even when we manage to change things for the better the organizations often revert to their old ways when we leave. In this workshop, we will work in groups and draw on our experience to identify the best "corporate judo moves": those that make the agile change happen, and make it stick.

    Also, we would like to meet our blog friends while we are there. If you are interested, please contact me at mj@ative.dk.

  • Planning Poker

    Estimation and planning is an essential part of development, even on agile projects. We have all seen lots of worthless plans - so many, in fact, that we are tempted to throw planning out altogether. But estimation does not need to be boring, nor does it need to be that inaccurate. It can actually be kind of fun...

    In his excellent book Agile Estimating and Planning Mike Cohn discusses the philosophy of agile estimating and planning and shows you how to get the job done.

    In this post I will mention Planning Poker (coined by James Grenning) as a simple but very effective technique. The rules are few and simple:

    • Each team member gets a deck of estimation cards.
    • The product owner presents one user story at a time.
    • The product owner answers any questions the team might have.
    • Each team member selects a card representing his estimate.
    • When everybody is ready with an estimate, all cards are presented simultaneously.
    • If estimates differ, the high and low estimators defend their estimates.
    • The group briefly debates the arguments (time boxed).
    • A new round of estimation is made.
    • Continue until consensus has been reached.
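    The core of the protocol - show cards simultaneously, then let the outliers defend their numbers - can be sketched in a few lines (names and estimates below are made up for illustration):

    ```python
    # Sketch of one Planning Poker round: detect consensus and, when the
    # estimates differ, name the low and high estimators who should
    # defend their reasoning in the time-boxed debate.
    def poker_round(estimates):
        """estimates: dict of member name -> card value. Returns
        ("consensus", value) or ("debate", low_member, high_member)."""
        values = set(estimates.values())
        if len(values) == 1:
            return ("consensus", values.pop())
        low = min(estimates, key=estimates.get)
        high = max(estimates, key=estimates.get)
        return ("debate", low, high)

    # Round 1: estimates differ, so the outliers defend their numbers.
    print(poker_round({"Anna": 3, "Bo": 8, "Carl": 5}))  # ('debate', 'Anna', 'Bo')
    # Round 2: after the discussion, consensus is reached.
    print(poker_round({"Anna": 5, "Bo": 8, "Carl": 5}))  # ('debate', 'Anna', 'Bo')
    print(poker_round({"Anna": 5, "Bo": 5, "Carl": 5}))  # ('consensus', 5)
    ```

    In practice, of course, the value is in the conversation the debate triggers - the mechanics are only there to surface the disagreement.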

    I have used Planning Poker for sprint planning in Scrum teams - but also in release planning when management required long-term planning 6-9 months ahead. Using Planning Poker we had enough accuracy and did not spend lots of time trying to detail tasks up front that nobody knew much about - and that perhaps would be replaced by more important stuff before being implemented.

    Ative Planning Poker cards

    So what you need is some planning poker cards to get started doing estimation the agile way. Contact Ative and get a deck of cards.

    A full deck of Ative cards consists of 13 cards spanning from 0 to 100, plus 2 special cards:

     "?" = "I have absolutely no idea at all" (too many of these played tells us that the user story most likely is not ready for the current sprint)

    "Coffee" = "I need a short break. I'm too tired to think"

  • Dijkstra on Software Quality

    "The vision is that, well before the end of the seventies have run to completion, we shall be able to design and implement the kind of systems that are now straining our programming ability at the expense of only a few percent in man-years of what they cost us now, and that besides that, these systems will be virtually free of bugs. These two improvements go hand in hand. In the latter respect software seems to be different from many other products, where as a rule a higher quality implies a higher price. Those who want really reliable software will discover that they must find means of avoiding the majority of bugs to start with, and as a result the programming process will become cheaper. If you want more effective programmers, you will discover that they should not waste their time debugging - they should not introduce the bugs to start with. In other words, both goals point to the same change."

    Edsger W. Dijkstra in his 1972 Turing Award Lecture, "The Humble Programmer" (Communications of the ACM, October 1972, Volume 15, Number 10, p. 859-866).

    Not only does Dijkstra name the concept of speed through quality, but he also touches on a lot of other issues that are still relevant today as part of the agile/lean software development agenda. Highly recommended reading!

     

  • Retrospectives - Adapting to Reality

    Accelerated learning or "Inspect and adapt" is one of the key lean principles. It is the central feedback loop that keeps our work in sync with the real world.

    In agile, one of the ways to do this is by performing a "retrospective" at the end of every iteration. Its focus is gathering data and turning it into actionable improvement activities (kaizen). Esther Derby and Diana Larsen have written a wonderful little book on this: "Agile Retrospectives: Making Good Teams Great". It has a wealth of information on how to get the most from retrospectives and includes a number of useful tools.

    Derby and Larsen use a five-stage structure for the retrospective: setting the stage, gathering data, generating insights, deciding what to do and closing the retrospective. Inside this structure they provide a lot of data gathering tools. For example, Diana Larsen recently blogged about a tool for data gathering called FRIM. It works by drawing a diagram where we group the observed issues by frequency (how often the issue occurs) and impact (how much it impacts the project). Then we use this diagram to pick the most important issues to address for improvement activities in the next iteration.

    In our agile consulting work we have visited quite a lot of projects that do "evaluations". When this is the case, the kaizen step is usually missing. Instead people just fill out forms saying "as usual we had trouble with integrating the application". Needless to say, the kaizen step is the point of the whole exercise. Merely gathering data is just bureaucratic waste.

    One of the most common ways to transform the observations into actionable kaizen activities is to name the key issues for the next iteration. This technique is called "Do, Don't and Try" (Alistair Cockburn describes a retrospective activity called Reflection Workshop with "Keep, Problems and Try" in his book, Agile Software Development).

    In this activity we mark out three sections on a whiteboard:

    "DO" is where we put the things we should do or keep doing. This is all the things that make us more efficient. We don't list all the good practices, just the ones we are currently learning and need to do consciously until they become habits. An example could be "Talk to the customer about the requirements before we start implementing them."

    "DON'T" is the section with all things that we do that lead us to step on the usual landmines. These are the fires we are currently fighting and the things we want to stop doing. Tom Peters calls this a "To Don't List". Example: "Don't over-architect the application to support speculative future requirements".

    The third section, "TRY", is the section for new things we want to try out and evaluate. Example, "Try adding automated acceptance tests with FIT".

    Everybody contributes. As the team names items we cluster them so similar items are grouped in one. Then, the next step is selecting the most important ones as focus for the next iteration.

    We normally recommend a simple process called "dot voting" where each team member puts a dot in each area to mark what they rate as the most important item. This is usually enough to make it clear what the most important issues are. We post these on a big visible "Do / Don't / Try" chart in the team working area. They give focus to the things we should do consciously for the next iteration. Then, as part of the next iteration retrospective, we look back and evaluate the results.
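    Dot voting itself is just a tally. A minimal sketch, with illustrative item names:

    ```python
    # Dot voting: one list entry per dot placed; the most-voted items
    # become the focus for the next iteration. Items are made up.
    from collections import Counter

    def tally(votes, top_n=1):
        """votes: list of item names, one per dot. Returns the top_n items."""
        return [item for item, _ in Counter(votes).most_common(top_n)]

    do_votes = ["talk to customer first", "pair on tricky code",
                "talk to customer first", "talk to customer first"]
    print(tally(do_votes))  # ['talk to customer first']
    ```

    On a whiteboard the Counter is simply the cluster with the most dots - the code only makes the rule explicit.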

    We keep the things that help us for as long as they keep helping us.  

    There is a very important point that Jan Elbæk mentioned in a workshop on agile development with Scrum that we hosted recently. When we name the critical issues, most often the best action is to do nothing! Merely being aware of the bad behaviour is often enough to stop it. Try this light-touch approach first. If we give in to the urge to create bureaucratic control systems to steer our way around the problem in the future, there is a big risk that the bureaucracy stays after the specific problem is solved. This is the recipe for building enterprisey organisations.

    That is not our goal, however. The goal is to solve the problems at hand, not to load the process with bureaucracy.

  • “Implementing Lean Software Development: Practitioners Course”, by Mary and Tom Poppendieck

    Having attended the two-day "Implementing Lean Software Development: Practitioners Course", I encourage you to join Mary and Tom's classes or talks if you get the chance - they are very inspiring, and their knowledge about lean and agile is amazing.

    Mary and Tom have done a great job transforming Lean principles from the manufacturing environment to software development. The course takes you through the history of Lean, gives you a brush-up on the theory behind it, and gives you a lot of good techniques, tools and ideas on how to improve your existing projects and use the power of Lean.

     A few “lessons learned”:  
    • Success factor for implementing Lean: look at the whole supply chain, not only development or test. Think products (end to end) and not only projects.
    • Use pull rather than push. Be aware of utilization: an operations manager will react when a server's CPU utilization is getting close to 100% - but what about people utilization …
    • When implementing Lean it isn’t enough just to use Just-in-time and stop-the-line. It is the people doing the actual work that have the potential to make it a success. They have the good answers and ideas.
    • All queues and stacks of work are WASTE. Remove the queues, improve the cycle time and get better overall performance and quality. Value stream maps are a powerful tool to identify queues and their size.
    • Without automated tests you are asking for trouble. Don't buy the "we don't have time to implement automated tests" excuse - if you wait, your problems will grow…
    • And NO it is not ok to have a backlog that contains work for more than 3 iterations, including ideas for the future – get rid of the WASTE and focus on the important stuff that adds value to the customer.
  • Impressions From Our Lean/Agile Dinner with Mary and Tom Poppendieck

    This Tuesday we hosted a dinner for the Danish Agile User Group with Mary and Tom Poppendieck in the great Restaurant Grønnegade. The charming old house from 1689 with its modern European slow-food set the stage for an evening of great conversation about the state and future of lean and agile software development.

    Mary Poppendieck and Martin Jul

     

    Around 20 Danish agilists attended and following a question/answer session with Mary and Tom we had an excellent five-course dinner. The room was buzzing with lively conversation - people trading war stories and experiences with implementing agile and lean software development in their organisations. It was a great night with plenty of inspiration for the daily work.

    Tom Poppendieck with Danish agile practitioners.

    This post is a collaborative montage of impressions from the participants - from the Q/A with Mary and Tom and the conversation in the group. Please add your impressions below.

    Just to get the ball rolling one of the things I discussed with Henrik Thomsen, Lone Møller Klitgaard and Mary Poppendieck was Scrum and its shortcomings.

    Henrik got the ball rolling by noting that Scrum is too limited for his taste since it only addresses a small part of the total picture, excluding, for example, many people issues. Henrik talked about how he has used the Soft Systems Methodology to visualise the stakeholders and predict conflicts in the development process - clearly something that is not addressed by Scrum.

    The Ative experience with introducing Scrum is that it is a very simple yet powerful set of rules. This makes it a great starting point for agile practices. However, we are not dogmatic about Scrum: we keep looking at the overall picture and fixing the impediments through a classic lean inspect-and-adapt cycle. For example, we usually spend quite a lot of time with the technical team to ensure a minimum level of professional craftsmanship (configuration management, automated build and deployment, automated unit and acceptance testing etc. - we have blogged about this earlier). We have also experienced the need for a lot of training for the Product Owner - in fact, this is usually much harder than getting the development team to adopt the Scrum practices.

    The big thing was that the discussion framed the things we work with every day: from the perspective of lean - which is essentially a framework for thinking about processes - agile is a lean implementation. This also means that it is context-specific and constantly evolving. Therefore it makes no sense to talk about The Only Way to agility; rather, we should treat all the agile practices as a toolbox and mix-and-match the things we need in a particular context.

    Just as the doctor does not order you to take all the pills in the pharmacy or prescribe only aspirin for any ailment, agilists should use skilful means, too. We need to know and have experience with a broad range of agile practices, but we only advise the specific medication that the patient needs, and in just the right amount. And to tie it back to the discussion about Scrum: Scrum is not the cure for every problem.

     

    Please add your impressions from the evening in the comments below.

    • what were your highlights? (things you talked about, people you met, ...)
    • what did you learn?
    • what inspired you the most?
    • how are you going to use it in your work?
    • about the evening - "keep this"/"try this"/"don't do that again" feedback for arranging something like this in the future.

     Thank you for making it a great night and for posting your impressions!

     

  • Iterative Development Gone Wrong - The Mini-Waterfall Anti-Pattern

    One of the frequent mistakes in transitioning to agile development is to implement iterative development by doing consecutive "mini-waterfalls".

    No matter how iterative it might be, if you are relying on the "W"-word, you are still doing something wrong.

    When we postpone testing and completion to the end of the iteration we are shooting ourselves in the foot. Once testing begins we start to uncover all the mistakes and defects, and with the clock running out there is no room left to manoeuvre - no time to descope or rescope, or maybe even to complete some of it. Most often the result is to end the iteration with a number of unresolved defects and incomplete features.

    From a Lean perspective the problem is that everything becomes work-in-progress - we produce loose ends everywhere at the same time rather than completing features one by one to proper production-ready quality.

    We have seen this symptom on several projects now and the cause seems to be that testing is not integrated properly in the process. We really need to be test-driven, also on the acceptance/integration testing level to truly transition to agile development.

    This means that testing and QA should be moved to the front and be an on-going activity over the course of the iteration - not a frantically compressed activity at the end. In fact it is a first-class development activity that drives the whole project.

    Even when we are aware of this it is easy to get caught on the wrong foot.

    We often see experienced testers build their test cases around complex use case scenarios. This results in "big bang" testing where steps cannot be tested individually - the test hinges on a big set of deliverables rather than incrementally evolving with the application.

    The remedy is to plan the backlog in terms of small, testable slices. Even if you are working from Use Cases, break them into smaller "user stories" that describe a simple feature (usually one or two steps in the use case). Test the user stories individually and incrementally.

    The next step is to automate the acceptance tests so we get regression testing "for free". This allows us to sustain the quality at a known, high level.

    With this we are on our way to developing better software faster, and even when we get bogged down we have earned the right not to make any excuses. Instead of saying "well, we are sort of 80% done with 100% of the application and no, we cannot deploy anything to production", we have earned the right to say, "Well, we suffered some setbacks, but we are 100% done with the 80% most valuable features. Let's put it into production and start reaping the benefits."

© Ative Consulting ApS