Ative at Work

Agile software development

Impressions From Agile 2007 - Day 1

 

Finally, the biggest agile event of the year is under way! We have a big Danish delegation here this year - besides Ative we have met people from Bankdata, BRFkredit, GoAgile, BestBrains, NDS and Systematics.

Martin Jul (left) and Jan Elbæk (right)

Agile Metrics

The first day of the conference was a half-day with only a Research-in-progress track in the morning. One session focused on metrics for agile.

There was a big discussion about the usefulness of metrics. It was clear from people's experience that good metrics can be useful, but using the wrong metrics can be devastating. On this point, William Krebs of IBM listed five deadly sins of metrics:

  • Metrics with a lot of reporting overhead
  • Scorecards with traffic lights (people tend to game them to make everything "green")
  • Not fixing the problems that the metrics identify
  • Using an inconsistent format so that data cannot be compared
  • Forcing a process on the team

Coni Tartaglia and Prasad Ramnath presented a list of metrics that their team at Primavera Systems had decided to use for their project. As part of their Retrospective they had decided on a number of metrics called their "Sprint Build Inspection".

To measure "Coded to Standards" they used FxCop (for .NET) and FindBugs (for Java). These are static analysis tools that spot common problems in code, such as naming-convention violations, performance issues and design issues.

To measure the unit tests they used code coverage. It was interesting to see that they defined "good enough" as a coverage of 56% or more. However, they mostly used this metric to track how they were progressing towards better coverage, as they were developing from a legacy starting point.
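A threshold like this is easy to turn into a build gate. Here is a minimal sketch of such a check; the 56% figure comes from the team's working agreement above, while the function and variable names are our own illustration, not anything Primavera described:

```python
# Sketch of a coverage gate using the 56% "good enough" threshold.
# The threshold is from the article; everything else is illustrative.

COVERAGE_THRESHOLD = 56.0  # percent

def coverage_ok(covered_lines: int, total_lines: int,
                threshold: float = COVERAGE_THRESHOLD) -> bool:
    """Return True if line coverage meets or exceeds the threshold."""
    if total_lines == 0:
        return False  # nothing measured: treat as failing
    return 100.0 * covered_lines / total_lines >= threshold

# Example: 580 of 1000 lines covered is 58%, which passes the 56% bar
print(coverage_ok(580, 1000))  # True
print(coverage_ok(500, 1000))  # False (50%)
```

A build server could run this after the coverage tool and fail the build on False, which matches the spirit of tracking progress from a legacy starting point: raise the threshold as coverage improves.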

For "functionally tested" they mandated an automated test for each requirement and reported on the number of automated tests. They segmented these into areas - manual (they had gone from many to none), FIT tests, Silk tests and other tests.

For measuring whether the application was "Integrated" they reported the percentage of unit tests passing, the percentage of FIT tests passing, and the percentage of Silk, manual and other tests passing. This was used to gauge the quality of the application.

Additionally, they measured the "number of priority 1 defects". The team had made a working agreement: at the end of each sprint no priority 1 defects were allowed, during the sprint a maximum of 3 priority 1 defects were allowed per team and no more than 40 in total, and no bugs older than 30 days were allowed.

Finally, they used an interesting qualitative approach to overall shippability: asking the team to assess whether or not they would be able to ship the application in 30 days. This was the litmus test of the overall quality.

After this we had a big discussion in groups, and people named a number of other useful metrics: not allowing code with a cyclomatic complexity higher than 10 (to eliminate the worst design issues), measuring compliance with various internal and external standards (go/no-go), and measuring positive approval from the customer (some would say that is the ultimate metric!). People also suggested inspecting and adapting the set of metrics along the way, adding and dropping metrics as needed.
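Cyclomatic complexity is usually computed by a real static-analysis tool, but McCabe's shortcut (complexity = decision points + 1) is simple enough to sketch. The keyword-counting approximation below is our own illustration and is far cruder than a parser-based tool; it only conveys what the "no higher than 10" rule is checking:

```python
import re

# Rough McCabe approximation: complexity = 1 + number of decision points.
# Counting keywords is an illustration only; real tools parse the code.
DECISION_RE = re.compile(r"\b(if|elif|for|while|and|or|case|except)\b")

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity of a source snippet."""
    return 1 + sum(
        len(DECISION_RE.findall(line.split("#")[0]))  # ignore comments
        for line in source.splitlines()
    )

def complexity_ok(source: str, limit: int = 10) -> bool:
    """Apply the 'no complexity higher than 10' rule from the discussion."""
    return cyclomatic_complexity(source) <= limit
```

For example, a function with one `if` containing one `and` has two decision points, so its complexity is 3 and it passes the limit of 10 comfortably.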

Experience Reports - Implementing Agile in the Enterprise

The really big case study here was Chris Fry and Steve Greene from Salesforce.com. Their case: converting a 200+ person development organisation from waterfall to agile in three months. Yes, three months. They didn't like the way their velocity and release frequency had dropped and bet everything on fixing it quickly.

As Chris Fry noted - there's never a good time to convert to agile, so just pick a bad time and go.

Here's how they did it: they created a cross-functional roll-out team with the leads of all the involved silos - development, QA, etc. - to oversee the transformation project.

Then everybody got a copy of Ken Schwaber's Scrum book, and over a two-week period everybody received a two-hour crash course on agile development.

Then they converted the organisation into 30 Scrum teams and let go. The company had a 30-day sprint cycle, but teams were allowed to subdivide these sprints into one- or two-week sprints if they liked.

Additionally, they sent 30 people from the development organisation for ScrumMaster certification, and 35 product owners were also certified to ensure buy-in from the business side. One of their key findings was that they should have started training the product owners earlier. This is also our experience.

Some of their key findings:

  • The Product Owners should be trained first (ScrumMasters can be trained later)
  • They would have liked to use more open-space-style events with the individual team members to get faster buy-in from the people doing the work.
  • They should have introduced outside coaching help earlier.
  • They gave key executives deliverables for the roll-out to assure their buy-in.
  • As they moved to self-organising teams they had very clear rules about "Done"-ness. This was not up to the individual teams; instead they set a corporate standard for QA, documentation, test, etc.

They listed a number of keys to success:

  • Get executive commitment - this also means that shipping dates are never negotiable. The sprint length and shippability can never be compromised.
  • Focus on the principles - not the mechanics. For example, some teams did not like the Daily Scrum agenda, so they were allowed to make their own agenda as long as they kept meeting daily to communicate about the project and share important information.
  • Also, they focused a lot on automation. Continuous integration with daily build and test was put in place and bugs were treated as stop-the-line issues.
  • Finally, they advised providing radical transparency - in fact, they sent out a company-wide email with key daily metrics every day to provide a heartbeat for the transition.

Finally they offered some advice for people embarking on a similar transformation project:

  • Set up a dedicated cross-functional roll-out team to guide the transition
  • Get professional help - outside coaches.
  • Focus on moving the average teams up - get several teams to excellence. Don't waste your time on the highest and lowest performers; focus on getting the bulk to a high level.
  • Provide a heartbeat in the form of a daily metric to everyone.
  • Decide on proper tooling early - for their organisation, using Excel for backlog/burndown did not quite scale.
  • Encourage visibility and over-communicate. People need to hear the agile principles again and again to avoid fallbacks.
  • Experiment, be patient and expect mistakes. Just fix them quickly as they arise.
  • And don't take the slow way - go all in, and convert the whole organisation in one shot.
Published aug 14 2007, 02:05 by Martin Jul

Comments

 

ativeadmin said:

There is a paper on the Salesforce transformation story here: www.agile2007.org/.../028_fry-LargeScaleTransition-final_703.pdf

August 15, 2007 1:49

About Martin Jul

Building better software faster is Martin's mission. He is a partner in Ative, and helps development teams implement lean/agile software development and improve their craftsmanship, teaching hands-on development practices such as iterative design and test-first. He is known to be a hardliner on quality and likes to get things done-done.
© Ative Consulting ApS