Ative at Work

Agile software development

Retrospectives - Adapting to Reality

Accelerated learning or "Inspect and adapt" is one of the key lean principles. It is the central feedback loop that keeps our work in sync with the real world.

In agile, one way to do this is by performing a "retrospective" at the end of every iteration. Its focus is gathering data and turning it into actionable improvement activities (kaizen). Esther Derby and Diana Larsen have written a wonderful little book on this: "Agile Retrospectives: Making Good Teams Great". It has a wealth of information on how to get the most from retrospectives and includes a number of useful tools.

Derby and Larsen use a five-stage structure for the retrospective: setting the stage, gathering data, generating insights, deciding what to do and closing the retrospective. Inside this structure they provide many data-gathering tools. For example, Diana Larsen recently blogged about a data-gathering tool called FRIM. It works by drawing a diagram where we group the observed issues by frequency (how often the issue occurs) and impact (how much it affects the project). Then we use this diagram to pick the most important issues to address in improvement activities in the next iteration.
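The ranking idea behind FRIM can be illustrated with a small sketch. This is not Larsen's tool itself, just a hypothetical way to sort the gathered issues; the issue names, the 1-5 scales and the frequency-times-impact score are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Issue:
    name: str
    frequency: int  # how often it occurs, 1 (rarely) .. 5 (daily)
    impact: int     # how much it hurts the project, 1 (minor) .. 5 (severe)

# Hypothetical issues gathered during a retrospective.
issues = [
    Issue("Broken integration builds", frequency=5, impact=4),
    Issue("Unclear requirements", frequency=3, impact=5),
    Issue("Noisy team room", frequency=4, impact=1),
]

# Rank by combined frequency and impact; the top items become
# candidates for the next iteration's improvement activities.
ranked = sorted(issues, key=lambda i: i.frequency * i.impact, reverse=True)
for issue in ranked:
    print(f"{issue.name}: score {issue.frequency * issue.impact}")
```

On a whiteboard the same information would simply be a grid with frequency on one axis and impact on the other, and the top-right corner getting the team's attention first.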

In our agile consulting work we have visited quite a lot of projects that do "evaluations". In these cases the kaizen step is usually missing. Instead people just fill out forms saying "as usual we had trouble with integrating the application." Needless to say, the kaizen step is the point of the whole exercise. Merely gathering data is just bureaucratic waste.

One of the most common ways to transform the observations into actionable kaizen activities is to name the key issues for the next iteration. This technique is called "Do, Don't and Try" (Alistair Cockburn describes a retrospective activity called Reflection Workshop with "Keep, Problems and Try" in his book, Agile Software Development).

In this activity we mark out three sections on a whiteboard:

"DO" is where we put the things we should do or keep doing. These are all the things that make us more efficient. We don't list all the good practices, just the ones we are currently learning and need to do consciously until they become habits. An example could be "Talk to the customer about the requirements before we start implementing them."

"DON'T" is the section with all things that we do that lead us to step on the usual landmines. These are the fires we are currently fighting and the things we want to stop doing. Tom Peters calls this a "To Don't List". Example: "Don't over-architect the application to support speculative future requirements".

The third section, "TRY", is the section for new things we want to try out and evaluate. Example: "Try adding automated acceptance tests with FIT".

Everybody contributes. As the team names items we cluster them so similar items are grouped together. Then the next step is selecting the most important ones as the focus for the next iteration.

We normally recommend a simple process called "dot voting" where each team member puts a dot in each area to mark what they rate as the most important item. This is usually enough to make it clear what the most important issues are. We post these on a big visible "Do / Don't / Try" chart in the team working area. They give focus to the things we should do consciously during the next iteration. Then, as part of the next iteration's retrospective, we look back and evaluate the results.
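The tallying step of dot voting can be sketched in a few lines. This is a minimal illustration, not a tool we use; the candidate items and the individual votes are made up for the example.

```python
from collections import Counter

# Hypothetical dots placed by three team members on the
# Do / Don't / Try board; each inner list is one member's votes.
votes = [
    ["DO: talk to the customer before implementing",
     "TRY: automated acceptance tests with FIT"],
    ["DO: talk to the customer before implementing"],
    ["TRY: automated acceptance tests with FIT",
     "DO: talk to the customer before implementing"],
]

# Count all dots and pick the top items as the focus
# for the next iteration.
tally = Counter(dot for member in votes for dot in member)
focus = [item for item, count in tally.most_common(2)]
print(focus)
```

In practice the counting happens with a glance at the whiteboard; the point is only that the items with the most dots become the visible focus for the iteration.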

We keep the things that help us for as long as they keep helping us.  

There is a very important point that Jan Elbæk mentioned in a workshop on agile development with Scrum that we hosted recently. When we name the critical issues, most often the best action is to do nothing! Merely being aware of the bad behaviour is often enough to stop it. Try this light-touch approach first. If we give in to the urge to create bureaucratic control systems to steer our way around the problem in the future, there is a big risk that the bureaucracy stays after the specific problem is solved. This is the recipe for building enterprisey organisations.

That is not our goal, however. The goal is to solve the problems at hand, not to load the process with bureaucracy.

Comments

Diana Larsen said:

Martin, I enjoyed this post and agree with you that continuous improvement is at the heart of all effective retrospectives. I'll add your Do Don't Try activity to the ones I keep on my website. I worry a bit about stopping at naming the critical issues. I've not seen many team situations in which simply recognizing ineffective behavior is enough. You don't need "bureaucratic control mechanisms." Just an idea about how to proceed and someone(s) on the team to commit to taking action. For instance, Bas Vodde has written about "now actions" that work well for his teams.

www.scrumalliance.org/.../61-plan-of-action

I look for the thing team members have the most energy to tackle, identify the now action(s), ask who will commit to it, and make a task card to take into iteration planning. Also lightweight, and more likely to result in improvement.

Avoiding bureaucracy is certainly a worthy goal, and stopping at naming may not create action. I recommend you find an activity in between the two to sustain your team's Kaizen.

Thanks for the post.

October 23, 2010 4:48
Luke Hohmann said:

Can you share a picture or sketch of this technique? It sounds really compelling, but I'm not sure how I'd organize it in a team. Thanks.

March 25, 2011 7:19

About Martin Jul

Building better software faster is Martin's mission. He is a partner in Ative, and helps development teams implement lean/agile software development and improve their craftsmanship, teaching hands-on development practices such as iterative design and test-first. He is known to be a hardliner on quality and likes to get things done-done.
© Ative Consulting ApS