Ative at Work

Agile software development

November 2006 - Posts

  • Myths about SCRUM - "it's a daily stand-up meeting"

    Recently I have visited three different project teams that all claimed to be doing Scrum.

    When I quizzed them, it turned out that what they did was just a short daily status meeting.

    Now, if that was all there is to Scrum, we could have saved the money we spent on ScrumMaster certifications for everybody in Ative. It is a good start, but there is much more to Scrum.

    The daily status meeting - colloquially called "The Daily Scrum" - is a key activity, however.

    Every project - Scrum or not - can benefit from it, since it focuses the team on the situation and raises awareness of the problems that need to be addressed. For truly dysfunctional teams, just the simple fact that it provides a few minutes where everybody takes off their headphones to talk to each other about how to achieve their goal is a big boost to the project.

    Three simple rules apply to the Scrum meeting that distinguish it from old-fashioned status meetings. Its short agenda is simply that every team member must present the answers to the following three questions:

    1. What did I do yesterday?
    2. What will I do today?
    3. What is blocking me from working efficiently towards our sprint goal?

    That's all. Timebox it to around two minutes per person. Use an hourglass ("minute glass") if necessary so you don't fall into the trap where some old-school project manager type does all the talking and concludes by asking if anybody has anything to add.

    At Maersk Data Defence we also used two additional questions from Craig Larman's "Agile & Iterative Development":

    1. Do you have any new items to add to the Sprint backlog?
    2. Have you learned or decided anything new of relevance to the team (technical, tools, requirements, smarter ways of working...)?

    The essence of Scrum is that of a self-directing, self-organising team. This means that the meeting is not about reporting status to a manager; the goal is for the team to self-organise around achieving its goal and removing any obstacles in the way.

  • Great Quote on Testing

    While I was paging through the great Uncle Bob presentation "The Prime Directive: Don't Be Blocked" I noticed a great quote on testing from Kent Beck:

     "You only need to test the things you want to work."


    Posted Nov 19, 2006, 12:55 by Martin Jul with no comments
  • Saved by the ITimestampProvider

    If you are doing any kind of timestamping on your data, testability requires you to get the timestamps from a mockable provider rather than using unpredictable, and thus untestable, values from the system clock.

    For this purpose we usually inject an ITimestampProvider with methods to get the UTC time into any classes that need it.
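    The pattern can be sketched roughly as follows. The original project was .NET, so this is a minimal Python sketch of the same idea, and all of the names (`TimestampProvider`, `FixedTimestampProvider`, `Order`) are hypothetical illustrations, not the project's actual code:

```python
from abc import ABC, abstractmethod
from datetime import datetime, timezone


class TimestampProvider(ABC):
    """Abstract source of UTC timestamps (a rough analogue of ITimestampProvider)."""

    @abstractmethod
    def utc_now(self) -> datetime: ...


class SystemTimestampProvider(TimestampProvider):
    """Production implementation: reads the system clock."""

    def utc_now(self) -> datetime:
        return datetime.now(timezone.utc)


class FixedTimestampProvider(TimestampProvider):
    """Test double: always returns a known, predictable timestamp."""

    def __init__(self, fixed: datetime):
        self._fixed = fixed

    def utc_now(self) -> datetime:
        return self._fixed


class Order:
    """Hypothetical domain class: the provider is injected, never the clock itself."""

    def __init__(self, clock: TimestampProvider):
        self.created_at = clock.utc_now()


# In a test we inject the fake, so timestamps are deterministic:
fixed = datetime(2006, 11, 19, 12, 55, tzinfo=timezone.utc)
order = Order(FixedTimestampProvider(fixed))
assert order.created_at == fixed
```

    In production the container wires in the system-clock implementation; in tests the fake makes every timestamp-dependent assertion deterministic.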

    Earlier we worked on a military project that did not do this. Instead it relied on a combination of the application and the database assigning timestamps.

    Unfortunately, SQL Server's DATETIME type does not offer the same precision as System.DateTime in .NET, so timestamps were truncated to a lower precision when written, and an object read back from the database would not equal the object that was written.

    This is bad for testing. The OR-mapping code in the system was legacy code (meaning: code without tests), so they didn't discover this until very late in the project. At that point, fixing it incurred a great cost.

    On our current project we are using an ITimestampProvider and assigning timestamps in the application. The datastore is used for just that - storing data.

    A side effect of this is that when we discovered some failing tests in the persistence layer due to timestamps being truncated by the database, we only had to modify the application in one place: have the timestamp provider return values that match the precision of the datastore (we don't need more than second precision anyway).
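    The one-place fix might look something like this sketch (Python again, with hypothetical names; the real project was .NET): a decorating provider truncates every timestamp to whole seconds, so values round-trip through a seconds-precision datastore unchanged.

```python
from datetime import datetime, timezone


def truncate_to_seconds(ts: datetime) -> datetime:
    """Drop sub-second precision so the value matches what a
    seconds-precision datastore will hand back on read."""
    return ts.replace(microsecond=0)


class SecondsPrecisionProvider:
    """Wraps any timestamp source and truncates its output, so every
    timestamp in the application already has datastore precision."""

    def __init__(self, inner):
        self._inner = inner  # anything with a utc_now() method

    def utc_now(self) -> datetime:
        return truncate_to_seconds(self._inner.utc_now())


class SystemClock:
    """Plain system-clock source (hypothetical name)."""

    def utc_now(self) -> datetime:
        return datetime.now(timezone.utc)


clock = SecondsPrecisionProvider(SystemClock())
ts = clock.utc_now()
assert ts.microsecond == 0  # safe to write and read back unchanged
```

    Because everything gets its timestamps from the provider, the truncation rule lives in exactly one place instead of being scattered across the persistence layer.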

    In effect, the requirement to make the code testable forced us to introduce better, more loosely coupled design which in turn saved us a lot of work downstream.

    In this way, test-driven development is not just about line-by-line code quality; it also drives the system towards a higher-quality design.

  • Code Reviews and the Developer Handbook

    We’re six months into a project. The code base is 53,000 statements, 2/3 of which are tests.

    We have been working according to a set of standards: test-driven development with unit and integration tests, model-view-controller architecture, NHibernate OR-mapping for the persistence layer, and the Castle container for weaving it all together.

    Then an external consultant shows up. He has been hired to check the quality of the work to ensure that a “normal” developer can take the code and work with it inside a “reasonable” amount of time.

    I convince him to look at a few architecture diagrams before he starts looking at the code. I try to show him how to find the implementation code corresponding to the tests, but he is not interested in the tests. His goal is to ensure that the code is properly documented – and in his world this means the implementation only, not the tests.

    Instead he starts at the top with one of the client applications, a web-based intranet for editing data in the system. He starts browsing the code – looking through the container and some of the controller and model classes that it uses.

    A few weeks later we get a report.

    There should be more comments in the code (our guideline is to write simple, readable code rather than spaghetti with comments).

    Oh, and member variables should be prefixed with an m, value parameters with a v, and reference parameters with an r. And the type of a variable must be derivable from its name.

    He also raises a number of concerns about the use of NHibernate and Castle. Basically, he wants us to write documentation for these as well (never mind that they come with documented source, books, and online articles).

    In essence, the audit measures the project against his personal idea of what constitutes a good project; he is testing the documentation quality against an arbitrary set of requirements.

    We need to devote a lot of effort to convincing him.

    So, note to self – a pragmatic solution for documentation and auditing:

    • Write a simple developers handbook with coding and documentation guidelines etc. and have the customer sign it off. 
    • Involve the auditor early to check the guidelines. 
    • Use the handbook as the guideline for audits. 
    • In any event, have the auditor provide a set of “functional requirements” for the documentation, e.g. “it should be possible to easily find out what a method is doing”, rather than arcane rules like “there should be two lines of comments for every branch statement”. 
    • Create a HOWTO document for each of these requirements and add it to the developer handbook: for example, “The requirements are captured in integration tests using the Selenium tool; they are named so and so.” etc. 

    Update Nov. 6, 2006: Please note that I'm not advocating a heavyweight developer's handbook, just a small document that shows the file structure, naming conventions, how to get the source from version control, how to build and run the tests, etc. I've spent time on a project with a big guideline on whether or not to put curly braces on a separate line, and it's a waste of time. For code layout I recommend just writing readable code that looks like the rest of the code in the system and relying on the default formatting in the IDE.

© Ative Consulting ApS