Heh, if you ask a developer how he's doing, he's always 90 percent done :)
This isn't even out of ill will; developers simply tend to focus on the features they've created, not the bugs they've introduced. The fact that they are deeply immersed in their own code doesn't help them make accurate estimates.
As you wrote, the only way to measure progress is by (automated) testing. Visual Studio Team Suite (particularly Team Foundation Server) provides good tools for this ;)
The Ative guys have a blog with a lot of insightful development project gotchas. This one, in particular,...
This is especially true when building domain models on top of mainframe systems (i.e. with a PL/I mapping layer). Don't be tempted to let the data-centric types of this ancient language propagate into your domain model! It is the first indication that you are in fact synthesizing the data model and not the business domain.
Hmmm... dead code is just a question of treating your code like a house you must live in - keep it clean. Perhaps your mainframers should be sent on a "Code Hygiene 101" course. Now the real question is why mainframers insist on porting the sick and Jurassic naming conventions of the underlying database (most likely DB/2).
The tyranny of the DBAs must come to an end! ... Or is it just a perverted way of encrypting code, thus rendering an obsolete technology indispensable by making it appear more vital than it actually is?
I can't stress the importance of bullet point four enough! If you base your new system on the data model instead of the business domain, you are entering a world of trouble. In all but the most complex of systems, data storage is a simple implementation detail; don't make a big deal out of something insignificant.
Any fool can create code that is understandable to a computer, but it takes a bit more effort to make it understandable to a human. The only way of achieving this is by basing the system in the human world (i.e. the business domain) and not in the computer's world.
It is a false statement that an agile project requires an especially disciplined/skilled group of developers to succeed. This is the case no matter which methodology your project uses. However, in agile projects it is just so painfully obvious when your development team is of insufficient quality.
Poor developer quality is unfortunately an ailment that has plagued our business for too long.
The dinosaurs were killed by a huge meteor! Why did the mainframes go free?
That is so very true! In the project team I'm currently on we are wrong, very wrong indeed, but I'll be damned if we know it, even after a year. But step by small step the team begins to realize the danger of not embracing the ideas implied in the quote above.
You are absolutely right. For some reason, however, the object revolution never truly made it into the mainstream - sure, there are lots of systems built in OO languages out there, but a lot of them lack a proper domain model. It is always easy to pick on the mainframe systems out there, but a lot of newer systems lack it too. Just look at any random web application and you are most likely to find a mix of embedded SQL code in the app and a "domain model" that consists of just datasets...
Worse yet, even systems like this that do have a layered architecture most likely have the dependencies pointing the wrong way. The typical layered application has a persistence layer at its bottom, and the application is a client to this.
This is plain wrong.
I believe strongly in a strong domain model and a dependency inversion so that the application exposes an interface for the persistence layer it needs - stated in terms of the domain model. Then a persistence layer should be provided to it that matches the interface. This way it is clear that the application is the central part and the persistence service is just an implementation detail. Its primary purpose is to provide persistent storage of the domain model, not to dictate the data structures in the application.
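To make that dependency direction concrete, here is a minimal sketch in Java (the class names are invented for illustration - this shows the pattern, not any particular project): the domain layer owns the repository interface, stated in its own terms, and the persistence layer is just a plug-in that implements it.

```java
// The domain layer owns this interface, expressed in domain terms.
interface CustomerRepository {
    Customer findByName(String name);
}

// A domain entity -- no persistence concerns leak into it.
class Customer {
    final String name;
    Customer(String name) { this.name = name; }
}

// Application logic depends only on the domain-owned interface,
// never on a concrete database layer.
class GreetingService {
    private final CustomerRepository repository;
    GreetingService(CustomerRepository repository) { this.repository = repository; }
    String greet(String name) {
        Customer c = repository.findByName(name);
        return "Hello, " + c.name;
    }
}

// The persistence "layer" implements the domain's interface. Here an
// in-memory stand-in takes the place of a real database adapter.
class InMemoryCustomerRepository implements CustomerRepository {
    public Customer findByName(String name) { return new Customer(name); }
}
```

Note how the arrow points from the persistence implementation towards the domain, not the other way around - swapping the database adapter never touches the application code.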
This reminds me of a senior project manager who told me that he knew the team could not meet its next deadline, but kept it secret in order not to kill their motivation.
Apparently he thought the team would not notice that their velocity was too low or that maybe if he didn't do anything they would work faster...
Unfortunately this kind of behaviour is all too common.
Project management is all about getting the truth out there in the open - making everything transparent and working to solve the problem. Hiding the bad news and painting a sunny picture is not management. It is sabotage.
In a situation like the one above the proper response is to say, OK we are not going to make it and then MANAGE the situation - merely working overtime does not truly solve the problems. It is more like driving on the motorway in the second gear and spending more time on the road rather than switching to a higher gear.
You have to solve the problem at its root cause. If the velocity in a software project is too low it is probably because the quality of the system is too low. It could be that the architecture needs to be fixed (refactoring) or that the team is putting too much effort into coding bugs into the application. In the latter case it would probably be wise to teach the team how to stop the bugs from sneaking in, rather than letting bugs into the code base and spending a lot of effort to find and remove them later.
LOL! That story (PM lying about velocity) is funny... and tragic, because it's probably pretty common.
Working overtime is probably the least effective way of trying to meet your deadlines. In my experience it leads to hasty (and often bad) decisions made late at night and slacking on principles (such as TDD - DOH!).
Oh yeah ... and feel free to edit my first comment and add the missing words. It was written after a heated debate on whether or not unit tests provide value for money. The ignorance encountered in that discussion bugged my mind enough to deprive me of my basic writing skills.
Actually there is a bonus anti-pattern in that statement. Why write System.Exception? Either you are writing fully qualified class names everywhere, or you have created your own System namespace (i.e. using a subfolder in your project).
Whether you fancy obscuring your code with endless namespace referencing or creating redundant namespaces, you should consider a different career than software developer... How about full-time Quake gamer?
I guess there is a reason why frameworks like FIT exist and should be available for developers to run.
I guess the above recommendation only applies if you are capable of making a loosely coupled design. Judging from what I have seen so far, this simple principle is very, very hard to uphold.
Good design always requires a lot of discipline.
For example, one time I was working with a guy on my team who said that it would be very easy to fix a particular problem by inserting a small hack in a certain module. And he was going on and on about how to do it. I had to stop him and tell him that no matter how much he explained it, all I heard was "hack".
Basically unless there is a lot of discipline many developers are willing to forget the principle of separation of concerns and the single responsibility principle. And after that optimization requires much more work since you have to optimize everywhere rather than locally in a few central components. But on the other hand we have to set the baseline at the level of proper craftsmanship - if you have a bunch of legacy, spaghetti-coding cowboys on your team the first priority is not performance - it is getting to the state of a working application with much more focus on the basics.
Even when we were working on what was the biggest .NET-project in Europe we only spent around 2 weeks performance tuning it out of multiple years of development.
In principle I totally agree that you should address the design issues before you address the performance problems. However...
On the massive project I am currently working on, there is a vast codebase with numerous security and performance problems. None of them are very hard to fix - theoretically. However, the codebase is so strongly coupled that it is impossible to address them locally without a major and expensive refactoring effort.
So all I'm saying is this: Be realistic up front! If your organisation does not have the in-house skills to produce professional software, but would like to get a piece of the pie anyway, you had better start addressing performance issues as soon as possible, because it is no trivial task later in the process.
I think we agree - but we are talking about two separate things:
First, there is the "design for performance" trap that leads people to complex distributed designs that actually make everything slower because of the network latency etc. This is bad bad bad. Design for the simple case, then refactor as necessary.
Then, there is performance optimization on the working application. This is something to measure continuously (this means running a test with some production-sized data sets on each increment, not just a minimal test set). Then, as soon as someone puts code into the application that pushes it away from its performance target - fix it. Since we are using small increments there is less code to optimize than if we cross our fingers and hope that it will go away later. Moore's law is helpful, but since many projects specify their hardware upfront they don't even get any benefit from it. So the goal is to get to a known good state - including performance-wise - and stay there.
And as you say - since a lot of organisations lack these basic skills there is plenty of work for us in the coming years teaching them to write better well-performing software faster.
How did we get here.... Evolution baby, Not Revolution :-)
Good quote - Thanks!
Ouch, that's a difficult situation to handle, since that external consultant is probably highly trusted by the customer. In any case, there's nothing you can do except what it seems like you ended up doing: educate the guy.
Regarding code comments, I always like to hit people in the head with Fowler's Refactoring, where you can read (among many other gems) that code comments are usually a code smell.
Regarding the naming conventions, you can always guide people to the official Microsoft naming guidelines (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpgenref/html/cpconnamingguidelines.asp). In addition, Code Analysis will issue a warning if you postfix variable names with type names, so that advice goes plainly against the official .NET guidelines. We also don't use prefixing anymore.
So, I'm not disagreeing with you at all, but just wanted to point out that for a lot of the issues you mention, you can just point people to the official .NET development guidelines from Microsoft :)
Regarding member variables, I have more to say at http://blogs.msdn.com/ploeh/archive/2006/03/08/CodingStyles.aspx, but that's just my personal style :)
In fact, we use the Framework Design Guidelines as a foundation, along with the static code analysis that you mention. FxCop is truly amazing - just make sure to enable it from day one, since it will be hard work to fix the issues later. Furthermore, it is a good idea to take a look at some of the performance rules and turn them off as necessary - while they make for better-performing code they are sometimes on a bit of a collision course with simplicity - an example being the rule to prefer returning an enumeration or collection rather than an array.
One of the best articles I've seen about code commenting is by Tim Ottinger of ObjectMentor. His basic point is that a comment is an apology for not writing something that is clear and easy to understand. It is called Apologies in Code and is well worth reading - including the debate in the comments: http://butunclebob.com/ArticleS.TimOttinger.ApologizeIncode
What a perfect timing for this article.
I've been on a project for a little more than 12 months now. With reference to both Ottinger and Fowler, I've been trying to tell my colleagues, that commenting is bad bad bad! They didn't listen. They wanted comments and it was made law, so every public member had to have comments. We even turned on the feature in VS.Net that makes a build fail, if the comments are not in there.
So yesterday I had to access some older part of the codebase. The naming wasn't very good... in fact one could be so bold as to call it cryptic. "But hey," I thought, "we have all these nice comments that will explain to me what these cryptic statements mean". But to my (not so great) surprise, there were no comments on this code and the meaning of the code was VERY hard to learn.
What would have been an easy task turned into a lengthy "learning" session. Of course the badly named, non-commented code was also very bad in its internal structure, so it was very hard to grasp what the original developer had wanted his code to do.
This experience has made me dislike the idea of code comments even more than I used to. It should be obvious to everyone that comments should NEVER take precedence over good naming and structure of the code. It is possible to forget putting in the comments, but you can never forget writing the code; the code HAS to be there, the comments don't, so one should focus on making the mandatory parts as good and readable as possible.
Actually, with bad code it is often a good exercise to just go through a refactoring - renaming stuff and extracting methods or introducing Method Objects really goes a long way.
With the particular reviewer I mentioned I actually went through this on a piece of code that he particularly fancied - it had a bunch of nested ifs and switches and what have you, and loads of comments placed in boxes of * stars. His point was that even though it was complex, it was easy to understand.
After refactoring it to simple code with no comments it became obvious that the code did not handle some of the border cases correctly (the unit tests did not cover these completely, so the code had multiple problems, not just readability).
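For illustration, here is a tiny made-up example (in Java; the shipping rule and numbers are invented) of that kind of refactoring: nested conditionals behind a comment box become small, well-named methods, and the comments simply evaporate because the names now carry the information.

```java
class ShippingBefore {
    /****************************************
     * Calculate the shipping cost.         *
     * NB: express is more expensive, and   *
     * heavy parcels (>10 kg) cost extra!   *
     ****************************************/
    static int cost(int weight, boolean express) {
        int c;
        if (express) {
            if (weight > 10) { c = 50; } else { c = 30; }
        } else {
            if (weight > 10) { c = 20; } else { c = 10; }
        }
        return c;
    }
}

// After renaming and extract-method, the comment box is redundant:
class ShippingAfter {
    static int cost(int weightKg, boolean express) {
        return express ? expressRate(weightKg) : standardRate(weightKg);
    }
    private static boolean isHeavy(int weightKg) { return weightKg > 10; }
    private static int expressRate(int weightKg)  { return isHeavy(weightKg) ? 50 : 30; }
    private static int standardRate(int weightKg) { return isHeavy(weightKg) ? 20 : 10; }
}
```

The two versions compute the same result - the refactoring only changes the structure, which is exactly what makes border-case bugs visible.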
To sum it up, the conclusion is all in favour of Tim Ottinger. Code comments ARE apologies.
Yeah ... I totally agree. Both with Ottinger and you. The problem is that we're using a single checkout Source Control System that is unable to merge code. I could refactor the bad code, but that would have to be done after hours when no one else was working on the project. It can get extremely complicated to refactor, if you don't have access to all the code (because it is checked out by others), that needs to be changed.
Usually I just change the bad code I encounter right away, as long as the changes aren't too extensive, but sometimes the changes cascade into a huge change that is not readily done. You can really come a long way with "extract method" and some creative renaming!
Jan, have a look at the excellent presentation "The Prime Directive. Don't Be Blocked" (http://butunclebob.com/files/SDWest2006/ThePrimeDirective.ppt) by Uncle Bob. He argues that in the case of a blocking version control system like that, you should use a non-blocking version control system (e.g. Subversion) inside each sprint and only do a check-in to the other "official" source control system periodically. This way, you're not blocked but you're still compliant with corporate policy (sort of...).
Comments (and documentation in general) are not bad, and I want to oppose the statement that they are. IIRC, what Fowler means (and what I personally mean) is that *inline* comments are an excuse for poor naming. That doesn't mean that you shouldn't fill out meaningful XML comments for your public API - and that concerns the hard part: writing out all the stuff that can't be automated (by tools such as GhostDoc). Be honest now: Can you really get by with just the naming of types and members in the .NET Framework? If you are like me, that's probably enough more than 50% of the time, but still, there are times when you'll have to resort to the documentation. What do you look for in the documentation? I know I look for a brief overview that tells me the purpose of the code, and then I look for an example. It's not always that a proper name can completely capture the purpose of the code, so documentation (i.e. XML comments) still has its place.
Regarding code analysis: If you encounter a rule where you disagree, you can obviously turn it off, but before doing that, be sure to read the documentation for that rule carefully to ensure that you fully understand the reasoning behind it. I find the vast majority of the rules to be perfectly reasonable, once you understand their motivation. If you are still convinced that the rule is counter-productive, I'd strongly suggest that you provide this feedback to the FxCop team (e.g. via http://blogs.msdn.com/fxcop/). I've done this quite a few times, and have always found the team very responsive to feedback, and I know for a fact that if you have your arguments in place, you can convince them to change or even drop a rule - just be prepared that you may have to convince people like Krzysztof Cwalina :)
Concerning non-blocking source control, Team Foundation Server has that :)
Mark, that's an excellent point. The .NET Framework documentation is definitely useful. So, I have to give you that.
On the other hand - not a lot of people develop frameworks. Many projects think they do, but most of the time they are really just building an application: all the developers using it will have access to its source and its unit tests, so they will have plenty of documentation available at no extra cost to the project.
My experience with code comments is that when they are mandatory you end up with comments like "Get the First Name" on a property called FirstName. So, most of the comments add no value anyway.
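A small sketch of that difference (in Java/Javadoc rather than C#/XML comments, but the point is the same; the classes and the "tax authority" rule are invented for illustration):

```java
// Under a mandatory-comment policy you typically get pure noise:
class CustomerNoise {
    private final String firstName = "Ada";

    /** Gets the first name. */
    // ...which restates the member name -- zero added information.
    String getFirstName() { return firstName; }
}

// A doc comment earns its keep only when it adds what the name cannot:
class CustomerSignal {
    private final String firstName = "Ada";

    /**
     * First name as registered with the tax authority; may differ from the
     * display name the customer chose in their profile. (An invented rule,
     * but this is the kind of fact a name alone cannot carry.)
     */
    String getFirstName() { return firstName; }
}
```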
We once worked on a big RUP project (jokingly called document-driven documentation since we had more than 10,000 pages of design documents with nicely coloured UML diagrams). When we did the integration we hardly ever touched the docs. The number one tool was looking at the assemblies in "Lutz" (Reflector - http://www.aisto.com/roeder/dotnet/), next looking at the source and sometimes even the unit tests, but hardly ever the external documentation. It would be out of sync, incorrect, and since it was written upfront when nobody knew what was needed, it had pages of the simple stuff like API and interface descriptions (which were available in the code anyway) and class diagrams (also available in Lutz) and other stuff, but never the stuff we needed to know. So the net value of the huge investment that was made documenting the system was close to zero. And it even pretended to have a "framework" at its base.
I agree that third-party frameworks should come with documentation. But in application development I have yet to experience that XML comments are worth the time spent writing them. As Martin Jul puts it, people have access to all the code and it has all the information they need. If it's properly structured it's even easily readable, so why bother with the extra work of translating the code to prose?
On my current project I believe most methods have XML comments. Still, I mostly see programmers trying to understand the code using Lutz Roeder's Reflector (the application has quite a few assemblies, which is why this is easier than using VS.NET). Nobody uses the XML comments, and the reason is that the XML comments are either incorrect or GhostDoc-style. I think this is the common case on most projects.
Regarding inline comments I completely agree with Tim Ottinger: they are apologies. The best example I can think of is from my current project where an inline comment reads: "The following code is error prone. I made it myself". The programmer even wrote his initials next to the comment :)
I can agree with you for the most part - however, code comments are useful, if done correctly. There are three types of comments:
- "How does the code do what it does" (BAD: Sorry, but I only write encrypted code).
- "What does the code do" (BAD: Sorry, but the comment must be updated every time this method is changed).
- "What do I intend the code to do" (GOOD!)
The last type of comment carries information that can never be captured in written code: a high-level abstraction of what the original developer wanted this method to do.
Yes, this information can to some degree be captured in unit tests, but face it: you would probably have to assemble this information from several tests, AND it will only provide this information IF the unit tests are written test-first (TDD).
In fact most unit tests would benefit profoundly if the author would write a short comment on what he INTENDS to test.
However, it is hard to write this type of comment, as Steve McConnell has noted.
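A small made-up sketch (in Java; the rounding rule itself is invented) of what such an intent comment can carry: the choice of HALF_UP here is a business decision, and nothing in the arithmetic itself reveals *why* it was chosen.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

class InvoiceRounding {
    // Intent: invoice totals round HALF_UP to whole cents.
    // This is a (hypothetical) business rule, not a technical necessity --
    // the code shows WHAT happens, but only this comment preserves the
    // intent, which is exactly the information code cannot carry.
    static BigDecimal roundTotal(BigDecimal total) {
        return total.setScale(2, RoundingMode.HALF_UP);
    }
}
```

A "what" comment here ("rounds to two decimals") would rot the first time the rule changes; the intent comment survives refactorings of the arithmetic.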
Well, what all the Martins (including Fowler, I'd guess) seem to be saying is that they quite prefer meaningful names instead of comments, and I can only agree. GhostDoc-style comments are worthless, but what I was trying to say is that sometimes, you do simple things for complex reasons (like the one you describe at http://community.ative.dk/blogs/ative/archive/2006/09/29/Migration-_2D00_-Bug_2D00_by_2D00_Bug-Compatibility.aspx), and a meaningful name may just be too cumbersome. In this case, a code comment explaining what the code does, and why it's there, may save other developers some time down the road.
Martin J: About being blocked... what do you do when you're blocked by old-fashioned politics? I cannot use another source control system. If I do, I get fired. But that would probably be a good thing :-). I do agree about "not being blocked"; I'm a frequent reader of Bob's blog.
With regards to XML comments, my point is that they shouldn't be mandatory. If you cannot make the purpose clear through good naming and well-structured code, you SHOULD use a comment. However, in my experience, those cases are pretty rare. I often refactor even simple methods to make their purpose even more clear, so you can cast a quick glance at them and get the idea right away.
It is possible to refactor your code to a point where it is almost as readable as plain text; although that would probably be a bit over the top. I just like to rely on looking at the actual code instead of hoping that my team mates remembered to update their comments. And of course there is .NET Reflector; IMO one of the best developer tools ever made!
About being blocked and company politics: Without knowing if Martin Jul is referring to Uncle Bob when talking about being blocked, I think being blocked is a reminder to us as developers/architects as much as to companies.
If you are being blocked because old-fashioned politics mandate that you use a defective source control system, your company has a problem (and believe me - they know it if this is the case).
If you are being blocked because you must use a source control system that isn't state of the art, you have an attitude problem.
If you are being blocked because your company isn't state of the art on all tools (e.g. by using VSS instead of Subversion), you would be blocked in 95% of all companies!
Of course you should let your company know that there are better products available, but if you can't work on anything but the best of the best, you may have earned yourself a spot on the Novell NetWare code-guru team.
Hehe ... this discussion wasn't really about Source Control Systems. It was about the way we work, and how that can sometimes be a blocking mechanism. If I cannot access all the code that should be changed due to a refactoring, it's very hard to do that particular refactoring.
This means that in the case I describe, I cannot refactor the badly structured pieces of code to a point where no comments are needed... I guess comments ARE a good thing on this project. So one could conclude that comments are good if you follow a really bad work process.
I still like to rely on well-structured code with meaningful naming... as an old colleague used to say: "Naming is everything".
Along the same lines: In my current project the system time is injected at startup. This is a huge advantage during both unit and regression testing, since it makes it very easy to simulate the "passage of time". This refactoring has saved the project vast amounts of testing complexity, and thus expense.
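The idea can be sketched like this in Java with java.time.Clock (the Session class and its timeout rule are invented for illustration): inject the clock instead of calling the system time directly, and tests can move time forward without waiting.

```java
import java.time.Clock;
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneOffset;

// Hypothetical session-expiry rule. The current time is injected as a
// Clock rather than read via Instant.now(), so tests control time fully.
class Session {
    private final Instant createdAt;

    Session(Clock clock) {
        this.createdAt = clock.instant();
    }

    // Expired when more than `timeout` has passed according to `clock`.
    boolean isExpired(Clock clock, Duration timeout) {
        return Duration.between(createdAt, clock.instant()).compareTo(timeout) > 0;
    }
}
```

In a test, `Clock.fixed(...)` pins the start time and `Clock.offset(start, Duration.ofHours(1))` simulates an hour passing instantly - no Thread.sleep, no flaky timing.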
> In effect, the requirement to make the code testable forced us to introduce better, more loosely coupled design which in turn saved us a lot of work downstream.
People often ask me why we need interfaces and loosely coupled classes. When I tell them "it's because that's the way software ends up when you need to test stuff", most look at me in disbelief. They probably thought it was just because I wanted to be fancy. ;) Test-first leads to fancy TESTABLE & FLEXIBLE software.
I love it when developers come back after trying this reluctantly for two months and say "ooh my, this is cool". The joy in their eyes makes up for the endless discussions and frustration in trying to explain to them why TDD is the way to go.
Jan! Stop whining and go back to work!
You probably have a form to fill in and send for the next database change. And after that you have a build to perform. That shouldn't take more than 2-3 days!
This post, like many of your other agile posts, made me wonder about a thing: How would you apply these organisational changes when the software project deals with a fixed price project in response to a large public tender?
In all my career, this has been the most common form of software project, and for large-scale projects, I predict that this will be the case for many more years.
The challenge arises from what I perceive as a contradiction between this project model and the agile/lean methodology of putting customer value in focus. It seems that all the points you make in this article hinge on a mindset of addressing customer value first and foremost. It also presupposes that value can somehow be ranked, so that you can prioritize the most valuable features highest.
If you ask a customer behind a public tender about value, you will quickly discover that features have a binary value: If it's part of the tender's requirements, its value is positive infinity (the project delivery cannot be accepted without this feature). If it's not a requirement, in terms of this kind of project, its value is zero. (I'm simplifying a bit here, but I hope you get the point.)
The typical response I get from other agilists is that the customer needs to be re-educated, but that is a bit of an arrogant attitude, and even if it's right (which it may be), it's not very realistic, if nothing else, then for legal reasons.
Now, you don't have to convince ME that agile methods may be better, and it's not even that I can't convince development organizations that agile principles may be beneficial - it's just that it's so darn hard to apply these principles when you have a completely unyielding customer at the other end of the table.
BTW: On a completely different matter, can you please turn on full postings in your feed again? A couple of posts ago, your feed changed to contain only excerpts, which forces me to go to your site every time I need to read the full article. It kind of defeats the purpose of syndication...
-> Mark: I'm pretty sure you agree with me, but I just felt like clarifying.
In my opinion, value (requirements) most certainly CAN be ranked. Imagine that you're building a car for your customer:
1. Requirement: Brakes.
2. Requirement: Must have top speed at 200mph.
Now I would argue that req. 1 has a higher value than req. 2. It would be intolerably dangerous to have a car going 200 mph without brakes. The reverse is not nearly as dangerous. In a way you can say that to implement req. 2 you need to have req. 1 in place, which is some kind of ranking. The car would work safely without req. 2, but would never be allowed to hit the streets without req. 1.
This might not be the best of examples, but I hope you get the idea. I'm not saying that it's easy to convince a customer about prioritizing requirements in a real world situation, I'm just saying that it's a very good practice.
I really don't believe in the old ideas of "Every requirement should be in place, in time and at the exact right quality". Oh well .. maybe if you only have 2 simple requirements, it would probably be possible :-). We SHOULD use the knowledge gained throughout the development process to adjust what it is we're doing, therefore it's is VERY practical to prioritize.
BTW: Excellent post. I haven't gotten around to reading those books yet, but they most certainly are on my to-read list!
In fact, I find your example quite reasonable as a metaphor, and I do agree that requirements often CAN be ranked. However, that doesn't mean that the customer is willing to do so.
This may not even stem from bad will. Large, fixed-price projects are often a result of specification by committee, and consensus about ranking can be virtually impossible to achieve (even reaching a consensus about the requirements in the first place is often a difficult process).
You could argue that the whole concept of large fixed-price projects is flawed, and that customers should adopt an agile mindset as well, but that's probably not going to happen overnight.
It seems to me that if a development organization wants to be agile under such circumstances, it must abandon the idea of the customer as an active, cooperating participant. Since customer value is still an indispensable component of the agile mindset, one or more persons in the development organization must assume the role of the customer's advocate internally in the organization. Interestingly enough, this corresponds very closely to MSF's Product Manager role.
"agile under fixed-price contracts" is a great subject for a bigger post. Thanks. Maybe we will get back to that.
Anyway, for now, there is a useful technique called "exchange requests" that allows the customer to be somewhat agile inside a fixed-price contract, namely by developing the application iteratively and allowing the customer to exchange backlog items for new backlog items of the same size. This way, the customer gets to change priorities for the stuff that has not been implemented yet - even put in completely new features to replace all the low-priority stuff at the bottom of the pile.
You will find an article about it here: http://www.nayima.be/download/agilefixedprice.pdf
I agree with you Mark. The customer might not be willing to work in an agile way and he might also be adamant on doing a fixed price project.
BUT, I doubt that we can deny the "fact" that requirements DO change and also that requirement SHOULD change. I think that it is our job, as Software Professionals to tell our customers this. If we can agree with our customers, that these are indeed facts, then we can begin to talk about how to handle them. Agile methods are all about handling these two facts.
If the customer won't agree, then at least we told him that the project is in danger long before the problems start appearing. But come on: who, in their right mind, would say that they know EXACTLY (every requirement) what their business needs for the next couple of years? Agile methods will, if applied correctly, give the client exactly what they need, but without the upfront knowledge.
I think the agilists have a good case when trying to convince customers not to do the big upfront requirements analysis. In most cases some compromise can be found.
I really do think that the idea of "fixed-price projects" is indeed flawed, because you need divine insight to be able to get everything right upfront. And you cannot calculate the price of an unknown amount of work. "Price-ranged" projects, however, are very compatible with agile methods. In my experience, it's not that hard to convince the customer to go with a price estimate instead of a fixed price, especially when they have the option to stop the project whenever they want and still get real value for their money.
The "fact" that requirements do and should change over time are as fundamental to me as Newtons 3rd. :-)
This has been quite an interesting discussion, which has prompted me to think a bit about these things, as well as reaching some interim conclusions:
1. Large, fixed price projects are not going to go away any time soon, particularly from public customers in the EU, since it's the LAW that even moderately sized projects must invite tenders.
2. The whole agile principle about actively involving the customer suddenly strikes me as constraining the development process model to enterprise development only. ISVs have no (direct) customer with whom they can interact about the next product milestone. Again, the most apparent solution is to include a customer advocate, such as the Product Manager role from MSF.
I once attended a seminar about agile development held by Craig Larman. He said something that really sounded good to me. Instead of trying to get the customer to rank the features, ask them to put a business value on the features. What is feature X worth to the customer in terms of money? I haven't tried it, but it might make more sense to the customer and maybe even give them some insight into how hard it is to estimate anything. So, when the customer asks, 'How long would it take to build X?', throw it back at them and start by asking, 'What is the business value for your company?'
There are well-documented methods for doing fixed-price projects in an agile way. I cannot find the reference right now, but it shouldn't be hard to find. The Scrum method suggests that we calculate the total cost of all deliveries (weekly, monthly or whatever is agreed upon), and give that as the price quote on fixed-price projects.
But it also says that the project can be terminated prematurely when the client deems that he has what he needs. This is of course possible due to the agile dictum of "Deliver working software often", which means that each deliverable is a fully functional subset of all the defined features for the project.
Therefore I think it IS very possible to do agile pitches on big projects.
With regard to ISVs going agile, and the customer role, this is also well defined within the agile world. Jim Coplien has written an excellent book called "Organizational Patterns of Agile Software Development", which has a pattern called "Customer Proxy" that deals with this situation. I'm guessing it is much the same thing as the Product Manager of the MSF.
I will agree that it requires some amount of work and dialogue to get this working, but I'm confident that it is possible to do agile projects with most (if not all) kinds of customers.
Here's a good article that addresses some of the issues presented in this discussion. http://www.agilejournal.com/content/view/211/33/
This is Tim Ottinger from Object Mentor (quoted above, thanks Martin Jul!).
I think that it was kind (and appropriate) of you not to mention the company that consultant came from. We actively recommend not using warts or encodings, writing simple code with meaningful names (including for explanatory variables and functions), and using tests to explain code.
I think that the industry is turning this way as a general rule, with large and stodgy companies typically maintaining the old status quo (test-last, individuals rather than pairs, comments rather than clarity). Things will turn your way eventually, or sooner if we can make it so. It may take a little while.
Can you please, PLEASE change your feed back to full articles?
Your last five articles have only been syndicated in excerpts, which makes them unreadable in an aggregator :(
I think there are a few places where comments are extremely useful, if not essential.
When writing frameworks, comments make it easy to document the product - I'm a big fan of Doxygen; see for example http://www.oofile.com.au/oofile_ref/html/ which is generated from our code.
In particular, products like Doxygen let you use comments to tag areas of code as being part of a particular group, allowing you to describe associations between code that your programming language may lack.
The most essential use of comments, which I don't think anyone has mentioned, is to document the weird behaviour of system APIs, other people's code or typical traps. Yes, these are apologies, but they are apologies for someone else's behaviour, like a "Pedestrians Beware" sign in front of a hole in the sidewalk.
For example, an inline comment might read "you may be tempted to use the GetDocument() call here but although it compiles it returns an empty document, hence our more complex use of GetHtmlDocument()".
The bottom line is - will this comment Save or Waste someone else's time?
Following the links to Ward's Wiki, I found a great example: http://c2.com/cgi/wiki?BlindAlley
A few more thoughts (literally stuff I woke up with) prompted me to come back and re-read this posting.
On the topic of "external consultants" - given his very specific mandate, it might be worth challenging management on the relative cost-benefit. Say you assigned a couple of simple programming tasks that required understanding the code base, and they just went out and hired someone to do them. Would this have been cheaper than the consultant? It would certainly be a more scientific test of the "pickupability" of your source base.
I really like the list of things for your programmer handbook; however, you missed something, or maybe didn't bother making it explicit. You say that the libraries are well documented. Does your handbook or code have links to that documentation? Do you also have locally cached copies that are going to be permanently retrievable?
What happens in a year's time when someone is doing maintenance and wants to use the version of those external libraries you were using at the time? Have you made sure all those dependencies are archived with a release?
I have made a configuration change, so you should be getting full articles for future posts.
Thanks for your comments.
As a rule of thumb I think everything that is needed to build the system should be placed under source control, so I can set up a new machine by just fetching the project from source control and get going. So, as you might have guessed, we did indeed put the external libraries (NHibernate, Castle Project, Rhino Mocks, Selenium etc.) in source control as well. I usually keep both the source package with docs and everything, and the binaries that our application needs to reference. This way it is easy to upgrade to later versions, and everything will be there in the future.
I know some people don't like the idea of putting "too much" (binaries etc) under source control but hey - for me, disk space is so much cheaper than not being able to build the system.
In fact, I even know of some people who put all their tools, including compilers, under source control (embedded systems guys), since they sometimes have to fix the software on a machine that has been sitting in a field for ten years.
Tim Ottinger of Object Mentor wrote an article about this post at
Here is some more background information on the case mentioned that I have posted as a comment to Tim's article on his blog:
Let me provide some more background on the case.
On that particular project I was a consultant (architect/tech lead) working with a customer's team. Our customer was itself another consulting company working for an enterprise organisation. The project was to replace their legacy mainframe with a .NET application. The enterprise organisation had hired a code reviewer to check the project. He came from yet another consulting company.
We used standard, but modern techniques like TDD, an inversion-of-control container (Castle Project) and an OR-mapper (NHibernate).
The consulting company we helped learned to appreciate our agile practices – initially there was the normal resistance to change but in the end it is hard to argue against results.
For example, when we introduced the OR-mapper (NHibernate) the DBAs put up a fight since myth has it that Stored Procedures always perform better. Only when we measured and showed that it did in fact perform well and that it would free up four DBAs from writing SPs for many months did the resistance die out. It is hard to argue against facts like that.
The resistance from the code reviewer was basically due to his assignment. He was asked to review the documentation. The intention was to check the documentation to make sure that the enterprise organisation could hire someone else to work on the application afterwards. The underlying agenda was to avoid supplier lock-in. This, of course, is perfectly reasonable.
The problem was that the reviewer had almost forty years of experience in application development, but none with unit testing, mocks, inversion-of-control, OR-mappers or Selenium (automated web integration testing).
From that perspective, documentation is the green comments in the source code – the XML that can be extracted to a help file. And – since comments provide the only value from this perspective – no value was assigned to properly named methods or simple, readable code with tests that document the expected behaviour.
He actually managed to find one piece of code in the system that he liked. It had comments (apologies!) - it was a complicated business rule reverse-engineered from the legacy system.
To make a point, I refactored it to its simplest extreme and removed all the comments, and at that point it became apparent that the business rule was specified in a way that made no sense.
From my perspective that proved the case for readable code.
However from the perspective of the organisation paying for the project all they saw was consultants disagreeing about the right approach.
Eventually, however, some Asian consultants were in-sourced to the project to write a bunch of data export jobs, and after we proved that we were able to get them up to speed quickly, we did not have any more interference from the code reviewer.
So the story ends well.
There will be resistance to new ways of working, but it is possible to overcome it by showing good results.
The big lesson learned was that if code reviews or other external factors are allowed to interfere there is good wisdom to be found in the principles behind the Agile Manifesto – “customer collaboration over contract negotiation”. Focus the work on proving that the method is sound by showing good results.
For non-developers direct experience is much more powerful than quarrelling over abstract concepts of “good code”, so demonstrating that their concern about vendor lock-in was addressed properly removed the obstacles.
That’s another facet of agile development.
Great conclusion. On the surface it's "only brave" to conclude that fast equals job satisfaction. It could sound like a shallow argument - and some may think that it's not a valid business argument.
If anyone tells you so - send them my way. I will share my numerous experiences with clever, engaged, enthusiastic and busy developers. And also my stories about frustrated developers from environments where they feel that they are wasting their TIME...
When people think that Lean leads to a more stressful workplace, I believe it's because they think that the efficiency of lean removes all the "rest" periods during the day.
When working, most people change their productivity up and down many times during a day. Sometimes we are super focused on a problem and delivering close to 100% of our capacity and then shortly after (typically, when a significant fragment of the problem is solved), we slow down and relax (getting coffee or surfing the internet for a short while).
This is in most cases an efficient approach to working, and it IS sustainable for long periods. All agile disciplines dictate that we must be able to work at a sustainable pace indefinitely, so if Lean dictated a process where people burned out, it would not be an agile practice.
When Lean talks about efficiency, it's not about removing the natural brakes we take, but removing the "on"-time that is not producing value for the product, as Martin describes. In other words, we only spend our capacity on work that produce value.
This leads to a work day that is no different from our "normal" workday with regard to how much time we spend at peak performance and how many breaks we take. It just makes the peak-performance times more efficient and creative, and in most cases a lot more fun. Since we only get to do important work that has high value, there's a good chance that we don't have to do boring and repetitious work (repetitious work should be automated whenever possible).
Just my 2 cents.
I counted more than 5 why's :-).
Good post though!
John Nack, the product manager of Photoshop, reports that Adobe is modernizing the dev cycle on their flagship product. Read his publicity blurb here:
A less diluted and slightly more vitamin-rich version in this interview here:
PS. The interview makes me wonder about two things:
1.) Only now does Adobe begin to balance quality and features, cutting dev time and reducing bugs. Better late than never.
2.) Is the word bugalanch actually buzzword-compliant? Is it a misspelling of Buggala (the big inland lake island in Uganda) or a typo for bug-avalanche?
What are you clever guys' take on this?
Why'ing is a useful technique. The problem I have encountered as a consultant using this is that I often haven't got enough knowledge of the domain to ask the really pertinent questions. And just blind why'ing feels like you are asking questions for the sake of asking questions. Often there are "good" reasons why things are as they are, of course - or maybe I am just too easy to placate/convince.
This is had to guess at what meant. You need a test-driven sentence writing process :-)
"Why not let the tests to create a set a know test data when it starts and then test against that?"."
"This is had to guess at what meant. You need a test-driven sentence writing process :-)"
It seems that I need that too.
What I meant was: The following sentence makes no sense.
Jesper, I like your statement-by-example technique :-).
My guess is that what was meant was:
"Why not let the tests create a set of test data when it starts and then test against that".
BTW: I used WordUnit (http://www.waterfall2006.com/beck.html), when writing this :-).
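The corrected sentence describes a genuinely useful pattern: each test creates its own known data set before running, instead of relying on whatever happens to be in a shared database. A minimal sketch of the idea (all names hypothetical):

```python
def create_test_data():
    """Build a small, known data set for the tests to run against."""
    return {"42": {"name": "ACME", "country": "DK"}}


def test_lookup_returns_known_customer():
    store = create_test_data()   # the test creates the data it needs...
    customer = store["42"]       # ...and then tests against exactly that
    assert customer["name"] == "ACME"


test_lookup_returns_known_customer()
```

Because the data is created at the start of every run, the test is repeatable on any machine and never depends on leftover state.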
It was indeed a very nice evening in the company of greatness (not only referring to our American guests). I had some nice conversations with people that I hadn't seen for quite some time, and I always enjoy reminiscing about past projects :-).
I didn't have any specific discussions about Lean, but I did enjoy Tom and Mary's comments... especially Tom quoting Edsger Wybe Dijkstra, about the horrors of the PL/1 language.
Having attended both the two-day “Implementing Lean Software Development: Practitioners Course” and the great dinner, I encourage you to join Mary and Tom's classes or speeches if you get the chance; they are very inspiring, and their knowledge about lean and agile is amazing.
A few “lessons learned”:
- Success factor for implementing Lean: look at the whole supply chain, not only development or test. Think products and not projects.
- When implementing Lean it isn’t enough just to use Just-in-time and stop-the-line. It is the people doing the actual work that have the potential to make it a success. They have the good answers and ideas.
- All queues and stacks of work are WASTE. Remove the queues, improve the cycle time and get better overall performance and quality. Value stream maps are a powerful tool to identify queues and their size.
- Without automatic tests you are asking for trouble. Don’t buy the “we don’t have time to implement automatic tests” excuse; if you wait, your problems will grow…
- And NO it is not ok to have a backlog that contains work for more than 3 iterations, including ideas for the future – get rid of the WASTE and focus on the important stuff that adds value to the customer!
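The point about queues and cycle time can be made concrete with Little's Law (average cycle time = work in progress / throughput). A small illustration with made-up numbers:

```python
def average_cycle_time(work_in_progress, throughput_per_week):
    """Little's Law: average cycle time = WIP / throughput."""
    return work_in_progress / throughput_per_week


# A team finishing 5 items per week behind a 60-item queue:
print(average_cycle_time(60, 5))   # -> 12.0 weeks from idea to delivery
# Trim the queue to 15 items and the same team responds in:
print(average_cycle_time(15, 5))   # -> 3.0 weeks
```

The team's speed is unchanged; only the queue length differs - which is exactly why value stream maps focus on finding and shrinking queues.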
At the end of the evening (after a few glasses of wine) the .NET vs. Java discussion started - then it was time to go home ;-)
Thank you all for a spectacular evening.
Meeting with old and new friends is always delightful. Mary and Tom had more than a few good views and standpoints to share with us this evening. One that I brought with me to remember and never forget was the point about spending time and energy on stakeholder analysis. Freely quoted from Mary, responding to a question about how we get management support for agility: “Learn what is important to them”. This, I would say, goes for the customer as well as our colleagues. We should take time out to listen and learn what our surroundings and our key stakeholders find important and value. This will help us prioritise and focus on what is important not only to us but also to the judges of our success.
I hope to meet you all soon, for another chat and maybe some splendid food…
Here are the book recommendations from Mary's talk:
The book about getting software to true production-level quality - where it not only works, but also works without putting too much strain on the operations and maintenance people - is "Release It!: Design and Deploy Production-Ready Software" by Michael T. Nygard (out on Pragmatic Programmers).
The book about the business case for lean software development (higher profitability through releasing earlier etc.) is "Software by Numbers" by Mark Denne and Jane Cleland-Huang.
Can you please let me know how you convinced your DBAs? (Referring to this part of your blog:)
"For example, when we introduced the OR-mapper (NHibernate) the DBAs put up a fight since myth has it that Stored Procedures always perform better. Only when we measured and showed that it did in fact perform well and that it would free up four DBAs from writing SPs for many months did the resistance die out. It is hard to argue against facts like that."
I need some input for the same please email me firstname.lastname@example.org
thanks in advance
To prove that NHibernate was "good enough" we set up an experiment.
We used the mainframe database and wrote an application with two different persistence layer implementations. One was written by the application team with NHibernate, the other was written by the DBAs and used stored procedures.
When we measured the implementations against each other we were quite surprised. NHibernate won in two respects:
First of all, the DBAs spent much more time writing the persistence layer than we spent on the NHibernate version (if I remember correctly, about four times more).
Then it turned out that the NHibernate queries were faster than the code written by the DBAs - they used fewer reads. The DBAs were quite dumbfounded by this. When we looked at the NHibernate-generated queries in the query analyser, they concluded that "they could have written that, too". They were very skilled, so they had the ability, but from an economic point of view it made much more sense to go with NHibernate, since the cost in terms of manpower was much less.
And from an agile point of view I would always start by doing that and then only resort to stored procedures or whatever other tricks when we measure that we have actually hit the limits of the OR-mapper. This way, you can probably get 90%+ of the persistence layer at a very low cost and maybe implement a few critical operations as SPs if necessary, rather than taking the huge cost of implementing it all by hand.
The key is that using an OR-mapper does not remove the option to add some handwritten database code later if necessary.
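One way to keep that option open is to hide persistence behind a small repository interface, so that a measured hot spot can later be reimplemented by hand without touching the domain code. A sketch of the idea (the actual project used .NET and NHibernate; all names here are hypothetical, and a plain list stands in for the mapped session):

```python
class ProductRepository:
    """The interface the domain code programs against."""
    def find_by_category(self, category):
        raise NotImplementedError


class MappedProductRepository(ProductRepository):
    """Default implementation: let the OR-mapper generate the queries."""
    def __init__(self, session):
        self.session = session

    def find_by_category(self, category):
        return [p for p in self.session if p["category"] == category]


class HandTunedProductRepository(MappedProductRepository):
    """If profiling shows this query is a bottleneck, override just this
    one method with hand-written SQL or a stored-procedure call."""
    def find_by_category(self, category):
        # Placeholder: the hand-written database code would go here.
        return super().find_by_category(category)


catalogue = [{"name": "widget", "category": "tools"},
             {"name": "novel", "category": "books"}]
repo = MappedProductRepository(catalogue)
print(len(repo.find_by_category("tools")))  # -> 1
```

Only the overridden method changes when you drop down to hand-written code; the rest of the persistence layer stays on the cheap OR-mapped path.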
One word of caution, however. When you design your domain model, think carefully about the "aggregate" pattern from Domain-Driven Design (http://domaindrivendesign.org/) - it is very important to design your model in small, independent parts so you don't have to pull the whole database into memory to work on it.
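A minimal sketch of the aggregate idea (names hypothetical): the root owns its parts but refers to other aggregates by id only, so loading one order never drags the whole object graph into memory.

```python
class OrderLine:
    def __init__(self, product, quantity):
        self.product = product
        self.quantity = quantity


class Order:
    """Aggregate root: owns its order lines, but holds only the *id* of
    the related customer aggregate - loading an Order stays small."""
    def __init__(self, order_id, customer_id):
        self.order_id = order_id
        self.customer_id = customer_id   # reference by id, not by object
        self.lines = []

    def add_line(self, product, quantity):
        self.lines.append(OrderLine(product, quantity))

    def total_items(self):
        return sum(line.quantity for line in self.lines)


order = Order("o-1", "c-42")
order.add_line("widget", 3)
order.add_line("gadget", 2)
print(order.total_items())  # -> 5
```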
There is also a great tendency towards waterfall embedded in agile practices such as RUP and AUP, where inception and elaboration remind me greatly of the specification phases of normal waterfall projects...
I think it is very important to gain trust on the business side first. Guys such as Ken Schwaber often describe Scrum as a somewhat "better" methodology than the rest, and also take a rather arrogant view of organizations not being able to understand the "good stuff" about Scrum. I do not share this view. I agree that agile processes are good, but it is very hard work changing an organization, and to me it is not enough to just start out in the development department with one project all of a sudden going agile with Scrum... you need to change the minds of many business guys first. To me the clever way is: run a Scrum project on the Scrum implementation itself, thus identifying the key stakeholders and the product owner, then start winning those guys over...
This is an important topic. One visual I have often used for an appropriate image of how an iteration goes is one that was created by Jean Tabaka of RallyDev. I included it in a presentation I did at the RSDC (see page 12 here www.numbersix.com/.../PPM31.pdf).
I agree with everything you've said, and I can recognize the professionalism in your approach.
I've only got one nit to pick, and that is about profilers.
When I am looking at a single-thread program that needs optimizing, I just run it under a debugger, halt it with a pause key, examine the call stack, and do it again. Depending on how bad the performance problem is, it can take very few samples to see it.
The reason it works is that it points to specific call instructions that can be midway up the call stack, and the fraction of call stacks they appear on is an estimate of how much time you could save by removing them.
Profilers can only narrow you down to the routine containing those calls (with luck), leaving you with some guesswork, and guesswork is not good. The sampling method eliminates the guesswork.
I've never seen a performance problem in single-thread software that could not be found by this method in very short order, and it is almost never something one could have guessed ahead of time.
After you do one such optimization, the whole process can be repeated several times, culminating in large speedup ratios.
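The same pause-and-look idea can be automated. The sketch below (purely illustrative, not the debugger workflow described above) repeatedly samples a running thread's call stack; the functions where time is actually spent dominate the sample counts:

```python
import collections
import sys
import threading
import time


def busy_function():
    total = 0
    for i in range(100000):
        total += i * i
    return total


def worker(stop):
    while not stop.is_set():
        busy_function()


def sample_stacks(thread, samples=50, interval=0.002):
    """Poor man's profiler: repeatedly capture the thread's current
    stack and count how often each function appears on it."""
    counts = collections.Counter()
    for _ in range(samples):
        frame = sys._current_frames().get(thread.ident)
        while frame is not None:
            counts[frame.f_code.co_name] += 1
            frame = frame.f_back
        time.sleep(interval)
    return counts


stop = threading.Event()
t = threading.Thread(target=worker, args=(stop,))
t.start()
counts = sample_stacks(t)
stop.set()
t.join()
print(counts.most_common(3))  # busy_function should rank near the top
```

As with the manual method, the fraction of samples in which a call appears estimates how much time you could save by removing it.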
Thanks for the feedback. The picture from Jean Tabaka in your presentation is very good indeed. Maybe I should start adding more pictures to this blog.
Your work on Scrum with RUP is really interesting since we meet a lot of organisations with some kind of iterative RUP implementation. In that case, mini-waterfalls are the officially prescribed way to develop software but it is also quite clear that it presents some problems.
However, it is very hard to convince someone who just spent a long time implementing RUP that it must go. It is much more practical to help them tune it, and to that effect getting the inspect-and-adapt loops in there and planning by feature really helps push in the right direction.
If the team is sufficiently small (up to maybe 10-15 people) they usually experience the value of completing features one-by-one after a few iterations and then the transformation away from the mini-waterfall is much easier.
One of the main dynamics working against it is that it also becomes painfully obvious exactly how much work gets done - and how much work is being blocked by organisational issues that were previously hidden by allowing a lot of work in progress.
Thanks for your profiling tip - I love it for its simplicity.
I think performance is all about having a full arsenal of practices at hand - profiling through the debugger like that, writing performance data to log files (e.g. response times from web service calls), or using different profilers.
I use a combination of different profilers to crack the really hard problems - some are good at analysing memory usage, others use a "sampling" approach like the one you describe - these are good for really big profiling problems without adding too much overhead to the application - and yet others use some kind of code instrumentation to get detailed line-by-line performance data. I still haven't found a single profiler that does it all well.
I totally agree with your experience that the bad code is never where we expect it to be. For example, I was profiling an application last week where we all assumed that it was slow due to calling a lot of slow web services. Yet the profiler revealed that we could also improve responsiveness by making small changes to a few frequently called internal methods for getting configuration data and resources (localisation). Having done this and measured again, we now know with certainty that it is only slow because of the slow web services, and that no more significant gains are possible in the scenarios we profiled by improving the client-side performance alone.
For distributed processes like yours, I have a method which I do not claim is easy, but it is effective. I get a detailed log of all the messages, timestamped to the millisecond, and go through them one by one, making sure I understand the reason for each one, and looking for suspicious time lapses. Often I find that either duplicate message exchanges are happening, or that there is an unnecessarily long delay between response and acknowledgement due to some problem on our side that can be fixed. It takes a few hours to do one iteration of this cycle, but the result is worth it.
There is a paper on the Salesforce transformation story here: www.agile2007.org/.../028_fry-LargeScaleTransition-final_703.pdf
Looks enjoyable!
We also discussed organizational resistance in our session.
I am currently planning a medium-large project in my head, and am trying to scope out all the things which I will need to get nailed down before setting forth on the good ship "entrepreneur".
Managing development will be one of them. I've had the (mis)fortune to mostly work on existing software, meaning I have just had to fit in with whatever was already in place (usually not much). If you were starting a new project from the ground up, what would you think of:
* TDD in helping manage communication between developers?
* TDD during prototyping and rapid development
* TDD and design specification?
* TDD from the client perspective?
"TDD done strictly from the YAGNI principle leads to an architectural meltdown around iteration three"
But with short iterations, refactoring your model at iteration three is not only easy but expected. :-)
It doesn't violate YAGNI (in my opinion) to have some basic understanding of your problem domain and an awareness of architecture issues.
Let me try to answer your questions without using the "TDD" word.
* TDD in helping manage communication between developers?
In my experience, a good set of well-written automated tests is an excellent vehicle for communicating the intent of the program. This means that they can serve as a communication mechanism that replaces a lot of documentation.
On one very large project, where we had a set of around 20 rules that all domain classes had to adhere to, I created some "cross-cutting tests" that would reflect through the .NET assemblies and test these rules against all the domain classes. Overall it was a RUP project with a lot of documentation. This simple testing practice caught many more compliance errors than the code and documentation reviews - and since we could run it with every build, it took us no extra effort. Also, it was a trigger to talk to people about how to write good code, so they would learn to avoid making the same mistake over and over.
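The project itself was .NET, but the cross-cutting-test idea translates to any language with introspection. A tiny sketch (rule and classes hypothetical): one test enforces a project-wide rule against every registered domain class.

```python
class Customer:
    def __eq__(self, other):
        return isinstance(other, Customer)

    def __hash__(self):
        return hash("Customer")


class Invoice:
    def __eq__(self, other):
        return isinstance(other, Invoice)

    def __hash__(self):
        return hash("Invoice")


DOMAIN_CLASSES = [Customer, Invoice]


def test_domain_classes_follow_the_rules():
    """Cross-cutting test: any domain class overriding __eq__ must also
    override __hash__ - checked automatically for every class."""
    for cls in DOMAIN_CLASSES:
        if "__eq__" in cls.__dict__:
            assert "__hash__" in cls.__dict__, f"{cls.__name__} breaks the rule"


test_domain_classes_follow_the_rules()
```

Because the check runs on every build, a new domain class that breaks a rule fails immediately instead of waiting for a review to catch it.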
However, I also think it is a good idea to have a short developer handbook for the project, with pictures of the central concepts to give a quick visual introduction to the key architectural elements. In my experience, pictures of behaviour (sequence diagrams etc.) are much more useful than pictures of static structure such as class diagrams (these can easily be generated from code anyway).
See my article: community.ative.dk/.../Code-Reviews-and-the-Developer-Handbook.aspx
* TDD during prototyping and rapid development
Normally I always advise test-first - also during prototyping, and especially if you want to do rapid development. The fact is, it takes less time to prevent errors from creeping in than to put them in, seek them out and remove them again. So I would definitely go for writing tests first.
* TDD and design specification?
You definitely need to have an architecture. Jim Coplien advised spending a minimum of three days on architecture up front (and a maximum of one month). Usually, if we work in known territory, we will do this very quickly; since we have an experienced team we can say, "let's go with the overall architecture from project XXX with these modifications...". That will get us into a known good zone quickly, and then we can focus our effort on the application domain-specific stuff, not so much on architectural infrastructure things like organizing into layers, or which application container and persistence technology to use.
* TDD from the client perspective?
One of the findings with automated testing is that it improves the quality of the application dramatically and speeds up the development process which makes for a happy customer.
Jim Coplien has posted a new article about this discussion:
Religion's Newfound Restraint on Progress
Roy Osherove has also written an article about the discussion:
The Various Meanings of TDD
Just some notes on your very sensible post.
On tests: You need to curate the tests that express the requirements in order to make them comprehensible for other people. Often you will have two or more levels of tests: those that show the requirements and test them, and those that test strange corner behaviour. It can be quite confusing to navigate tons of tests in order to understand the purpose of the code (example/intentional code can sometimes help here).
Auto-creation of class diagrams is one of the worst ideas ever; in my experience, creating a useful class diagram is more akin to stating the metaphor of the system than to pressing the "create class diagram" button. A lot of the classes used are just for support and intermediary use, as adapters etc.
On domain knowledge:
The idea of coding without trying to understand the domain is doomed (I think); you should at least spend time getting to know the context you are working in, and that takes some time. For a new project with a new crew, I would use at least a week up front trying to get a grasp of the fundamentals - a fool with a tool, and all that...
Great list. There is a new translation of an Ohno book that is excellent: Workplace Management (curiouscat.com/.../workplacemanagement.cfm).
We have an enhancement request for that functionality - I’ll append your comments to that request.
It's quite good to say "Keep it clean" - the habit our team pursues.
Raymond Lewallen wrote on waste in software development here:
a "feature" has been implemented and in some corner cases (does not occur often) there is a bug. And there is a workaround for the users that takes them 10 Minutes. Investigating the cause of the bug might take 2-40 hours and fixing it might take 5 hours. Would you fix it, if there are features that are not yet implemented at all?
So I think fixing everything immediately is a waste of time. You have to do some "triage". And therefore you probably will create technical debt.
The whole point of the article it to eliminate technical debt.
If you had a car and sometimes in some corner cases, when you turn a corner at a certain speed and your car breaks down and you have to start your journey again, would you take it to your mechanic to get it fixed?
Think about it - if I'm a user, times how many users, using your product and I have to spend 10 minutes (times x number of users) to do this workaround, how much productivity are users losing over time.
So is a 45-hour investment really a waste? It could be a waste compared to doing nothing - sure - but I think you'd probably make the investment based on productivity loss and potential customer loss - not to mention having healthy software that can be continually built on at a sustainable pace.
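The trade-off being debated here can be put in rough numbers. A minimal break-even sketch - the 10-minute workaround and the roughly 45-hour investigate-and-fix effort come from the comments above, while the user counts are invented for illustration:

```python
# Illustrative numbers from the discussion: a 10-minute workaround,
# about 45 hours to investigate and fix the defect for good.
WORKAROUND_MINUTES = 10
FIX_HOURS = 45

def break_even_hits(fix_hours=FIX_HOURS, workaround_minutes=WORKAROUND_MINUTES):
    """How many times must users hit the bug before fixing it is cheaper?"""
    return fix_hours * 60 / workaround_minutes

hits = break_even_hits()
print(f"The fix pays for itself after {hits:.0f} workarounds")

# Hypothetical usage pattern: 50 users each hitting the bug once a week.
users, hits_per_week = 50, 1
weeks = hits / (users * hits_per_week)
print(f"That is about {weeks:.1f} weeks of use for 50 weekly-affected users")
```

The point of the sketch is that even a sizable fix often breaks even quickly once the workaround cost is multiplied across users and time.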
The mindset is to move from a "work-around" culture to a stop-the-line culture.
Hi Victor and Andrew
Thanks for taking part in the conversation.
If you look into Theory of Constraints you will find a good discussion about why stop-the-line makes sense.
The idea is that you look at the throughput of the whole organisation. In a sequential process there will be at least one step that has the lowest capacity, so it is the one limiting the total output.
Many management systems for production are focused on resource utilization - in TOC we care only about maximizing the utilization of the limiting resource and in fact it is recommended to have slack (less than full utilization) in the other steps.
The main principle then, is to look at how we utilize the limiting resource. We want to make sure that we are not wasting it by letting it be idle, by producing the wrong product or by having to do rework (see the lean wastes: community.ative.dk/.../Lean-Principle-Number-1-_2D00_-Eliminate-Waste.aspx). We want to do everything possible to get the most from it to improve the throughput of the whole system.
Let's assume that the limiting step is the development team. Imagine a stream of work flowing from idea to production software. Now, imagine that the development team emits a piece of defective software. What happens now is that the piece comes back and they have to do it again, and since the overall throughput of the whole process is governed by the development team's capacity it means that the defective piece has limited the output of the entire development process:
If it takes one unit of time to do one feature, and one more to fix the defect when it comes back it means we get only the value for one feature for the price of the two units. Since it governs the output of the entire system this is a very sizable reduction if the defect production rate is high.
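The arithmetic in that paragraph can be sketched as a toy model (the capacity figure and defect rates below are my own illustrative numbers, not from the article): if each defect that comes back costs the constraint one extra unit of work, throughput drops as the defect rate rises.

```python
# Toy model: the development team (the constraint) has a fixed capacity,
# and every defective feature consumes one extra unit of that capacity
# before it finally counts as done.

def throughput(capacity_units, defect_rate):
    """Features delivered when each defective feature costs one extra unit."""
    return capacity_units / (1 + defect_rate)

capacity = 100  # units of work the constraint can do per release
for rate in (0.0, 0.25, 0.5):
    print(f"defect rate {rate:.0%}: {throughput(capacity, rate):.1f} features delivered")
```

Since the constraint governs the output of the whole system, halving its rework rate raises the throughput of the entire development process, not just of the team.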
It is often a good idea to add a quality control step before the limiting step to make sure that it does not waste its capacity working on something defective that will require it to work on the same feature again later.
This is one of the reasons why Scrum planning works since we have the Conversation about the requirements with the Team and Product Owner and try to clarify the acceptance criteria BEFORE we start working. This is essentially a quality control gate to make sure that we do not waste the limited resource on producing the wrong thing.
As for Victor's example, Deming has a good discussion about quality control and a statistical model for calculating the cost/benefit of quality control (eg. testing) in "Out of the Crisis". It would be interesting to relate this to Victor's example and the specific process and cost pattern.
The idea from TOC is that you always have to look at the whole system and optimize for the current bottleneck. This is not necessarily the same at every phase of the project (eg. at one point the developers could be the constraint, later in the project it could be the systems testing team's time that is most precious and should not be wasted).
Goldratt's The Goal is an excellent business novel that explains the principles of Theory of Constraints and Clarke Ching has just written a new book on the same theme for software development called Rolling Rocks Downhill (I have only read a preproduction manuscript of it so far, but it is very good).
I agree, and the Leader's Handbook is excellent. Read more online about <a href="curiouscat.com/.../">Deming's management ideas</a>.
John, thanks for the link - http://curiouscat.com/deming/ is a great collection of Deming information.
"You can almost feel the underlying assumption that more management is the solution for all problems."
Mmm.... more management.
#9: The dependencies between projects are not visible. This is a very valid point; however, I do not understand what Agile has to do with it. Clearly this is the responsibility of a Program Manager.
Regarding inter-project dependencies, whether it is "agile" or not is not important. The important point is to solve the problem, and the first step is, as Computerworld also advises, to map out the dependencies.
Mapping the process to better manage it is not enough, however. The second step - and this is from lean, or from business process reengineering if anyone remembers that - is to make a better process.
Unfortunately I have seen many organisations trying to manage complexity rather than improving the process and making things simpler.
Nice way of conferencing and taking in the points of view of the employees.
Keep it up.
Suggestion for improvement:
Summary of the 14 points in English would be good.
Thanks for the feedback. You can find a summary of Deming's 14 principles here:
Also, I have published a new revised 2nd edition of the book in Danish - download it here:
I don't currently have the time to translate the text into English myself, but I would be happy to publish a translation if anybody is up for it. Thanks :-)
Your translation of Muda works well but is only part of the equation when eliminating waste. According to my Japanese sensei (nicknamed Joe), there are two other "M's" in addition to Muda that you would want to consider to make this a more complete approach; together they describe the wasteful practices to be eliminated.
Mura: Unevenness in an operation; for example, an uneven work pace in an operation causing operators or processes to hurry and then wait.
Muri: Overburdening equipment, processes or operators.
Hope this helps - Happy Hunting
Is estimation and planning really an essential part of development? Yes, that's true, but experience is more important. The first, worthless plan is for learning "how to learn", with the second we learn priorities, and the third will probably be great :)
Here is another great lean reading list on InfoQ:
This is to inform you about the publication of my new book about lean manufacturing. It is entitled "The Truth about Toyota and TPS". It deals with Toyota past success, recent events and what is next for lean.
I would like to use your picture from the Toyota plant in my lecture slides for the course "Software Quality Management" at TU Munich. The slides are freely available on the Web.
Would you be okay with that?
You are most welcome :-)
Thanks for the post, just what I was looking for. I noticed that when I tried to be efficient and not waste time during the daily scrum, it led to a situation where everybody just said how much to deduct from their estimate, and people avoided speaking about problems. Fast, but not Scrum :). Maybe that's what you need the cert for: creating an open but efficient atmosphere for scrums.
The key to successful stand-ups is that everybody understands why it is important and valuable. People are smart, so if it is just a daily nuisance with no visible benefit it will degenerate.
Great article on how Scrum reduces rework. One of the things I would assume is obvious here is that you have a Product Owner who is available and engaged throughout the process, allowing the team to make the changes necessary to use Scrum to its full potential. You could almost say: "Having an involved and available Product Owner takes care of 91% of issues in rework."
Yes, it is definitely our experience that getting the Product Owner role to work is the single most difficult thing in implementing Scrum.
In many large organisations the PO is usually a team of people, since no single person has enough knowledge about the domain across all departments, and this, of course, makes things more difficult.
It is worth the effort, though.
I have process data from one project showing how we evolved the scope along the way and discarded a huge number of low-value ideas before implementing them (almost 2x the delivered scope). The project delivered a much better and more relevant system than originally planned, on time and on budget, all due to working closely with the Product Owner on the backlog and priorities.
Sounds like those organizations need a "product management team" to align the work.
I've found that organizing such a team helps them (all those POs) get together and prioritize work, etc.
Just for clarification
Lean talks about "Andon" and not "stop the line". "Stop the line" could be one state of Andon. Primarily, "Andon" is a notification system for a quality or process problem. The alert can be activated manually by a worker using a pull cord or button. The alert is usually a signal light - yellow or red. Red could also mean that the line is automatically stopped - but that is not implied. Therefore I would say that it should rather be called "pull the line", because you can pull the line for a yellow signal, which does not stop the line. Stopping the line is only a last resort.
Nevertheless, bugs could be triggers for the red signal. But not all bugs require stopping the line - they do need focused attention.
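The signal states described above can be sketched as a small state machine. This is my own simplification for illustration, not an official TPS definition - in particular, the escalation rule (a second pull on yellow goes to red) is an assumption:

```python
# Toy model of Andon signals: yellow flags a problem while the line keeps
# running; red marks a serious problem, which MAY (but need not) stop the line.
from enum import Enum

class Andon(Enum):
    GREEN = "normal operation"
    YELLOW = "problem signalled, line keeps running"
    RED = "serious problem, line may be stopped"

def pull_cord(current: Andon, serious: bool) -> Andon:
    """A worker pulls the cord: escalate to red if the problem is serious
    or a problem was already signalled; otherwise signal yellow."""
    if serious or current is Andon.YELLOW:
        return Andon.RED
    return Andon.YELLOW

state = pull_cord(Andon.GREEN, serious=False)
print(state)  # a minor problem: yellow, the line keeps running
```

The useful distinction the model captures is that "notify" and "stop" are separate decisions: every pull notifies, only red even considers stopping.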
Martin, I enjoyed this post and agree with you that continuous improvement is at the heart of all effective retrospectives. I'll add your Do Don't Try activity to the ones I keep on my website. I worry a bit about stopping at naming the critical issues. I've not seen many team situations in which simply recognizing ineffective behavior is enough. You don't need "bureaucratic control mechanisms." Just an idea about how to proceed and someone(s) on the team to commit to taking action. For instance, Bas Vodde has written about "now actions" that work well for his teams.
I look for the thing team members have the most energy to tackle, identify the now action(s), ask who will commit to it, and make a task card to take into iteration planning. Also lightweight, and more likely to result in improvement.
Avoiding bureaucracy is certainly a worthy goal, and stopping at naming may not create action. I recommend you find an activity in between the two to sustain your team's Kaizen.
Thanks for the post.
Many people came to work today to spend their time on waste? Some maybe! But not most. So what is waste, and how do you identify it?
Some waste is obvious. But other forms of waste are more difficult to spot or to solve. I’m sure in most organisations it’s sometimes very difficult to identify what is waste and what is not. Some processes or conventions might seem wasteful, but actually provide real value elsewhere in the organisation, or prevent other forms of waste from emerging later. Other activities may seem valuable, but actually do not really result in any real value.
You are absolutely right that it is often difficult to identify the waste. If it were easy, I am sure it would already have been removed.
The root cause is frequently that the causal dependencies between the things we do and the downstream outcomes are not always clear.
Sometimes the scientific method is the great arbiter. You can make an experiment with a control group and measure the impact of what you do compared to not doing it.
However, in my own experience, and this is what I heard from the lean senseis at Toyota as well, you often don't need statistical analysis and the whole apparatus.
If you map out the value stream - all the activities that go into producing a certain product, and try to relate them to how they create value for the customer, a lot of the waste becomes very clear.
It is often hiding in interactions between departments.
For example, there is often a great deal of management work related to planning and scheduling so each department can maximise its utilization, rather than maximizing the value creation flow across departments.
For example, we worked with a company that had a very expensive bug tracking system and detailed change management processes with executives meeting to prioritize which defects should be fixed first.
We did a multi-year project with this company and eliminated the need for that whole process. By keeping the software at a constantly high quality the list of open issues was tracked on post it notes, and since the turn-around time on fixing anything that was found was on the scale of hours, there would be no bugs to prioritize in the executive meeting.
The bug tracking system and the bureaucracy around it were suddenly waste.
Of course, if the organisation is routinely creating defective software it may be better to have the process in place - it was surely invented to save resources and improve the outcome - but it hinged on low quality.
When the quality improved, a lot of the bureaucracy became obsolete.
Therefore, we should always strive to improve quality and revisit our assumptions: why are we doing this? What would need to change to make this unnecessary? How can we make this activity obsolete?
Once it becomes clear that you have a lot of bureaucracy BECAUSE your quality is low it points very clearly to where you should start if you want to remove the waste.
Starting the other way around just makes you look like a fool: you have the sickness, and you just suggested not even trying to cure the symptom.
What is waste?
First you need to realise what your customer really wants, and pays you to do.
Next, you must focus on doing just what you're paid to do.
Code is one form of inventory, for instance.
How can you work with the minimum stock possible?
You must realise that many requirements are inventory, and you should work on only one at a time.
To resolve just one requirement, how many documents do you need? How many handoffs?
Working with only one requirement, temporarily forgetting everything else, you will find what is absolutely necessary to produce that one requirement - and what is waste.
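The "one requirement at a time" advice can be illustrated with Little's Law (average lead time = WIP / throughput). The numbers below are invented for illustration; the throughput is deliberately kept the same in both cases to isolate the effect of work in progress:

```python
# Little's Law: the average time a requirement spends in the system
# equals the work in progress divided by the completion rate.

def lead_time_days(wip, throughput_per_day):
    """Average days from starting a requirement until it is done."""
    return wip / throughput_per_day

finished_per_day = 2  # same completion rate in both scenarios
print(lead_time_days(wip=20, throughput_per_day=finished_per_day))  # 10.0 days
print(lead_time_days(wip=2, throughput_per_day=finished_per_day))   # 1.0 day
```

With twenty requirements in flight, each one waits ten days on average before it delivers any value; with two in flight, it waits one day - the inventory itself is the waste.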
Hope it helps.
Thanks for the excellent input. It is very good advice and very actionable.
Thanks a lot for sharing.
Can you share a picture or sketch of this technique? It sounds really compelling, but I'm not sure how I'd organize it in a team. Thanks.
Managing development will be one of them. I've had the (mis)fortune to mostly work on existing software, meaning I have just had to fit in with whatever was already in place (usually not much).
Lots of good facts about test and test code here.
One interesting thing can be added: Test code seems to have lower code quality than production code. Thus people are even less motivated to create good tests ...