Thursday, March 29, 2012

Better software development estimates


Is it possible to estimate software tasks better?  In certain circumstances, my experience tells me yes!  It's not just a wish (/me waves to the commenter on my last post).

First, let me establish what kind of estimates I'm talking about.  In my experience, it's not possible to estimate anything past two months.  When I was working at Microsoft and heard lots of estimates from other teams, I realized that "This software product will take us two years" really meant "We have no idea, and we will never finish a product that recognizably matches our plan".  A one-year estimate turns into a somewhat recognizable outcome in two or three years.  Even a six-month estimate, while it might fairly reliably turn into a year-long project, ends up delivering quite different tasks than the planned tasks that originally produced the six-month estimate.

Other things that make a difference in estimation accuracy:
 * What language is being used?  Snags can be a bigger time dilator with a compiled or lower-level language like C, whereas hitting a snag in Python might not throw out the estimate that much.
 * Is something new being integrated? Any time a task involves compiling and linking a new library, or adding a new Ruby gem, I know the estimate is weaker.
 * How routine is it? Adding a new page to a Ruby on Rails project can be pretty damn predictable.
 * How many people are involved?  Estimating a one-person task is way more accurate than estimating a team project.  Even a one-person task with an open question that needs another person's answer is estimated less accurately than a task one person can finish independently.
 * How many other estimated tasks are dependencies?  If you have a chain of tasks to estimate, a change in one task can throw out all the others (the toy simulation after this list sketches why).
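
To make that last point concrete, here's a toy Python simulation.  The disturbance model is entirely my own assumption, not data from a real project: the idea is that once one task in a chain changes, everything downstream of it inflates, so longer chains drift further from their nominal total.

```python
import random

random.seed(1)

def simulate_chain(n_tasks, est_each=2.0, p_change=0.15, trials=10_000):
    """Simulate total actual time for a chain of dependent estimated tasks."""
    totals = []
    for _ in range(trials):
        disturbed = False
        total = 0.0
        for _ in range(n_tasks):
            if random.random() < p_change:
                disturbed = True  # this task's scope changed
            # Once the chain is disturbed, downstream work inflates too.
            factor = random.uniform(1.2, 2.5) if disturbed else random.uniform(0.9, 1.2)
            total += est_each * factor
        totals.append(total)
    return sorted(totals)

for n in (1, 5, 10):
    totals = simulate_chain(n)
    print(f"{n:2d} tasks: nominal {n * 2.0:5.1f}, "
          f"median {totals[len(totals) // 2]:5.1f}, "
          f"90th percentile {totals[int(len(totals) * 0.9)]:5.1f}")
```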

And then there's bias.  Have you ever noticed that some developers always estimate high and some always estimate low?  It's intriguing, because even though they're consistently wrong, they are consistently wrong in the same direction.  That means they are actually giving management good information, if management knows their biases.  I once managed one optimist and one pessimist in the same team for over two years.  The optimist always estimated about 1/3 of his actual time required. The pessimist always estimated about three times the actual time required.  I would assign what looked like three months of work to the pessimist, and what looked like a week and a half of work to the optimist, and they would finish around the same time.
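
Here's a minimal sketch of that idea in Python.  The names and numbers are invented for illustration, but the arithmetic is exactly what I was doing in my head as a manager: learn each developer's historical actual-to-estimate ratio, then scale their quotes by it.

```python
from statistics import median

# Hypothetical history of (estimated, actual) days per developer.
history = {
    "optimist":  [(5, 15), (2, 6), (8, 25)],
    "pessimist": [(30, 10), (9, 3), (21, 8)],
}

def bias_factor(pairs):
    """Median ratio of actual time to estimated time."""
    return median(actual / estimated for estimated, actual in pairs)

for dev, pairs in history.items():
    factor = bias_factor(pairs)
    raw_estimate = 10  # days, as quoted by the developer
    print(f"{dev}: bias x{factor:.1f}; a {raw_estimate}-day quote "
          f"really means about {raw_estimate * factor:.0f} days")
```

With this sample history, the optimist's 10-day quote corrects to about 30 days and the pessimist's to about 3, which is how "three months" of work and "a week and a half" of work could finish around the same time.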

One of the things I really do love about Agile (and Pivotal Tracker reifies this) is how it addresses the points above:
 * Task length?  Agile encourages developers to break work down into smaller tasks (no three-week tasks in any agile process I've ever seen).
 * How many people are involved?  Agile encourages blocking issues to be resolved before the estimate is even made, and it is designed for single-person tasks.
 * How many unfinished dependencies are there?  Agile encourages planning with a shorter horizon, so the chain of dependencies is usually reduced.
 * Consistent bias?  Agile tracks velocity, not accuracy, so a consistent bias simply becomes part of a consistent velocity (a quick arithmetic sketch follows this list).
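
To see why the bias washes out, here's a quick sketch (my framing, with made-up numbers): a pessimist who triples every story's points also triples the measured velocity, so the forecast in iterations comes out the same.

```python
def iterations_needed(story_points, points_per_iteration):
    # Ceiling division without importing math.
    return -(-story_points // points_per_iteration)

true_sizes = [3, 5, 2, 8, 5, 3]  # a hypothetical backlog
bias = 3                         # a pessimist inflating every estimate 3x

honest_backlog = sum(true_sizes)                    # 26 points
biased_backlog = sum(s * bias for s in true_sizes)  # 78 points

# Suppose the team historically finishes 10 "honest" points per
# iteration; the same bias inflates measured velocity by the same factor.
honest_velocity, biased_velocity = 10, 10 * bias

print(iterations_needed(honest_backlog, honest_velocity))  # 3
print(iterations_needed(biased_backlog, biased_velocity))  # 3
```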

With all that, the main thing that intrigues me, and this is what I'm unpacking from my previous post, is whether better feedback would help developer estimates get even better than Agile already makes them.  Agile doesn't measure time spent, so it gives developers no feedback that would let them correct a consistent overall bias, or learn to recognize the kinds of tasks that need to be estimated a little higher.
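
What I'm imagining is something like the following sketch.  Nothing here is an existing tool's API; it's a hypothetical structure for the feedback loop: record each finished task's estimate and actual time, then show each developer where their estimates drift, broken down by task category.

```python
from collections import defaultdict
from statistics import median

# Hypothetical log of completed tasks: (category, estimated hrs, actual hrs).
completed = [
    ("routine rails page", 4, 4.5),
    ("routine rails page", 6, 5.0),
    ("new library integration", 8, 20.0),
    ("new library integration", 5, 14.0),
]

by_category = defaultdict(list)
for category, estimated, actual in completed:
    by_category[category].append(actual / estimated)

# Feedback: which kinds of tasks does this developer underestimate?
for category, ratios in by_category.items():
    print(f"{category}: actual/estimate median = {median(ratios):.1f}x")
```

Even a crude report like this would tell a developer "your routine work is well calibrated, but multiply anything involving a new library", which is exactly the kind of task-recognition feedback that velocity alone never surfaces.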

2 comments:

Abarshini said...


Thanks for sharing, I will bookmark and be back again

Agile Software Development

Anonymous said...

Ever read The Mythical Man-Month: Essays on Software Engineering?

It's an older book (even the updated edition), but it has a lot of good nuggets that seem as relevant now (maybe more?) as when it was published.


Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 Unported License.