Friday, July 18, 2008

Bugs - Should You Estimate the Effort?

Is there a purpose to estimating the effort that a bug (defect) will take to fix? I think not.

Presuming the bug has been properly prioritized (id est, will it be fixed in this release or the next?), there is little reason to spend time estimating the effort it will take to fix.

For the purposes of this argument, I don't consider the following bugs:

- Text and UI fixes. The UI shouldn't be implemented by a traditional developer; it should be largely defined and implemented by the team UI designer.

- Misunderstandings. The spec said this, the developer implemented that. This isn't a defect in the code; it's a misunderstanding or a failure in the communication process -- both of which should be examined and fixed.

- Spec changes. Not a bug, but a feature request or revision.

So, considering legitimate bugs -- something that was implemented to behave a certain way but doesn't -- what is the point of estimating? Presumably the original implementation was thought out (and developer tested!), but it isn't behaving as expected now. How can someone presume to know how long it will take to: first, find the issue; second, fix the issue? Almost any estimate provided will be either too short or too long -- so what is the value?

Bugs should be prioritized, and then fixed. Don't waste effort on estimation.

4 comments:

The Novasio Zoo said...

What about a design flaw? One where the code functions according to the business requirements and functional specifications, but falls short on efficiency and gets logged as a defect based on the design. I think estimates and new use cases are warranted at this point. What do ya think?

Anonymous said...

I call performance issues such as this a FTS -- failure to specify.

If there are performance requirements, they should be defined beforehand. If there are none, then poor performance isn't a design defect.

But ignoring that -- how would you go about estimating that? Performance (and optimization) is very dependent on developer experience (and requires the proper tools), and is difficult to estimate. Time boxing may be appropriate (i.e. spend 2 days...) but that isn't an estimate.

Reg said...

Hi Bret, I'm Mindi's brother-in-law, that's how I ended up here.

Estimating/scheduling in general is one of the hardest problems, and something I've spent years searching for an ideal solution to. No luck; it's just a tough problem.

We're currently using FogBugz with its built-in "Evidence Based Scheduling," which is the best tool and method I've come across. I highly recommend taking a look at their method if you're not familiar with it already. It works really well for new features, but not so hot when it comes to bugs. The FogBugz team handles this by tacking on an additional 6 weeks of development time to every release (so they don't estimate individual bugs). Their experience has shown that's about what it takes them to flush out the bugs.
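For anyone who hasn't seen it, the core idea behind Evidence Based Scheduling is a Monte Carlo simulation over each estimator's history of estimate-versus-actual ratios. Here's a rough Python sketch of that general idea -- not FogBugz's actual code; the function and the numbers in it are made up for illustration:

    import random

    def simulate_totals(open_estimates_hours, history_ratios, rounds=1000):
        """Toy Monte Carlo in the spirit of evidence-based scheduling:
        scale each remaining estimate by a randomly drawn historical
        actual/estimate ratio, sum them, and repeat to build a distribution."""
        totals = []
        for _ in range(rounds):
            total = sum(est * random.choice(history_ratios)
                        for est in open_estimates_hours)
            totals.append(total)
        totals.sort()
        # 50th and 90th percentiles as "likely" and "safer" completion figures.
        return totals[len(totals) // 2], totals[int(len(totals) * 0.9)]

    # Made-up inputs: remaining estimates (hours) and one developer's past ratios.
    estimates = [4, 8, 16, 6, 12]
    ratios = [1.0, 1.3, 0.8, 2.0, 1.1]
    likely, safer = simulate_totals(estimates, ratios)
    print("about 50%% chance within %.0f hours, 90%% within %.0f hours" % (likely, safer))

The point is that you never ask anyone to estimate a bug precisely; the history of how wrong past estimates were does the work for you.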

We don't have that much history on our team, so we've been attempting to estimate our bugs as we go. We do so by breaking things into 2-week iterations. At the start of each one we determine which bugs and features will be assigned for the upcoming iteration and attempt to estimate each one. For bugs we typically do much rougher estimates (like full or half days). For us it's either this or tack on some guessed-at number of months to the development schedule. Either way we don't have a great handle on when we'll be finished.

Anonymous said...

Hey Dad, I like your two blogs and your posts, they are very informative.