Agile programming notices that we're bad at forecasting, but concludes that we're bad in a consistent, systematic way. The approach it takes is to ask programmers to aim for consistency when forecasting individual tasks, and to put the learning in the next phase, which is planning. So as individual members of a development team, we're supposed to simultaneously believe that we can make consistent forecasts of particular tasks, and that those forecasts are consistently off, so that applying a correction factor at the planning level will make it work.
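In planning terms, that correction factor is roughly what velocity-based estimation computes. Here's a minimal sketch of the idea, with hypothetical numbers and function names (not taken from any particular Agile tool): derive a multiplicative factor from past iterations, and apply it to the plan rather than to individual estimates.

```python
def correction_factor(estimated_hours, actual_hours):
    """Historical expansion factor: how much actual work
    exceeded estimates across past iterations."""
    return sum(actual_hours) / sum(estimated_hours)

def plan_forecast(task_estimates, factor):
    """Scale the raw sum of task estimates by the learned factor.
    Individual estimates stay untouched; the correction lives
    at the planning level."""
    return sum(task_estimates) * factor

# Hypothetical history: estimates vs. what the tasks actually took.
past_estimates = [4, 8, 2, 6, 5]
past_actuals   = [6, 13, 3, 9, 7]

factor = correction_factor(past_estimates, past_actuals)  # ~1.52
print(plan_forecast([3, 5, 8], factor))  # 16 raw hours -> ~24 planned
```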

Partly this is like learning to throw darts, and partly it's a theory about aggregating biased estimates. What I mean by the dart-throwing example is that beginning dart throwers are taught to aim first for consistency. Once you get all your darts to land close to the same spot, you can adjust small things to move the spot around. The first requirement for being a decent dart thrower is being able to throw the same way each toss. Once you can do that, you can turn slightly, or learn other tricks to adjust how your aim point relates to where the darts land.

The aggregation theory says that the problem in forecasting is not with the individual estimates, it's with random and unforeseen factors that are easier to correct for in the aggregate. The problem with the individual forecasts might be overhead tasks that reliably steal time away, it might be that bugs are unpredictable, or it might be redesign that only becomes apparent as you make progress. These are easier to account for in aggregate planning than when thinking about individual tasks. If you pad all your tasks for their worst case, you end up with too much padding, because not every task hits its worst case at the same time.
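A quick simulation makes the padding point concrete. This is just a sketch with made-up numbers: it assumes each task's overrun is drawn independently from the same distribution, and compares padding every task to its own 95th percentile against padding the whole plan to its 95th percentile.

```python
import random

random.seed(0)
N_TASKS = 20
TRIALS = 10_000

def task_overrun():
    """Hypothetical per-task overrun in hours: usually small,
    occasionally large (bugs, redesign, overhead)."""
    return random.expovariate(1 / 2.0)  # mean 2 hours

# Per-task 95th-percentile padding, applied to every task.
samples = sorted(task_overrun() for _ in range(TRIALS))
per_task_p95 = samples[int(0.95 * TRIALS)]
worst_case_padding = N_TASKS * per_task_p95

# Aggregate 95th-percentile padding for the whole plan.
totals = sorted(sum(task_overrun() for _ in range(N_TASKS))
                for _ in range(TRIALS))
aggregate_p95 = totals[int(0.95 * TRIALS)]

print(f"pad each task for its worst case: {worst_case_padding:.0f} hours")
print(f"pad the plan as a whole:          {aggregate_p95:.0f} hours")
# Independent overruns rarely all go bad at once, so the
# aggregate buffer comes out much smaller than the sum of
# per-task worst cases (~55 vs. ~120 hours here).
```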

Over the long term, the expansion factor from individual tasks to weeks or months of work can be fairly consistent.