How to see into the future, by Tim Harford
The article may be gated. (I have a subscription through my school.)
It is mainly about two things: the differing approaches to forecasting taken by Irving Fisher, John Maynard Keynes, and Roger Babson; and Philip Tetlock's Good Judgment Project.
Key paragraphs:
So what is the secret of looking into the future? Initial results from the Good Judgment Project suggest the following approaches. First, some basic training in probabilistic reasoning helps to produce better forecasts. Second, teams of good forecasters produce better results than good forecasters working alone. Third, actively open-minded people prosper as forecasters.
But the Good Judgment Project also hints at why so many experts are such terrible forecasters. It’s not so much that they lack training, teamwork and open-mindedness – although some of these qualities are in shorter supply than others. It’s that most forecasters aren’t actually seriously and single-mindedly trying to see into the future. If they were, they’d keep score and try to improve their predictions based on past errors. They don’t.
Here is an application for consideration. I'm not a software developer, but I get to specify the requirements for software that a team develops. (I'd be the "business owner" or "product owner" depending on the lingo.) The agile+scrum approach to software development notionally assigns points to each "story" (meaning approximately a task that a software user wants to accomplish). The team assigns the points ahead of time, so each estimate is a forecast of how much effort will be required, and in principle those forecasts could be used for longer-range planning. The problem I have encountered is that the software developers don't really see the forecasting benefit, so they don't embrace it fully. For example, in my experience they don't (1) spend much time in their "retrospectives" (internal meetings after finishing software) on why forecasts were wrong, or (2) assign points to every single story that is written, which would let others use their knowledge. They are satisfied if their forecasting is good enough to tell them how much work to take on in the current "sprint" (a set period, usually 2 or 3 weeks, during which they work on the stories pulled into that sprint), and that requires only a rough approximation.
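To make the article's "keep score" point concrete, here is a minimal sketch in Python of what comparing point forecasts against actual effort might look like, story by story. Everything in it is invented for illustration: the story names, the numbers, and the one-point-per-day conversion are assumptions, not anything a real team necessarily records.

    from dataclasses import dataclass

    @dataclass
    class Story:
        name: str
        estimated_points: int   # the team's up-front forecast of effort
        actual_days: float      # effort actually spent once the story closed

    # Made-up records; a real team would pull these from its issue tracker.
    completed = [
        Story("export report", 3, 2.5),
        Story("login audit trail", 5, 9.0),
        Story("fix date parsing", 1, 0.5),
    ]

    # Assumed conversion: 1 point ~ 1 day of effort. Any stable scale works;
    # the point is to compare forecast and outcome in the same units.
    POINT_TO_DAYS = 1.0

    for s in completed:
        forecast = s.estimated_points * POINT_TO_DAYS
        error = s.actual_days - forecast
        print(f"{s.name}: forecast {forecast:.1f}d, "
              f"actual {s.actual_days:.1f}d, error {error:+.1f}d")

    # One summary number a retrospective could track sprint over sprint to
    # see whether the team's forecasting is actually improving:
    mean_abs_error = sum(
        abs(s.actual_days - s.estimated_points * POINT_TO_DAYS)
        for s in completed
    ) / len(completed)
    print(f"mean absolute error: {mean_abs_error:.1f} days")

Tracking a number like that over time is exactly the score-keeping Harford says most forecasters skip.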
Some teams use the points as a performance measure, so that they feel better if the number of points completed per sprint (which they call "velocity") increases over time. Making increased velocity into a goal gives the developers an incentive for point inflation, and I think that use makes the approach less valuable for forecasting.
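For what it's worth, velocity is typically just an average of points completed over the last few sprints. A toy sketch, with invented numbers, of why making velocity a target corrupts it as a forecasting input: the same actual output, pointed more generously each sprint, shows steady "improvement".

    # Invented numbers: the same team doing the same amount of real work,
    # but in the second history the estimates creep upward sprint by sprint.
    sprints_honest   = [21, 19, 22, 20]   # points completed, stable pointing
    sprints_inflated = [21, 24, 28, 33]   # same work, inflated estimates

    def velocity(history, window=3):
        """Average points per sprint over the last `window` sprints."""
        recent = history[-window:]
        return sum(recent) / len(recent)

    print(velocity(sprints_honest))    # ~20.3: flat, matching real output
    print(velocity(sprints_inflated))  # ~28.3: looks like progress, isn't

Once the unit itself stops being stable, historical velocity no longer says anything reliable about future delivery.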
I believe there are a number of software developers in these forums. What is the inside perspective?
Max L.
My inside perspective from using this system is that, at least the way we use it, it is not useful for forecasting. In each sprint, only about 40-50% of the tasks actually get finished. Most of the rest we carry over, and occasionally a few are deprioritized and removed.
The point values are not used very systematically. Some items aren't even assigned a point value. For those that are, the values do roughly correspond to effort levels of 'a day', 'a week', or 'a month', but not with much precision.
We certainly don't use our retrospectives to examine whether our point predictions matched the time actually taken. In fact, all but one or two of our retrospectives over the past 12 months ended up being cancelled, because they aren't seen as very important.
We probably aren't using this system to its fullest capacity, but our group isn't dysfunctional or anything. The system works pretty well as a management tool for us (but is not so useful for forecasting!).