How to see into the future, by Tim Harford
The article may be gated. (I have a subscription through my school.)
It is mainly about two things: the differing approaches to forecasting taken by Irving Fisher, John Maynard Keynes, and Roger Babson; and Philip Tetlock's Good Judgment Project.
Key paragraph:
So what is the secret of looking into the future? Initial results from the Good Judgment Project suggest the following approaches. First, some basic training in probabilistic reasoning helps to produce better forecasts. Second, teams of good forecasters produce better results than good forecasters working alone. Third, actively open-minded people prosper as forecasters.
But the Good Judgment Project also hints at why so many experts are such terrible forecasters. It’s not so much that they lack training, teamwork and open-mindedness – although some of these qualities are in shorter supply than others. It’s that most forecasters aren’t actually seriously and single-mindedly trying to see into the future. If they were, they’d keep score and try to improve their predictions based on past errors. They don’t.
Here is an application for consideration. I'm not a software developer, but I specify the requirements for software that a team develops. (I'd be the "business owner" or "product owner," depending on the lingo.) The agile/scrum approach to software development notionally assigns points to each "story" (roughly, a task that a software user wants to accomplish). The team assigns the points ahead of time, so each estimate is a forecast of how much effort will be required. Notionally, these points can be used for forecasting. The problem I have encountered is that the software developers don't really see the forecasting benefit, so they don't embrace it fully. In my experience, they don't (1) spend much of their "retrospectives" (internal meetings after finishing software) on why forecasts were wrong, or (2) assign points to every single story that is written, which would allow others to use their knowledge. They are satisfied if their forecasts are good enough to tell them how much work to take on in the current "sprint" (a set period, usually two or three weeks, during which they work on the stories pulled into that sprint), and that requires only an approximation of the work that might go into the current sprint.
Some teams also use points as a performance measure: they feel better if the number of points completed per sprint (which they call "velocity") rises over time. Making increased velocity a goal gives developers an incentive for point inflation, and I think that use makes the approach less valuable for forecasting.
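To make the distinction concrete, here is a minimal sketch (with made-up numbers) of the difference between tracking velocity alone and actually keeping score on forecasts, in the Good Judgment Project spirit. Everything here is illustrative; real teams record this in their project-tracking tool, not a script.

```python
# Hypothetical sprint history: points the team committed to at sprint
# planning vs. points actually completed. The numbers are invented.
sprints = [
    {"committed": 30, "completed": 24},
    {"committed": 28, "completed": 26},
    {"committed": 32, "completed": 25},
]

# "Velocity" is just average throughput -- enough to plan the next sprint.
velocity = sum(s["completed"] for s in sprints) / len(sprints)

# A forecasting-minded team would also track forecast error, to see
# whether it is systematically over- or under-committing.
mean_error = sum(s["committed"] - s["completed"] for s in sprints) / len(sprints)

print(f"velocity: {velocity:.1f} points/sprint")
print(f"mean over-commitment: {mean_error:.1f} points/sprint")
```

With these numbers, velocity is 25.0 points per sprint and the team over-commits by 5.0 points per sprint on average. Velocity alone answers "how much should we take on next time?"; the error term is what Harford's keeping-score forecasters would study, since a persistent bias is exactly the kind of past error one can correct for.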
I believe there are a number of software developers in these forums. What is the inside perspective?
Max L.
My inside perspective is that story points aren't an estimate of how much time a developer expects to spend on a story (we have separate time estimates), but an effort to quantify the composite difficulty of the story into some sort of "size" for non-developers (or devs not familiar with the project). As such, it's more of an exercise in expectation management than forecasting.