When estimating the relative size of user stories in agile software development, the members of the team are supposed to estimate the size of a user story as 1, 2, 3, 5, 8, 13, ..., so that the estimated values resemble the Fibonacci sequence. But I wonder: why?
The Wikipedia description of planning poker (http://en.wikipedia.org/wiki/Planning_poker) contains this mysterious sentence:
The reason for using the Fibonacci sequence is to reflect the inherent uncertainty in estimating larger items.
But why should there be inherent uncertainty in larger items? Isn't the uncertainty higher if we take fewer measurements, i.e. if fewer people estimate the same story? And even if the uncertainty is higher for larger stories, why does that imply using the Fibonacci sequence? Is there a mathematical or statistical reason for it? Otherwise, using the Fibonacci sequence for estimation feels like cargo-cult science to me.
2^n might space the numbers too far apart, so why not use the Fibonacci sequence, which is about c*phi^n? – Squib
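
To make that comparison concrete, here is a minimal Python sketch (the function name is just illustrative) contrasting the two scales: consecutive Fibonacci numbers differ by a factor approaching phi ≈ 1.618, while powers of 2 double at every step.

```python
# A minimal sketch: compare the spacing of the Fibonacci scale with 2^n.
# Consecutive Fibonacci numbers differ by a factor approaching
# phi = (1 + sqrt(5)) / 2 ~= 1.618, i.e. F(n) is roughly c * phi**n,
# while powers of 2 double at every step.

def fibonacci_scale(count):
    """Return the first `count` values of the estimation scale 1, 2, 3, 5, 8, ..."""
    a, b = 1, 2
    values = []
    for _ in range(count):
        values.append(a)
        a, b = b, a + b
    return values

fibs = fibonacci_scale(8)
print(fibs)                        # [1, 2, 3, 5, 8, 13, 21, 34]
print([2 ** n for n in range(8)])  # [1, 2, 4, 8, 16, 32, 64, 128]

# Ratio between consecutive estimates: tends toward ~1.618 for Fibonacci,
# and is exactly 2 for powers of 2.
for prev, curr in zip(fibs, fibs[1:]):
    print(f"{curr}/{prev} = {curr / prev:.3f}")
```

So both scales grow exponentially, but the Fibonacci scale spaces adjacent estimates about 60% apart rather than 100%, which is the gap the comment is pointing at.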