One interesting thought came after I read point 8:
Software systems and software products each typically cost 3 times as much per instruction to fully develop as does an individual software program.

I immediately thought about that old estimation rule: estimate how long it will take to be done, and then multiply it by two. I always thought it unserious, a manifestation of intellectual laziness. What, you aren't able to say how long this effing program will take to write? Just sum up the parts (as shown below)!
But let's look at the problem from a slightly different angle: what if programmers do not take into account the cost of polishing the program and making it user-friendly, plus all the problems and dependencies on other programmers located in other departments of the company - "L'enfer, c'est les autres"! Normally you do not take these factors into account when estimating, and you cannot even estimate them properly. We use either a brute-force 20% project buffer or a statistical estimate based on the PERT method, but somehow it doesn't work out that well.
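For reference, the PERT method mentioned above boils down to the classic three-point formula: a weighted mean of the optimistic, most likely, and pessimistic estimates. A minimal sketch (function names and the example numbers are mine, for illustration only):

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    # Three-point (PERT) estimate: weighted mean with the
    # "most likely" value counted four times.
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_stddev(optimistic, pessimistic):
    # Rough standard deviation of the underlying beta distribution.
    return (pessimistic - optimistic) / 6

# Example task: 3 days optimistic, 5 likely, 10 pessimistic.
print(pert_estimate(3, 5, 10))  # 5.5
print(pert_stddev(3, 10))       # ~1.17
```

Note that the pessimistic value already stretches the estimate a little, yet it still only covers the purely technical risk, not the organizational one.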
So maybe we should take the measurement data cited above and use it when estimating? Just multiply our technical-only estimate by the "real world" factor? Personally, I still cannot believe that the "real world" factor could be as high as 3! Am I too optimistic? But I think that f_rw = 2 should be a good choice!
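In code, the whole idea is almost embarrassingly simple: sum up the per-task technical estimates, then scale by the "real world" factor. A hypothetical sketch (the function and the sample numbers are mine, not from Boehm's paper):

```python
def project_estimate(task_estimates, f_rw=2.0):
    # Sum the technical-only estimates for each task, then apply the
    # "real world" factor f_rw to cover polishing, integration, and
    # cross-department dependencies that never appear in a task list.
    return sum(task_estimates) * f_rw

tasks = [5.5, 3.0, 8.0]                   # technical estimates, person-days
print(project_estimate(tasks))            # 33.0 with my f_rw = 2
print(project_estimate(tasks, f_rw=3.0))  # 49.5 with Boehm's factor of 3
```

The uncomfortable part is not the arithmetic, of course, but admitting that the factor belongs in the estimate at all.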
Although I'm a little afraid here, as I always implement a given piece of code faster than my own PERT estimate. But recently, when estimating an entire system, I missed the mark completely: new requirements, unexpected dependencies on not-so-great libraries, the internal pressure to please the customer... It all adds up to a bigger project! At that point I should have multiplied by as much as 4! Welcome to the real world.
* Barry W. Boehm, "Industrial Software Metrics: A Top-Ten List", http://csse.usc.edu/csse/TECHRPTS/1985/usccse85-499/usccse85-499.pdf