Slashdot reports on a survey of a gazillion project managers about why their projects fail. The results are interesting, but the authors also present an all-powerful mathematical formula that lets you slice and dice your project into a single number that supposedly indicates its risk. So far, so statistical.
They (or the ACM, publishing the article) helpfully present a “worked example” of the formula, which uses subjective ratings in six areas, each multiplied by a weighting, then all added together to give you your distance from doom. Great. But they (or the ACM) get their maths wrong.
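If I'm reading the description right, the formula is nothing more exotic than a weighted sum: rate each of the six areas, multiply each rating by its weighting, and add the lot up. A minimal sketch of that idea (the function name is mine, not theirs):

```python
def risk_score(ratings, weights):
    """Weighted sum: one subjective rating and one weighting per area."""
    return sum(r * w for r, w in zip(ratings, weights))
```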
In case the graphic gets fixed later, here's a transcription of the worked example as it stands right now:
Fit between blah blah blah: 5 x 3.0 = 15.2
Level of blah blah: 6 x 1.9 = 11.6
Use of blah: 1 x 1.7 = 1.7
Similarity to blah: 3 x 1.5 = 4.5
Project simblahcity: 7 x 1.1 = 7.4
Stablahlity of blah: 9 x 0.8 = 7.3
Overall blah: 48
Only two of the six multiplications are actually right: 2/6, see me. (And their numbers add up to 47.7, not 48, though that's probably just “rounding”.)
I make the correct answer 47.5, due to my superpower ability to both multiply and add. But hey, it’s all fuzzy semi-meaningless stats since it starts from a subjective score. Simply bias your biases to reduce your risk!
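Don't take my word for it, though; here's a quick Python check of the transcription above (the “blah” labels just stand in for the real category names, only the numbers matter):

```python
# Each row: (abbreviated area name, subjective rating, weighting, product as published).
rows = [
    ("Fit between blah blah blah", 5, 3.0, 15.2),
    ("Level of blah blah",         6, 1.9, 11.6),
    ("Use of blah",                1, 1.7,  1.7),
    ("Similarity to blah",         3, 1.5,  4.5),
    ("Project simblahcity",        7, 1.1,  7.4),
    ("Stablahlity of blah",        9, 0.8,  7.3),
]

correct_total = 0.0
for name, rating, weight, published in rows:
    product = rating * weight
    correct_total += product
    status = "ok" if round(product, 1) == published else f"published {published}"
    print(f"{name}: {rating} x {weight} = {product:.1f} ({status})")

print(f"Correct total: {correct_total:.1f}")                        # 47.5
print(f"Sum of published products: {sum(r[3] for r in rows):.1f}")  # 47.7, not 48
```

Run it and only two rows come back “ok”, the correct total is 47.5, and even their own wrong products only sum to 47.7.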