Hubbard writes about performance measurement in chapter 11. He notes that management typically knows which performance metrics are relevant but has trouble prioritizing among them. Hubbard's proposal is to let management create utility charts of the required trade-offs. For instance, one curve for a programmer could have the on-time completion rate on one axis and the error-free rate on the other (page 214). Management is thus required to document how much one metric must increase to compensate for a drop in the other. The end product of the charts is a single index for measuring employee performance.
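The single-index idea can be sketched with a toy example. The exchange rate and the linear form below are invented for illustration; Hubbard's actual utility curves need not be linear, the point is only that a documented trade-off collapses two metrics into one number.

```python
# Hypothetical trade-off documented by management: a 1-point drop in
# error-free rate must be offset by a 2-point rise in on-time completion
# rate. All numbers here are invented for illustration.
ERROR_FREE_WEIGHT = 2.0  # on-time points per error-free point

def performance_index(on_time_rate: float, error_free_rate: float) -> float:
    """Collapse two metrics (both on a 0-100 scale) into one index."""
    return on_time_rate + ERROR_FREE_WEIGHT * error_free_rate

# Two programmers with different strengths become comparable on one scale.
a = performance_index(on_time_rate=90, error_free_rate=80)  # 250.0
b = performance_index(on_time_rate=70, error_free_rate=90)  # 250.0
```

Under this (made-up) exchange rate the two programmers come out exactly equal, which is precisely the kind of judgment the charts force management to commit to.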
Too bad you didn't get positive feedback. The anticipated praise for discoveries is what keeps scientists going. Purely in terms of money, the smart decision would have been to hide the results from the parents to keep the dollars flowing in.
Hubbard talks about measurement inversion: "In a business case, the economic value of measuring a variable is usually inversely proportional to how much measurement attention it usually gets." This thread contains discussion of possible reasons for it. The ease/familiarity aspect you imply is probably one of them. Others include the declining marginal value of information on a given subject and the destabilizing effect new measurements might have on an organization.
It's easy to imagine that measurement inversion would also apply to common measurements in personal life.
I read the chapter on diet. Adams claims that "science has demonstrated that humans have a limited supply of willpower." This idea is important for at least this chapter. However, as Robert Kurzban has noted, it is a weak theory that cannot be falsified. I would prefer Adams to treat the concept of willpower as another useful self-delusion that helps one optimize one's systems.
I like the coin example. In my experience, a situation with one clear best choice is typical in small businesses. It often isn't worth honing the valuation models for projects very long when it is very improbable that the presumed second-best choice would turn out to be the best.
I guess the author is used to working for bigger companies that do everything on a larger scale and thus generally have more options to choose from. Nothing in the chapter is untrue, but this point could have been made explicit.
Douglas W. Hubbard has a book titled How to Measure Anything, where he states that half a day of confidence-interval calibration exercises makes most people nearly perfectly calibrated. As you noted, and as is said here, that method fits nicely with Fermi estimates.
This combination seems to have a great ratio between training time and usefulness.
I estimated how much the population of Helsinki (the capital of Finland) grew in 2012. I knew from the news that the growth rate is considered steep.
I knew there are currently about 500 000 inhabitants in Helsinki. I set the upper bound at a 3 % growth rate, or 15 000 new residents per year: at that rate the city would grow twentyfold in 100 years, which is too much, but the rate might be steeper right now. For the lower bound I chose 1 000 new residents; I felt that anything less couldn't really produce any news. The approximate geometric mean (AGM) of the bounds is 3 750.
My second method was to go through the number of new apartments. Here I just checked that in recent years about 3 000 apartments have been built yearly. Guessing that the average household size might be two persons, I got 6 000 new residents.
It turned out that the population grew by 8 300 residents, the highest figure in 17 years; in other recent years it has been around 6 000. So both methods worked well. Both have the benefit that one doesn't need to separate natural growth (births minus deaths) from migration, nor consider how many people move out versus how many move in.
Obviously I was much more confident in the second method, which makes me think that applying confidence intervals to Fermi estimates would be useful.
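The two estimates above can be reproduced in a few lines. All the figures come from the estimate itself; the exact geometric mean of the bounds comes out near 3 900, so the hand-computed approximate value of 3 750 is in the right ballpark.

```python
import math

# First method: geometric mean of the plausible bounds for yearly growth.
lower, upper = 1_000, 15_000
gm = math.sqrt(lower * upper)  # exact value ~3 873; the hand AGM was 3 750

# Second method: new apartments per year times a guessed household size.
apartments_per_year = 3_000
household_size = 2  # a guess, not a measured figure
apartment_estimate = apartments_per_year * household_size  # 6 000
```

The second method lands close to the typical recent growth of about 6 000, which matches the observation that it deserved the narrower confidence interval.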