Path dependence is the phenomenon often invoked to explain why people persist with practices that are no longer optimal or economically rational. Statistics is one area where path dependence has struck: the techniques that students learn in school, that practitioners apply in industry, and that researchers use in journal publications are often not the best or most appropriate ones, but simply the ones that continue to be used because they have always been used. Douglas Hubbard's How to Measure Anything (2010) attempts to update some of those techniques for the 21st century. In addition, he offers some refreshing perspectives on behavioral finance and the biases that adversely affect decision makers, even the so-called professional decision makers at the executive level in industry and government. Finally, he offers a unifying framework for decomposing complex problems into individual variables, assessing the value of reducing the uncertainty around each of those variables, measuring those variables, and finally determining probabilities through Monte Carlo simulations and Bayesian statistics.

Starting with antiquated statistical techniques: every former stats 101 student probably remembers working through some type of hypothesis-testing exercise, such as testing whether a coin is fair, a drug works, or voters prefer a candidate. Such a test takes a null hypothesis, such as the assumption that a coin is fair, flips the coin multiple times, and then determines the probability of observing that series of outcomes if the coin were fair. If the probability of observing the series of outcomes on a supposedly fair coin is less than some arbitrary threshold, usually five percent, the experimenter rejects the null hypothesis and concludes that the coin is not fair. For example, the probability of observing five heads in five flips of a fair coin is 3.13%, which would lead an experimenter using the five percent threshold to reject the null hypothesis that the coin is fair. The five percent shibboleth comes from the statistician Sir Ronald Fisher's 1925 book Statistical Methods for Research Workers. He wrote a year later in "The Arrangement of Field Experiments" (1926) that the threshold was arbitrary and that other thresholds could be used; the damage had been done, however, and the five percent threshold remains a venerated relic.
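
To make the mechanics concrete, here is a minimal sketch in Python of the one-sided test described above; the `binomial_p_value` helper is my own illustration, not code from the book:

```python
from math import comb

def binomial_p_value(heads: int, flips: int, p_fair: float = 0.5) -> float:
    """P(observing `heads` or more heads in `flips` tosses of a fair coin)."""
    return sum(comb(flips, k) * p_fair**k * (1 - p_fair)**(flips - k)
               for k in range(heads, flips + 1))

p = binomial_p_value(5, 5)
print(f"p-value: {p:.5f}")  # 0.03125 < 0.05 -> reject the null hypothesis
```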

The whole process is a convoluted way to approximate the answer to a more useful question: what is the probability of getting heads on a given coin? The Bayesian approach to statistics, in contrast to the frequentist approach just described, seeks to answer exactly that. Mr. Hubbard notes that the term "Bayesian" was first used by Fisher himself as a derogatory reference to adherents of the approach named after Rev. Thomas Bayes, who is credited with the first formulation of how new evidence can be used to update prior beliefs. In Bayesian statistics, new evidence is used to update prior assumptions about probabilities.
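
For comparison, here is a minimal sketch of Bayesian updating for the same coin, using the standard Beta-Binomial conjugate prior; this is my illustration of the approach, not an example taken from the book:

```python
# Start with a uniform Beta(1, 1) prior over the coin's heads probability,
# then update it with the observed flips (five heads, zero tails).
alpha, beta = 1, 1            # uniform prior: no opinion about the bias
heads, tails = 5, 0           # the evidence

alpha += heads                # posterior is Beta(alpha + heads, beta + tails)
beta += tails

posterior_mean = alpha / (alpha + beta)
print(f"Posterior mean P(heads) = {posterior_mean:.3f}")  # 0.857
```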

Once the distributions of the relevant variables or drivers are better known, Mr. Hubbard postulates a relationship between the variables and generates a hypothetical distribution of the phenomenon one is trying to predict using Monte Carlo simulations. The technique was first developed to solve intractable problems in nuclear physics; modern computing power has since made it accessible to anyone with a personal computer and an Excel spreadsheet. Instead of trying to compute the probability of a phenomenon analytically, such as rolling a two with a pair of dice ("snake eyes"), Monte Carlo simulations flip the problem by simulating thousands or perhaps millions of rolls and then determining what percentage of the rolls were twos. With an Excel spreadsheet, Mr. Hubbard shows how to calculate distributions and expected values for complex phenomena after estimating the distributions of the underlying variables and their relationships. Monte Carlo simulation is seldom taught in introductory statistics courses; the topic is usually reserved for advanced and special-topics classes, even though the basics of the technique are no more complicated than regression modeling and several other topics covered in introductory classes.
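
Mr. Hubbard builds his simulations in Excel; the same snake-eyes estimate might look like this in Python (the trial count here is arbitrary):

```python
import random

# Estimate P(snake eyes) by simulating many rolls of a pair of dice
# instead of computing the exact answer (1/36, about 2.78%) analytically.
trials = 1_000_000
snake_eyes = sum(
    random.randint(1, 6) == 1 and random.randint(1, 6) == 1
    for _ in range(trials)
)
print(f"Estimated P(snake eyes) = {snake_eyes / trials:.4f}")  # ~0.0278
```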

With new statistical tools in tow, Mr. Hubbard then sets forth on deciding what to measure. Here he again notes a pernicious tendency among decision makers to measure either what is easy to measure or what they are already familiar with. The solution, Mr. Hubbard argues, is to triage variables before trying to reduce uncertainty about them, by introducing metrics that quantify the costs and benefits of acquiring additional information about each variable. He starts with the Expected Value of Perfect Information (EVPI): what would it be worth to know a presently unknown quantity with complete certainty? He then works backwards to determine the incremental Expected Cost of Information (ECI) and the incremental Expected Value of Information (EVI). Finally, he adds a time component, noting that for some decisions the value of information is perishable. Adding the time component, Mr. Hubbard notes, can prevent what pioneering decision theorist Howard Raiffa called "solving the right problem too late."
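
A toy EVPI calculation, with payoffs and probabilities invented purely for illustration, might look like this; the book's own worked examples differ:

```python
# A hypothetical project pays $1M if it succeeds and loses $400K if it
# fails; we currently believe P(success) = 0.6.
p_success = 0.6
payoff_success, payoff_failure = 1_000_000, -400_000

# Best expected value under uncertainty: accept or reject outright.
ev_accept = p_success * payoff_success + (1 - p_success) * payoff_failure
ev_uncertain = max(ev_accept, 0)  # rejecting the project yields $0

# With perfect information we accept only when the project will succeed.
ev_perfect = p_success * payoff_success + (1 - p_success) * 0

evpi = ev_perfect - ev_uncertain
print(f"EVPI = ${evpi:,.0f}")  # $160,000: the most this measurement is worth
```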

In addition to the tendency to measure the wrong things, and in the wrong amounts, Mr. Hubbard notes several other behavioral and cognitive biases, such as expectancy bias and overconfidence. Instead of merely rehashing problems that have already been noted extensively in the behavioral finance literature, he goes further and offers solutions, especially to the problems of overconfidence and quantifying uncertainty. When asked to give a 90% confidence interval for an unknown quantity, such as the wingspan of a Boeing 747 aircraft, most people choose too narrow a range. Mr. Hubbard shows that, with training, the average person can estimate ranges for unknown quantities such that the true value falls within the estimated range 90% of the time. The training, called "calibration training," is simple to conduct and has a tremendous success rate. Given how much time and money organizations spend sending executives to conferences, hiring executive coaches, and giving them physical and psychological assessments, they should probably spend more of it training those executives to become better decision makers.
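
A back-of-the-envelope way to score such an exercise, with questions and bounds invented here for illustration, could look like the following:

```python
# Score a set of 90% confidence-interval estimates: a well-calibrated
# estimator should capture the true value in roughly 90% of intervals.
estimates = [
    # (lower bound, upper bound, true value)
    (50, 70, 64),                  # Boeing 747 wingspan, meters
    (1900, 1950, 1903),            # year of the Wright brothers' first flight
    (300_000, 500_000, 384_400),   # mean Earth-Moon distance, km
]

hits = sum(lo <= truth <= hi for lo, hi, truth in estimates)
print(f"Hit rate: {hits / len(estimates):.0%}")  # aim for ~90% over many questions
```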

When CFA Society Chicago's Book Club met to discuss Mr. Hubbard's book in April 2017, most of the participants welcomed his fresh approach to quantitative and empirical problem solving. If there were any misgivings about the book, it was that it didn't fully live up to its subtitle, "Finding the Value of Intangibles in Business." The participants would have welcomed more examples of how the techniques described could be used to value business units or firms that make intensive use of intangibles such as brand identity and intellectual property.

Hopefully, this won't be the last time that Mr. Hubbard crosses paths with the Society, and we'll get to see that promise fulfilled.