On a daily basis we all find ourselves making decisions under uncertainty, and this is particularly true in business, where complexity can be overwhelming. As human beings we need to simplify, and because decisions have to be made, we make them after a process of simplification. At one extreme we might use complex machine learning methods such as Bayesian Networks to try to establish causal links; at the other we might just look at a graph and reach some form of conclusion. Either way randomness, the meaningless noise in our environment, is easily interpreted as signal – as something meaningful. A simple example will help.
Suppose you work for a firm whose monthly revenues have largely been flat. They do of course vary from month to month, but not by more than 10%, say. It is quite feasible that six months of rising revenue will show up and, believing that the business has got a second wind, managers decide to employ more people and invest in additional production capacity. The problem is that six consecutive months of rising revenue will happen purely at random roughly every five years or so. It's easy to work out. Looking back over the trading history, management see that revenue for one month is higher than the previous month's 50% of the time – on average. Meaning that it is also lower 50% of the time. It's a bit like flipping coins. Under that simplification, a run of six tails has a probability of 1 in 64, so it will show up on average once every 64 flips. Or for our business, it will see six consecutive months of rising revenue roughly once every 64 months – on average. The opposite is also true – six consecutive months of falling revenue every 64 months or so. Obviously this is very simplified, but it illustrates an important point. Seemingly unusual things can happen, purely by accident, with no inherent meaning at all. Anyone who has run a business will know that random variations can suggest all sorts of things – most of which are meaningless.

Statisticians use something called a p-value to try to weed out these random variations, but it isn't all that helpful. A threshold of 5% is often used as a standard, meaning that if the observed pattern would occur by chance less than 5% of the time, we treat it as meaningful. This wouldn't have helped in our example: the probability of the six-month run is 1 in 64, or roughly 1.6%, so the purely random streak would slip under the 5% threshold and be declared significant anyway.
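If you want to check the arithmetic rather than take it on trust, the coin-flip simplification is easy to simulate. The short sketch below is not from the original text; it assumes the article's model exactly as stated – each month's revenue is equally likely to be up or down on the previous month, with the comparisons treated as independent coin flips – and the MONTHS and RUN parameters are arbitrary choices for illustration. It counts how often a window of six consecutive rises appears and compares the streak's probability with the 5% p-value threshold.

```python
import random

MONTHS = 1_000_000   # length of the simulated trading history (arbitrary)
RUN = 6              # six consecutive month-on-month rises

# The article's simplification: each month is up or down on the previous
# month with probability 1/2, and the comparisons are independent.
ups = [random.random() < 0.5 for _ in range(MONTHS)]

# Count every window of RUN consecutive rises (windows may overlap).
windows = sum(all(ups[i:i + RUN]) for i in range(MONTHS - RUN + 1))

print(f"six-rise windows per month: {windows / MONTHS:.4f}")
print(f"i.e. roughly one every {MONTHS / windows:.0f} months (theory: 2**6 = 64)")

# The same run has probability 1/64, about 1.6% - below the usual 5%
# significance threshold, which is why the p-value test discussed in the
# text would not have flagged this random streak as noise.
print(f"probability of the run by chance: {1 / 2**RUN:.1%}")
```

Run over a long enough history, the simulated spacing settles close to the 64 months quoted above; a more realistic revenue model (noise around a flat mean rather than a random walk of coin flips) would give different numbers, but the point about random streaks being mistaken for trends would stand.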
And so business managers have to apply judgement. All the charts and graphs in the world cannot eliminate uncertainty. If a firm has just released a new product and hired a new Sales Director, then perhaps a steady rise in revenue is more plausibly meaningful. The hard fact of the matter, though, is that you will never know with certainty. While gut feeling is currently unfashionable as a way of informing decisions, there is growing evidence that it is often more powerful than we might imagine. Gerd Gigerenzer and others have conducted substantial studies into the power of gut instinct, and find that it often outperforms rigorous analysis – although 'rigorous' is never really accurate, since all analysis is subject to uncertainty.
There is a growing realization that 'evidence-based decisioning' is often flawed, simply because people are blind to the uncertainties and ignore the human factor in decision making. The most powerful approach seems to be a combination of formal analysis and human judgement. This does not imply that the two will necessarily agree, but that a middle position can be found that is stronger than either approach on its own.
There is much literature dealing with these issues, and some very readable and entertaining books. These include The Flaw of Averages by Sam Savage, The Signal and the Noise by Nate Silver, and Fooled by Randomness by Nassim Nicholas Taleb.