Big data, business intelligence, data mining, data visualization, reporting, Enterprise Performance Management, statistics and so on are all attempts to do one thing: improve decision making in the face of uncertainty. If there were no uncertainty, we could load our databases and spreadsheets with the appropriate plans and all go home. Sales would proceed as expected, as would production, procurement and every other business activity.
Uncertainty can be defined as a lack of information; conversely, information can be defined as whatever reduces uncertainty. And so the race is on to develop technologies and methods that reduce uncertainty, so that marketing activities are better targeted, customers are presented with more attractive options, suppliers do what they say they will do, and overall uncertainty diminishes.
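This view of information as uncertainty reduction can be made precise with Shannon entropy. The sketch below uses a hypothetical example (the product names and probabilities are invented for illustration): before we learn anything, a customer is equally likely to buy any of four products; after some new data arrives, the distribution sharpens, and the drop in entropy is the information gained.

```python
import math

def entropy(probs):
    """Shannon entropy in bits: a measure of uncertainty in a distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical: which of four products will a customer buy?
before = [0.25, 0.25, 0.25, 0.25]   # no information: maximum uncertainty, 2 bits
after = [0.70, 0.10, 0.10, 0.10]    # after observing, say, a survey response

gained = entropy(before) - entropy(after)
print(f"{entropy(before):.2f} bits -> {entropy(after):.2f} bits "
      f"(information gained: {gained:.2f} bits)")
```

Any data that shifts the distribution toward certainty shows up here as a positive gain; data that leaves the distribution unchanged carries zero information in this sense.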
The art and science of uncertainty reduction is not the least bit straightforward, however. The algorithms used to mine data for uncertainty-reducing patterns are often sophisticated, and in inexperienced hands they often deliver patterns that are simply inaccurate. It takes experience, knowledge and skill to sort the truly useful patterns from the mirages that many analytical methods produce. Worse, we are poorly equipped to deal with the random and the uncertain, forever looking for patterns where none exist. This is one of the greatest dangers associated with the current fad for eye-candy-rich data visualization tools. Daniel Kahneman, in his book 'Thinking, Fast and Slow', provides plenty of evidence for the mistakes we make when looking for causes where none exist.
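How easily do mirages arise? A minimal sketch (the sample sizes are arbitrary, chosen only for illustration): generate a table of purely random, unrelated "metrics" and search for correlations among them. With enough pairs to compare, some correlation will look impressively strong by chance alone, which is exactly the trap that both naive data mining and unguided visual exploration fall into.

```python
import numpy as np

# 100 columns of pure noise: 4,950 pairwise comparisons, none of them real.
rng = np.random.default_rng(42)
data = rng.normal(size=(50, 100))     # 50 rows, 100 random "metrics"

corr = np.corrcoef(data, rowvar=False)  # correlation between every pair of columns
np.fill_diagonal(corr, 0.0)             # ignore each column's correlation with itself
strongest = np.abs(corr).max()

print(f"Strongest correlation found in pure noise: {strongest:.2f}")
```

A correlation of this size between two business metrics would tempt almost anyone to look for a causal story, yet here it is guaranteed to be meaningless. The standard defenses are to correct for the number of comparisons made and to test any discovered pattern on data it was not mined from.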
This is a brave new era for business, and mistakes will be made, simply because our everyday concepts are poorly matched with the probabilistic world we live in. As always, the catch-up process will be slow, but the advantage will undoubtedly go to those willing to embrace new ideas and technology.