# How to boost your forecast accuracy

*by Wolf-Gerrit Benkendorff*

More and more companies are experiencing the growing dynamics and uncertainty of the economic environment first hand. The buzzword *volatility*, as an expression of this phenomenon, is very popular. As volatility increases, forecast accuracy becomes more important. What could be more appropriate than to continuously improve it?

The good news is: Forecast accuracy can be increased significantly with just a few simple means. The even better news is: No complex mathematical-statistical models, expensive software or supernatural abilities are required for the first step – although these may help later on.

**How to measure deviations systematically**

Systematic measurement is the prerequisite for any kind of improvement. However, measurements that deserve the label «systematic» rarely happen in practice. Often, forecast accuracy is «measured» simply as the difference between the forecast and the year-end result. In worse cases, this is complemented by a comparison of the «accuracies» of forecasts 1 to 3. Such «measurement methods» provide no insight into how to increase forecast accuracy (for a good introduction to systematic measurement, see ForPrin).
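To make the idea of a systematic measurement concrete, here is a minimal sketch of a deviation log: for each forecast cycle we record the forecast and the later actual, and derive the signed deviation and the relative error. The figures and names are purely illustrative, not from the article.

```python
# Hypothetical forecast cycles: (cycle, forecast, actual).
forecasts = [
    ("2023-Q1", 105.0, 98.0),
    ("2023-Q2", 110.0, 101.0),
    ("2023-Q3", 108.0, 104.0),
]

def deviation_log(records):
    """Return one entry per cycle with signed and relative deviation."""
    log = []
    for cycle, forecast, actual in records:
        deviation = forecast - actual            # sign shows the direction of the error
        pct_error = deviation / actual * 100.0   # relative size of the error
        log.append({"cycle": cycle, "deviation": deviation,
                    "pct_error": round(pct_error, 1)})
    return log

for row in deviation_log(forecasts):
    print(row)
```

Keeping the sign of each deviation, rather than only its absolute size, is what later makes systematic (one-sided) errors visible.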

**The more frequently we measure, the better**

The more frequently measurements are conducted, the more data is available for assessing accuracy. As a result, forecast errors can be identified and corrected in a timely manner.

Here is a simple example: If four successive forecasts show deviations from the actuals with the same sign, a systematic forecast error must be assumed.* One reason for a systematic forecast error can be that conservative forecasts are politically unacceptable, leading to the continuous production of overoptimistic forecasts. If only three forecasts are created per year, it takes more than a year to detect a systematic forecast error!
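The four-in-a-row rule above can be sketched as a small check over a series of signed deviations. The deviations below are illustrative values, and the rule of requiring four equal signs follows the example in the text; zeros are treated here (by assumption) as breaking a run.

```python
def has_systematic_error(deviations, run_length=4):
    """Flag a suspected systematic error if `run_length` successive
    deviations share the same sign (a zero deviation breaks the run)."""
    run = 0
    last_sign = 0
    for d in deviations:
        sign = (d > 0) - (d < 0)          # +1, -1 or 0
        if sign != 0 and sign == last_sign:
            run += 1                       # run of equal signs continues
        else:
            run = 1 if sign != 0 else 0    # run restarts (or resets on zero)
        last_sign = sign
        if run >= run_length:
            return True
    return False

print(has_systematic_error([3.0, 1.5, 2.2, 0.8]))   # four positives -> True
print(has_systematic_error([3.0, -1.5, 2.2, 0.8]))  # mixed signs -> False
```

With monthly forecasts, such a check can raise a flag after four months; with only three forecasts per year, the same signal needs more than a year to appear.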

**Analyzing all deviations**

If forecast errors occur, we have to analyze their causes. Such analyses require well-documented assumptions. If it cannot be retraced why a forecast proved incorrect in retrospect, there is no learning effect.

In the deviation analysis, positive and negative deviations must be examined with the same intensity.

This also applies to large and small deviations. Even if it doesn’t seem to make much sense at first: Forecasts with small deviations should be analyzed as well (at least cursorily). Otherwise, the possibility cannot be excluded that the accuracy was achieved despite false assumptions, for example if the false assumptions cancel each other out. In that case it was merely luck.

**Testing the adjustments**

Once the continuous measurement and analysis process is established, adjustments to forecast methods, processes and models can be tested for their effectiveness. In this way, it can be checked whether statistical methods based on historical data are superior to mere expert estimates, or which forecast methods can be usefully combined (empirical studies show that combining several forecast methods leads to higher accuracy).
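As a minimal sketch of forecast combination, the snippet below averages two hypothetical methods and compares mean absolute errors. The series, the equal-weight average, and the outcome are illustrative assumptions only; in this constructed example the methods err in opposite directions, so their errors partly cancel – which is exactly the effect combination studies describe, though it is not guaranteed for every data set.

```python
actuals  = [100.0, 104.0, 98.0, 107.0]
method_a = [108.0, 110.0, 95.0, 115.0]   # e.g. an expert estimate (hypothetical)
method_b = [ 95.0, 100.0, 103.0, 101.0]  # e.g. a statistical model (hypothetical)

def mae(forecast, actual):
    """Mean absolute error between a forecast series and the actuals."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

# Equal-weight combination of the two methods.
combined = [(a + b) / 2 for a, b in zip(method_a, method_b)]

print(f"MAE method A: {mae(method_a, actuals):.2f}")
print(f"MAE method B: {mae(method_b, actuals):.2f}")
print(f"MAE combined: {mae(combined, actuals):.2f}")
```

A continuous measurement process supplies exactly the error figures needed to decide, on evidence rather than opinion, whether such a combination beats either method alone.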

**Does it really work?**

Yes, it really works! Together with a customer, I established a monthly forecast measurement and analysis process for sales. After just four months, it became evident that systematic forecast errors existed in almost all regions. The forecasts in most regions were generally too optimistic.

Where there used to be a vague suspicion, there is now certainty. This is a sound basis, and the motivation, for improving the current forecast methods, processes and models.

*A forecast without systematic errors will on average produce as many positive deviations from the actuals as negative ones. In such a forecast, the probability of four successive deviations with the same sign is only 0.5⁴ = 6.25%.