We also looked at various flavors of the MAPE and the wMAPE, but let's concentrate on the sMAPE here. While fixing the asymmetry of boundlessness, the sMAPE introduces another, more delicate kind of asymmetry, caused by the denominator of the formula. Consider two cases. In the first one, we have A = 100 and F = 120, giving an sMAPE of 18.2%. Now a very similar case, in which we have A = 100 and F = 80: here the sMAPE is 22.2%. An under-forecast of a given size is thus penalized more heavily than an over-forecast of the same size. The sMAPE also makes the same assumptions as the MAPE regarding a meaningful zero value.

In case it is interesting: we once wrote a little paper (see also this presentation) that explained how minimizing percentage errors can lead to forecasting bias, by rolling standard six-sided dice. Here is a plot where we simulate "sales" by rolling $n=8$ six-sided dice $N=1,000$ times and plot the average sMAPE, together with pointwise quantiles. Proving that any forecast $\hat{y}>1$ will lead to a larger expected sAPE than $\hat{y}=1$ seems to be a little tedious. I don't think there is a closed-form solution to this question. (I'd be interested in being proven wrong.) I'd assume you will need to simulate, and hope that your predictive posterior is not misspecified too badly.

Measures on the training set (the training sample) are not really suitable as a basis for model selection. Here is why: in the training sample it is always possible to overfit, and the richer the model, the better the fit will be. Use the AIC or BIC rather than the MAPE or MASE from the training set.

I then ran the MeanAbsolutePercentageError class with the default symmetric=False (which, I assume, makes it asymmetric just like the MAPE), but I get 0.13189432350948402, which I would expect from the sMAPE, but not with the flag set to False.
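To make the asymmetry concrete, here is a small self-contained Python sketch. It assumes the common sAPE definition 2|F - A| / (A + F); the function names, the seed, and the particular fixed forecasts (38 and 18) are mine, chosen for illustration. It computes the sAPE for the two cases above and reruns a dice-rolling simulation in the same spirit as the one described:

```python
import random

# sAPE with the "mean of actual and forecast" denominator: 2|F - A| / (A + F)
def sape(actual, forecast):
    return 2 * abs(forecast - actual) / (actual + forecast)

# The two cases from the text: over- and under-forecasts of equal size (20).
print(round(sape(100, 120), 4))  # 0.1818 -> 18.2% sAPE for the over-forecast
print(round(sape(100, 80), 4))   # 0.2222 -> 22.2% sAPE for the under-forecast

# Simulate "sales" as the sum of n = 8 six-sided dice, N = 1,000 times.
random.seed(1)
N = 1_000
sales = [sum(random.randint(1, 6) for _ in range(8)) for _ in range(N)]

# Average sAPE of a fixed forecast over all simulated sales.
def smape(forecast, actuals):
    return sum(sape(a, forecast) for a in actuals) / len(actuals)

mean_sales = sum(sales) / N  # close to the theoretical mean of 28

# Two forecasts equally far from the mean: the high one scores better,
# because larger forecasts inflate the sAPE denominator.
print(smape(38, sales) < smape(18, sales))  # True
```

This is only a sketch of the effect, not the simulation behind the quoted plot; the pointwise quantiles mentioned in the text could be added by collecting the per-draw sAPE values instead of averaging them.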