median_squared_scaled_error(y_true, y_pred, sp=1, horizon_weight=None, multioutput='uniform_average', square_root=False, **kwargs)

Median squared scaled error (MdSSE) or root median squared scaled error (RMdSSE).

If square_root is False, calculates MdSSE; if square_root is True, calculates RMdSSE. Both MdSSE and RMdSSE output non-negative floating point values. The best value is 0.0.

This is a squared variant of the MdASE loss metric. Like MASE and other scaled performance metrics, this scale-free metric can be used to compare forecast methods on a single series or between series.

This metric is also suited for intermittent-demand series because it does not give infinite or undefined values unless the training data is a flat time series. In that case the function returns a large finite value instead of inf.

Works with multioutput (multivariate) time series data with homogeneous seasonal periodicity.
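The computation can be sketched in plain NumPy. This is not sktime's implementation; it assumes the median squared forecast error is scaled by the median squared error of the sp-step (seasonal) naive forecast on the training data:

```python
import numpy as np

def mdsse_sketch(y_true, y_pred, y_train, sp=1, square_root=False):
    """Rough sketch of (R)MdSSE for a univariate series.

    Assumption (not taken from this page): the scaling denominator is
    the median squared error of the sp-step naive forecast on y_train.
    """
    y_true, y_pred, y_train = map(np.asarray, (y_true, y_pred, y_train))
    # Squared errors of the sp-step naive forecast on the training series.
    naive_sq = np.square(y_train[sp:] - y_train[:-sp])
    # Median squared forecast error, scaled by the naive benchmark.
    result = np.median(np.square(y_true - y_pred)) / np.median(naive_sq)
    return np.sqrt(result) if square_root else result

y_train = np.array([5, 0.5, 4, 6, 3, 5, 2])
y_true = np.array([3, -0.5, 2, 7, 2])
y_pred = np.array([2.5, 0.0, 2, 8, 1.25])
rmdsse = mdsse_sketch(y_true, y_pred, y_train, square_root=True)
```

Because the denominator is a median rather than a mean, a few large jumps in the training series do not inflate the scale, which is part of what makes the metric robust for intermittent-demand data.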

Parameters

y_true : pd.Series, pd.DataFrame or np.array of shape (fh,) or (fh, n_outputs), where fh is the forecasting horizon

Ground truth (correct) target values.

y_pred : pd.Series, pd.DataFrame or np.array of shape (fh,) or (fh, n_outputs), where fh is the forecasting horizon

Forecasted values.

y_train : pd.Series, pd.DataFrame or np.array of shape (n_timepoints,) or (n_timepoints, n_outputs), default=None

Observed training values.


sp : int, default=1

Seasonal periodicity of training data.

horizon_weight : array-like of shape (fh,), default=None

Forecast horizon weights.

multioutput : {'raw_values', 'uniform_average'} or array-like of shape (n_outputs,), default='uniform_average'

Defines how to aggregate the metric for multivariate (multioutput) data. If array-like, values are used as weights to average the errors. If 'raw_values', returns a full set of errors in case of multioutput input. If 'uniform_average', errors of all outputs are averaged with uniform weight.
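The aggregation semantics can be illustrated with plain NumPy. The per-output values below reuse the 'raw_values' output from the example further down; this only shows how the weights combine per-output values, not the internal order of operations in sktime:

```python
import numpy as np

# Per-output errors, as multioutput='raw_values' would return them.
raw = np.array([0.08687445, 0.20203051])

# multioutput='uniform_average': plain mean over outputs.
uniform = raw.mean()

# multioutput=[0.3, 0.7]: weighted average over outputs.
weighted = np.dot([0.3, 0.7], raw)
```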


Returns

MdSSE or RMdSSE loss. If multioutput is 'raw_values', then the loss is returned for each output separately. If multioutput is 'uniform_average' or an ndarray of weights, then the weighted average loss of all output errors is returned.


References

M5 Competition Guidelines.

Hyndman, R. J and Koehler, A. B. (2006). “Another look at measures of forecast accuracy”, International Journal of Forecasting, Volume 22, Issue 4.


Examples

>>> import numpy as np
>>> from sktime.performance_metrics.forecasting import median_squared_scaled_error
>>> y_train = np.array([5, 0.5, 4, 6, 3, 5, 2])
>>> y_true = np.array([3, -0.5, 2, 7, 2])
>>> y_pred = np.array([2.5, 0.0, 2, 8, 1.25])
>>> median_squared_scaled_error(y_true, y_pred, y_train=y_train, square_root=True)
>>> y_train = np.array([[0.5, 1], [-1, 1], [7, -6]])
>>> y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
>>> y_pred = np.array([[0, 2], [-1, 2], [8, -5]])
>>> median_squared_scaled_error(y_true, y_pred, y_train=y_train, square_root=True)
>>> median_squared_scaled_error(y_true, y_pred, y_train=y_train,
...     multioutput='raw_values', square_root=True)
array([0.08687445, 0.20203051])
>>> median_squared_scaled_error(y_true, y_pred, y_train=y_train,
...     multioutput=[0.3, 0.7], square_root=True)