evaluate

evaluate(forecaster, cv, y, X=None, strategy: str = 'refit', scoring: callable | list[callable] | None = None, return_data: bool = False, error_score: str | int | float = nan, backend: str | None = None, cv_X=None, backend_params: dict | None = None)

Evaluate forecaster using time series cross-validation.

All-in-one statistical performance benchmarking utility for forecasters, which runs a simple backtest experiment and returns a summary pd.DataFrame.

The experiment run is as follows:

Denote by \(y_{train, 1}, y_{test, 1}, \dots, y_{train, K}, y_{test, K}\) the train/test folds produced by the generator cv.split_series(y). Denote by \(X_{train, 1}, X_{test, 1}, \dots, X_{train, K}, X_{test, K}\) the train/test folds produced by the generator cv_X.split_series(X) (if X is None, consider these to be None as well).

  1. Initialize the counter to i = 1

  2. Fit the forecaster to \(y_{train, 1}\), \(X_{train, 1}\), with fh set to the absolute indices of \(y_{test, 1}\).

  3. Use the forecaster to make a prediction y_pred with the exogenous data \(X_{test, i}\). Predictions are made using either predict, predict_proba or predict_quantiles, depending on scoring.

  4. Compute the scoring function on y_pred versus \(y_{test, i}\)

  5. If i == K, terminate, otherwise

  6. Set i = i + 1

  7. Ingest more data \(y_{train, i}\), \(X_{train, i}\); how this is done depends on strategy:

  • if strategy == "refit", reset and fit forecaster via fit, on \(y_{train, i}\), \(X_{train, i}\) to forecast \(y_{test, i}\)

  • if strategy == "update", update forecaster via update, on \(y_{train, i}\), \(X_{train, i}\) to forecast \(y_{test, i}\)

  • if strategy == "no-update_params", forward forecaster via update, with argument update_params=False, to the cutoff of \(y_{train, i}\)

  8. Go to 3

The results returned by this function are:

  • results of the scoring calculations, from step 4, in the i-th loop

  • runtimes for fitting and/or predicting, from steps 2, 3, 7, in the i-th loop

  • the cutoff state of the forecaster, at step 3, in the i-th loop

  • \(y_{train, i}\), \(y_{test, i}\), y_pred (optional)

A distributed and/or parallel backend can be chosen via the backend parameter.
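
For illustration, the refit backtest in steps 1 to 8 corresponds roughly to the following hand-rolled loop. This is a minimal sketch only; the variables f and scores are illustrative, and unlike evaluate it does not record runtimes or cutoffs, pass exogenous data, catch errors, or parallelize.

>>> from sktime.datasets import load_airline
>>> from sktime.forecasting.naive import NaiveForecaster
>>> from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
>>> from sktime.split import ExpandingWindowSplitter
>>> y = load_airline()[:24]
>>> forecaster = NaiveForecaster(strategy="mean", sp=3)
>>> cv = ExpandingWindowSplitter(initial_window=12, step_length=6, fh=[1, 2, 3])
>>> scoring = MeanAbsolutePercentageError(symmetric=True)
>>> scores = []
>>> for y_train, y_test in cv.split_series(y):
...     f = forecaster.clone()  # "refit": a fresh clone is fitted on each train fold
...     f.fit(y_train, fh=y_test.index)  # fh = absolute indices of the test fold
...     y_pred = f.predict()
...     scores.append(scoring(y_test, y_pred))  # step 4: score y_pred vs y_test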

Parameters:
forecaster : sktime BaseForecaster descendant (concrete forecaster)

sktime forecaster to benchmark

cv : sktime BaseSplitter descendant

determines the split of y, and possibly X, into test and train folds. y is always split according to cv, see above. If cv_X is not passed, X splits are subset to loc indices equal to those of y; if cv_X is passed, X is split according to cv_X.

y : sktime time series container

Target (endogenous) time series used in the evaluation experiment

X : sktime time series container, of same mtype as y

Exogenous time series used in the evaluation experiment

strategy : {“refit”, “update”, “no-update_params”}, optional, default=”refit”

defines the ingestion mode when the forecaster sees new data as the window expands:

  • “refit” = forecaster is refitted to each training window

  • “update” = forecaster is updated with training window data, in the sequence provided

  • “no-update_params” = fit to the first training window, then re-used without fit or update

scoring : subclass of sktime.performance_metrics.BaseMetric or list of same, default=None

Used to get a score function that takes y_pred and y_test arguments and accepts y_train as a keyword argument. If None, then uses scoring = MeanAbsolutePercentageError(symmetric=True).

return_data : bool, default=False

If True, returns three additional columns in the DataFrame, whose cells each contain a pd.Series for y_train, y_pred, and y_test.

error_score : “raise” or numeric, default=np.nan

Value to assign to the score if an exception occurs in estimator fitting. If set to “raise”, the exception is raised. If a numeric value is given, FitFailedWarning is raised.

backend : {“dask”, “loky”, “multiprocessing”, “threading”}, by default None.

Runs parallel evaluate if specified and strategy is set to “refit”.

  • “None”: executes the loop sequentially, via a simple list comprehension

  • “loky”, “multiprocessing” and “threading”: uses joblib.Parallel loops

  • “joblib”: custom and 3rd party joblib backends, e.g., spark

  • “dask”: uses dask, requires dask package in environment

  • “dask_lazy”: same as “dask”, but changes the return to (lazy) dask.dataframe.DataFrame.

Recommendation: Use “dask” or “loky” for parallel evaluate. “threading” is unlikely to see speed-ups due to the GIL, and the serialization backend (cloudpickle) used by “dask” and “loky” is generally more robust than the standard pickle library used in “multiprocessing”.

cv_X : sktime BaseSplitter descendant, optional

determines the split of X into test and train folds. The default is for X to be split to identical loc indices as y. If passed, it must have the same number of splits as cv.

backend_params : dict, optional

additional parameters passed to the backend as config. Directly passed to utils.parallel.parallelize. Valid keys depend on the value of backend:

  • “None”: no additional parameters, backend_params is ignored

  • “loky”, “multiprocessing” and “threading”: default joblib backends. Any valid keys for joblib.Parallel can be passed here, e.g., n_jobs, with the exception of backend, which is directly controlled by backend. If n_jobs is not passed, it will default to -1; other parameters will default to joblib defaults.

  • “joblib”: custom and 3rd party joblib backends, e.g., spark. Any valid keys for joblib.Parallel can be passed here, e.g., n_jobs; backend must be passed as a key of backend_params in this case. If n_jobs is not passed, it will default to -1; other parameters will default to joblib defaults.

  • “dask”: any valid keys for dask.compute can be passed, e.g., scheduler

Returns:
results : pd.DataFrame or dask.dataframe.DataFrame

DataFrame that contains several columns with information regarding each refit/update and prediction of the forecaster. Row index is splitter index of train/test fold in cv. Entries in the i-th row are for the i-th train/test split in cv. Columns are as follows:

  • test_{scoring.name}: (float) Model performance score. If scoring is a list, then there is a column with name test_{scoring.name} for each scorer.

  • fit_time: (float) Time in sec for fit or update on train fold.

  • pred_time: (float) Time in sec to predict from fitted estimator.

  • len_train_window: (int) Length of train window.

  • cutoff: (int, pd.Timestamp, pd.Period) cutoff = last time index in train fold.

  • y_train: (pd.Series) present only if return_data=True. Train fold of the i-th split in cv, used to fit/update the forecaster.

  • y_pred: (pd.Series) present only if return_data=True. Forecasts from the fitted forecaster for the i-th test fold indices of cv.

  • y_test: (pd.Series) present only if return_data=True. Test fold of the i-th split in cv, used to compute the metric.

Examples

The type of evaluation done by evaluate depends on the metrics passed via the scoring parameter. The default is MeanAbsolutePercentageError.

>>> from sktime.datasets import load_airline
>>> from sktime.forecasting.model_evaluation import evaluate
>>> from sktime.split import ExpandingWindowSplitter
>>> from sktime.forecasting.naive import NaiveForecaster
>>> y = load_airline()[:24]
>>> forecaster = NaiveForecaster(strategy="mean", sp=3)
>>> cv = ExpandingWindowSplitter(initial_window=12, step_length=6, fh=[1, 2, 3])
>>> results = evaluate(forecaster=forecaster, y=y, cv=cv)
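
The result is a pd.DataFrame with one row per train/test fold. As described under Returns, per-fold scores sit in columns prefixed with test_, so they can be aggregated with ordinary pandas operations, for example:

>>> fold_scores = results.filter(like="test_")  # one column per metric, one row per fold
>>> mean_scores = fold_scores.mean()  # average performance across folds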

Optionally, users may select other metrics and supply them via the scoring argument. These can be forecast metrics of any kind, as listed `here <https://www.sktime.net/en/stable/api_reference/performance_metrics.html?highlight=metrics>`_, i.e., point forecast metrics, interval metrics, or quantile forecast metrics. To evaluate estimators using a specific metric, provide it to the scoring argument.

>>> from sktime.performance_metrics.forecasting import MeanAbsoluteError
>>> loss = MeanAbsoluteError()
>>> results = evaluate(forecaster=forecaster, y=y, cv=cv, scoring=loss)

Optionally, users can provide a list of metrics to the scoring argument.

>>> from sktime.performance_metrics.forecasting import MeanSquaredError
>>> results = evaluate(
...     forecaster=forecaster,
...     y=y,
...     cv=cv,
...     scoring=[MeanSquaredError(square_root=True), MeanAbsoluteError()],
... )

An example of a quantile forecast metric is the PinballLoss. It can be used with all probabilistic forecasters.

>>> from sktime.forecasting.naive import NaiveVariance
>>> from sktime.performance_metrics.forecasting.probabilistic import PinballLoss
>>> loss = PinballLoss()
>>> forecaster = NaiveForecaster(strategy="drift")
>>> results = evaluate(
...     forecaster=NaiveVariance(forecaster),
...     y=y,
...     cv=cv,
...     scoring=loss,
... )
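
Further arguments from the signature combine with the above. For example, strategy="update" updates the forecaster between folds instead of refitting it, and return_data=True adds the y_train, y_test and y_pred columns to the result. This sketch reuses the forecaster, y and cv defined above:

>>> results = evaluate(
...     forecaster=forecaster,
...     y=y,
...     cv=cv,
...     strategy="update",
...     return_data=True,
... )

With the default strategy="refit", folds can also be evaluated in parallel via backend and backend_params. The following sketch assumes joblib is available in the environment:

>>> results = evaluate(
...     forecaster=forecaster,
...     y=y,
...     cv=cv,
...     backend="loky",
...     backend_params={"n_jobs": 2},
... )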