evaluate

evaluate(forecaster, cv, y, X=None, strategy='refit', scoring=None, return_data=False)

Evaluate forecaster using time series cross-validation.

Parameters
forecaster : sktime.forecaster

Any forecaster

cv : Temporal cross-validation splitter

Splitter that determines how the data are split into training and test windows.

y : pd.Series

Target time series to which to fit the forecaster.

X : pd.DataFrame, default=None

Exogenous variables

strategy : {“refit”, “update”}

Must be “refit” or “update”. The strategy defines whether the forecaster is refitted on every train window (“refit”), or fitted only on the first train window and then updated on each subsequent window (“update”). A minimal sketch of the “update” strategy follows.
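For illustration, a minimal sketch of passing the “update” strategy, assuming the y, forecaster and cv objects defined in the Examples section below:

>>> # assumes y, forecaster and cv are defined as in the Examples section
>>> results_update = evaluate(
...     forecaster=forecaster, y=y, cv=cv, strategy="update")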

scoring : subclass of sktime.performance_metrics.BaseMetric, default=None

Used to get a score function that takes y_pred and y_test arguments and accepts y_train as a keyword argument. If None, uses scoring = MeanAbsolutePercentageError(symmetric=True).
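As an illustration, a metric can be passed explicitly; the sketch below assumes MeanSquaredError from sktime.performance_metrics.forecasting and the y, forecaster and cv objects from the Examples section:

>>> from sktime.performance_metrics.forecasting import MeanSquaredError
>>> # assumes y, forecaster and cv are defined as in the Examples section
>>> results_mse = evaluate(
...     forecaster=forecaster, y=y, cv=cv, scoring=MeanSquaredError())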

return_data : bool, default=False

If True, returns three additional columns in the DataFrame; the cells of these columns each contain a pd.Series for y_train, y_pred, and y_test. See the sketch below.
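For instance, with return_data=True the per-window series can be inspected directly; a sketch assuming the objects from the Examples section and the y_train/y_pred/y_test column naming described above:

>>> # assumes y, forecaster and cv are defined as in the Examples section
>>> results_data = evaluate(
...     forecaster=forecaster, y=y, cv=cv, return_data=True)
>>> first_pred = results_data["y_pred"].iloc[0]  # pd.Series of forecasts for the first window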

Returns
pd.DataFrame

DataFrame with one row per train/test split, containing several columns with information regarding each refit/update and prediction of the forecaster.

Examples

>>> from sktime.datasets import load_airline
>>> from sktime.forecasting.model_evaluation import evaluate
>>> from sktime.forecasting.model_selection import ExpandingWindowSplitter
>>> from sktime.forecasting.naive import NaiveForecaster
>>> y = load_airline()
>>> forecaster = NaiveForecaster(strategy="mean", sp=12)
>>> cv = ExpandingWindowSplitter(
...     initial_window=24,
...     step_length=12,
...     fh=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
>>> results = evaluate(forecaster=forecaster, y=y, cv=cv)
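A possible follow-up to the example above (not from the original documentation): averaging the score over all windows. The score column name depends on the metric used, so the sketch below assumes only that it carries a "test_" prefix, as in recent sktime versions.

>>> # hypothetical follow-up: pick the score column(s) and average across windows
>>> score_cols = [c for c in results.columns if c.startswith("test_")]
>>> mean_score = results[score_cols[0]].mean()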