AutoTS#

class AutoTS(model_name: str = '', model_list: list = 'superfast', frequency: str = 'infer', prediction_interval: float = 0.9, max_generations: int = 10, no_negatives: bool = False, constraint: float = None, ensemble: str = 'auto', initial_template: str = 'General+Random', random_seed: int = 2022, holiday_country: str = 'US', subset: int = None, aggfunc: str = 'first', na_tolerance: float = 1, metric_weighting: dict = None, drop_most_recent: int = 0, drop_data_older_than_periods: int = 100000, transformer_list: dict = 'auto', transformer_max_depth: int = 6, models_mode: str = 'random', num_validations: int = 'auto', models_to_validate: float = 0.15, max_per_model_class: int = None, validation_method: str = 'backwards', min_allowed_train_percent: float = 0.5, remove_leading_zeroes: bool = False, prefill_na: str = None, introduce_na: bool = None, preclean: dict = None, model_interrupt: bool = True, generation_timeout: int = None, current_model_file: str = None, verbose: int = 1, n_jobs: int = -2)[source]#

Auto-ensemble from autots library by winedarksea.

Direct interface to autots.AutoTS.

Parameters:
model_namestr, optional (default=””)

The name of the model. NOTE: overwrites the model_list parameter; use for running only one model or a default model_list.

model_liststr

The list of models to use: a str alias, or a list of names of model objects. Can now also be a dictionary of {“model”: prob}, but this only affects the starting random templates; the genetic algorithm takes it from there.

frequencystr

‘infer’ or a specific pandas datetime offset. Can be used to force rollup of data (ie daily input, but frequency ‘M’ will rollup to monthly).

prediction_interval: float

0-1, uncertainty range for upper and lower forecasts. Adjust range, but rarely matches actual containment.
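The interval is central: a prediction_interval of 0.9, for example, implies the 0.05 and 0.95 quantiles. A minimal sketch of that mapping (illustrative only, not autots internals):

```python
def interval_to_quantiles(prediction_interval):
    """Map a central prediction interval to (lower, upper) quantile levels."""
    alpha = (1.0 - prediction_interval) / 2.0
    return alpha, 1.0 - alpha

lower_q, upper_q = interval_to_quantiles(0.9)
```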

max_generations: int

The maximum number of genetic algorithm generations to run. More generations = longer runtime, generally better accuracy. It’s called max because someday there will be an auto early-stopping option, but for now this is just the exact number of generations to run.

no_negatives (bool):

Whether negative values are allowed in the forecast. If True, all negative predictions are rounded up to 0.

constraint (float):

The constraint on the forecast values. When a float and not None, use this value * data st dev above max or below min for constraining forecast values. Now also instead accepts a dictionary containing the following key/values: constraint_method (str): one of

  • quantile - constraint is the quantile of historic data to use as threshold

  • stdev_min - threshold is min and max of historic data +/- constraint * st dev of data

  • stdev - threshold is the mean of historic data +/- constraint * st dev of data

  • absolute - input is array of length series containing the threshold’s final value for each

constraint_regularization (float): 0 to 1, where 0 means no constraint, 1 is a hard threshold cutoff, and in between is a penalty term

upper_constraint (float or array): depending on method; None if unused

lower_constraint (float or array): depending on method; None if unused

bounds (bool): if True, apply to the upper/lower forecasts; if False, applies only to the point forecast
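As a concrete illustration of the stdev method described above, the thresholds can be sketched in plain Python (names here are hypothetical, not autots internals):

```python
import statistics

def stdev_constraint_bounds(history, constraint=2.0):
    """Mean of historic data +/- constraint * sample st dev, per 'stdev'."""
    mean = statistics.fmean(history)
    sd = statistics.stdev(history)
    return mean - constraint * sd, mean + constraint * sd

# Forecast values outside (lower, upper) would be pulled back toward the
# threshold (hard cutoff or penalty, depending on constraint_regularization).
lower, upper = stdev_constraint_bounds([10, 12, 11, 13, 9], constraint=2.0)
```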

ensemble: str

The ensemble method to use. None or list or comma-separated string containing: ‘auto’, ‘simple’, ‘distance’, ‘horizontal’, ‘horizontal-min’, ‘horizontal-max’, “mosaic”, “subsample”

initial_template (str):

The initial template to use for the forecast. ‘Random’ - randomly generates starting template, ‘General’ uses template included in package, ‘General+Random’ - both of previous. Also can be overridden with import_template()

random_seed (int):

The random seed for reproducibility. Random seed allows (slightly) more consistent results.

holiday_country (str):

The country for holiday effects. Can be passed through to Holidays package for some models.

subset (int):

Maximum number of series to evaluate at once. Useful to speed evaluation when many series are input. Takes a new subset of columns on each validation, unless mosaic ensembling is used, in which case columns are the same in each validation.

aggfunc (str):

The aggregation function to use if data is to be rolled up to a higher frequency (daily -> monthly) or duplicate timestamps are included. The default ‘first’ removes duplicates; for rollup try ‘mean’ or np.sum. Beware that numeric aggregations like ‘mean’ will not work with non-numeric inputs, and numeric aggregations like ‘sum’ will also change NaN values to 0.
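For instance, daily input with a monthly frequency and a mean aggregation would be rolled up roughly like this (a sketch of the effect using pandas resampling, not the wrapper’s internal code):

```python
import pandas as pd

# Sketch: daily data rolled up to monthly with a mean aggregation.
idx = pd.date_range("2023-01-01", periods=60, freq="D")
daily = pd.Series(range(60), index=idx)
monthly = daily.resample("MS").mean()  # one row per month start
```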

na_tolerance (float):

The tolerance for missing values, 0 to 1. Series are dropped if their fraction of NaN values exceeds this; 0.95 here would allow series containing up to 95% NaN values.
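The rule can be sketched as a simple fraction check (a hypothetical helper, using None as the missing-value stand-in; not autots internals):

```python
def keep_series(values, na_tolerance=1.0):
    """Drop a series whose missing fraction exceeds na_tolerance."""
    nan_fraction = sum(v is None for v in values) / len(values)
    return nan_fraction <= na_tolerance
```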

metric_weighting (dict):

The weights to assign to forecast evaluation metrics, affecting how the ranking score is generated.

drop_most_recent (int):

Option to drop the n most recent data points. Useful, say, for monthly sales data where the current (unfinished) month is included. Occurs after any aggregation is applied, so the unit is whatever is specified by frequency; n periods are dropped.

drop_data_older_than_periods (int):

The threshold for dropping old data points. Will take only the n most recent timestamps.

transformer_list (dict):

List of transformers to use, or a dict of transformer:probability. Note this does not apply to initial templates. Can accept string aliases: “all”, “fast”, “superfast”, ‘scalable’ (‘scalable’ is a subset of ‘fast’ that should have fewer memory issues at scale).

transformer_max_depth (int):

maximum number of sequential transformers to generate for new Random Transformers. Fewer will be faster.

models_mode (str):

The mode for selecting models. Option to adjust parameter options for newly generated models; only sporadically utilized. Currently includes: ‘default’/’random’, ‘deep’ (searches more params, likely slower), ‘regressor’ (forces ‘User’ regressor mode in regressor-capable models), ‘gradient_boosting’, and ‘neuralnets’ (~Regression class models only).

num_validations (int):

The number of validations to perform; 0 for just train/test on the best split. Possible confusion: num_validations is the number of validations performed after the first eval segment, so the total number of eval/validation segments will be this + 1. “auto” and “max” aliases are also available; “max” maxes out at 50.

models_to_validate (float):

The top n models to pass through to cross validation, or a float in 0 to 1 as a % of models tried. 0.99 is forced to 100% validation; 1 evaluates just 1 model. If a horizontal or mosaic ensemble is used, additional best per_series models beyond the number here are added to validation.
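Under the description above, the number of models actually validated could be resolved as follows (the rounding behavior is an assumption; this is a sketch, not autots internals):

```python
import math

def n_models_to_validate(n_tried, models_to_validate=0.15):
    """Resolve models_to_validate to a count of models."""
    if models_to_validate >= 1:        # integer-like: top n models
        return int(models_to_validate)
    if models_to_validate >= 0.99:     # forced to 100% validation
        return n_tried
    return max(1, math.ceil(n_tried * models_to_validate))
```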

max_per_model_class (int):

Of the models_to_validate, the maximum number to pass from any one model class/family.

validation_method (str):

The method for validation: ‘even’, ‘backwards’, or ‘seasonal n’ where n is an integer of seasonal periods.

  • ‘backwards’ is better for recency and for shorter training sets

  • ‘even’ splits the data into equally-sized slices, best for more consistent data; a poetic but less effective strategy than others here

  • ‘seasonal’ uses the most similar indexes; ‘seasonal n’, for example ‘seasonal 364’, would test all data on each previous year of the forecast_length that would immediately follow the training data

  • ‘similarity’ automatically finds the data sections most similar to the most recent data that will be used for prediction

  • ‘custom’ - if used, .fit() needs validation_indexes passed: a list of pd.DatetimeIndex’s, the tail of each used as test
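As a sketch, ‘backwards’ validation tests on windows stepping back from the end of the data (illustrative index arithmetic only, not autots internals):

```python
def backwards_test_windows(n_obs, forecast_length, num_validations):
    """Return (test_start, test_end) slices, most recent first."""
    return [
        (n_obs - (i + 1) * forecast_length, n_obs - i * forecast_length)
        for i in range(num_validations + 1)  # first eval + num_validations
    ]
```

With 100 observations, a forecast length of 10, and 2 validations, this yields test windows at the tail, then one and two steps back.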

min_allowed_train_percent (float):

The minimum percent of forecast length to allow as training, else an error is raised. 0.5 with a forecast length of 10 would mean 5 training points are mandated, for a total of 15 points. Useful in (unrecommended) cases where forecast_length > training length.
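The check can be sketched as follows (illustrative; the actual autots validation may round differently):

```python
import math

def check_min_train(n_train, forecast_length, min_allowed_train_percent=0.5):
    """Raise if training data falls below the mandated minimum."""
    min_train = math.ceil(forecast_length * min_allowed_train_percent)
    if n_train < min_train:
        raise ValueError(
            f"need at least {min_train} training points "
            f"for forecast_length={forecast_length}"
        )
    return min_train
```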

remove_leading_zeroes (bool):

Whether to replace leading zeroes with NaN. Useful in data where initial zeroes mean data collection hadn’t started yet.

prefill_na (str):

The value to fill all NaNs with: None, 0, ‘mean’, or ‘median’. Leaving as None and allowing model interpolation is recommended. 0 may be useful in, for example, sales cases where all NaN can be assumed equal to zero.

introduce_na (bool):

Whether to force the last values in one training validation to be NaN. Helps make more robust models. Defaults to None, which introduces NaN in the last rows of validations if there is any NaN in the tail of the training data. Will not introduce NaN to all series if subset is used. If True, will also randomly change 20% of all rows to NaN in the validations.

preclean (dict):

The parameters for data pre-cleaning. if not None, a dictionary of Transformer params to be applied to input data {“fillna”: “median”, “transformations”: {}, “transformation_params”: {}} This will change data used in model inputs for fit and predict, and for accuracy evaluation in cross validation!

model_interrupt (bool):

Whether the model can be interrupted. If False, a KeyboardInterrupt quits the entire program. If True, a KeyboardInterrupt attempts to quit only the current model; in that case, use in conjunction with verbose > 0 and result_file is recommended in the event of accidental complete termination. If “end_generation”, acts as True but also ends the entire generation of the run. Note that skipped models will not be tried again.

generation_timeout (int):

If not None, the number of minutes from start at which the generational search ends, then proceeding to validation. This is only checked after the end of each generation, so it offers only an ‘approximate’ timeout. It is an overall cap for total generation search time, not per generation.

current_model_file (str):

File path to write the current model params to disk (for debugging if the computer crashes); .json is appended.

verbose (int):

The verbosity level. Setting to 0 or lower reduces most output; higher numbers give more output.

n_jobs (int):

The number of cores to pass to parallel processing. A joblib context manager can be used instead (pass None in this case). Also accepts ‘auto’.

Attributes:
cutoff

Cut-off = “present time” state of forecaster.

fh

Forecasting horizon that was passed.

is_fitted

Whether fit has been called.

Methods

check_is_fitted([method_name])

Check if the estimator has been fitted.

clone()

Obtain a clone of the object with same hyper-parameters and config.

clone_tags(estimator[, tag_names])

Clone tags from another object as dynamic override.

create_test_instance([parameter_set])

Construct an instance of the class, using first test parameter set.

create_test_instances_and_names([parameter_set])

Create list of all test instances and a list of names for them.

fit(y[, X, fh])

Fit forecaster to training data.

fit_predict(y[, X, fh, X_pred])

Fit and forecast time series at future horizon.

get_class_tag(tag_name[, tag_value_default])

Get class tag value from class, with tag level inheritance from parents.

get_class_tags()

Get class tags from class, with tag level inheritance from parent classes.

get_config()

Get config flags for self.

get_fitted_params([deep])

Get fitted parameters.

get_param_defaults()

Get object's parameter defaults.

get_param_names([sort])

Get object's parameter names.

get_params([deep])

Get a dict of parameters values for this object.

get_tag(tag_name[, tag_value_default, ...])

Get tag value from instance, with tag level inheritance and overrides.

get_tags()

Get tags from instance, with tag level inheritance and overrides.

get_test_params([parameter_set])

Return testing parameter settings for the estimator.

is_composite()

Check if the object is composed of other BaseObjects.

load_from_path(serial)

Load object from file location.

load_from_serial(serial)

Load object from serialized memory container.

predict([fh, X])

Forecast time series at future horizon.

predict_interval([fh, X, coverage])

Compute/return prediction interval forecasts.

predict_proba([fh, X, marginal])

Compute/return fully probabilistic forecasts.

predict_quantiles([fh, X, alpha])

Compute/return quantile forecasts.

predict_residuals([y, X])

Return residuals of time series forecasts.

predict_var([fh, X, cov])

Compute/return variance forecasts.

reset()

Reset the object to a clean post-init state.

save([path, serialization_format])

Save serialized self to bytes-like object or to (.zip) file.

score(y[, X, fh])

Scores forecast against ground truth, using MAPE (non-symmetric).

set_config(**config_dict)

Set config flags to given values.

set_params(**params)

Set the parameters of this object.

set_random_state([random_state, deep, ...])

Set random_state pseudo-random seed parameters for self.

set_tags(**tag_dict)

Set instance level tag overrides to given values.

update(y[, X, update_params])

Update cutoff value and, optionally, fitted parameters.

update_predict(y[, cv, X, update_params, ...])

Make predictions and update model iteratively over the test set.

update_predict_single([y, fh, X, update_params])

Update model with new data and make forecasts.

classmethod get_test_params(parameter_set='default')[source]#

Return testing parameter settings for the estimator.

Parameters:
parameter_setstr, default=”default”

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set. There are currently no reserved values for forecasters.

Returns:
paramsdict or list of dict, default = {}

Parameters to create testing instances of the class. Each dict contains parameters to construct an “interesting” test instance, i.e., MyClass(**params) or MyClass(**params[i]) creates a valid test instance. create_test_instance uses the first (or only) dictionary in params.

check_is_fitted(method_name=None)[source]#

Check if the estimator has been fitted.

Check if _is_fitted attribute is present and True. The is_fitted attribute should be set to True in calls to an object’s fit method.

If not, raises a NotFittedError.

Parameters:
method_namestr, optional

Name of the method that called this function. If provided, the error message will include this information.

Raises:
NotFittedError

If the estimator has not been fitted yet.

clone()[source]#

Obtain a clone of the object with same hyper-parameters and config.

A clone is a different object without shared references, in post-init state. This function is equivalent to returning sklearn.clone of self.

Equivalent to constructing a new instance of type(self), with parameters of self, that is, type(self)(**self.get_params(deep=False)).

If configs were set on self, the clone will also have the same configs as the original, equivalent to calling cloned_self.set_config(**self.get_config()).

Also equivalent in value to a call of self.reset, with the exception that clone returns a new object, instead of mutating self like reset.

Raises:
RuntimeError if the clone is non-conforming, due to faulty __init__.
clone_tags(estimator, tag_names=None)[source]#

Clone tags from another object as dynamic override.

Every scikit-base compatible object has a dictionary of tags. Tags may be used to store metadata about the object, or to control behaviour of the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object.

clone_tags sets dynamic tag overrides from another object, estimator.

The clone_tags method should be called only in the __init__ method of an object, during construction, or directly after construction via __init__.

The dynamic tags are set to the values of the tags in estimator, with the names specified in tag_names.

The default of tag_names writes all tags from estimator to self.

Current tag values can be inspected by get_tags or get_tag.

Parameters:
estimatorAn instance of :class:BaseObject or derived class
tag_namesstr or list of str, default = None

Names of tags to clone. The default (None) clones all tags from estimator.

Returns:
self

Reference to self.

classmethod create_test_instance(parameter_set='default')[source]#

Construct an instance of the class, using first test parameter set.

Parameters:
parameter_setstr, default=”default”

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.

Returns:
instanceinstance of the class with default parameters
classmethod create_test_instances_and_names(parameter_set='default')[source]#

Create list of all test instances and a list of names for them.

Parameters:
parameter_setstr, default=”default”

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.

Returns:
objslist of instances of cls

i-th instance is cls(**cls.get_test_params()[i])

nameslist of str, same length as objs

i-th element is name of i-th instance of obj in tests. The naming convention is {cls.__name__}-{i} if more than one instance, otherwise {cls.__name__}

property cutoff[source]#

Cut-off = “present time” state of forecaster.

Returns:
cutoffpandas compatible index element, or None

pandas compatible index element, if cutoff has been set; None otherwise

property fh[source]#

Forecasting horizon that was passed.

fit(y, X=None, fh=None)[source]#

Fit forecaster to training data.

State change:

Changes state to “fitted”.

Writes to self:

  • Sets fitted model attributes ending in “_”, fitted attributes are inspectable via get_fitted_params.

  • Sets self.is_fitted flag to True.

  • Sets self.cutoff to last index seen in y.

  • Stores fh to self.fh if fh is passed.

Parameters:
ytime series in sktime compatible data container format.

Time series to which to fit the forecaster.

Individual data formats in sktime are so-called mtype specifications, each mtype implements an abstract scitype.

  • Series scitype = individual time series, vanilla forecasting. pd.DataFrame, pd.Series, or np.ndarray (1D or 2D)

  • Panel scitype = collection of time series, global/panel forecasting. pd.DataFrame with 2-level row MultiIndex (instance, time), 3D np.ndarray (instance, variable, time), list of Series typed pd.DataFrame

  • Hierarchical scitype = hierarchical collection, for hierarchical forecasting. pd.DataFrame with 3 or more level row MultiIndex (hierarchy_1, ..., hierarchy_n, time)

For further details on data format, see glossary on mtype. For usage, see forecasting tutorial examples/01_forecasting.ipynb

fhint, list, pd.Index coercible, or ForecastingHorizon, default=None

The forecasting horizon encoding the time stamps to forecast at. If self.get_tag("requires-fh-in-fit") is True, must be passed in fit, not optional

Xtime series in sktime compatible format, optional (default=None).

Exogeneous time series to fit the model to. Should be of same scitype (Series, Panel, or Hierarchical) as y. If self.get_tag("X-y-must-have-same-index"), X.index must contain y.index.

Returns:
selfReference to self.
fit_predict(y, X=None, fh=None, X_pred=None)[source]#

Fit and forecast time series at future horizon.

Same as fit(y, X, fh).predict(X_pred). If X_pred is not passed, same as fit(y, X, fh).predict(X).

State change:

Changes state to “fitted”.

Writes to self:

  • Sets fitted model attributes ending in “_”, fitted attributes are inspectable via get_fitted_params.

  • Sets self.is_fitted flag to True.

  • Sets self.cutoff to last index seen in y.

  • Stores fh to self.fh.

Parameters:
ytime series in sktime compatible data container format

Time series to which to fit the forecaster.

Individual data formats in sktime are so-called mtype specifications, each mtype implements an abstract scitype.

  • Series scitype = individual time series, vanilla forecasting. pd.DataFrame, pd.Series, or np.ndarray (1D or 2D)

  • Panel scitype = collection of time series, global/panel forecasting. pd.DataFrame with 2-level row MultiIndex (instance, time), 3D np.ndarray (instance, variable, time), list of Series typed pd.DataFrame

  • Hierarchical scitype = hierarchical collection, for hierarchical forecasting. pd.DataFrame with 3 or more level row MultiIndex (hierarchy_1, ..., hierarchy_n, time)

For further details on data format, see glossary on mtype. For usage, see forecasting tutorial examples/01_forecasting.ipynb

fhint, list, pd.Index coercible, or ForecastingHorizon (not optional)

The forecasting horizon encoding the time stamps to forecast at.

If fh is not None and not of type ForecastingHorizon it is coerced to ForecastingHorizon via a call to _check_fh. In particular, if fh is of type pd.Index it is coerced via ForecastingHorizon(fh, is_relative=False)

Xtime series in sktime compatible format, optional (default=None).

Exogeneous time series to fit the model to. Should be of same scitype (Series, Panel, or Hierarchical) as y. If self.get_tag("X-y-must-have-same-index"), X.index must contain y.index.

X_predtime series in sktime compatible format, optional (default=None)

Exogeneous time series to use in prediction. If passed, will be used in predict instead of X. Should be of same scitype (Series, Panel, or Hierarchical) as y in fit. If self.get_tag("X-y-must-have-same-index"), X.index must contain fh index reference.

Returns:
y_predtime series in sktime compatible data container format

Point forecasts at fh, with same index as fh. y_pred has same type as the y that has been passed most recently: Series, Panel, Hierarchical scitype, same format (see above)

classmethod get_class_tag(tag_name, tag_value_default=None)[source]#

Get class tag value from class, with tag level inheritance from parents.

Every scikit-base compatible object has a dictionary of tags, which are used to store metadata about the object.

The get_class_tag method is a class method, and retrieves the value of a tag taking into account only class-level tag values and overrides.

It returns the value of the tag with name tag_name from the object, taking into account tag overrides, in the following order of descending priority:

  1. Tags set in the _tags attribute of the class.

  2. Tags set in the _tags attribute of parent classes, in order of inheritance.

Does not take into account dynamic tag overrides on instances, set via set_tags or clone_tags, that are defined on instances.

To retrieve tag values with potential instance overrides, use the get_tag method instead.

Parameters:
tag_namestr

Name of tag value.

tag_value_defaultany type

Default/fallback value if tag is not found.

Returns:
tag_value

Value of the tag_name tag in self. If not found, returns tag_value_default.

classmethod get_class_tags()[source]#

Get class tags from class, with tag level inheritance from parent classes.

Every scikit-base compatible object has a dictionary of tags. Tags may be used to store metadata about the object, or to control behaviour of the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object.

The get_class_tags method is a class method, and retrieves the value of a tag taking into account only class-level tag values and overrides.

It returns a dictionary with keys being keys of any attribute of _tags set in the class or any of its parent classes.

Values are the corresponding tag values, with overrides in the following order of descending priority:

  1. Tags set in the _tags attribute of the class.

  2. Tags set in the _tags attribute of parent classes, in order of inheritance.

Instances can override these tags depending on hyper-parameters.

To retrieve tags with potential instance overrides, use the get_tags method instead.

Does not take into account dynamic tag overrides on instances, set via set_tags or clone_tags, that are defined on instances.

For including overrides from dynamic tags, use get_tags.

Returns:
collected_tagsdict

Dictionary of tag name : tag value pairs. Collected from _tags class attribute via nested inheritance. NOT overridden by dynamic tags set by set_tags or clone_tags.

get_config()[source]#

Get config flags for self.

Configs are key-value pairs of self, typically used as transient flags for controlling behaviour.

get_config returns dynamic configs, which override the default configs.

Default configs are set in the class attribute _config of the class or its parent classes, and are overridden by dynamic configs set via set_config.

Configs are retained under clone or reset calls.

Returns:
config_dictdict

Dictionary of config name : config value pairs. Collected from the _config class attribute via nested inheritance and then any overrides and new configs from the _config_dynamic object attribute.

get_fitted_params(deep=True)[source]#

Get fitted parameters.

State required:

Requires state to be “fitted”.

Parameters:
deepbool, default=True

Whether to return fitted parameters of components.

  • If True, will return a dict of parameter name : value for this object, including fitted parameters of fittable components (= BaseEstimator-valued parameters).

  • If False, will return a dict of parameter name : value for this object, but not include fitted parameters of components.

Returns:
fitted_paramsdict with str-valued keys

Dictionary of fitted parameters, paramname : paramvalue keys-value pairs include:

  • always: all fitted parameters of this object, as via get_param_names values are fitted parameter value for that key, of this object

  • if deep=True, also contains keys/value pairs of component parameters parameters of components are indexed as [componentname]__[paramname] all parameters of componentname appear as paramname with its value

  • if deep=True, also contains arbitrary levels of component recursion, e.g., [componentname]__[componentcomponentname]__[paramname], etc

classmethod get_param_defaults()[source]#

Get object’s parameter defaults.

Returns:
default_dict: dict[str, Any]

Keys are all parameters of cls that have a default defined in __init__. Values are the defaults, as defined in __init__.

classmethod get_param_names(sort=True)[source]#

Get object’s parameter names.

Parameters:
sortbool, default=True

Whether to return the parameter names sorted in alphabetical order (True), or in the order they appear in the class __init__ (False).

Returns:
param_names: list[str]

List of parameter names of cls. If sort=False, in same order as they appear in the class __init__. If sort=True, alphabetically ordered.

get_params(deep=True)[source]#

Get a dict of parameters values for this object.

Parameters:
deepbool, default=True

Whether to return parameters of components.

  • If True, will return a dict of parameter name : value for this object, including parameters of components (= BaseObject-valued parameters).

  • If False, will return a dict of parameter name : value for this object, but not include parameters of components.

Returns:
paramsdict with str-valued keys

Dictionary of parameters, paramname : paramvalue keys-value pairs include:

  • always: all parameters of this object, as via get_param_names values are parameter value for that key, of this object values are always identical to values passed at construction

  • if deep=True, also contains keys/value pairs of component parameters parameters of components are indexed as [componentname]__[paramname] all parameters of componentname appear as paramname with its value

  • if deep=True, also contains arbitrary levels of component recursion, e.g., [componentname]__[componentcomponentname]__[paramname], etc

get_tag(tag_name, tag_value_default=None, raise_error=True)[source]#

Get tag value from instance, with tag level inheritance and overrides.

Every scikit-base compatible object has a dictionary of tags. Tags may be used to store metadata about the object, or to control behaviour of the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object.

The get_tag method retrieves the value of a single tag with name tag_name from the instance, taking into account tag overrides, in the following order of descending priority:

  1. Tags set via set_tags or clone_tags on the instance, at construction of the instance.

  2. Tags set in the _tags attribute of the class.

  3. Tags set in the _tags attribute of parent classes, in order of inheritance.

Parameters:
tag_namestr

Name of tag to be retrieved

tag_value_defaultany type, optional; default=None

Default/fallback value if tag is not found

raise_errorbool

whether a ValueError is raised when the tag is not found

Returns:
tag_valueAny

Value of the tag_name tag in self. If not found, raises an error if raise_error is True, otherwise it returns tag_value_default.

Raises:
ValueError, if raise_error is True.

The ValueError is then raised if tag_name is not in self.get_tags().keys().

get_tags()[source]#

Get tags from instance, with tag level inheritance and overrides.

Every scikit-base compatible object has a dictionary of tags. Tags may be used to store metadata about the object, or to control behaviour of the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object.

The get_tags method returns a dictionary of tags, with keys being keys of any attribute of _tags set in the class or any of its parent classes, or tags set via set_tags or clone_tags.

Values are the corresponding tag values, with overrides in the following order of descending priority:

  1. Tags set via set_tags or clone_tags on the instance, at construction of the instance.

  2. Tags set in the _tags attribute of the class.

  3. Tags set in the _tags attribute of parent classes, in order of inheritance.

Returns:
collected_tagsdict

Dictionary of tag name : tag value pairs. Collected from _tags class attribute via nested inheritance and then any overrides and new tags from _tags_dynamic object attribute.

is_composite()[source]#

Check if the object is composed of other BaseObjects.

A composite object is an object which contains objects, as parameters. Called on an instance, since this may differ by instance.

Returns:
composite: bool

Whether an object has any parameters whose values are BaseObject descendant instances.

property is_fitted[source]#

Whether fit has been called.

Inspects the object’s _is_fitted attribute, which should initialize to False during object construction and be set to True in calls to the object’s fit method.

Returns:
bool

Whether the estimator has been fit.

classmethod load_from_path(serial)[source]#

Load object from file location.

Parameters:
serialresult of ZipFile(path).open(“object”)
Returns:
deserialized self resulting in output at path, of cls.save(path)
classmethod load_from_serial(serial)[source]#

Load object from serialized memory container.

Parameters:
serial1st element of output of cls.save(None)
Returns:
deserialized self resulting in output serial, of cls.save(None)
predict(fh=None, X=None)[source]#

Forecast time series at future horizon.

State required:

Requires state to be “fitted”, i.e., self.is_fitted=True.

Accesses in self:

  • Fitted model attributes ending in “_”.

  • self.cutoff, self.is_fitted

Writes to self:

Stores fh to self.fh if fh is passed and has not been passed previously.

Parameters:
fhint, list, pd.Index coercible, or ForecastingHorizon, default=None

The forecasting horizon encoding the time stamps to forecast at. Should not be passed if it has already been passed in fit; if it has not been passed in fit, it must be passed here and is not optional.

If fh is not None and not of type ForecastingHorizon it is coerced to ForecastingHorizon via a call to _check_fh. In particular, if fh is of type pd.Index it is coerced via ForecastingHorizon(fh, is_relative=False)

X : time series in sktime compatible format, optional (default=None)

Exogeneous time series to use in prediction. Should be of same scitype (Series, Panel, or Hierarchical) as y in fit. If self.get_tag("X-y-must-have-same-index"), X.index must contain fh index reference.

Returns:
y_pred : time series in sktime compatible data container format

Point forecasts at fh, with same index as fh. y_pred has same type as the y that has been passed most recently: Series, Panel, Hierarchical scitype, same format (see above)

predict_interval(fh=None, X=None, coverage=0.9)[source]#

Compute/return prediction interval forecasts.

If coverage is iterable, multiple intervals will be calculated.

State required:

Requires state to be “fitted”, i.e., self.is_fitted=True.

Accesses in self:

  • Fitted model attributes ending in “_”.

  • self.cutoff, self.is_fitted

Writes to self:

Stores fh to self.fh if fh is passed and has not been passed previously.

Parameters:
fh : int, list, pd.Index coercible, or ForecastingHorizon, default=None

The forecasting horizon encoding the time stamps to forecast at. Should not be passed if it has already been passed in fit; if it has not been passed in fit, it must be passed here and is not optional.

If fh is not None and not of type ForecastingHorizon, it is coerced to ForecastingHorizon internally (via _check_fh).

  • if fh is int or array-like of int, it is interpreted as relative horizon, and coerced to a relative ForecastingHorizon(fh, is_relative=True).

  • if fh is of type pd.Index, it is interpreted as an absolute horizon, and coerced to an absolute ForecastingHorizon(fh, is_relative=False).

X : time series in sktime compatible format, optional (default=None)

Exogeneous time series to use in prediction. Should be of same scitype (Series, Panel, or Hierarchical) as y in fit. If self.get_tag("X-y-must-have-same-index"), X.index must contain fh index reference.

coverage : float or list of float of unique values, optional (default=0.90)

nominal coverage(s) of predictive interval(s)

Returns:
pred_int : pd.DataFrame

Column has multi-index: first level is variable name from y in fit, second level coverage fractions for which intervals were computed, in the same order as in input coverage. Third level is string “lower” or “upper”, for lower/upper interval end.

Row index is fh, with additional (upper) levels equal to instance levels, from y seen in fit, if y seen in fit was Panel or Hierarchical.

Entries are forecasts of lower/upper interval end, for var in col index, at nominal coverage in second col index, lower/upper depending on third col index, for the row index. Upper/lower interval end forecasts are equivalent to quantile forecasts at alpha = 0.5 - c/2, 0.5 + c/2 for c in coverage.

predict_proba(fh=None, X=None, marginal=True)[source]#

Compute/return fully probabilistic forecasts.

Note:

  • currently only implemented for Series (non-panel, non-hierarchical) y.

  • requires skpro installed for the distribution objects returned.

State required:

Requires state to be “fitted”, i.e., self.is_fitted=True.

Accesses in self:

  • Fitted model attributes ending in “_”.

  • self.cutoff, self.is_fitted

Writes to self:

Stores fh to self.fh if fh is passed and has not been passed previously.

Parameters:
fh : int, list, pd.Index coercible, or ForecastingHorizon, default=None

The forecasting horizon encoding the time stamps to forecast at. Should not be passed if it has already been passed in fit; if it has not been passed in fit, it must be passed here and is not optional.

If fh is not None and not of type ForecastingHorizon, it is coerced to ForecastingHorizon internally (via _check_fh).

  • if fh is int or array-like of int, it is interpreted as relative horizon, and coerced to a relative ForecastingHorizon(fh, is_relative=True).

  • if fh is of type pd.Index, it is interpreted as an absolute horizon, and coerced to an absolute ForecastingHorizon(fh, is_relative=False).

X : time series in sktime compatible format, optional (default=None)

Exogeneous time series to use in prediction. Should be of same scitype (Series, Panel, or Hierarchical) as y in fit. If self.get_tag("X-y-must-have-same-index"), X.index must contain fh index reference.

marginal : bool, optional (default=True)

whether returned distribution is marginal by time index

Returns:
pred_dist : skpro BaseDistribution

Predictive distribution. If marginal=True, this is the marginal distribution by time point; if marginal=False and implemented by the method, it is the joint distribution.

predict_quantiles(fh=None, X=None, alpha=None)[source]#

Compute/return quantile forecasts.

If alpha is iterable, multiple quantiles will be calculated.

State required:

Requires state to be “fitted”, i.e., self.is_fitted=True.

Accesses in self:

  • Fitted model attributes ending in “_”.

  • self.cutoff, self.is_fitted

Writes to self:

Stores fh to self.fh if fh is passed and has not been passed previously.

Parameters:
fh : int, list, pd.Index coercible, or ForecastingHorizon, default=None

The forecasting horizon encoding the time stamps to forecast at. Should not be passed if it has already been passed in fit; if it has not been passed in fit, it must be passed here and is not optional.

If fh is not None and not of type ForecastingHorizon, it is coerced to ForecastingHorizon internally (via _check_fh).

  • if fh is int or array-like of int, it is interpreted as relative horizon, and coerced to a relative ForecastingHorizon(fh, is_relative=True).

  • if fh is of type pd.Index, it is interpreted as an absolute horizon, and coerced to an absolute ForecastingHorizon(fh, is_relative=False).

X : time series in sktime compatible format, optional (default=None)

Exogeneous time series to use in prediction. Should be of same scitype (Series, Panel, or Hierarchical) as y in fit. If self.get_tag("X-y-must-have-same-index"), X.index must contain fh index reference.

alpha : float or list of float of unique values, optional (default=[0.05, 0.95])

A probability, or list of probabilities, at which quantile forecasts are computed.

Returns:
quantiles : pd.DataFrame

Column has multi-index: first level is variable name from y in fit, second level being the values of alpha passed to the function.

Row index is fh, with additional (upper) levels equal to instance levels, from y seen in fit, if y seen in fit was Panel or Hierarchical.

Entries are quantile forecasts, for var in col index, at quantile probability in second col index, for the row index.

predict_residuals(y=None, X=None)[source]#

Return residuals of time series forecasts.

Residuals will be computed for forecasts at y.index.

If fh must be passed in fit, it must agree with y.index. If y is an np.ndarray, and no fh has been passed in fit, the residuals will be computed at a fh of range(y.shape[0])

State required:

Requires state to be “fitted”. If fh has been set, must correspond to index of y (pandas or integer)

Accesses in self:

Fitted model attributes ending in “_”. self.cutoff, self._is_fitted

Writes to self:

Nothing.

Parameters:
y : time series in sktime compatible data container format

Time series with ground truth observations, to compute residuals to. Must have same type, dimension, and indices as expected return of predict.

If None, the y seen so far (self._y) are used, in particular:

  • if preceded by a single fit call, then in-sample residuals are produced

  • if fit requires fh, it must have pointed to index of y in fit

X : time series in sktime compatible format, optional (default=None)

Exogeneous time series for updating and forecasting. Should be of same scitype (Series, Panel, or Hierarchical) as y in fit. If self.get_tag("X-y-must-have-same-index"), X.index must contain both fh index reference and y.index.

Returns:
y_res : time series in sktime compatible data container format

Forecast residuals at fh, with same index as fh. y_res has same type as the y that has been passed most recently: Series, Panel, Hierarchical scitype, same format (see above)

predict_var(fh=None, X=None, cov=False)[source]#

Compute/return variance forecasts.

State required:

Requires state to be “fitted”, i.e., self.is_fitted=True.

Accesses in self:

  • Fitted model attributes ending in “_”.

  • self.cutoff, self.is_fitted

Writes to self:

Stores fh to self.fh if fh is passed and has not been passed previously.

Parameters:
fh : int, list, pd.Index coercible, or ForecastingHorizon, default=None

The forecasting horizon encoding the time stamps to forecast at. Should not be passed if it has already been passed in fit; if it has not been passed in fit, it must be passed here and is not optional.

If fh is not None and not of type ForecastingHorizon, it is coerced to ForecastingHorizon internally (via _check_fh).

  • if fh is int or array-like of int, it is interpreted as relative horizon, and coerced to a relative ForecastingHorizon(fh, is_relative=True).

  • if fh is of type pd.Index, it is interpreted as an absolute horizon, and coerced to an absolute ForecastingHorizon(fh, is_relative=False).

X : time series in sktime compatible format, optional (default=None)

Exogeneous time series to use in prediction. Should be of same scitype (Series, Panel, or Hierarchical) as y in fit. If self.get_tag("X-y-must-have-same-index"), X.index must contain fh index reference.

cov : bool, optional (default=False)

if True, computes covariance matrix forecast. if False, computes marginal variance forecasts.

Returns:
pred_var : pd.DataFrame, format dependent on cov variable

If cov=False:
Column names are exactly those of y passed in fit/update. For nameless formats, column index will be a RangeIndex. Row index is fh, with additional levels equal to instance levels, from y seen in fit, if y seen in fit was Panel or Hierarchical. Entries are variance forecasts, for var in col index. A variance forecast for given variable and fh index is a predicted variance for that variable and index, given observed data.

If cov=True:
Column index is a multiindex: 1st level is variable names (as above), 2nd level is fh. Row index is fh, with additional levels equal to instance levels, from y seen in fit, if y seen in fit was Panel or Hierarchical. Entries are (co-)variance forecasts, for var in col index, and covariance between time index in row and col.

Note: no covariance forecasts are returned between different variables.

reset()[source]#

Reset the object to a clean post-init state.

Results in setting self to the state it had directly after the constructor call, with the same hyper-parameters. Config values set by set_config are also retained.

A reset call deletes any object attributes, except:

  • hyper-parameters = arguments of __init__ written to self, e.g., self.paramname where paramname is an argument of __init__

  • object attributes containing double-underscores, i.e., the string “__”. For instance, an attribute named “__myattr” is retained.

  • config attributes, configs are retained without change. That is, results of get_config before and after reset are equal.

Class and object methods, and class attributes are also unaffected.

Equivalent to clone, with the exception that reset mutates self instead of returning a new object.

After a self.reset() call, self is equal in value and state to the object obtained after a constructor call type(self)(**self.get_params(deep=False)).

Returns:
self

Instance of class reset to a clean post-init state but retaining the current hyper-parameter values.

save(path=None, serialization_format='pickle')[source]#

Save serialized self to bytes-like object or to (.zip) file.

Behaviour: if path is None, returns an in-memory serialized self; if path is a file location, stores self at that location as a zip file.

Saved files are zip files with the following contents: _metadata contains the class of self, i.e., type(self); _obj contains the serialized self. This class uses the default serialization (pickle).

Parameters:
path : None or file location (str or Path)

if None, self is saved to an in-memory object; if file location, self is saved to that file location. For example:

  • path=”estimator” then a zip file estimator.zip will be made at cwd.

  • path=”/home/stored/estimator” then a zip file estimator.zip will be stored in /home/stored/.

serialization_format: str, default = “pickle”

Module to use for serialization. The available options are “pickle” and “cloudpickle”. Note that non-default formats might require installation of other soft dependencies.

Returns:
if path is None - in-memory serialized self
if path is file location - ZipFile with reference to the file
score(y, X=None, fh=None)[source]#

Scores forecast against ground truth, using MAPE (non-symmetric).

Parameters:
y : pd.Series, pd.DataFrame, or np.ndarray (1D or 2D)

Time series to score

fh : int, list, pd.Index coercible, or ForecastingHorizon, default=None

The forecasting horizon encoding the time stamps to forecast at.

X : pd.DataFrame, or 2D np.array, optional (default=None)

Exogeneous time series to score. If self.get_tag(“X-y-must-have-same-index”), X.index must contain y.index.

Returns:
score : float

MAPE loss of self.predict(fh, X) with respect to y.

set_config(**config_dict)[source]#

Set config flags to given values.

Parameters:
config_dict : dict

Dictionary of config name : config value pairs. Valid configs, values, and their meaning is listed below:

display : str, “diagram” (default), or “text”

how jupyter kernels display instances of self

  • “diagram” = html box diagram representation

  • “text” = string printout

print_changed_only : bool, default=True

whether printing of self lists only self-parameters that differ from defaults (True), or all parameter names and values (False). Does not nest, i.e., only affects self and not component estimators.

warnings : str, “on” (default), or “off”

whether to raise warnings, affects warnings from sktime only

  • “on” = will raise warnings from sktime

  • “off” = will not raise warnings from sktime

backend:parallel : str, optional, default=”None”

backend to use for parallelization when broadcasting/vectorizing, one of

  • “None”: executes loop sequentially, simple list comprehension

  • “loky”, “multiprocessing” and “threading”: uses joblib.Parallel

  • “joblib”: custom and 3rd party joblib backends, e.g., spark

  • “dask”: uses dask, requires dask package in environment

  • “ray”: uses ray, requires ray package in environment

backend:parallel:params : dict, optional, default={} (no parameters passed)

additional parameters passed to the parallelization backend as config. Valid keys depend on the value of backend:parallel:

  • “None”: no additional parameters, backend_params is ignored

  • “loky”, “multiprocessing” and “threading”: default joblib backends any valid keys for joblib.Parallel can be passed here, e.g., n_jobs, with the exception of backend which is directly controlled by backend. If n_jobs is not passed, it will default to -1, other parameters will default to joblib defaults.

  • “joblib”: custom and 3rd party joblib backends, e.g., spark. Any valid keys for joblib.Parallel can be passed here, e.g., n_jobs, backend must be passed as a key of backend_params in this case. If n_jobs is not passed, it will default to -1, other parameters will default to joblib defaults.

  • “dask”: any valid keys for dask.compute can be passed, e.g., scheduler

  • “ray”: The following keys can be passed:

    • “ray_remote_args”: dictionary of valid keys for ray.init

    • “shutdown_ray”: bool, default=True; False prevents ray from shutting down after parallelization.

    • “logger_name”: str, default=”ray”; name of the logger to use.

    • “mute_warnings”: bool, default=False; if True, suppresses warnings

remember_data : bool, default=True

whether self._X and self._y are stored in fit, and updated in update. If True, self._X and self._y are stored and updated. If False, self._X and self._y are not stored and updated. This reduces serialization size when using save, but the update will default to “do nothing” rather than “refit to all data seen”.

Returns:
self : reference to self.

Notes

Changes object state, copies configs in config_dict to self._config_dynamic.

set_params(**params)[source]#

Set the parameters of this object.

The method works on simple skbase objects as well as on composite objects. Parameter key strings <component>__<parameter> can be used for composites, i.e., objects that contain other objects, to access <parameter> in the component <component>. The string <parameter>, without <component>__, can also be used if this makes the reference unambiguous, e.g., there are no two parameters of components with the name <parameter>.

Parameters:
**params : dict

BaseObject parameters, keys must be <component>__<parameter> strings. __ suffixes can alias full strings, if unique among get_params keys.

Returns:
self : reference to self (after parameters have been set)
set_random_state(random_state=None, deep=True, self_policy='copy')[source]#

Set random_state pseudo-random seed parameters for self.

Finds random_state named parameters via self.get_params, and sets them to integers derived from random_state via set_params. These integers are sampled from chain hashing via sample_dependent_seed, and guarantee pseudo-random independence of seeded random generators.

Applies to random_state parameters in self, depending on self_policy, and remaining component objects if and only if deep=True.

Note: calls set_params even if self does not have a random_state, or none of the components have a random_state parameter. Therefore, set_random_state will reset any scikit-base object, even those without a random_state parameter.

Parameters:
random_state : int, RandomState instance or None, default=None

Pseudo-random number generator to control the generation of the random integers. Pass int for reproducible output across multiple function calls.

deep : bool, default=True

Whether to set the random state in skbase object valued parameters, i.e., component estimators.

  • If False, will set only self’s random_state parameter, if exists.

  • If True, will set random_state parameters in component objects as well.

self_policy : str, one of {“copy”, “keep”, “new”}, default=”copy”
  • “copy” : self.random_state is set to input random_state

  • “keep” : self.random_state is kept as is

  • “new” : self.random_state is set to a new random state, derived from input random_state, and in general different from it

Returns:
self : reference to self
set_tags(**tag_dict)[source]#

Set instance level tag overrides to given values.

Every scikit-base compatible object has a dictionary of tags, which are used to store metadata about the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object. They may be used for metadata inspection, or for controlling behaviour of the object.

set_tags sets dynamic tag overrides to the values as specified in tag_dict, with keys being the tag name, and dict values being the value to set the tag to.

The set_tags method should be called only in the __init__ method of an object, during construction, or directly after construction via __init__.

Current tag values can be inspected by get_tags or get_tag.

Parameters:
**tag_dict : dict

Dictionary of tag name: tag value pairs.

Returns:
Self

Reference to self.

update(y, X=None, update_params=True)[source]#

Update cutoff value and, optionally, fitted parameters.

If no estimator-specific update method has been implemented, default fall-back is as follows:

  • update_params=True: fitting to all observed data so far

  • update_params=False: updates cutoff and remembers data only

State required:

Requires state to be “fitted”, i.e., self.is_fitted=True.

Accesses in self:

  • Fitted model attributes ending in “_”.

  • self.cutoff, self.is_fitted

Writes to self:

  • Updates self.cutoff to latest index seen in y.

  • If update_params=True, updates fitted model attributes ending in “_”.

Parameters:
y : time series in sktime compatible data container format.

Time series with which to update the forecaster.

Individual data formats in sktime are so-called mtype specifications, each mtype implements an abstract scitype.

  • Series scitype = individual time series, vanilla forecasting. pd.DataFrame, pd.Series, or np.ndarray (1D or 2D)

  • Panel scitype = collection of time series, global/panel forecasting. pd.DataFrame with 2-level row MultiIndex (instance, time), 3D np.ndarray (instance, variable, time), list of Series typed pd.DataFrame

  • Hierarchical scitype = hierarchical collection, for hierarchical forecasting. pd.DataFrame with 3 or more level row MultiIndex (hierarchy_1, ..., hierarchy_n, time)

For further details on data format, see glossary on mtype. For usage, see forecasting tutorial examples/01_forecasting.ipynb

X : time series in sktime compatible format, optional (default=None).

Exogeneous time series to update the model fit with. Should be of same scitype (Series, Panel, or Hierarchical) as y. If self.get_tag("X-y-must-have-same-index"), X.index must contain y.index.

update_params : bool, optional (default=True)

whether model parameters should be updated. If False, only the cutoff is updated, model parameters (e.g., coefficients) are not updated.

Returns:
self : reference to self
update_predict(y, cv=None, X=None, update_params=True, reset_forecaster=True)[source]#

Make predictions and update model iteratively over the test set.

Shorthand to carry out chain of multiple update / predict executions, with data playback based on temporal splitter cv.

Same as the following (if only y, cv are non-default):

  1. self.update(y=cv.split_series(y)[0][0])

  2. remember self.predict() (return later in single batch)

  3. self.update(y=cv.split_series(y)[1][0])

  4. remember self.predict() (return later in single batch)

  5. etc

  6. return all remembered predictions

If no estimator-specific update method has been implemented, default fall-back is as follows:

  • update_params=True: fitting to all observed data so far

  • update_params=False: updates cutoff and remembers data only

State required:

Requires state to be “fitted”, i.e., self.is_fitted=True.

Accesses in self:

  • Fitted model attributes ending in “_”.

  • self.cutoff, self.is_fitted

Writes to self (unless reset_forecaster=True):
  • Updates self.cutoff to latest index seen in y.

  • If update_params=True, updates fitted model attributes ending in “_”.

Does not update state if reset_forecaster=True.

Parameters:
y : time series in sktime compatible data container format.

Time series with which to update the forecaster.

Individual data formats in sktime are so-called mtype specifications, each mtype implements an abstract scitype.

  • Series scitype = individual time series, vanilla forecasting. pd.DataFrame, pd.Series, or np.ndarray (1D or 2D)

  • Panel scitype = collection of time series, global/panel forecasting. pd.DataFrame with 2-level row MultiIndex (instance, time), 3D np.ndarray (instance, variable, time), list of Series typed pd.DataFrame

  • Hierarchical scitype = hierarchical collection, for hierarchical forecasting. pd.DataFrame with 3 or more level row MultiIndex (hierarchy_1, ..., hierarchy_n, time)

For further details on data format, see glossary on mtype. For usage, see forecasting tutorial examples/01_forecasting.ipynb

cv : temporal cross-validation generator inheriting from BaseSplitter, optional

For example, SlidingWindowSplitter or ExpandingWindowSplitter. The default is ExpandingWindowSplitter with initial_window=1 and defaults, i.e., individual data points in y/X are added and forecast one-by-one: initial_window=1, step_length=1, and fh=1.

X : time series in sktime compatible format, optional (default=None)

Exogeneous time series for updating and forecasting. Should be of same scitype (Series, Panel, or Hierarchical) as y in fit. If self.get_tag("X-y-must-have-same-index"), X.index must contain fh index reference.

update_params : bool, optional (default=True)

whether model parameters should be updated. If False, only the cutoff is updated, model parameters (e.g., coefficients) are not updated.

reset_forecaster : bool, optional (default=True)
  • if True, will not change the state of the forecaster, i.e., update/predict sequence is run with a copy, and cutoff, model parameters, data memory of self do not change

  • if False, will update self when the update/predict sequence is run as if update/predict were called directly

Returns:
y_pred : object that tabulates point forecasts from multiple split batches

format depends on pairs (cutoff, absolute horizon) forecast overall

  • if the collection of absolute horizon points is unique: type is time series in sktime compatible data container format; cutoff is suppressed in output; has same type as the y that has been passed most recently (Series, Panel, or Hierarchical scitype, same format, see above)

  • if the collection of absolute horizon points is not unique: type is a pandas DataFrame, with row and col index being time stamps; row index corresponds to cutoffs that are predicted from; column index corresponds to absolute horizons that are predicted; an entry is the point prediction of the col index predicted from the row index, and nan if no prediction is made at that (cutoff, horizon) pair

update_predict_single(y=None, fh=None, X=None, update_params=True)[source]#

Update model with new data and make forecasts.

This method is useful for updating and making forecasts in a single step.

If no estimator-specific update method has been implemented, default fall-back is first update, then predict.

State required:

Requires state to be “fitted”.

Accesses in self:

Fitted model attributes ending in “_”. Pointers to seen data, self._y and self._X. self.cutoff, self._is_fitted. If update_params=True, model attributes ending in “_”.

Writes to self:

Updates self._y and self._X with y and X, by appending rows. Updates self.cutoff and self._cutoff to the last index seen in y. If update_params=True, updates fitted model attributes ending in “_”.

Parameters:
y : time series in sktime compatible data container format.

Time series with which to update the forecaster.

Individual data formats in sktime are so-called mtype specifications, each mtype implements an abstract scitype.

  • Series scitype = individual time series, vanilla forecasting. pd.DataFrame, pd.Series, or np.ndarray (1D or 2D)

  • Panel scitype = collection of time series, global/panel forecasting. pd.DataFrame with 2-level row MultiIndex (instance, time), 3D np.ndarray (instance, variable, time), list of Series typed pd.DataFrame

  • Hierarchical scitype = hierarchical collection, for hierarchical forecasting. pd.DataFrame with 3 or more level row MultiIndex (hierarchy_1, ..., hierarchy_n, time)

For further details on data format, see glossary on mtype. For usage, see forecasting tutorial examples/01_forecasting.ipynb

fh : int, list, pd.Index coercible, or ForecastingHorizon, default=None

The forecasting horizon encoding the time stamps to forecast at. Should not be passed if it has already been passed in fit; if it has not been passed in fit, it must be passed here and is not optional.

X : time series in sktime compatible format, optional (default=None)

Exogeneous time series for updating and forecasting. Should be of same scitype (Series, Panel, or Hierarchical) as y in fit. If self.get_tag("X-y-must-have-same-index"), X.index must contain fh index reference.

update_params : bool, optional (default=True)

whether model parameters should be updated. If False, only the cutoff is updated, model parameters (e.g., coefficients) are not updated.

Returns:
y_pred : time series in sktime compatible data container format

Point forecasts at fh, with same index as fh. y_pred has same type as the y that has been passed most recently: Series, Panel, Hierarchical scitype, same format (see above)