TSCGridSearchCV#

class TSCGridSearchCV(estimator, param_grid, scoring=None, n_jobs=None, refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', error_score=nan, return_train_score=False, tune_by_variable=False)[source]#

Exhaustive search over specified parameter values for an estimator.

Adapts sklearn GridSearchCV for sktime time series classifiers.

Optimizes hyper-parameters of estimators by exhaustive grid search.

Parameters:
estimatorestimator object

This is assumed to implement the scikit-learn estimator interface. Either estimator needs to provide a score function, or scoring must be passed.

param_griddict or list of dictionaries

Dictionary with parameters names (str) as keys and lists of parameter settings to try as values, or a list of such dictionaries, in which case the grids spanned by each dictionary in the list are explored. This enables searching over any sequence of parameter settings.
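
For illustration only (the parameter names here are assumptions about the tuned estimator, not part of this API), a single grid and a list of grids could look like:

# a single grid: every combination of the listed values is evaluated
param_grid = {"n_neighbors": [1, 3, 5], "distance": ["euclidean", "dtw"]}

# a list of grids: the union of the two (here disjoint) grids is searched
param_grid_list = [
    {"n_neighbors": [1, 3], "distance": ["euclidean"]},
    {"n_neighbors": [5], "distance": ["dtw"]},
]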

scoringstr, callable, list, tuple or dict, default=None

Strategy to evaluate the performance of the cross-validated model on the test set.

If scoring represents a single score, one can use:

  • a single string;

  • a callable that returns a single value.

If scoring represents multiple scores, one can use one of the following (an example is sketched after this list):

  • a list or tuple of unique strings;

  • a callable returning a dictionary where the keys are the metric names and the values are the metric scores;

  • a dictionary with metric names as keys and callables as values.
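
A minimal sketch of both variants using sklearn scorers (the metric choices are illustrative):

from sklearn.metrics import accuracy_score, balanced_accuracy_score, make_scorer

# single metric: a scorer name string known to sklearn, or a single callable
scoring_single = "accuracy"

# multiple metrics: dict mapping metric names to scorer callables
scoring_multi = {
    "acc": make_scorer(accuracy_score),
    "bal_acc": make_scorer(balanced_accuracy_score),
}
# with multi-metric scoring, refit must name one of these keys, e.g. refit="acc"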

n_jobsint, default=None

Number of jobs to run in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.

refitbool, str, or callable, default=True

Refit an estimator using the best found parameters on the whole dataset. If False, predict and predict_proba will not be available on this instance after fitting.

For multiple metric evaluation, this needs to be a str denoting the scorer that would be used to find the best parameters for refitting the estimator at the end.

Where there are considerations other than maximum score in choosing a best estimator, refit can be set to a function which returns the selected best_index_ given cv_results_. In that case, the best_estimator_ and best_params_ will be set according to the returned best_index_ while the best_score_ attribute will not be available.

The refitted estimator is made available at the best_estimator_ attribute and permits using predict directly on this GridSearchCV instance.

Also for multiple metric evaluation, the attributes best_index_, best_score_ and best_params_ will only be available if refit is set and all of them will be determined w.r.t this specific scorer.

See scoring parameter to know more about multiple metric evaluation.
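
As a minimal sketch of a callable refit (the tie-breaking rule and the "param_n_neighbors" grid key are assumptions, not part of this API), selecting the simplest near-best candidate could look like:

import numpy as np

def refit_simplest_near_best(cv_results):
    """Return best_index_: simplest candidate within one std of the best mean score."""
    mean = np.asarray(cv_results["mean_test_score"], dtype=float)
    std = np.asarray(cv_results["std_test_score"], dtype=float)
    threshold = mean.max() - std[mean.argmax()]
    candidates = np.flatnonzero(mean >= threshold)
    # assumed grid key; param_* entries hold the candidate values per configuration
    n_neighbors = np.asarray(cv_results["param_n_neighbors"], dtype=float)[candidates]
    return int(candidates[np.argmin(n_neighbors)])

# usage: pass refit=refit_simplest_near_best; best_score_ is then not available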

cvint, cross-validation generator or an iterable, default=None

Determines the cross-validation splitting strategy. Possible inputs for cv are:

  • None, to use the default 5-fold cross validation,

  • integer, to specify the number of folds in a (Stratified)KFold,

  • CV splitter,

  • An iterable yielding (train, test) splits as arrays of indices.

For integer/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. These splitters are instantiated with shuffle=False so the splits will be the same across calls.

Refer to the User Guide for the various cross-validation strategies that can be used here.
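
For example (the choice of splitter and seed is illustrative), shuffled and reproducible folds can be obtained by passing an explicit splitter instead of an integer:

from sklearn.model_selection import StratifiedKFold

# cv=5 (or None) uses unshuffled (Stratified)KFold; an explicit splitter overrides this
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)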

verboseint

Controls the verbosity: the higher, the more messages.

  • >1 : the computation time for each fold and parameter candidate is displayed;

  • >2 : the score is also displayed;

  • >3 : the fold and candidate parameter indexes are also displayed together with the starting time of the computation.

pre_dispatchint, or str, default=’2*n_jobs’

Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be:

  • None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs

  • An int, giving the exact number of total jobs that are spawned

  • A str, giving an expression as a function of n_jobs, as in ‘2*n_jobs’

error_score‘raise’ or numeric, default=np.nan

Value to assign to the score if an error occurs in estimator fitting. If set to ‘raise’, the error is raised. If a numeric value is given, FitFailedWarning is raised. This parameter does not affect the refit step, which will always raise the error.

return_train_scorebool, default=False

If False, the cv_results_ attribute will not include training scores. Computing training scores is used to get insights on how different parameter settings impact the overfitting/underfitting trade-off. However computing the scores on the training set can be computationally expensive and is not strictly required to select the parameters that yield the best generalization performance.

tune_by_variablebool, optional (default=False)

Whether to tune parameter by each time series variable separately, in case of multivariate data passed to the tuning estimator. Only applies if time series passed are strictly multivariate. If True, clones of the estimator will be fit to each variable separately, and are available in fields of the classifiers_ attribute. Has the same effect as applying ColumnEnsembleClassifier wrapper to self. If False, the same best parameter is selected for all variables.

Attributes:
cv_results_dict of numpy (masked) ndarrays

A dict with keys as column headers and values as columns, that can be imported into a pandas DataFrame.

For multi-metric evaluation, the scores for all the scorers are available in the cv_results_ dict at keys ending with that scorer’s name ('_<scorer_name>') instead of '_score' (e.g., ‘split0_test_precision’, ‘mean_train_precision’).

best_estimator_estimator

Estimator that was chosen by the search, i.e. estimator which gave highest score (or smallest loss if specified) on the left out data. Not available if refit=False.

See refit parameter for more information on allowed values.

best_score_float

Mean cross-validated score of the best_estimator_.

For multi-metric evaluation, this is present only if refit is specified.

This attribute is not available if refit is a function.

best_params_dict

Parameter setting that gave the best results on the hold out data.

For multi-metric evaluation, this is present only if refit is specified.

best_index_int

The index (of the cv_results_ arrays) which corresponds to the best candidate parameter setting.

The dict at search.cv_results_['params'][search.best_index_] gives the parameter setting for the best model, i.e., the one that gives the highest mean score (search.best_score_).

For multi-metric evaluation, this is present only if refit is specified.
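
As an illustrative helper (assuming a fitted search object), the best candidate's row of cv_results_ can be inspected via best_index_:

import pandas as pd

def best_candidate_row(search):
    """Return the cv_results_ row of the best candidate of a fitted search (sketch)."""
    results = pd.DataFrame(search.cv_results_)
    # the params dict at best_index_ coincides with best_params_
    assert search.cv_results_["params"][search.best_index_] == search.best_params_
    return results.loc[search.best_index_]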

scorer_function or a dict

Scorer function used on the held out data to choose the best parameters for the model.

For multi-metric evaluation, this attribute holds the validated scoring dict which maps the scorer key to the scorer callable.

n_splits_int

The number of cross-validation splits (folds/iterations).

refit_time_float

Seconds used for refitting the best model on the whole dataset.

This is present only if refit is not False.

multimetric_bool

Whether or not the scorers compute several metrics.

classes_ndarray of shape (n_classes,)

The classes labels. This is present only if refit is specified and the underlying estimator is a classifier.

n_features_in_int

Number of features seen during fit. Only defined if best_estimator_ is defined (see the documentation for the refit parameter for more details) and that best_estimator_ exposes n_features_in_ when fit.

feature_names_in_ndarray of shape (n_features_in_,)

Names of features seen during fit. Only defined if best_estimator_ is defined (see the documentation for the refit parameter for more details) and that best_estimator_ exposes feature_names_in_ when fit.
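
A minimal end-to-end sketch, assuming the import locations sktime.classification.model_selection and sktime.classification.distance_based, the n_neighbors hyper-parameter of the wrapped classifier, and synthetic numpy3D data:

import numpy as np
from sktime.classification.distance_based import KNeighborsTimeSeriesClassifier
from sktime.classification.model_selection import TSCGridSearchCV

# synthetic panel: 20 instances, 1 variable, series length 50, two classes
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 1, 50))
y = np.array([0, 1] * 10)

search = TSCGridSearchCV(
    estimator=KNeighborsTimeSeriesClassifier(),
    param_grid={"n_neighbors": [1, 3]},
    cv=3,
)
search.fit(X, y)

print(search.best_params_)   # parameter setting with the highest mean CV score
print(search.best_score_)    # its mean cross-validated accuracy
y_pred = search.predict(X)   # uses the refitted best_estimator_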

See also

ParameterGrid

Generates all the combinations of a hyperparameter grid.

train_test_split

Utility function to split the data into a development set usable for fitting a GridSearchCV instance and an evaluation set for its final evaluation.

sklearn.metrics.make_scorer

Make a scorer from a performance metric or loss function.

Methods

check_is_fitted()

Check if the estimator has been fitted.

clone()

Obtain a clone of the object with same hyper-parameters.

clone_tags(estimator[, tag_names])

Clone tags from another estimator as dynamic override.

create_test_instance([parameter_set])

Construct Estimator instance if possible.

create_test_instances_and_names([parameter_set])

Create list of all test instances and a list of names for them.

fit(X, y)

Fit time series classifier to training data.

fit_predict(X, y[, cv, change_state])

Fit and predict labels for sequences in X.

fit_predict_proba(X, y[, cv, change_state])

Fit and predict labels probabilities for sequences in X.

get_class_tag(tag_name[, tag_value_default])

Get a class tag's value.

get_class_tags()

Get class tags from the class and all its parent classes.

get_config()

Get config flags for self.

get_fitted_params([deep])

Get fitted parameters.

get_param_defaults()

Get object's parameter defaults.

get_param_names()

Get object's parameter names.

get_params([deep])

Get a dict of parameters values for this object.

get_tag(tag_name[, tag_value_default, ...])

Get tag value from estimator class and dynamic tag overrides.

get_tags()

Get tags from estimator class and dynamic tag overrides.

get_test_params([parameter_set])

Return testing parameter settings for the estimator.

is_composite()

Check if the object is composed of other BaseObjects.

load_from_path(serial)

Load object from file location.

load_from_serial(serial)

Load object from serialized memory container.

predict(X)

Predicts labels for sequences in X.

predict_proba(X)

Predicts labels probabilities for sequences in X.

reset()

Reset the object to a clean post-init state.

save([path, serialization_format])

Save serialized self to bytes-like object or to (.zip) file.

score(X, y)

Scores predicted labels against ground truth labels on X.

set_config(**config_dict)

Set config flags to given values.

set_params(**params)

Set the parameters of this object.

set_random_state([random_state, deep, ...])

Set random_state pseudo-random seed parameters for self.

set_tags(**tag_dict)

Set dynamic tags to given values.

classmethod get_test_params(parameter_set='default')[source]#

Return testing parameter settings for the estimator.

Parameters:
parameter_setstr, default=”default”

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return "default" set. For classifiers, a “default” set of parameters should be provided for general testing, and a “results_comparison” set for comparing against previously recorded results if the general set does not produce suitable probabilities to compare against.

Returns:
paramsdict or list of dict, default={}

Parameters to create testing instances of the class. Each dict contains parameters to construct an “interesting” test instance, i.e., MyClass(**params) or MyClass(**params[i]) creates a valid test instance. create_test_instance uses the first (or only) dictionary in params.

check_is_fitted()[source]#

Check if the estimator has been fitted.

Raises:
NotFittedError

If the estimator has not been fitted yet.

clone()[source]#

Obtain a clone of the object with same hyper-parameters.

A clone is a different object without shared references, in post-init state. This function is equivalent to returning sklearn.clone of self.

Raises:
RuntimeError if the clone is non-conforming, due to faulty __init__.

Notes

If successful, equal in value to type(self)(**self.get_params(deep=False)).

clone_tags(estimator, tag_names=None)[source]#

Clone tags from another estimator as dynamic override.

Parameters:
estimatorestimator inheriting from BaseEstimator
tag_namesstr or list of str, default = None

Names of tags to clone. If None then all tags in estimator are used as tag_names.

Returns:
Self

Reference to self.

Notes

Changes object state by setting tag values in tag_set from estimator as dynamic tags in self.

classmethod create_test_instance(parameter_set='default')[source]#

Construct Estimator instance if possible.

Parameters:
parameter_setstr, default=”default”

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.

Returns:
instanceinstance of the class with default parameters

Notes

get_test_params can return dict or list of dict. This function takes the first (or only) dict that get_test_params returns, and constructs the object with it.

classmethod create_test_instances_and_names(parameter_set='default')[source]#

Create list of all test instances and a list of names for them.

Parameters:
parameter_setstr, default=”default”

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.

Returns:
objslist of instances of cls

i-th instance is cls(**cls.get_test_params()[i])

nameslist of str, same length as objs

i-th element is the name of the i-th instance of obj in tests; the naming convention is {cls.__name__}-{i} if there is more than one instance, otherwise {cls.__name__}

fit(X, y)[source]#

Fit time series classifier to training data.

State change:

Changes state to “fitted”.

Writes to self:

Sets self.is_fitted to True. Sets fitted model attributes ending in “_”.

Parameters:
Xsktime compatible time series panel data container, Panel scitype, e.g.,

  • pd-multiindex: pd.DataFrame with columns = variables, index = pd.MultiIndex with first level = instance indices, second level = time indices

  • numpy3D: 3D np.array (any number of dimensions, equal length series) of shape [n_instances, n_dimensions, series_length]

  • or of any other supported Panel mtype

for list of mtypes, see datatypes.SCITYPE_REGISTER; for specifications, see examples/AA_datatypes_and_datasets.ipynb

ysktime compatible tabular data container, Table scitype

1D iterable, of shape [n_instances], or 2D iterable, of shape [n_instances, n_dimensions], of class labels for fitting

  • 0-th indices correspond to instance indices in X

  • 1-st indices (if applicable) correspond to multioutput vector indices in X

  • supported sktime types: np.ndarray (1D, 2D), pd.Series, pd.DataFrame

Returns:
selfReference to self.
fit_predict(X, y, cv=None, change_state=True)[source]#

Fit and predict labels for sequences in X.

Convenience method to produce in-sample predictions and cross-validated out-of-sample predictions.

Writes to self, if change_state=True:

Sets self.is_fitted to True. Sets fitted model attributes ending in “_”.

Does not update state if change_state=False.

Parameters:
Xsktime compatible time series panel data container, Panel scitype, e.g.,

  • pd-multiindex: pd.DataFrame with columns = variables, index = pd.MultiIndex with first level = instance indices, second level = time indices

  • numpy3D: 3D np.array (any number of dimensions, equal length series) of shape [n_instances, n_dimensions, series_length]

  • or of any other supported Panel mtype

for list of mtypes, see datatypes.SCITYPE_REGISTER; for specifications, see examples/AA_datatypes_and_datasets.ipynb

ysktime compatible tabular data container, Table scitype

1D iterable, of shape [n_instances], or 2D iterable, of shape [n_instances, n_dimensions], of class labels for fitting

  • 0-th indices correspond to instance indices in X

  • 1-st indices (if applicable) correspond to multioutput vector indices in X

  • supported sktime types: np.ndarray (1D, 2D), pd.Series, pd.DataFrame

cvNone, int, or sklearn cross-validation object, optional, default=None

  • None : predictions are in-sample, equivalent to fit(X, y).predict(X)

  • cv : predictions are equivalent to fit(X_train, y_train).predict(X_test), where multiple X_train, y_train, X_test are obtained from cv folds; the returned y is the union over all test fold predictions; cv test folds must be non-intersecting

  • int : equivalent to cv=KFold(cv, shuffle=True, random_state=x), i.e., k-fold cross-validation predictions out-of-sample; random_state x is taken from self if it exists, otherwise x=None

change_statebool, optional (default=True)

  • if False, will not change the state of the classifier, i.e., the fit/predict sequence is run on a copy and self does not change

  • if True, will fit self to the full X and y; the end state will be equivalent to running fit(X, y)

Returns:
y_predsktime compatible tabular data container, Table scitype

1D iterable, of shape [n_instances], or 2D iterable, of shape [n_instances, n_dimensions], of predicted class labels

  • 0-th indices correspond to instance indices in X

  • 1-st indices (if applicable) correspond to multioutput vector indices in X

  • 1D np.ndarray, if y univariate (one dimension); otherwise, same type as y passed in fit
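
A brief sketch of the three cv modes (the classifier, its n_neighbors parameter, and the synthetic data are illustrative assumptions):

import numpy as np
from sktime.classification.distance_based import KNeighborsTimeSeriesClassifier
from sktime.classification.model_selection import TSCGridSearchCV

X = np.random.default_rng(1).normal(size=(20, 1, 30))
y = np.array([0, 1] * 10)
search = TSCGridSearchCV(KNeighborsTimeSeriesClassifier(), {"n_neighbors": [1, 3]})

y_in = search.fit_predict(X, y)                    # cv=None: in-sample, fit(X, y).predict(X)
y_oos = search.fit_predict(X, y, cv=3)             # 3-fold out-of-sample predictions
y_copy = search.fit_predict(X, y, cv=3, change_state=False)  # self is left unchanged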

fit_predict_proba(X, y, cv=None, change_state=True)[source]#

Fit and predict labels probabilities for sequences in X.

Convenience method to produce in-sample predictions and cross-validated out-of-sample predictions.

Writes to self, if change_state=True:

Sets self.is_fitted to True. Sets fitted model attributes ending in “_”.

Does not update state if change_state=False.

Parameters:
Xsktime compatible time series panel data container, Panel scitype, e.g.,

  • pd-multiindex: pd.DataFrame with columns = variables, index = pd.MultiIndex with first level = instance indices, second level = time indices

  • numpy3D: 3D np.array (any number of dimensions, equal length series) of shape [n_instances, n_dimensions, series_length]

  • or of any other supported Panel mtype

for list of mtypes, see datatypes.SCITYPE_REGISTER; for specifications, see examples/AA_datatypes_and_datasets.ipynb

ysktime compatible tabular data container, Table scitype

1D iterable, of shape [n_instances], or 2D iterable, of shape [n_instances, n_dimensions], of class labels for fitting

  • 0-th indices correspond to instance indices in X

  • 1-st indices (if applicable) correspond to multioutput vector indices in X

  • supported sktime types: np.ndarray (1D, 2D), pd.Series, pd.DataFrame

cvNone, int, or sklearn cross-validation object, optional, default=None

  • None : predictions are in-sample, equivalent to fit(X, y).predict(X)

  • cv : predictions are equivalent to fit(X_train, y_train).predict(X_test), where multiple X_train, y_train, X_test are obtained from cv folds; the returned y is the union over all test fold predictions; cv test folds must be non-intersecting

  • int : equivalent to cv=KFold(int), i.e., k-fold cross-validation predictions

change_statebool, optional (default=True)

  • if False, will not change the state of the classifier, i.e., the fit/predict sequence is run on a copy and self does not change

  • if True, will fit self to the full X and y; the end state will be equivalent to running fit(X, y)

Returns:
y_pred2D np.array of float, of shape [n_instances, n_classes]

predicted class label probabilities

  • 0-th indices correspond to instance indices in X

  • 1-st indices correspond to class index, in same order as in self.classes_

  • entries are predictive class probabilities, summing to 1

classmethod get_class_tag(tag_name, tag_value_default=None)[source]#

Get a class tag’s value.

Does not return information from dynamic tags (set via set_tags or clone_tags) that are defined on instances.

Parameters:
tag_namestr

Name of tag value.

tag_value_defaultany

Default/fallback value if tag is not found.

Returns:
tag_value

Value of the tag_name tag in self. If not found, returns tag_value_default.

classmethod get_class_tags()[source]#

Get class tags from the class and all its parent classes.

Retrieves tag: value pairs from _tags class attribute. Does not return information from dynamic tags (set via set_tags or clone_tags) that are defined on instances.

Returns:
collected_tagsdict

Dictionary of class tag name: tag value pairs. Collected from _tags class attribute via nested inheritance.

get_config()[source]#

Get config flags for self.

Returns:
config_dictdict

Dictionary of config name : config value pairs. Collected from _config class attribute via nested inheritance and then any overrides and new tags from _config_dynamic object attribute.

get_fitted_params(deep=True)[source]#

Get fitted parameters.

State required:

Requires state to be “fitted”.

Parameters:
deepbool, default=True

Whether to return fitted parameters of components.

  • If True, will return a dict of parameter name : value for this object, including fitted parameters of fittable components (= BaseEstimator-valued parameters).

  • If False, will return a dict of parameter name : value for this object, but not include fitted parameters of components.

Returns:
fitted_paramsdict with str-valued keys

Dictionary of fitted parameters, paramname : paramvalue key-value pairs include:

  • always: all fitted parameters of this object, as via get_param_names; values are the fitted parameter values for that key, of this object

  • if deep=True, also contains key/value pairs of component parameters; parameters of components are indexed as [componentname]__[paramname]; all parameters of componentname appear as paramname with its value

  • if deep=True, also contains arbitrary levels of component recursion, e.g., [componentname]__[componentcomponentname]__[paramname], etc.

classmethod get_param_defaults()[source]#

Get object’s parameter defaults.

Returns:
default_dict: dict[str, Any]

Keys are all parameters of cls that have a default defined in __init__ values are the defaults, as defined in __init__.

classmethod get_param_names()[source]#

Get object’s parameter names.

Returns:
param_names: list[str]

Alphabetically sorted list of parameter names of cls.

get_params(deep=True)[source]#

Get a dict of parameters values for this object.

Parameters:
deepbool, default=True

Whether to return parameters of components.

  • If True, will return a dict of parameter name : value for this object, including parameters of components (= BaseObject-valued parameters).

  • If False, will return a dict of parameter name : value for this object, but not include parameters of components.

Returns:
paramsdict with str-valued keys

Dictionary of parameters, paramname : paramvalue key-value pairs include:

  • always: all parameters of this object, as via get_param_names; values are the parameter values for that key, of this object, and are always identical to the values passed at construction

  • if deep=True, also contains key/value pairs of component parameters; parameters of components are indexed as [componentname]__[paramname]; all parameters of componentname appear as paramname with its value

  • if deep=True, also contains arbitrary levels of component recursion, e.g., [componentname]__[componentcomponentname]__[paramname], etc.
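
For instance (the component parameter name is an assumption about the wrapped classifier), deep vs. shallow parameter retrieval on this composite:

from sktime.classification.distance_based import KNeighborsTimeSeriesClassifier
from sktime.classification.model_selection import TSCGridSearchCV

search = TSCGridSearchCV(KNeighborsTimeSeriesClassifier(), {"n_neighbors": [1, 3]})
shallow = search.get_params(deep=False)  # only TSCGridSearchCV's own parameters
deep = search.get_params(deep=True)      # also component keys such as "estimator__n_neighbors"
print("estimator__n_neighbors" in deep, "estimator__n_neighbors" in shallow)  # True False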

get_tag(tag_name, tag_value_default=None, raise_error=True)[source]#

Get tag value from estimator class and dynamic tag overrides.

Parameters:
tag_namestr

Name of tag to be retrieved

tag_value_defaultany type, optional; default=None

Default/fallback value if tag is not found

raise_errorbool

whether a ValueError is raised when the tag is not found

Returns:
tag_valueAny

Value of the tag_name tag in self. If not found, raises an error if raise_error is True, otherwise returns tag_value_default.

Raises:
ValueError if raise_error is True, i.e., if tag_name is not in self.get_tags().keys()
get_tags()[source]#

Get tags from estimator class and dynamic tag overrides.

Returns:
collected_tagsdict

Dictionary of tag name : tag value pairs. Collected from _tags class attribute via nested inheritance and then any overrides and new tags from _tags_dynamic object attribute.

is_composite()[source]#

Check if the object is composed of other BaseObjects.

A composite object is an object which contains objects, as parameters. Called on an instance, since this may differ by instance.

Returns:
composite: bool

Whether an object has any parameters whose values are BaseObjects.

property is_fitted[source]#

Whether fit has been called.

classmethod load_from_path(serial)[source]#

Load object from file location.

Parameters:
serialresult of ZipFile(path).open(“object”)
Returns:
deserialized self resulting in output at path, of cls.save(path)
classmethod load_from_serial(serial)[source]#

Load object from serialized memory container.

Parameters:
serial1st element of output of cls.save(None)
Returns:
deserialized self resulting in output serial, of cls.save(None)
predict(X)[source]#

Predicts labels for sequences in X.

Parameters:
Xsktime compatible time series panel data container, Panel scitype, e.g.,

  • pd-multiindex: pd.DataFrame with columns = variables, index = pd.MultiIndex with first level = instance indices, second level = time indices

  • numpy3D: 3D np.array (any number of dimensions, equal length series) of shape [n_instances, n_dimensions, series_length]

  • or of any other supported Panel mtype

for list of mtypes, see datatypes.SCITYPE_REGISTER; for specifications, see examples/AA_datatypes_and_datasets.ipynb

Returns:
y_predsktime compatible tabular data container, Table scitype

1D iterable, of shape [n_instances], or 2D iterable, of shape [n_instances, n_dimensions], of predicted class labels

  • 0-th indices correspond to instance indices in X

  • 1-st indices (if applicable) correspond to multioutput vector indices in X

  • 1D np.ndarray, if y univariate (one dimension); otherwise, same type as y passed in fit

predict_proba(X)[source]#

Predicts labels probabilities for sequences in X.

Parameters:
Xsktime compatible time series panel data container, Panel scitype, e.g.,

  • pd-multiindex: pd.DataFrame with columns = variables, index = pd.MultiIndex with first level = instance indices, second level = time indices

  • numpy3D: 3D np.array (any number of dimensions, equal length series) of shape [n_instances, n_dimensions, series_length]

  • or of any other supported Panel mtype

for list of mtypes, see datatypes.SCITYPE_REGISTER; for specifications, see examples/AA_datatypes_and_datasets.ipynb

Returns:
y_pred2D np.array of float, of shape [n_instances, n_classes]

predicted class label probabilities

  • 0-th indices correspond to instance indices in X

  • 1-st indices correspond to class index, in same order as in self.classes_

  • entries are predictive class probabilities, summing to 1
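
An illustrative helper (a fitted search object is assumed) showing how the probability columns align with classes_:

import numpy as np

def proba_to_labels(search, X):
    """Map predict_proba output back to class labels via classes_ (sketch)."""
    proba = search.predict_proba(X)            # shape (n_instances, n_classes)
    assert np.allclose(proba.sum(axis=1), 1.0) # rows are probability distributions
    return search.classes_[proba.argmax(axis=1)]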

reset()[source]#

Reset the object to a clean post-init state.

Using reset, runs __init__ with current values of hyper-parameters (result of get_params). This removes any object attributes, except:

  • hyper-parameters = arguments of __init__

  • object attributes containing double-underscores, i.e., the string “__”

Class and object methods, and class attributes are also unaffected.

Returns:
self

Instance of class reset to a clean post-init state but retaining the current hyper-parameter values.

Notes

Equivalent to sklearn.clone but overwrites self. After self.reset() call, self is equal in value to type(self)(**self.get_params(deep=False))

save(path=None, serialization_format='pickle')[source]#

Save serialized self to bytes-like object or to (.zip) file.

Behaviour: if path is None, returns an in-memory serialized self; if path is a file location, stores self at that location as a zip file.

Saved files are zip files with the following contents: _metadata - contains the class of self, i.e., type(self); _obj - serialized self. This class uses the default serialization (pickle).

Parameters:
pathNone or file location (str or Path)

if None, self is saved to an in-memory object; if a file location, self is saved to that file location. For example:

  • path=”estimator” makes a zip file estimator.zip at cwd

  • path=”/home/stored/estimator” stores a zip file estimator.zip in /home/stored/

serialization_format: str, default = “pickle”

Module to use for serialization. The available options are “pickle” and “cloudpickle”. Note that non-default formats might require installation of other soft dependencies.

Returns:
if path is None - in-memory serialized self
if path is file location - ZipFile with reference to the file
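
A sketch of the documented round trip (a fitted search object is assumed; the default pickle serialization is used):

def save_and_restore_in_memory(search):
    """Round-trip a fitted search through in-memory serialization (sketch)."""
    serial = search.save()                            # path=None: in-memory serialized self
    return type(search).load_from_serial(serial[0])   # docs: pass the 1st element of save(None)

# file-based alternative: search.save("tuned_search") writes tuned_search.zip in the cwd
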
score(X, y) float[source]#

Scores predicted labels against ground truth labels on X.

Parameters:
Xsktime compatible time series panel data container, e.g.,

pd-multiindex: pd.DataFrame with columns = variables, index = pd.MultiIndex with first level = instance indices, second level = time indices numpy3D: 3D np.array (any number of dimensions, equal length series) of shape [n_instances, n_dimensions, series_length] or of any other supported Panel mtype for list of mtypes, see datatypes.SCITYPE_REGISTER for specifications, see examples/AA_datatypes_and_datasets.ipynb

ysktime compatible tabular data container, Table scitype

1D iterable, of shape [n_instances] or 2D iterable, of shape [n_instances, n_dimensions] class labels for fitting 0-th indices correspond to instance indices in X 1-st indices (if applicable) correspond to multioutput vector indices in X supported sktime types: np.ndarray (1D, 2D), pd.Series, pd.DataFrame

Returns:
float, accuracy score of predict(X) vs y
set_config(**config_dict)[source]#

Set config flags to given values.

Parameters:
config_dictdict

Dictionary of config name : config value pairs. Valid configs, values, and their meaning is listed below:

displaystr, “diagram” (default), or “text”

how jupyter kernels display instances of self

  • “diagram” = html box diagram representation

  • “text” = string printout

print_changed_onlybool, default=True

whether printing of self lists only self-parameters that differ from defaults (True), or all parameter names and values (False). Does not nest, i.e., only affects self and not component estimators.

warningsstr, “on” (default), or “off”

whether to raise warnings, affects warnings from sktime only

  • “on” = will raise warnings from sktime

  • “off” = will not raise warnings from sktime

backend:parallelstr, optional, default=”None”

backend to use for parallelization when broadcasting/vectorizing, one of

  • “None”: executes loop sequentially, simple list comprehension

  • “loky”, “multiprocessing” and “threading”: uses joblib.Parallel

  • “joblib”: custom and 3rd party joblib backends, e.g., spark

  • “dask”: uses dask, requires dask package in environment

backend:parallel:paramsdict, optional, default={} (no parameters passed)

additional parameters passed to the parallelization backend as config. Valid keys depend on the value of backend:parallel:

  • “None”: no additional parameters, backend_params is ignored

  • “loky”, “multiprocessing” and “threading”: default joblib backends any valid keys for joblib.Parallel can be passed here, e.g., n_jobs, with the exception of backend which is directly controlled by backend. If n_jobs is not passed, it will default to -1, other parameters will default to joblib defaults.

  • “joblib”: custom and 3rd party joblib backends, e.g., spark. Any valid keys for joblib.Parallel can be passed here, e.g., n_jobs, backend must be passed as a key of backend_params in this case. If n_jobs is not passed, it will default to -1, other parameters will default to joblib defaults.

  • “dask”: any valid keys for dask.compute can be passed, e.g., scheduler

Returns:
selfreference to self.

Notes

Changes object state, copies configs in config_dict to self._config_dynamic.
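
An illustrative configuration sketch (values taken from the list above; the ** expansion is how the colon-containing config names can be passed as keyword arguments):

from sktime.classification.distance_based import KNeighborsTimeSeriesClassifier
from sktime.classification.model_selection import TSCGridSearchCV

search = TSCGridSearchCV(KNeighborsTimeSeriesClassifier(), {"n_neighbors": [1, 3]})
search.set_config(**{
    "warnings": "off",                        # silence sktime warnings
    "backend:parallel": "loky",               # joblib backend for broadcasting/vectorizing
    "backend:parallel:params": {"n_jobs": 2}, # passed through to joblib.Parallel
})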

set_params(**params)[source]#

Set the parameters of this object.

The method works on simple estimators as well as on composite objects. Parameter key strings <component>__<parameter> can be used for composites, i.e., objects that contain other objects, to access <parameter> in the component <component>. The string <parameter>, without <component>__, can also be used if this makes the reference unambiguous, e.g., there are no two parameters of components with the name <parameter>.

Parameters:
**paramsdict

BaseObject parameters, keys must be <component>__<parameter> strings. __ suffixes can alias full strings, if unique among get_params keys.

Returns:
selfreference to self (after parameters have been set)
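
For example (the component parameter name is an assumption about the wrapped classifier), composite access with the __ syntax:

from sktime.classification.distance_based import KNeighborsTimeSeriesClassifier
from sktime.classification.model_selection import TSCGridSearchCV

search = TSCGridSearchCV(KNeighborsTimeSeriesClassifier(), {"n_neighbors": [1, 3]})
# <component>__<parameter> reaches into the wrapped estimator; plain names set own params
search.set_params(estimator__n_neighbors=5, cv=3)
print(search.get_params()["estimator__n_neighbors"])  # 5
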
set_random_state(random_state=None, deep=True, self_policy='copy')[source]#

Set random_state pseudo-random seed parameters for self.

Finds random_state named parameters via estimator.get_params, and sets them to integers derived from random_state via set_params. These integers are sampled from chain hashing via sample_dependent_seed, and guarantee pseudo-random independence of seeded random generators.

Applies to random_state parameters in estimator depending on self_policy, and remaining component estimators if and only if deep=True.

Note: calls set_params even if self does not have a random_state, or none of the components have a random_state parameter. Therefore, set_random_state will reset any scikit-base estimator, even those without a random_state parameter.

Parameters:
random_stateint, RandomState instance or None, default=None

Pseudo-random number generator to control the generation of the random integers. Pass int for reproducible output across multiple function calls.

deepbool, default=True

Whether to set the random state in sub-estimators. If False, will set only self’s random_state parameter, if exists. If True, will set random_state parameters in sub-estimators as well.

self_policystr, one of {“copy”, “keep”, “new”}, default=”copy”

  • “copy” : estimator.random_state is set to input random_state

  • “keep” : estimator.random_state is kept as is

  • “new” : estimator.random_state is set to a new random state, derived from input random_state, and in general different from it

Returns:
selfreference to self
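
A short sketch (classifier and grid are illustrative); as noted above, the call succeeds even when neither self nor its components expose a random_state parameter:

from sktime.classification.distance_based import KNeighborsTimeSeriesClassifier
from sktime.classification.model_selection import TSCGridSearchCV

search = TSCGridSearchCV(KNeighborsTimeSeriesClassifier(), {"n_neighbors": [1, 3]})
# seeds any random_state parameters found via get_params (deep=True includes components)
search.set_random_state(random_state=42, deep=True)
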
set_tags(**tag_dict)[source]#

Set dynamic tags to given values.

Parameters:
**tag_dictdict

Dictionary of tag name: tag value pairs.

Returns:
Self

Reference to self.

Notes

Changes object state by setting tag values in tag_dict as dynamic tags in self.