ComposableTimeSeriesForestClassifier#

class ComposableTimeSeriesForestClassifier(estimator=None, n_estimators=100, max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, bootstrap=False, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None, max_samples=None, criterion='gini')[source]#

Time Series Forest Classifier as described in [1].

A time series forest is an adaptation of the random forest to time-series data. It fits a number of decision tree classifiers on various sub-samples of a transformed dataset and uses averaging to improve predictive accuracy and control over-fitting. The sub-sample size is always the same as the original input sample size, and the samples are drawn with replacement if bootstrap=True.

Parameters:
estimatorPipeline

A pipeline consisting of series-to-tabular transformations and a decision tree classifier as final estimator.

n_estimatorsinteger, optional (default=100)

The number of trees in the forest.

max_depthinteger or None, optional (default=None)

The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.

min_samples_splitint, float, optional (default=2)

The minimum number of samples required to split an internal node:

  • If int, then consider min_samples_split as the minimum number.

  • If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) are the minimum number of samples for each split.

min_samples_leafint, float, optional (default=1)

The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.

  • If int, then consider min_samples_leaf as the minimum number.

  • If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) are the minimum number of samples for each node.
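The int-versus-fraction resolution above can be sketched as follows (resolve_min_samples is an illustrative helper, not part of the API):

```python
from math import ceil

def resolve_min_samples(value, n_samples):
    """Resolve an int-or-float min_samples_* setting to an absolute count,
    mirroring the ceil(fraction * n_samples) rule described above."""
    if isinstance(value, float):
        return ceil(value * n_samples)
    return value

print(resolve_min_samples(2, 100))     # int passed through -> 2
print(resolve_min_samples(0.05, 100))  # ceil(0.05 * 100) -> 5
```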

min_weight_fraction_leaffloat, optional (default=0.)

The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.

max_featuresint, float, string or None, optional (default=None)

The number of features to consider when looking for the best split:

  • If int, then consider max_features features at each split.

  • If float, then max_features is a fraction and int(max_features * n_features) features are considered at each split.

  • If “auto”, then max_features=sqrt(n_features).

  • If “sqrt”, then max_features=sqrt(n_features) (same as “auto”).

  • If “log2”, then max_features=log2(n_features).

  • If None, then max_features=n_features.

Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than max_features features.

max_leaf_nodesint or None, optional (default=None)

Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined by relative reduction in impurity. If None, the number of leaf nodes is unlimited.

min_impurity_decreasefloat, optional (default=0.)

A node will be split if this split induces a decrease of the impurity greater than or equal to this value. The weighted impurity decrease equation is the following:

N_t / N * (impurity - N_t_R / N_t * right_impurity
                    - N_t_L / N_t * left_impurity)

where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child. N, N_t, N_t_R and N_t_L all refer to the weighted sum, if sample_weight is passed.
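As a worked check of the formula above (weighted_impurity_decrease is an illustrative helper, not part of the API):

```python
def weighted_impurity_decrease(N, N_t, N_t_L, N_t_R,
                               impurity, left_impurity, right_impurity):
    """Evaluate the split criterion given above:
    N_t / N * (impurity - N_t_R / N_t * right_impurity
                        - N_t_L / N_t * left_impurity)."""
    return N_t / N * (impurity
                      - N_t_R / N_t * right_impurity
                      - N_t_L / N_t * left_impurity)

# Toy split: 100 samples at the node out of 200 total; the split sends
# 60 samples left and 40 right.
decrease = weighted_impurity_decrease(
    N=200, N_t=100, N_t_L=60, N_t_R=40,
    impurity=0.5, left_impurity=0.3, right_impurity=0.2,
)
print(round(decrease, 3))  # 0.5 * (0.5 - 0.4*0.2 - 0.6*0.3) -> 0.12
```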

bootstrapboolean, optional (default=False)

Whether bootstrap samples are used when building trees.

oob_scorebool (default=False)

Whether to use out-of-bag samples to estimate the generalization accuracy.

n_jobsint or None, optional (default=None)

The number of jobs to run in parallel for both fit and predict. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors.

random_stateint, RandomState instance or None, optional (default=None)

If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

verboseint, optional (default=0)

Controls the verbosity when fitting and predicting.

warm_startbool, optional (default=False)

When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble, otherwise, just fit a whole new forest.

class_weightdict, list of dicts, “balanced”, “balanced_subsample” or None, optional (default=None)

Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. For multi-output problems, a list of dicts can be provided in the same order as the columns of y.

Note that for multioutput (including multilabel) weights should be defined for each class of every column in its own dict. For example, for four-class multilabel classification weights should be [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of [{1: 1}, {2: 5}, {3: 1}, {4: 1}].

The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). The “balanced_subsample” mode is the same as “balanced” except that weights are computed based on the bootstrap sample for every tree grown. For multi-output, the weights of each column of y will be multiplied.

Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.
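The “balanced” weighting rule can be reproduced directly with numpy (a minimal sketch on made-up labels):

```python
import numpy as np

# Illustration of the "balanced" heuristic described above:
# weight_c = n_samples / (n_classes * np.bincount(y))[c]
y = np.array([0, 0, 0, 1])  # class 0 is three times as frequent as class 1

counts = np.bincount(y)
weights = len(y) / (len(counts) * counts)
print(dict(enumerate(weights)))  # the rarer class gets the larger weight
```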

max_samplesint or float, default=None

If bootstrap is True, the number of samples to draw from X to train each base estimator:

  • If None (default), then draw X.shape[0] samples.

  • If int, then draw max_samples samples.

  • If float, then draw max_samples * X.shape[0] samples; in this case, max_samples should be in the interval (0, 1).

criterion{“gini”, “entropy”, “log_loss”}, default=”gini”

The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “log_loss” and “entropy” both for the Shannon information gain, see Mathematical formulation. Note: This parameter is tree-specific.

Attributes:
estimators_list of DecisionTreeClassifier

The collection of fitted sub-estimators.

classes_array of shape = [n_classes] or a list of such arrays

The class labels (single output problem), or a list of arrays of class labels (multi-output problem).

n_classes_int or list

The number of classes (single output problem), or a list containing the number of classes for each output (multi-output problem).

n_columnsint

The number of features when fit is performed.

n_outputs_int

The number of outputs when fit is performed.

feature_importances_data frame of shape = [n_timepoints, n_features]

The feature importances computed for the time series forest.

oob_score_float

Score of the training dataset obtained using an out-of-bag estimate.

oob_decision_function_array of shape = [n_samples, n_classes]

Decision function computed with out-of-bag estimate on the training set. If n_estimators is small, it might be possible that a data point was never left out during the bootstrap. In this case, oob_decision_function_ might contain NaN.
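For intuition about the criterion options, the two impurity measures can be computed as follows (a minimal sketch of the measures themselves; the fitted DecisionTreeClassifier sub-estimators implement them internally):

```python
import math

def gini(p):
    """Gini impurity for a class-probability vector p."""
    return 1.0 - sum(q * q for q in p)

def entropy(p):
    """Shannon entropy (in bits) for a class-probability vector p."""
    return -sum(q * math.log2(q) for q in p if q > 0)

p = [0.5, 0.5]          # maximally impure binary node
print(gini(p))          # 0.5
print(entropy(p))       # 1.0
print(gini([1.0, 0.0])) # 0.0, a pure node
```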

References

[1]

Deng et al., “A time series forest for classification and feature extraction”, Information Sciences, 239, 2013.

Examples

>>> from sktime.classification.ensemble import ComposableTimeSeriesForestClassifier
>>> from sktime.classification.kernel_based import RocketClassifier
>>> from sktime.datasets import load_unit_test
>>> X_train, y_train = load_unit_test(split="train") 
>>> X_test, y_test = load_unit_test(split="test") 
>>> clf = ComposableTimeSeriesForestClassifier(
...     RocketClassifier(num_kernels=100),
...     n_estimators=10,
... )  
>>> clf.fit(X_train, y_train)  
ComposableTimeSeriesForestClassifier(...)
>>> y_pred = clf.predict(X_test)  

Methods

apply(X)

Abstract method that is implemented by concrete estimators.

check_is_fitted([method_name])

Check if the estimator has been fitted.

clone()

Obtain a clone of the object with same hyper-parameters and config.

clone_tags(estimator[, tag_names])

Clone tags from another object as dynamic override.

create_test_instance([parameter_set])

Construct an instance of the class, using first test parameter set.

create_test_instances_and_names([parameter_set])

Create list of all test instances and a list of names for them.

decision_path(X)

Decision path of decision tree.

fit(X, y, **kwargs)

Wrap fit to call BaseClassifier.fit.

fit_predict(X, y[, cv, change_state])

Fit and predict labels for sequences in X.

fit_predict_proba(X, y[, cv, change_state])

Fit and predict labels probabilities for sequences in X.

get_class_tag(tag_name[, tag_value_default])

Get class tag value from class, with tag level inheritance from parents.

get_class_tags()

Get class tags from class, with tag level inheritance from parent classes.

get_config()

Get config flags for self.

get_fitted_params([deep])

Get fitted parameters.

get_metadata_routing()

Get metadata routing of this object.

get_param_defaults()

Get object's parameter defaults.

get_param_names([sort])

Get object's parameter names.

get_params([deep])

Get parameters for this estimator.

get_tag(tag_name[, tag_value_default, ...])

Get tag value from instance, with tag level inheritance and overrides.

get_tags()

Get tags from instance, with tag level inheritance and overrides.

get_test_params([parameter_set])

Return testing parameter settings for the estimator.

is_composite()

Check if the object is composed of other BaseObjects.

load_from_path(serial)

Load object from file location.

load_from_serial(serial)

Load object from serialized memory container.

predict(X, **kwargs)

Wrap predict to call BaseClassifier.predict.

predict_log_proba(X)

Predict class log-probabilities for X.

predict_proba(X, **kwargs)

Wrap predict_proba to call BaseClassifier.predict_proba.

reset()

Reset the object to a clean post-init state.

save([path, serialization_format])

Save serialized self to bytes-like object or to (.zip) file.

score(X, y)

Scores predicted labels against ground truth labels on X.

set_config(**config_dict)

Set config flags to given values.

set_fit_request(*[, sample_weight])

Request metadata passed to the fit method.

set_params(**params)

Set the parameters of this estimator.

set_random_state([random_state, deep, ...])

Set random_state pseudo-random seed parameters for self.

set_tags(**tag_dict)

Set instance level tag overrides to given values.

fit(X, y, **kwargs)[source]#

Wrap fit to call BaseClassifier.fit.

This is a workaround for a problem with multiple inheritance: if only _fit were overridden, this class would inherit fit from the sklearn-style base class BaseTimeSeriesForest rather than from BaseClassifier. Wrapping fit is the simplest solution, albeit a little hacky.
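The inheritance issue can be illustrated with a toy class hierarchy (SklearnStyleBase, Composable, etc. are hypothetical names, not sktime classes):

```python
# With two bases both defining fit(), Python's MRO picks the first base,
# so an explicit wrapper is needed to route the call to the second one.

class SklearnStyleBase:
    def fit(self, X, y):
        return "sklearn-style fit"

class BaseClassifier:
    def fit(self, X, y):
        return "BaseClassifier.fit"

class Composable(SklearnStyleBase, BaseClassifier):
    def fit(self, X, y, **kwargs):
        # Without this wrapper, Composable().fit would resolve to
        # SklearnStyleBase.fit via the MRO.
        return BaseClassifier.fit(self, X, y, **kwargs)

print(Composable().fit(None, None))  # BaseClassifier.fit
```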

predict(X, **kwargs) ndarray[source]#

Wrap predict to call BaseClassifier.predict.

predict_proba(X, **kwargs) ndarray[source]#

Wrap predict_proba to call BaseClassifier.predict_proba.

predict_log_proba(X)[source]#

Predict class log-probabilities for X.

The predicted class log-probabilities of an input sample are computed as the log of the mean predicted class probabilities of the trees in the forest.
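The averaging described above reduces to taking a log of the per-tree mean, which can be sketched in numpy with made-up probabilities:

```python
import numpy as np

# Per-tree predicted class probabilities for a single sample
# (3 hypothetical trees, 2 classes):
tree_probas = np.array([
    [0.8, 0.2],
    [0.6, 0.4],
    [0.7, 0.3],
])

mean_proba = tree_probas.mean(axis=0)  # [0.7, 0.3]
log_proba = np.log(mean_proba)         # what predict_log_proba returns
print(np.round(mean_proba, 3))
print(np.round(log_proba, 3))
```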

Parameters:
Xarray-like or sparse matrix of shape (n_samples, n_features)

The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.

Returns:
parray of shape (n_samples, n_classes), or a list of n_outputs such arrays if n_outputs > 1

The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.

classmethod get_test_params(parameter_set='default')[source]#

Return testing parameter settings for the estimator.

Parameters:
parameter_setstr, default=”default”

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return "default" set. For classifiers, a “default” set of parameters should be provided for general testing, and a “results_comparison” set for comparing against previously recorded results if the general set does not produce suitable probabilities to compare against.

Returns:
paramsdict or list of dict, default={}

Parameters to create testing instances of the class. Each dict contains parameters to construct an “interesting” test instance, i.e., MyClass(**params) or MyClass(**params[i]) creates a valid test instance. create_test_instance uses the first (or only) dictionary in params.

apply(X)[source]#

Abstract method that is implemented by concrete estimators.

check_is_fitted(method_name=None)[source]#

Check if the estimator has been fitted.

Check if the _is_fitted attribute is present and True. The _is_fitted attribute should be set to True in calls to an object’s fit method.

If not, raises a NotFittedError.

Parameters:
method_namestr, optional

Name of the method that called this function. If provided, the error message will include this information.

Raises:
NotFittedError

If the estimator has not been fitted yet.
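The fitted-state pattern described above can be sketched as follows (NotFittedError and Estimator here are toy stand-ins, not the sktime classes):

```python
class NotFittedError(Exception):
    pass

class Estimator:
    def __init__(self):
        self._is_fitted = False

    def fit(self, X, y):
        # ... fit logic would go here ...
        self._is_fitted = True
        return self

    def check_is_fitted(self, method_name=None):
        # Raise if the _is_fitted flag is missing or False.
        if not getattr(self, "_is_fitted", False):
            where = f" in {method_name}" if method_name else ""
            raise NotFittedError(f"Estimator is not fitted yet{where}.")

est = Estimator()
try:
    est.check_is_fitted("predict")
except NotFittedError as e:
    print(e)           # raised before fit
est.fit(None, None)
est.check_is_fitted()  # passes silently after fit
```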

clone()[source]#

Obtain a clone of the object with same hyper-parameters and config.

A clone is a different object without shared references, in post-init state. This function is equivalent to returning sklearn.clone of self.

Equivalent to constructing a new instance of type(self), with parameters of self, that is, type(self)(**self.get_params(deep=False)).

If configs were set on self, the clone will also have the same configs as the original, equivalent to calling cloned_self.set_config(**self.get_config()).

Also equivalent in value to a call of self.reset, with the exception that clone returns a new object, instead of mutating self like reset.

Raises:
RuntimeError if the clone is non-conforming, due to faulty __init__.
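The construction equivalence type(self)(**self.get_params(deep=False)) can be demonstrated on a toy class (Toy is illustrative only, not the scikit-base implementation):

```python
class Toy:
    def __init__(self, a=1, b=2):
        self.a = a
        self.b = b

    def get_params(self, deep=False):
        return {"a": self.a, "b": self.b}

    def clone(self):
        # clone == fresh construction from own hyper-parameters
        return type(self)(**self.get_params(deep=False))

orig = Toy(a=10)
copy = orig.clone()
print(copy.a, copy.b)  # same hyper-parameters
print(copy is orig)    # False: no shared reference
```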
clone_tags(estimator, tag_names=None)[source]#

Clone tags from another object as dynamic override.

Every scikit-base compatible object has a dictionary of tags. Tags may be used to store metadata about the object, or to control behaviour of the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object.

clone_tags sets dynamic tag overrides from another object, estimator.

The clone_tags method should be called only in the __init__ method of an object, during construction, or directly after construction.

The dynamic tags are set to the values of the tags in estimator, with the names specified in tag_names.

The default of tag_names writes all tags from estimator to self.

Current tag values can be inspected by get_tags or get_tag.

Parameters:
estimatorAn instance of BaseObject or a derived class
tag_namesstr or list of str, default = None

Names of tags to clone. The default (None) clones all tags from estimator.

Returns:
self

Reference to self.

classmethod create_test_instance(parameter_set='default')[source]#

Construct an instance of the class, using first test parameter set.

Parameters:
parameter_setstr, default=”default”

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.

Returns:
instanceinstance of the class with default parameters
classmethod create_test_instances_and_names(parameter_set='default')[source]#

Create list of all test instances and a list of names for them.

Parameters:
parameter_setstr, default=”default”

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.

Returns:
objslist of instances of cls

i-th instance is cls(**cls.get_test_params()[i])

nameslist of str, same length as objs

i-th element is name of i-th instance of obj in tests. The naming convention is {cls.__name__}-{i} if more than one instance, otherwise {cls.__name__}

decision_path(X)[source]#

Decision path of decision tree.

Abstract method that is implemented by concrete estimators.

property estimators_samples_[source]#

The subset of drawn samples for each base estimator.

Returns a dynamically generated list of indices identifying the samples used for fitting each member of the ensemble, i.e., the in-bag samples.

Note: the list is re-created at each call to the property in order to reduce the object memory footprint by not storing the sampling data. Thus fetching the property may be slower than expected.

property feature_importances_[source]#

Compute feature importances for time series forest.

fit_predict(X, y, cv=None, change_state=True)[source]#

Fit and predict labels for sequences in X.

Convenience method to produce in-sample predictions and cross-validated out-of-sample predictions.

Writes to self, if change_state=True:

Sets self.is_fitted to True. Sets fitted model attributes ending in “_”.

Does not update state if change_state=False.

Parameters:
Xsktime compatible time series panel data container of Panel scitype

time series to fit to and predict labels for.

Can be in any mtype of Panel scitype, for instance:

  • pd-multiindex: pd.DataFrame with columns = variables, index = pd.MultiIndex with first level = instance indices, second level = time indices

  • numpy3D: 3D np.array (any number of variables, equal length series) of shape [n_instances, n_dimensions, series_length]

  • or of any other supported Panel mtype

for list of mtypes, see datatypes.SCITYPE_REGISTER

for specifications, see examples/AA_datatypes_and_datasets.ipynb

Not all estimators support panels with multivariate or unequal length series, see the tag reference for details.

ysktime compatible tabular data container, Table scitype

1D iterable, of shape [n_instances], or 2D iterable, of shape [n_instances, n_dimensions]. Class labels for fitting. 0-th indices correspond to instance indices in X; 1-st indices (if applicable) correspond to multioutput vector indices in X. Supported sktime types: np.ndarray (1D, 2D), pd.Series, pd.DataFrame.

cvNone, int, or sklearn cross-validation object, optional, default=None
  • None : predictions are in-sample, equivalent to fit(X, y).predict(X)

  • cv : predictions are equivalent to fit(X_train, y_train).predict(X_test), where multiple X_train, y_train, X_test are obtained from cv folds. returned y is union over all test fold predictions, cv test folds must be non-intersecting

  • int : equivalent to cv=KFold(cv, shuffle=True, random_state=x), i.e., k-fold cross-validation predictions out-of-sample, and where random_state x is taken from self if exists, otherwise x=None

change_statebool, optional (default=True)
  • if False, will not change the state of the classifier, i.e., fit/predict sequence is run with a copy, self does not change

  • if True, will fit self to the full X and y, end state will be equivalent to running fit(X, y)

Returns:
y_predsktime compatible tabular data container, of Table scitype

predicted class labels

1D iterable, of shape [n_instances], or 2D iterable, of shape [n_instances, n_dimensions].

0-th indices correspond to instance indices in X, 1-st indices (if applicable) correspond to multioutput vector indices in X.

1D np.ndarray, if y is univariate (one dimension); otherwise, same type as y passed in fit
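The fold-union behaviour described for the cv argument can be sketched without sktime (majority_fit_predict and kfold_indices are illustrative stand-ins for a real classifier and cross-validator):

```python
# Out-of-sample predictions are the union over test folds: each fold's
# classifier is fit on the remaining data and predicts its own test fold.

def majority_fit_predict(y_train, n_test):
    """Trivial 'classifier': always predict the majority training label."""
    label = max(set(y_train), key=y_train.count)
    return [label] * n_test

def kfold_indices(n, k):
    """Contiguous, non-intersecting test folds covering range(n)."""
    fold = n // k
    return [list(range(i * fold, (i + 1) * fold)) for i in range(k)]

y = [0, 0, 0, 1, 0, 0]
y_pred = [None] * len(y)
for test_idx in kfold_indices(len(y), k=3):
    train_y = [y[i] for i in range(len(y)) if i not in test_idx]
    preds = majority_fit_predict(train_y, len(test_idx))
    for i, p in zip(test_idx, preds):
        y_pred[i] = p  # each test fold fills disjoint positions

print(y_pred)  # every position predicted exactly once, out-of-sample
```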

fit_predict_proba(X, y, cv=None, change_state=True)[source]#

Fit and predict labels probabilities for sequences in X.

Convenience method to produce in-sample predictions and cross-validated out-of-sample predictions.

Writes to self, if change_state=True:

Sets self.is_fitted to True. Sets fitted model attributes ending in “_”.

Does not update state if change_state=False.

Parameters:
Xsktime compatible time series panel data container of Panel scitype

time series to fit to and predict labels for.

Can be in any mtype of Panel scitype, for instance:

  • pd-multiindex: pd.DataFrame with columns = variables, index = pd.MultiIndex with first level = instance indices, second level = time indices

  • numpy3D: 3D np.array (any number of variables, equal length series) of shape [n_instances, n_dimensions, series_length]

  • or of any other supported Panel mtype

for list of mtypes, see datatypes.SCITYPE_REGISTER

for specifications, see examples/AA_datatypes_and_datasets.ipynb

Not all estimators support panels with multivariate or unequal length series, see the tag reference for details.

ysktime compatible tabular data container, Table scitype

1D iterable, of shape [n_instances], or 2D iterable, of shape [n_instances, n_dimensions]. Class labels for fitting. 0-th indices correspond to instance indices in X; 1-st indices (if applicable) correspond to multioutput vector indices in X. Supported sktime types: np.ndarray (1D, 2D), pd.Series, pd.DataFrame.

cvNone, int, or sklearn cross-validation object, optional, default=None
  • None : predictions are in-sample, equivalent to fit(X, y).predict(X)

  • cv : predictions are equivalent to fit(X_train, y_train).predict(X_test), where multiple X_train, y_train, X_test are obtained from cv folds. returned y is union over all test fold predictions, cv test folds must be non-intersecting

  • int : equivalent to cv=KFold(cv, shuffle=True, random_state=x), i.e., k-fold cross-validation predictions out-of-sample, and where random_state x is taken from self if exists, otherwise x=None

change_statebool, optional (default=True)
  • if False, will not change the state of the classifier, i.e., fit/predict sequence is run with a copy, self does not change

  • if True, will fit self to the full X and y, end state will be equivalent to running fit(X, y)

Returns:
y_pred2D np.array of float, of shape [n_instances, n_classes]

Predicted class label probabilities. 0-th indices correspond to instance indices in X; 1-st indices correspond to class index, in the same order as in self.classes_. Entries are predictive class probabilities, summing to 1.

classmethod get_class_tag(tag_name, tag_value_default=None)[source]#

Get class tag value from class, with tag level inheritance from parents.

Every scikit-base compatible object has a dictionary of tags. Tags may be used to store metadata about the object, or to control behaviour of the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object.

The get_class_tag method is a class method, and retrieves the value of a tag taking into account only class-level tag values and overrides.

It returns the value of the tag with name tag_name from the object, taking into account tag overrides, in the following order of descending priority:

  1. Tags set in the _tags attribute of the class.

  2. Tags set in the _tags attribute of parent classes, in order of inheritance.

Does not take into account dynamic tag overrides on instances, set via set_tags or clone_tags, that are defined on instances.

To retrieve tag values with potential instance overrides, use the get_tag method instead.

Parameters:
tag_namestr

Name of tag value.

tag_value_defaultany type

Default/fallback value if tag is not found.

Returns:
tag_value

Value of the tag_name tag in self. If not found, returns tag_value_default.

classmethod get_class_tags()[source]#

Get class tags from class, with tag level inheritance from parent classes.

Every scikit-base compatible object has a dictionary of tags. Tags may be used to store metadata about the object, or to control behaviour of the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object.

The get_class_tags method is a class method, and retrieves the value of a tag taking into account only class-level tag values and overrides.

It returns a dictionary with keys being keys of any attribute of _tags set in the class or any of its parent classes.

Values are the corresponding tag values, with overrides in the following order of descending priority:

  1. Tags set in the _tags attribute of the class.

  2. Tags set in the _tags attribute of parent classes, in order of inheritance.

Instances can override these tags depending on hyper-parameters.

To retrieve tags with potential instance overrides, use the get_tags method instead.

Does not take into account dynamic tag overrides on instances, set via set_tags or clone_tags, that are defined on instances.

For including overrides from dynamic tags, use get_tags.

Returns:
collected_tagsdict

Dictionary of tag name : tag value pairs. Collected from _tags class attribute via nested inheritance. NOT overridden by dynamic tags set by set_tags or clone_tags.

get_config()[source]#

Get config flags for self.

Configs are key-value pairs of self, typically used as transient flags for controlling behaviour.

get_config returns dynamic configs, which override the default configs.

Default configs are set in the class attribute _config of the class or its parent classes, and are overridden by dynamic configs set via set_config.

Configs are retained under clone or reset calls.

Returns:
config_dictdict

Dictionary of config name : config value pairs. Collected from _config class attribute via nested inheritance and then any overrides and new configs from the _config_dynamic object attribute.

get_fitted_params(deep=True)[source]#

Get fitted parameters.

State required:

Requires state to be “fitted”.

Parameters:
deepbool, default=True

Whether to return fitted parameters of components.

  • If True, will return a dict of parameter name : value for this object, including fitted parameters of fittable components (= BaseEstimator-valued parameters).

  • If False, will return a dict of parameter name : value for this object, but not include fitted parameters of components.

Returns:
fitted_paramsdict with str-valued keys

Dictionary of fitted parameters, paramname : paramvalue. Key-value pairs include:

  • always: all fitted parameters of this object, as via get_param_names; values are the fitted parameter values for that key, of this object

  • if deep=True, also contains key/value pairs of component parameters; parameters of components are indexed as [componentname]__[paramname]; all parameters of componentname appear as paramname with its value

  • if deep=True, also contains arbitrary levels of component recursion, e.g., [componentname]__[componentcomponentname]__[paramname], etc.

get_metadata_routing()[source]#

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:
routingMetadataRequest

A MetadataRequest encapsulating routing information.

classmethod get_param_defaults()[source]#

Get object’s parameter defaults.

Returns:
default_dict: dict[str, Any]

Keys are all parameters of cls that have a default defined in __init__. Values are the defaults, as defined in __init__.

classmethod get_param_names(sort=True)[source]#

Get object’s parameter names.

Parameters:
sortbool, default=True

Whether to return the parameter names sorted in alphabetical order (True), or in the order they appear in the class __init__ (False).

Returns:
param_names: list[str]

List of parameter names of cls. If sort=False, in same order as they appear in the class __init__. If sort=True, alphabetically ordered.
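The parameter-name recovery described above can be sketched with inspect (illustrative only; the real implementation lives in the scikit-base layer):

```python
import inspect

class Toy:
    def __init__(self, n_estimators=100, max_depth=None, bootstrap=False):
        pass

def get_param_names(cls, sort=True):
    """Collect __init__ argument names, optionally alphabetically sorted."""
    names = [
        p.name
        for p in inspect.signature(cls.__init__).parameters.values()
        if p.name != "self"
    ]
    return sorted(names) if sort else names

print(get_param_names(Toy))              # alphabetical order
print(get_param_names(Toy, sort=False))  # __init__ order
```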

get_params(deep=True)[source]#

Get parameters for this estimator.

Parameters:
deepbool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
paramsdict

Parameter names mapped to their values.

get_tag(tag_name, tag_value_default=None, raise_error=True)[source]#

Get tag value from instance, with tag level inheritance and overrides.

Every scikit-base compatible object has a dictionary of tags. Tags may be used to store metadata about the object, or to control behaviour of the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object.

The get_tag method retrieves the value of a single tag with name tag_name from the instance, taking into account tag overrides, in the following order of descending priority:

  1. Tags set via set_tags or clone_tags on the instance, at construction of the instance.

  2. Tags set in the _tags attribute of the class.

  3. Tags set in the _tags attribute of parent classes, in order of inheritance.

Parameters:
tag_namestr

Name of tag to be retrieved

tag_value_defaultany type, optional; default=None

Default/fallback value if tag is not found

raise_errorbool

whether a ValueError is raised when the tag is not found

Returns:
tag_valueAny

Value of the tag_name tag in self. If not found, raises an error if raise_error is True, otherwise it returns tag_value_default.

Raises:
ValueError, if raise_error is True.

The ValueError is then raised if tag_name is not in self.get_tags().keys().
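The lookup priority can be sketched with toy classes (Parent, Child, and this get_tag function are illustrative, not the scikit-base implementation):

```python
# Instance-level dynamic tags override the class _tags, which override
# parent-class _tags, walking the MRO.

class Parent:
    _tags = {"a": "parent", "b": "parent"}

class Child(Parent):
    _tags = {"a": "child"}

def get_tag(obj, name, default=None):
    if name in getattr(obj, "_tags_dynamic", {}):
        return obj._tags_dynamic[name]     # 1. dynamic instance override
    for klass in type(obj).__mro__:
        if name in getattr(klass, "_tags", {}):
            return klass._tags[name]       # 2./3. class, then parents
    return default

c = Child()
print(get_tag(c, "a"))  # child (class override)
print(get_tag(c, "b"))  # parent (inherited)
c._tags_dynamic = {"a": "dynamic"}
print(get_tag(c, "a"))  # dynamic (instance override)
```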

get_tags()[source]#

Get tags from instance, with tag level inheritance and overrides.

Every scikit-base compatible object has a dictionary of tags. Tags may be used to store metadata about the object, or to control behaviour of the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object.

The get_tags method returns a dictionary of tags, with keys being keys of any attribute of _tags set in the class or any of its parent classes, or tags set via set_tags or clone_tags.

Values are the corresponding tag values, with overrides in the following order of descending priority:

  1. Tags set via set_tags or clone_tags on the instance, at construction of the instance.

  2. Tags set in the _tags attribute of the class.

  3. Tags set in the _tags attribute of parent classes, in order of inheritance.

Returns:
collected_tagsdict

Dictionary of tag name : tag value pairs. Collected from _tags class attribute via nested inheritance and then any overrides and new tags from _tags_dynamic object attribute.

is_composite()[source]#

Check if the object is composed of other BaseObjects.

A composite object is an object which contains objects, as parameters. Called on an instance, since this may differ by instance.

Returns:
composite: bool

Whether an object has any parameters whose values are BaseObject descendant instances.

property is_fitted[source]#

Whether fit has been called.

Inspects the object’s _is_fitted attribute, which should initialize to False during object construction and be set to True in calls to the object’s fit method.

Returns:
bool

Whether the estimator has been fit.

classmethod load_from_path(serial)[source]#

Load object from file location.

Parameters:
serialresult of ZipFile(path).open(“object”)
Returns:
deserialized self resulting in output at path, of cls.save(path)
classmethod load_from_serial(serial)[source]#

Load object from serialized memory container.

Parameters:
serial1st element of output of cls.save(None)
Returns:
deserialized self resulting in output serial, of cls.save(None)
reset()[source]#

Reset the object to a clean post-init state.

Results in setting self to the state it had directly after the constructor call, with the same hyper-parameters. Config values set by set_config are also retained.

A reset call deletes any object attributes, except:

  • hyper-parameters = arguments of __init__ written to self, e.g., self.paramname where paramname is an argument of __init__

  • object attributes containing double-underscores, i.e., the string “__”. For instance, an attribute named “__myattr” is retained.

  • config attributes, configs are retained without change. That is, results of get_config before and after reset are equal.

Class and object methods, and class attributes are also unaffected.

Equivalent to clone, with the exception that reset mutates self instead of returning a new object.

After a self.reset() call, self is equal in value and state to the object obtained after a constructor call type(self)(**self.get_params(deep=False)).

Returns:
self

Instance of class reset to a clean post-init state but retaining the current hyper-parameter values.
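The reset contract above can be illustrated with a toy class. Attribute names are made up, and config retention is omitted for brevity:

```python
# Toy sketch of reset: re-run __init__ with the current hyper-parameters,
# dropping fitted attributes; the real method additionally retains configs.
class ToyEstimator:
    def __init__(self, alpha=0.5):
        self.alpha = alpha          # hyper-parameter: survives reset

    def fit(self):
        self.coef_ = [1, 2, 3]      # fitted attribute: removed by reset
        return self

    def reset(self):
        params = {"alpha": self.alpha}  # capture hyper-parameters first
        self.__dict__.clear()           # drop all instance attributes
        self.__init__(**params)         # restore clean post-init state
        return self

est = ToyEstimator(alpha=0.9).fit()
est.reset()
assert est.alpha == 0.9
assert not hasattr(est, "coef_")
```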

save(path=None, serialization_format='pickle')[source]#

Save serialized self to bytes-like object or to (.zip) file.

Behaviour: if path is None, returns an in-memory serialized self; if path is a file location, stores self at that location as a zip file.

Saved files are zip files with the following contents: _metadata contains the class of self, i.e., type(self); _obj contains the serialized self. This class uses the default serialization (pickle).

Parameters:
path : None or file location (str or Path)

if None, self is saved to an in-memory object; if file location, self is saved to that file location. For example:

  • path=”estimator” then a zip file estimator.zip will be made at cwd.

  • path=”/home/stored/estimator” then a zip file estimator.zip will be stored in /home/stored/.

serialization_format: str, default = “pickle”

Module to use for serialization. The available options are “pickle” and “cloudpickle”. Note that non-default formats might require installation of other soft dependencies.

Returns:
if path is None - in-memory serialized self
if path is file location - ZipFile with reference to the file
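The zip layout described above (a _metadata entry plus a pickled _obj entry) can be reproduced with the standard library. This is an illustrative sketch; the exact on-disk format of the real save method may differ:

```python
import io
import pickle
import zipfile

# Write the two entries named in the docs into an in-memory zip, then read
# the serialized object back, mirroring what load_from_path consumes.
obj = {"example": "payload"}           # stand-in for a fitted estimator
buffer = io.BytesIO()                  # in-memory stand-in for a .zip file

with zipfile.ZipFile(buffer, "w") as zf:
    zf.writestr("_metadata", pickle.dumps(type(obj)))  # class of self
    zf.writestr("_obj", pickle.dumps(obj))             # serialized self

with zipfile.ZipFile(buffer) as zf:
    restored = pickle.loads(zf.read("_obj"))

assert restored == obj
```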
score(X, y) → float[source]#

Scores predicted labels against ground truth labels on X.

Parameters:
X : sktime compatible time series panel data container of Panel scitype

time series to score predicted labels for.

Can be in any mtype of Panel scitype, for instance:

  • pd-multiindex: pd.DataFrame with columns = variables, index = pd.MultiIndex with first level = instance indices, second level = time indices

  • numpy3D: 3D np.array (any number of dimensions, equal length series) of shape [n_instances, n_dimensions, series_length]

  • or of any other supported Panel mtype

for list of mtypes, see datatypes.SCITYPE_REGISTER

for specifications, see examples/AA_datatypes_and_datasets.ipynb

Not all estimators support panels with multivariate or unequal length series, see the tag reference for details.

y : sktime compatible tabular data container, Table scitype

1D iterable of shape [n_instances], or 2D iterable of shape [n_instances, n_dimensions]; class labels for fitting. 0-th indices correspond to instance indices in X; 1-st indices (if applicable) correspond to multioutput vector indices in X. Supported sktime types: np.ndarray (1D, 2D), pd.Series, pd.DataFrame.

Returns:
float, accuracy score of predict(X) vs y
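The returned score is plain classification accuracy, i.e. the fraction of labels in y that predict(X) gets right. A minimal stand-alone computation of that quantity:

```python
# Accuracy as computed by score: mean agreement of predictions and labels.
def accuracy(y_true, y_pred):
    assert len(y_true) == len(y_pred)
    return sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

print(accuracy([0, 1, 1, 0], [0, 1, 0, 0]))  # 0.75
```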
set_config(**config_dict)[source]#

Set config flags to given values.

Parameters:
config_dict : dict

Dictionary of config name : config value pairs. Valid configs, values, and their meaning is listed below:

display : str, “diagram” (default), or “text”

how jupyter kernels display instances of self

  • “diagram” = html box diagram representation

  • “text” = string printout

print_changed_only : bool, default=True

whether printing of self lists only self-parameters that differ from defaults (True), or all parameter names and values (False). Does not nest, i.e., only affects self and not component estimators.

warnings : str, “on” (default), or “off”

whether to raise warnings, affects warnings from sktime only

  • “on” = will raise warnings from sktime

  • “off” = will not raise warnings from sktime

backend:parallel : str, optional, default=”None”

backend to use for parallelization when broadcasting/vectorizing, one of

  • “None”: executes loop sequentially, simple list comprehension

  • “loky”, “multiprocessing” and “threading”: uses joblib.Parallel

  • “joblib”: custom and 3rd party joblib backends, e.g., spark

  • “dask”: uses dask, requires dask package in environment

backend:parallel:params : dict, optional, default={} (no parameters passed)

additional parameters passed to the parallelization backend as config. Valid keys depend on the value of backend:parallel:

  • “None”: no additional parameters, backend:parallel:params is ignored

  • “loky”, “multiprocessing” and “threading”: default joblib backends. Any valid keys for joblib.Parallel can be passed here, e.g., n_jobs, with the exception of backend, which is directly controlled by backend:parallel. If n_jobs is not passed, it will default to -1; other parameters will default to joblib defaults.

  • “joblib”: custom and 3rd party joblib backends, e.g., spark. Any valid keys for joblib.Parallel can be passed here, e.g., n_jobs; backend must be passed as a key of backend:parallel:params in this case. If n_jobs is not passed, it will default to -1; other parameters will default to joblib defaults.

  • “dask”: any valid keys for dask.compute can be passed, e.g., scheduler

Returns:
self : reference to self.

Notes

Changes object state, copies configs in config_dict to self._config_dynamic.
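The note above can be sketched with a toy config mechanism: class-level defaults, plus an instance-level _config_dynamic dict that set_config copies into. Names are illustrative, not the skbase implementation:

```python
# Toy sketch: set_config copies the given pairs into an instance-level
# override dict; effective config = class defaults + instance overrides.
class ToyObject:
    _config = {"display": "diagram", "warnings": "on"}  # class defaults

    def __init__(self):
        self._config_dynamic = {}

    def set_config(self, **config_dict):
        self._config_dynamic.update(config_dict)
        return self

    def get_config(self):
        return {**self._config, **self._config_dynamic}

obj = ToyObject().set_config(warnings="off")
assert obj.get_config() == {"display": "diagram", "warnings": "off"}
```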

set_fit_request(*, sample_weight: bool | None | str = '$UNCHANGED$') → ComposableTimeSeriesForestClassifier[source]#

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sample_weight parameter in fit.

Returns:
self : object

The updated object.

set_params(**params)[source]#

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.
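The <component>__<parameter> routing can be sketched as splitting each key at the first double underscore and forwarding the remainder to the named component. This is a simplified illustration, not the real set_params:

```python
# Toy sketch: keys containing "__" are routed to the nested component,
# plain keys are set on self directly.
class Toy:
    def __init__(self, alpha=1.0, component=None):
        self.alpha = alpha
        self.component = component

    def set_params(self, **params):
        for key, value in params.items():
            if "__" in key:
                comp_name, _, sub_key = key.partition("__")
                getattr(self, comp_name).set_params(**{sub_key: value})
            else:
                setattr(self, key, value)
        return self

outer = Toy(component=Toy(alpha=0.1))
outer.set_params(alpha=2.0, component__alpha=0.5)
assert outer.alpha == 2.0
assert outer.component.alpha == 0.5
```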

set_random_state(random_state=None, deep=True, self_policy='copy')[source]#

Set random_state pseudo-random seed parameters for self.

Finds random_state named parameters via self.get_params, and sets them to integers derived from random_state via set_params. These integers are sampled from chain hashing via sample_dependent_seed, and guarantee pseudo-random independence of seeded random generators.

Applies to random_state parameters in self, depending on self_policy, and remaining component objects if and only if deep=True.

Note: calls set_params even if self does not have a random_state, or none of the components have a random_state parameter. Therefore, set_random_state will reset any scikit-base object, even those without a random_state parameter.

Parameters:
random_state : int, RandomState instance or None, default=None

Pseudo-random number generator to control the generation of the random integers. Pass int for reproducible output across multiple function calls.

deep : bool, default=True

Whether to set the random state in skbase object valued parameters, i.e., component estimators.

  • If False, will set only self’s random_state parameter, if it exists.

  • If True, will set random_state parameters in component objects as well.

self_policy : str, one of {“copy”, “keep”, “new”}, default=”copy”

  • “copy” : self.random_state is set to input random_state

  • “keep” : self.random_state is kept as is

  • “new” : self.random_state is set to a new random state, derived from input random_state, and in general different from it

Returns:
self : reference to self
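The chain-hashing idea behind deriving independent component seeds can be sketched as follows. This is a hypothetical illustration in the spirit of sample_dependent_seed; the actual skbase function differs in detail:

```python
import hashlib

# Hypothetical sketch: derive n reproducible, pairwise-independent child
# seeds from one input seed by repeatedly hashing the previous digest.
def derive_seeds(random_state, n):
    seeds, current = [], str(random_state)
    for _ in range(n):
        digest = hashlib.sha256(current.encode()).hexdigest()
        seeds.append(int(digest, 16) % (2**32))  # valid integer seed range
        current = digest  # chain: next seed depends on the previous hash
    return seeds

a = derive_seeds(42, 3)
b = derive_seeds(42, 3)
assert a == b  # reproducible for the same input seed
```

Each component estimator then receives its own integer seed via set_params, so seeded generators in different components do not produce correlated streams.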
set_tags(**tag_dict)[source]#

Set instance level tag overrides to given values.

Every scikit-base compatible object has a dictionary of tags. Tags may be used to store metadata about the object, or to control behaviour of the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object.

set_tags sets dynamic tag overrides to the values as specified in tag_dict, with keys being the tag name, and dict values being the value to set the tag to.

The set_tags method should be called only in the __init__ method of an object, during construction, or directly after construction via __init__.

Current tag values can be inspected by get_tags or get_tag.

Parameters:
**tag_dict : dict

Dictionary of tag name: tag value pairs.

Returns:
self

Reference to self.