TEASER#

class TEASER(estimator=None, one_class_classifier=None, one_class_param_grid=None, classification_points=None, n_jobs=1, random_state=None)[source]#

Two-tier Early and Accurate Series Classifier (TEASER).

An early classifier which uses one-class SVMs trained on prediction probabilities to determine whether an early prediction is safe or not.

Overview:

Build n classifiers, where n is the number of classification_points. For each classifier, train a one-class SVM used to determine prediction safety at that series length. Tune the number of consecutive safe SVM predictions required to consider the prediction safe.

While a prediction is still deemed unsafe:

Make a prediction using the series length at classification point i. Decide whether the prediction is safe or not using decide_prediction_safety. (A sketch of this loop follows the Examples section below.)

Parameters:
estimator: sktime classifier, default=None

An sktime estimator to be built at each of the classification_points time stamps. Defaults to a WEASEL classifier.

one_class_classifier: one-class sklearn classifier, default=None

An sklearn one-class classifier used to determine whether an early decision is safe. Defaults to a tuned one-class SVM classifier.

one_class_param_grid: dict or list of dict, default=None

The hyper-parameters for the one-class classifier to learn using grid-search. Dictionary with parameter names (str) as keys and lists of parameter settings to try as values, or a list of such dictionaries. (See the sketch after this parameter list.)

classification_points: list or None, default=None

List of integer time series time stamps to build classifiers and allow predictions at. Early predictions must have a series length that matches a value in the _classification_points list. Duplicate values will be removed, and the full series length will be appended if not present. If None, 20 thresholds linearly spaced from 0 to the series length will be used.

n_jobs: int, default=1

The number of jobs to run in parallel for both fit and predict. -1 means using all processors.

random_state: int or None, default=None

Seed for random number generation.
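
For example, a hedged sketch of supplying a custom one-class classifier and search grid (the OneClassSVM settings below are illustrative, not library defaults):

>>> from sklearn.svm import OneClassSVM
>>> from sktime.classification.early_classification import TEASER
>>> clf = TEASER(
...     one_class_classifier=OneClassSVM(nu=0.05),
...     one_class_param_grid={"gamma": [1, 0.1, 0.01]},
... )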

Attributes:
n_classes_: int

The number of classes.

n_instances_: int

The number of train cases.

n_dims_: int

The number of dimensions per case.

series_length_: int

The full length of each series.

classes_: list

The unique class labels.

state_info: 2D np.ndarray (4 columns)

Information stored about input instances after the decision-making process in update/predict methods. Used in update methods to make decisions based on the results of previous method calls. Records in order: the time stamp index, the number of consecutive decisions made, the predicted class and the series length.

References

[1]

Schäfer, Patrick, and Ulf Leser. “TEASER: early and accurate time series classification.” Data Mining and Knowledge Discovery 34, no. 5 (2020).

Examples

>>> from sktime.classification.early_classification import TEASER
>>> from sktime.classification.interval_based import TimeSeriesForestClassifier
>>> from sktime.datasets import load_unit_test
>>> X_train, y_train = load_unit_test(split="train", return_X_y=True)
>>> X_test, y_test = load_unit_test(split="test", return_X_y=True) 
>>> clf = TEASER(
...     classification_points=[6, 16, 24],
...     estimator=TimeSeriesForestClassifier(n_estimators=5),
... ) 
>>> clf.fit(X_train, y_train) 
TEASER(...)
>>> y_pred, decisions = clf.predict(X_test) 
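
The example above predicts on the full series length. A hedged continuation of the early-decision loop from the overview (assuming X_test has been converted to a 3D numpy array of shape [n_instances, 1, 24], e.g., by reloading with return_type="numpy3D" if the installed sktime version supports it; merging the updated results back into the full arrays is left to the caller):

>>> y_early, dec_early = clf.predict(X_test[:, :, :6])      # earliest classification point
>>> X_left = clf.filter_X(X_test, dec_early)                # drop cases already decided
>>> y_upd, dec_upd = clf.update_predict(X_left[:, :, :16])  # revisit undecided cases at length 16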

Methods

check_is_fitted()

Check if the estimator has been fitted.

clone()

Obtain a clone of the object with same hyper-parameters.

clone_tags(estimator[, tag_names])

Clone tags from another estimator as dynamic override.

create_test_instance([parameter_set])

Construct Estimator instance if possible.

create_test_instances_and_names([parameter_set])

Create list of all test instances and a list of names for them.

filter_X(X, decisions)

Remove True cases from X given a boolean array of decisions.

filter_X_y(X, y, decisions)

Remove True cases from X and y given a boolean array of decisions.

fit(X, y)

Fit time series classifier to training data.

get_class_tag(tag_name[, tag_value_default])

Get a class tag's value.

get_class_tags()

Get class tags from the class and all its parent classes.

get_config()

Get config flags for self.

get_fitted_params([deep])

Get fitted parameters.

get_param_defaults()

Get object's parameter defaults.

get_param_names([sort])

Get object's parameter names.

get_params([deep])

Get a dict of parameters values for this object.

get_state_info()

Return the state information generated from the last predict/update call.

get_tag(tag_name[, tag_value_default, ...])

Get tag value from estimator class and dynamic tag overrides.

get_tags()

Get tags from estimator class and dynamic tag overrides.

get_test_params([parameter_set])

Return testing parameter settings for the estimator.

is_composite()

Check if the object is composed of other BaseObjects.

load_from_path(serial)

Load object from file location.

load_from_serial(serial)

Load object from serialized memory container.

predict(X)

Predicts labels for sequences in X.

predict_proba(X)

Predicts label probabilities for sequences in X.

reset()

Reset the object to a clean post-init state.

reset_state_info()

Reset the state information used in update methods.

save([path, serialization_format])

Save serialized self to bytes-like object or to (.zip) file.

score(X, y)

Scores predicted labels against ground truth labels on X.

set_config(**config_dict)

Set config flags to given values.

set_params(**params)

Set the parameters of this object.

set_random_state([random_state, deep, ...])

Set random_state pseudo-random seed parameters for self.

set_tags(**tag_dict)

Set dynamic tags to given values.

split_indices(indices, decisions)

Split a list of indices given a boolean array of decisions.

split_indices_and_filter(X, indices, decisions)

Remove True cases and split a list of indices given an array of decisions.

update_predict(X)

Update label prediction for sequences in X at a larger series length.

update_predict_proba(X)

Update label probabilities for sequences in X at a larger series length.

classmethod get_test_params(parameter_set='default')[source]#

Return testing parameter settings for the estimator.

Parameters:
parameter_set: str, default=”default”

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return "default" set.

Returns:
params: dict or list of dict, default={}

Parameters to create testing instances of the class.

check_is_fitted()[source]#

Check if the estimator has been fitted.

Raises:
NotFittedError

If the estimator has not been fitted yet.

clone()[source]#

Obtain a clone of the object with same hyper-parameters.

A clone is a different object without shared references, in post-init state. This function is equivalent to returning sklearn.clone of self.

Raises:
RuntimeError if the clone is non-conforming, due to faulty __init__.

Notes

If successful, equal in value to type(self)(**self.get_params(deep=False)).

clone_tags(estimator, tag_names=None)[source]#

Clone tags from another estimator as dynamic override.

Parameters:
estimator: an estimator inheriting from BaseEstimator
tag_names: str or list of str, default=None

Names of tags to clone. If None then all tags in estimator are used as tag_names.

Returns:
Self

Reference to self.

Notes

Changes object state by setting tag values in tag_set from estimator as dynamic tags in self.

classmethod create_test_instance(parameter_set='default')[source]#

Construct Estimator instance if possible.

Parameters:
parameter_set: str, default=”default”

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.

Returns:
instance: instance of the class with default parameters

Notes

get_test_params can return dict or list of dict. This function takes the first or single dict that get_test_params returns, and constructs the object with that.

classmethod create_test_instances_and_names(parameter_set='default')[source]#

Create list of all test instances and a list of names for them.

Parameters:
parameter_set: str, default=”default”

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.

Returns:
objs: list of instances of cls

i-th instance is cls(**cls.get_test_params()[i])

names: list of str, same length as objs

i-th element is the name of the i-th instance of obj in tests. The naming convention is {cls.__name__}-{i} if there is more than one instance, otherwise {cls.__name__}.

static filter_X(X, decisions)[source]#

Remove True cases from X given a boolean array of decisions.

static filter_X_y(X, y, decisions)[source]#

Remove True cases from X and y given a boolean array of decisions.

fit(X, y)[source]#

Fit time series classifier to training data.

Parameters:
X: 3D np.array of shape [n_instances, n_dimensions, series_length] (any number of dimensions, equal length series),
or 2D np.array of shape [n_instances, series_length] (univariate, equal length series),
or pd.DataFrame with each column a dimension, each cell a pd.Series (any number of dimensions, equal or unequal length series),
or of any other supported Panel mtype.

For a list of mtypes, see datatypes.SCITYPE_REGISTER; for specifications, see examples/AA_datatypes_and_datasets.ipynb.

y: 1D np.array of int, of shape [n_instances] - class labels for fitting; indices correspond to instance indices in X.

Returns:
self: reference to self.

Notes

Changes state by creating a fitted model that updates attributes ending in “_” and sets is_fitted flag to True.

classmethod get_class_tag(tag_name, tag_value_default=None)[source]#

Get a class tag’s value.

Does not return information from dynamic tags (set via set_tags or clone_tags) that are defined on instances.

Parameters:
tag_name: str

Name of tag value.

tag_value_default: any

Default/fallback value if tag is not found.

Returns:
tag_value

Value of the tag_name tag in self. If not found, returns tag_value_default.

classmethod get_class_tags()[source]#

Get class tags from the class and all its parent classes.

Retrieves tag: value pairs from _tags class attribute. Does not return information from dynamic tags (set via set_tags or clone_tags) that are defined on instances.

Returns:
collected_tags: dict

Dictionary of class tag name: tag value pairs. Collected from _tags class attribute via nested inheritance.

get_config()[source]#

Get config flags for self.

Returns:
config_dict: dict

Dictionary of config name : config value pairs. Collected from _config class attribute via nested inheritance and then any overrides and new tags from _config_dynamic object attribute.

get_fitted_params(deep=True)[source]#

Get fitted parameters.

State required:

Requires state to be “fitted”.

Parameters:
deep: bool, default=True

Whether to return fitted parameters of components.

  • If True, will return a dict of parameter name : value for this object, including fitted parameters of fittable components (= BaseEstimator-valued parameters).

  • If False, will return a dict of parameter name : value for this object, but not include fitted parameters of components.

Returns:
fitted_params: dict with str-valued keys

Dictionary of fitted parameters, paramname : paramvalue key-value pairs include:

  • always: all fitted parameters of this object, as via get_param_names; values are the fitted parameter values for those keys, of this object

  • if deep=True, also contains key/value pairs of component parameters; parameters of components are indexed as [componentname]__[paramname]; all parameters of componentname appear as paramname with their values

  • if deep=True, also contains arbitrary levels of component recursion, e.g., [componentname]__[componentcomponentname]__[paramname], etc.

classmethod get_param_defaults()[source]#

Get object’s parameter defaults.

Returns:
default_dict: dict[str, Any]

Keys are all parameters of cls that have a default defined in __init__; values are the defaults, as defined in __init__.

classmethod get_param_names(sort=True)[source]#

Get object’s parameter names.

Parameters:
sort: bool, default=True

Whether to return the parameter names sorted in alphabetical order (True), or in the order they appear in the class __init__ (False).

Returns:
param_names: list[str]

List of parameter names of cls. If sort=False, in same order as they appear in the class __init__. If sort=True, alphabetically ordered.

get_params(deep=True)[source]#

Get a dict of parameters values for this object.

Parameters:
deep: bool, default=True

Whether to return parameters of components.

  • If True, will return a dict of parameter name : value for this object, including parameters of components (= BaseObject-valued parameters).

  • If False, will return a dict of parameter name : value for this object, but not include parameters of components.

Returns:
params: dict with str-valued keys

Dictionary of parameters, paramname : paramvalue key-value pairs include:

  • always: all parameters of this object, as via get_param_names; values are the parameter values for those keys, of this object, and are always identical to the values passed at construction

  • if deep=True, also contains key/value pairs of component parameters; parameters of components are indexed as [componentname]__[paramname]; all parameters of componentname appear as paramname with their values

  • if deep=True, also contains arbitrary levels of component recursion, e.g., [componentname]__[componentcomponentname]__[paramname], etc.

get_state_info()[source]#

Return the state information generated from the last predict/update call.

Returns:
An array containing the state info for each decision in X from update and
predict methods. Contains classifier dependent information for future decisions
on the data and information on when a case's decision has been made. Each row
contains information for a case from the latest decision on its safety made in
update/predict. Successive updates are likely to remove rows from the
state_info, as it will only store as many rows as there are input instances to
update/predict.
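
For instance, a minimal sketch (assuming clf has already made a predict or update_predict call); the column order follows the state_info attribute documented above:

>>> info = clf.get_state_info()
>>> # columns: time stamp index, consecutive decisions, predicted class, series length
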
get_tag(tag_name, tag_value_default=None, raise_error=True)[source]#

Get tag value from estimator class and dynamic tag overrides.

Parameters:
tag_name: str

Name of tag to be retrieved.

tag_value_default: any type, optional, default=None

Default/fallback value if tag is not found.

raise_error: bool

Whether a ValueError is raised when the tag is not found.

Returns:
tag_value: Any

Value of the tag_name tag in self. If not found, returns an error if raise_error is True, otherwise it returns tag_value_default.

Raises:
ValueError, if raise_error is True, i.e., if tag_name is not in self.get_tags().keys().

get_tags()[source]#

Get tags from estimator class and dynamic tag overrides.

Returns:
collected_tags: dict

Dictionary of tag name : tag value pairs. Collected from _tags class attribute via nested inheritance and then any overrides and new tags from _tags_dynamic object attribute.

is_composite()[source]#

Check if the object is composed of other BaseObjects.

A composite object is an object which contains other objects as parameters. Called on an instance, since this may differ by instance.

Returns:
composite: bool

Whether an object has any parameters whose values are BaseObjects.

property is_fitted[source]#

Whether fit has been called.

classmethod load_from_path(serial)[source]#

Load object from file location.

Parameters:
serial: result of ZipFile(path).open(“object”)

Returns:
deserialized self, resulting in output at path, of cls.save(path)

classmethod load_from_serial(serial)[source]#

Load object from serialized memory container.

Parameters:
serial: 1st element of output of cls.save(None)

Returns:
deserialized self, resulting in output serial, of cls.save(None)

predict(X) → tuple[ndarray, ndarray][source]#

Predicts labels for sequences in X.

Early classifiers can predict at series lengths shorter than the train data series length.

Predict will return -1 for cases which it cannot make a decision on yet. The output is only guaranteed to return a valid class label for all cases when using the full series length.

Parameters:
X: 3D np.array of shape [n_instances, n_dimensions, series_length] (any number of dimensions, equal length series),
or 2D np.array of shape [n_instances, series_length] (univariate, equal length series),
or pd.DataFrame with each column a dimension, each cell a pd.Series (any number of dimensions, equal or unequal length series),
or of any other supported Panel mtype.

For a list of mtypes, see datatypes.SCITYPE_REGISTER; for specifications, see examples/AA_datatypes_and_datasets.ipynb.

Returns:
y: 1D np.array of int, of shape [n_instances] - predicted class labels; indices correspond to instance indices in X.

decisions: 1D bool array

An array of booleans, containing the decision of whether a prediction is safe to use or not. The i-th entry is the classifier's decision on whether the i-th instance is safe to use.
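
A small usage sketch (assuming a fitted clf and input X at a valid classification point length); undecided cases can be masked out using decisions:

>>> import numpy as np
>>> y_pred, decisions = clf.predict(X)
>>> safe_idx = np.flatnonzero(decisions)   # cases with a usable prediction
>>> y_safe = y_pred[safe_idx]              # undecided entries of y_pred are -1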

predict_proba(X) → tuple[ndarray, ndarray][source]#

Predicts label probabilities for sequences in X.

Early classifiers can predict at series lengths shorter than the train data series length.

Probability predictions will return [-1]*n_classes_ for cases which it cannot make a decision on yet. The output is only guaranteed to return a valid class label for all cases when using the full series length.

Parameters:
X: 3D np.array of shape [n_instances, n_dimensions, series_length] (any number of dimensions, equal length series),
or 2D np.array of shape [n_instances, series_length] (univariate, equal length series),
or pd.DataFrame with each column a dimension, each cell a pd.Series (any number of dimensions, equal or unequal length series),
or of any other supported Panel mtype.

For a list of mtypes, see datatypes.SCITYPE_REGISTER; for specifications, see examples/AA_datatypes_and_datasets.ipynb.

Returns:
y: 2D array of shape [n_instances, n_classes] - predicted class probabilities

1st dimension indices correspond to instance indices in X; 2nd dimension indices correspond to possible labels (integers). The (i, j)-th entry is the predictive probability that the i-th instance is of class j.

decisions: 1D bool array

An array of booleans, containing the decision of whether a prediction is safe to use or not. The i-th entry is the classifier's decision on whether the i-th instance is safe to use.
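
A minimal sketch (assuming a fitted clf and input X at a valid classification point length); rows for undecided cases are filled with -1:

>>> probas, decisions = clf.predict_proba(X)
>>> safe_probas = probas[decisions]        # keep only rows with a safe decision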

reset()[source]#

Reset the object to a clean post-init state.

Using reset, runs __init__ with current values of hyper-parameters (result of get_params). This removes any object attributes, except:

  • hyper-parameters = arguments of __init__

  • object attributes containing double-underscores, i.e., the string “__”

Class and object methods, and class attributes are also unaffected.

Returns:
self

Instance of class reset to a clean post-init state but retaining the current hyper-parameter values.

Notes

Equivalent to sklearn.clone but overwrites self. After self.reset() call, self is equal in value to type(self)(**self.get_params(deep=False))

reset_state_info()[source]#

Reset the state information used in update methods.

save(path=None, serialization_format='pickle')[source]#

Save serialized self to bytes-like object or to (.zip) file.

Behaviour: if path is None, returns an in-memory serialized self; if path is a file location, stores self at that location as a zip file.

Saved files are zip files with the following contents: _metadata - contains the class of self, i.e., type(self); _obj - serialized self. This class uses the default serialization (pickle).

Parameters:
path: None or file location (str or Path)

if None, self is saved to an in-memory object; if file location, self is saved to that file location. For example:

path=”estimator” makes a zip file estimator.zip at the cwd; path=”/home/stored/estimator” stores a zip file estimator.zip in /home/stored/.

serialization_format: str, default = “pickle”

Module to use for serialization. The available options are “pickle” and “cloudpickle”. Note that non-default formats might require installation of other soft dependencies.

Returns:
if path is None - in-memory serialized self
if path is file location - ZipFile with reference to the file

score(X, y) → tuple[float, float, float][source]#

Scores predicted labels against ground truth labels on X.

Parameters:
X: 3D np.array of shape [n_instances, n_dimensions, series_length] (any number of dimensions, equal length series),
or 2D np.array of shape [n_instances, series_length] (univariate, equal length series),
or pd.DataFrame with each column a dimension, each cell a pd.Series (any number of dimensions, equal or unequal length series),
or of any other supported Panel mtype.

For a list of mtypes, see datatypes.SCITYPE_REGISTER; for specifications, see examples/AA_datatypes_and_datasets.ipynb.

y: 1D np.ndarray of int, of shape [n_instances] - class labels (ground truth); indices correspond to instance indices in X.

Returns:
Tuple of floats, harmonic mean, accuracy and earliness scores of predict(X) vs y
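
A minimal sketch (assuming a fitted clf and test data X_test, y_test) unpacking the returned tuple:

>>> harmonic_mean, accuracy, earliness = clf.score(X_test, y_test)
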
set_config(**config_dict)[source]#

Set config flags to given values.

Parameters:
config_dict: dict

Dictionary of config name : config value pairs. Valid configs, values, and their meaning is listed below:

display: str, “diagram” (default), or “text”

how jupyter kernels display instances of self

  • “diagram” = html box diagram representation

  • “text” = string printout

print_changed_only: bool, default=True

whether printing of self lists only self-parameters that differ from defaults (True), or all parameter names and values (False). Does not nest, i.e., only affects self and not component estimators.

warnings: str, “on” (default), or “off”

whether to raise warnings, affects warnings from sktime only

  • “on” = will raise warnings from sktime

  • “off” = will not raise warnings from sktime

backend:parallel: str, optional, default=”None”

backend to use for parallelization when broadcasting/vectorizing, one of

  • “None”: executes loop sequentially, simple list comprehension

  • “loky”, “multiprocessing” and “threading”: uses joblib.Parallel

  • “joblib”: custom and 3rd party joblib backends, e.g., spark

  • “dask”: uses dask, requires dask package in environment

backend:parallel:params: dict, optional, default={} (no parameters passed)

additional parameters passed to the parallelization backend as config. Valid keys depend on the value of backend:parallel:

  • “None”: no additional parameters, backend_params is ignored

  • “loky”, “multiprocessing” and “threading”: default joblib backends; any valid keys for joblib.Parallel can be passed here, e.g., n_jobs, with the exception of backend, which is directly controlled by backend. If n_jobs is not passed, it will default to -1; other parameters will default to joblib defaults.

  • “joblib”: custom and 3rd party joblib backends, e.g., spark. Any valid keys for joblib.Parallel can be passed here, e.g., n_jobs; backend must be passed as a key of backend_params in this case. If n_jobs is not passed, it will default to -1; other parameters will default to joblib defaults.

  • “dask”: any valid keys for dask.compute can be passed, e.g., scheduler

Returns:
self: reference to self.

Notes

Changes object state, copies configs in config_dict to self._config_dynamic.

set_params(**params)[source]#

Set the parameters of this object.

The method works on simple estimators as well as on composite objects. Parameter key strings <component>__<parameter> can be used for composites, i.e., objects that contain other objects, to access <parameter> in the component <component>. The string <parameter>, without <component>__, can also be used if this makes the reference unambiguous, e.g., there are no two parameters of components with the name <parameter>.

Parameters:
**params: dict

BaseObject parameters, keys must be <component>__<parameter> strings. __ suffixes can alias full strings, if unique among get_params keys.

Returns:
self: reference to self (after parameters have been set)

set_random_state(random_state=None, deep=True, self_policy='copy')[source]#

Set random_state pseudo-random seed parameters for self.

Finds random_state named parameters via estimator.get_params, and sets them to integers derived from random_state via set_params. These integers are sampled from chain hashing via sample_dependent_seed, and guarantee pseudo-random independence of seeded random generators.

Applies to random_state parameters in estimator depending on self_policy, and remaining component estimators if and only if deep=True.

Note: calls set_params even if self does not have a random_state, or none of the components have a random_state parameter. Therefore, set_random_state will reset any scikit-base estimator, even those without a random_state parameter.

Parameters:
random_state: int, RandomState instance or None, default=None

Pseudo-random number generator to control the generation of the random integers. Pass int for reproducible output across multiple function calls.

deep: bool, default=True

Whether to set the random state in sub-estimators. If False, will set only self’s random_state parameter, if exists. If True, will set random_state parameters in sub-estimators as well.

self_policy: str, one of {“copy”, “keep”, “new”}, default=”copy”

  • “copy” : estimator.random_state is set to input random_state

  • “keep” : estimator.random_state is kept as is

  • “new” : estimator.random_state is set to a new random state, derived from input random_state, and in general different from it

Returns:
self: reference to self

set_tags(**tag_dict)[source]#

Set dynamic tags to given values.

Parameters:
**tag_dict: dict

Dictionary of tag name: tag value pairs.

Returns:
Self

Reference to self.

Notes

Changes object state by setting tag values in tag_dict as dynamic tags in self.

static split_indices(indices, decisions)[source]#

Split a list of indices given a boolean array of decisions.

static split_indices_and_filter(X, indices, decisions)[source]#

Remove True cases and split a list of indices given an array of decisions.

update_predict(X) → tuple[ndarray, ndarray][source]#

Update label prediction for sequences in X at a larger series length.

Uses information stored in the classifier's state from previous predict and update calls at shorter series lengths. Update will only accept cases which have not yet had a decision made; cases which have had a positive decision should be removed from the input, with the row ordering preserved.

If no state information is present, predict will be called instead.

Prediction updates will return -1 for cases which it cannot make a decision on yet. The output is only guaranteed to return a valid class label for all cases when using the full series length.

Parameters:
X: 3D np.array of shape [n_instances, n_dimensions, series_length] (any number of dimensions, equal length series),
or 2D np.array of shape [n_instances, series_length] (univariate, equal length series),
or pd.DataFrame with each column a dimension, each cell a pd.Series (any number of dimensions, equal or unequal length series),
or of any other supported Panel mtype.

For a list of mtypes, see datatypes.SCITYPE_REGISTER; for specifications, see examples/AA_datatypes_and_datasets.ipynb.

Returns:
y: 1D np.array of int, of shape [n_instances] - predicted class labels; indices correspond to instance indices in X.

decisions: 1D bool array

An array of booleans, containing the decision of whether a prediction is safe to use or not. The i-th entry is the classifier's decision on whether the i-th instance is safe to use.
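
For example, a minimal sketch (assuming decisions came from an earlier predict or update_predict call on the same cases, and X_longer holds those cases at a longer valid series length); decided cases are dropped first, e.g. via filter_X:

>>> X_next = clf.filter_X(X_longer, decisions)   # remove cases already decided
>>> y_upd, dec_upd = clf.update_predict(X_next)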

update_predict_proba(X) → tuple[ndarray, ndarray][source]#

Update label probabilities for sequences in X at a larger series length.

Uses information stored in the classifier's state from previous predict and update calls at shorter series lengths. Update will only accept cases which have not yet had a decision made; cases which have had a positive decision should be removed from the input, with the row ordering preserved.

If no state information is present, predict_proba will be called instead.

Probability prediction updates will return [-1]*n_classes_ for cases which it cannot make a decision on yet. The output is only guaranteed to return a valid class label for all cases when using the full series length.

Parameters:
X: 3D np.array of shape [n_instances, n_dimensions, series_length] (any number of dimensions, equal length series),
or 2D np.array of shape [n_instances, series_length] (univariate, equal length series),
or pd.DataFrame with each column a dimension, each cell a pd.Series (any number of dimensions, equal or unequal length series),
or of any other supported Panel mtype.

For a list of mtypes, see datatypes.SCITYPE_REGISTER; for specifications, see examples/AA_datatypes_and_datasets.ipynb.

Returns:
y: 2D array of shape [n_instances, n_classes] - predicted class probabilities

1st dimension indices correspond to instance indices in X; 2nd dimension indices correspond to possible labels (integers). The (i, j)-th entry is the predictive probability that the i-th instance is of class j.

decisions: 1D bool array

An array of booleans, containing the decision of whether a prediction is safe to use or not. The i-th entry is the classifier's decision on whether the i-th instance is safe to use.