GaussianHMM
- class GaussianHMM(n_components: int = 1, covariance_type: str = 'diag', min_covar: float = 0.001, startprob_prior: float = 1.0, transmat_prior: float = 1.0, means_prior: float = 0, means_weight: float = 0, covars_prior: float = 0.01, covars_weight: float = 1, algorithm: str = 'viterbi', random_state: float = None, n_iter: int = 10, tol: float = 0.01, verbose: bool = False, params: str = 'stmc', init_params: str = 'stmc', implementation: str = 'log')
Hidden Markov Model with Gaussian emissions.
- Parameters:
- n_components : int
Number of states.
- covariance_type : {"spherical", "diag", "full", "tied"}, optional
The type of covariance parameters to use:
- "spherical" — each state uses a single variance value that applies to all features.
- "diag" — each state uses a diagonal covariance matrix (default).
- "full" — each state uses a full (i.e. unrestricted) covariance matrix.
- "tied" — all states use the same full covariance matrix (note that this is not the same as for GMMHMM).
- min_covar : float, optional
Floor on the diagonal of the covariance matrix to prevent overfitting. Defaults to 1e-3.
- means_prior, means_weight : array, shape (n_components, ), optional
Mean and precision of the Normal prior distribution for means_.
- covars_prior, covars_weight : array, shape (n_components, ), optional
Parameters of the prior distribution for the covariance matrix covars_. If covariance_type is "spherical" or "diag", the prior is the inverse gamma distribution; otherwise it is the inverse Wishart distribution.
- startprob_prior : array, shape (n_components, ), optional
Parameters of the Dirichlet prior distribution for startprob_.
- transmat_prior : array, shape (n_components, n_components), optional
Parameters of the Dirichlet prior distribution for each row of the transition probabilities transmat_.
- algorithm : {"viterbi", "map"}, optional
Decoder algorithm.
- random_state : RandomState or an int seed, optional
A random number generator instance.
- n_iter : int, optional
Maximum number of iterations to perform.
- tol : float, optional
Convergence threshold. EM will stop if the gain in log-likelihood is below this value.
- verbose : bool, optional
Whether per-iteration convergence reports are printed to sys.stderr. Convergence can also be diagnosed using the monitor_ attribute.
- params, init_params : string, optional
The parameters that get updated during (params) or initialized before (init_params) the training. Can contain any combination of 's' for startprob, 't' for transmat, 'm' for means and 'c' for covars. Defaults to all parameters.
- implementation : string, optional
Determines if the forward-backward algorithm is implemented with logarithms ("log") or with scaling ("scaling"). The default is to use logarithms for backwards compatibility.
- Attributes:
- n_features : int
Dimensionality of the Gaussian emissions.
- monitor_ : ConvergenceMonitor
Monitor object used to check the convergence of EM.
- startprob_ : array, shape (n_components, )
Initial state occupation distribution.
- transmat_ : array, shape (n_components, n_components)
Matrix of transition probabilities between states.
- means_ : array, shape (n_components, n_features)
Mean parameters for each state.
- covars_ : array
Covariance parameters for each state. The shape depends on covariance_type:
- (n_components, ) if "spherical",
- (n_components, n_features) if "diag",
- (n_components, n_features, n_features) if "full",
- (n_features, n_features) if "tied".
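As a quick reference sketch (using hypothetical sizes n_components=3 and n_features=2 for illustration), the expected covars_ shapes per covariance_type are:

```python
# Expected covars_ shapes for each covariance_type, for hypothetical sizes.
n_components, n_features = 3, 2

covar_shapes = {
    "spherical": (n_components,),                       # one variance per state
    "diag": (n_components, n_features),                 # one diagonal per state
    "full": (n_components, n_features, n_features),     # one full matrix per state
    "tied": (n_features, n_features),                   # one matrix shared by all states
}
```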
Examples
>>> from sktime.annotation.hmm_learn import GaussianHMM
>>> from sktime.annotation.datagen import piecewise_normal
>>> data = piecewise_normal(
...     means=[2, 4, 1], lengths=[10, 35, 40], random_state=7
... ).reshape((-1, 1))
>>> model = GaussianHMM(algorithm='viterbi', n_components=2)
>>> model = model.fit(data)
>>> labeled_data = model.predict(data)
Methods
- change_points_to_segments(y_sparse[, start, end]): Convert a series of change point indexes to segments.
- check_is_fitted(): Check if the estimator has been fitted.
- clone(): Obtain a clone of the object with same hyper-parameters.
- clone_tags(estimator[, tag_names]): Clone tags from another estimator as dynamic override.
- create_test_instance([parameter_set]): Construct Estimator instance if possible.
- create_test_instances_and_names([parameter_set]): Create list of all test instances and a list of names for them.
- dense_to_sparse(y_dense): Convert the dense output from an annotator to a sparse format.
- fit(X[, Y]): Fit to training data.
- fit_predict(X[, Y]): Fit to data, then predict it.
- fit_transform(X[, Y]): Fit to data, then transform it.
- get_class_tag(tag_name[, tag_value_default]): Get a class tag's value.
- get_class_tags(): Get class tags from the class and all its parent classes.
- get_config(): Get config flags for self.
- get_fitted_params([deep]): Get fitted parameters.
- get_param_defaults(): Get object's parameter defaults.
- get_param_names([sort]): Get object's parameter names.
- get_params([deep]): Get a dict of parameters values for this object.
- get_tag(tag_name[, tag_value_default, ...]): Get tag value from estimator class and dynamic tag overrides.
- get_tags(): Get tags from estimator class and dynamic tag overrides.
- get_test_params([parameter_set]): Return testing parameter settings for the estimator.
- is_composite(): Check if the object is composed of other BaseObjects.
- load_from_path(serial): Load object from file location.
- load_from_serial(serial): Load object from serialized memory container.
- predict(X): Create annotations on test/deployment data.
- predict_points(X): Predict changepoints/anomalies on test/deployment data.
- predict_scores(X): Return scores for predicted annotations on test/deployment data.
- predict_segments(X): Predict segments on test/deployment data.
- reset(): Reset the object to a clean post-init state.
- sample([n_samples, random_state, currstate]): Interface method which allows users to sample from their HMM.
- save([path, serialization_format]): Save serialized self to bytes-like object or to (.zip) file.
- segments_to_change_points(y_sparse): Convert segments to change points.
- set_config(**config_dict): Set config flags to given values.
- set_params(**params): Set the parameters of this object.
- set_random_state([random_state, deep, ...]): Set random_state pseudo-random seed parameters for self.
- set_tags(**tag_dict): Set dynamic tags to given values.
- sparse_to_dense(y_sparse, index): Convert the sparse output from an annotator to a dense format.
- transform(X): Create annotations on test/deployment data.
- update(X[, Y]): Update model with new data and optional ground truth annotations.
- update_predict(X): Update model with new data and create annotations for it.
- classmethod get_test_params(parameter_set='default')
Return testing parameter settings for the estimator.
- Parameters:
- parameter_set : str, default="default"
Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return the "default" set.
- Returns:
- params : dict or list of dict
- static change_points_to_segments(y_sparse, start=None, end=None)
Convert a series of change point indexes to segments.
- Parameters:
- y_sparse : pd.Series
A series containing the indexes of change points.
- start : optional
Starting point of the first segment.
- end : optional
Ending point of the last segment.
- Returns:
- pd.Series
A series with an interval index indicating the start and end points of the segments. The values of the series are the labels of the segments.
Examples
>>> import pandas as pd
>>> from sktime.annotation.base._base import BaseSeriesAnnotator
>>> change_points = pd.Series([1, 2, 5])
>>> BaseSeriesAnnotator.change_points_to_segments(change_points, 0, 7)
[0, 1)   -1
[1, 2)    1
[2, 5)    2
[5, 7)    3
dtype: int64
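The mapping in the doctest above can be reproduced with plain pandas. This is a sketch of the conversion logic, not the sktime implementation; the labelling (increasing integers, with -1 for the prepended start segment) is assumed to follow the doctest output:

```python
import pandas as pd

change_points = pd.Series([1, 2, 5])
start, end = 0, 7

# Interval breaks: the start point, the change points, then the end point.
breaks = [start] + list(change_points) + [end]
# The segment before the first change point is unlabelled (-1).
labels = [-1] + list(range(1, len(breaks) - 1))

segments = pd.Series(
    labels,
    index=pd.IntervalIndex.from_breaks(breaks, closed="left"),
)
```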
- check_is_fitted()
Check if the estimator has been fitted.
- Raises:
- NotFittedError
If the estimator has not been fitted yet.
- clone()
Obtain a clone of the object with same hyper-parameters.
A clone is a different object without shared references, in post-init state. This function is equivalent to returning sklearn.clone of self.
- Raises:
- RuntimeError
If the clone is non-conforming, due to faulty __init__.
Notes
If successful, equal in value to type(self)(**self.get_params(deep=False)).
- clone_tags(estimator, tag_names=None)
Clone tags from another estimator as dynamic override.
- Parameters:
- estimator : estimator inheriting from BaseEstimator
- tag_names : str or list of str, default = None
Names of tags to clone. If None then all tags in estimator are used as tag_names.
- Returns:
- Self
Reference to self.
Notes
Changes object state by setting tag values in tag_set from estimator as dynamic tags in self.
- classmethod create_test_instance(parameter_set='default')
Construct Estimator instance if possible.
- Parameters:
- parameter_set : str, default="default"
Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return the "default" set.
- Returns:
- instance : instance of the class with default parameters
Notes
get_test_params can return a dict or a list of dicts. This function takes the first (or single) dict that get_test_params returns, and constructs the object with it.
- classmethod create_test_instances_and_names(parameter_set='default')
Create list of all test instances and a list of names for them.
- Parameters:
- parameter_set : str, default="default"
Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return the "default" set.
- Returns:
- objs : list of instances of cls
The i-th instance is cls(**cls.get_test_params()[i]).
- names : list of str, same length as objs
The i-th element is the name of the i-th instance of obj in tests. The naming convention is {cls.__name__}-{i} if there is more than one instance, otherwise {cls.__name__}.
- static dense_to_sparse(y_dense)
Convert the dense output from an annotator to a sparse format.
- Parameters:
- y_dense : pd.Series
If y_dense contains only 1's and 0's, the 1's represent change points or anomalies.
If y_dense contains only integers greater than 0, it is an array of segments.
- Returns:
- pd.Series
If y_dense is a series of changepoints/anomalies, a pandas series will be returned containing the indexes of the changepoints/anomalies.
If y_dense is a series of segments, a series with an interval datatype index will be returned. The values of the series will be the labels of segments.
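For the changepoint/anomaly case, the conversion amounts to collecting the indexes at which the dense series equals 1. A plain-pandas sketch of that logic (an illustration, not the sktime implementation):

```python
import pandas as pd

# Dense form: 1 marks a changepoint/anomaly at that index.
y_dense = pd.Series([0, 0, 1, 0, 1])

# Sparse form: the indexes at which y_dense is 1.
y_sparse = pd.Series(y_dense.index[y_dense == 1])
```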
- fit(X, Y=None)
Fit to training data.
- Parameters:
- X : pd.DataFrame
Training data to fit model to (time series).
- Y : pd.Series, optional
Ground truth annotations for training if annotator is supervised.
- Returns:
- self
Reference to self.
Notes
Creates fitted model that updates attributes ending in “_”. Sets _is_fitted flag to True.
- fit_predict(X, Y=None)
Fit to data, then predict it.
Fits model to X and Y with given annotation parameters and returns the annotations made by the model.
- Parameters:
- X : pd.DataFrame, pd.Series or np.ndarray
Data to be transformed.
- Y : pd.Series or np.ndarray, optional (default=None)
Target values of data to be predicted.
- Returns:
- Y : pd.Series
Annotations for sequence X; exact format depends on annotation type.
- fit_transform(X, Y=None)
Fit to data, then transform it.
Fits model to X and Y with given annotation parameters and returns the annotations made by the model.
- Parameters:
- X : pd.DataFrame, pd.Series or np.ndarray
Data to be transformed.
- Y : pd.Series or np.ndarray, optional (default=None)
Target values of data to be predicted.
- Returns:
- Y : pd.Series
Annotations for sequence X; exact format depends on annotation type.
- classmethod get_class_tag(tag_name, tag_value_default=None)
Get a class tag's value.
Does not return information from dynamic tags (set via set_tags or clone_tags) that are defined on instances.
- Parameters:
- tag_name : str
Name of tag value.
- tag_value_default : any
Default/fallback value if tag is not found.
- Returns:
- tag_value
Value of the tag_name tag in self. If not found, returns tag_value_default.
- classmethod get_class_tags()
Get class tags from the class and all its parent classes.
Retrieves tag: value pairs from _tags class attribute. Does not return information from dynamic tags (set via set_tags or clone_tags) that are defined on instances.
- Returns:
- collected_tags : dict
Dictionary of class tag name: tag value pairs. Collected from _tags class attribute via nested inheritance.
- get_config()
Get config flags for self.
- Returns:
- config_dict : dict
Dictionary of config name : config value pairs. Collected from _config class attribute via nested inheritance and then any overrides and new tags from _config_dynamic object attribute.
- get_fitted_params(deep=True)
Get fitted parameters.
- State required:
Requires state to be "fitted".
- Parameters:
- deep : bool, default=True
Whether to return fitted parameters of components.
If True, will return a dict of parameter name : value for this object, including fitted parameters of fittable components (= BaseEstimator-valued parameters).
If False, will return a dict of parameter name : value for this object, but not include fitted parameters of components.
- Returns:
- fitted_params : dict with str-valued keys
Dictionary of fitted parameters, paramname : paramvalue. Key-value pairs include:
- always: all fitted parameters of this object, as via get_param_names; values are the fitted parameter value for that key, of this object
- if deep=True, also contains keys/value pairs of component parameters; parameters of components are indexed as [componentname]__[paramname]; all parameters of componentname appear as paramname with its value
- if deep=True, also contains arbitrary levels of component recursion, e.g., [componentname]__[componentcomponentname]__[paramname], etc.
- classmethod get_param_defaults()
Get object's parameter defaults.
- Returns:
- default_dict : dict[str, Any]
Keys are all parameters of cls that have a default defined in __init__; values are the defaults, as defined in __init__.
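Such a mapping can be assembled from the __init__ signature with the standard library. A minimal sketch (the Example class is hypothetical, for illustration only):

```python
import inspect

def param_defaults(cls):
    """Collect parameters of cls.__init__ that declare a default value."""
    sig = inspect.signature(cls.__init__)
    return {
        name: p.default
        for name, p in sig.parameters.items()
        if p.default is not inspect.Parameter.empty
    }

class Example:
    def __init__(self, a, b=2, c="x"):
        self.a, self.b, self.c = a, b, c
```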
- classmethod get_param_names(sort=True)
Get object's parameter names.
- Parameters:
- sort : bool, default=True
Whether to return the parameter names sorted in alphabetical order (True), or in the order they appear in the class __init__ (False).
- Returns:
- param_names : list[str]
List of parameter names of cls. If sort=False, in same order as they appear in the class __init__. If sort=True, alphabetically ordered.
- get_params(deep=True)
Get a dict of parameters values for this object.
- Parameters:
- deep : bool, default=True
Whether to return parameters of components.
If True, will return a dict of parameter name : value for this object, including parameters of components (= BaseObject-valued parameters).
If False, will return a dict of parameter name : value for this object, but not include parameters of components.
- Returns:
- params : dict with str-valued keys
Dictionary of parameters, paramname : paramvalue. Key-value pairs include:
- always: all parameters of this object, as via get_param_names; values are the parameter value for that key, of this object; values are always identical to values passed at construction
- if deep=True, also contains keys/value pairs of component parameters; parameters of components are indexed as [componentname]__[paramname]; all parameters of componentname appear as paramname with its value
- if deep=True, also contains arbitrary levels of component recursion, e.g., [componentname]__[componentcomponentname]__[paramname], etc.
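The [componentname]__[paramname] convention can be sketched as a dict flattening. This is an illustrative re-implementation, not the sktime code; nested dicts stand in for component objects:

```python
def flatten_params(params, prefix=""):
    """Flatten nested parameter dicts using the double-underscore convention."""
    flat = {}
    for name, value in params.items():
        key = prefix + name
        if isinstance(value, dict):
            # A "component" appears both as a parameter itself and as
            # flattened key/value pairs for each of its own parameters.
            flat[key] = value
            flat.update(flatten_params(value, key + "__"))
        else:
            flat[key] = value
    return flat
```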
- get_tag(tag_name, tag_value_default=None, raise_error=True)
Get tag value from estimator class and dynamic tag overrides.
- Parameters:
- tag_name : str
Name of tag to be retrieved.
- tag_value_default : any type, optional; default=None
Default/fallback value if tag is not found.
- raise_error : bool
Whether a ValueError is raised when the tag is not found.
- Returns:
- tag_value : Any
Value of the tag_name tag in self. If not found, raises an error if raise_error is True, otherwise returns tag_value_default.
- Raises:
- ValueError
If raise_error is True, i.e. if tag_name is not in self.get_tags().keys().
- get_tags()
Get tags from estimator class and dynamic tag overrides.
- Returns:
- collected_tags : dict
Dictionary of tag name : tag value pairs. Collected from _tags class attribute via nested inheritance and then any overrides and new tags from _tags_dynamic object attribute.
- is_composite()
Check if the object is composed of other BaseObjects.
A composite object is an object which contains objects as parameters. Called on an instance, since this may differ by instance.
- Returns:
- composite: bool
Whether an object has any parameters whose values are BaseObjects.
- classmethod load_from_path(serial)
Load object from file location.
- Parameters:
- serial : result of ZipFile(path).open("object")
- Returns:
- deserialized self, resulting in output at path, of cls.save(path)
- classmethod load_from_serial(serial)
Load object from serialized memory container.
- Parameters:
- serial : 1st element of output of cls.save(None)
- Returns:
- deserialized self, resulting in output serial, of cls.save(None)
- predict(X)
Create annotations on test/deployment data.
- Parameters:
- X : pd.DataFrame
Data to annotate (time series).
- Returns:
- Y : pd.Series
Annotations for sequence X; exact format depends on annotation type.
- predict_points(X)
Predict changepoints/anomalies on test/deployment data.
- Parameters:
- X : pd.DataFrame
Data to annotate, time series.
- Returns:
- Y : pd.Series
A series whose values are the changepoints/anomalies in X.
- predict_scores(X)
Return scores for predicted annotations on test/deployment data.
- Parameters:
- X : pd.DataFrame
Data to annotate (time series).
- Returns:
- Y : pd.Series
Scores for sequence X; exact format depends on annotation type.
- predict_segments(X)
Predict segments on test/deployment data.
- Parameters:
- X : pd.DataFrame
Data to annotate, time series.
- Returns:
- Y : pd.Series
A series with an index of intervals. Each interval is the range of a segment and the corresponding value is the label of the segment.
- reset()
Reset the object to a clean post-init state.
Using reset, runs __init__ with current values of hyper-parameters (result of get_params). This removes any object attributes, except:
- hyper-parameters = arguments of __init__
- object attributes containing double-underscores, i.e., the string "__"
Class and object methods, and class attributes are also unaffected.
- Returns:
- self
Instance of class reset to a clean post-init state but retaining the current hyper-parameter values.
Notes
Equivalent to sklearn.clone but overwrites self. After self.reset() call, self is equal in value to type(self)(**self.get_params(deep=False)).
- sample(n_samples=1, random_state=None, currstate=None)
Interface method which allows users to sample from their HMM.
- save(path=None, serialization_format='pickle')
Save serialized self to bytes-like object or to (.zip) file.
Behaviour: if path is None, returns an in-memory serialized self; if path is a file location, stores self at that location as a zip file.
Saved files are zip files with the following contents: _metadata contains the class of self, i.e., type(self); _obj contains the serialized self. This class uses the default serialization (pickle).
- Parameters:
- path : None or file location (str or Path)
If None, self is saved to an in-memory object; if file location, self is saved to that file location. For example: path="estimator" makes a zip file estimator.zip at cwd; path="/home/stored/estimator" stores a zip file estimator.zip in /home/stored/.
- serialization_format : str, default = "pickle"
Module to use for serialization. The available options are "pickle" and "cloudpickle". Note that non-default formats might require installation of other soft dependencies.
- Returns:
- if path is None: in-memory serialized self
- if path is a file location: ZipFile with reference to the file
- static segments_to_change_points(y_sparse)
Convert segments to change points.
- Parameters:
- y_sparse : pd.DataFrame
A series of segments. The index must be the interval data type and the values should be the integer labels of the segments.
- Returns:
- pd.Series
A series containing the indexes of the start of each segment.
Examples
>>> import pandas as pd
>>> from sktime.annotation.base._base import BaseSeriesAnnotator
>>> segments = pd.Series(
...     [3, -1, 2],
...     index=pd.IntervalIndex.from_breaks([2, 5, 7, 9], closed="left")
... )
>>> BaseSeriesAnnotator.segments_to_change_points(segments)
0    2
1    5
2    7
dtype: int64
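Equivalently with plain pandas, the change points are the left endpoints of the interval index. A sketch of the same conversion (an illustration, not the sktime implementation):

```python
import pandas as pd

segments = pd.Series(
    [3, -1, 2],
    index=pd.IntervalIndex.from_breaks([2, 5, 7, 9], closed="left"),
)

# The start of each segment is its interval's left endpoint.
change_points = pd.Series(segments.index.left)
```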
- set_config(**config_dict)
Set config flags to given values.
- Parameters:
- config_dict : dict
Dictionary of config name : config value pairs. Valid configs, values, and their meaning are listed below:
- display : str, "diagram" (default), or "text"
How jupyter kernels display instances of self:
"diagram" = html box diagram representation
"text" = string printout
- print_changed_only : bool, default=True
Whether printing of self lists only self-parameters that differ from defaults (True), or all parameter names and values (False). Does not nest, i.e., only affects self and not component estimators.
- warnings : str, "on" (default), or "off"
Whether to raise warnings, affects warnings from sktime only:
"on" = will raise warnings from sktime
"off" = will not raise warnings from sktime
- backend:parallel : str, optional, default="None"
Backend to use for parallelization when broadcasting/vectorizing, one of:
"None": executes loop sequentially, simple list comprehension
"loky", "multiprocessing" and "threading": uses joblib.Parallel
"joblib": custom and 3rd party joblib backends, e.g., spark
"dask": uses dask, requires dask package in environment
- backend:parallel:params : dict, optional, default={} (no parameters passed)
Additional parameters passed to the parallelization backend as config. Valid keys depend on the value of backend:parallel:
"None": no additional parameters, backend_params is ignored
"loky", "multiprocessing" and "threading": default joblib backends; any valid keys for joblib.Parallel can be passed here, e.g., n_jobs, with the exception of backend, which is directly controlled by backend:parallel. If n_jobs is not passed, it will default to -1; other parameters will default to joblib defaults.
"joblib": custom and 3rd party joblib backends, e.g., spark. Any valid keys for joblib.Parallel can be passed here, e.g., n_jobs; backend must be passed as a key of backend_params in this case. If n_jobs is not passed, it will default to -1; other parameters will default to joblib defaults.
"dask": any valid keys for dask.compute can be passed, e.g., scheduler
- Returns:
- self : reference to self.
Notes
Changes object state, copies configs in config_dict to self._config_dynamic.
- set_params(**params)
Set the parameters of this object.
The method works on simple estimators as well as on composite objects. Parameter key strings <component>__<parameter> can be used for composites, i.e., objects that contain other objects, to access <parameter> in the component <component>. The string <parameter>, without <component>__, can also be used if this makes the reference unambiguous, e.g., if there are no two parameters of components with the name <parameter>.
- Parameters:
- **params : dict
BaseObject parameters, keys must be <component>__<parameter> strings. __ suffixes can alias full strings, if unique among get_params keys.
- Returns:
- self : reference to self (after parameters have been set)
- set_random_state(random_state=None, deep=True, self_policy='copy')
Set random_state pseudo-random seed parameters for self.
Finds random_state named parameters via estimator.get_params, and sets them to integers derived from random_state via set_params. These integers are sampled from chain hashing via sample_dependent_seed, and guarantee pseudo-random independence of seeded random generators.
Applies to random_state parameters in estimator depending on self_policy, and remaining component estimators if and only if deep=True.
Note: calls set_params even if self does not have a random_state, or none of the components have a random_state parameter. Therefore, set_random_state will reset any scikit-base estimator, even those without a random_state parameter.
- Parameters:
- random_state : int, RandomState instance or None, default=None
Pseudo-random number generator to control the generation of the random integers. Pass int for reproducible output across multiple function calls.
- deep : bool, default=True
Whether to set the random state in sub-estimators. If False, will set only self's random_state parameter, if it exists. If True, will set random_state parameters in sub-estimators as well.
- self_policy : str, one of {"copy", "keep", "new"}, default="copy"
"copy" : estimator.random_state is set to input random_state
"keep" : estimator.random_state is kept as is
"new" : estimator.random_state is set to a new random state, derived from input random_state, and in general different from it
- Returns:
- self : reference to self
- set_tags(**tag_dict)
Set dynamic tags to given values.
- Parameters:
- **tag_dict : dict
Dictionary of tag name: tag value pairs.
- Returns:
- Self
Reference to self.
Notes
Changes object state by setting tag values in tag_dict as dynamic tags in self.
- static sparse_to_dense(y_sparse, index)
Convert the sparse output from an annotator to a dense format.
- Parameters:
- y_sparse : pd.Series
If y_sparse is a series with an index of intervals, it should represent segments where each value of the series is the label of a segment. Unclassified intervals should be labelled -1. Segments must never have the label 0.
If the index of y_sparse is not a set of intervals, the values of the series should represent the indexes of changepoints/anomalies.
- index : array-like
Indices that are to be annotated according to y_sparse.
- Returns:
- pd.Series
A series with an index of index is returned.
If y_sparse is a series of changepoints/anomalies, then the returned series is labelled 0 and 1 depending on whether the index is associated with an anomaly/changepoint, where 1 means anomaly/changepoint.
If y_sparse is a series of segments, then the returned series is labelled depending on the segment its indexes fall into. Indexes that fall into no segments are labelled -1.
Examples
>>> import pandas as pd
>>> from sktime.annotation.base._base import BaseSeriesAnnotator
>>> y_sparse = pd.Series([2, 5, 7])  # Indices of changepoints/anomalies
>>> index = range(0, 8)
>>> BaseSeriesAnnotator.sparse_to_dense(y_sparse, index=index)
0    0
1    0
2    1
3    0
4    0
5    1
6    0
7    1
dtype: int64
>>> y_sparse = pd.Series(
...     [1, 2, 1],
...     index=pd.IntervalIndex.from_arrays(
...         [0, 4, 6], [4, 6, 10], closed="left"
...     )
... )
>>> index = range(10)
>>> BaseSeriesAnnotator.sparse_to_dense(y_sparse, index=index)
0    1
1    1
2    1
3    1
4    2
5    2
6    1
7    1
8    1
9    1
dtype: int64
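The changepoint case of the first doctest can be sketched with plain pandas (an illustration of the logic, not the sktime implementation): start from an all-zero series over index and set the sparse positions to 1:

```python
import pandas as pd

y_sparse = pd.Series([2, 5, 7])  # indexes of changepoints/anomalies
index = range(0, 8)

# Dense form: 0 everywhere, 1 at each changepoint/anomaly index.
y_dense = pd.Series(0, index=index)
y_dense.loc[y_sparse.to_list()] = 1
```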
- transform(X)
Create annotations on test/deployment data.
- Parameters:
- X : pd.DataFrame
Data to annotate (time series).
- Returns:
- Y : pd.Series
Annotations for sequence X. The returned annotations will be in the dense format.
- update(X, Y=None)
Update model with new data and optional ground truth annotations.
- Parameters:
- X : pd.DataFrame
Training data to update model with (time series).
- Y : pd.Series, optional
Ground truth annotations for training if annotator is supervised.
- Returns:
- self
Reference to self.
Notes
Updates fitted model that updates attributes ending in “_”.
- update_predict(X)
Update model with new data and create annotations for it.
- Parameters:
- X : pd.DataFrame
Training data to update model with, time series.
- Returns:
- Y : pd.Series
Annotations for sequence X; exact format depends on annotation type.
Notes
Updates fitted model that updates attributes ending in “_”.