MedianSquaredScaledError#

class MedianSquaredScaledError(multioutput='uniform_average', multilevel='uniform_average', sp=1, square_root=False)[source]#

Median squared scaled error (MdSSE) or root median squared scaled error (RMdSSE).

If square_root is False, calculates MdSSE; if square_root is True, calculates RMdSSE. Both MdSSE and RMdSSE output non-negative floating point values. The best value is 0.0.

This is a squared variant of the MdASE loss metric. Like MASE and other scaled performance metrics, this scale-free metric can be used to compare forecast methods on a single series or between series.

This metric is also suited for intermittent-demand series because it will not give infinite or undefined values unless the training data is a flat time series, in which case the function returns a large value instead of inf.

Works with multioutput (multivariate) time series data with homogeneous seasonal periodicity.
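
As a rough, hand-rolled illustration of the scaling idea (not the library's internal implementation), the following sketch reproduces the first univariate example from the Examples section below. It assumes sp=1 and that the median of the squared forecast errors is scaled by the median squared error of the one-step (seasonal naive) forecast on the training series.

>>> import numpy as np
>>> y_train = np.array([5, 0.5, 4, 6, 3, 5, 2])
>>> y_true = np.array([3, -0.5, 2, 7, 2])
>>> y_pred = np.array([2.5, 0.0, 2, 8, 1.25])
>>> naive_errors = y_train[1:] - y_train[:-1]  # in-sample naive forecast errors (sp=1)
>>> denom = np.median(naive_errors ** 2)  # scale estimated from the training series
>>> rmdsse_by_hand = np.sqrt(np.median((y_true - y_pred) ** 2) / denom)
>>> bool(np.isclose(rmdsse_by_hand, 0.16666666666666666))
True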

Parameters:
sp : int, default = 1

Seasonal periodicity of data.

square_root : bool, default = False

Whether to take the square root of the metric

multioutput : {‘raw_values’, ‘uniform_average’} or array-like of shape (n_outputs,), default=’uniform_average’

Defines how to aggregate metric for multivariate (multioutput) data.

  • If array-like, values used as weights to average the errors.

  • If 'raw_values', returns a full set of errors in case of multioutput input.

  • If 'uniform_average', errors of all outputs are averaged with uniform weight.

multilevel : {‘raw_values’, ‘uniform_average’, ‘uniform_average_time’}

Defines how to aggregate metric for hierarchical data (with levels).

  • If 'uniform_average' (default), errors are mean-averaged across levels.

  • If 'uniform_average_time', metric is applied to all data, ignoring level index.

  • If 'raw_values', does not average errors across levels, hierarchy is retained.

References

M5 Competition Guidelines.

https://mofc.unic.ac.cy/wp-content/uploads/2020/03/M5-Competitors-Guide-Final-10-March-2020.docx

Hyndman, R. J. and Koehler, A. B. (2006). “Another look at measures of forecast accuracy”, International Journal of Forecasting, Volume 22, Issue 4.

Examples

>>> import numpy as np
>>> from sktime.performance_metrics.forecasting import MedianSquaredScaledError
>>> y_train = np.array([5, 0.5, 4, 6, 3, 5, 2])
>>> y_true = np.array([3, -0.5, 2, 7, 2])
>>> y_pred = np.array([2.5, 0.0, 2, 8, 1.25])
>>> rmdsse = MedianSquaredScaledError(square_root=True)
>>> rmdsse(y_true, y_pred, y_train=y_train)
0.16666666666666666
>>> y_train = np.array([[0.5, 1], [-1, 1], [7, -6]])
>>> y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
>>> y_pred = np.array([[0, 2], [-1, 2], [8, -5]])
>>> rmdsse(y_true, y_pred, y_train=y_train)
0.1472819539849714
>>> rmdsse = MedianSquaredScaledError(multioutput='raw_values', square_root=True)
>>> rmdsse(y_true, y_pred, y_train=y_train)
array([0.08687445, 0.20203051])
>>> rmdsse = MedianSquaredScaledError(multioutput=[0.3, 0.7], square_root=True)
>>> rmdsse(y_true, y_pred, y_train=y_train)
0.16914781383660782

Methods

__call__(y_true, y_pred, **kwargs)

Calculate metric value using underlying metric function.

clone()

Obtain a clone of the object with same hyper-parameters and config.

clone_tags(estimator[, tag_names])

Clone tags from another object as dynamic override.

create_test_instance([parameter_set])

Construct an instance of the class, using first test parameter set.

create_test_instances_and_names([parameter_set])

Create list of all test instances and a list of names for them.

evaluate(y_true, y_pred, **kwargs)

Evaluate the desired metric on given inputs.

evaluate_by_index(y_true, y_pred, **kwargs)

Return the metric evaluated at each time point.

func(y_true, y_pred[, sp, horizon_weight, ...])

Median squared scaled error (MdSSE) or root median squared scaled error (RMdSSE).

get_class_tag(tag_name[, tag_value_default])

Get class tag value from class, with tag level inheritance from parents.

get_class_tags()

Get class tags from class, with tag level inheritance from parent classes.

get_config()

Get config flags for self.

get_param_defaults()

Get object's parameter defaults.

get_param_names([sort])

Get object's parameter names.

get_params([deep])

Get a dict of parameters values for this object.

get_tag(tag_name[, tag_value_default, ...])

Get tag value from instance, with tag level inheritance and overrides.

get_tags()

Get tags from instance, with tag level inheritance and overrides.

get_test_params([parameter_set])

Return testing parameter settings for the estimator.

is_composite()

Check if the object is composed of other BaseObjects.

load_from_path(serial)

Load object from file location.

load_from_serial(serial)

Load object from serialized memory container.

reset()

Reset the object to a clean post-init state.

save([path, serialization_format])

Save serialized self to bytes-like object or to (.zip) file.

set_config(**config_dict)

Set config flags to given values.

set_params(**params)

Set the parameters of this object.

set_random_state([random_state, deep, ...])

Set random_state pseudo-random seed parameters for self.

set_tags(**tag_dict)

Set instance level tag overrides to given values.

func(y_true, y_pred, sp=1, horizon_weight=None, multioutput='uniform_average', square_root=False, **kwargs)[source]#

Median squared scaled error (MdSSE) or root median squared scaled error (RMdSSE).

If square_root is False, calculates MdSSE; if square_root is True, calculates RMdSSE. Both MdSSE and RMdSSE output non-negative floating point values. The best value is 0.0.

This is a squared variant of the MdASE loss metric. Like MASE and other scaled performance metrics, this scale-free metric can be used to compare forecast methods on a single series or between series.

This metric is also suited for intermittent-demand series because it will not give infinite or undefined values unless the training data is a flat time series, in which case the function returns a large value instead of inf.

Works with multioutput (multivariate) time series data with homogeneous seasonal periodicity.

Parameters:
y_true : pd.Series, pd.DataFrame or np.array of shape (fh,) or (fh, n_outputs), where fh is the forecasting horizon

Ground truth (correct) target values.

y_pred : pd.Series, pd.DataFrame or np.array of shape (fh,) or (fh, n_outputs), where fh is the forecasting horizon

Forecasted values.

y_train : pd.Series, pd.DataFrame or np.array of shape (n_timepoints,) or (n_timepoints, n_outputs), default = None

Observed training values.

sp : int

Seasonal periodicity of training data.

horizon_weight : array-like of shape (fh,), default=None

Forecast horizon weights.

multioutput : {‘raw_values’, ‘uniform_average’} or array-like of shape (n_outputs,), default=’uniform_average’

Defines how to aggregate metric for multivariate (multioutput) data. If array-like, values used as weights to average the errors. If ‘raw_values’, returns a full set of errors in case of multioutput input. If ‘uniform_average’, errors of all outputs are averaged with uniform weight.

Returns:
loss : float

MdSSE or RMdSSE loss. If multioutput is ‘raw_values’, then the loss is returned for each output separately. If multioutput is ‘uniform_average’ or an ndarray of weights, then the weighted average loss of all output errors is returned.

References

M5 Competition Guidelines.

https://mofc.unic.ac.cy/wp-content/uploads/2020/03/M5-Competitors-Guide-Final-10-March-2020.docx

Hyndman, R. J. and Koehler, A. B. (2006). “Another look at measures of forecast accuracy”, International Journal of Forecasting, Volume 22, Issue 4.

Examples

>>> import numpy as np
>>> from sktime.performance_metrics.forecasting import median_squared_scaled_error
>>> y_train = np.array([5, 0.5, 4, 6, 3, 5, 2])
>>> y_true = np.array([3, -0.5, 2, 7, 2])
>>> y_pred = np.array([2.5, 0.0, 2, 8, 1.25])
>>> median_squared_scaled_error(y_true, y_pred, y_train=y_train, square_root=True)
0.16666666666666666
>>> y_train = np.array([[0.5, 1], [-1, 1], [7, -6]])
>>> y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
>>> y_pred = np.array([[0, 2], [-1, 2], [8, -5]])
>>> median_squared_scaled_error(y_true, y_pred, y_train=y_train, square_root=True)
0.1472819539849714
>>> median_squared_scaled_error(y_true, y_pred, y_train=y_train,
...     multioutput='raw_values', square_root=True)
array([0.08687445, 0.20203051])
>>> median_squared_scaled_error(y_true, y_pred, y_train=y_train,
...     multioutput=[0.3, 0.7], square_root=True)
0.16914781383660782
classmethod get_test_params(parameter_set='default')[source]#

Return testing parameter settings for the estimator.

Parameters:
parameter_set : str, default=”default”

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return "default" set.

Returns:
params : dict or list of dict, default = {}

Parameters to create testing instances of the class. Each dict contains parameters to construct an “interesting” test instance, i.e., MyClass(**params) or MyClass(**params[i]) creates a valid test instance. create_test_instance uses the first (or only) dictionary in params.

__call__(y_true, y_pred, **kwargs)[source]#

Calculate metric value using underlying metric function.

Parameters:
y_true : time series in sktime compatible data container format

Ground truth (correct) target values.

Individual data formats in sktime are so-called mtype specifications, each mtype implements an abstract scitype.

  • Series scitype = individual time series, vanilla forecasting. pd.DataFrame, pd.Series, or np.ndarray (1D or 2D)

  • Panel scitype = collection of time series, global/panel forecasting. pd.DataFrame with 2-level row MultiIndex (instance, time), 3D np.ndarray (instance, variable, time), list of Series typed pd.DataFrame

  • Hierarchical scitype = hierarchical collection, for hierarchical forecasting. pd.DataFrame with 3 or more level row MultiIndex (hierarchy_1, ..., hierarchy_n, time)

For further details on data format, see glossary on mtype. For usage, see forecasting tutorial examples/01_forecasting.ipynb

y_pred : time series in sktime compatible data container format

Predicted values to evaluate against ground truth. Must be of same format as y_true, same indices and columns if indexed.

y_pred_benchmark : optional, time series in sktime compatible data container format

Benchmark predictions to compare y_pred to, used for relative metrics. Required only if metric requires benchmark predictions, as indicated by tag requires-y-pred-benchmark. Otherwise, can be passed to ensure interface consistency, but is ignored. Must be of same format as y_true, same indices and columns if indexed.

y_train : optional, time series in sktime compatible data container format

Training data used to normalize the error metric. Required only if metric requires training data, as indicated by tag requires-y-train. Otherwise, can be passed to ensure interface consistency, but is ignored. Must be of same format as y_true, same columns if indexed, but not necessarily same indices.

sample_weight : optional, 1D array-like, or callable, default=None

Sample weights for each time point.

  • If None, the time indices are considered equally weighted.

  • If an array, must be 1D. If y_true and y_pred are a single time series, sample_weight must be of the same length as y_true. If the time series are panel or hierarchical, the length of all individual time series must be the same, and equal to the length of sample_weight, for all instances of time series passed.

  • If a callable, it must follow SampleWeightGenerator interface, or have one of the following signatures: y_true: pd.DataFrame -> 1D array-like, or y_true: pd.DataFrame x y_pred: pd.DataFrame -> 1D array-like.

Returns:
loss : float, np.ndarray, or pd.DataFrame

Calculated metric, averaged or by variable. Weighted by sample_weight if provided.

  • float if multioutput="uniform_average" or array-like, and multilevel="uniform_average" or "uniform_average_time". Value is the metric averaged over variables and levels (see class docstring).

  • np.ndarray of shape (y_true.columns,) if multioutput="raw_values" and multilevel="uniform_average" or "uniform_average_time". The i-th entry is the metric calculated for the i-th variable.

  • pd.DataFrame if multilevel="raw_values"; of shape (n_levels,) if multioutput="uniform_average", of shape (n_levels, y_true.columns) if multioutput="raw_values". The metric is applied per level, with row averaging (yes/no) as in multioutput.
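
For instance, a callable sample_weight as described above can be passed alongside y_train. The recency_weights helper below is hypothetical, introduced only to illustrate the y_true -> 1D array-like signature; the weighted result is not reproduced here.

>>> import numpy as np
>>> from sktime.performance_metrics.forecasting import MedianSquaredScaledError
>>> y_train = np.array([5, 0.5, 4, 6, 3, 5, 2])
>>> y_true = np.array([3, -0.5, 2, 7, 2])
>>> y_pred = np.array([2.5, 0.0, 2, 8, 1.25])
>>> def recency_weights(y_true):
...     return np.arange(1, len(y_true) + 1)  # later time points weighted more heavily
>>> rmdsse = MedianSquaredScaledError(square_root=True)
>>> loss = rmdsse(y_true, y_pred, y_train=y_train, sample_weight=recency_weights)  # doctest: +SKIP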

clone()[source]#

Obtain a clone of the object with same hyper-parameters and config.

A clone is a different object without shared references, in post-init state. This function is equivalent to returning sklearn.clone of self.

Equivalent to constructing a new instance of type(self), with parameters of self, that is, type(self)(**self.get_params(deep=False)).

If configs were set on self, the clone will also have the same configs as the original, equivalent to calling cloned_self.set_config(**self.get_config()).

Also equivalent in value to a call of self.reset, with the exception that clone returns a new object, instead of mutating self like reset.

Raises:
RuntimeError if the clone is non-conforming, due to faulty __init__.
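
A minimal usage sketch: cloning a configured metric yields a distinct object with identical hyper-parameters.

>>> from sktime.performance_metrics.forecasting import MedianSquaredScaledError
>>> metric = MedianSquaredScaledError(sp=4, square_root=True)
>>> metric_clone = metric.clone()
>>> metric_clone is metric  # a new object, not a shared reference
False
>>> metric_clone.get_params() == metric.get_params()  # same hyper-parameters
True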
clone_tags(estimator, tag_names=None)[source]#

Clone tags from another object as dynamic override.

Every scikit-base compatible object has a dictionary of tags. Tags may be used to store metadata about the object, or to control behaviour of the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object.

clone_tags sets dynamic tag overrides from another object, estimator.

The clone_tags method should be called only in the __init__ method of an object, during construction, or directly after construction via __init__.

The dynamic tags are set to the values of the tags in estimator, with the names specified in tag_names.

The default of tag_names writes all tags from estimator to self.

Current tag values can be inspected by get_tags or get_tag.

Parameters:
estimator : an instance of BaseObject or derived class
tag_names : str or list of str, default = None

Names of tags to clone. The default (None) clones all tags from estimator.

Returns:
self

Reference to self.

classmethod create_test_instance(parameter_set='default')[source]#

Construct an instance of the class, using first test parameter set.

Parameters:
parameter_set : str, default=”default”

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.

Returns:
instance : instance of the class with default parameters
classmethod create_test_instances_and_names(parameter_set='default')[source]#

Create list of all test instances and a list of names for them.

Parameters:
parameter_set : str, default=”default”

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.

Returns:
objs : list of instances of cls

i-th instance is cls(**cls.get_test_params()[i])

names : list of str, same length as objs

i-th element is name of i-th instance of obj in tests. The naming convention is {cls.__name__}-{i} if more than one instance, otherwise {cls.__name__}

evaluate(y_true, y_pred, **kwargs)[source]#

Evaluate the desired metric on given inputs.

Parameters:
y_true : time series in sktime compatible data container format

Ground truth (correct) target values.

Individual data formats in sktime are so-called mtype specifications, each mtype implements an abstract scitype.

  • Series scitype = individual time series, vanilla forecasting. pd.DataFrame, pd.Series, or np.ndarray (1D or 2D)

  • Panel scitype = collection of time series, global/panel forecasting. pd.DataFrame with 2-level row MultiIndex (instance, time), 3D np.ndarray (instance, variable, time), list of Series typed pd.DataFrame

  • Hierarchical scitype = hierarchical collection, for hierarchical forecasting. pd.DataFrame with 3 or more level row MultiIndex (hierarchy_1, ..., hierarchy_n, time)

For further details on data format, see glossary on mtype. For usage, see forecasting tutorial examples/01_forecasting.ipynb

y_pred : time series in sktime compatible data container format

Predicted values to evaluate against ground truth. Must be of same format as y_true, same indices and columns if indexed.

y_pred_benchmark : optional, time series in sktime compatible data container format

Benchmark predictions to compare y_pred to, used for relative metrics. Required only if metric requires benchmark predictions, as indicated by tag requires-y-pred-benchmark. Otherwise, can be passed to ensure interface consistency, but is ignored. Must be of same format as y_true, same indices and columns if indexed.

y_train : optional, time series in sktime compatible data container format

Training data used to normalize the error metric. Required only if metric requires training data, as indicated by tag requires-y-train. Otherwise, can be passed to ensure interface consistency, but is ignored. Must be of same format as y_true, same columns if indexed, but not necessarily same indices.

sample_weight : optional, 1D array-like, or callable, default=None

Sample weights or callable for each time point.

  • If None, the time indices are considered equally weighted.

  • If an array, must be 1D. If y_true and y_pred are a single time series, sample_weight must be of the same length as y_true. If the time series are panel or hierarchical, the length of all individual time series must be the same, and equal to the length of sample_weight, for all instances of time series passed.

  • If a callable, it must follow SampleWeightGenerator interface, or have one of the following signatures: y_true: pd.DataFrame -> 1D array-like, or y_true: pd.DataFrame x y_pred: pd.DataFrame -> 1D array-like.

Returns:
loss : float, np.ndarray, or pd.DataFrame

Calculated metric, averaged or by variable. Weighted by sample_weight if provided.

  • float if multioutput="uniform_average" or array-like, and multilevel="uniform_average" or "uniform_average_time". Value is the metric averaged over variables and levels (see class docstring).

  • np.ndarray of shape (y_true.columns,) if multioutput="raw_values" and multilevel="uniform_average" or "uniform_average_time". The i-th entry is the metric calculated for the i-th variable.

  • pd.DataFrame if multilevel="raw_values"; of shape (n_levels,) if multioutput="uniform_average", of shape (n_levels, y_true.columns) if multioutput="raw_values". The metric is applied per level, with row averaging (yes/no) as in multioutput.

evaluate_by_index(y_true, y_pred, **kwargs)[source]#

Return the metric evaluated at each time point.

Parameters:
y_true : time series in sktime compatible data container format

Ground truth (correct) target values.

Individual data formats in sktime are so-called mtype specifications, each mtype implements an abstract scitype.

  • Series scitype = individual time series, vanilla forecasting. pd.DataFrame, pd.Series, or np.ndarray (1D or 2D)

  • Panel scitype = collection of time series, global/panel forecasting. pd.DataFrame with 2-level row MultiIndex (instance, time), 3D np.ndarray (instance, variable, time), list of Series typed pd.DataFrame

  • Hierarchical scitype = hierarchical collection, for hierarchical forecasting. pd.DataFrame with 3 or more level row MultiIndex (hierarchy_1, ..., hierarchy_n, time)

For further details on data format, see glossary on mtype. For usage, see forecasting tutorial examples/01_forecasting.ipynb

y_pred : time series in sktime compatible data container format

Predicted values to evaluate against ground truth. Must be of same format as y_true, same indices and columns if indexed.

y_pred_benchmark : optional, time series in sktime compatible data container format

Benchmark predictions to compare y_pred to, used for relative metrics. Required only if metric requires benchmark predictions, as indicated by tag requires-y-pred-benchmark. Otherwise, can be passed to ensure interface consistency, but is ignored. Must be of same format as y_true, same indices and columns if indexed.

y_train : optional, time series in sktime compatible data container format

Training data used to normalize the error metric. Required only if metric requires training data, as indicated by tag requires-y-train. Otherwise, can be passed to ensure interface consistency, but is ignored. Must be of same format as y_true, same columns if indexed, but not necessarily same indices.

sample_weight : optional, 1D array-like, or callable, default=None

Sample weights or callable for each time point.

  • If None, the time indices are considered equally weighted.

  • If an array, must be 1D. If y_true and y_pred are a single time series, sample_weight must be of the same length as y_true. If the time series are panel or hierarchical, the length of all individual time series must be the same, and equal to the length of sample_weight, for all instances of time series passed.

  • If a callable, it must follow SampleWeightGenerator interface, or have one of the following signatures: y_true: pd.DataFrame -> 1D array-like, or y_true: pd.DataFrame x y_pred: pd.DataFrame -> 1D array-like.

Returns:
loss : pd.Series or pd.DataFrame

Calculated metric, by time point (default=jackknife pseudo-values). Weighted by sample_weight if provided.

  • pd.Series if multioutput="uniform_average" or array-like. index is equal to index of y_true; entry at index i is metric at time i, averaged over variables

  • pd.DataFrame if multioutput="raw_values". index and columns equal to those of y_true; i,j-th entry is metric at time i, at variable j
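
A brief sketch of per-time-point evaluation, using the same data as the class Examples above; the return is expected to be a pd.Series aligned with the index of y_true, and the exact jackknife pseudo-values are not reproduced here.

>>> import numpy as np
>>> from sktime.performance_metrics.forecasting import MedianSquaredScaledError
>>> y_train = np.array([5, 0.5, 4, 6, 3, 5, 2])
>>> y_true = np.array([3, -0.5, 2, 7, 2])
>>> y_pred = np.array([2.5, 0.0, 2, 8, 1.25])
>>> rmdsse = MedianSquaredScaledError(square_root=True)
>>> scores = rmdsse.evaluate_by_index(y_true, y_pred, y_train=y_train)  # doctest: +SKIP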

classmethod get_class_tag(tag_name, tag_value_default=None)[source]#

Get class tag value from class, with tag level inheritance from parents.

Every scikit-base compatible object has a dictionary of tags. Tags may be used to store metadata about the object, or to control behaviour of the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object.

The get_class_tag method is a class method, and retrieves the value of a tag taking into account only class-level tag values and overrides.

It returns the value of the tag with name tag_name from the object, taking into account tag overrides, in the following order of descending priority:

  1. Tags set in the _tags attribute of the class.

  2. Tags set in the _tags attribute of parent classes, in order of inheritance.

Does not take into account dynamic tag overrides on instances, set via set_tags or clone_tags, that are defined on instances.

To retrieve tag values with potential instance overrides, use the get_tag method instead.

Parameters:
tag_name : str

Name of tag value.

tag_value_default : any type

Default/fallback value if tag is not found.

Returns:
tag_value

Value of the tag_name tag in self. If not found, returns tag_value_default.

classmethod get_class_tags()[source]#

Get class tags from class, with tag level inheritance from parent classes.

Every scikit-base compatible object has a dictionary of tags. Tags may be used to store metadata about the object, or to control behaviour of the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object.

The get_class_tags method is a class method, and retrieves the value of a tag taking into account only class-level tag values and overrides.

It returns a dictionary with keys being keys of any attribute of _tags set in the class or any of its parent classes.

Values are the corresponding tag values, with overrides in the following order of descending priority:

  1. Tags set in the _tags attribute of the class.

  2. Tags set in the _tags attribute of parent classes, in order of inheritance.

Instances can override these tags depending on hyper-parameters.

To retrieve tags with potential instance overrides, use the get_tags method instead.

Does not take into account dynamic tag overrides on instances, set via set_tags or clone_tags, that are defined on instances.

For including overrides from dynamic tags, use get_tags.

Returns:
collected_tags : dict

Dictionary of tag name : tag value pairs. Collected from _tags class attribute via nested inheritance. NOT overridden by dynamic tags set by set_tags or clone_tags.

get_config()[source]#

Get config flags for self.

Configs are key-value pairs of self, typically used as transient flags for controlling behaviour.

get_config returns dynamic configs, which override the default configs.

Default configs are set in the class attribute _config of the class or its parent classes, and are overridden by dynamic configs set via set_config.

Configs are retained under clone or reset calls.

Returns:
config_dict : dict

Dictionary of config name : config value pairs. Collected from _config class attribute via nested inheritance and then any overrides and new configs from _config_dynamic object attribute.

classmethod get_param_defaults()[source]#

Get object’s parameter defaults.

Returns:
default_dict: dict[str, Any]

Keys are all parameters of cls that have a default defined in __init__. Values are the defaults, as defined in __init__.

classmethod get_param_names(sort=True)[source]#

Get object’s parameter names.

Parameters:
sort : bool, default=True

Whether to return the parameter names sorted in alphabetical order (True), or in the order they appear in the class __init__ (False).

Returns:
param_names: list[str]

List of parameter names of cls. If sort=False, in same order as they appear in the class __init__. If sort=True, alphabetically ordered.

get_params(deep=True)[source]#

Get a dict of parameters values for this object.

Parameters:
deep : bool, default=True

Whether to return parameters of components.

  • If True, will return a dict of parameter name : value for this object, including parameters of components (= BaseObject-valued parameters).

  • If False, will return a dict of parameter name : value for this object, but not include parameters of components.

Returns:
params : dict with str-valued keys

Dictionary of parameters, paramname : paramvalue key-value pairs include:

  • always: all parameters of this object, as via get_param_names; values are the parameter values for those keys, of this object, and are always identical to the values passed at construction

  • if deep=True, also contains key/value pairs of component parameters; parameters of components are indexed as [componentname]__[paramname], and all parameters of componentname appear as paramname with its value

  • if deep=True, also contains arbitrary levels of component recursion, e.g., [componentname]__[componentcomponentname]__[paramname], etc.

get_tag(tag_name, tag_value_default=None, raise_error=True)[source]#

Get tag value from instance, with tag level inheritance and overrides.

Every scikit-base compatible object has a dictionary of tags. Tags may be used to store metadata about the object, or to control behaviour of the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object.

The get_tag method retrieves the value of a single tag with name tag_name from the instance, taking into account tag overrides, in the following order of descending priority:

  1. Tags set via set_tags or clone_tags on the instance, at construction of the instance.

  2. Tags set in the _tags attribute of the class.

  3. Tags set in the _tags attribute of parent classes, in order of inheritance.

Parameters:
tag_name : str

Name of tag to be retrieved

tag_value_default : any type, optional, default=None

Default/fallback value if tag is not found

raise_error : bool

whether a ValueError is raised when the tag is not found

Returns:
tag_value : Any

Value of the tag_name tag in self. If not found, raises an error if raise_error is True, otherwise it returns tag_value_default.

Raises:
ValueError, if raise_error is True.

The ValueError is then raised if tag_name is not in self.get_tags().keys().
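
For example, the requires-y-train tag referenced in the __call__ and evaluate documentation above can be inspected directly; for this scaled metric it is expected to be True, and a missing tag can fall back to a default instead of raising.

>>> from sktime.performance_metrics.forecasting import MedianSquaredScaledError
>>> metric = MedianSquaredScaledError()
>>> metric.get_tag("requires-y-train")  # doctest: +SKIP
>>> metric.get_tag("some-undefined-tag", tag_value_default=False, raise_error=False)
False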

get_tags()[source]#

Get tags from instance, with tag level inheritance and overrides.

Every scikit-base compatible object has a dictionary of tags. Tags may be used to store metadata about the object, or to control behaviour of the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object.

The get_tags method returns a dictionary of tags, with keys being keys of any attribute of _tags set in the class or any of its parent classes, or tags set via set_tags or clone_tags.

Values are the corresponding tag values, with overrides in the following order of descending priority:

  1. Tags set via set_tags or clone_tags on the instance, at construction of the instance.

  2. Tags set in the _tags attribute of the class.

  3. Tags set in the _tags attribute of parent classes, in order of inheritance.

Returns:
collected_tags : dict

Dictionary of tag name : tag value pairs. Collected from _tags class attribute via nested inheritance and then any overrides and new tags from _tags_dynamic object attribute.

is_composite()[source]#

Check if the object is composed of other BaseObjects.

A composite object is an object which contains objects, as parameters. Called on an instance, since this may differ by instance.

Returns:
composite: bool

Whether an object has any parameters whose values are BaseObject descendant instances.

classmethod load_from_path(serial)[source]#

Load object from file location.

Parameters:
serial : result of ZipFile(path).open(“object”)
Returns:
deserialized self resulting in output at path, of cls.save(path)
classmethod load_from_serial(serial)[source]#

Load object from serialized memory container.

Parameters:
serial : 1st element of output of cls.save(None)
Returns:
deserialized self resulting in output serial, of cls.save(None)
reset()[source]#

Reset the object to a clean post-init state.

Results in setting self to the state it had directly after the constructor call, with the same hyper-parameters. Config values set by set_config are also retained.

A reset call deletes any object attributes, except:

  • hyper-parameters = arguments of __init__ written to self, e.g., self.paramname where paramname is an argument of __init__

  • object attributes containing double-underscores, i.e., the string “__”. For instance, an attribute named “__myattr” is retained.

  • config attributes, configs are retained without change. That is, results of get_config before and after reset are equal.

Class and object methods, and class attributes are also unaffected.

Equivalent to clone, with the exception that reset mutates self instead of returning a new object.

After a self.reset() call, self is equal in value and state to the object obtained after a constructor call type(self)(**self.get_params(deep=False)).

Returns:
self

Instance of class reset to a clean post-init state but retaining the current hyper-parameter values.

save(path=None, serialization_format='pickle')[source]#

Save serialized self to bytes-like object or to (.zip) file.

Behaviour: if path is None, returns an in-memory serialized self; if path is a file location, stores self at that location as a zip file.

Saved files are zip files with the following contents: _metadata - contains the class of self, i.e., type(self); _obj - serialized self. This class uses the default serialization (pickle).

Parameters:
path : None or file location (str or Path)

if None, self is saved to an in-memory object; if a file location, self is saved to that file location. For example:

if path=”estimator”, then a zip file estimator.zip will be made at cwd; if path=”/home/stored/estimator”, then a zip file estimator.zip will be stored in /home/stored/.

serialization_format: str, default = “pickle”

Module to use for serialization. The available options are “pickle” and “cloudpickle”. Note that non-default formats might require installation of other soft dependencies.

Returns:
if path is None - in-memory serialized self
if path is file location - ZipFile with reference to the file
set_config(**config_dict)[source]#

Set config flags to given values.

Parameters:
config_dict : dict

Dictionary of config name : config value pairs. Valid configs, values, and their meaning is listed below:

display : str, “diagram” (default), or “text”

how jupyter kernels display instances of self

  • “diagram” = html box diagram representation

  • “text” = string printout

print_changed_only : bool, default=True

whether printing of self lists only self-parameters that differ from defaults (True), or all parameter names and values (False). Does not nest, i.e., only affects self and not component estimators.

warnings : str, “on” (default), or “off”

whether to raise warnings, affects warnings from sktime only

  • “on” = will raise warnings from sktime

  • “off” = will not raise warnings from sktime

backend:parallel : str, optional, default=”None”

backend to use for parallelization when broadcasting/vectorizing, one of

  • “None”: executes loop sequentially, simple list comprehension

  • “loky”, “multiprocessing” and “threading”: uses joblib.Parallel

  • “joblib”: custom and 3rd party joblib backends, e.g., spark

  • “dask”: uses dask, requires dask package in environment

backend:parallel:params : dict, optional, default={} (no parameters passed)

additional parameters passed to the parallelization backend as config. Valid keys depend on the value of backend:parallel:

  • “None”: no additional parameters, backend_params is ignored

  • “loky”, “multiprocessing” and “threading”: default joblib backends; any valid keys for joblib.Parallel can be passed here, e.g., n_jobs, with the exception of backend, which is directly controlled by backend. If n_jobs is not passed, it will default to -1; other parameters will default to joblib defaults.

  • “joblib”: custom and 3rd party joblib backends, e.g., spark. Any valid keys for joblib.Parallel can be passed here, e.g., n_jobs; backend must be passed as a key of backend_params in this case. If n_jobs is not passed, it will default to -1; other parameters will default to joblib defaults.

  • “dask”: any valid keys for dask.compute can be passed, e.g., scheduler

Returns:
self : reference to self.

Notes

Changes object state, copies configs in config_dict to self._config_dynamic.
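
A hedged sketch of setting the parallelization configs listed above; since these keys contain colons, they are not valid Python identifiers and must be passed via dictionary unpacking. This assumes joblib is available, and the backend only matters when the metric is broadcast over panel or hierarchical data.

>>> from sktime.performance_metrics.forecasting import MedianSquaredScaledError
>>> metric = MedianSquaredScaledError()
>>> metric = metric.set_config(warnings="off")  # silence sktime warnings
>>> metric = metric.set_config(  # doctest: +SKIP
...     **{"backend:parallel": "loky", "backend:parallel:params": {"n_jobs": 2}}
... )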

set_params(**params)[source]#

Set the parameters of this object.

The method works on simple skbase objects as well as on composite objects. Parameter key strings <component>__<parameter> can be used for composites, i.e., objects that contain other objects, to access <parameter> in the component <component>. The string <parameter>, without <component>__, can also be used if this makes the reference unambiguous, e.g., there are no two parameters of components with the name <parameter>.

Parameters:
**params : dict

BaseObject parameters, keys must be <component>__<parameter> strings. __ suffixes can alias full strings, if unique among get_params keys.

Returns:
self : reference to self (after parameters have been set)
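
A short usage sketch combining set_params and get_params on this metric's own hyper-parameters (sp, square_root, multioutput, multilevel):

>>> from sktime.performance_metrics.forecasting import MedianSquaredScaledError
>>> metric = MedianSquaredScaledError()
>>> metric = metric.set_params(sp=12, square_root=True)  # e.g., monthly seasonality, RMdSSE
>>> metric.get_params()["sp"]
12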
set_random_state(random_state=None, deep=True, self_policy='copy')[source]#

Set random_state pseudo-random seed parameters for self.

Finds random_state named parameters via self.get_params, and sets them to integers derived from random_state via set_params. These integers are sampled from chain hashing via sample_dependent_seed, and guarantee pseudo-random independence of seeded random generators.

Applies to random_state parameters in self, depending on self_policy, and remaining component objects if and only if deep=True.

Note: calls set_params even if self does not have a random_state, or none of the components have a random_state parameter. Therefore, set_random_state will reset any scikit-base object, even those without a random_state parameter.

Parameters:
random_state : int, RandomState instance or None, default=None

Pseudo-random number generator to control the generation of the random integers. Pass int for reproducible output across multiple function calls.

deep : bool, default=True

Whether to set the random state in skbase object valued parameters, i.e., component estimators.

  • If False, will set only self’s random_state parameter, if exists.

  • If True, will set random_state parameters in component objects as well.

self_policy : str, one of {“copy”, “keep”, “new”}, default=”copy”
  • “copy” : self.random_state is set to input random_state

  • “keep” : self.random_state is kept as is

  • “new” : self.random_state is set to a new random state, derived from input random_state, and in general different from it

Returns:
self : reference to self
set_tags(**tag_dict)[source]#

Set instance level tag overrides to given values.

Every scikit-base compatible object has a dictionary of tags. Tags may be used to store metadata about the object, or to control behaviour of the object.

Tags are key-value pairs specific to an instance self, they are static flags that are not changed after construction of the object.

set_tags sets dynamic tag overrides to the values as specified in tag_dict, with keys being the tag name, and dict values being the value to set the tag to.

The set_tags method should be called only in the __init__ method of an object, during construction, or directly after construction via __init__.

Current tag values can be inspected by get_tags or get_tag.

Parameters:
**tag_dict : dict

Dictionary of tag name: tag value pairs.

Returns:
Self

Reference to self.