MedianAbsolutePercentageError#
- class MedianAbsolutePercentageError(multioutput='uniform_average', multilevel='uniform_average', symmetric=False)[source]#
Median absolute percentage error (MdAPE) or symmetric version.
For a univariate, non-hierarchical sample of true values \(y_1, \dots, y_n\) and predicted values \(\widehat{y}_1, \dots, \widehat{y}_n\), at time indices \(t_1, \dots, t_n\), evaluate or call returns the median absolute percentage error, \(median\left(\left|\frac{y_i - \widehat{y}_i}{y_i}\right|\right)\) (the time indices are not used).
If symmetric is True, the symmetric median absolute percentage error (sMdAPE) is calculated instead, defined as \(median\left(\frac{2|y_i-\widehat{y}_i|}{|y_i|+|\widehat{y}_i|}\right)\).
Both MdAPE and sMdAPE output a non-negative floating point value, in fractional units rather than percentage. The best value is 0.0.
MdAPE and sMdAPE measure percentage error relative to the test data. Because they take the absolute value rather than the square of the percentage forecast error, they penalize large errors less than MSPE, RMSPE, MdSPE or RMdSPE.
Taking the median instead of the mean of the absolute percentage errors also makes this metric more robust to error outliers, since the median is a more robust measure of central tendency in the presence of outliers.
MdAPE has no limit on how large the error can be, particularly when y_true values are close to zero. In such cases the function returns a large value instead of inf, while sMdAPE is bounded at 2.
multioutput and multilevel control averaging across variables and hierarchy indices, see below.
- Parameters:
- symmetric : bool, default=False
Whether to calculate the symmetric version of the percentage metric.
- multioutput : str or 1D array-like (n_outputs,), default='uniform_average'
If str, must be one of {'raw_values', 'uniform_average'}. Defines how to aggregate metric for multivariate (multioutput) data. If array-like, values used as weights to average the errors. If 'raw_values', returns a full set of errors in case of multioutput input. If 'uniform_average', errors of all outputs are averaged with uniform weight.
- multilevel : {'raw_values', 'uniform_average', 'uniform_average_time'}
Defines how to aggregate metric for hierarchical data (with levels). If 'uniform_average' (default), errors are mean-averaged across levels. If 'uniform_average_time', metric is applied to all data, ignoring level index. If 'raw_values', does not average errors across levels, hierarchy is retained.
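A minimal numpy sketch of the two definitions above (not the sktime implementation) reproduces the univariate values from the Examples section:

```python
import numpy as np

y_true = np.array([3, -0.5, 2, 7, 2])
y_pred = np.array([2.5, 0.0, 2, 8, 1.25])

# MdAPE: median of |(y_i - yhat_i) / y_i|
mdape = np.median(np.abs((y_true - y_pred) / y_true))

# sMdAPE: median of 2|y_i - yhat_i| / (|y_i| + |yhat_i|)
smdape = np.median(2 * np.abs(y_true - y_pred) / (np.abs(y_true) + np.abs(y_pred)))

print(mdape)   # 0.16666666666666666
print(smdape)  # 0.18181818181818182
```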
References
Hyndman, R. J and Koehler, A. B. (2006). “Another look at measures of forecast accuracy”, International Journal of Forecasting, Volume 22, Issue 4.
Examples
>>> import numpy as np
>>> from sktime.performance_metrics.forecasting import MedianAbsolutePercentageError
>>> y_true = np.array([3, -0.5, 2, 7, 2])
>>> y_pred = np.array([2.5, 0.0, 2, 8, 1.25])
>>> mdape = MedianAbsolutePercentageError(symmetric=False)
>>> mdape(y_true, y_pred)
0.16666666666666666
>>> smdape = MedianAbsolutePercentageError(symmetric=True)
>>> smdape(y_true, y_pred)
0.18181818181818182
>>> y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
>>> y_pred = np.array([[0, 2], [-1, 2], [8, -5]])
>>> mdape(y_true, y_pred)
0.5714285714285714
>>> smdape(y_true, y_pred)
0.39999999999999997
>>> mdape = MedianAbsolutePercentageError(multioutput='raw_values', symmetric=False)
>>> mdape(y_true, y_pred)
array([0.14285714, 1.        ])
>>> smdape = MedianAbsolutePercentageError(multioutput='raw_values', symmetric=True)
>>> smdape(y_true, y_pred)
array([0.13333333, 0.66666667])
>>> mdape = MedianAbsolutePercentageError(multioutput=[0.3, 0.7], symmetric=False)
>>> mdape(y_true, y_pred)
0.7428571428571428
>>> smdape = MedianAbsolutePercentageError(multioutput=[0.3, 0.7], symmetric=True)
>>> smdape(y_true, y_pred)
0.5066666666666666
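In the multioutput=[0.3, 0.7] calls above, the result is the dot product of the weights with the 'raw_values' output; a small sketch of that arithmetic, consistent with the numbers shown:

```python
import numpy as np

# per-variable values from the example above (multioutput='raw_values')
raw_mdape = np.array([0.14285714285714285, 1.0])
raw_smdape = np.array([0.13333333333333333, 0.6666666666666666])

weights = np.array([0.3, 0.7])

# weighted average = dot product of weights with the per-variable errors
print(weights @ raw_mdape)   # ~0.742857, matching the weighted example
print(weights @ raw_smdape)  # ~0.506667, matching the weighted example
```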
Methods
__call__(y_true, y_pred, **kwargs)
Calculate metric value using underlying metric function.
clone()
Obtain a clone of the object with same hyper-parameters.
clone_tags(estimator[, tag_names])
Clone tags from another estimator as dynamic override.
create_test_instance([parameter_set])
Construct Estimator instance if possible.
create_test_instances_and_names([parameter_set])
Create list of all test instances and a list of names for them.
evaluate(y_true, y_pred, **kwargs)
Evaluate the desired metric on given inputs.
evaluate_by_index(y_true, y_pred, **kwargs)
Return the metric evaluated at each time point.
func(y_true, y_pred[, horizon_weight, multioutput, ...])
Median absolute percentage error (MdAPE) or symmetric version.
get_class_tag(tag_name[, tag_value_default])
Get a class tag's value.
get_class_tags()
Get class tags from the class and all its parent classes.
get_config()
Get config flags for self.
get_param_defaults()
Get object's parameter defaults.
get_param_names([sort])
Get object's parameter names.
get_params([deep])
Get a dict of parameters values for this object.
get_tag(tag_name[, tag_value_default, ...])
Get tag value from estimator class and dynamic tag overrides.
get_tags()
Get tags from estimator class and dynamic tag overrides.
get_test_params([parameter_set])
Return testing parameter settings for the estimator.
is_composite()
Check if the object is composed of other BaseObjects.
load_from_path(serial)
Load object from file location.
load_from_serial(serial)
Load object from serialized memory container.
reset()
Reset the object to a clean post-init state.
save([path, serialization_format])
Save serialized self to bytes-like object or to (.zip) file.
set_config(**config_dict)
Set config flags to given values.
set_params(**params)
Set the parameters of this object.
set_random_state([random_state, deep, ...])
Set random_state pseudo-random seed parameters for self.
set_tags(**tag_dict)
Set dynamic tags to given values.
- func(y_true, y_pred, horizon_weight=None, multioutput='uniform_average', symmetric=False, **kwargs)[source]#
Median absolute percentage error (MdAPE) or symmetric version.
If symmetric is False, calculates MdAPE; if symmetric is True, calculates the symmetric median absolute percentage error (sMdAPE). Both MdAPE and sMdAPE output non-negative floating point values. The best value is 0.0.
MdAPE and sMdAPE measure percentage error relative to the test data. Because they take the absolute value rather than the square of the percentage forecast error, they penalize large errors less than MSPE, RMSPE, MdSPE or RMdSPE.
Taking the median instead of the mean of the absolute percentage errors also makes this metric more robust to error outliers, since the median is a more robust measure of central tendency in the presence of outliers.
There is no limit on how large the error can be, particularly when y_true values are close to zero. In such cases the function returns a large value instead of inf.
- Parameters:
- y_true : pd.Series, pd.DataFrame or np.array of shape (fh,) or (fh, n_outputs), where fh is the forecasting horizon
Ground truth (correct) target values.
- y_pred : pd.Series, pd.DataFrame or np.array of shape (fh,) or (fh, n_outputs), where fh is the forecasting horizon
Forecasted values.
- horizon_weight : array-like of shape (fh,), default=None
Forecast horizon weights.
- multioutput : {'raw_values', 'uniform_average'} or array-like of shape (n_outputs,), default='uniform_average'
Defines how to aggregate metric for multivariate (multioutput) data. If array-like, values used as weights to average the errors. If 'raw_values', returns a full set of errors in case of multioutput input. If 'uniform_average', errors of all outputs are averaged with uniform weight.
- symmetric : bool, default=False
Calculates symmetric version of metric if True.
- Returns:
- loss : float
MdAPE or sMdAPE loss. If multioutput is 'raw_values', then MdAPE or sMdAPE is returned for each output separately. If multioutput is 'uniform_average' or an ndarray of weights, then the weighted average MdAPE or sMdAPE of all output errors is returned.
References
Hyndman, R. J and Koehler, A. B. (2006). “Another look at measures of forecast accuracy”, International Journal of Forecasting, Volume 22, Issue 4.
Examples
>>> import numpy as np
>>> from sktime.performance_metrics.forecasting import median_absolute_percentage_error
>>> y_true = np.array([3, -0.5, 2, 7, 2])
>>> y_pred = np.array([2.5, 0.0, 2, 8, 1.25])
>>> median_absolute_percentage_error(y_true, y_pred, symmetric=False)
0.16666666666666666
>>> median_absolute_percentage_error(y_true, y_pred, symmetric=True)
0.18181818181818182
>>> y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
>>> y_pred = np.array([[0, 2], [-1, 2], [8, -5]])
>>> median_absolute_percentage_error(y_true, y_pred, symmetric=False)
0.5714285714285714
>>> median_absolute_percentage_error(y_true, y_pred, symmetric=True)
0.39999999999999997
>>> median_absolute_percentage_error(y_true, y_pred, multioutput='raw_values', symmetric=False)
array([0.14285714, 1.        ])
>>> median_absolute_percentage_error(y_true, y_pred, multioutput='raw_values', symmetric=True)
array([0.13333333, 0.66666667])
>>> median_absolute_percentage_error(y_true, y_pred, multioutput=[0.3, 0.7], symmetric=False)
0.7428571428571428
>>> median_absolute_percentage_error(y_true, y_pred, multioutput=[0.3, 0.7], symmetric=True)
0.5066666666666666
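The near-zero caveat can be illustrated with plain numpy. This sketch shows the raw percentage errors only, not sktime's exact capping behaviour (the specific large value the library substitutes for inf is an implementation detail not shown here):

```python
import numpy as np

y_true = np.array([100.0, 0.001, 50.0])
y_pred = np.array([101.0, 1.0, 49.0])

# raw absolute percentage errors: a near-zero true value inflates its error
ape = np.abs((y_true - y_pred) / y_true)
print(ape)             # middle entry is ~999 due to the tiny denominator

# the median, however, stays robust to that single exploding term
print(np.median(ape))  # 0.02
```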
- classmethod get_test_params(parameter_set='default')[source]#
Return testing parameter settings for the estimator.
- Parameters:
- parameter_set : str, default="default"
Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return the "default" set.
- Returns:
- params : dict or list of dict, default = {}
Parameters to create testing instances of the class. Each dict contains parameters to construct an "interesting" test instance, i.e., MyClass(**params) or MyClass(**params[i]) creates a valid test instance. create_test_instance uses the first (or only) dictionary in params.
- __call__(y_true, y_pred, **kwargs)[source]#
Calculate metric value using underlying metric function.
- Parameters:
- y_true : time series in sktime compatible data container format.
Ground truth (correct) target values.
Individual data formats in sktime are so-called mtype specifications, each mtype implements an abstract scitype.
- Series scitype = individual time series, vanilla forecasting. pd.DataFrame, pd.Series, or np.ndarray (1D or 2D)
- Panel scitype = collection of time series, global/panel forecasting. pd.DataFrame with 2-level row MultiIndex (instance, time), 3D np.ndarray (instance, variable, time), list of Series typed pd.DataFrame
- Hierarchical scitype = hierarchical collection, for hierarchical forecasting. pd.DataFrame with 3 or more level row MultiIndex (hierarchy_1, ..., hierarchy_n, time)
For further details on data format, see glossary on mtype. For usage, see forecasting tutorial examples/01_forecasting.ipynb.
- y_pred : time series in sktime compatible data container format
Predicted values to evaluate against ground truth. Must be of same format as y_true, same indices and columns if indexed.
- y_pred_benchmark : optional, time series in sktime compatible data container format
Benchmark predictions to compare y_pred to, used for relative metrics. Required only if the metric requires benchmark predictions, as indicated by the tag requires-y-pred-benchmark. Otherwise, can be passed to ensure interface consistency, but is ignored. Must be of same format as y_true, same indices and columns if indexed.
- y_train : optional, time series in sktime compatible data container format
Training data used to normalize the error metric. Required only if the metric requires training data, as indicated by the tag requires-y-train. Otherwise, can be passed to ensure interface consistency, but is ignored. Must be of same format as y_true, same columns if indexed, but not necessarily same indices.
- sample_weight : optional, 1D array-like, default=None
Sample weights for each time point.
If None, the time indices are considered equally weighted.
If an array, must be 1D. If y_true and y_pred are a single time series, sample_weight must be of the same length as y_true. If the time series are panel or hierarchical, the length of all individual time series must be the same, and equal to the length of sample_weight, for all instances of time series passed.
- Returns:
- loss : float, np.ndarray, or pd.DataFrame
Calculated metric, averaged or by variable. Weighted by sample_weight if provided.
- float if multioutput="uniform_average" or array-like, and multilevel="uniform_average" or "uniform_average_time". Value is the metric averaged over variables and levels (see class docstring).
- np.ndarray of shape (y_true.columns,) if multioutput="raw_values" and multilevel="uniform_average" or "uniform_average_time". The i-th entry is the metric calculated for the i-th variable.
- pd.DataFrame if multilevel="raw_values"; of shape (n_levels,) if multioutput="uniform_average", of shape (n_levels, y_true.columns) if multioutput="raw_values". The metric is applied per level, row averaging (yes/no) as in multioutput.
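The Panel and Hierarchical container formats described above can be built as ordinary pandas MultiIndex frames; a minimal sketch (level and column names here are illustrative choices, not requirements):

```python
import pandas as pd

# Panel mtype: 2-level row MultiIndex (instance, time)
panel = pd.DataFrame(
    {"y": [1.0, 2.0, 3.0, 4.0]},
    index=pd.MultiIndex.from_product(
        [["series_a", "series_b"], pd.period_range("2000-01", periods=2, freq="M")],
        names=["instance", "time"],
    ),
)

# Hierarchical mtype: 3 or more index levels (hierarchy_1, ..., time)
hier = pd.DataFrame(
    {"y": range(8)},
    index=pd.MultiIndex.from_product(
        [["north", "south"], ["store_1", "store_2"], pd.RangeIndex(2)],
        names=["region", "store", "time"],
    ),
)

print(panel.index.nlevels)  # 2
print(hier.index.nlevels)   # 3
```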
- clone()[source]#
Obtain a clone of the object with same hyper-parameters.
A clone is a different object without shared references, in post-init state. This function is equivalent to returning sklearn.clone of self.
- Raises:
- RuntimeError if the clone is non-conforming, due to faulty __init__.
- RuntimeError if the clone is non-conforming, due to faulty
Notes
If successful, equal in value to type(self)(**self.get_params(deep=False)).
- clone_tags(estimator, tag_names=None)[source]#
Clone tags from another estimator as dynamic override.
- Parameters:
- estimator : estimator inheriting from BaseEstimator
- tag_namesstr or list of str, default = None
Names of tags to clone. If None then all tags in estimator are used as tag_names.
- Returns:
- Self
Reference to self.
Notes
Changes object state by setting tag values in tag_set from estimator as dynamic tags in self.
- classmethod create_test_instance(parameter_set='default')[source]#
Construct Estimator instance if possible.
- Parameters:
- parameter_setstr, default=”default”
Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.
- Returns:
- instance : instance of the class with default parameters
Notes
get_test_params can return dict or list of dict. This function takes first or single dict that get_test_params returns, and constructs the object with that.
- classmethod create_test_instances_and_names(parameter_set='default')[source]#
Create list of all test instances and a list of names for them.
- Parameters:
- parameter_setstr, default=”default”
Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.
- Returns:
- objs : list of instances of cls
The i-th instance is cls(**cls.get_test_params()[i]).
- names : list of str, same length as objs
The i-th element is the name of the i-th instance of obj in tests. Convention is {cls.__name__}-{i} if more than one instance, otherwise {cls.__name__}.
- evaluate(y_true, y_pred, **kwargs)[source]#
Evaluate the desired metric on given inputs.
- Parameters:
- y_true : time series in sktime compatible data container format.
Ground truth (correct) target values.
Individual data formats in sktime are so-called mtype specifications, each mtype implements an abstract scitype.
- Series scitype = individual time series, vanilla forecasting. pd.DataFrame, pd.Series, or np.ndarray (1D or 2D)
- Panel scitype = collection of time series, global/panel forecasting. pd.DataFrame with 2-level row MultiIndex (instance, time), 3D np.ndarray (instance, variable, time), list of Series typed pd.DataFrame
- Hierarchical scitype = hierarchical collection, for hierarchical forecasting. pd.DataFrame with 3 or more level row MultiIndex (hierarchy_1, ..., hierarchy_n, time)
For further details on data format, see glossary on mtype. For usage, see forecasting tutorial examples/01_forecasting.ipynb.
- y_pred : time series in sktime compatible data container format
Predicted values to evaluate against ground truth. Must be of same format as y_true, same indices and columns if indexed.
- y_pred_benchmark : optional, time series in sktime compatible data container format
Benchmark predictions to compare y_pred to, used for relative metrics. Required only if the metric requires benchmark predictions, as indicated by the tag requires-y-pred-benchmark. Otherwise, can be passed to ensure interface consistency, but is ignored. Must be of same format as y_true, same indices and columns if indexed.
- y_train : optional, time series in sktime compatible data container format
Training data used to normalize the error metric. Required only if the metric requires training data, as indicated by the tag requires-y-train. Otherwise, can be passed to ensure interface consistency, but is ignored. Must be of same format as y_true, same columns if indexed, but not necessarily same indices.
- sample_weight : optional, 1D array-like, default=None
Sample weights for each time point.
If None, the time indices are considered equally weighted.
If an array, must be 1D. If y_true and y_pred are a single time series, sample_weight must be of the same length as y_true. If the time series are panel or hierarchical, the length of all individual time series must be the same, and equal to the length of sample_weight, for all instances of time series passed.
- Returns:
- loss : float, np.ndarray, or pd.DataFrame
Calculated metric, averaged or by variable. Weighted by sample_weight if provided.
- float if multioutput="uniform_average" or array-like, and multilevel="uniform_average" or "uniform_average_time". Value is the metric averaged over variables and levels (see class docstring).
- np.ndarray of shape (y_true.columns,) if multioutput="raw_values" and multilevel="uniform_average" or "uniform_average_time". The i-th entry is the metric calculated for the i-th variable.
- pd.DataFrame if multilevel="raw_values"; of shape (n_levels,) if multioutput="uniform_average", of shape (n_levels, y_true.columns) if multioutput="raw_values". The metric is applied per level, row averaging (yes/no) as in multioutput.
- evaluate_by_index(y_true, y_pred, **kwargs)[source]#
Return the metric evaluated at each time point.
- Parameters:
- y_true : time series in sktime compatible data container format.
Ground truth (correct) target values.
Individual data formats in sktime are so-called mtype specifications, each mtype implements an abstract scitype.
- Series scitype = individual time series, vanilla forecasting. pd.DataFrame, pd.Series, or np.ndarray (1D or 2D)
- Panel scitype = collection of time series, global/panel forecasting. pd.DataFrame with 2-level row MultiIndex (instance, time), 3D np.ndarray (instance, variable, time), list of Series typed pd.DataFrame
- Hierarchical scitype = hierarchical collection, for hierarchical forecasting. pd.DataFrame with 3 or more level row MultiIndex (hierarchy_1, ..., hierarchy_n, time)
For further details on data format, see glossary on mtype. For usage, see forecasting tutorial examples/01_forecasting.ipynb.
- y_pred : time series in sktime compatible data container format
Predicted values to evaluate against ground truth. Must be of same format as y_true, same indices and columns if indexed.
- y_pred_benchmark : optional, time series in sktime compatible data container format
Benchmark predictions to compare y_pred to, used for relative metrics. Required only if the metric requires benchmark predictions, as indicated by the tag requires-y-pred-benchmark. Otherwise, can be passed to ensure interface consistency, but is ignored. Must be of same format as y_true, same indices and columns if indexed.
- y_train : optional, time series in sktime compatible data container format
Training data used to normalize the error metric. Required only if the metric requires training data, as indicated by the tag requires-y-train. Otherwise, can be passed to ensure interface consistency, but is ignored. Must be of same format as y_true, same columns if indexed, but not necessarily same indices.
- sample_weight : optional, 1D array-like, default=None
Sample weights for each time point.
If None, the time indices are considered equally weighted.
If an array, must be 1D. If y_true and y_pred are a single time series, sample_weight must be of the same length as y_true. If the time series are panel or hierarchical, the length of all individual time series must be the same, and equal to the length of sample_weight, for all instances of time series passed.
- Returns:
- loss : pd.Series or pd.DataFrame
Calculated metric, by time point (default=jackknife pseudo-values). Weighted by sample_weight if provided.
- pd.Series if multioutput="uniform_average" or array-like. Index is equal to the index of y_true; the entry at index i is the metric at time i, averaged over variables.
- pd.DataFrame if multioutput="raw_values". Index and columns equal to those of y_true; the (i, j)-th entry is the metric at time i, for variable j.
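A jackknife pseudo-value at index i is n * metric(all points) - (n - 1) * metric(all points except i). For a mean-based metric such as MAPE, the pseudo-values reduce exactly to the pointwise errors, which the sketch below verifies; sktime's exact per-index convention for median-based metrics may differ:

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0, 2.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0, 1.25])
n = len(y_true)

ape = np.abs((y_true - y_pred) / y_true)
mape = ape.mean()

# jackknife pseudo-value at i: n * theta_hat - (n - 1) * theta_hat(-i)
pseudo = np.array(
    [n * mape - (n - 1) * np.delete(ape, i).mean() for i in range(n)]
)

# for a mean-based metric, the pseudo-values equal the pointwise errors,
# so their mean recovers the overall metric value
print(np.allclose(pseudo, ape))         # True
print(np.isclose(pseudo.mean(), mape))  # True
```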
- classmethod get_class_tag(tag_name, tag_value_default=None)[source]#
Get a class tag’s value.
Does not return information from dynamic tags (set via set_tags or clone_tags) that are defined on instances.
- Parameters:
- tag_name : str
Name of tag value.
- tag_value_default : any
Default/fallback value if tag is not found.
- Returns:
- tag_value
Value of the tag_name tag in self. If not found, returns tag_value_default.
- classmethod get_class_tags()[source]#
Get class tags from the class and all its parent classes.
Retrieves tag: value pairs from _tags class attribute. Does not return information from dynamic tags (set via set_tags or clone_tags) that are defined on instances.
- Returns:
- collected_tags : dict
Dictionary of class tag name: tag value pairs. Collected from _tags class attribute via nested inheritance.
- get_config()[source]#
Get config flags for self.
- Returns:
- config_dict : dict
Dictionary of config name : config value pairs. Collected from _config class attribute via nested inheritance, and then any overrides and new tags from the _config_dynamic object attribute.
- classmethod get_param_defaults()[source]#
Get object’s parameter defaults.
- Returns:
- default_dict : dict[str, Any]
Keys are all parameters of cls that have a default defined in __init__; values are the defaults, as defined in __init__.
- classmethod get_param_names(sort=True)[source]#
Get object’s parameter names.
- Parameters:
- sort : bool, default=True
Whether to return the parameter names sorted in alphabetical order (True), or in the order they appear in the class __init__ (False).
- Returns:
- param_names : list[str]
List of parameter names of cls. If sort=False, in same order as they appear in the class __init__. If sort=True, alphabetically ordered.
- get_params(deep=True)[source]#
Get a dict of parameters values for this object.
- Parameters:
- deepbool, default=True
Whether to return parameters of components.
If True, will return a dict of parameter name : value for this object, including parameters of components (= BaseObject-valued parameters).
If False, will return a dict of parameter name : value for this object, but not include parameters of components.
- Returns:
- params : dict with str-valued keys
Dictionary of parameters, paramname : paramvalue key-value pairs. Includes:
- always: all parameters of this object, as via get_param_names; values are the parameter values for those keys, always identical to the values passed at construction
- if deep=True, also contains key/value pairs of component parameters; parameters of components are indexed as [componentname]__[paramname], and all parameters of componentname appear as paramname with their values
- if deep=True, also contains arbitrary levels of component recursion, e.g., [componentname]__[componentcomponentname]__[paramname], etc.
- get_tag(tag_name, tag_value_default=None, raise_error=True)[source]#
Get tag value from estimator class and dynamic tag overrides.
- Parameters:
- tag_name : str
Name of tag to be retrieved.
- tag_value_default : any type, optional, default=None
Default/fallback value if tag is not found.
- raise_error : bool
Whether a ValueError is raised when the tag is not found.
- Returns:
- tag_value : Any
Value of the tag_name tag in self. If not found, raises an error if raise_error is True, otherwise returns tag_value_default.
- Raises:
- ValueError, if raise_error is True, i.e., if tag_name is not in self.get_tags().keys()
- get_tags()[source]#
Get tags from estimator class and dynamic tag overrides.
- Returns:
- collected_tags : dict
Dictionary of tag name : tag value pairs. Collected from _tags class attribute via nested inheritance and then any overrides and new tags from _tags_dynamic object attribute.
- is_composite()[source]#
Check if the object is composed of other BaseObjects.
A composite object is an object which contains objects as parameters. Called on an instance, since this may differ by instance.
- Returns:
- composite: bool
Whether an object has any parameters whose values are BaseObjects.
- classmethod load_from_path(serial)[source]#
Load object from file location.
- Parameters:
- serial : result of ZipFile(path).open("object")
- Returns:
- deserialized self, resulting in output at path, of cls.save(path)
- classmethod load_from_serial(serial)[source]#
Load object from serialized memory container.
- Parameters:
- serial : 1st element of output of cls.save(None)
- Returns:
- deserialized self, resulting in output serial, of cls.save(None)
- reset()[source]#
Reset the object to a clean post-init state.
Using reset, runs __init__ with current values of hyper-parameters (result of get_params). This removes any object attributes, except:
hyper-parameters = arguments of __init__
object attributes containing double-underscores, i.e., the string “__”
Class and object methods, and class attributes are also unaffected.
- Returns:
- self
Instance of class reset to a clean post-init state but retaining the current hyper-parameter values.
Notes
Equivalent to sklearn.clone, but overwrites self. After a self.reset() call, self is equal in value to type(self)(**self.get_params(deep=False)).
- save(path=None, serialization_format='pickle')[source]#
Save serialized self to bytes-like object or to (.zip) file.
Behaviour: if path is None, returns an in-memory serialized self; if path is a file location, stores self at that location as a zip file.
Saved files are zip files with the following contents: _metadata - contains the class of self, i.e., type(self); _obj - serialized self. This class uses the default serialization (pickle).
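The zip layout just described (_metadata holding the class, _obj holding the pickled object) can be mirrored with the standard library; this is a sketch of the idea, not sktime's actual save/load code:

```python
import io
import pickle
import zipfile

obj = {"symmetric": True}  # stand-in for a serializable estimator

# write: one zip archive with _metadata (the type) and _obj (the pickled object)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("_metadata", pickle.dumps(type(obj)))
    zf.writestr("_obj", pickle.dumps(obj))

# read back, mirroring what a load-from-path routine would do
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    restored = pickle.loads(zf.read("_obj"))

print(restored)  # {'symmetric': True}
```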
- Parameters:
- path : None or file location (str or Path)
If None, self is saved to an in-memory object; if a file location, self is saved to that file location. For example: path="estimator" makes a zip file estimator.zip at cwd; path="/home/stored/estimator" stores a zip file estimator.zip in /home/stored/.
.- serialization_format: str, default = “pickle”
Module to use for serialization. The available options are “pickle” and “cloudpickle”. Note that non-default formats might require installation of other soft dependencies.
- Returns:
- if path is None: in-memory serialized self
- if path is a file location: ZipFile with reference to the file
- set_config(**config_dict)[source]#
Set config flags to given values.
- Parameters:
- config_dict : dict
Dictionary of config name : config value pairs. Valid configs, values, and their meanings are listed below:
- display : str, "diagram" (default), or "text"
How jupyter kernels display instances of self: "diagram" = html box diagram representation; "text" = string printout.
- print_changed_only : bool, default=True
Whether printing of self lists only self-parameters that differ from defaults (True), or all parameter names and values (False). Does not nest, i.e., only affects self and not component estimators.
- warnings : str, "on" (default), or "off"
Whether to raise warnings, affects warnings from sktime only: "on" = will raise warnings from sktime; "off" = will not raise warnings from sktime.
- backend:parallel : str, optional, default="None"
Backend to use for parallelization when broadcasting/vectorizing, one of:
- "None": executes loop sequentially, simple list comprehension
- "loky", "multiprocessing" and "threading": uses joblib.Parallel
- "joblib": custom and 3rd party joblib backends, e.g., spark
- "dask": uses dask, requires dask package in environment
- backend:parallel:params : dict, optional, default={} (no parameters passed)
Additional parameters passed to the parallelization backend as config. Valid keys depend on the value of backend:parallel:
- "None": no additional parameters, backend_params is ignored
- "loky", "multiprocessing" and "threading": default joblib backends; any valid keys for joblib.Parallel can be passed here, e.g., n_jobs, with the exception of backend, which is directly controlled by backend:parallel. If n_jobs is not passed, it will default to -1; other parameters will default to joblib defaults.
- "joblib": custom and 3rd party joblib backends, e.g., spark. Any valid keys for joblib.Parallel can be passed here, e.g., n_jobs; backend must be passed as a key of backend_params in this case. If n_jobs is not passed, it will default to -1; other parameters will default to joblib defaults.
- "dask": any valid keys for dask.compute can be passed, e.g., scheduler
- Returns:
- self : reference to self.
Notes
Changes object state, copies configs in config_dict to self._config_dynamic.
- set_params(**params)[source]#
Set the parameters of this object.
The method works on simple estimators as well as on composite objects. Parameter key strings <component>__<parameter> can be used for composites, i.e., objects that contain other objects, to access <parameter> in the component <component>. The string <parameter>, without <component>__, can also be used if this makes the reference unambiguous, e.g., if there are no two parameters of components with the name <parameter>.
- Parameters:
- **params : dict
BaseObject parameters, keys must be <component>__<parameter> strings. __ suffixes can alias full strings, if unique among get_params keys.
- Returns:
- self : reference to self (after parameters have been set)
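The <component>__<parameter> routing can be sketched by splitting keys on the first "__"; this is a simplified conceptual version, not scikit-base's actual implementation (the component and parameter names below are illustrative):

```python
def route_params(params):
    """Split flat '__'-keyed params into own params and per-component dicts."""
    own, nested = {}, {}
    for key, value in params.items():
        if "__" in key:
            # everything before the first '__' names the component
            component, _, sub_key = key.partition("__")
            nested.setdefault(component, {})[sub_key] = value
        else:
            own[key] = value
    return own, nested


own, nested = route_params(
    {"symmetric": True, "forecaster__sp": 12, "forecaster__strategy": "last"}
)
print(own)     # {'symmetric': True}
print(nested)  # {'forecaster': {'sp': 12, 'strategy': 'last'}}
```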
- set_random_state(random_state=None, deep=True, self_policy='copy')[source]#
Set random_state pseudo-random seed parameters for self.
Finds random_state named parameters via estimator.get_params, and sets them to integers derived from random_state via set_params. These integers are sampled from chain hashing via sample_dependent_seed, and guarantee pseudo-random independence of seeded random generators.
Applies to random_state parameters in estimator depending on self_policy, and to remaining component estimators if and only if deep=True.
Note: calls set_params even if self does not have a random_state, or none of the components have a random_state parameter. Therefore, set_random_state will reset any scikit-base estimator, even those without a random_state parameter.
parameter.- Parameters:
- random_state : int, RandomState instance or None, default=None
Pseudo-random number generator to control the generation of the random integers. Pass int for reproducible output across multiple function calls.
- deep : bool, default=True
Whether to set the random state in sub-estimators. If False, will set only self's random_state parameter, if it exists. If True, will set random_state parameters in sub-estimators as well.
parameters in sub-estimators as well.- self_policystr, one of {“copy”, “keep”, “new”}, default=”copy”
“copy” :
estimator.random_state
is set to inputrandom_state
“keep” :
estimator.random_state
is kept as is“new” :
estimator.random_state
is set to a new random state,
derived from input
random_state
, and in general different from it
- Returns:
- self : reference to self
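Chain-hashed seed derivation of the kind described above is also what numpy's SeedSequence.spawn provides: child seeds derived deterministically from a parent, pseudo-independent of each other. A sketch of the concept (sktime's sample_dependent_seed may differ in detail):

```python
import numpy as np

parent = np.random.SeedSequence(42)

# derive one integer seed per component; children are pseudo-independent
children = parent.spawn(3)
child_seeds = [int(c.generate_state(1)[0]) for c in children]

print(child_seeds)                 # deterministic given the parent seed
print(len(set(child_seeds)) == 3)  # distinct with overwhelming probability
```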