check_estimator

check_estimator(estimator, raise_exceptions=False, tests_to_run=None, fixtures_to_run=None, verbose=True, tests_to_exclude=None, fixtures_to_exclude=None)

Run all tests on a single estimator.

Tests that are run on the estimator:

  • all tests in test_all_estimators

  • all interface compatibility tests from the module of the estimator's scitype, for example, test_all_forecasters if the estimator is a forecaster

Parameters:
estimator : estimator class or estimator instance
raise_exceptions : bool, optional, default=False

whether to return exceptions/failures in the results dict, or raise them

  • if False: returns exceptions in the returned results dict

  • if True: raises exceptions as they occur

tests_to_run : str or list of str, optional, default = run all tests

Names (test/function name strings) of tests to run. Subsets the tests that are run to the tests given here.

fixtures_to_run : str or list of str, optional, default = run all tests

pytest test-fixture combination codes specifying which test-fixture combinations to run. Subsets the tests and fixtures run to the list given here. If both tests_to_run and fixtures_to_run are provided, runs the union, i.e., all test-fixture combinations for tests in tests_to_run, plus all test-fixture combinations in fixtures_to_run.

verbose : bool, optional, default=True

whether to print an informative summary of the tests run.

tests_to_exclude : str or list of str, names of tests to exclude, optional, default=None

Removes tests that should not be run, applied after subsetting via tests_to_run.

fixtures_to_exclude : str or list of str, fixtures to exclude, optional, default=None

Removes test-fixture combinations that should not be run, applied after subsetting via fixtures_to_run.
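How the four filter parameters interact can be sketched in plain Python. This is an illustrative sketch only, with hypothetical fixture codes; the real selection happens inside sktime's pytest collection. It shows the documented behavior: the union of tests_to_run and fixtures_to_run is taken first, then the exclusions are removed.

```python
def select_fixtures(all_fixtures, tests_to_run=None, fixtures_to_run=None,
                    tests_to_exclude=None, fixtures_to_exclude=None):
    """Illustrative sketch of check_estimator's filter logic (not sktime code).

    Selects the union of tests_to_run and fixtures_to_run,
    then removes tests_to_exclude and fixtures_to_exclude.
    """
    def listify(x):
        # str arguments are treated as single-element lists, None as empty
        return [x] if isinstance(x, str) else (x or [])

    tests_to_run = listify(tests_to_run)
    fixtures_to_run = listify(fixtures_to_run)
    tests_to_exclude = listify(tests_to_exclude)
    fixtures_to_exclude = listify(fixtures_to_exclude)

    if not tests_to_run and not fixtures_to_run:
        # default: run all tests
        selected = list(all_fixtures)
    else:
        # union: fixtures of tests in tests_to_run, plus fixtures_to_run
        selected = [
            f for f in all_fixtures
            if f.split("[")[0] in tests_to_run or f in fixtures_to_run
        ]

    # exclusions are applied after subsetting
    return [
        f for f in selected
        if f.split("[")[0] not in tests_to_exclude
        and f not in fixtures_to_exclude
    ]


# hypothetical fixture codes in pytest's test[fixture] format
fixtures = ["test_clone[T-0]", "test_clone[T-1]", "test_fit[T-0]"]
```

For example, `select_fixtures(fixtures, tests_to_run="test_clone")` keeps both test_clone fixtures, and adding `fixtures_to_run="test_fit[T-0]"` extends the selection by union rather than intersection.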

Returns:
results : dict of results of the tests run

Keys are test/fixture strings, identical to those in pytest, e.g., test[fixture]. Entries are the string "PASSED" if the test passed, or the exception raised if the test did not pass. Returned only if all tests pass, or if raise_exceptions=False.
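With raise_exceptions=False, failures can be picked out of the returned dict after the run. A minimal sketch, using a hypothetical results dict shaped like the one described above:

```python
# hypothetical results dict, shaped like check_estimator's return value:
# keys are pytest-style test[fixture] strings, values are "PASSED" or an exception
results = {
    "test_clone[ExponentTransformer-0]": "PASSED",
    "test_clone[ExponentTransformer-1]": "PASSED",
    "test_fit[ExponentTransformer-0]": ValueError("fit failed"),
}

# entries are either the string "PASSED" or a raised exception,
# so anything that is not "PASSED" is a failure
failures = {name: exc for name, exc in results.items() if exc != "PASSED"}

for name, exc in failures.items():
    print(f"{name}: {type(exc).__name__}: {exc}")
```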

Raises:
if raise_exceptions=True,
raises any exception produced by the tests directly
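The two failure modes can be illustrated with a small sketch. run_checks below is a hypothetical helper, not part of sktime; it only mimics the documented semantics of raise_exceptions:

```python
# Illustrative sketch of check_estimator's two failure modes;
# run_checks is a hypothetical helper, not sktime code.
def run_checks(checks, raise_exceptions=False):
    results = {}
    for name, check in checks.items():
        try:
            check()
            results[name] = "PASSED"
        except Exception as exc:
            if raise_exceptions:
                # raise_exceptions=True: the first failure propagates directly
                raise
            # raise_exceptions=False: record the exception and keep going
            results[name] = exc
    return results


def failing_check():
    raise ValueError("boom")


checks = {"test_ok": lambda: None, "test_fail": failing_check}

# with the default raise_exceptions=False, the failure lands in the dict
collected = run_checks(checks)
```

Calling `run_checks(checks, raise_exceptions=True)` instead raises the ValueError as soon as test_fail runs, which is the behavior described for raise_exceptions=True above.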

Examples

>>> from sktime.transformations.series.exponent import ExponentTransformer
>>> from sktime.utils.estimator_checks import check_estimator

Running all tests for the ExponentTransformer class. This uses all instances from get_test_params and compatible scenarios:

>>> results = check_estimator(ExponentTransformer)
All tests PASSED!

Running all tests for a specific ExponentTransformer instance. This uses the instance that is passed and compatible scenarios:

>>> results = check_estimator(ExponentTransformer(42))
All tests PASSED!

Running a specific test (all fixtures) for ExponentTransformer:

>>> results = check_estimator(ExponentTransformer, tests_to_run="test_clone")
All tests PASSED!
>>> results
{'test_clone[ExponentTransformer-0]': 'PASSED', 'test_clone[ExponentTransformer-1]': 'PASSED'}

Running one specific test-fixture combination for ExponentTransformer:

>>> check_estimator(
...    ExponentTransformer, fixtures_to_run="test_clone[ExponentTransformer-1]"
... )
All tests PASSED!
{'test_clone[ExponentTransformer-1]': 'PASSED'}