API Reference¶
This page contains the full reference to pytest’s API.
Functions¶
pytest.approx¶
approx(expected, rel=None, abs=None, nan_ok: bool = False) → _pytest.python_api.ApproxBase[source]¶
Assert that two numbers (or two sets of numbers) are equal to each other within some tolerance.
Due to the intricacies of floating-point arithmetic, numbers that we would intuitively expect to be equal are not always so:
>>> 0.1 + 0.2 == 0.3
False
This problem is commonly encountered when writing tests, e.g. when making sure that floating-point values are what you expect them to be. One way to deal with this problem is to assert that two floating-point numbers are equal to within some appropriate tolerance:
>>> abs((0.1 + 0.2) - 0.3) < 1e-6
True
However, comparisons like this are tedious to write and difficult to understand. Furthermore, absolute comparisons like the one above are usually discouraged because there’s no tolerance that works well for all situations. 1e-6 is good for numbers around 1, but too small for very big numbers and too big for very small ones. It’s better to express the tolerance as a fraction of the expected value, but relative comparisons like that are even more difficult to write correctly and concisely.

The approx class performs floating-point comparisons using a syntax that’s as intuitive as possible:

>>> from pytest import approx
>>> 0.1 + 0.2 == approx(0.3)
True
The same syntax also works for sequences of numbers:
>>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
True
Dictionary values:
>>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == approx({'a': 0.3, 'b': 0.6})
True
numpy arrays:

>>> import numpy as np
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == approx(np.array([0.3, 0.6]))
True
And for a numpy array against a scalar:

>>> import numpy as np
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.1]) == approx(0.3)
True
By default, approx considers numbers within a relative tolerance of 1e-6 (i.e. one part in a million) of its expected value to be equal. This treatment would lead to surprising results if the expected value was 0.0, because nothing but 0.0 itself is relatively close to 0.0. To handle this case less surprisingly, approx also considers numbers within an absolute tolerance of 1e-12 of its expected value to be equal. Infinity and NaN are special cases. Infinity is only considered equal to itself, regardless of the relative tolerance. NaN is not considered equal to anything by default, but you can make it be equal to itself by setting the nan_ok argument to True. (This is meant to facilitate comparing arrays that use NaN to mean “no data”.)

Both the relative and absolute tolerances can be changed by passing arguments to the approx constructor:

>>> 1.0001 == approx(1)
False
>>> 1.0001 == approx(1, rel=1e-3)
True
>>> 1.0001 == approx(1, abs=1e-3)
True
If you specify abs but not rel, the comparison will not consider the relative tolerance at all. In other words, two numbers that are within the default relative tolerance of 1e-6 will still be considered unequal if they exceed the specified absolute tolerance. If you specify both abs and rel, the numbers will be considered equal if either tolerance is met:

>>> 1 + 1e-8 == approx(1)
True
>>> 1 + 1e-8 == approx(1, abs=1e-12)
False
>>> 1 + 1e-8 == approx(1, rel=1e-6, abs=1e-12)
True
You can also use approx to compare nonnumeric types, or dicts and sequences containing nonnumeric types, in which case it falls back to strict equality. This can be useful for comparing dicts and sequences that can contain optional values:

>>> {"required": 1.0000005, "optional": None} == approx({"required": 1, "optional": None})
True
>>> [None, 1.0000005] == approx([None, 1])
True
>>> ["foo", 1.0000005] == approx([None, 1])
False
If you’re thinking about using approx, then you might want to know how it compares to other good ways of comparing floating-point numbers. All of these algorithms are based on relative and absolute tolerances and should agree for the most part, but they do have meaningful differences:

- math.isclose(a, b, rel_tol=1e-9, abs_tol=0.0): True if the relative tolerance is met w.r.t. either a or b or if the absolute tolerance is met. Because the relative tolerance is calculated w.r.t. both a and b, this test is symmetric (i.e. neither a nor b is a “reference value”). You have to specify an absolute tolerance if you want to compare to 0.0 because there is no tolerance by default.
- numpy.isclose(a, b, rtol=1e-5, atol=1e-8): True if the difference between a and b is less than the sum of the relative tolerance w.r.t. b and the absolute tolerance. Because the relative tolerance is only calculated w.r.t. b, this test is asymmetric and you can think of b as the reference value. Support for comparing sequences is provided by numpy.allclose.
- unittest.TestCase.assertAlmostEqual(a, b): True if a and b are within an absolute tolerance of 1e-7. No relative tolerance is considered, and the absolute tolerance cannot be changed, so this function is not appropriate for very large or very small numbers. Also, it’s only available in subclasses of unittest.TestCase, and it’s ugly because it doesn’t follow PEP 8.
- a == pytest.approx(b, rel=1e-6, abs=1e-12): True if the relative tolerance is met w.r.t. b or if the absolute tolerance is met. Because the relative tolerance is only calculated w.r.t. b, this test is asymmetric and you can think of b as the reference value. In the special case that you explicitly specify an absolute tolerance but not a relative tolerance, only the absolute tolerance is considered.
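For a concrete feel of these differences, here is a small runnable sketch using only math.isclose and pytest.approx (the specific values are illustrative, not taken from the list above):

```python
import math

import pytest

a, b = 1.0 + 1e-8, 1.0

# math.isclose is symmetric in a and b; pytest.approx treats its
# argument as the reference value.
assert math.isclose(a, b, rel_tol=1e-6)
assert a == pytest.approx(b)  # default rel=1e-6 w.r.t. b

# Comparing against 0.0: math.isclose has no absolute tolerance by
# default, while approx falls back to its default abs tolerance of 1e-12.
assert not math.isclose(1e-13, 0.0)
assert math.isclose(1e-13, 0.0, abs_tol=1e-12)
assert 1e-13 == pytest.approx(0.0)
```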
Warning

Changed in version 3.2.

In order to avoid inconsistent behavior, TypeError is raised for >, >=, < and <= comparisons. The example below illustrates the problem:

assert approx(0.1) > 0.1 + 1e-10  # calls approx(0.1).__gt__(0.1 + 1e-10)
assert 0.1 + 1e-10 > approx(0.1)  # calls approx(0.1).__lt__(0.1 + 1e-10)

In the second example one expects approx(0.1).__le__(0.1 + 1e-10) to be called. But instead, approx(0.1).__lt__(0.1 + 1e-10) is used for the comparison. This is because the call hierarchy of rich comparisons follows a fixed behavior.

Changed in version 3.7.1: approx raises TypeError when it encounters a dict value or sequence element of nonnumeric type.

Changed in version 6.1.0: approx falls back to strict equality for nonnumeric types instead of raising TypeError.
pytest.fail¶
Tutorial: Skip and xfail: dealing with tests that cannot succeed
pytest.skip¶
skip(msg[, allow_module_level=False])[source]¶
Skip an executing test with the given message.

This function should be called only during testing (setup, call or teardown) or during collection by using the allow_module_level flag. This function can be called in doctests as well.

- Parameters
allow_module_level (bool) – Allows this function to be called at module level, skipping the rest of the module. Defaults to False.
Note
It is better to use the pytest.mark.skipif marker when possible to declare a test to be skipped under certain conditions like mismatching platforms or dependencies. Similarly, use the # doctest: +SKIP directive (see doctest.SKIP) to skip a doctest statically.
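As a sketch of the imperative form, skipping from inside a test body at runtime (the os.fork availability check is only an illustrative condition):

```python
import os

import pytest


def test_fork_available():
    # pytest.skip raises a special Skipped outcome exception, which
    # pytest reports as a skip rather than a failure.
    if not hasattr(os, "fork"):
        pytest.skip("requires os.fork")
    assert callable(os.fork)
```

Calling pytest.skip outside of a running test still raises that Skipped exception, which is how pytest's runner detects it.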
pytest.importorskip¶
importorskip(modname: str, minversion: Optional[str] = None, reason: Optional[str] = None) → Any[source]¶
Import and return the requested module modname, or skip the current test if the module cannot be imported.

- Parameters
modname – The name of the module to import.
minversion – If given, the imported module’s __version__ attribute must be at least this minimal version, otherwise the test is still skipped.
reason – If given, this reason is shown as the message when the module cannot be imported.
- Returns
The imported module. This should be assigned to its canonical name.
Example:
docutils = pytest.importorskip("docutils")
pytest.xfail¶
xfail(reason: str = '') → NoReturn[source]¶
Imperatively xfail an executing test or setup function with the given reason.

This function should be called only during testing (setup, call or teardown).
Note
It is better to use the pytest.mark.xfail marker when possible to declare a test to be xfailed under certain conditions like known bugs or missing features.
pytest.main¶
main(args: Optional[Union[List[str], py._path.local.LocalPath]] = None, plugins: Optional[Sequence[Union[str, object]]] = None) → Union[int, _pytest.config.ExitCode][source]¶
Perform an in-process test run.
- Parameters
args – List of command line arguments.
plugins – List of plugin objects to be auto-registered during initialization.
- Returns
An exit code.
pytest.param¶
param(*values[, id][, marks])[source]¶
Specify a parameter in pytest.mark.parametrize calls or parametrized fixtures.

@pytest.mark.parametrize(
    "test_input,expected",
    [("3+5", 8), pytest.param("6*9", 42, marks=pytest.mark.xfail)],
)
def test_eval(test_input, expected):
    assert eval(test_input) == expected
- Parameters
values – Variable args of the values of the parameter set, in order.
marks – A single mark or a list of marks to be applied to this parameter set.
id (str) – The id to attribute to this parameter set.
pytest.raises¶
Tutorial: Assertions about expected exceptions.
with raises(expected_exception: Union[Type[_E], Tuple[Type[_E], …]], *, match: Optional[Union[str, Pattern[str]]] = '...') → RaisesContext[_E] as excinfo[source]¶
with raises(expected_exception: Union[Type[_E], Tuple[Type[_E], …]], func: Callable[[…], Any], *args: Any, **kwargs: Any) → _pytest._code.code.ExceptionInfo[_E] as excinfo

Assert that a code block/function call raises expected_exception or raise a failure exception otherwise.

- Parameters
match –

If specified, a string containing a regular expression, or a regular expression object, that is tested against the string representation of the exception using re.search. To match a literal string that may contain special characters, the pattern can first be escaped with re.escape.

(This is only used when pytest.raises is used as a context manager, and passed through to the function otherwise. When using pytest.raises as a function, you can use: pytest.raises(Exc, func, match="passed on").match("my pattern").)
Use pytest.raises as a context manager, which will capture the exception of the given type:

>>> import pytest
>>> with pytest.raises(ZeroDivisionError):
...     1/0

If the code block does not raise the expected exception (ZeroDivisionError in the example above), or no exception at all, the check will fail instead.

You can also use the keyword argument match to assert that the exception matches a text or regex:

>>> with pytest.raises(ValueError, match='must be 0 or None'):
...     raise ValueError("value must be 0 or None")
>>> with pytest.raises(ValueError, match=r'must be \d+$'):
...     raise ValueError("value must be 42")

The context manager produces an ExceptionInfo object which can be used to inspect the details of the captured exception:

>>> with pytest.raises(ValueError) as exc_info:
...     raise ValueError("value must be 42")
>>> assert exc_info.type is ValueError
>>> assert exc_info.value.args[0] == "value must be 42"
Note
When using pytest.raises as a context manager, it’s worthwhile to note that normal context manager rules apply and that the exception raised must be the final line in the scope of the context manager. Lines of code after that, within the scope of the context manager, will not be executed. For example:

>>> value = 15
>>> with pytest.raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...     assert exc_info.type is ValueError  # this will not execute

Instead, the following approach must be taken (note the difference in scope):

>>> with pytest.raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...
>>> assert exc_info.type is ValueError
Using with pytest.mark.parametrize

When using pytest.mark.parametrize it is possible to parametrize tests such that some runs raise an exception and others do not.
See Parametrizing conditional raising for an example.
Legacy form
It is possible to specify a callable by passing a to-be-called lambda:

>>> raises(ZeroDivisionError, lambda: 1/0)
<ExceptionInfo ...>

or you can specify an arbitrary callable with arguments:

>>> def f(x):
...     return 1/x
...
>>> raises(ZeroDivisionError, f, 0)
<ExceptionInfo ...>
>>> raises(ZeroDivisionError, f, x=0)
<ExceptionInfo ...>
The form above is fully supported but discouraged for new code because the context manager form is regarded as more readable and less error-prone.
Note
Similar to caught exception objects in Python, explicitly clearing local references to returned ExceptionInfo objects can help the Python interpreter speed up its garbage collection.

Clearing those references breaks a reference cycle (ExceptionInfo –> caught exception –> frame stack raising the exception –> current frame stack –> local variables –> ExceptionInfo) which makes Python keep all objects referenced from that cycle (including all local variables in the current frame) alive until the next cyclic garbage collection run. More detailed information can be found in the official Python documentation for the try statement.
pytest.deprecated_call¶
Tutorial: Ensuring code triggers a deprecation warning.
with deprecated_call(*, match: Optional[Union[str, Pattern[str]]] = '...') → WarningsRecorder[source]¶
with deprecated_call(func: Callable[[…], T], *args: Any, **kwargs: Any) → T

Assert that code produces a DeprecationWarning or PendingDeprecationWarning.

This function can be used as a context manager:

>>> import warnings
>>> def api_call_v2():
...     warnings.warn('use v3 of this api', DeprecationWarning)
...     return 200
>>> import pytest
>>> with pytest.deprecated_call():
...     assert api_call_v2() == 200

It can also be used by passing a function and *args and **kwargs, in which case it will ensure calling func(*args, **kwargs) produces one of the warning types above. The return value is the return value of the function.

In the context manager form you may use the keyword argument match to assert that the warning matches a text or regex.

The context manager produces a list of warnings.WarningMessage objects, one for each warning raised.
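The function form described above can be sketched like this, reusing the api_call_v2 helper from the doctest:

```python
import warnings

import pytest


def api_call_v2():
    warnings.warn("use v3 of this api", DeprecationWarning)
    return 200


# Function form: pytest.deprecated_call calls the function, asserts that a
# DeprecationWarning (or PendingDeprecationWarning) was issued, and returns
# the function's return value.
assert pytest.deprecated_call(api_call_v2) == 200
```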
pytest.register_assert_rewrite¶
Tutorial: Assertion Rewriting.
register_assert_rewrite(*names: str) → None[source]¶
Register one or more module names to be rewritten on import.

This function will make sure that this module or all modules inside the package will get their assert statements rewritten. Thus you should make sure to call this before the module is actually imported, usually in your __init__.py if you are a plugin using a package.

- Raises
TypeError – If the given module names are not strings.
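A typical call site, sketched here for a hypothetical plugin package named myplugin with a helpers submodule, is the package's __init__.py:

```python
import pytest

# Register a submodule (hypothetical name) for assertion rewriting before it
# is imported; assert statements inside it then produce pytest's rich
# failure output.
pytest.register_assert_rewrite("myplugin.helpers")
```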
pytest.warns¶
Tutorial: Asserting warnings with the warns function
with warns(expected_warning: Optional[Union[Type[Warning], Tuple[Type[Warning], …]]], *, match: Optional[Union[str, Pattern[str]]] = None) → WarningsChecker[source]¶
with warns(expected_warning: Optional[Union[Type[Warning], Tuple[Type[Warning], …]]], func: Callable[[…], T], *args: Any, **kwargs: Any) → T

Assert that code raises a particular class of warning.

Specifically, the parameter expected_warning can be a warning class or sequence of warning classes, and the code inside the with block must issue a warning of that class or classes.

This helper produces a list of warnings.WarningMessage objects, one for each warning raised.

This function can be used as a context manager, or any of the other ways pytest.raises() can be used:

>>> import pytest
>>> with pytest.warns(RuntimeWarning):
...     warnings.warn("my warning", RuntimeWarning)

In the context manager form you may use the keyword argument match to assert that the warning matches a text or regex:

>>> with pytest.warns(UserWarning, match='must be 0 or None'):
...     warnings.warn("value must be 0 or None", UserWarning)
>>> with pytest.warns(UserWarning, match=r'must be \d+$'):
...     warnings.warn("value must be 42", UserWarning)
>>> with pytest.warns(UserWarning, match=r'must be \d+$'):
...     warnings.warn("this is not here", UserWarning)
Traceback (most recent call last):
...
Failed: DID NOT WARN. No warnings of type ...UserWarning... was emitted...
pytest.freeze_includes¶
Tutorial: Freezing pytest.
Marks¶
Marks can be used to apply metadata to test functions (but not fixtures), which can then be accessed by fixtures or plugins.
pytest.mark.filterwarnings¶
Tutorial: @pytest.mark.filterwarnings.
Add warning filters to marked test items.
pytest.mark.filterwarnings(filter)¶

- Parameters
filter (str) –

A warning specification string, which is composed of contents of the tuple (action, message, category, module, lineno) as specified in The Warnings Filter section of the Python documentation, separated by ":". Optional fields can be omitted. Module names passed for filtering are not regex-escaped.

For example:
@pytest.mark.filterwarnings("ignore:.*usage will be deprecated.*:DeprecationWarning") def test_foo(): ...
pytest.mark.parametrize¶
Tutorial: Parametrizing fixtures and test functions.
This mark has the same signature as _pytest.python.Metafunc.parametrize(); see there.
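A minimal sketch of the marker in use (the test function and values are illustrative):

```python
import pytest


# Each tuple becomes one test invocation; arguments are matched by name.
@pytest.mark.parametrize("n,expected", [(1, 2), (2, 3), (3, 4)])
def test_increment(n, expected):
    assert n + 1 == expected
```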
pytest.mark.skipif¶
Tutorial: Skipping test functions.
Skip a test function if a condition is True.
pytest.mark.skipif(condition, *, reason=None)¶

- Parameters
condition (bool or str) – True/False if the condition should be skipped or a condition string.
reason (str) – Reason why the test function is being skipped.
pytest.mark.usefixtures¶
Tutorial: Use fixtures in classes and modules with usefixtures.
Mark a test function as using the given fixture names.
pytest.mark.usefixtures(*names)¶

- Parameters
args – The names of the fixtures to use, as strings.
Note
When using usefixtures
in hooks, it can only load fixtures when applied to a test function before test setup
(for example in the pytest_collection_modifyitems
hook).
Also note that this mark has no effect when applied to fixtures.
pytest.mark.xfail¶
Tutorial: XFail: mark test functions as expected to fail.
Marks a test function as expected to fail.
pytest.mark.xfail(condition=None, *, reason=None, raises=None, run=True, strict=False)¶

- Parameters
condition (bool or str) – Condition for marking the test function as xfail (True/False or a condition string). If a bool, you also have to specify reason (see condition string).
reason (str) – Reason why the test function is marked as xfail.
raises (Type[Exception]) – Exception subclass expected to be raised by the test function; other exceptions will fail the test.
run (bool) – If the test function should actually be executed. If False, the function will always xfail and will not be executed (useful if a function is segfaulting).
strict (bool) –

If False (the default) the function will be shown in the terminal output as xfailed if it fails and as xpass if it passes. In both cases this will not cause the test suite to fail as a whole. This is particularly useful to mark flaky tests (tests that fail at random) to be tackled later.

If True, the function will be shown in the terminal output as xfailed if it fails, but if it unexpectedly passes then it will fail the test suite. This is particularly useful to mark functions that are always failing and there should be a clear indication if they unexpectedly start to pass (for example a new release of a library fixes a known bug).
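A sketch combining raises= and strict= (the bug being marked is illustrative):

```python
import pytest


# strict=True turns an unexpected pass (XPASS) into a suite failure;
# raises= narrows the accepted exception type, so any other exception
# fails the test instead of xfailing it.
@pytest.mark.xfail(raises=ZeroDivisionError, strict=True, reason="known bug")
def test_division_bug():
    1 / 0
```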
Custom marks¶
Marks are created dynamically using the factory object pytest.mark and applied as a decorator.
For example:
@pytest.mark.timeout(10, "slow", method="thread")
def test_function():
...
Will create and attach a Mark object to the collected Item, which can then be accessed by fixtures or hooks with Node.iter_markers. The mark object will have the following attributes:
mark.args == (10, "slow")
mark.kwargs == {"method": "thread"}
Example for using multiple custom markers:
@pytest.mark.timeout(10, "slow", method="thread")
@pytest.mark.slow
def test_function():
...
When Node.iter_markers or Node.iter_markers_with_node is used with multiple markers, the marker closest to the function will be iterated over first. The above example will result in @pytest.mark.slow followed by @pytest.mark.timeout(...).
Fixtures¶
Tutorial: pytest fixtures: explicit, modular, scalable.
Fixtures are requested by test functions or other fixtures by declaring them as argument names.
Example of a test requiring a fixture:
def test_output(capsys):
print("hello")
out, err = capsys.readouterr()
assert out == "hello\n"
Example of a fixture requiring another fixture:
@pytest.fixture
def db_session(tmpdir):
fn = tmpdir / "db.file"
return connect(str(fn))
For more details, consult the full fixtures docs.
@pytest.fixture¶
@fixture(fixture_function: _FixtureFunction, *, scope: Union[_Scope, Callable[[str, Config], _Scope]] = 'function', params: Optional[Iterable[object]] = None, autouse: bool = False, ids: Optional[Union[Iterable[Union[None, str, float, int, bool]], Callable[[Any], Optional[object]]]] = None, name: Optional[str] = None) → _FixtureFunction[source]¶
@fixture(fixture_function: None = None, *, scope: Union[_Scope, Callable[[str, Config], _Scope]] = 'function', params: Optional[Iterable[object]] = None, autouse: bool = False, ids: Optional[Union[Iterable[Union[None, str, float, int, bool]], Callable[[Any], Optional[object]]]] = None, name: Optional[str] = None) → _pytest.fixtures.FixtureFunctionMarker

Decorator to mark a fixture factory function.

This decorator can be used, with or without parameters, to define a fixture function.

The name of the fixture function can later be referenced to cause its invocation ahead of running tests: test modules or classes can use the pytest.mark.usefixtures(fixturename) marker.

Test functions can directly use fixture names as input arguments in which case the fixture instance returned from the fixture function will be injected.

Fixtures can provide their values to test functions using return or yield statements. When using yield the code block after the yield statement is executed as teardown code regardless of the test outcome, and must yield exactly once.

- Parameters
scope –

The scope for which this fixture is shared; one of "function" (default), "class", "module", "package" or "session".

This parameter may also be a callable which receives (fixture_name, config) as parameters, and must return a str with one of the values mentioned above.

See Dynamic scope in the docs for more information.

params – An optional list of parameters which will cause multiple invocations of the fixture function and all of the tests using it. The current parameter is available in request.param.
autouse – If True, the fixture func is activated for all tests that can see it. If False (the default), an explicit reference is needed to activate the fixture.
ids – List of string ids each corresponding to the params so that they are part of the test id. If no ids are provided they will be generated automatically from the params.
name – The name of the fixture. This defaults to the name of the decorated function. If a fixture is used in the same module in which it is defined, the function name of the fixture will be shadowed by the function arg that requests the fixture; one way to resolve this is to name the decorated function fixture_<fixturename> and then use @pytest.fixture(name='<fixturename>').
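A sketch of a parametrized yield fixture; the scope, parameter values, and URL scheme are illustrative only:

```python
import pytest


# The fixture runs once per parameter; tests requesting db_url are invoked
# for each value of request.param.
@pytest.fixture(scope="module", params=["sqlite", "postgres"])
def db_url(request):
    url = f"{request.param}://test"  # setup code runs before the tests
    yield url                        # the yielded value is injected
    # code after yield runs as teardown, even if a test failed
```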
config.cache¶
Tutorial: Cache: working with cross-testrun state.
The config.cache object allows other plugins and fixtures to store and retrieve values across test runs. To access it from fixtures request pytestconfig into your fixture and get it with pytestconfig.cache.

Under the hood, the cache plugin uses the simple dumps/loads API of the json stdlib module.

config.cache is an instance of pytest.Cache:
final class Cache[source]¶

makedir(name: str) → py._path.local.LocalPath[source]¶
Return a directory path object with the given name.

If the directory does not yet exist, it will be created. You can use it to manage files to e.g. store/retrieve database dumps across test sessions.

- Parameters
name – Must be a string not containing a / separator. Make sure the name contains your plugin or application identifiers to prevent clashes with other cache users.

get(key: str, default)[source]¶
Return the cached value for the given key.

If no value was yet cached or the value cannot be read, the specified default is returned.

- Parameters
key – Must be a / separated value. Usually the first name is the name of your plugin or your application.
default – The value to return in case of a cache-miss or invalid cache value.

set(key: str, value: object) → None[source]¶
Save value for the given key.

- Parameters
key – Must be a / separated value. Usually the first name is the name of your plugin or your application.
value – Must be of any combination of basic python types, including nested types like lists of dictionaries.
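A sketch of using the cache from a test via the pytestconfig fixture; the key and the cached value are hypothetical:

```python
# Cache an expensive value across test runs: read it back if present,
# otherwise compute and store it for the next run.
def test_expensive(pytestconfig):
    value = pytestconfig.cache.get("myapp/expensive", None)
    if value is None:
        value = 42  # stand-in for an expensive computation
        pytestconfig.cache.set("myapp/expensive", value)
    assert value == 42
```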
capsys¶
Tutorial: Capturing of the stdout/stderr output.
capsys()[source]¶
Enable text capturing of writes to sys.stdout and sys.stderr.

The captured output is made available via capsys.readouterr() method calls, which return a (out, err) namedtuple. out and err will be text objects.

Returns an instance of CaptureFixture[str].

Example:

def test_output(capsys):
    print("hello")
    captured = capsys.readouterr()
    assert captured.out == "hello\n"
class CaptureFixture[source]¶
Object returned by the capsys, capsysbinary, capfd and capfdbinary fixtures.
capsysbinary¶
Tutorial: Capturing of the stdout/stderr output.
capsysbinary()[source]¶
Enable bytes capturing of writes to sys.stdout and sys.stderr.

The captured output is made available via capsysbinary.readouterr() method calls, which return a (out, err) namedtuple. out and err will be bytes objects.

Returns an instance of CaptureFixture[bytes].

Example:

def test_output(capsysbinary):
    print("hello")
    captured = capsysbinary.readouterr()
    assert captured.out == b"hello\n"
capfd¶
Tutorial: Capturing of the stdout/stderr output.
capfd()[source]¶
Enable text capturing of writes to file descriptors 1 and 2.

The captured output is made available via capfd.readouterr() method calls, which return a (out, err) namedtuple. out and err will be text objects.

Returns an instance of CaptureFixture[str].

Example:

def test_system_echo(capfd):
    os.system('echo "hello"')
    captured = capfd.readouterr()
    assert captured.out == "hello\n"
capfdbinary¶
Tutorial: Capturing of the stdout/stderr output.
capfdbinary()[source]¶
Enable bytes capturing of writes to file descriptors 1 and 2.

The captured output is made available via capfdbinary.readouterr() method calls, which return a (out, err) namedtuple. out and err will be bytes objects.

Returns an instance of CaptureFixture[bytes].

Example:

def test_system_echo(capfdbinary):
    os.system('echo "hello"')
    captured = capfdbinary.readouterr()
    assert captured.out == b"hello\n"
doctest_namespace¶
Tutorial: Doctest integration for modules and test files.
doctest_namespace()[source]¶
Fixture that returns a dict that will be injected into the namespace of doctests.

Usually this fixture is used in conjunction with another autouse fixture:

@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
    doctest_namespace["np"] = numpy
For more details: ‘doctest_namespace’ fixture.
request¶
Tutorial: Pass different values to a test function, depending on command line options.
The request fixture is a special fixture providing information about the requesting test function.
class FixtureRequest[source]¶
A request for a fixture from a test or fixture function.

A request object gives access to the requesting test context and has an optional param attribute in case the fixture is parametrized indirectly.

scope: _Scope¶
Scope string, one of "function", "class", "module", "session".

fixturenames¶
Names of all active fixtures in this request.

node¶
Underlying collection node (depends on current request scope).

config¶
The pytest config object associated with this request.

function¶
Test function object if the request has a per-function scope.

cls¶
Class (can be None) where the test function was collected.

instance¶
Instance (can be None) on which test function was collected.

module¶
Python module object where the test function was collected.

fspath¶
The file system path of the test module which collected this test.

keywords¶
Keywords/markers dictionary for the underlying node.

session¶
Pytest session object.

addfinalizer(finalizer: Callable[[], object]) → None[source]¶
Add finalizer/teardown function to be called after the last test within the requesting test context finished execution.

applymarker(marker: Union[str, _pytest.mark.structures.MarkDecorator]) → None[source]¶
Apply a marker to a single test function invocation.

This method is useful if you don’t want to have a keyword/marker on all function invocations.

- Parameters
marker – A _pytest.mark.MarkDecorator object created by a call to pytest.mark.NAME(...).

raiseerror(msg: Optional[str]) → NoReturn[source]¶
Raise a FixtureLookupError with the given message.

getfixturevalue(argname: str) → Any[source]¶
Dynamically run a named fixture function.

Declaring fixtures via function argument is recommended where possible. But if you can only decide whether to use another fixture at test setup time, you may use this function to retrieve it inside a fixture or test function body.

- Raises
pytest.FixtureLookupError – If the given fixture could not be found.
pytestconfig¶
pytestconfig()[source]¶
Session-scoped fixture that returns the _pytest.config.Config object.

Example:

def test_foo(pytestconfig):
    if pytestconfig.getoption("verbose") > 0:
        ...
record_property¶
Tutorial: record_property.
record_property()[source]¶
Add extra properties to the calling test.

User properties become part of the test report and are available to the configured reporters, like JUnit XML.

The fixture is callable with name, value. The value is automatically XML-encoded.

Example:

def test_function(record_property):
    record_property("example_key", 1)
record_testsuite_property¶
Tutorial: record_testsuite_property.
record_testsuite_property()[source]¶
Record a new <property> tag as child of the root <testsuite>.

This is suitable for writing global information regarding the entire test suite, and is compatible with the xunit2 JUnit family.

This is a session-scoped fixture which is called with (name, value). Example:

def test_foo(record_testsuite_property):
    record_testsuite_property("ARCH", "PPC")
    record_testsuite_property("STORAGE_TYPE", "CEPH")

name must be a string, value will be converted to a string and properly xml-escaped.

Warning

Currently this fixture does not work with the pytest-xdist plugin. See issue #7767 for details.
caplog¶
Tutorial: Logging.
caplog()[source]¶
Access and control log capturing.

Captured logs are available through the following properties/methods:

* caplog.messages -> list of format-interpolated log messages
* caplog.text -> string containing formatted log output
* caplog.records -> list of logging.LogRecord instances
* caplog.record_tuples -> list of (logger_name, level, message) tuples
* caplog.clear() -> clear captured records and formatted log output string

Returns a pytest.LogCaptureFixture instance.
final class LogCaptureFixture[source]¶
Provides access and control of log capturing.

handler¶
Get the logging handler used by the fixture.

- Return type
LogCaptureHandler

get_records(when: str) → List[logging.LogRecord][source]¶
Get the logging records for one of the possible test phases.

- Parameters
when (str) – Which test phase to obtain the records from. Valid values are: "setup", "call" and "teardown".
- Returns
The list of captured records at the given stage.
- Return type
List[logging.LogRecord]

New in version 3.4.

text¶
The formatted log text.

records¶
The list of log records.

record_tuples¶
A list of a stripped down version of log records intended for use in assertion comparison.

The format of the tuple is: (logger_name, log_level, message)

messages¶
A list of format-interpolated log messages.

Unlike ‘records’, which contains the format string and parameters for interpolation, log messages in this list are all interpolated.

Unlike ‘text’, which contains the output from the handler, log messages in this list are unadorned with levels, timestamps, etc, making exact comparisons more reliable.

Note that traceback or stack info (from logging.exception() or the exc_info or stack_info arguments to the logging functions) is not included, as this is added by the formatter in the handler.

New in version 3.7.

set_level(level: Union[int, str], logger: Optional[str] = None) → None[source]¶
Set the level of a logger for the duration of a test.

Changed in version 3.4: The levels of the loggers changed by this function will be restored to their initial values at the end of the test.
monkeypatch¶
Tutorial: Monkeypatching/mocking modules and environments.
-
monkeypatch
()[source]¶ A convenient fixture for monkey-patching.
The fixture provides these methods to modify objects, dictionaries or os.environ:
monkeypatch.setattr(obj, name, value, raising=True)
monkeypatch.delattr(obj, name, raising=True)
monkeypatch.setitem(mapping, name, value)
monkeypatch.delitem(obj, name, raising=True)
monkeypatch.setenv(name, value, prepend=False)
monkeypatch.delenv(name, raising=True)
monkeypatch.syspath_prepend(path)
monkeypatch.chdir(path)
All modifications will be undone after the requesting test function or fixture has finished. The
raising
parameter determines if a KeyError or AttributeError will be raised if the set/deletion operation has no target.Returns a
MonkeyPatch
instance.
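A sketch of typical usage inside a test (the `APP_MODE` variable and fake path are illustrative):

```python
import os

def test_fake_cwd(monkeypatch):
    # Both changes are reverted automatically when the test finishes.
    monkeypatch.setattr(os, "getcwd", lambda: "/tmp/fake")
    monkeypatch.setenv("APP_MODE", "testing")
    assert os.getcwd() == "/tmp/fake"
    assert os.environ["APP_MODE"] == "testing"
```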
-
final class
MonkeyPatch
[source]¶ Helper to conveniently monkeypatch attributes/items/environment variables/syspath.
Returned by the
monkeypatch
fixture.
Changed in version 6.2: Can now also be used directly as
pytest.MonkeyPatch()
, for when the fixture is not available. In this case, usewith MonkeyPatch.context() as mp:
or remember to callundo()
explicitly.
-
classmethod with
context
() → Generator[_pytest.monkeypatch.MonkeyPatch, None, None][source]¶ Context manager that returns a new
MonkeyPatch
object which undoes any patching done inside thewith
block upon exit.Example:
import functools

def test_partial(monkeypatch):
    with monkeypatch.context() as m:
        m.setattr(functools, "partial", 3)
Useful in situations where it is desired to undo some patches before the test ends, such as mocking
stdlib
functions that might break pytest itself if mocked (for examples of this see #3290).
-
setattr
(target: str, name: object, value: _pytest.monkeypatch.Notset = <notset>, raising: bool = True) → None[source]¶ -
setattr
(target: object, name: str, value: object, raising: bool = True) → None Set attribute value on target, memorizing the old value.
For convenience you can specify a string as
target
which will be interpreted as a dotted import path, with the last part being the attribute name. For example,monkeypatch.setattr("os.getcwd", lambda: "/")
would set thegetcwd
function of theos
module.Raises AttributeError if the attribute does not exist, unless
raising
is set to False.
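Since pytest 6.2 the class can also be used directly; a sketch of the dotted-path form, which patches `os.getcwd` without passing the module object:

```python
import os
from pytest import MonkeyPatch

original = os.getcwd
with MonkeyPatch.context() as mp:
    # "os.getcwd" is resolved as module path "os" plus attribute "getcwd".
    mp.setattr("os.getcwd", lambda: "/")
    assert os.getcwd() == "/"
# The patch is undone when the with block exits.
assert os.getcwd is original
```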
-
delattr
(target: Union[object, str], name: Union[str, _pytest.monkeypatch.Notset] = <notset>, raising: bool = True) → None[source]¶ Delete attribute
name
fromtarget
.If no
name
is specified andtarget
is a string it will be interpreted as a dotted import path with the last part being the attribute name.Raises AttributeError if the attribute does not exist, unless
raising
is set to False.
-
setitem
(dic: MutableMapping[K, V], name: K, value: V) → None[source]¶ Set dictionary entry
name
to value.
-
delitem
(dic: MutableMapping[K, V], name: K, raising: bool = True) → None[source]¶ Delete
name
from dict.Raises
KeyError
if it doesn’t exist, unlessraising
is set to False.
-
setenv
(name: str, value: str, prepend: Optional[str] = None) → None[source]¶ Set environment variable
name
tovalue
.If
prepend
is a character, read the current environment variable value and prepend thevalue
adjoined with theprepend
character.
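A sketch of the prepend behaviour using the class directly (pytest 6.2+; the `DEMO_PATH` variable is illustrative):

```python
import os
from pytest import MonkeyPatch

with MonkeyPatch.context() as mp:
    mp.setenv("DEMO_PATH", "/usr/bin")
    # With prepend, the new value is joined in front of the old one.
    mp.setenv("DEMO_PATH", "/opt/tools/bin", prepend=os.pathsep)
    assert os.environ["DEMO_PATH"] == "/opt/tools/bin" + os.pathsep + "/usr/bin"
# Both changes are undone on exit.
assert "DEMO_PATH" not in os.environ
```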
-
delenv
(name: str, raising: bool = True) → None[source]¶ Delete
name
from the environment.Raises
KeyError
if it does not exist, unlessraising
is set to False.
-
chdir
(path) → None[source]¶ Change the current working directory to the specified path.
Path can be a string or a py.path.local object.
-
undo
() → None[source]¶ Undo previous changes.
This call consumes the undo stack. Calling it a second time has no effect unless you do more monkeypatching after the undo call.
There is generally no need to call
undo()
, since it is called automatically during tear-down.Note that the same
monkeypatch
fixture is used across a single test function invocation. Ifmonkeypatch
is used both by the test function itself and one of the test fixtures, callingundo()
will undo all of the changes made in both functions.
pytester¶
New in version 6.2.
Provides a Pytester
instance that can be used to run and test pytest itself.
It provides an empty directory where pytest can be executed in isolation, and contains facilities to write tests, configuration files, and match against expected output.
To use it, include in your topmost conftest.py
file:
pytest_plugins = "pytester"
-
final class
Pytester
[source]¶ Facilities to write tests/configuration files, execute pytest in isolation, and match against expected output, perfect for black-box testing of pytest plugins.
It attempts to isolate the test run from external factors as much as possible, modifying the current working directory to
path
and environment variables during initialization.Attributes:
- Variables
path (Path) – temporary directory path used to create files/run tests from, etc.
plugins – A list of plugins to use with
parseconfig()
andrunpytest()
. Initially this is an empty list but plugins can be added to the list. The type of items to add to the list depends on the method using them so refer to them for details.
-
path
¶ Temporary directory where files are created and pytest is executed.
-
make_hook_recorder
(pluginmanager: _pytest.config.PytestPluginManager) → _pytest.pytester.HookRecorder[source]¶ Create a new
HookRecorder
for a PluginManager.
-
chdir
() → None[source]¶ Cd into the temporary directory.
This is done automatically upon instantiation.
-
makefile
(ext: str, *args: str, **kwargs: str) → pathlib.Path[source]¶ Create new file(s) in the test directory.
- Parameters
ext (str) – The extension the file(s) should use, including the dot, e.g.
.py
.args – All args are treated as strings and joined using newlines. The result is written as contents to the file. The name of the file is based on the test function requesting this fixture.
kwargs – Each keyword is the name of a file, while the value of it will be written as contents of the file.
Examples:
pytester.makefile(".txt", "line1", "line2")
pytester.makefile(".ini", pytest="[pytest]\naddopts=-rs\n")
-
makeconftest
(source: str) → pathlib.Path[source]¶ Write a conftest.py file with ‘source’ as contents.
-
makeini
(source: str) → pathlib.Path[source]¶ Write a tox.ini file with ‘source’ as contents.
-
getinicfg
(source: str) → iniconfig.SectionWrapper[source]¶ Return the pytest section from the tox.ini config file.
-
makepyprojecttoml
(source: str) → pathlib.Path[source]¶ Write a pyproject.toml file with ‘source’ as contents.
New in version 6.0.
-
makepyfile
(*args, **kwargs) → pathlib.Path[source]¶ Shortcut for .makefile() with a .py extension.
Defaults to the test name with a ‘.py’ extension, e.g. test_foobar.py, overwriting existing files.
Examples:
def test_something(pytester):
    # Initial file is created test_something.py.
    pytester.makepyfile("foobar")
    # To create multiple files, pass kwargs accordingly.
    pytester.makepyfile(custom="foobar")
    # At this point, both 'test_something.py' & 'custom.py' exist in the test directory.
-
maketxtfile
(*args, **kwargs) → pathlib.Path[source]¶ Shortcut for .makefile() with a .txt extension.
Defaults to the test name with a ‘.txt’ extension, e.g. test_foobar.txt, overwriting existing files.
Examples:
def test_something(pytester):
    # Initial file is created test_something.txt.
    pytester.maketxtfile("foobar")
    # To create multiple files, pass kwargs accordingly.
    pytester.maketxtfile(custom="foobar")
    # At this point, both 'test_something.txt' & 'custom.txt' exist in the test directory.
-
syspathinsert
(path: Optional[Union[str, os.PathLike[str]]] = None) → None[source]¶ Prepend a directory to sys.path, defaults to
tmpdir
.This is undone automatically when this object dies at the end of each test.
-
mkdir
(name: str) → pathlib.Path[source]¶ Create a new (sub)directory.
-
mkpydir
(name: str) → pathlib.Path[source]¶ Create a new python package.
This creates a (sub)directory with an empty
__init__.py
file so it gets recognised as a Python package.
-
copy_example
(name: Optional[str] = None) → pathlib.Path[source]¶ Copy file from project’s directory into the testdir.
- Parameters
name (str) – The name of the file to copy.
- Returns
path to the copied directory (inside
self.path
).
-
class
Session
(*k, **kw)¶ -
exception
Failed
¶ Signals a stop as a failed test run.
-
exception
Interrupted
¶ Signals that the test run was interrupted.
-
for ... in
collect
() → Iterator[Union[_pytest.nodes.Item, _pytest.nodes.Collector]]¶ Return a list of children (items and collectors) for this collection node.
-
classmethod
from_config
(config: _pytest.config.Config) → _pytest.main.Session¶
-
for ... in
genitems
(node: Union[_pytest.nodes.Item, _pytest.nodes.Collector]) → Iterator[_pytest.nodes.Item]¶
-
gethookproxy
(fspath: py._path.local.LocalPath)¶
-
perform_collect
(args: Optional[Sequence[str]] = None, genitems: bool = True) → Sequence[Union[_pytest.nodes.Item, _pytest.nodes.Collector]]¶ Perform the collection phase for this session.
This is called by the default
pytest_collection
hook implementation; see the documentation of this hook for more details. For testing purposes, it may also be called directly on a freshSession
.This function normally recursively expands any collectors collected from the session to their items, and only items are returned. For testing purposes, this may be suppressed by passing
genitems=False
, in which case the return value contains these collectors unexpanded, andsession.items
is empty.
-
pytest_collectreport
(report: Union[_pytest.reports.TestReport, _pytest.reports.CollectReport]) → None¶
-
pytest_runtest_logreport
(report: Union[_pytest.reports.TestReport, _pytest.reports.CollectReport]) → None¶
-
exception
-
getnode
(config: _pytest.config.Config, arg: Union[str, os.PathLike[str]]) → Optional[Union[_pytest.nodes.Item, _pytest.nodes.Collector]][source]¶ Return the collection node of a file.
- Parameters
config (_pytest.config.Config) – A pytest config. See
parseconfig()
andparseconfigure()
for creating it.arg (py.path.local) – Path to the file.
-
getpathnode
(path: Union[str, os.PathLike[str]])[source]¶ Return the collection node of a file.
This is like
getnode()
but usesparseconfigure()
to create the (configured) pytest Config instance.- Parameters
path (py.path.local) – Path to the file.
-
genitems
(colitems: Sequence[Union[_pytest.nodes.Item, _pytest.nodes.Collector]]) → List[_pytest.nodes.Item][source]¶ Generate all test items from a collection node.
This recurses into the collection node and returns a list of all the test items contained within.
-
runitem
(source: str) → Any[source]¶ Run the “test_func” Item.
The calling test instance (class containing the test method) must provide a
.getrunner()
method which should return a runner which can run the test protocol for a single item, e.g._pytest.runner.runtestprotocol()
.
-
inline_runsource
(source: str, *cmdlineargs) → _pytest.pytester.HookRecorder[source]¶ Run a test module in process using
pytest.main()
.This run writes “source” into a temporary file and runs
pytest.main()
on it, returning aHookRecorder
instance for the result.- Parameters
source – The source code of the test module.
cmdlineargs – Any extra command line arguments to use.
- Returns
HookRecorder
instance of the result.
-
inline_genitems
(*args) → Tuple[List[_pytest.nodes.Item], _pytest.pytester.HookRecorder][source]¶ Run
pytest.main(['--collectonly'])
in-process.Runs the
pytest.main()
function to run all of pytest inside the test process itself likeinline_run()
, but returns a tuple of the collected items and aHookRecorder
instance.
-
inline_run
(*args: Union[str, os.PathLike[str]], plugins=(), no_reraise_ctrlc: bool = False) → _pytest.pytester.HookRecorder[source]¶ Run
pytest.main()
in-process, returning a HookRecorder.Runs the
pytest.main()
function to run all of pytest inside the test process itself. This means it can return aHookRecorder
instance which gives more detailed results from that run than can be done by matching stdout/stderr fromrunpytest()
.- Parameters
args – Command line arguments to pass to
pytest.main()
.plugins – Extra plugin instances the
pytest.main()
instance should use.no_reraise_ctrlc – Typically we reraise keyboard interrupts from the child run. If True, the KeyboardInterrupt exception is captured.
- Returns
A
HookRecorder
instance.
-
runpytest_inprocess
(*args: Union[str, os.PathLike[str]], **kwargs: Any) → _pytest.pytester.RunResult[source]¶ Return result of running pytest in-process, providing a similar interface to what self.runpytest() provides.
-
runpytest
(*args: Union[str, os.PathLike[str]], **kwargs: Any) → _pytest.pytester.RunResult[source]¶ Run pytest inline or in a subprocess, depending on the command line option “–runpytest” and return a
RunResult
.
-
parseconfig
(*args: Union[str, os.PathLike[str]]) → _pytest.config.Config[source]¶ Return a new pytest Config instance from given commandline args.
This invokes the pytest bootstrapping code in _pytest.config to create a new
_pytest.core.PluginManager
and call the pytest_cmdline_parse hook to create a new_pytest.config.Config
instance.If
plugins
has been populated they should be plugin modules to be registered with the PluginManager.
-
parseconfigure
(*args: Union[str, os.PathLike[str]]) → _pytest.config.Config[source]¶ Return a new pytest configured Config instance.
Returns a new
_pytest.config.Config
instance likeparseconfig()
, but also calls the pytest_configure hook.
-
getitem
(source: str, funcname: str = 'test_func') → _pytest.nodes.Item[source]¶ Return the test item for a test function.
Writes the source to a python file and runs pytest’s collection on the resulting module, returning the test item for the requested function name.
- Parameters
source – The module source.
funcname – The name of the test function for which to return a test item.
-
getitems
(source: str) → List[_pytest.nodes.Item][source]¶ Return all test items collected from the module.
Writes the source to a Python file and runs pytest’s collection on the resulting module, returning all test items contained within.
-
getmodulecol
(source: Union[str, pathlib.Path], configargs=(), *, withinit: bool = False)[source]¶ Return the module collection node for
source
.Writes
source
to a file usingmakepyfile()
and then runs the pytest collection on it, returning the collection node for the test module.- Parameters
source – The source code of the module to collect.
configargs – Any extra arguments to pass to
parseconfigure()
.withinit – Whether to also write an
__init__.py
file to the same directory to ensure it is a package.
-
collect_by_name
(modcol: _pytest.nodes.Collector, name: str) → Optional[Union[_pytest.nodes.Item, _pytest.nodes.Collector]][source]¶ Return the collection node for name from the module collection.
Searches a module collection node for a collection node matching the given name.
- Parameters
modcol – A module collection node; see
getmodulecol()
.name – The name of the node to return.
-
popen
(cmdargs, stdout: Union[int, TextIO] = -1, stderr: Union[int, TextIO] = -1, stdin=<class 'object'>, **kw)[source]¶ Invoke subprocess.Popen.
Calls subprocess.Popen making sure the current working directory is in the PYTHONPATH.
You probably want to use
run()
instead.
-
run
(*cmdargs: Union[str, os.PathLike[str]], timeout: Optional[float] = None, stdin=<class 'object'>) → _pytest.pytester.RunResult[source]¶ Run a command with arguments.
Run a process using subprocess.Popen saving the stdout and stderr.
- Parameters
cmdargs – The sequence of arguments to pass to
subprocess.Popen()
, with path-like objects being converted tostr
automatically.timeout – The period in seconds after which to timeout and raise
Pytester.TimeoutExpired
stdin – Optional standard input. If bytes are given, they are sent to the process and the pipe is closed; otherwise the value is passed through to
popen
. Defaults toCLOSE_STDIN
, which translates to using a pipe (subprocess.PIPE
) that gets closed.
- Return type
-
runpython
(script) → _pytest.pytester.RunResult[source]¶ Run a python script using sys.executable as interpreter.
- Return type
-
runpytest_subprocess
(*args, timeout: Optional[float] = None) → _pytest.pytester.RunResult[source]¶ Run pytest as a subprocess with given arguments.
Any plugins added to the
plugins
list will be added using the-p
command line option. Additionally--basetemp
is used to put any temporary files and directories in a numbered directory prefixed with “runpytest-” to not conflict with the normal numbered pytest location for temporary files and directories.- Parameters
args – The sequence of arguments to pass to the pytest subprocess.
timeout – The period in seconds after which to timeout and raise
Pytester.TimeoutExpired
.
- Return type
-
class
RunResult
[source]¶ The result of running a command.
-
outlines
¶ List of lines captured from stdout.
-
errlines
¶ List of lines captured from stderr.
-
stdout
¶ LineMatcher
of stdout.Use e.g.
str(stdout)
to reconstruct stdout, or the commonly usedstdout.fnmatch_lines()
method.
-
stderr
¶ LineMatcher
of stderr.
-
duration
¶ Duration in seconds.
-
parseoutcomes
() → Dict[str, int][source]¶ Return a dictionary of outcome noun -> count from parsing the terminal output that the test process produced.
The returned nouns will always be in plural form:
======= 1 failed, 1 passed, 1 warning, 1 error in 0.13s ====
Will return
{"failed": 1, "passed": 1, "warnings": 1, "errors": 1}
.
-
-
class
LineMatcher
[source]¶ Flexible matching of text.
This is a convenience class to test large texts like the output of commands.
The constructor takes a list of lines without their trailing newlines, i.e.
text.splitlines()
.-
__str__
() → str[source]¶ Return the entire original text.
New in version 6.2: You can use
str()
in older versions.
-
fnmatch_lines_random
(lines2: Sequence[str]) → None[source]¶ Check lines exist in the output in any order (using
fnmatch.fnmatch()
).
-
re_match_lines_random
(lines2: Sequence[str]) → None[source]¶ Check lines exist in the output in any order (using
re.match()
).
-
get_lines_after
(fnline: str) → Sequence[str][source]¶ Return all lines following the given line in the text.
The given line can contain glob wildcards.
-
fnmatch_lines
(lines2: Sequence[str], *, consecutive: bool = False) → None[source]¶ Check lines exist in the output (using
fnmatch.fnmatch()
).The argument is a list of lines which have to match and can use glob wildcards. If they do not match a pytest.fail() is called. The matches and non-matches are also shown as part of the error message.
- Parameters
lines2 – String patterns to match.
consecutive – Match lines consecutively?
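A sketch of standalone use (importing from `_pytest.pytester`, as in the signatures above; the sample output lines are illustrative):

```python
from _pytest.pytester import LineMatcher

output = [
    "collected 2 items",
    "test_demo.py ..",
    "2 passed in 0.01s",
]
matcher = LineMatcher(output)
# Patterns may use glob wildcards; non-matching lines are allowed in
# between unless consecutive=True is passed. A mismatch calls pytest.fail().
matcher.fnmatch_lines(["collected * items", "* passed in *"])
```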
-
re_match_lines
(lines2: Sequence[str], *, consecutive: bool = False) → None[source]¶ Check lines exist in the output (using
re.match()
).The argument is a list of lines which have to match using
re.match
. If they do not match a pytest.fail() is called.The matches and non-matches are also shown as part of the error message.
- Parameters
lines2 – string patterns to match.
consecutive – match lines consecutively?
-
no_fnmatch_line
(pat: str) → None[source]¶ Ensure captured lines do not match the given pattern, using
fnmatch.fnmatch
.- Parameters
pat (str) – The pattern to match lines.
-
-
class
HookRecorder
[source]¶ Record all hooks called in a plugin manager.
This wraps all the hook calls in the plugin manager, recording each call before propagating the normal calls.
-
matchreport
(inamepart: str = '', names: Union[str, Iterable[str]] = ('pytest_runtest_logreport', 'pytest_collectreport'), when: Optional[str] = None) → Union[_pytest.reports.CollectReport, _pytest.reports.TestReport][source]¶ Return a testreport whose dotted import path matches.
-
testdir¶
Identical to pytester
, but provides an instance whose methods return
legacy py.path.local
objects instead when applicable.
New code should avoid using testdir
in favor of pytester
.
-
final class
Testdir
[source]¶ Similar to
Pytester
, but this class works with legacy py.path.local objects instead.All methods just forward to an internal
Pytester
instance, converting results topy.path.local
objects as necessary.-
exception
TimeoutExpired
¶
-
class
Session
(*k, **kw)¶ -
exception
Failed
¶ Signals a stop as a failed test run.
-
exception
Interrupted
¶ Signals that the test run was interrupted.
-
for ... in
collect
() → Iterator[Union[_pytest.nodes.Item, _pytest.nodes.Collector]]¶ Return a list of children (items and collectors) for this collection node.
-
classmethod
from_config
(config: _pytest.config.Config) → _pytest.main.Session¶
-
for ... in
genitems
(node: Union[_pytest.nodes.Item, _pytest.nodes.Collector]) → Iterator[_pytest.nodes.Item]¶
-
gethookproxy
(fspath: py._path.local.LocalPath)¶
-
perform_collect
(args: Optional[Sequence[str]] = None, genitems: bool = True) → Sequence[Union[_pytest.nodes.Item, _pytest.nodes.Collector]]¶ Perform the collection phase for this session.
This is called by the default
pytest_collection
hook implementation; see the documentation of this hook for more details. For testing purposes, it may also be called directly on a freshSession
.This function normally recursively expands any collectors collected from the session to their items, and only items are returned. For testing purposes, this may be suppressed by passing
genitems=False
, in which case the return value contains these collectors unexpanded, andsession.items
is empty.
-
pytest_collectreport
(report: Union[_pytest.reports.TestReport, _pytest.reports.CollectReport]) → None¶
-
pytest_runtest_logreport
(report: Union[_pytest.reports.TestReport, _pytest.reports.CollectReport]) → None¶
-
exception
-
tmpdir
¶ Temporary directory where tests are executed.
-
make_hook_recorder
(pluginmanager) → _pytest.pytester.HookRecorder[source]¶
-
chdir
() → None[source]¶ See
Pytester.chdir()
.
-
makefile
(ext, *args, **kwargs) → py._path.local.LocalPath[source]¶ See
Pytester.makefile()
.
-
makeini
(source) → py._path.local.LocalPath[source]¶ See
Pytester.makeini()
.
-
getinicfg
(source: str) → iniconfig.SectionWrapper[source]¶ See
Pytester.getinicfg()
.
-
mkdir
(name) → py._path.local.LocalPath[source]¶ See
Pytester.mkdir()
.
-
mkpydir
(name) → py._path.local.LocalPath[source]¶ See
Pytester.mkpydir()
.
-
getnode
(config: _pytest.config.Config, arg) → Optional[Union[_pytest.nodes.Item, _pytest.nodes.Collector]][source]¶ See
Pytester.getnode()
.
-
genitems
(colitems: List[Union[_pytest.nodes.Item, _pytest.nodes.Collector]]) → List[_pytest.nodes.Item][source]¶ See
Pytester.genitems()
.
-
runitem
(source)[source]¶ See
Pytester.runitem()
.
-
runpytest_inprocess
(*args, **kwargs) → _pytest.pytester.RunResult[source]¶
-
runpytest
(*args, **kwargs) → _pytest.pytester.RunResult[source]¶ See
Pytester.runpytest()
.
-
parseconfig
(*args) → _pytest.config.Config[source]¶
-
parseconfigure
(*args) → _pytest.config.Config[source]¶
-
getitem
(source, funcname='test_func')[source]¶ See
Pytester.getitem()
.
-
getitems
(source)[source]¶ See
Pytester.getitems()
.
-
collect_by_name
(modcol: _pytest.nodes.Collector, name: str) → Optional[Union[_pytest.nodes.Item, _pytest.nodes.Collector]][source]¶
-
popen
(cmdargs, stdout: Union[int, TextIO] = -1, stderr: Union[int, TextIO] = -1, stdin=<class 'object'>, **kw)[source]¶ See
Pytester.popen()
.
-
run
(*cmdargs, timeout=None, stdin=<class 'object'>) → _pytest.pytester.RunResult[source]¶ See
Pytester.run()
.
-
runpython
(script) → _pytest.pytester.RunResult[source]¶ See
Pytester.runpython()
.
-
runpytest_subprocess
(*args, timeout=None) → _pytest.pytester.RunResult[source]¶
-
spawn
(cmd: str, expect_timeout: float = 10.0) → pexpect.spawn[source]¶ See
Pytester.spawn()
.
-
exception
recwarn¶
Tutorial: Asserting warnings with the warns function
-
recwarn
()[source]¶ Return a
WarningsRecorder
instance that records all warnings emitted by test functions.See http://docs.python.org/library/warnings.html for information on warning categories.
-
class
WarningsRecorder
[source]¶ A context manager to record raised warnings.
Adapted from
warnings.catch_warnings
.-
list
¶ The list of recorded warnings.
-
Each recorded warning is an instance of warnings.WarningMessage
.
Note
DeprecationWarning
and PendingDeprecationWarning
are treated
differently; see Ensuring code triggers a deprecation warning.
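A sketch of typical usage of the fixture (the message text is illustrative):

```python
import warnings

def test_old_api_warns(recwarn):
    warnings.warn("use new_api() instead", DeprecationWarning)
    assert len(recwarn) == 1
    # pop() retrieves (and removes) the first warning of the given category.
    w = recwarn.pop(DeprecationWarning)
    assert issubclass(w.category, DeprecationWarning)
    assert "new_api" in str(w.message)
```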
tmp_path¶
Tutorial: Temporary directories and files
-
tmp_path
()[source]¶ Return a temporary directory path object which is unique to each test function invocation, created as a subdirectory of the base temporary directory.
By default, a new base temporary directory is created each test session, and old bases are removed after 3 sessions, to aid in debugging. If
--basetemp
is used then it is cleared each session. See The default base temporary directory.The returned object is a
pathlib.Path
object.
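A sketch of typical usage (the file name is illustrative):

```python
def test_write_report(tmp_path):
    # tmp_path is a pathlib.Path unique to this test invocation.
    report = tmp_path / "report.txt"
    report.write_text("1 passed")
    assert report.read_text() == "1 passed"
    assert report.parent == tmp_path
```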
tmp_path_factory¶
Tutorial: The tmp_path_factory fixture
tmp_path_factory
is an instance of TempPathFactory
:
tmpdir¶
Tutorial: Temporary directories and files
-
tmpdir
()[source]¶ Return a temporary directory path object which is unique to each test function invocation, created as a subdirectory of the base temporary directory.
By default, a new base temporary directory is created each test session, and old bases are removed after 3 sessions, to aid in debugging. If
--basetemp
is used then it is cleared each session. See The default base temporary directory.The returned object is a py.path.local path object.
tmpdir_factory¶
Tutorial: The ‘tmpdir_factory’ fixture
tmpdir_factory
is an instance of TempdirFactory
:
Hooks¶
Tutorial: Writing plugins.
Reference to all hooks which can be implemented by conftest.py files and plugins.
Bootstrapping hooks¶
Bootstrapping hooks called for plugins registered early enough (internal and setuptools plugins).
-
pytest_load_initial_conftests
(early_config: Config, parser: Parser, args: List[str]) → None[source]¶ Called to implement the loading of initial conftest files ahead of command line option parsing.
Note
This hook will not be called for
conftest.py
files, only for setuptools plugins.- Parameters
early_config (_pytest.config.Config) – The pytest config object.
args (List[str]) – Arguments passed on the command line.
parser (_pytest.config.argparsing.Parser) – To add command line options.
-
pytest_cmdline_preparse
(config: Config, args: List[str]) → None[source]¶ (Deprecated) Modify command line arguments before option parsing.
This hook is considered deprecated and will be removed in a future pytest version. Consider using
pytest_load_initial_conftests()
instead.Note
This hook will not be called for
conftest.py
files, only for setuptools plugins.- Parameters
config (_pytest.config.Config) – The pytest config object.
args (List[str]) – Arguments passed on the command line.
-
pytest_cmdline_parse
(pluginmanager: PytestPluginManager, args: List[str]) → Optional[Config][source]¶ Return an initialized config object, parsing the specified args.
Stops at first non-None result, see firstresult: stop at first non-None result.
Note
This hook will only be called for plugin classes passed to the
plugins
arg when using pytest.main to perform an in-process test run.- Parameters
pluginmanager (_pytest.config.PytestPluginManager) – Pytest plugin manager.
args (List[str]) – List of arguments passed on the command line.
-
pytest_cmdline_main
(config: Config) → Optional[Union[ExitCode, int]][source]¶ Called for performing the main command line action. The default implementation will invoke the configure hooks and runtest_mainloop.
Note
This hook will not be called for
conftest.py
files, only for setuptools plugins.Stops at first non-None result, see firstresult: stop at first non-None result.
- Parameters
config (_pytest.config.Config) – The pytest config object.
Initialization hooks¶
Initialization hooks called for plugins and conftest.py
files.
-
pytest_addoption
(parser: Parser, pluginmanager: PytestPluginManager) → None[source]¶ Register argparse-style options and ini-style config values, called once at the beginning of a test run.
Note
This function should be implemented only in plugins or
conftest.py
files situated at the tests root directory due to how pytest discovers plugins during startup.- Parameters
parser (_pytest.config.argparsing.Parser) – To add command line options, call
parser.addoption(...)
. To add ini-file values callparser.addini(...)
.pluginmanager (_pytest.config.PytestPluginManager) – pytest plugin manager, which can be used to install
hookspec()
’s orhookimpl()
’s and allow one plugin to call another plugin’s hooks to change how command line options are added.
Options can later be accessed through the
config
object, respectively:config.getoption(name)
to retrieve the value of a command line option.config.getini(name)
to retrieve a value read from an ini-style file.
The config object is passed around on many internal objects via the
.config
attribute or can be retrieved as thepytestconfig
fixture.Note
This hook is incompatible with
hookwrapper=True
.
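A sketch of a conftest.py registering both kinds of values and reading them back (the names --runslow, app_env, and the app_settings fixture are illustrative):

```python
import pytest

def pytest_addoption(parser):
    # Command line option, read back via config.getoption().
    parser.addoption(
        "--runslow", action="store_true", default=False,
        help="also run tests marked as slow",
    )
    # Ini-file value, read back via config.getini().
    parser.addini("app_env", help="environment name used by tests", default="dev")

@pytest.fixture
def app_settings(pytestconfig):
    # Read the registered values back through the config object.
    return {
        "runslow": pytestconfig.getoption("runslow"),
        "app_env": pytestconfig.getini("app_env"),
    }
```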
-
pytest_addhooks
(pluginmanager: PytestPluginManager) → None[source]¶ Called at plugin registration time to allow adding new hooks via a call to
pluginmanager.add_hookspecs(module_or_class, prefix)
.- Parameters
pluginmanager (_pytest.config.PytestPluginManager) – pytest plugin manager.
Note
This hook is incompatible with
hookwrapper=True
.
-
pytest_configure
(config: Config) → None[source]¶ Allow plugins and conftest files to perform initial configuration.
This hook is called for every plugin and initial conftest file after command line options have been parsed.
After that, the hook is called for other conftest files as they are imported.
Note
This hook is incompatible with
hookwrapper=True
.- Parameters
config (_pytest.config.Config) – The pytest config object.
-
pytest_unconfigure
(config: Config) → None[source]¶ Called before test process is exited.
- Parameters
config (_pytest.config.Config) – The pytest config object.
-
pytest_sessionstart
(session: Session) → None[source]¶ Called after the
Session
object has been created and before performing collection and entering the run test loop.- Parameters
session (pytest.Session) – The pytest session object.
-
pytest_sessionfinish
(session: Session, exitstatus: Union[int, ExitCode]) → None[source]¶ Called after whole test run finished, right before returning the exit status to the system.
- Parameters
session (pytest.Session) – The pytest session object.
exitstatus (int) – The status which pytest will return to the system.
-
pytest_plugin_registered
(plugin: _PluggyPlugin, manager: PytestPluginManager) → None[source]¶ A new pytest plugin got registered.
- Parameters
plugin – The plugin module or instance.
manager (_pytest.config.PytestPluginManager) – pytest plugin manager.
Note
This hook is incompatible with
hookwrapper=True
.
Collection hooks¶
pytest
calls the following hooks for collecting files and directories:
-
pytest_collection
(session: Session) → Optional[object][source]¶ Perform the collection phase for the given session.
Stops at first non-None result, see firstresult: stop at first non-None result. The return value is not used, but only stops further processing.
The default collection phase is this (see individual hooks for full details):
Starting from
session
as the initial collector:
pytest_collectstart(collector)
report = pytest_make_collect_report(collector)
pytest_exception_interact(collector, call, report)
if an interactive exception occurredFor each collected node:
If an item,
pytest_itemcollected(item)
If a collector, recurse into it.
pytest_collectreport(report)
pytest_collection_modifyitems(session, config, items)
pytest_deselected(items)
for any deselected items (may be called multiple times)
pytest_collection_finish(session)
Set
session.items
to the list of collected itemsSet
session.testscollected
to the number of collected items
You can implement this hook to only perform some action before collection, for example the terminal plugin uses it to start displaying the collection counter (and returns
None
).- Parameters
session (pytest.Session) – The pytest session object.
-
pytest_ignore_collect
(path: py._path.local.LocalPath, config: Config) → Optional[bool][source]¶ Return True to prevent considering this path for collection.
This hook is consulted for all files and directories prior to calling more specific hooks.
Stops at first non-None result, see firstresult: stop at first non-None result.
- Parameters
path (py.path.local) – The path to analyze.
config (_pytest.config.Config) – The pytest config object.
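A minimal conftest.py sketch of this hook, assuming a hypothetical `manual_tests` directory name: any path under that directory is excluded from collection, and `None` is returned otherwise so other hooks and the default behavior still apply.

```python
# Hypothetical sketch: keep pytest from collecting anything under a
# directory named "manual_tests". Returning None means "no opinion".
def pytest_ignore_collect(path, config):
    if "manual_tests" in str(path).split("/"):
        return True   # ignore this path entirely
    return None       # let other hooks / defaults decide
```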
-
pytest_collect_file
(path: py._path.local.LocalPath, parent: Collector) → Optional[Collector][source]¶ Create a Collector for the given path, or None if not relevant.
The new node needs to have the specified
parent
as a parent.- Parameters
path (py.path.local) – The path to collect.
-
pytest_pycollect_makemodule
(path: py._path.local.LocalPath, parent) → Optional[Module][source]¶ Return a Module collector or None for the given path.
This hook will be called for each matching test module path. The pytest_collect_file hook needs to be used if you want to create test modules for files that do not match as a test module.
Stops at first non-None result, see firstresult: stop at first non-None result.
- Parameters
path (py.path.local) – The path of module to collect.
For influencing the collection of objects in Python modules you can use the following hook:
-
pytest_pycollect_makeitem
(collector: PyCollector, name: str, obj: object) → Union[None, Item, Collector, List[Union[Item, Collector]]][source]¶ Return a custom item/collector for a Python object in a module, or None.
Stops at first non-None result, see firstresult: stop at first non-None result.
-
pytest_generate_tests
(metafunc: Metafunc) → None[source]¶ Generate (multiple) parametrized calls to a test function.
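A sketch of an implementation, assuming a hypothetical `stringinput` fixture name: when a test requests that fixture, it is parametrized with two sample values via `metafunc.parametrize`.

```python
# Hypothetical sketch: generate one invocation per value for tests that
# request a fixture named "stringinput".
def pytest_generate_tests(metafunc):
    if "stringinput" in metafunc.fixturenames:
        metafunc.parametrize("stringinput", ["hello", "world"])
```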
-
pytest_make_parametrize_id
(config: Config, val: object, argname: str) → Optional[str][source]¶ Return a user-friendly string representation of the given
val
that will be used by @pytest.mark.parametrize calls, or None if the hook doesn’t know aboutval
.The parameter name is available as
argname
, if required.Stops at first non-None result, see firstresult: stop at first non-None result.
- Parameters
config (_pytest.config.Config) – The pytest config object.
val – The parametrized value.
argname (str) – The automatic parameter name produced by pytest.
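A sketch of an implementation: give `Decimal` values a readable id and return `None` for every other type so the auto-generated ids are kept. The `price` argname in the usage below is illustrative.

```python
from decimal import Decimal

# Sketch: produce ids like "price=1.50" for Decimal parametrize values;
# None tells pytest to fall back to its default id generation.
def pytest_make_parametrize_id(config, val, argname):
    if isinstance(val, Decimal):
        return f"{argname}={val}"
    return None
```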
After collection is complete, you can modify the order of items, delete or otherwise amend the test items:
-
pytest_collection_modifyitems
(session: Session, config: Config, items: List[Item]) → None[source]¶ Called after collection has been performed. May filter or re-order the items in-place.
- Parameters
session (pytest.Session) – The pytest session object.
config (_pytest.config.Config) – The pytest config object.
items (List[pytest.Item]) – List of item objects.
Note
If this hook is implemented in conftest.py
files, it always receives all collected items, not only those
under the conftest.py
where it is implemented.
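A common use of this hook, sketched here with a hypothetical `slow` marker name: re-order the collected items in place so that marked tests run last. Note that `items` must be mutated in place; rebinding the name has no effect.

```python
# Hypothetical sketch for a conftest.py: move items carrying a "slow"
# marker to the end of the run, keeping relative order otherwise.
def pytest_collection_modifyitems(session, config, items):
    slow = [item for item in items if item.get_closest_marker("slow")]
    fast = [item for item in items if not item.get_closest_marker("slow")]
    items[:] = fast + slow   # in-place modification is required
```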
-
pytest_collection_finish
(session: Session) → None[source]¶ Called after collection has been performed and modified.
- Parameters
session (pytest.Session) – The pytest session object.
Test running (runtest) hooks¶
All runtest related hooks receive a pytest.Item
object.
-
pytest_runtestloop
(session: Session) → Optional[object][source]¶ Perform the main runtest loop (after collection finished).
The default hook implementation performs the runtest protocol for all items collected in the session (
session.items
), unless the collection failed or thecollectonly
pytest option is set.If at any point
pytest.exit()
is called, the loop is terminated immediately.If at any point
session.shouldfail
orsession.shouldstop
are set, the loop is terminated after the runtest protocol for the current item is finished.- Parameters
session (pytest.Session) – The pytest session object.
Stops at first non-None result, see firstresult: stop at first non-None result. The return value is not used, but only stops further processing.
-
pytest_runtest_protocol
(item: Item, nextitem: Optional[Item]) → Optional[object][source]¶ Perform the runtest protocol for a single test item.
The default runtest protocol is this (see individual hooks for full details):
pytest_runtest_logstart(nodeid, location)
- Setup phase:
call = pytest_runtest_setup(item)
(wrapped inCallInfo(when="setup")
)report = pytest_runtest_makereport(item, call)
pytest_runtest_logreport(report)
pytest_exception_interact(call, report)
if an interactive exception occurred
- Call phase, if the setup passed and the
setuponly
pytest option is not set: call = pytest_runtest_call(item)
(wrapped inCallInfo(when="call")
)report = pytest_runtest_makereport(item, call)
pytest_runtest_logreport(report)
pytest_exception_interact(call, report)
if an interactive exception occurred
- Teardown phase:
call = pytest_runtest_teardown(item, nextitem)
(wrapped inCallInfo(when="teardown")
)report = pytest_runtest_makereport(item, call)
pytest_runtest_logreport(report)
pytest_exception_interact(call, report)
if an interactive exception occurred
pytest_runtest_logfinish(nodeid, location)
- Parameters
item – Test item for which the runtest protocol is performed.
nextitem – The scheduled-to-be-next test item (or None if this is the end my friend).
Stops at first non-None result, see firstresult: stop at first non-None result. The return value is not used, but only stops further processing.
-
pytest_runtest_logstart
(nodeid: str, location: Tuple[str, Optional[int], str]) → None[source]¶ Called at the start of running the runtest protocol for a single item.
See
pytest_runtest_protocol()
for a description of the runtest protocol.- Parameters
nodeid (str) – Full node ID of the item.
location – A tuple of
(filename, lineno, testname)
.
-
pytest_runtest_logfinish
(nodeid: str, location: Tuple[str, Optional[int], str]) → None[source]¶ Called at the end of running the runtest protocol for a single item.
See
pytest_runtest_protocol()
for a description of the runtest protocol.- Parameters
nodeid (str) – Full node ID of the item.
location – A tuple of
(filename, lineno, testname)
.
-
pytest_runtest_setup
(item: Item) → None[source]¶ Called to perform the setup phase for a test item.
The default implementation runs
setup()
onitem
and all of its parents (which haven’t been set up yet). This includes obtaining the values of fixtures required by the item (which haven’t been obtained yet).
-
pytest_runtest_call
(item: Item) → None[source]¶ Called to run the test for test item (the call phase).
The default implementation calls
item.runtest()
.
-
pytest_runtest_teardown
(item: Item, nextitem: Optional[Item]) → None[source]¶ Called to perform the teardown phase for a test item.
The default implementation runs the finalizers and calls
teardown()
onitem
and all of its parents (which need to be torn down). This includes running the teardown phase of fixtures required by the item (if they go out of scope).- Parameters
nextitem – The scheduled-to-be-next test item (None if no further test item is scheduled). This argument can be used to perform exact teardowns, i.e. calling just enough finalizers so that nextitem only needs to call setup-functions.
-
pytest_runtest_makereport
(item: Item, call: CallInfo[None]) → Optional[TestReport][source]¶ Called to create a
_pytest.reports.TestReport
for each of the setup, call and teardown runtest phases of a test item.See
pytest_runtest_protocol()
for a description of the runtest protocol.Stops at first non-None result, see firstresult: stop at first non-None result.
For deeper understanding you may look at the default implementation of
these hooks in _pytest.runner
and maybe also
in _pytest.pdb
which interacts with _pytest.capture
and its input/output capturing in order to immediately drop
into interactive debugging when a test failure occurs.
-
pytest_pyfunc_call
(pyfuncitem: Function) → Optional[object][source]¶ Call underlying test function.
Stops at first non-None result, see firstresult: stop at first non-None result.
Reporting hooks¶
Session related reporting hooks:
-
pytest_make_collect_report
(collector: Collector) → Optional[CollectReport][source]¶ Perform
collector.collect()
and return a CollectReport.Stops at first non-None result, see firstresult: stop at first non-None result.
-
pytest_deselected
(items: Sequence[Item]) → None[source]¶ Called for deselected test items, e.g. by keyword.
May be called multiple times.
-
pytest_report_header
(config: Config, startdir: py._path.local.LocalPath) → Union[str, List[str]][source]¶ Return a string or list of strings to be displayed as header info for terminal reporting.
- Parameters
config (_pytest.config.Config) – The pytest config object.
startdir (py.path.local) – The starting dir.
Note
Lines returned by a plugin are displayed before those of plugins which ran before it. If you want to have your line(s) displayed first, use trylast=True.
Note
This function should be implemented only in plugins or
conftest.py
files situated at the tests root directory due to how pytest discovers plugins during startup.
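A sketch of an implementation: returning a list renders one header line per string. The project name shown is a placeholder, not a pytest convention.

```python
# Hypothetical sketch: contribute two extra lines to the report header.
def pytest_report_header(config, startdir):
    return ["project: example-project", f"start dir: {startdir}"]
```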
-
pytest_report_collectionfinish
(config: Config, startdir: py._path.local.LocalPath, items: Sequence[Item]) → Union[str, List[str]][source]¶ Return a string or list of strings to be displayed after collection has finished successfully.
These strings will be displayed after the standard “collected X items” message.
New in version 3.2.
- Parameters
config (_pytest.config.Config) – The pytest config object.
startdir (py.path.local) – The starting dir.
items – List of pytest items that are going to be executed; this list should not be modified.
Note
Lines returned by a plugin are displayed before those of plugins which ran before it. If you want to have your line(s) displayed first, use trylast=True.
-
pytest_report_teststatus
(report: Union[CollectReport, TestReport], config: Config) → Tuple[str, str, Union[str, Mapping[str, bool]]][source]¶ Return result-category, shortletter and verbose word for status reporting.
The result-category is a category in which to count the result, for example “passed”, “skipped”, “error” or the empty string.
The shortletter is shown as testing progresses, for example “.”, “s”, “E” or the empty string.
The verbose word is shown as testing progresses in verbose mode, for example “PASSED”, “SKIPPED”, “ERROR” or the empty string.
pytest may style these implicitly according to the report outcome. To provide explicit styling, return a tuple for the verbose word, for example
"rerun", "R", ("RERUN", {"yellow": True})
.- Parameters
report – The report object whose status is to be returned.
config (_pytest.config.Config) – The pytest config object.
Stops at first non-None result, see firstresult: stop at first non-None result.
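A sketch mapping a hypothetical custom `"rerun"` outcome (as a rerun-style plugin might set) onto the three-part status described above, using the styled-tuple form for the verbose word; `None` is returned for outcomes this implementation does not handle.

```python
# Hypothetical sketch: report a custom "rerun" outcome as category "rerun",
# short letter "R", and a yellow-styled verbose word.
def pytest_report_teststatus(report, config):
    if report.outcome == "rerun":
        return "rerun", "R", ("RERUN", {"yellow": True})
    return None
```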
-
pytest_terminal_summary
(terminalreporter: TerminalReporter, exitstatus: ExitCode, config: Config) → None[source]¶ Add a section to terminal summary reporting.
- Parameters
terminalreporter (_pytest.terminal.TerminalReporter) – The internal terminal reporter object.
exitstatus (int) – The exit status that will be reported back to the OS.
config (_pytest.config.Config) – The pytest config object.
New in version 4.2: The
config
parameter.
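A sketch of an implementation: write a custom section through the reporter's `write_sep` and `write_line` methods. The section title is illustrative.

```python
# Hypothetical sketch: append a custom section to the terminal summary.
def pytest_terminal_summary(terminalreporter, exitstatus, config):
    terminalreporter.write_sep("-", "custom summary")
    terminalreporter.write_line(f"finished with exit status {exitstatus}")
```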
-
pytest_fixture_setup
(fixturedef: FixtureDef[Any], request: SubRequest) → Optional[object][source]¶ Perform fixture setup execution.
- Returns
The return value of the call to the fixture function.
Stops at first non-None result, see firstresult: stop at first non-None result.
Note
If the fixture function returns None, other implementations of this hook function will continue to be called, according to the behavior of the firstresult: stop at first non-None result option.
-
pytest_fixture_post_finalizer
(fixturedef: FixtureDef[Any], request: SubRequest) → None[source]¶ Called after fixture teardown, but before the cache is cleared, so the fixture result
fixturedef.cached_result
is still available (notNone
).
-
pytest_warning_captured
(warning_message: warnings.WarningMessage, when: Literal[‘config’, ‘collect’, ‘runtest’], item: Optional[Item], location: Optional[Tuple[str, int, str]]) → None[source]¶ (Deprecated) Process a warning captured by the internal pytest warnings plugin.
Deprecated since version 6.0.
This hook is considered deprecated and will be removed in a future pytest version. Use
pytest_warning_recorded()
instead.- Parameters
warning_message (warnings.WarningMessage) – The captured warning. This is the same object produced by
warnings.catch_warnings()
, and contains the same attributes as the parameters ofwarnings.showwarning()
.when (str) –
Indicates when the warning was captured. Possible values:
"config"
: during pytest configuration/initialization stage."collect"
: during test collection."runtest"
: during test execution.
item (pytest.Item|None) – The item being executed if
when
is"runtest"
, otherwiseNone
.location (tuple) – When available, holds information about the execution context of the captured warning (filename, linenumber, function).
function
evaluates to <module> when the execution context is at the module level.
-
pytest_warning_recorded
(warning_message: warnings.WarningMessage, when: Literal[‘config’, ‘collect’, ‘runtest’], nodeid: str, location: Optional[Tuple[str, int, str]]) → None[source]¶ Process a warning captured by the internal pytest warnings plugin.
- Parameters
warning_message (warnings.WarningMessage) – The captured warning. This is the same object produced by
warnings.catch_warnings()
, and contains the same attributes as the parameters ofwarnings.showwarning()
.when (str) –
Indicates when the warning was captured. Possible values:
"config"
: during pytest configuration/initialization stage."collect"
: during test collection."runtest"
: during test execution.
nodeid (str) – Full id of the item.
location (tuple|None) – When available, holds information about the execution context of the captured warning (filename, linenumber, function).
function
evaluates to <module> when the execution context is at the module level.
New in version 6.0.
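A sketch of an implementation: bucket captured warnings by the phase in which they occurred. The module-level dict is purely illustrative; a real plugin would likely store this on a plugin object.

```python
import warnings

# Hypothetical sketch: record (nodeid, message) pairs keyed by the "when"
# value documented above ("config", "collect", or "runtest").
RECORDED = {"config": [], "collect": [], "runtest": []}

def pytest_warning_recorded(warning_message, when, nodeid, location):
    RECORDED[when].append((nodeid, str(warning_message.message)))
```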
Central hook for reporting about test execution:
-
pytest_runtest_logreport
(report: TestReport) → None[source]¶ Process the
_pytest.reports.TestReport
produced for each of the setup, call and teardown runtest phases of an item.See
pytest_runtest_protocol()
for a description of the runtest protocol.
Assertion related hooks:
-
pytest_assertrepr_compare
(config: Config, op: str, left: object, right: object) → Optional[List[str]][source]¶ Return explanation for comparisons in failing assert expressions.
Return None for no custom explanation, otherwise return a list of strings. The strings will be joined by newlines but any newlines in a string will be escaped. Note that all but the first line will be indented slightly, the intention is for the first line to be a summary.
- Parameters
config (_pytest.config.Config) – The pytest config object.
-
pytest_assertion_pass
(item: Item, lineno: int, orig: str, expl: str) → None[source]¶ (Experimental) Called whenever an assertion passes.
New in version 5.0.
Use this hook to do some processing after a passing assertion. The original assertion information is available in the
orig
string and the pytest introspected assertion information is available in theexpl
string.This hook must be explicitly enabled by the
enable_assertion_pass_hook
ini-file option:[pytest] enable_assertion_pass_hook=true
You need to clean the .pyc files in your project directory and interpreter libraries when enabling this option, as assertions will need to be rewritten.
- Parameters
item (pytest.Item) – pytest item object of current test.
lineno (int) – Line number of the assert statement.
orig (str) – String with the original assertion.
expl (str) – String with the assert explanation.
Note
This hook is experimental, so its parameters or even the hook itself might be changed/removed without warning in any future pytest release.
If you find this hook useful, please share your feedback in an issue.
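A sketch of an implementation: collect each passing assertion for later inspection. The module-level list is illustrative; a real plugin might log or stream these instead.

```python
# Hypothetical sketch: remember every passing assertion's line number,
# original source, and pytest's introspected explanation.
PASSING_ASSERTS = []

def pytest_assertion_pass(item, lineno, orig, expl):
    PASSING_ASSERTS.append((lineno, orig, expl))
```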
Debugging/Interaction hooks¶
There are few hooks which can be used for special reporting or interaction with exceptions:
-
pytest_internalerror
(excrepr: ExceptionRepr, excinfo: ExceptionInfo[BaseException]) → Optional[bool][source]¶ Called for internal errors.
Return True to suppress the fallback handling of printing an INTERNALERROR message directly to sys.stderr.
-
pytest_keyboard_interrupt
(excinfo: ExceptionInfo[Union[KeyboardInterrupt, Exit]]) → None[source]¶ Called for keyboard interrupt.
-
pytest_exception_interact
(node: Union[Item, Collector], call: CallInfo[Any], report: Union[CollectReport, TestReport]) → None[source]¶ Called when an exception was raised which can potentially be interactively handled.
May be called during collection (see
pytest_make_collect_report()
), in which casereport
is a_pytest.reports.CollectReport
.May be called during runtest of an item (see
pytest_runtest_protocol()
), in which casereport
is a_pytest.reports.TestReport
.This hook is not called if the exception that was raised is an internal exception like
skip.Exception
.
Objects¶
Full reference to objects accessible from fixtures or hooks.
CallInfo¶
-
final class
CallInfo
[source]¶ Result/Exception info of a function invocation.
- Parameters
result (T) – The return value of the call, if it didn’t raise. Can only be accessed if excinfo is None.
excinfo (Optional[ExceptionInfo]) – The captured exception of the call, if it raised.
start (float) – The system time when the call started, in seconds since the epoch.
stop (float) – The system time when the call ended, in seconds since the epoch.
duration (float) – The call duration, in seconds.
when (str) – The context of invocation: “setup”, “call”, “teardown”, …
Collector¶
-
class
Collector
[source]¶ Bases:
_pytest.nodes.Node
Collector instances create children through collect() and thus iteratively build a tree.
-
exception
CollectError
[source]¶ Bases:
Exception
An error during collection, contains a custom message.
-
collect
() → Iterable[Union[_pytest.nodes.Item, _pytest.nodes.Collector]][source]¶ Return a list of children (items and collectors) for this collection node.
-
repr_failure
(excinfo: _pytest._code.code.ExceptionInfo[BaseException]) → Union[str, _pytest._code.code.TerminalRepr][source]¶ Return a representation of a collection failure.
- Parameters
excinfo – Exception information for the failure.
-
exception
CollectReport¶
-
final class
CollectReport
[source]¶ Bases:
_pytest.reports.BaseReport
Collection report object.
-
outcome
¶ Test outcome, always one of “passed”, “failed”, “skipped”.
-
longrepr
: Union[None, _pytest._code.code.ExceptionInfo[BaseException], Tuple[str, int, str], str, _pytest._code.code.TerminalRepr]¶ None or a failure representation.
-
result
¶ The collected items and collection nodes.
-
caplog
¶ Return captured log lines, if log capturing is enabled.
New in version 3.5.
-
capstderr
¶ Return captured text from stderr, if capturing is enabled.
New in version 3.0.
-
capstdout
¶ Return captured text from stdout, if capturing is enabled.
New in version 3.0.
-
count_towards_summary
¶ Experimental Whether this report should be counted towards the totals shown at the end of the test session: “1 passed, 1 failure, etc”.
Note
This function is considered experimental, so beware that it is subject to changes even in patch releases.
-
head_line
¶ Experimental The head line shown with longrepr output for this report, more commonly during traceback representation during failures:
________ Test.foo ________
In the example above, the head_line is “Test.foo”.
Note
This function is considered experimental, so beware that it is subject to changes even in patch releases.
-
longreprtext
¶ Read-only property that returns the full string representation of
longrepr
.New in version 3.0.
-
Config¶
-
final class
Config
[source]¶ Access to configuration values, pluginmanager and plugin hooks.
- Parameters
pluginmanager (PytestPluginManager) –
invocation_params (InvocationParams) – Object containing parameters regarding the
pytest.main()
invocation.
-
final class
InvocationParams
(args: Iterable[str], plugins: Optional[Sequence[Union[str, object]]], dir: pathlib.Path)[source]¶ Holds parameters passed during
pytest.main()
.The object attributes are read-only.
New in version 5.1.
Note
Note that the environment variable
PYTEST_ADDOPTS
and theaddopts
ini option are handled by pytest, not being included in theargs
attribute.Plugins accessing
InvocationParams
must be aware of that.-
args
¶ The command-line arguments as passed to
pytest.main()
.- Type
Tuple[str, ...]
-
dir
¶ The directory from which
pytest.main()
was invoked.- Type
-
-
option
¶ Access to command line option as attributes.
- Type
-
invocation_params
¶ The parameters with which pytest was invoked.
- Type
-
pluginmanager
¶ The plugin manager handles plugin registration and hook invocation.
- Type
-
invocation_dir
¶ The directory from which pytest was invoked.
Prefer to use
invocation_params.dir
, which is apathlib.Path
.- Type
py.path.local
-
rootdir
¶ The path to the rootdir.
Prefer to use
rootpath
, which is apathlib.Path
.- Type
py.path.local
-
inipath
¶ The path to the configfile.
- Type
Optional[pathlib.Path]
New in version 6.1.
-
inifile
¶ The path to the configfile.
Prefer to use
inipath
, which is apathlib.Path
.- Type
Optional[py.path.local]
-
add_cleanup
(func: Callable[[], None]) → None[source]¶ Add a function to be called when the config object gets out of use (usually coinciding with pytest_unconfigure).
-
classmethod
fromdictargs
(option_dict, args) → _pytest.config.Config[source]¶ Constructor usable for subprocesses.
-
for ... in
pytest_collection
() → Generator[None, None, None][source]¶ Validate invalid ini keys after collection is done so we take into account options added by late-loading conftest files.
-
issue_config_time_warning
(warning: Warning, stacklevel: int) → None[source]¶ Issue and handle a warning during the “configure” stage.
During
pytest_configure
we can’t capture warnings using thecatch_warnings_for_item
function because it is not possible to have hookwrappers aroundpytest_configure
.This function is mainly intended for plugins that need to issue warnings during
pytest_configure
(or similar stages).- Parameters
warning – The warning instance.
stacklevel – stacklevel forwarded to warnings.warn.
-
addinivalue_line
(name: str, line: str) → None[source]¶ Add a line to an ini-file option. The option must have been declared but might not yet be set in which case the line becomes the first line in its value.
-
getini
(name: str)[source]¶ Return configuration value from an ini file.
If the specified name hasn’t been registered through a prior
parser.addini
call (usually from a plugin), a ValueError is raised.
-
getoption
(name: str, default=<NOTSET>, skip: bool = False)[source]¶ Return command line option value.
- Parameters
name – Name of the option. You may also specify the literal
--OPT
option instead of the “dest” option name.default – Default value if no option of that name exists.
skip – If True, raise pytest.skip if the option does not exist or has a None value.
ExceptionInfo¶
-
final class
ExceptionInfo
(excinfo: Optional[Tuple[Type[_E], _E, traceback]], striptext: str = '', traceback: Optional[_pytest._code.code.Traceback] = None)[source]¶ Wraps sys.exc_info() objects and offers help for navigating the traceback.
-
classmethod
from_exc_info
(exc_info: Tuple[Type[_E], _E, traceback], exprinfo: Optional[str] = None) → _pytest._code.code.ExceptionInfo[_E][source]¶ Return an ExceptionInfo for an existing exc_info tuple.
Warning
Experimental API
- Parameters
exprinfo – A text string helping to determine if we should strip
AssertionError
from the output. Defaults to the exception message/__str__()
.
-
classmethod
from_current
(exprinfo: Optional[str] = None) → _pytest._code.code.ExceptionInfo[BaseException][source]¶ Return an ExceptionInfo matching the current traceback.
Warning
Experimental API
- Parameters
exprinfo – A text string helping to determine if we should strip
AssertionError
from the output. Defaults to the exception message/__str__()
.
-
classmethod
for_later
() → _pytest._code.code.ExceptionInfo[_E][source]¶ Return an unfilled ExceptionInfo.
-
fill_unfilled
(exc_info: Tuple[Type[_E], _E, traceback]) → None[source]¶ Fill an unfilled ExceptionInfo created with
for_later()
.
-
type
¶ The exception class.
-
value
¶ The exception value.
-
tb
¶ The exception raw traceback.
-
typename
¶ The type name of the exception.
-
traceback
¶ The traceback.
-
exconly
(tryshort: bool = False) → str[source]¶ Return the exception as a string.
When ‘tryshort’ resolves to True, and the exception is a _pytest._code._AssertionError, only the actual exception part of the exception representation is returned (so ‘AssertionError: ‘ is removed from the beginning).
-
errisinstance
(exc: Union[Type[BaseException], Tuple[Type[BaseException], …]]) → bool[source]¶ Return True if the exception is an instance of exc.
Consider using
isinstance(excinfo.value, exc)
instead.
-
getrepr
(showlocals: bool = False, style: _TracebackStyle = 'long', abspath: bool = False, tbfilter: bool = True, funcargs: bool = False, truncate_locals: bool = True, chain: bool = True) → Union[ReprExceptionInfo, ExceptionChainRepr][source]¶ Return str()able representation of this exception info.
- Parameters
showlocals (bool) – Show locals per traceback entry. Ignored if
style=="native"
.style (str) – long|short|no|native|value traceback style.
abspath (bool) – If paths should be changed to absolute or left unchanged.
tbfilter (bool) – Hide entries that contain a local variable
__tracebackhide__==True
. Ignored ifstyle=="native"
.funcargs (bool) – Show fixtures (“funcargs” for legacy purposes) per traceback entry.
truncate_locals (bool) – With
showlocals==True
, make sure locals can be safely represented as strings.chain (bool) – If chained exceptions in Python 3 should be shown.
Changed in version 3.9: Added the
chain
parameter.
-
match
(regexp: Union[str, Pattern[str]]) → Literal[True][source]¶ Check whether the regular expression
regexp
matches the string representation of the exception usingre.search()
.If it matches
True
is returned, otherwise anAssertionError
is raised.
-
classmethod
ExitCode¶
-
final class
ExitCode
(value)[source]¶ Encodes the valid exit codes by pytest.
Currently users and plugins may supply other exit codes as well.
New in version 5.0.
-
OK
= 0¶ Tests passed.
-
TESTS_FAILED
= 1¶ Tests failed.
-
INTERRUPTED
= 2¶ pytest was interrupted.
-
INTERNAL_ERROR
= 3¶ An internal error got in the way.
-
USAGE_ERROR
= 4¶ pytest was misused.
-
NO_TESTS_COLLECTED
= 5¶ pytest couldn’t find tests.
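The six values above can be mirrored as a plain `IntEnum` stand-in, e.g. for interpreting pytest's exit status in a wrapper script without importing pytest itself (the class name here is a hypothetical stand-in for `pytest.ExitCode`):

```python
import enum

# Stand-in mirroring the documented pytest.ExitCode values.
class ExitCodeSketch(enum.IntEnum):
    OK = 0                  # Tests passed.
    TESTS_FAILED = 1        # Tests failed.
    INTERRUPTED = 2         # pytest was interrupted.
    INTERNAL_ERROR = 3      # An internal error got in the way.
    USAGE_ERROR = 4         # pytest was misused.
    NO_TESTS_COLLECTED = 5  # pytest couldn't find tests.
```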
-
File¶
-
class
File
[source]¶ Bases:
_pytest.nodes.FSCollector
Base class for collecting tests from a file.
Function¶
-
class
Function
[source]¶ Bases:
_pytest.python.PyobjMixin
,_pytest.nodes.Item
An Item responsible for setting up and executing a Python test function.
- param name:
The full function name, including any decorations like those added by parametrization (
my_func[my_param]
).- param parent:
The parent Node.
- param config:
The pytest Config object.
- param callspec:
If given, this function has been parametrized and the callspec contains meta information about the parametrization.
- param callobj:
If given, the object which will be called when the Function is invoked, otherwise the callobj will be obtained from
parent
usingoriginalname
.- param keywords:
Keywords bound to the function object for “-k” matching.
- param session:
The pytest Session object.
- param fixtureinfo:
Fixture information already resolved at this fixture node.
- param originalname:
The attribute name to use for accessing the underlying function object. Defaults to
name
. Set this if name is different from the original name, for example when it contains decorations like those added by parametrization (my_func[my_param]
).
-
originalname
¶ Original function name, without any decorations (for example parametrization adds a
"[...]"
suffix to function names), used to access the underlying function object fromparent
(in casecallobj
is not given explicitly).New in version 3.0.
-
function
¶ Underlying python ‘function’ object.
-
repr_failure
(excinfo: _pytest._code.code.ExceptionInfo[BaseException]) → Union[str, _pytest._code.code.TerminalRepr][source]¶ Return a representation of a collection or test failure.
- Parameters
excinfo – Exception information for the failure.
FunctionDefinition¶
Item¶
-
class
Item
[source]¶ Bases:
_pytest.nodes.Node
A basic test invocation item.
Note that for a single function there might be multiple test invocation items.
-
user_properties
: List[Tuple[str, object]]¶ A list of tuples (name, value) that holds user defined properties for this test.
-
MarkDecorator¶
-
class
MarkDecorator
(mark: _pytest.mark.structures.Mark)[source]¶ A decorator for applying a mark on test functions and classes.
MarkDecorators are created with
pytest.mark
:mark1 = pytest.mark.NAME # Simple MarkDecorator mark2 = pytest.mark.NAME(name1=value) # Parametrized MarkDecorator
and can then be applied as decorators to test functions:
@mark2 def test_function(): pass
When a MarkDecorator is called it does the following:
If called with a single class as its only positional argument and no additional keyword arguments, it attaches the mark to the class so it gets applied automatically to all test cases found in that class.
If called with a single function as its only positional argument and no additional keyword arguments, it attaches the mark to the function, containing all the arguments already stored internally in the MarkDecorator.
When called in any other case, it returns a new MarkDecorator instance with the original MarkDecorator’s content updated with the arguments passed to this call.
Note: The rules above prevent MarkDecorators from storing only a single function or class reference as their positional argument with no additional keyword or positional arguments. You can work around this by using
with_args()
.-
name
¶ Alias for mark.name.
-
args
¶ Alias for mark.args.
-
kwargs
¶ Alias for mark.kwargs.
MarkGenerator¶
-
final class
MarkGenerator
[source]¶ Factory for
MarkDecorator
objects - exposed as apytest.mark
singleton instance.Example:
import pytest @pytest.mark.slowtest def test_function(): pass
applies a ‘slowtest’
Mark
ontest_function
.
Mark¶
-
final class
Mark
(name: str, args: Tuple[Any, …], kwargs: Mapping[str, Any], param_ids_from: Optional[Mark] = None, param_ids_generated: Optional[Sequence[str]] = None)[source]¶ -
name
¶ Name of the mark.
-
args
¶ Positional arguments of the mark decorator.
-
kwargs
¶ Keyword arguments of the mark decorator.
-
combined_with
(other: _pytest.mark.structures.Mark) → _pytest.mark.structures.Mark[source]¶ Return a new Mark which is a combination of this Mark and another Mark.
Combines by appending args and merging kwargs.
-
Metafunc¶
-
final class
Metafunc
(definition: _pytest.python.FunctionDefinition, fixtureinfo: _pytest.fixtures.FuncFixtureInfo, config: _pytest.config.Config, cls=None, module=None)[source]¶ Objects passed to the
pytest_generate_tests
hook.They help to inspect a test function and to generate tests according to test configuration or values specified in the class or module where a test function is defined.
-
definition
¶ Access to the underlying
_pytest.python.FunctionDefinition
.
-
config
¶ Access to the
_pytest.config.Config
object for the test session.
-
module
¶ The module object where the test function is defined in.
-
function
¶ Underlying Python test function.
-
fixturenames
¶ Set of fixture names required by the test function.
-
cls
¶ Class object in which the test function is defined, or
None
.
-
parametrize
(argnames: Union[str, List[str], Tuple[str, …]], argvalues: Iterable[Union[_pytest.mark.structures.ParameterSet, Sequence[object], object]], indirect: Union[bool, Sequence[str]] = False, ids: Optional[Union[Iterable[Union[None, str, float, int, bool]], Callable[[Any], Optional[object]]]] = None, scope: Optional[_Scope] = None, *, _param_mark: Optional[_pytest.mark.structures.Mark] = None) → None[source]¶ Add new invocations to the underlying test function using the list of argvalues for the given argnames. Parametrization is performed during the collection phase. If you need to set up expensive resources, consider setting indirect so this happens at test setup time rather than at collection time.
- Parameters
argnames – A comma-separated string denoting one or more argument names, or a list/tuple of argument strings.
argvalues –
The list of argvalues determines how often a test is invoked with different argument values.
If only one argname was specified argvalues is a list of values. If N argnames were specified, argvalues must be a list of N-tuples, where each tuple-element specifies a value for its respective argname.
indirect – A list of argument names (a subset of argnames), or a boolean. If True, all names from argnames are used. Each argvalue corresponding to an argname in this list will be passed as request.param to its respective argname fixture function, so that it can perform more expensive setups during the setup phase of a test rather than at collection time.
ids –
Sequence of (or generator for) ids for
argvalues
, or a callable to return part of the id for each argvalue.With sequences (and generators like
itertools.count()
) the returned ids should be of typestring
,int
,float
,bool
, orNone
. They are mapped to the corresponding index inargvalues
.None
means to use the auto-generated id.If it is a callable it will be called for each entry in
argvalues
, and the return value is used as part of the auto-generated id for the whole set (where parts are joined with dashes (“-“)). This is useful to provide more specific ids for certain items, e.g. dates. ReturningNone
will use an auto-generated id.If no ids are provided they will be generated automatically from the argvalues.
scope – If specified, denotes the scope of the parameters. The scope is used for grouping tests by parameter instances. It also overrides any fixture-function-defined scope, allowing a dynamic scope to be set from the test context or configuration.
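As a sketch of how this method is typically used from the pytest_generate_tests hook (the fixture name stringinput and its values are hypothetical, for illustration only):

```python
# conftest.py -- minimal sketch; "stringinput" and its values are hypothetical.
def pytest_generate_tests(metafunc):
    # Only parametrize tests that actually request the "stringinput" fixture.
    if "stringinput" in metafunc.fixturenames:
        metafunc.parametrize("stringinput", ["hello", "world"])
```

Each test function requesting stringinput is then collected once per value, with ids generated automatically from the argvalues.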
-
Node¶
-
class
Node
[source]¶ Base class for Collector and Item, the components of the test collection tree.
Collector subclasses have children; Items are leaf nodes.
-
name
¶ A unique name within the scope of the parent node.
-
parent
¶ The parent collector node.
-
fspath
¶ Filesystem path where this node was collected from (can be None).
-
keywords
¶ Keywords/markers collected from all scopes.
-
own_markers
: List[_pytest.mark.structures.Mark]¶ The marker objects belonging to this node.
-
classmethod
from_parent
(parent: _pytest.nodes.Node, **kw)[source]¶ Public constructor for Nodes.
This indirection was introduced to make it possible to remove fragile logic from the node constructors.
Subclasses can use
super().from_parent(...)
when overriding the construction.- Parameters
parent – The parent node of this Node.
-
ihook
¶ fspath-sensitive hook proxy used to call pytest hooks.
-
warn
(warning: Warning) → None[source]¶ Issue a warning for this Node.
Warnings will be displayed after the test session, unless explicitly suppressed.
- Parameters
warning (Warning) – The warning instance to issue.
- Raises
ValueError – If
warning
instance is not a subclass of Warning.
Example usage:
node.warn(PytestWarning("some message")) node.warn(UserWarning("some message"))
Changed in version 6.2: Any subclass of
Warning
is now accepted, rather than onlyPytestWarning
subclasses.
-
nodeid
¶ A ::-separated string denoting its collection tree address.
-
listchain
() → List[_pytest.nodes.Node][source]¶ Return list of all parent collectors up to self, starting from the root of collection tree.
-
add_marker
(marker: Union[str, _pytest.mark.structures.MarkDecorator], append: bool = True) → None[source]¶ Dynamically add a marker object to the node.
- Parameters
append – Whether to append the marker, or prepend it.
-
iter_markers
(name: Optional[str] = None) → Iterator[_pytest.mark.structures.Mark][source]¶ Iterate over all markers of the node.
- Parameters
name – If given, filter the results by the name attribute.
-
for ... in
iter_markers_with_node
(name: Optional[str] = None) → Iterator[Tuple[_pytest.nodes.Node, _pytest.mark.structures.Mark]][source]¶ Iterate over all markers of the node.
- Parameters
name – If given, filter the results by the name attribute.
- Returns
An iterator of (node, mark) tuples.
-
get_closest_marker
(name: str) → Optional[_pytest.mark.structures.Mark][source]¶ -
get_closest_marker
(name: str, default: _pytest.mark.structures.Mark) → _pytest.mark.structures.Mark Return the first marker matching the name, from closest (for example function) to farther level (for example module level).
- Parameters
default – Fallback return value if no marker was found.
name – Name to filter by.
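A small sketch of how get_closest_marker is commonly used, for example inside a hook or fixture, to read a per-test setting from a marker (the timeout marker name and default value here are hypothetical):

```python
def timeout_for(item, default=30):
    # Look up a hypothetical @pytest.mark.timeout(seconds) marker on the item,
    # preferring the closest (function-level) mark over module-level ones.
    marker = item.get_closest_marker("timeout")
    if marker is None:
        return default      # no marker anywhere in the chain
    return marker.args[0]   # first positional argument of the mark
```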
-
addfinalizer
(fin: Callable[[], object]) → None[source]¶ Register a function to be called when this node is finalized.
This method can only be called when this node is active in a setup chain, for example during self.setup().
-
getparent
(cls: Type[_NodeType]) → Optional[_NodeType][source]¶ Get the next parent node (including self) which is an instance of the given class.
-
repr_failure
(excinfo: _pytest._code.code.ExceptionInfo[BaseException], style: Optional[_TracebackStyle] = None) → Union[str, _pytest._code.code.TerminalRepr][source]¶ Return a representation of a collection or test failure.
- Parameters
excinfo – Exception information for the failure.
-
Parser¶
-
final class
Parser
[source]¶ Parser for command line arguments and ini-file values.
- Variables
extra_info – Dict of generic param -> value to display in case there’s an error processing the command line arguments.
-
getgroup
(name: str, description: str = '', after: Optional[str] = None) → _pytest.config.argparsing.OptionGroup[source]¶ Get (or create) a named option Group.
- Name
Name of the option group.
- Description
Long description for –help output.
- After
Name of another group, used for ordering –help output.
The returned group object has an
addoption
method with the same signature asparser.addoption
but will be shown in the respective group in the output of pytest --help
.
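For example, a plugin might group its options under its own heading in the help output. A sketch (the group and option names are hypothetical):

```python
# conftest.py sketch -- "myplugin" and "--myplugin-verbose" are hypothetical names.
def pytest_addoption(parser):
    group = parser.getgroup("myplugin", description="options for myplugin")
    group.addoption(
        "--myplugin-verbose",
        action="store_true",
        default=False,
        help="emit extra output from myplugin",
    )
```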
-
addoption
(*opts: str, **attrs: Any) → None[source]¶ Register a command line option.
- Opts
Option names, can be short or long options.
- Attrs
Same attributes which the
add_argument()
function of the argparse library accepts.
After command line parsing, options are available on the pytest config object via
config.option.NAME
whereNAME
is usually set by passing adest
attribute, for exampleaddoption("--long", dest="NAME", ...)
.
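A sketch of registering an option and reading it back through the dest name (the option name --run-slow and dest run_slow are hypothetical):

```python
# conftest.py sketch -- the option and dest names are hypothetical.
def pytest_addoption(parser):
    parser.addoption(
        "--run-slow",
        dest="run_slow",
        action="store_true",
        default=False,
        help="also run tests marked as slow",
    )

def run_slow_enabled(config):
    # After command line parsing, the value is available under the dest name.
    return config.option.run_slow
```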
-
parse_known_args
(args: Sequence[Union[str, py._path.local.LocalPath]], namespace: Optional[argparse.Namespace] = None) → argparse.Namespace[source]¶ Parse and return a namespace object with known arguments at this point.
-
parse_known_and_unknown_args
(args: Sequence[Union[str, py._path.local.LocalPath]], namespace: Optional[argparse.Namespace] = None) → Tuple[argparse.Namespace, List[str]][source]¶ Parse and return a namespace object with known arguments, and the remaining arguments unknown at this point.
-
addini
(name: str, help: str, type: Optional[Literal[‘string’, ‘pathlist’, ‘args’, ‘linelist’, ‘bool’]] = None, default=None) → None[source]¶ Register an ini-file option.
- Name
Name of the ini-variable.
- Type
Type of the variable, can be
string
,pathlist
,args
,linelist
orbool
. Defaults tostring
ifNone
or not passed.- Default
Default value if no ini-file option exists but is queried.
The value of ini-variables can be retrieved via a call to
config.getini(name)
.
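A sketch of declaring an ini option and reading it back at run time (the api_url name and its default value are hypothetical):

```python
# conftest.py sketch -- the "api_url" ini name and its default are hypothetical.
def pytest_addoption(parser):
    parser.addini(
        "api_url",
        help="base URL used by integration tests",
        type="string",
        default="http://localhost:8000",
    )

def api_url(config):
    # Retrieve the configured value (or the default) at run time.
    return config.getini("api_url")
```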
PytestPluginManager¶
-
final class
PytestPluginManager
[source]¶ Bases:
pluggy._manager.PluginManager
A
pluggy.PluginManager
with additional pytest-specific functionality:Loading plugins from the command line,
PYTEST_PLUGINS
env variable andpytest_plugins
global variables found in plugins being loaded.conftest.py
loading during start-up.
-
register
(plugin: object, name: Optional[str] = None) → Optional[str][source]¶ Register a plugin and return its canonical name or
None
if the name is blocked from registering. Raise aValueError
if the plugin is already registered.
-
import_plugin
(modname: str, consider_entry_points: bool = False) → None[source]¶ Import a plugin with
modname
.If
consider_entry_points
is True, entry point names are also considered to find a plugin.
-
add_hookcall_monitoring
(before, after)¶ Add before/after tracing functions for all hooks, and return an undo function which, when called, removes the added tracers.
before(hook_name, hook_impls, kwargs)
will be called ahead of all hook calls and receive a hookcaller instance, a list of HookImpl instances and the keyword arguments for the hook call.after(outcome, hook_name, hook_impls, kwargs)
receives the same arguments asbefore
but also apluggy._callers._Result
object which represents the result of the overall hook call.
-
add_hookspecs
(module_or_class)¶ Add new hook specifications defined in the given
module_or_class
. Functions are recognized if they have been decorated accordingly.
-
check_pending
()¶ Verify that all hooks which have not been verified against a hook specification are optional, otherwise raise
PluginValidationError
.
-
enable_tracing
()¶ Enable tracing of hook calls and return an undo function.
-
get_canonical_name
(plugin)¶ Return canonical name for a plugin object. Note that a plugin may be registered under a different name which was specified by the caller of
register(plugin, name)
. To obtain the name of a registered plugin use get_name(plugin)
instead.
-
get_hookcallers
(plugin)¶ Get all hook callers for the specified plugin.
-
get_name
(plugin)¶ Return name for registered plugin or
None
if not registered.
-
get_plugin
(name)¶ Return a plugin or
None
for the given name.
-
get_plugins
()¶ Return the set of registered plugins.
-
has_plugin
(name)¶ Return
True
if a plugin with the given name is registered.
-
is_blocked
(name)¶ Return
True
if the given plugin name is blocked.
-
is_registered
(plugin)¶ Return
True
if the plugin is already registered.
-
list_name_plugin
()¶ Return a list of name/plugin pairs.
-
list_plugin_distinfo
()¶ Return a list of distinfo/plugin tuples for all setuptools-registered plugins.
-
load_setuptools_entrypoints
(group, name=None)¶ Load modules from querying the specified setuptools
group
.
-
set_blocked
(name)¶ Block registrations under the given name, and unregister the plugin if it is already registered.
-
subset_hook_caller
(name, remove_plugins)¶ Return a new
_hooks._HookCaller
instance for the named method which manages calls to all registered plugins except the ones from remove_plugins.
-
unregister
(plugin=None, name=None)¶ Unregister a plugin object and all its contained hook implementations from internal data structures.
Session¶
-
final class
Session
[source]¶ Bases:
_pytest.nodes.FSCollector
-
exception
Interrupted
¶ Bases:
KeyboardInterrupt
Signals that the test run was interrupted.
-
perform_collect
(args: Optional[Sequence[str]] = None, genitems: Literal[True] = True) → Sequence[_pytest.nodes.Item][source]¶ -
perform_collect
(args: Optional[Sequence[str]] = None, genitems: bool = True) → Sequence[Union[_pytest.nodes.Item, _pytest.nodes.Collector]] Perform the collection phase for this session.
This is called by the default
pytest_collection
hook implementation; see the documentation of this hook for more details. For testing purposes, it may also be called directly on a freshSession
.This function normally recursively expands any collectors collected from the session to their items, and only items are returned. For testing purposes, this may be suppressed by passing
genitems=False
, in which case the return value contains these collectors unexpanded, andsession.items
is empty.
-
exception
TestReport¶
-
final class
TestReport
[source]¶ Bases:
_pytest.reports.BaseReport
Basic test report object (also used for setup and teardown calls if they fail).
-
location
: Optional[Tuple[str, Optional[int], str]]¶ A (filesystempath, lineno, domaininfo) tuple indicating the actual location of a test item - it might be different from the collected one e.g. if a method is inherited from a different module.
-
keywords
¶ A name -> value dictionary containing all keywords and markers associated with a test invocation.
-
outcome
¶ Test outcome, always one of “passed”, “failed”, “skipped”.
-
longrepr
: Union[None, _pytest._code.code.ExceptionInfo[BaseException], Tuple[str, int, str], str, _pytest._code.code.TerminalRepr]¶ None or a failure representation.
-
user_properties
¶ A list of (name, value) tuples holding user-defined properties of the test.
-
sections
: List[Tuple[str, str]]¶ List of pairs
(str, str)
of extra information which needs to be marshallable. Used by pytest to add captured text fromstdout
andstderr
, but may be used by other plugins to add arbitrary information to reports.
-
duration
¶ Time it took to run just the test.
-
classmethod
from_item_and_call
(item: _pytest.nodes.Item, call: CallInfo[None]) → TestReport[source]¶ Create and fill a TestReport with standard item and call info.
-
caplog
¶ Return captured log lines, if log capturing is enabled.
New in version 3.5.
-
capstderr
¶ Return captured text from stderr, if capturing is enabled.
New in version 3.0.
-
capstdout
¶ Return captured text from stdout, if capturing is enabled.
New in version 3.0.
-
count_towards_summary
¶ Experimental Whether this report should be counted towards the totals shown at the end of the test session: “1 passed, 1 failure, etc”.
Note
This function is considered experimental, so beware that it is subject to changes even in patch releases.
-
head_line
¶ Experimental The head line shown with longrepr output for this report, more commonly during traceback representation during failures:
________ Test.foo ________
In the example above, the head_line is “Test.foo”.
Note
This function is considered experimental, so beware that it is subject to changes even in patch releases.
-
longreprtext
¶ Read-only property that returns the full string representation of
longrepr
.New in version 3.0.
-
_Result¶
Result object used within hook wrappers, see _Result in the pluggy documentation
for more information.
Global Variables¶
pytest treats some global variables in a special manner when defined in a test module or
conftest.py
files.
-
collect_ignore
¶
Tutorial: Customizing test collection
Can be declared in conftest.py files to exclude test directories or modules.
Needs to be list[str]
.
collect_ignore = ["setup.py"]
-
collect_ignore_glob
¶
Tutorial: Customizing test collection
Can be declared in conftest.py files to exclude test directories or modules
with Unix shell-style wildcards. Needs to be list[str]
where str
can
contain glob patterns.
collect_ignore_glob = ["*_ignore.py"]
-
pytest_plugins
¶
Tutorial: Requiring/Loading plugins in a test module or conftest file
Can be declared at the global level in test modules and conftest.py files to register additional plugins.
Can be either a str
or Sequence[str]
.
pytest_plugins = "myapp.testsupport.myplugin"
pytest_plugins = ("myapp.testsupport.tools", "myapp.testsupport.regression")
-
pytestmark
¶
Tutorial: Marking whole classes or modules
Can be declared at the global level in test modules to apply one or more marks to all test functions and methods. Can be either a single mark or a list of marks (applied in left-to-right order).
import pytest
pytestmark = pytest.mark.webtest
import pytest
pytestmark = [pytest.mark.integration, pytest.mark.slow]
Environment Variables¶
Environment variables that can be used to change pytest’s behavior.
-
PYTEST_ADDOPTS
¶
This contains a command line (parsed by the shlex
module) that will be prepended to the command line given
by the user, see Builtin configuration file options for more information.
-
PYTEST_CURRENT_TEST
¶
This is not meant to be set by users, but is set by pytest internally with the name of the current test so other processes can inspect it, see PYTEST_CURRENT_TEST environment variable for more information.
-
PYTEST_DEBUG
¶
When set, pytest will print tracing and debug information.
-
PYTEST_DISABLE_PLUGIN_AUTOLOAD
¶
When set, disables plugin auto-loading through setuptools entrypoints. Only explicitly specified plugins will be loaded.
-
PYTEST_PLUGINS
¶
Contains a comma-separated list of modules that should be loaded as plugins:
export PYTEST_PLUGINS=mymodule.plugin,xdist
-
PY_COLORS
¶
When set to 1
, pytest will use color in terminal output.
When set to 0
, pytest will not use color.
PY_COLORS
takes precedence over NO_COLOR
and FORCE_COLOR
.
-
NO_COLOR
¶
When set (regardless of value), pytest will not use color in terminal output.
PY_COLORS
takes precedence over NO_COLOR
, which takes precedence over FORCE_COLOR
.
See no-color.org for other libraries supporting this community standard.
-
FORCE_COLOR
¶
When set (regardless of value), pytest will use color in terminal output.
PY_COLORS
and NO_COLOR
take precedence over FORCE_COLOR
.
Warnings¶
Custom warnings generated in some situations such as improper usage or deprecated features.
-
class
PytestWarning
¶ Bases:
UserWarning
Base class for all warnings emitted by pytest.
-
class
PytestAssertRewriteWarning
¶ Bases:
pytest.PytestWarning
Warning emitted by the pytest assert rewrite module.
-
class
PytestCacheWarning
¶ Bases:
pytest.PytestWarning
Warning emitted by the cache plugin in various situations.
-
class
PytestCollectionWarning
¶ Bases:
pytest.PytestWarning
Warning emitted when pytest is not able to collect a file or symbol in a module.
-
class
PytestConfigWarning
¶ Bases:
pytest.PytestWarning
Warning emitted for configuration issues.
-
class
PytestDeprecationWarning
¶ Bases:
pytest.PytestWarning
,DeprecationWarning
Warning class for features that will be removed in a future version.
-
class
PytestExperimentalApiWarning
¶ Bases:
pytest.PytestWarning
,FutureWarning
Warning category used to denote experiments in pytest.
Use sparingly as the API might change or even be removed completely in a future version.
-
class
PytestUnhandledCoroutineWarning
¶ Bases:
pytest.PytestWarning
Warning emitted for an unhandled coroutine.
A coroutine was encountered when collecting test functions, but was not handled by any async-aware plugin. Coroutine test functions are not natively supported.
-
class
PytestUnknownMarkWarning
¶ Bases:
pytest.PytestWarning
Warning emitted on use of unknown markers.
See Marking test functions with attributes for details.
-
class
PytestUnraisableExceptionWarning
¶ Bases:
pytest.PytestWarning
An unraisable exception was reported.
Unraisable exceptions are exceptions raised in
__del__
implementations and similar situations when the exception cannot be raised as normal.
-
class
PytestUnhandledThreadExceptionWarning
¶ Bases:
pytest.PytestWarning
An unhandled exception occurred in a
Thread
.Such exceptions don’t propagate normally.
Consult the Internal pytest warnings section in the documentation for more information.
Configuration Options¶
Here is a list of builtin configuration options that may be written in a pytest.ini
, pyproject.toml
, tox.ini
or setup.cfg
file, usually located at the root of your repository. To see each file format in detail, see
Configuration file formats.
Warning
Usage of setup.cfg
is not recommended except for very simple use cases. .cfg
files use a different parser than pytest.ini
and tox.ini
which might cause hard-to-track-down problems.
When possible, it is recommended to use the latter files, or pyproject.toml
, to hold your pytest configuration.
Configuration options may be overridden on the command line by using -o/--override-ini
, which can also be
passed multiple times. The expected format is name=value
. For example:
pytest -o console_output_style=classic -o cache_dir=/tmp/mycache
-
addopts
¶ Add the specified
OPTS
to the set of command line arguments as if they had been specified by the user. Example: if you have this ini file content:# content of pytest.ini [pytest] addopts = --maxfail=2 -rf # exit after 2 failures, report fail info
issuing
pytest test_hello.py
actually means:pytest --maxfail=2 -rf test_hello.py
Default is to add no options.
-
cache_dir
¶ Sets the directory where the cache plugin stores its content. The default directory is
.pytest_cache
, which is created in the rootdir. The directory may be a relative or an absolute path; a relative path is created relative to the rootdir. The path may also contain environment variables, which will be expanded. For more information about the cache plugin, refer to Cache: working with cross-testrun state.
-
confcutdir
¶ Sets a directory in which the upward search for
conftest.py
files stops. By default, pytest will stop searching forconftest.py
files upwards frompytest.ini
/tox.ini
/setup.cfg
of the project if any, or up to the file-system root.
-
console_output_style
¶ Sets the console output style while running tests:
classic
: classic pytest output.progress
: like classic pytest output, but with a progress indicator.count
: like progress, but shows progress as the number of tests completed instead of a percent.
The default is
progress
, but you can fall back to classic
if you prefer it, or if the new mode causes unexpected problems:# content of pytest.ini [pytest] console_output_style = classic
-
doctest_encoding
¶ Default encoding to use to decode text files with docstrings. See how pytest handles doctests.
-
doctest_optionflags
¶ One or more doctest flag names from the standard
doctest
module. See how pytest handles doctests.
-
empty_parameter_set_mark
¶ Allows picking the action for empty parameter sets in parametrization
skip
skips tests with an empty parameterset (default)xfail
marks tests with an empty parameterset as xfail(run=False)fail_at_collect
raises an exception if parametrize collects an empty parameter set
# content of pytest.ini [pytest] empty_parameter_set_mark = xfail
Note
The default value of this option is planned to change to
xfail
in future releases as this is considered less error prone, see #3155 for more details.
-
faulthandler_timeout
¶ Dumps the tracebacks of all threads if a test takes longer than
X
seconds to run (including fixture setup and teardown). Implemented using the faulthandler.dump_traceback_later function, so all caveats there apply.# content of pytest.ini [pytest] faulthandler_timeout=5
For more information please refer to Fault Handler.
-
filterwarnings
¶ Sets a list of filters and actions that should be taken for matched warnings. By default, all warnings emitted during the test session are displayed in a summary at its end.
# content of pytest.ini [pytest] filterwarnings = error ignore::DeprecationWarning
This tells pytest to ignore deprecation warnings and turn all other warnings into errors. For more information please refer to Warnings Capture.
-
junit_duration_report
¶ New in version 4.1.
Configures how durations are recorded into the JUnit XML report:
total
(the default): duration times reported include setup, call, and teardown times.call
: duration times reported include only call times, excluding setup and teardown.
[pytest] junit_duration_report = call
-
junit_family
¶ New in version 4.2.
Changed in version 6.1: Default changed to
xunit2
.Configures the format of the generated JUnit XML file. The possible options are:
xunit1
(orlegacy
): produces old style output, compatible with the xunit 1.0 format.xunit2
: produces xunit 2.0 style output, which should be more compatible with latest Jenkins versions. This is the default.
[pytest] junit_family = xunit2
-
junit_logging
¶ New in version 3.5.
Changed in version 5.4:
log
,all
,out-err
options added.Configures whether captured output should be written to the JUnit XML file. Valid values are:
log
: write onlylogging
captured output.system-out
: write capturedstdout
contents.system-err
: write capturedstderr
contents.out-err
: write both capturedstdout
andstderr
contents.all
: write capturedlogging
,stdout
andstderr
contents.no
(the default): no captured output is written.
[pytest] junit_logging = system-out
-
junit_log_passing_tests
¶ New in version 4.6.
If
junit_logging != "no"
, configures whether the captured output should be written to the JUnit XML file for passing tests. Default isTrue
.[pytest] junit_log_passing_tests = False
-
junit_suite_name
¶ To set the name of the root test suite xml item, you can configure the
junit_suite_name
option in your config file:[pytest] junit_suite_name = my_suite
-
log_auto_indent
¶ Allow selective auto-indentation of multiline log messages.
Supports command line option
--log-auto-indent [value]
and config optionlog_auto_indent = [value]
to set the auto-indentation behavior for all logging.[value]
can be:True or “On” - Dynamically auto-indent multiline log messages
False or “Off” or 0 - Do not auto-indent multiline log messages (the default behavior)
[positive integer] - auto-indent multiline log messages by [value] spaces
[pytest] log_auto_indent = False
Supports passing kwarg
extra={"auto_indent": [value]}
to calls tologging.log()
to specify auto-indentation behavior for a specific entry in the log.extra
kwarg overrides the value specified on the command line or in the config.
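For instance, a single log call can request its own indentation via the extra dict (the logger name, message, and indent width of 4 below are illustrative); with plain logging the key simply becomes an attribute of the emitted record:

```python
import logging

logger = logging.getLogger("example")

def log_multiline(message: str) -> None:
    # Request 4-space auto-indentation for just this record; under pytest's
    # logging plugin the "auto_indent" key controls continuation-line indent.
    logger.info(message, extra={"auto_indent": 4})
```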
-
log_cli
¶ Enable log display during test run (also known as “live logging”). The default is
False
.[pytest] log_cli = True
-
log_cli_date_format
¶ Sets a
time.strftime()
-compatible string that will be used when formatting dates for live logging.[pytest] log_cli_date_format = %Y-%m-%d %H:%M:%S
For more information, see Live Logs.
-
log_cli_format
¶ Sets a
logging
-compatible string used to format live logging messages.[pytest] log_cli_format = %(asctime)s %(levelname)s %(message)s
For more information, see Live Logs.
-
log_cli_level
¶ Sets the minimum log message level that should be captured for live logging. The integer value or the names of the levels can be used.
[pytest] log_cli_level = INFO
For more information, see Live Logs.
-
log_date_format
¶ Sets a
time.strftime()
-compatible string that will be used when formatting dates for logging capture.[pytest] log_date_format = %Y-%m-%d %H:%M:%S
For more information, see Logging.
-
log_file
¶ Sets a file name relative to the
pytest.ini
file where log messages should be written to, in addition to the other logging facilities that are active.[pytest] log_file = logs/pytest-logs.txt
For more information, see Logging.
-
log_file_date_format
¶ Sets a
time.strftime()
-compatible string that will be used when formatting dates for the logging file.[pytest] log_file_date_format = %Y-%m-%d %H:%M:%S
For more information, see Logging.
-
log_file_format
¶ Sets a
logging
-compatible string used to format logging messages redirected to the logging file.[pytest] log_file_format = %(asctime)s %(levelname)s %(message)s
For more information, see Logging.
-
log_file_level
¶ Sets the minimum log message level that should be captured for the logging file. The integer value or the names of the levels can be used.
[pytest] log_file_level = INFO
For more information, see Logging.
-
log_format
¶ Sets a
logging
-compatible string used to format captured logging messages.[pytest] log_format = %(asctime)s %(levelname)s %(message)s
For more information, see Logging.
-
log_level
¶ Sets the minimum log message level that should be captured for logging capture. The integer value or the names of the levels can be used.
[pytest] log_level = INFO
For more information, see Logging.
-
markers
¶ When the
--strict-markers
or--strict
command-line arguments are used, only known markers - defined in code by core pytest or some plugin - are allowed.You can list additional markers in this setting to add them to the whitelist, in which case you probably want to add
--strict-markers
toaddopts
to avoid future regressions:[pytest] addopts = --strict-markers markers = slow serial
Note
The use of
--strict-markers
is highly preferred.--strict
was kept for backward compatibility only and may be confusing for others as it only applies to markers and not to other options.
-
minversion
¶ Specifies a minimal pytest version required for running tests.
# content of pytest.ini [pytest] minversion = 3.0 # will fail if we run with pytest-2.8
-
norecursedirs
¶ Set the directory basename patterns to avoid when recursing for test discovery. The individual (fnmatch-style) patterns are applied to the basename of a directory to decide whether to recurse into it. Pattern matching characters:
* matches everything ? matches any single character [seq] matches any character in seq [!seq] matches any char not in seq
Default patterns are
'*.egg'
,'.*'
,'_darcs'
,'build'
,'CVS'
,'dist'
,'node_modules'
,'venv'
,'{arch}'
. Setting anorecursedirs
replaces the default. Here is an example of how to avoid certain directories:[pytest] norecursedirs = .svn _build tmp*
This would tell
pytest
to not look into typical subversion or sphinx-build directories or into anytmp
prefixed directory.Additionally,
pytest
will attempt to intelligently identify and ignore a virtualenv by the presence of an activation script. Any directory deemed to be the root of a virtual environment will not be considered during test collection unless‑‑collect‑in‑virtualenv
is given. Note also thatnorecursedirs
takes precedence over‑‑collect‑in‑virtualenv
; e.g. if you intend to run tests in a virtualenv with a base directory that matches'.*'
you must overridenorecursedirs
in addition to using the‑‑collect‑in‑virtualenv
flag.
-
python_classes
¶ One or more name prefixes or glob-style patterns determining which classes are considered for test collection. Search for multiple glob patterns by adding a space between patterns. By default, pytest will consider any class prefixed with
Test
as a test collection. Here is an example of how to collect tests from classes that end inSuite
:[pytest] python_classes = *Suite
Note that
unittest.TestCase
derived classes are always collected regardless of this option, asunittest
’s own collection framework is used to collect those tests.
-
python_files
¶ One or more Glob-style file patterns determining which python files are considered as test modules. Search for multiple glob patterns by adding a space between patterns:
[pytest] python_files = test_*.py check_*.py example_*.py
Or one per line:
[pytest] python_files = test_*.py check_*.py example_*.py
By default, files matching
test_*.py
and*_test.py
will be considered test modules.
-
python_functions
¶ One or more name prefixes or glob-patterns determining which test functions and methods are considered tests. Search for multiple glob patterns by adding a space between patterns. By default, pytest will consider any function prefixed with
test
as a test. Here is an example of how to collect test functions and methods that end in_test
:[pytest] python_functions = *_test
Note that this has no effect on methods that live on a
unittest.TestCase
derived class, asunittest
’s own collection framework is used to collect those tests.See Changing naming conventions for more detailed examples.
-
required_plugins
¶ A space-separated list of plugins that must be present for pytest to run. Plugins can be listed with or without version specifiers directly following their name. Whitespace between different version specifiers is not allowed. If any of the plugins is not found, pytest emits an error.
[pytest] required_plugins = pytest-django>=3.0.0,<4.0.0 pytest-html pytest-xdist>=1.0.0
-
testpaths
¶ Sets a list of directories that should be searched for tests when no specific directories, files or test ids are given on the command line when executing pytest from the rootdir directory. Useful when all project tests are in a known location, to speed up test collection and to avoid accidentally picking up undesired tests.
[pytest]
testpaths = testing doc
This tells pytest to look for tests only in the testing and doc directories when executing from the root directory.
-
usefixtures
¶ List of fixtures that will be applied to all test functions; this is semantically the same as applying the @pytest.mark.usefixtures marker to all test functions.

[pytest]
usefixtures = clean_db
-
xfail_strict
¶ If set to True, tests marked with @pytest.mark.xfail that actually succeed will by default fail the test suite. For more information, see the strict parameter.

[pytest]
xfail_strict = True
Command-line Flags¶
All command-line flags can be obtained by running pytest --help:
$ pytest --help
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
positional arguments:
file_or_dir
general:
-k EXPRESSION only run tests which match the given substring
expression. An expression is a python evaluatable
expression where all names are substring-matched
against test names and their parent classes.
Example: -k 'test_method or test_other' matches all
test functions and classes whose name contains
'test_method' or 'test_other', while -k 'not
test_method' matches those that don't contain
'test_method' in their names. -k 'not test_method
and not test_other' will eliminate the matches.
Additionally keywords are matched to classes and
functions containing extra names in their
'extra_keyword_matches' set, as well as functions
which have names assigned directly to them. The
matching is case-insensitive.
-m MARKEXPR only run tests matching given mark expression.
For example: -m 'mark1 and not mark2'.
--markers show markers (builtin, plugin and per-project ones).
-x, --exitfirst exit instantly on first error or failed test.
--fixtures, --funcargs
show available fixtures, sorted by plugin appearance
(fixtures with leading '_' are only shown with '-v')
--fixtures-per-test show fixtures per test
--pdb start the interactive Python debugger on errors or
KeyboardInterrupt.
--pdbcls=modulename:classname
start a custom interactive Python debugger on
errors. For example:
--pdbcls=IPython.terminal.debugger:TerminalPdb
--trace Immediately break when running each test.
--capture=method per-test capturing method: one of fd|sys|no|tee-sys.
-s shortcut for --capture=no.
--runxfail report the results of xfail tests as if they were
not marked
--lf, --last-failed rerun only the tests that failed at the last run (or
all if none failed)
--ff, --failed-first run all tests, but run the last failures first.
This may re-order tests and thus lead to repeated
fixture setup/teardown.
--nf, --new-first run tests from new files first, then the rest of the
tests sorted by file mtime
--cache-show=[CACHESHOW]
show cache contents, don't perform collection or
tests. Optional argument: glob (default: '*').
--cache-clear remove all cache contents at start of test run.
--lfnf={all,none}, --last-failed-no-failures={all,none}
which tests to run with no previously (known)
failures.
--sw, --stepwise exit on test failure and continue from last failing
test next time
--sw-skip, --stepwise-skip
ignore the first failing test but stop on the next
failing test
reporting:
--durations=N show N slowest setup/test durations (N=0 for all).
--durations-min=N Minimal duration in seconds for inclusion in slowest
list. Default 0.005
-v, --verbose increase verbosity.
--no-header disable header
--no-summary disable summary
-q, --quiet decrease verbosity.
--verbosity=VERBOSE set verbosity. Default is 0.
-r chars show extra test summary info as specified by chars:
(f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed,
(p)assed, (P)assed with output, (a)ll except passed
(p/P), or (A)ll. (w)arnings are enabled by default
(see --disable-warnings), 'N' can be used to reset
the list. (default: 'fE').
--disable-warnings, --disable-pytest-warnings
disable warnings summary
-l, --showlocals show locals in tracebacks (disabled by default).
--tb=style traceback print mode
(auto/long/short/line/native/no).
--show-capture={no,stdout,stderr,log,all}
Controls how captured stdout/stderr/log is shown on
failed tests. Default is 'all'.
--full-trace don't cut any tracebacks (default is to cut).
--color=color color terminal output (yes/no/auto).
--code-highlight={yes,no}
Whether code should be highlighted (only if --color
is also enabled)
--pastebin=mode send failed|all info to bpaste.net pastebin service.
--junit-xml=path create junit-xml style report file at given path.
--junit-prefix=str prepend prefix to classnames in junit-xml output
pytest-warnings:
-W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS
set which warnings to report, see -W option of
python itself.
--maxfail=num exit after first num failures or errors.
--strict-config any warnings encountered while parsing the `pytest`
section of the configuration file raise errors.
--strict-markers markers not registered in the `markers` section of
the configuration file raise errors.
--strict (deprecated) alias to --strict-markers.
-c file load configuration from `file` instead of trying to
locate one of the implicit configuration files.
--continue-on-collection-errors
Force test execution even if collection errors
occur.
--rootdir=ROOTDIR Define root directory for tests. Can be relative
path: 'root_dir', './root_dir',
'root_dir/another_dir/'; absolute path:
'/home/user/root_dir'; path with variables:
'$HOME/root_dir'.
collection:
--collect-only, --co only collect tests, don't execute them.
--pyargs try to interpret all arguments as python packages.
--ignore=path ignore path during collection (multi-allowed).
--ignore-glob=path ignore path pattern during collection (multi-
allowed).
--deselect=nodeid_prefix
deselect item (via node id prefix) during collection
(multi-allowed).
--confcutdir=dir only load conftest.py's relative to specified dir.
--noconftest Don't load any conftest.py files.
--keep-duplicates Keep duplicate tests.
--collect-in-virtualenv
Don't ignore tests in a local virtualenv directory
--import-mode={prepend,append,importlib}
prepend/append to sys.path when importing test
modules and conftest files, default is to prepend.
--doctest-modules run doctests in all .py modules
--doctest-report={none,cdiff,ndiff,udiff,only_first_failure}
choose another output format for diffs on doctest
failure
--doctest-glob=pat doctests file matching pattern, default: test*.txt
--doctest-ignore-import-errors
ignore doctest ImportErrors
--doctest-continue-on-failure
for a given doctest, continue to run after the first
failure
test session debugging and configuration:
--basetemp=dir base temporary directory for this test run.(warning:
this directory is removed if it exists)
-V, --version display pytest version and information about
plugins.When given twice, also display information
about plugins.
-h, --help show help message and configuration info
-p name early-load given plugin module name or entry point
(multi-allowed).
To avoid loading of plugins, use the `no:` prefix,
e.g. `no:doctest`.
--trace-config trace considerations of conftest.py files.
--debug store internal tracing debug information in
'pytestdebug.log'.
-o OVERRIDE_INI, --override-ini=OVERRIDE_INI
override ini option with "option=value" style, e.g.
`-o xfail_strict=True -o cache_dir=cache`.
--assert=MODE Control assertion debugging tools.
'plain' performs no assertion debugging.
'rewrite' (the default) rewrites assert statements
in test modules on import to provide assert
expression information.
--setup-only only setup fixtures, do not execute tests.
--setup-show show setup of fixtures while executing tests.
--setup-plan show what fixtures and tests would be executed but
don't execute anything.
logging:
--log-level=LEVEL level of messages to catch/display.
Not set by default, so it depends on the root/parent
log handler's effective level, where it is "WARNING"
by default.
--log-format=LOG_FORMAT
log format as used by the logging module.
--log-date-format=LOG_DATE_FORMAT
log date format as used by the logging module.
--log-cli-level=LOG_CLI_LEVEL
cli logging level.
--log-cli-format=LOG_CLI_FORMAT
log format as used by the logging module.
--log-cli-date-format=LOG_CLI_DATE_FORMAT
log date format as used by the logging module.
--log-file=LOG_FILE path to a file when logging will be written to.
--log-file-level=LOG_FILE_LEVEL
log file logging level.
--log-file-format=LOG_FILE_FORMAT
log format as used by the logging module.
--log-file-date-format=LOG_FILE_DATE_FORMAT
log date format as used by the logging module.
--log-auto-indent=LOG_AUTO_INDENT
Auto-indent multiline messages passed to the logging
module. Accepts true|on, false|off or an integer.
[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:
markers (linelist): markers for test functions
empty_parameter_set_mark (string):
default marker for empty parametersets
norecursedirs (args): directory patterns to avoid for recursion
testpaths (args): directories to search for tests when no files or
directories are given in the command line.
filterwarnings (linelist):
Each line specifies a pattern for
warnings.filterwarnings. Processed after
-W/--pythonwarnings.
usefixtures (args): list of default fixtures to be used with this
project
python_files (args): glob-style file patterns for Python test module
discovery
python_classes (args):
prefixes or glob names for Python test class
discovery
python_functions (args):
prefixes or glob names for Python test function and
method discovery
disable_test_id_escaping_and_forfeit_all_rights_to_community_support (bool):
disable string escape non-ascii characters, might
cause unwanted side effects(use at your own risk)
console_output_style (string):
console output: "classic", or with additional
progress information ("progress" (percentage) |
"count").
xfail_strict (bool): default for the strict parameter of xfail markers
when not given explicitly (default: False)
enable_assertion_pass_hook (bool):
Enables the pytest_assertion_pass hook.Make sure to
delete any previously generated pyc cache files.
junit_suite_name (string):
Test suite name for JUnit report
junit_logging (string):
Write captured log messages to JUnit report: one of
no|log|system-out|system-err|out-err|all
junit_log_passing_tests (bool):
Capture log information for passing tests to JUnit
report:
junit_duration_report (string):
Duration time to report: one of total|call
junit_family (string):
Emit XML for schema: one of legacy|xunit1|xunit2
doctest_optionflags (args):
option flags for doctests
doctest_encoding (string):
encoding used for doctest files
cache_dir (string): cache directory path.
log_level (string): default value for --log-level
log_format (string): default value for --log-format
log_date_format (string):
default value for --log-date-format
log_cli (bool): enable log display during test run (also known as
"live logging").
log_cli_level (string):
default value for --log-cli-level
log_cli_format (string):
default value for --log-cli-format
log_cli_date_format (string):
default value for --log-cli-date-format
log_file (string): default value for --log-file
log_file_level (string):
default value for --log-file-level
log_file_format (string):
default value for --log-file-format
log_file_date_format (string):
default value for --log-file-date-format
log_auto_indent (string):
default value for --log-auto-indent
faulthandler_timeout (string):
Dump the traceback of all threads if a test takes
more than TIMEOUT seconds to finish.
addopts (args): extra command line options
minversion (string): minimally required pytest version
required_plugins (args):
plugins that must be present for pytest to run
environment variables:
PYTEST_ADDOPTS extra command line options
PYTEST_PLUGINS comma-separated plugins to load during startup
PYTEST_DISABLE_PLUGIN_AUTOLOAD set to disable plugin auto-loading
PYTEST_DEBUG set to enable debug tracing of pytest's internals
to see available markers type: pytest --markers
to see available fixtures type: pytest --fixtures
(shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option)