Reference

This page contains the full reference to pytest's API.

Functions
pytest.approx

approx(expected, rel=None, abs=None, nan_ok=False)

Assert that two numbers (or two sets of numbers) are equal to each other within some tolerance.
Due to the intricacies of floating-point arithmetic, numbers that we would intuitively expect to be equal are not always so:

>>> 0.1 + 0.2 == 0.3
False
This problem is commonly encountered when writing tests, e.g. when making sure that floating-point values are what you expect them to be. One way to deal with this problem is to assert that two floating-point numbers are equal to within some appropriate tolerance:

>>> abs((0.1 + 0.2) - 0.3) < 1e-6
True
However, comparisons like this are tedious to write and difficult to understand. Furthermore, absolute comparisons like the one above are usually discouraged because there's no tolerance that works well for all situations. 1e-6 is good for numbers around 1, but too small for very big numbers and too big for very small ones. It's better to express the tolerance as a fraction of the expected value, but relative comparisons like that are even more difficult to write correctly and concisely.

The approx class performs floating-point comparisons using a syntax that's as intuitive as possible:

>>> from pytest import approx
>>> 0.1 + 0.2 == approx(0.3)
True
The same syntax also works for sequences of numbers:

>>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
True
Dictionary values:

>>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == approx({'a': 0.3, 'b': 0.6})
True
numpy arrays:

>>> import numpy as np
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == approx(np.array([0.3, 0.6]))
True

And for a numpy array against a scalar:

>>> import numpy as np
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.1]) == approx(0.3)
True
By default, approx considers numbers within a relative tolerance of 1e-6 (i.e. one part in a million) of its expected value to be equal. This treatment would lead to surprising results if the expected value was 0.0, because nothing but 0.0 itself is relatively close to 0.0. To handle this case less surprisingly, approx also considers numbers within an absolute tolerance of 1e-12 of its expected value to be equal.

Infinity and NaN are special cases. Infinity is only considered equal to itself, regardless of the relative tolerance. NaN is not considered equal to anything by default, but you can make it be equal to itself by setting the nan_ok argument to True. (This is meant to facilitate comparing arrays that use NaN to mean "no data".)

Both the relative and absolute tolerances can be changed by passing arguments to the approx constructor:

>>> 1.0001 == approx(1)
False
>>> 1.0001 == approx(1, rel=1e-3)
True
>>> 1.0001 == approx(1, abs=1e-3)
True
If you specify abs but not rel, the comparison will not consider the relative tolerance at all. In other words, two numbers that are within the default relative tolerance of 1e-6 will still be considered unequal if they exceed the specified absolute tolerance. If you specify both abs and rel, the numbers will be considered equal if either tolerance is met:

>>> 1 + 1e-8 == approx(1)
True
>>> 1 + 1e-8 == approx(1, abs=1e-12)
False
>>> 1 + 1e-8 == approx(1, rel=1e-6, abs=1e-12)
True
If you're thinking about using approx, then you might want to know how it compares to other good ways of comparing floating-point numbers. All of these algorithms are based on relative and absolute tolerances and should agree for the most part, but they do have meaningful differences:

- math.isclose(a, b, rel_tol=1e-9, abs_tol=0.0): True if the relative tolerance is met w.r.t. either a or b or if the absolute tolerance is met. Because the relative tolerance is calculated w.r.t. both a and b, this test is symmetric (i.e. neither a nor b is a "reference value"). You have to specify an absolute tolerance if you want to compare to 0.0 because there is no tolerance by default. Only available in Python >= 3.5.
- numpy.isclose(a, b, rtol=1e-5, atol=1e-8): True if the difference between a and b is less than the sum of the relative tolerance w.r.t. b and the absolute tolerance. Because the relative tolerance is only calculated w.r.t. b, this test is asymmetric and you can think of b as the reference value. Support for comparing sequences is provided by numpy.allclose.
- unittest.TestCase.assertAlmostEqual(a, b): True if a and b are within an absolute tolerance of 1e-7. No relative tolerance is considered and the absolute tolerance cannot be changed, so this function is not appropriate for very large or very small numbers. Also, it's only available in subclasses of unittest.TestCase and it's ugly because it doesn't follow PEP 8.
- a == pytest.approx(b, rel=1e-6, abs=1e-12): True if the relative tolerance is met w.r.t. b or if the absolute tolerance is met. Because the relative tolerance is only calculated w.r.t. b, this test is asymmetric and you can think of b as the reference value. In the special case that you explicitly specify an absolute tolerance but not a relative tolerance, only the absolute tolerance is considered.
Warning

Changed in version 3.2.

In order to avoid inconsistent behavior, TypeError is raised for >, >=, < and <= comparisons. The example below illustrates the problem:

assert approx(0.1) > 0.1 + 1e-10  # calls approx(0.1).__gt__(0.1 + 1e-10)
assert 0.1 + 1e-10 > approx(0.1)  # calls approx(0.1).__lt__(0.1 + 1e-10)

In the second example one expects approx(0.1).__le__(0.1 + 1e-10) to be called. But instead, approx(0.1).__lt__(0.1 + 1e-10) is used for the comparison. This is because the call hierarchy of rich comparisons follows a fixed behavior.
pytest.skip

skip(msg[, allow_module_level=False])

Skip an executing test with the given message. Note: it's usually better to use the pytest.mark.skipif marker to declare a test to be skipped under certain conditions like mismatching platforms or dependencies. See the pytest_skipping plugin for details.

Parameters:
- allow_module_level (bool) – allows this function to be called at module level, skipping the rest of the module. Defaults to False.
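As a sketch, a test can skip itself imperatively through a small helper (the helper name and platform check below are illustrative, not part of pytest's API):

```python
import sys

import pytest


def require_linux():
    # Hypothetical helper: imperatively skip the calling test when the
    # platform does not match. pytest.skip raises a special exception
    # that pytest reports as a skip rather than a failure.
    if not sys.platform.startswith("linux"):
        pytest.skip("this test only runs on linux")


def test_linux_only_feature():
    require_linux()
    assert True
```

Unlike the skipif marker, this decides at run time, which is useful when the condition can only be computed inside the test.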
pytest.param

param(*values[, id][, marks])

Specify a parameter in pytest.mark.parametrize calls or parametrized fixtures.

@pytest.mark.parametrize("test_input,expected", [
    ("3+5", 8),
    pytest.param("6*9", 42, marks=pytest.mark.xfail),
])
def test_eval(test_input, expected):
    assert eval(test_input) == expected

Parameters:
- values – variable args of the values of the parameter set, in order.
- marks – a single mark or a list of marks to be applied to this parameter set.
- id (str) – the id to attribute to this parameter set.
pytest.raises

Tutorial: Assertions about expected exceptions.

with raises(expected_exception: Exception[, match][, message]) as excinfo

Assert that a code block/function call raises expected_exception and raise a failure exception otherwise.

Parameters:
- message – if specified, provides a custom failure message if the exception is not raised.
- match – if specified, asserts that the exception matches a text or regex.

This helper produces an ExceptionInfo() object (see below).

You may use this function as a context manager:

>>> with raises(ZeroDivisionError):
...     1/0

Changed in version 2.10.

In the context manager form you may use the keyword argument message to specify a custom failure message:

>>> with raises(ZeroDivisionError, message="Expecting ZeroDivisionError"):
...     pass
Traceback (most recent call last):
  ...
Failed: Expecting ZeroDivisionError
Note

When using pytest.raises as a context manager, it's worthwhile to note that normal context manager rules apply and that the exception raised must be the final line in the scope of the context manager. Lines of code after that, within the scope of the context manager, will not be executed. For example:

>>> value = 15
>>> with raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...     assert exc_info.type == ValueError  # this will not execute

Instead, the following approach must be taken (note the difference in scope):

>>> with raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...
>>> assert exc_info.type == ValueError

Since version 3.1 you can use the keyword argument match to assert that the exception matches a text or regex:

>>> with raises(ValueError, match='must be 0 or None'):
...     raise ValueError("value must be 0 or None")

>>> with raises(ValueError, match=r'must be \d+$'):
...     raise ValueError("value must be 42")
Legacy forms

The forms below are fully supported but are discouraged for new code because the context manager form is regarded as more readable and less error-prone.

It is possible to specify a callable by passing a to-be-called lambda:

>>> raises(ZeroDivisionError, lambda: 1/0)
<ExceptionInfo ...>

or you can specify an arbitrary callable with arguments:

>>> def f(x): return 1/x
...
>>> raises(ZeroDivisionError, f, 0)
<ExceptionInfo ...>
>>> raises(ZeroDivisionError, f, x=0)
<ExceptionInfo ...>

It is also possible to pass a string to be evaluated at runtime:

>>> raises(ZeroDivisionError, "f(0)")
<ExceptionInfo ...>

The string will be evaluated using the same locals() and globals() at the moment of the raises call.

Consult the API of excinfo objects: ExceptionInfo.

Note

Similar to caught exception objects in Python, explicitly clearing local references to returned ExceptionInfo objects can help the Python interpreter speed up its garbage collection.

Clearing those references breaks a reference cycle (ExceptionInfo –> caught exception –> frame stack raising the exception –> current frame stack –> local variables –> ExceptionInfo) which makes Python keep all objects referenced from that cycle (including all local variables in the current frame) alive until the next cyclic garbage collection run. See the official Python try statement documentation for more detailed information.
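A short sketch of inspecting the returned ExceptionInfo object after the context manager has exited, using the type and value attributes:

```python
import pytest

# The ExceptionInfo bound by "as excinfo" is populated once the
# expected exception has been raised and the block has exited.
with pytest.raises(ValueError) as excinfo:
    raise ValueError("value must be 42")

assert excinfo.type is ValueError          # the exception class
assert "must be 42" in str(excinfo.value)  # the exception instance
```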
pytest.deprecated_call

Tutorial: Ensuring a function triggers a deprecation warning.

with deprecated_call()

Context manager that can be used to ensure a block of code triggers a DeprecationWarning or PendingDeprecationWarning:

>>> import warnings
>>> def api_call_v2():
...     warnings.warn('use v3 of this api', DeprecationWarning)
...     return 200

>>> with deprecated_call():
...     assert api_call_v2() == 200

deprecated_call can also be used by passing a function and *args and **kwargs, in which case it will ensure calling func(*args, **kwargs) produces one of the warning types above.
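A sketch of both forms side by side (the legacy_sum helper below is hypothetical, used only to have something that warns):

```python
import warnings

import pytest


def legacy_sum(values):
    # Hypothetical deprecated API used only for illustration.
    warnings.warn("legacy_sum is deprecated, use sum()", DeprecationWarning)
    return sum(values)


# Context-manager form:
with pytest.deprecated_call():
    assert legacy_sum([1, 2, 3]) == 6

# Function form: pass the callable followed by its arguments.
pytest.deprecated_call(legacy_sum, [1, 2, 3])
```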
pytest.register_assert_rewrite

Tutorial: Assertion Rewriting.

register_assert_rewrite(*names)

Register one or more module names to be rewritten on import.

This function will make sure that this module or all modules inside the package will get their assert statements rewritten. Thus you should make sure to call this before the module is actually imported, usually in your __init__.py if you are a plugin using a package.

Raises: TypeError – if the given module names are not strings.
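For instance, a plugin distributed as a package might call this from its __init__.py before its helper module is imported anywhere (the package and module names below are hypothetical):

```python
import pytest

# In myplugin/__init__.py, before myplugin.helpers is imported anywhere.
# "myplugin.helpers" is a hypothetical name; any not-yet-imported module
# or package name works.
pytest.register_assert_rewrite("myplugin.helpers")

# Passing anything that is not a string raises TypeError, as documented.
try:
    pytest.register_assert_rewrite(42)
except TypeError:
    pass
```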
pytest.warns

Tutorial: Asserting warnings with the warns function.

with warns(expected_warning: Exception[, match])

Assert that code raises a particular class of warning.

Specifically, the parameter expected_warning can be a warning class or sequence of warning classes, and the code inside the with block must issue a warning of that class or classes.

This helper produces a list of warnings.WarningMessage objects, one for each warning raised.

This function can be used as a context manager, or any of the other ways pytest.raises can be used:

>>> with warns(RuntimeWarning):
...     warnings.warn("my warning", RuntimeWarning)

In the context manager form you may use the keyword argument match to assert that the warning matches a text or regex:

>>> with warns(UserWarning, match='must be 0 or None'):
...     warnings.warn("value must be 0 or None", UserWarning)

>>> with warns(UserWarning, match=r'must be \d+$'):
...     warnings.warn("value must be 42", UserWarning)

>>> with warns(UserWarning, match=r'must be \d+$'):
...     warnings.warn("this is not here", UserWarning)
Traceback (most recent call last):
  ...
Failed: DID NOT WARN. No warnings of type ...UserWarning... was emitted...
Marks

Marks can be used to apply metadata to test functions (but not fixtures), which can then be accessed by fixtures or plugins.

pytest.mark.filterwarnings

Tutorial: @pytest.mark.filterwarnings.

Add warning filters to marked test items.

pytest.mark.filterwarnings(filter)

Parameters:
- filter (str) – A warning specification string, which is composed of the contents of the tuple (action, message, category, module, lineno) as specified in The Warnings Filter section of the Python documentation, separated by ":". Optional fields can be omitted.

For example:

@pytest.mark.filterwarnings("ignore:.*usage will be deprecated.*:DeprecationWarning")
def test_foo():
    ...
pytest.mark.parametrize

Tutorial: Parametrizing fixtures and test functions.

Metafunc.parametrize(argnames, argvalues, indirect=False, ids=None, scope=None)

Add new invocations to the underlying test function using the list of argvalues for the given argnames. Parametrization is performed during the collection phase. If you need to set up expensive resources, see about setting indirect to do it rather at test setup time.

Parameters:
- argnames – a comma-separated string denoting one or more argument names, or a list/tuple of argument strings.
- argvalues – The list of argvalues determines how often a test is invoked with different argument values. If only one argname was specified, argvalues is a list of values. If N argnames were specified, argvalues must be a list of N-tuples, where each tuple-element specifies a value for its respective argname.
- indirect – A list of argnames or a boolean. A list of arguments' names (a subset of argnames). If True, the list contains all names from the argnames. Each argvalue corresponding to an argname in this list will be passed as request.param to its respective argname fixture function so that it can perform more expensive setups during the setup phase of a test rather than at collection time.
- ids – list of string ids, or a callable. If strings, each corresponds to the argvalues so that they are part of the test id. If None is given as the id of a specific test, the automatically generated id for that argument will be used. If a callable, it should take one argument (a single argvalue) and return a string or None; if it returns None, the automatically generated id for that argument will be used. If no ids are provided they will be generated automatically from the argvalues.
- scope – if specified, it denotes the scope of the parameters. The scope is used for grouping tests by parameter instances. It will also override any fixture-function-defined scope, allowing a dynamic scope to be set using test context or configuration.
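A minimal sketch combining argnames, argvalues and explicit ids (the values are illustrative):

```python
import pytest


# Two parameter sets for two argument names; the ids make the generated
# test ids read "test_increment[one]" and "test_increment[three]".
@pytest.mark.parametrize(
    "n,expected",
    [(1, 2), (3, 4)],
    ids=["one", "three"],
)
def test_increment(n, expected):
    assert n + 1 == expected
```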
pytest.mark.skipif

Tutorial: Skipping test functions.

Skip a test function if a condition is True.

pytest.mark.skipif(condition, *, reason=None)

Parameters:
- condition (bool or str) – True/False if the condition should be skipped or a condition string.
- reason (str) – Reason why the test function is being skipped.
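For example, with a boolean condition and a reason (both illustrative):

```python
import sys

import pytest


# Skip unless the interpreter is new enough; the version bound and
# reason text are only examples.
@pytest.mark.skipif(sys.version_info < (3, 6),
                    reason="requires Python 3.6 or higher")
def test_new_enough_python():
    assert True
```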
pytest.mark.usefixtures

Tutorial: Using fixtures from classes, modules or projects.

Mark a test function as using the given fixture names.

Warning

This mark can be used with test functions only, having no effect when applied to a fixture function.

pytest.mark.usefixtures(*names)

Parameters:
- names – the names of the fixtures to use, as strings.
pytest.mark.xfail

Tutorial: XFail: mark test functions as expected to fail.

Marks a test function as expected to fail.

pytest.mark.xfail(condition=None, *, reason=None, raises=None, run=True, strict=False)

Parameters:
- condition (bool or str) – True/False if the condition should be marked as xfail or a condition string.
- reason (str) – Reason why the test function is marked as xfail.
- raises (Exception) – Exception subclass expected to be raised by the test function; other exceptions will fail the test.
- run (bool) – Whether the test function should actually be executed. If False, the function will always xfail and will not be executed (useful if a function is segfaulting).
- strict (bool) –
  - If False (the default) the function will be shown in the terminal output as xfailed if it fails and as xpass if it passes. In both cases this will not cause the test suite to fail as a whole. This is particularly useful to mark flaky tests (tests that fail at random) to be tackled later.
  - If True, the function will be shown in the terminal output as xfailed if it fails, but if it unexpectedly passes then it will fail the test suite. This is particularly useful to mark functions that are always failing and there should be a clear indication if they unexpectedly start to pass (for example a new release of a library fixes a known bug).
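For example, combining raises and strict (the "known bug" being marked is illustrative):

```python
import pytest


# The test is expected to raise ZeroDivisionError; any other exception
# fails the test, and with strict=True an unexpected pass fails the
# suite, signalling that the known bug has been fixed.
@pytest.mark.xfail(raises=ZeroDivisionError, strict=True,
                   reason="division-by-zero bug not fixed yet")
def test_known_bug():
    1 / 0
```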
custom marks

Marks are created dynamically using the factory object pytest.mark and applied as a decorator.

For example:

@pytest.mark.timeout(10, "slow", method="thread")
def test_function():
    ...

Will create and attach a Mark object to the collected Item, which can then be accessed by fixtures or hooks with Node.iter_markers. The mark object will have the following attributes:

mark.args == (10, "slow")
mark.kwargs == {"method": "thread"}
Fixtures

Tutorial: pytest fixtures: explicit, modular, scalable.

Fixtures are requested by test functions or other fixtures by declaring them as argument names.

Example of a test requiring a fixture:

def test_output(capsys):
    print("hello")
    out, err = capsys.readouterr()
    assert out == "hello\n"

Example of a fixture requiring another fixture:

@pytest.fixture
def db_session(tmpdir):
    fn = tmpdir / "db.file"
    return connect(str(fn))

For more details, consult the full fixtures docs.
@pytest.fixture

@fixture(scope='function', params=None, autouse=False, ids=None, name=None)

Decorator to mark a fixture factory function.

This decorator can be used, with or without parameters, to define a fixture function.

The name of the fixture function can later be referenced to cause its invocation ahead of running tests: test modules or classes can use the pytest.mark.usefixtures(fixturename) marker.

Test functions can directly use fixture names as input arguments in which case the fixture instance returned from the fixture function will be injected.

Fixtures can provide their values to test functions using return or yield statements. When using yield the code block after the yield statement is executed as teardown code regardless of the test outcome, and must yield exactly once.

Parameters:
- scope – the scope for which this fixture is shared, one of "function" (default), "class", "module", "package" or "session". "package" is considered experimental at this time.
- params – an optional list of parameters which will cause multiple invocations of the fixture function and all of the tests using it.
- autouse – if True, the fixture func is activated for all tests that can see it. If False (the default) then an explicit reference is needed to activate the fixture.
- ids – list of string ids each corresponding to the params so that they are part of the test id. If no ids are provided they will be generated automatically from the params.
- name – the name of the fixture. This defaults to the name of the decorated function. If a fixture is used in the same module in which it is defined, the function name of the fixture will be shadowed by the function arg that requests the fixture; one way to resolve this is to name the decorated function fixture_<fixturename> and then use @pytest.fixture(name='<fixturename>').
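A sketch of a parametrized fixture tying the params and ids arguments together (values are illustrative; the test runs once per param under the pytest runner):

```python
import pytest


# Each test requesting "number" runs twice, once per param; the ids
# ("zero", "one") become part of the generated test ids.
@pytest.fixture(params=[0, 1], ids=["zero", "one"])
def number(request):
    return request.param


def test_is_small(number):
    assert number < 10
```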
config.cache

Tutorial: Cache: working with cross-testrun state.

The config.cache object allows other plugins and fixtures to store and retrieve values across test runs. To access it from fixtures, request pytestconfig into your fixture and get it with pytestconfig.cache.

Under the hood, the cache plugin uses the simple dumps/loads API of the json stdlib module.

Cache.get(key, default)

Return the cached value for the given key. If no value was yet cached or the value cannot be read, the specified default is returned.

Parameters:
- key – must be a "/"-separated value. Usually the first name is the name of your plugin or your application.
- default – must be provided in case of a cache miss or invalid cache values.

Cache.set(key, value)

Save value for the given key.

Parameters:
- key – must be a "/"-separated value. Usually the first name is the name of your plugin or your application.
- value – must be of any combination of basic Python types, including nested types like lists of dictionaries.

Cache.makedir(name)

Return a directory path object with the given name. If the directory does not yet exist, it will be created. You can use it to manage files, e.g. to store/retrieve database dumps across test sessions.

Parameters:
- name – must be a string not containing a "/" separator. Make sure the name contains your plugin or application identifiers to prevent clashes with other cache users.
capsys

Tutorial: Capturing of the stdout/stderr output.

capsys()

Enable capturing of writes to sys.stdout and sys.stderr and make captured output available via capsys.readouterr() method calls, which return a (out, err) namedtuple. out and err will be text objects.

Returns an instance of CaptureFixture.

Example:

def test_output(capsys):
    print("hello")
    captured = capsys.readouterr()
    assert captured.out == "hello\n"

class CaptureFixture

Object returned by the capsys(), capsysbinary(), capfd() and capfdbinary() fixtures.
capsysbinary

Tutorial: Capturing of the stdout/stderr output.

capsysbinary()

Enable capturing of writes to sys.stdout and sys.stderr and make captured output available via capsysbinary.readouterr() method calls, which return a (out, err) tuple. out and err will be bytes objects.

Returns an instance of CaptureFixture.

Example:

def test_output(capsysbinary):
    print("hello")
    captured = capsysbinary.readouterr()
    assert captured.out == b"hello\n"
capfd

Tutorial: Capturing of the stdout/stderr output.

capfd()

Enable capturing of writes to file descriptors 1 and 2 and make captured output available via capfd.readouterr() method calls, which return a (out, err) tuple. out and err will be text objects.

Returns an instance of CaptureFixture.

Example:

def test_system_echo(capfd):
    os.system('echo "hello"')
    captured = capfd.readouterr()
    assert captured.out == "hello\n"
capfdbinary

Tutorial: Capturing of the stdout/stderr output.

capfdbinary()

Enable capturing of writes to file descriptors 1 and 2 and make captured output available via capfdbinary.readouterr() method calls, which return a (out, err) tuple. out and err will be bytes objects.

Returns an instance of CaptureFixture.

Example:

def test_system_echo(capfdbinary):
    os.system('echo "hello"')
    captured = capfdbinary.readouterr()
    assert captured.out == b"hello\n"
doctest_namespace

Tutorial: Doctest integration for modules and test files.

doctest_namespace()

Fixture that returns a dict that will be injected into the namespace of doctests.

Usually this fixture is used in conjunction with another autouse fixture:

@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
    doctest_namespace["np"] = numpy

For more details: The 'doctest_namespace' fixture.
request

Tutorial: Pass different values to a test function, depending on command line options.

The request fixture is a special fixture providing information about the requesting test function.

class FixtureRequest

A request for a fixture from a test or fixture function.

A request object gives access to the requesting test context and has an optional param attribute in case the fixture is parametrized indirectly.

fixturename = None
    Fixture for which this request is being performed.

scope = None
    Scope string, one of "function", "class", "module", "session".

node
    Underlying collection node (depends on current request scope).

config
    The pytest config object associated with this request.

function
    Test function object if the request has a per-function scope.

cls
    Class (can be None) where the test function was collected.

instance
    Instance (can be None) on which the test function was collected.

module
    Python module object where the test function was collected.

fspath
    The file system path of the test module which collected this test.

keywords
    Keywords/markers dictionary for the underlying node.

session
    Pytest session object.
addfinalizer(finalizer)

Add a finalizer/teardown function to be called after the last test within the requesting test context finished execution.

applymarker(marker)

Apply a marker to a single test function invocation. This method is useful if you don't want to have a keyword/marker on all function invocations.

Parameters:
- marker – a _pytest.mark.MarkDecorator object created by a call to pytest.mark.NAME(...).
cached_setup(setup, teardown=None, scope='module', extrakey=None)

(deprecated) Return a testing resource managed by setup & teardown calls. scope and extrakey determine when the teardown function will be called so that subsequent calls to setup would recreate the resource. With pytest-2.3 you often do not need cached_setup() as you can directly declare a scope on a fixture function and register a finalizer through request.addfinalizer().

Parameters:
- teardown – function receiving a previously setup resource.
- setup – a no-argument function creating a resource.
- scope – a string value out of function, class, module or session indicating the caching lifecycle of the resource.
- extrakey – added to internal caching key of (funcargname, scope).
getfixturevalue(argname)

Dynamically run a named fixture function.

Declaring fixtures via function argument is recommended where possible. But if you can only decide whether to use another fixture at test setup time, you may use this function to retrieve it inside a fixture or test function body.
pytestconfig

pytestconfig()

Session-scoped fixture that returns the _pytest.config.Config object.

Example:

def test_foo(pytestconfig):
    if pytestconfig.getoption("verbose"):
        ...
record_property

Tutorial: record_property.

record_property()

Add extra properties to the calling test. User properties become part of the test report and are available to the configured reporters, like JUnit XML. The fixture is callable with (name, value), with value being automatically XML-encoded.

Example:

def test_function(record_property):
    record_property("example_key", 1)
caplog

Tutorial: Logging.

caplog()

Access and control log capturing.

Captured logs are available through the following properties/methods:

* caplog.text          -> string containing formatted log output
* caplog.records       -> list of logging.LogRecord instances
* caplog.record_tuples -> list of (logger_name, level, message) tuples
* caplog.clear()       -> clear captured records and formatted log output string

This returns a _pytest.logging.LogCaptureFixture instance.
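As a sketch, a test using these properties might look as follows (logger name and message are illustrative; the caplog fixture is supplied only when run under the pytest runner):

```python
import logging


def test_warning_is_captured(caplog):
    # caplog is injected by pytest; it records everything logged
    # during the test.
    logging.getLogger("myapp").warning("disk almost full")
    assert "disk almost full" in caplog.text
    assert caplog.record_tuples == [
        ("myapp", logging.WARNING, "disk almost full"),
    ]
```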
class LogCaptureFixture(item)

Provides access and control of log capturing.

handler
    Return type: LogCaptureHandler

get_records(when)
    Get the logging records for one of the possible test phases.

    Parameters: when (str) – Which test phase to obtain the records from. Valid values are: "setup", "call" and "teardown".
    Return type: List[logging.LogRecord]
    Returns: the list of captured records at the given stage.

    New in version 3.4.

text
    Returns the log text.

records
    Returns the list of log records.

record_tuples
    Returns a list of a stripped-down version of log records intended for use in assertion comparison.

    The format of the tuple is: (logger_name, log_level, message)

messages
    Returns a list of format-interpolated log messages.

    Unlike 'records', which contains the format string and parameters for interpolation, log messages in this list are all interpolated. Unlike 'text', which contains the output from the handler, log messages in this list are unadorned with levels, timestamps, etc., making exact comparisons more reliable.

    Note that traceback or stack info (from logging.exception() or the exc_info or stack_info arguments to the logging functions) is not included, as this is added by the formatter in the handler.

    New in version 3.7.

set_level(level, logger=None)
    Sets the level for capturing of logs. The level will be restored to its previous value at the end of the test.

    Changed in version 3.4: The levels of the loggers changed by this function will be restored to their initial values at the end of the test.
monkeypatch

Tutorial: Monkeypatching/mocking modules and environments.

monkeypatch()

The returned monkeypatch fixture provides these helper methods to modify objects, dictionaries or os.environ:

monkeypatch.setattr(obj, name, value, raising=True)
monkeypatch.delattr(obj, name, raising=True)
monkeypatch.setitem(mapping, name, value)
monkeypatch.delitem(obj, name, raising=True)
monkeypatch.setenv(name, value, prepend=False)
monkeypatch.delenv(name, raising=True)
monkeypatch.syspath_prepend(path)
monkeypatch.chdir(path)

All modifications will be undone after the requesting test function or fixture has finished. The raising parameter determines if a KeyError or AttributeError will be raised if the set/deletion operation has no target.

This returns a MonkeyPatch instance.
class MonkeyPatch

Object returned by the monkeypatch fixture keeping a record of setattr/item/env/syspath changes.

with context()

Context manager that returns a new MonkeyPatch object which undoes any patching done inside the with block upon exit:

import functools

def test_partial(monkeypatch):
    with monkeypatch.context() as m:
        m.setattr(functools, "partial", 3)

Useful in situations where it is desired to undo some patches before the test ends, such as mocking stdlib functions that might break pytest itself if mocked (for examples of this see #3290).
setattr(target, name, value=<notset>, raising=True)

Set an attribute value on target, memorizing the old value. By default, raise AttributeError if the attribute did not exist.

For convenience you can specify a string as target which will be interpreted as a dotted import path, with the last part being the attribute name. Example: monkeypatch.setattr("os.getcwd", lambda: "/") would set the getcwd function of the os module.

The raising value determines if the setattr should fail if the attribute is not already present (defaults to True, which means it will raise).
delattr(target, name=<notset>, raising=True)

Delete attribute name from target. By default, raise AttributeError if the attribute did not previously exist.

If no name is specified and target is a string it will be interpreted as a dotted import path with the last part being the attribute name.

If raising is set to False, no exception will be raised if the attribute is missing.
delitem(dic, name, raising=True)

Delete name from dict. Raise KeyError if it doesn't exist.

If raising is set to False, no exception will be raised if the key is missing.
setenv(name, value, prepend=None)

Set environment variable name to value. If prepend is a character, read the current environment variable value and prepend the value adjoined with the prepend character.
delenv(name, raising=True)

Delete name from the environment. Raise KeyError if it does not exist.

If raising is set to False, no exception will be raised if the environment variable is missing.
-
chdir
(path)[source]¶ Change the current working directory to the specified path. Path can be a string or a py.path.local object.
-
undo
()[source]¶ Undo previous changes. This call consumes the undo stack. Calling it a second time has no effect unless you do more monkeypatching after the undo call.
There is generally no need to call undo(), since it is called automatically during tear-down.
Note that the same monkeypatch fixture is used across a single test function invocation. If monkeypatch is used both by the test function itself and one of the test fixtures, calling undo() will undo all of the changes made in both functions.
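The undo stack described above can be modeled in a few lines of plain Python; this is a conceptual sketch of how setattr/undo interact, not pytest's actual implementation (all names below are illustrative):

```python
class MonkeyPatchSketch:
    """Records old attribute values so undo() can restore them in reverse order."""
    def __init__(self):
        self._undo_stack = []

    def setattr(self, target, name, value):
        # Memorize the old value before patching, as monkeypatch.setattr does.
        self._undo_stack.append((target, name, getattr(target, name)))
        setattr(target, name, value)

    def undo(self):
        # Consume the stack: a second call without new patches is a no-op.
        while self._undo_stack:
            target, name, old = self._undo_stack.pop()
            setattr(target, name, old)

class Settings:
    tolerance = 1

cfg = Settings()
mp = MonkeyPatchSketch()
mp.setattr(cfg, "tolerance", 99)
assert cfg.tolerance == 99
mp.undo()
assert cfg.tolerance == 1
mp.undo()  # second call has no effect
assert cfg.tolerance == 1
```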
-
with
testdir¶
This fixture provides a Testdir
instance useful for black-box testing of test files, making it ideal to
test plugins.
To use it, include in your top-most conftest.py
file:
pytest_plugins = 'pytester'
-
class
Testdir
[source]¶ Temporary test directory with tools to test/run pytest itself.
This is based on the
tmpdir
fixture but provides a number of methods which aid with testing pytest itself. Unlesschdir()
is used all methods will usetmpdir
as their current working directory.Attributes:
tmpdir: the py.path.local instance of the temporary directory.
plugins: a list of plugins to use with parseconfig() and runpytest(). Initially this is an empty list, but plugins can be added to it. The type of the items to add depends on the method using them, so refer to those methods for details.
-
finalize
()[source]¶ Clean up global state artifacts.
Some methods modify the global interpreter state and this tries to clean this up. It does not remove the temporary directory however so it can be looked at after the test run has finished.
-
makefile
(ext, *args, **kwargs)[source]¶ Create new file(s) in the testdir.
Parameters: - ext (str) – The extension the file(s) should use, including the dot, e.g. .py.
- args (list[str]) – All args will be treated as strings and joined using newlines. The result will be written as contents to the file. The name of the file will be based on the test function requesting this fixture.
- kwargs – Each keyword is the name of a file, while the value of it will be written as contents of the file.
Examples:
testdir.makefile(".txt", "line1", "line2") testdir.makefile(".ini", pytest="[pytest]\naddopts=-rs\n")
-
syspathinsert
(path=None)[source]¶ Prepend a directory to sys.path, defaults to
tmpdir
.This is undone automatically when this object dies at the end of each test.
-
mkpydir
(name)[source]¶ Create a new python package.
This creates a (sub)directory with an empty
__init__.py
file so it gets recognised as a python package.
-
class
Session
(config)¶ -
exception
Failed
¶ signals a stop as a failed test run.
-
exception
Interrupted
¶ signals an interrupted test run.
-
for ... in
collect
()¶ returns a list of children (items and collectors) for this collection node.
-
getnode
(config, arg)[source]¶ Return the collection node of a file.
Parameters: - config –
_pytest.config.Config
instance, seeparseconfig()
andparseconfigure()
to create the configuration - arg – a
py.path.local
instance of the file
- config –
-
getpathnode
(path)[source]¶ Return the collection node of a file.
This is like
getnode()
but usesparseconfigure()
to create the (configured) pytest Config instance.Parameters: path – a py.path.local
instance of the file
-
genitems
(colitems)[source]¶ Generate all test items from a collection node.
This recurses into the collection node and returns a list of all the test items contained within.
-
runitem
(source)[source]¶ Run the “test_func” Item.
The calling test instance (class containing the test method) must provide a
.getrunner()
method which should return a runner which can run the test protocol for a single item, e.g._pytest.runner.runtestprotocol()
.
-
inline_runsource
(source, *cmdlineargs)[source]¶ Run a test module in process using
pytest.main()
.This run writes “source” into a temporary file and runs
pytest.main()
on it, returning aHookRecorder
instance for the result.Parameters: - source – the source code of the test module
- cmdlineargs – any extra command line arguments to use
Returns: HookRecorder
instance of the result
-
inline_genitems
(*args)[source]¶ Run
pytest.main(['--collectonly'])
in-process.Runs the
pytest.main()
function to run all of pytest inside the test process itself likeinline_run()
, but returns a tuple of the collected items and aHookRecorder
instance.
-
inline_run
(*args, **kwargs)[source]¶ Run
pytest.main()
in-process, returning a HookRecorder.Runs the
pytest.main()
function to run all of pytest inside the test process itself. This means it can return aHookRecorder
instance which gives more detailed results from that run than can be done by matching stdout/stderr fromrunpytest()
.Parameters: - args – command line arguments to pass to
pytest.main()
- plugin – (keyword-only) extra plugin instances the
pytest.main()
instance should use
Returns: a
HookRecorder
instance- args – command line arguments to pass to
-
runpytest_inprocess
(*args, **kwargs)[source]¶ Return result of running pytest in-process, providing a similar interface to what self.runpytest() provides.
-
runpytest
(*args, **kwargs)[source]¶ Run pytest inline or in a subprocess, depending on the command line option “--runpytest” and return a
RunResult
.
-
parseconfig
(*args)[source]¶ Return a new pytest Config instance from given commandline args.
This invokes the pytest bootstrapping code in _pytest.config to create a new
_pytest.core.PluginManager
and call the pytest_cmdline_parse hook to create a new_pytest.config.Config
instance.If
plugins
has been populated, its entries should be plugin modules to be registered with the PluginManager.
-
parseconfigure
(*args)[source]¶ Return a new pytest configured Config instance.
This returns a new
_pytest.config.Config
instance likeparseconfig()
, but also calls the pytest_configure hook.
-
getitem
(source, funcname='test_func')[source]¶ Return the test item for a test function.
This writes the source to a python file and runs pytest’s collection on the resulting module, returning the test item for the requested function name.
Parameters: - source – the module source
- funcname – the name of the test function for which to return a test item
-
getitems
(source)[source]¶ Return all test items collected from the module.
This writes the source to a python file and runs pytest’s collection on the resulting module, returning all test items contained within.
-
getmodulecol
(source, configargs=(), withinit=False)[source]¶ Return the module collection node for
source
.This writes
source
to a file usingmakepyfile()
and then runs the pytest collection on it, returning the collection node for the test module.Parameters: - source – the source code of the module to collect
- configargs – any extra arguments to pass to
parseconfigure()
- withinit – whether to also write an
__init__.py
file to the same directory to ensure it is a package
-
collect_by_name
(modcol, name)[source]¶ Return the collection node for name from the module collection.
This will search a module collection node for a collection node matching the given name.
Parameters: - modcol – a module collection node; see
getmodulecol()
- name – the name of the node to return
- modcol – a module collection node; see
-
popen
(cmdargs, stdout, stderr, **kw)[source]¶ Invoke subprocess.Popen.
This calls subprocess.Popen making sure the current working directory is in the PYTHONPATH.
You probably want to use
run()
instead.
-
run
(*cmdargs)[source]¶ Run a command with arguments.
Run a process using subprocess.Popen saving the stdout and stderr.
Returns a
RunResult
.
-
runpython
(script)[source]¶ Run a python script using sys.executable as interpreter.
Returns a
RunResult
.
-
runpytest_subprocess
(*args, **kwargs)[source]¶ Run pytest as a subprocess with given arguments.
Any plugins added to the
plugins
list will be added using the-p
command line option. Additionally--basetemp
is used to put any temporary files and directories in a numbered directory prefixed with “runpytest-” so they do not conflict with the normal numbered pytest location for temporary files and directories. Returns a
RunResult
.
-
-
class
RunResult
[source]¶ The result of running a command.
Attributes:
ret: the return value.
outlines: list of lines captured from stdout.
errlines: list of lines captured from stderr.
stdout: LineMatcher of stdout; use stdout.str() to reconstruct stdout, or the commonly used stdout.fnmatch_lines() method.
stderr: LineMatcher of stderr.
duration: duration in seconds.
-
class
LineMatcher
[source]¶ Flexible matching of text.
This is a convenience class to test large texts like the output of commands.
The constructor takes a list of lines without their trailing newlines, i.e.
text.splitlines()
.-
fnmatch_lines_random
(lines2)[source]¶ Check lines exist in the output, in any order.
Lines are checked using
fnmatch.fnmatch
. The argument is a list of lines which have to occur in the output, in any order.
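The matching itself relies on the stdlib fnmatch module; a small sketch of order-independent matching (the captured lines and patterns are illustrative):

```python
import fnmatch

captured = ["collected 2 items", "test_demo.py ..", "=== 2 passed ==="]
patterns = ["*passed*", "collected * items"]

# Every pattern must match at least one captured line, in any order.
matched = all(
    any(fnmatch.fnmatch(line, pat) for line in captured)
    for pat in patterns
)
assert matched
```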
-
re_match_lines_random
(lines2)[source]¶ Check lines exist in the output using
re.match
, in any order.The argument is a list of lines which have to occur in the output, in any order.
-
get_lines_after
(fnline)[source]¶ Return all lines following the given line in the text.
The given line can contain glob wildcards.
-
recwarn¶
Tutorial: Asserting warnings with the warns function
-
recwarn
()[source]¶ Return a
WarningsRecorder
instance that records all warnings emitted by test functions.See http://docs.python.org/library/warnings.html for information on warning categories.
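Since WarningsRecorder is adapted from warnings.catch_warnings, the underlying stdlib mechanism can be shown directly:

```python
import warnings

with warnings.catch_warnings(record=True) as recorded:
    warnings.simplefilter("always")  # make sure the warning is not suppressed
    warnings.warn("legacy API", DeprecationWarning)

# Each recorded warning is a warnings.WarningMessage instance.
assert len(recorded) == 1
assert issubclass(recorded[0].category, DeprecationWarning)
assert "legacy API" in str(recorded[0].message)
```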
-
class
WarningsRecorder
[source]¶ A context manager to record raised warnings.
Adapted from warnings.catch_warnings.
-
list
¶ The list of recorded warnings.
-
Each recorded warning is an instance of warnings.WarningMessage
.
Note
RecordedWarning
was changed from a plain class to a namedtuple in pytest 3.1
Note
DeprecationWarning
and PendingDeprecationWarning
are treated
differently; see Ensuring a function triggers a deprecation warning.
tmpdir¶
Tutorial: Temporary directories and files
-
tmpdir
()[source]¶ Return a temporary directory path object which is unique to each test function invocation, created as a sub directory of the base temporary directory. The returned object is a py.path.local path object.
tmpdir_factory¶
Tutorial: The ‘tmpdir_factory’ fixture
tmpdir_factory
instances have the following methods:
Hooks¶
Tutorial: Writing plugins.
Reference to all hooks which can be implemented by conftest.py files and plugins.
Bootstrapping hooks¶
Bootstrapping hooks called for plugins registered early enough (internal and setuptools plugins).
-
pytest_load_initial_conftests
(early_config, parser, args)[source]¶ implements the loading of initial conftest files ahead of command line option parsing.
Note
This hook will not be called for
conftest.py
files, only for setuptools plugins.Parameters: - early_config (_pytest.config.Config) – pytest config object
- args (list[str]) – list of arguments passed on the command line
- parser (_pytest.config.Parser) – to add command line options
-
pytest_cmdline_preparse
(config, args)[source]¶ (Deprecated) modify command line arguments before option parsing.
This hook is considered deprecated and will be removed in a future pytest version. Consider using
pytest_load_initial_conftests()
instead.Note
This hook will not be called for
conftest.py
files, only for setuptools plugins.Parameters: - config (_pytest.config.Config) – pytest config object
- args (list[str]) – list of arguments passed on the command line
-
pytest_cmdline_parse
(pluginmanager, args)[source]¶ return initialized config object, parsing the specified args.
Stops at first non-None result, see firstresult: stop at first non-None result
Note
This hook will not be called for
conftest.py
files, only for setuptools plugins.Parameters: - pluginmanager (_pytest.config.PytestPluginManager) – pytest plugin manager
- args (list[str]) – list of arguments passed on the command line
-
pytest_cmdline_main
(config)[source]¶ called for performing the main command line action. The default implementation will invoke the configure hooks and runtest_mainloop.
Note
This hook will not be called for
conftest.py
files, only for setuptools plugins.Stops at first non-None result, see firstresult: stop at first non-None result
Parameters: config (_pytest.config.Config) – pytest config object
Initialization hooks¶
Initialization hooks called for plugins and conftest.py
files.
-
pytest_addoption
(parser)[source]¶ register argparse-style options and ini-style config values, called once at the beginning of a test run.
Note
This function should be implemented only in plugins or
conftest.py
files situated at the tests root directory due to how pytest discovers plugins during startup.Parameters: parser (_pytest.config.Parser) – To add command line options, call parser.addoption(...)
. To add ini-file values callparser.addini(...)
.Options can later be accessed through the
config
object, respectively:config.getoption(name)
to retrieve the value of a command line option.config.getini(name)
to retrieve a value read from an ini-style file.
The config object is passed around on many internal objects via the
.config
attribute or can be retrieved as thepytestconfig
fixture.Note
This hook is incompatible with
hookwrapper=True
.
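A minimal conftest.py sketch showing both registration calls together; the --slow flag and the timeout ini value are illustrative names:

```python
# conftest.py -- illustrative sketch
def pytest_addoption(parser):
    # Register a command line flag and an ini-file value.
    parser.addoption("--slow", action="store_true", default=False,
                     help="also run tests marked as slow")
    parser.addini("timeout", "default per-test timeout", default="30")

def pytest_configure(config):
    # Later, the values are read back from the config object.
    run_slow = config.getoption("slow")
    timeout = config.getini("timeout")
```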
-
pytest_addhooks
(pluginmanager)[source]¶ called at plugin registration time to allow adding new hooks via a call to
pluginmanager.add_hookspecs(module_or_class, prefix)
.Parameters: pluginmanager (_pytest.config.PytestPluginManager) – pytest plugin manager Note
This hook is incompatible with
hookwrapper=True
.
-
pytest_configure
(config)[source]¶ Allows plugins and conftest files to perform initial configuration.
This hook is called for every plugin and initial conftest file after command line options have been parsed.
After that, the hook is called for other conftest files as they are imported.
Note
This hook is incompatible with
hookwrapper=True
.Parameters: config (_pytest.config.Config) – pytest config object
-
pytest_unconfigure
(config)[source]¶ called before test process is exited.
Parameters: config (_pytest.config.Config) – pytest config object
-
pytest_sessionstart
(session)[source]¶ called after the
Session
object has been created and before performing collection and entering the run test loop.Parameters: session (_pytest.main.Session) – the pytest session object
-
pytest_sessionfinish
(session, exitstatus)[source]¶ called after whole test run finished, right before returning the exit status to the system.
Parameters: - session (_pytest.main.Session) – the pytest session object
- exitstatus (int) – the status which pytest will return to the system
Test running hooks¶
All runtest related hooks receive a pytest.Item
object.
-
pytest_runtestloop
(session)[source]¶ called for performing the main runtest loop (after collection finished).
Stops at first non-None result, see firstresult: stop at first non-None result
Parameters: session (_pytest.main.Session) – the pytest session object
-
pytest_runtest_protocol
(item, nextitem)[source]¶ implements the runtest_setup/call/teardown protocol for the given test item, including capturing exceptions and calling reporting hooks.
Parameters: - item – test item for which the runtest protocol is performed.
- nextitem – the scheduled-to-be-next test item (or None if this
is the end my friend). This argument is passed on to
pytest_runtest_teardown()
.
Return boolean: True if no further hook implementations should be invoked.
Stops at first non-None result, see firstresult: stop at first non-None result
-
pytest_runtest_logstart
(nodeid, location)[source]¶ signal the start of running a single test item.
This hook will be called before
pytest_runtest_setup()
,pytest_runtest_call()
andpytest_runtest_teardown()
hooks.Parameters: - nodeid (str) – full id of the item
- location – a triple of
(filename, linenum, testname)
-
pytest_runtest_logfinish
(nodeid, location)[source]¶ signal the complete finish of running a single test item.
This hook will be called after
pytest_runtest_setup()
,pytest_runtest_call()
andpytest_runtest_teardown()
hooks.Parameters: - nodeid (str) – full id of the item
- location – a triple of
(filename, linenum, testname)
-
pytest_runtest_teardown
(item, nextitem)[source]¶ called after
pytest_runtest_call
.Parameters: nextitem – the scheduled-to-be-next test item (None if no further test item is scheduled). This argument can be used to perform exact teardowns, i.e. calling just enough finalizers so that nextitem only needs to call setup-functions.
-
pytest_runtest_makereport
(item, call)[source]¶ return a
_pytest.runner.TestReport
object for the givenpytest.Item
and_pytest.runner.CallInfo
.Stops at first non-None result, see firstresult: stop at first non-None result
For deeper understanding you may look at the default implementation of
these hooks in _pytest.runner
and maybe also
in _pytest.pdb
which interacts with _pytest.capture
and its input/output capturing in order to immediately drop
into interactive debugging when a test failure occurs.
The _pytest.terminal
reporter specifically uses
the reporting hook to print information about a test run.
Collection hooks¶
pytest
calls the following hooks for collecting files and directories:
-
pytest_collection
(session)[source]¶ Perform the collection protocol for the given session.
Stops at first non-None result, see firstresult: stop at first non-None result.
Parameters: session (_pytest.main.Session) – the pytest session object
-
pytest_ignore_collect
(path, config)[source]¶ return True to prevent considering this path for collection. This hook is consulted for all files and directories prior to calling more specific hooks.
Stops at first non-None result, see firstresult: stop at first non-None result
Parameters: - path (str) – the path to analyze
- config (_pytest.config.Config) – pytest config object
-
pytest_collect_directory
(path, parent)[source]¶ called before traversing a directory for collection files.
Stops at first non-None result, see firstresult: stop at first non-None result
Parameters: path (str) – the path to analyze
-
pytest_collect_file
(path, parent)[source]¶ return collection Node or None for the given path. Any new node needs to have the specified
parent
as a parent.Parameters: path (str) – the path to collect
For influencing the collection of objects in Python modules you can use the following hook:
-
pytest_pycollect_makeitem
(collector, name, obj)[source]¶ return custom item/collector for a python object in a module, or None.
Stops at first non-None result, see firstresult: stop at first non-None result
-
pytest_make_parametrize_id
(config, val, argname)[source]¶ Return a user-friendly string representation of the given
val
that will be used by @pytest.mark.parametrize calls. Return None if the hook doesn’t know aboutval
. The parameter name is available asargname
, if required.Stops at first non-None result, see firstresult: stop at first non-None result
Parameters: - config (_pytest.config.Config) – pytest config object
- val – the parametrized value
- argname (str) – the automatic parameter name produced by pytest
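A conftest.py sketch; the special-casing of complex values is purely illustrative:

```python
# conftest.py -- illustrative sketch
def pytest_make_parametrize_id(config, val, argname):
    if isinstance(val, complex):
        # Provide a readable id for complex parametrize values.
        return "%s=%r" % (argname, val)
    return None  # fall back to pytest's automatically generated id

assert pytest_make_parametrize_id(None, 1 + 2j, "z") == "z=(1+2j)"
assert pytest_make_parametrize_id(None, "text", "s") is None
```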
After collection is complete, you can modify the order of items, delete or otherwise amend the test items:
-
pytest_collection_modifyitems
(session, config, items)[source]¶ called after collection has been performed, may filter or re-order the items in-place.
Parameters: - session (_pytest.main.Session) – the pytest session object
- config (_pytest.config.Config) – pytest config object
- items (List[_pytest.nodes.Item]) – list of item objects
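A common conftest.py use is re-ordering the collected items in place; the "slow" marker name below is illustrative:

```python
# conftest.py -- illustrative sketch: run tests marked "slow" last
def pytest_collection_modifyitems(session, config, items):
    # Stable sort: unmarked items keep their relative order and come first.
    items.sort(key=lambda item: item.get_closest_marker("slow") is not None)
```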
Reporting hooks¶
Session related reporting hooks:
-
pytest_report_header
(config, startdir)[source]¶ return a string or list of strings to be displayed as header info for terminal reporting.
Parameters: - config (_pytest.config.Config) – pytest config object
- startdir – py.path object with the starting dir
Note
This function should be implemented only in plugins or
conftest.py
files situated at the tests root directory due to how pytest discovers plugins during startup.
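For example, a conftest.py at the tests root can contribute header lines; the returned text is illustrative:

```python
# conftest.py -- illustrative sketch
def pytest_report_header(config, startdir):
    # Either a single string or a list of strings may be returned.
    return ["project: example", "start dir: %s" % startdir]

lines = pytest_report_header(None, "/tmp/proj")
assert lines == ["project: example", "start dir: /tmp/proj"]
```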
-
pytest_report_collectionfinish
(config, startdir, items)[source]¶ New in version 3.2.
return a string or list of strings to be displayed after collection has finished successfully.
These strings will be displayed after the standard “collected X items” message.
Parameters: - config (_pytest.config.Config) – pytest config object
- startdir – py.path object with the starting dir
- items – list of pytest items that are going to be executed; this list should not be modified.
-
pytest_report_teststatus
(report)[source]¶ return result-category, shortletter and verbose word for reporting.
Stops at first non-None result, see firstresult: stop at first non-None result
-
pytest_terminal_summary
(terminalreporter, exitstatus)[source]¶ Add a section to terminal summary reporting.
Parameters: - terminalreporter (_pytest.terminal.TerminalReporter) – the internal terminal reporter object
- exitstatus (int) – the exit status that will be reported back to the OS
New in version 3.5: The
config
parameter.
-
pytest_fixture_setup
(fixturedef, request)[source]¶ performs fixture setup execution.
Returns: The return value of the call to the fixture function Stops at first non-None result, see firstresult: stop at first non-None result
Note
If the fixture function returns None, other implementations of this hook function will continue to be called, according to the behavior of the firstresult: stop at first non-None result option.
-
pytest_fixture_post_finalizer
(fixturedef, request)[source]¶ called after fixture teardown, but before the cache is cleared so the fixture result cache
fixturedef.cached_result
can still be accessed.
And here is the central hook for reporting about test execution:
-
pytest_runtest_logreport
(report)[source]¶ process a test setup/call/teardown report relating to the respective phase of executing a test.
You can also use this hook to customize assertion representation for some types:
-
pytest_assertrepr_compare
(config, op, left, right)[source]¶ return explanation for comparisons in failing assert expressions.
Return None for no custom explanation, otherwise return a list of strings. The strings will be joined by newlines but any newlines in a string will be escaped. Note that all but the first line will be indented slightly, the intention is for the first line to be a summary.
Parameters: config (_pytest.config.Config) – pytest config object
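A conftest.py sketch built around a hypothetical user-defined Money class (both the class and the message text are illustrative):

```python
# conftest.py -- illustrative sketch with a hypothetical Money class
class Money:
    def __init__(self, amount, currency):
        self.amount, self.currency = amount, currency

    def __eq__(self, other):
        return (self.amount, self.currency) == (other.amount, other.currency)

def pytest_assertrepr_compare(config, op, left, right):
    if isinstance(left, Money) and isinstance(right, Money) and op == "==":
        # The first line is the summary; later lines are indented by pytest.
        return [
            "Comparing Money instances:",
            "   %s %s != %s %s" % (left.amount, left.currency,
                                   right.amount, right.currency),
        ]
    return None  # no custom explanation for other types

lines = pytest_assertrepr_compare(None, "==", Money(10, "EUR"), Money(7, "EUR"))
assert lines[0] == "Comparing Money instances:"
```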
Debugging/Interaction hooks¶
There are few hooks which can be used for special reporting or interaction with exceptions:
-
pytest_exception_interact
(node, call, report)[source]¶ called when an exception was raised which can potentially be interactively handled.
This hook is only called if an exception was raised that is not an internal exception like
skip.Exception
.
-
pytest_enter_pdb
(config)[source]¶ called upon pdb.set_trace(), can be used by plugins to take special action just before the python debugger enters in interactive mode.
Parameters: config (_pytest.config.Config) – pytest config object
Objects¶
Full reference to objects accessible from fixtures or hooks.
Collector¶
Config¶
-
class
Config
[source]¶ access to configuration values, pluginmanager and plugin hooks.
-
option
= None¶ access to command line options as attributes (deprecated; use
getoption()
instead)
-
pluginmanager
= None¶ a pluginmanager instance
-
add_cleanup
(func)[source]¶ Add a function to be called when the config object gets out of use (usually coinciding with pytest_unconfigure).
-
warn
(code, message, fslocation=None, nodeid=None)[source]¶ generate a warning for this test session.
-
addinivalue_line
(name, line)[source]¶ add a line to an ini-file option. The option must have been declared but might not yet be set, in which case the line becomes the first line in its value.
-
getini
(name)[source]¶ return configuration value from an ini file. If the specified name hasn’t been registered through a prior
parser.addini
call (usually from a plugin), a ValueError is raised.
-
getoption
(name, default=<NOTSET>, skip=False)[source]¶ return command line option value.
Parameters: - name – name of the option. You may also specify
the literal
--OPT
option instead of the “dest” option name. - default – default value if no option of that name exists.
- skip – if True, raise pytest.skip if the option does not exist or has a None value.
- name – name of the option. You may also specify
the literal
-
ExceptionInfo¶
-
class
ExceptionInfo
(tup=None, exprinfo=None)[source]¶ wraps sys.exc_info() objects and offers help for navigating the traceback.
-
type
= None¶ the exception class
-
value
= None¶ the exception instance
-
tb
= None¶ the exception raw traceback
-
typename
= None¶ the exception type name
-
traceback
= None¶ the exception traceback (_pytest._code.Traceback instance)
-
exconly
(tryshort=False)[source]¶ return the exception as a string
when ‘tryshort’ resolves to True, and the exception is a _pytest._code._AssertionError, only the actual exception part of the exception representation is returned (so ‘AssertionError: ‘ is removed from the beginning)
-
getrepr
(showlocals=False, style='long', abspath=False, tbfilter=True, funcargs=False, truncate_locals=True)[source]¶ return str()able representation of this exception info. showlocals: show locals per traceback entry. style: long|short|no|native traceback style. tbfilter: hide entries where __tracebackhide__ is true.
In case of style==native, tbfilter and showlocals are ignored.
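ExceptionInfo wraps the tuple returned by sys.exc_info(); the three pieces it stores can be shown with the stdlib alone:

```python
import sys

try:
    1 / 0
except ZeroDivisionError:
    # The (type, value, traceback) triple that ExceptionInfo wraps.
    etype, value, tb = sys.exc_info()

assert etype is ZeroDivisionError   # maps to ExceptionInfo.type
assert "division" in str(value)     # maps to ExceptionInfo.value
assert tb is not None               # maps to ExceptionInfo.tb, the raw traceback
```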
-
FSCollector¶
-
class
FSCollector
[source]¶ Bases:
_pytest.nodes.Collector
Function¶
-
class
Function
[source]¶ Bases:
_pytest.python.FunctionMixin
,_pytest.nodes.Item
,_pytest.compat.FuncargnamesCompatAttr
a Function Item is responsible for setting up and executing a Python test function.
-
originalname
= None¶ original function name, without any decorations (for example parametrization adds a
"[...]"
suffix to function names).New in version 3.0.
-
function
¶ underlying python ‘function’ object
-
Item¶
-
class
Item
[source]¶ Bases:
_pytest.nodes.Node
a basic test invocation item. Note that for a single function there might be multiple test invocation items.
-
user_properties
= None¶ user properties is a list of tuples (name, value) that holds user defined properties for this test.
-
MarkDecorator¶
-
class
MarkDecorator
(mark)[source]¶ A decorator for test functions and test classes. When applied it will create
MarkInfo
objects which may be retrieved by hooks as item keywords. MarkDecorator instances are often created like this:mark1 = pytest.mark.NAME # simple MarkDecorator mark2 = pytest.mark.NAME(name1=value) # parametrized MarkDecorator
and can then be applied as decorators to test functions:
@mark2 def test_function(): pass
- When a MarkDecorator instance is called it does the following:
- If called with a single class as its only positional argument and no additional keyword arguments, it attaches itself to the class so it gets applied automatically to all test cases found in that class.
- If called with a single function as its only positional argument and no additional keyword arguments, it attaches a MarkInfo object to the function, containing all the arguments already stored internally in the MarkDecorator.
- When called in any other case, it performs a ‘fake construction’ call, i.e. it returns a new MarkDecorator instance with the original MarkDecorator’s content updated with the arguments passed to this call.
Note: The rules above prevent MarkDecorator objects from storing only a single function or class reference as their positional argument with no additional keyword or positional arguments.
-
name
¶ alias for mark.name
-
args
¶ alias for mark.args
-
kwargs
¶ alias for mark.kwargs
MarkGenerator¶
-
class
MarkGenerator
[source]¶ Factory for
MarkDecorator
objects - exposed as apytest.mark
singleton instance. Example:import pytest @pytest.mark.slowtest def test_function(): pass
will set a ‘slowtest’
MarkInfo
object on thetest_function
object.
Mark¶
Metafunc¶
-
class
Metafunc
(definition, fixtureinfo, config, cls=None, module=None)[source]¶ Metafunc objects are passed to the
pytest_generate_tests
hook. They help to inspect a test function and to generate tests according to test configuration or values specified in the class or module where a test function is defined.-
config
= None¶ access to the
_pytest.config.Config
object for the test session
-
module
= None¶ the module object where the test function is defined in.
-
function
= None¶ underlying python test function
-
fixturenames
= None¶ set of fixture names required by the test function
-
cls
= None¶ class object where the test function is defined in or
None
.
-
parametrize
(argnames, argvalues, indirect=False, ids=None, scope=None)[source] Add new invocations to the underlying test function using the list of argvalues for the given argnames. Parametrization is performed during the collection phase. If you need to set up expensive resources, see about setting indirect to do it at test setup time instead.
Parameters: - argnames – a comma-separated string denoting one or more argument names, or a list/tuple of argument strings.
- argvalues – The list of argvalues determines how often a test is invoked with different argument values. If only one argname was specified argvalues is a list of values. If N argnames were specified, argvalues must be a list of N-tuples, where each tuple-element specifies a value for its respective argname.
- indirect – The list of argnames or boolean. A list of arguments’ names (subset of argnames). If True the list contains all names from the argnames. Each argvalue corresponding to an argname in this list will be passed as request.param to its respective argname fixture function so that it can perform more expensive setups during the setup phase of a test rather than at collection time.
- ids – list of string ids, or a callable. If strings, each is corresponding to the argvalues so that they are part of the test id. If None is given as id of specific test, the automatically generated id for that argument will be used. If callable, it should take one argument (a single argvalue) and return a string or return None. If None, the automatically generated id for that argument will be used. If no ids are provided they will be generated automatically from the argvalues.
- scope – if specified it denotes the scope of the parameters. The scope is used for grouping tests by parameter instances. It will also override any fixture-function defined scope, allowing to set a dynamic scope using test context or configuration.
-
addcall
(funcargs=None, id=<object object>, param=<object object>)[source]¶ Add a new call to the underlying test function during the collection phase of a test run.
Deprecated since version 3.3: Use
parametrize()
instead. Note that request.addcall() is called during the test collection phase, prior to and independently of actual test execution. You should only use addcall() if you need to specify multiple arguments of a test function.
Parameters: - funcargs – argument keyword dictionary used when invoking the test function.
- id – used for reporting and identification purposes. If you don’t supply an id an automatic unique id will be generated.
- param – a parameter which will be exposed to a later fixture function
invocation through the
request.param
attribute.
-
Node¶
-
class
Node
[source]¶ base class for Collector and Item, the components of the test collection tree. Collector subclasses have children; Items are terminal nodes.
-
name
= None¶ a unique name within the scope of the parent node
-
parent
= None¶ the parent collector node.
-
config
= None¶ the pytest config object
-
session
= None¶ the session this node is part of
-
fspath
= None¶ filesystem path where this node was collected from (can be None)
-
keywords
= None¶ keywords/markers collected from all scopes
-
own_markers
= None¶ the marker objects belonging to this node
-
extra_keyword_matches
= None¶ allow adding of extra keywords to use for matching
-
ihook
¶ fspath sensitive hook proxy used to call pytest hooks
-
nodeid
¶ a ::-separated string denoting its collection tree address.
-
listchain
()[source]¶ return list of all parent collectors up to self, starting from root of collection tree.
-
add_marker
(marker, append=True)[source]¶ dynamically add a marker object to the node.
Parameters: - marker – a str
orpytest.mark.*
object to add to the node.
- append – whether to append the marker (default); ifFalse
, insert it at position0
.
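A minimal sketch of using add_marker() from a collection hook, e.g. in a conftest.py; the name-matching rule and the slow marker are illustrative assumptions:

```python
import pytest

# Hypothetical conftest.py hook: dynamically mark every collected test
# whose name mentions "network" with @pytest.mark.slow.
def pytest_collection_modifyitems(config, items):
    for item in items:
        if "network" in item.name:
            # add_marker(..., append=False) would insert at position 0 instead
            item.add_marker(pytest.mark.slow)
```

The marker then behaves as if it had been applied statically to the test function.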
-
iter_markers
(name=None)[source]¶ iterate over all markers of the node. Parameters: name – if given, filter the results by the name attribute
-
for ... in
iter_markers_with_node
(name=None)[source]¶ iterate over all markers of the node; returns a sequence of (node, mark) tuples. Parameters: name – if given, filter the results by the name attribute
-
get_closest_marker
(name, default=None)[source]¶ return the first marker matching the name, from closest (for example function) to farther level (for example module level).
Parameters: - default – fallback return value of no marker was found
- name – name to filter by
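get_closest_marker() is commonly wrapped in a helper that resolves a marker value with a fallback. A minimal sketch; the timeout marker name and the default are assumptions for illustration (in a fixture, node would be request.node):

```python
# Sketch: read a hypothetical @pytest.mark.timeout(n) marker from a
# collection node, falling back to a default when no marker is set.
def resolve_timeout(node, default=10):
    marker = node.get_closest_marker("timeout")
    if marker is None:
        return default
    return marker.args[0]  # first positional argument of the marker
```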
-
get_marker
(name)[source]¶ get a marker object from this node or None if the node doesn’t have a marker with that name.
Deprecated since version 3.6: This function has been deprecated in favor of
Node.get_closest_marker
andNode.iter_markers
, see Updating code for more details.
-
Parser¶
-
class
Parser
[source]¶ Parser for command line arguments and ini-file values.
Variables: extra_info – dict of generic param -> value to display in case there’s an error processing the command line arguments. -
getgroup
(name, description='', after=None)[source]¶ get (or create) a named option Group.
Name: name of the option group. Description: long description for --help output. After: name of other group, used for ordering --help output. The returned group object has an
addoption
method with the same signature asparser.addoption
but will be shown in the respective group in the output ofpytest --help
.
-
addoption
(*opts, **attrs)[source]¶ register a command line option.
Opts: option names, can be short or long options. Attrs: same attributes that the add_option()
function of the argparse library accepts.After command line parsing options are available on the pytest config object via
config.option.NAME
whereNAME
is usually set by passing adest
attribute, for exampleaddoption("--long", dest="NAME", ...)
.
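A minimal conftest.py sketch using addoption(); the option name, dest and help text are illustrative, not existing pytest options:

```python
# Hypothetical conftest.py: register a --run-slow flag whose value will
# later be available as config.option.run_slow via the dest attribute.
def pytest_addoption(parser):
    parser.addoption(
        "--run-slow",
        action="store_true",
        default=False,
        dest="run_slow",
        help="also run tests marked as slow",
    )
```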
-
parse_known_args
(args, namespace=None)[source]¶ parses and returns a namespace object with known arguments at this point.
-
parse_known_and_unknown_args
(args, namespace=None)[source]¶ parses and returns a namespace object with known arguments, and the remaining arguments unknown at this point.
-
addini
(name, help, type=None, default=None)[source]¶ register an ini-file option.
Name: name of the ini-variable Type: type of the variable, can be pathlist
,args
,linelist
orbool
.Default: default value if no ini-file option exists but is queried. The value of ini-variables can be retrieved via a call to
config.getini(name)
.
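A sketch pairing addini() registration with later retrieval via config.getini(); the ini variable name, default, and the attribute used to stash the value are hypothetical:

```python
# Hypothetical conftest.py: register an ini variable and read it back.
def pytest_addoption(parser):
    parser.addini(
        "api_base_url",  # illustrative ini variable name
        help="base URL used by the test suite",
        default="http://localhost:8000",
    )

def pytest_configure(config):
    # config.getini() returns the ini-file value, or the default above.
    config._api_base_url = config.getini("api_base_url")  # illustrative stash
```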
-
PluginManager¶
-
class
PluginManager
[source]¶ Core Pluginmanager class which manages registration of plugin objects and 1:N hook calling.
You can register new hooks by calling
add_hookspecs(module_or_class)
. You can register plugin objects (which contain hooks) by callingregister(plugin)
. The Pluginmanager is initialized with a prefix that is searched for in the names of the dict of registered plugin objects.For debugging purposes you can call
enable_tracing()
which will subsequently send debug information to the trace helper.-
register
(plugin, name=None)[source]¶ Register a plugin and return its canonical name or None if the name is blocked from registering. Raise a ValueError if the plugin is already registered.
-
unregister
(plugin=None, name=None)[source]¶ unregister a plugin object and all its contained hook implementations from internal data structures.
-
add_hookspecs
(module_or_class)[source]¶ add new hook specifications defined in the given module_or_class. Functions are recognized if they have been decorated accordingly.
-
get_canonical_name
(plugin)[source]¶ Return canonical name for a plugin object. Note that a plugin may be registered under a different name which was specified by the caller of register(plugin, name). To obtain the name of a registered plugin use
get_name(plugin)
instead.
-
check_pending
()[source]¶ Verify that all hooks which have not been verified against a hook specification are optional, otherwise raise PluginValidationError
-
load_setuptools_entrypoints
(entrypoint_name)[source]¶ Load modules from querying the specified setuptools entrypoint name. Return the number of loaded plugins.
-
list_plugin_distinfo
()[source]¶ return list of distinfo/plugin tuples for all setuptools registered plugins.
-
add_hookcall_monitoring
(before, after)[source]¶ add before/after tracing functions for all hooks and return an undo function which, when called, will remove the added tracers.
before(hook_name, hook_impls, kwargs)
will be called ahead of all hook calls and receive a hookcaller instance, a list of HookImpl instances and the keyword arguments for the hook call.after(outcome, hook_name, hook_impls, kwargs)
receives the same arguments asbefore
but also a_Result
object which represents the result of the overall hook call.
-
PytestPluginManager¶
-
class
PytestPluginManager
[source]¶ Bases:
pluggy.manager.PluginManager
Overwrites
pluggy.PluginManager
to add pytest-specific functionality:- loading plugins from the command line,
PYTEST_PLUGINS
env variable andpytest_plugins
global variables found in plugins being loaded; conftest.py
loading during start-up;
-
addhooks
(module_or_class)[source]¶ Deprecated since version 2.8.
Use
pluggy.PluginManager.add_hookspecs
instead.
-
Session¶
-
class
Session
[source]¶ Bases:
_pytest.nodes.FSCollector
-
exception
Interrupted
¶ Bases:
KeyboardInterrupt
signals an interrupted test run.
-
TestReport¶
-
class
TestReport
[source]¶ Basic test report object (also used for setup and teardown calls if they fail).
-
nodeid
= None¶ normalized collection node id
-
location
= None¶ a (filesystempath, lineno, domaininfo) tuple indicating the actual location of a test item - it might be different from the collected one e.g. if a method is inherited from a different module.
-
keywords
= None¶ a name -> value dictionary containing all keywords and markers associated with a test invocation.
-
outcome
= None¶ test outcome, always one of “passed”, “failed”, “skipped”.
-
longrepr
= None¶ None or a failure representation.
-
when
= None¶ one of ‘setup’, ‘call’, ‘teardown’ to indicate runtest phase.
-
user_properties
= None¶ a list of (name, value) tuples holding user-defined properties of the test
-
sections
= None¶ list of pairs
(str, str)
of extra information which needs to be marshallable. Used by pytest to add captured text fromstdout
andstderr
, but may be used by other plugins to add arbitrary information to reports.
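A hookwrapper sketch (e.g. in a conftest.py) that appends an extra (title, text) pair to a report's sections for failed calls; the section title and content are illustrative:

```python
import pytest

# Hypothetical conftest.py hookwrapper: after the report is built,
# attach an extra section to failed test calls. Both elements of the
# pair must be strings so the report stays marshallable.
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        report.sections.append(("extra info", f"failed node: {report.nodeid}"))
```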
-
duration
= None¶ time it took to run just the test
-
caplog
¶ Return captured log lines, if log capturing is enabled
New in version 3.5.
-
capstderr
¶ Return captured text from stderr, if capturing is enabled
New in version 3.0.
-
capstdout
¶ Return captured text from stdout, if capturing is enabled
New in version 3.0.
-
longreprtext
¶ Read-only property that returns the full string representation of
longrepr
.New in version 3.0.
-
_Result¶
-
class
_Result
(result, excinfo)[source]¶ -
result
¶ Get the result(s) for this hook call (DEPRECATED in favor of
get_result()
).
-
Special Variables¶
pytest treats some global variables in a special manner when defined in a test module.
pytest_plugins¶
Tutorial: Requiring/Loading plugins in a test module or conftest file
Can be declared at the global level in test modules and conftest.py files to register additional plugins.
Can be either a str
or Sequence[str]
.
pytest_plugins = "myapp.testsupport.myplugin"
pytest_plugins = ("myapp.testsupport.tools", "myapp.testsupport.regression")
pytestmark¶
Tutorial: Marking whole classes or modules
Can be declared at the global level in test modules to apply one or more marks to all test functions and methods. Can be either a single mark or a sequence of marks.
import pytest
pytestmark = pytest.mark.webtest
import pytest
pytestmark = (pytest.mark.integration, pytest.mark.slow)
PYTEST_DONT_REWRITE (module docstring)¶
The text PYTEST_DONT_REWRITE
can be added to any module docstring to disable
assertion rewriting for that module.
Environment Variables¶
Environment variables that can be used to change pytest’s behavior.
PYTEST_ADDOPTS¶
This contains a command line (parsed by the shlex module) that will be prepended to the command line given by the user, see How to change command line options defaults for more information.
PYTEST_DEBUG¶
When set, pytest will print tracing and debug information.
PYTEST_PLUGINS¶
Contains a comma-separated list of modules that should be loaded as plugins:
export PYTEST_PLUGINS=mymodule.plugin,xdist
PYTEST_CURRENT_TEST¶
This is not meant to be set by users, but is set by pytest internally with the name of the current test so other processes can inspect it, see PYTEST_CURRENT_TEST environment variable for more information.
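A sketch of how another process might read the variable; the value has the documented form "<nodeid> (<phase>)", while the helper name is illustrative:

```python
import os

# Illustrative helper: split PYTEST_CURRENT_TEST into the node id and
# the runtest phase ("setup", "call" or "teardown").
def current_test():
    value = os.environ.get("PYTEST_CURRENT_TEST")
    if value is None:
        return None  # pytest is not currently running a test
    nodeid, _, phase = value.rpartition(" ")
    return nodeid, phase.strip("()")
```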
Configuration Options¶
Here is a list of builtin configuration options that may be written in a pytest.ini
, tox.ini
or setup.cfg
file, usually located at the root of your repository. All options must be under a [pytest]
section
([tool:pytest]
for setup.cfg
files).
Configuration file options may be overwritten in the command-line by using -o/--override
, which can also be
passed multiple times. The expected format is name=value
. For example:
pytest -o console_output_style=classic -o cache_dir=/tmp/mycache
-
addopts
¶ Add the specified
OPTS
to the set of command line arguments as if they had been specified by the user. Example: if you have this ini file content:# content of pytest.ini [pytest] addopts = --maxfail=2 -rf # exit after 2 failures, report fail info
issuing
pytest test_hello.py
actually means:pytest --maxfail=2 -rf test_hello.py
Default is to add no options.
-
cache_dir
¶ New in version 3.2.
Sets the directory where the cache plugin stores its content. The default directory is
.pytest_cache
which is created in rootdir. The directory may be a relative or an absolute path; a relative path is created relative to rootdir. Additionally, the path may contain environment variables, which will be expanded. For more information about the cache plugin please refer to Cache: working with cross-testrun state.
-
confcutdir
¶ Sets the directory at which the upward search for
conftest.py
files stops. By default, pytest will stop searching forconftest.py
files upwards frompytest.ini
/tox.ini
/setup.cfg
of the project if any, or up to the file-system root.
-
console_output_style
¶ New in version 3.3.
Sets the console output style while running tests:
classic
: classic pytest output.progress
: like classic pytest output, but with a progress indicator.
The default is
progress
, but you can fall back toclassic
if you prefer or the new mode is causing unexpected problems:# content of pytest.ini [pytest] console_output_style = classic
-
doctest_encoding
¶ New in version 3.1.
Default encoding to use to decode text files with docstrings. See how pytest handles doctests.
-
doctest_optionflags
¶ One or more doctest flag names from the standard
doctest
module. See how pytest handles doctests.
-
empty_parameter_set_mark
¶ New in version 3.4.
Selects the action to take for empty parameter sets in parametrization
skip
skips tests with an empty parameterset (default)xfail
marks tests with an empty parameterset as xfail(run=False)
# content of pytest.ini [pytest] empty_parameter_set_mark = xfail
Note
The default value of this option is planned to change to
xfail
in future releases as this is considered less error prone, see #3155 for more details.
-
filterwarnings
¶ New in version 3.1.
Sets a list of filters and actions that should be taken for matched warnings. By default all warnings emitted during the test session will be displayed in a summary at the end of the test session.
# content of pytest.ini [pytest] filterwarnings = error ignore::DeprecationWarning
This tells pytest to ignore deprecation warnings and turn all other warnings into errors. For more information please refer to Warnings Capture.
-
junit_suite_name
¶ New in version 3.1.
To set the name of the root test suite xml item, you can configure the
junit_suite_name
option in your config file:[pytest] junit_suite_name = my_suite
-
log_cli_date_format
¶ New in version 3.3.
Sets a
time.strftime()
-compatible string that will be used when formatting dates for live logging.[pytest] log_cli_date_format = %Y-%m-%d %H:%M:%S
For more information, see Live Logs.
-
log_cli_format
¶ New in version 3.3.
Sets a
logging
-compatible string used to format live logging messages.[pytest] log_cli_format = %(asctime)s %(levelname)s %(message)s
For more information, see Live Logs.
-
log_cli_level
¶ New in version 3.3.
Sets the minimum log message level that should be captured for live logging. The integer value or the names of the levels can be used.
[pytest] log_cli_level = INFO
For more information, see Live Logs.
-
log_date_format
¶ New in version 3.3.
Sets a
time.strftime()
-compatible string that will be used when formatting dates for logging capture.[pytest] log_date_format = %Y-%m-%d %H:%M:%S
For more information, see Logging.
-
log_file
¶ New in version 3.3.
Sets a file name relative to the
pytest.ini
file where log messages should be written to, in addition to the other logging facilities that are active.[pytest] log_file = logs/pytest-logs.txt
For more information, see Logging.
-
log_file_date_format
¶ New in version 3.3.
Sets a
time.strftime()
-compatible string that will be used when formatting dates for the logging file.[pytest] log_file_date_format = %Y-%m-%d %H:%M:%S
For more information, see Logging.
-
log_file_format
¶ New in version 3.3.
Sets a
logging
-compatible string used to format logging messages redirected to the logging file.[pytest] log_file_format = %(asctime)s %(levelname)s %(message)s
For more information, see Logging.
-
log_file_level
¶ New in version 3.3.
Sets the minimum log message level that should be captured for the logging file. The integer value or the names of the levels can be used.
[pytest] log_file_level = INFO
For more information, see Logging.
-
log_format
¶ New in version 3.3.
Sets a
logging
-compatible string used to format captured logging messages.[pytest] log_format = %(asctime)s %(levelname)s %(message)s
For more information, see Logging.
-
log_level
¶ New in version 3.3.
Sets the minimum log message level that should be captured for logging capture. The integer value or the names of the levels can be used.
[pytest] log_level = INFO
For more information, see Logging.
-
log_print
¶ New in version 3.3.
If set to
False
, will disable displaying captured logging messages for failed tests.[pytest] log_print = False
For more information, see Logging.
-
markers
¶ List of markers that are allowed in test functions, enforced when
--strict
command-line argument is used. You can use a marker name per line, indented from the option name.[pytest] markers = slow serial
-
minversion
¶ Specifies a minimal pytest version required for running tests.
# content of pytest.ini [pytest] minversion = 3.0 # will fail if we run with pytest-2.8
-
norecursedirs
¶ Set the directory basename patterns to avoid when recursing for test discovery. The individual (fnmatch-style) patterns are applied to the basename of a directory to decide whether to recurse into it. Pattern matching characters:
* matches everything ? matches any single character [seq] matches any character in seq [!seq] matches any char not in seq
Default patterns are
'.*', 'build', 'dist', 'CVS', '_darcs', '{arch}', '*.egg', 'venv'
. Setting anorecursedirs
replaces the default. Here is an example of how to avoid certain directories:[pytest] norecursedirs = .svn _build tmp*
This would tell
pytest
to not look into typical subversion or sphinx-build directories or into anytmp
prefixed directory.Additionally,
pytest
will attempt to intelligently identify and ignore a virtualenv by the presence of an activation script. Any directory deemed to be the root of a virtual environment will not be considered during test collection unless--collect-in-virtualenv
is given. Note also thatnorecursedirs
takes precedence over--collect-in-virtualenv
; e.g. if you intend to run tests in a virtualenv with a base directory that matches'.*'
you must overridenorecursedirs
in addition to using the--collect-in-virtualenv
flag.
-
python_classes
¶ One or more name prefixes or glob-style patterns determining which classes are considered for test collection. Search for multiple glob patterns by adding a space between patterns. By default, pytest will consider any class prefixed with
Test
as a test collection. Here is an example of how to collect tests from classes that end inSuite
:[pytest] python_classes = *Suite
Note that
unittest.TestCase
derived classes are always collected regardless of this option, asunittest
’s own collection framework is used to collect those tests.
-
python_files
¶ One or more Glob-style file patterns determining which python files are considered as test modules. Search for multiple glob patterns by adding a space between patterns:
[pytest] python_files = test_*.py check_*.py example_*.py
By default, pytest will consider any file matching the
test_*.py
and*_test.py
globs as a test module.
-
python_functions
¶ One or more name prefixes or glob-patterns determining which test functions and methods are considered tests. Search for multiple glob patterns by adding a space between patterns. By default, pytest will consider any function prefixed with
test
as a test. Here is an example of how to collect test functions and methods that end in_test
:[pytest] python_functions = *_test
Note that this has no effect on methods that live on a
unittest.TestCase
derived class, asunittest
’s own collection framework is used to collect those tests.See Changing naming conventions for more detailed examples.
-
testpaths
¶ New in version 2.8.
Sets a list of directories that should be searched for tests when no specific directories, files or test ids are given on the command line when executing pytest from the rootdir directory. Useful when all project tests are in a known location to speed up test collection and to avoid picking up undesired tests by accident.
[pytest] testpaths = testing doc
This tells pytest to only look for tests in
testing
anddoc
directories when executing from the root directory.
-
usefixtures
¶ List of fixtures that will be applied to all test functions; this is semantically the same as applying the
@pytest.mark.usefixtures
marker to all test functions.[pytest] usefixtures = clean_db
-
xfail_strict
¶ If set to
True
, tests marked with@pytest.mark.xfail
that actually succeed will by default fail the test suite. For more information, see strict parameter.[pytest] xfail_strict = True