Replies: 4 comments 2 replies
-
The work-around I came up with for now, to keep using the nice parametrizations in pytest, is to make the fixture return a factory function (which can be cached w/o problems for my use case).

Original:

```python
import itertools

import pytest

import mymodule

@pytest.fixture(scope="function", params=list(itertools.product([1, 3], [0, 1])))
def obj(C, D, request):
    A = request.param[0]
    B = request.param[1]
    obj = mymodule.Obj(A, B, C, D)
    return obj  # don't want to cache this between tests
```

Uncached fixture:

```python
import itertools

import pytest

import mymodule

@pytest.fixture(scope="function", params=list(itertools.product([1, 3], [0, 1])))
def make_obj(C, D, request):
    def create():
        A = request.param[0]
        B = request.param[1]
        obj = mymodule.Obj(A, B, C, D)
        return obj
    return create
```

(In the test, the first thing I call is the returned factory, e.g. `obj = make_obj()`.)
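For completeness, a minimal sketch of how a test could use such a factory fixture (the test name and assertion here are made up for illustration):

```python
# Hypothetical test using the make_obj factory fixture above: the object is
# created inside the test body, so only the factory itself is held by pytest.
def test_obj(make_obj):
    obj = make_obj()        # fresh mymodule.Obj for this test only
    assert obj is not None  # placeholder assertion
```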
-
It's absolutely unclear what you mean. Fixtures in function scope are not cached. And if you want to rely on …
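For reference, a minimal sketch (not from the original comment) of what "not cached" means for function scope: the fixture body runs again for every test that requests it.

```python
import itertools

import pytest

_counter = itertools.count()

@pytest.fixture(scope="function")
def obj():
    # Runs once per requesting test, so nothing is reused across tests.
    return next(_counter)

def test_first(obj):
    assert obj == 0

def test_second(obj):
    assert obj == 1  # a new value: the fixture was set up again for this test
```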
-
Did you have a chance to see the example in the link in the description?

```python
# conftest.py
import gc
import sys

import numpy as np
import pytest

@pytest.fixture(scope="function")  # can I disable pytest caching for this yield?
def obj():
    v = np.array([1, 2, 3])
    print("startup: ", sys.getrefcount(v) - 1)  # refs: 1
    yield v
    gc.collect()
    print("\nteardown: ", sys.getrefcount(v) - 1)  # refs: 3 - why?
    del v  # still refs remaining
```

```python
# test_foo.py
import sys

def test_foo(obj):
    print("++ test_foo ++")
    print("in test:", sys.getrefcount(obj) - 1)
```

Oh really? That's confusing to me, because in the linked example there are still 3 references (in CPython) on the function-scope yielded object.

Can you please be a bit more detailed about what you mean by that?
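One way to investigate where those extra references come from (a diagnostic sketch, not part of the original example) is `gc.get_referrers`, which lists the objects that still point at the yielded value at teardown time:

```python
# conftest.py -- same fixture as above, with a referrer dump at teardown
import gc

import numpy as np
import pytest

@pytest.fixture(scope="function")
def obj():
    v = np.array([1, 2, 3])
    yield v
    gc.collect()
    # Print the types of the objects that still reference the yielded value.
    for ref in gc.get_referrers(v):
        print(type(ref))
```

Run with `pytest -s` so the prints are not captured.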
-
Thanks for this discussion! The lingering references are very sneaky and can cause subtle issues, particularly for people trying to control garbage collection. For example, this comes up often when testing multiple models that run on a GPU. This simple strategy does not work:

```python
import gc

import pytest
import torch

@pytest.fixture(scope="class")
def model():
    yield load_model()        # load_model() is the test suite's own helper
    gc.collect()
    torch.cuda.empty_cache()
```

At first glance, one would expect the garbage collection to remove all references to the model and thus allow torch to free up the GPU memory. However, as @RonnyPfannschmidt pointed out, there are still references to the model! This means that if you run two GPU models back-to-back, Python might not have garbage-collected (and thus allowed torch to clear the GPU memory) by the time the second model loads. (See, for example, this discussion.)

There seem to be two approaches that do work. First, you can garbage-collect before loading each model. This drops the leftover references to tensors on the GPU and allows torch to reclaim that memory. For example:

```python
# Approach 1:
@pytest.fixture(scope="class")
def model():
    gc.collect()
    yield load_model()
```

The second approach is to delete all the tensors (or objects containing tensors) from the model and then garbage-collect. For example:

```python
# Approach 2:
@pytest.fixture(scope="class")
def model():
    model = load_model()
    yield model
    del model.text_model
    del model.vision_model
    gc.collect()
```

Approach 2 has the advantage of clearing the memory after each test, though it's more boilerplate and it depends on each model.
-
Is there a way to deactivate caching in a fixture in pytest? Or is there a recommended way to write generators of objects that are uncached in pytest but still use decorators like `parametrize`?

This is based somewhat on the original question about the behavior of caches in #5642 (comment) and on how to use generators across tests that should yield objects that are not cached.