Prerequisites
- Basic understanding of programming concepts
- Python installation (3.8+)
- VS Code or preferred IDE
What you'll learn
- Understand the concept fundamentals
- Apply the concept in real projects
- Debug common issues
- Write clean, Pythonic code
Pytest Plugins: Extending Functionality
Welcome, testing champion! Ever wished you could supercharge your pytest setup with custom features? That's exactly what pytest plugins are for! They're like power-ups for your testing suite, letting you add new capabilities, customize behavior, and share testing tools across projects. Let's dive into this exciting world!
Introduction
Imagine you're playing your favorite video game, and you can install mods that add new weapons, maps, or abilities. Pytest plugins work the same way! They extend pytest's functionality, adding new features that make your testing life easier and more powerful.
In this tutorial, you'll learn:
- What pytest plugins are and why they're awesome
- How to find and use existing plugins
- Creating your own plugins from scratch
- Best practices for plugin development
- Real-world plugin examples that'll blow your mind
Understanding Pytest Plugins
What Are Pytest Plugins?
Pytest plugins are Python modules that extend pytest's functionality. They can:
- Add new command-line options
- Create custom fixtures
- Modify test collection and execution
- Generate custom reports
- And so much more!
Think of them as LEGO blocks: you can snap them together to build exactly the testing framework you need!
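The simplest plugin is a local conftest.py: hook functions are plain module-level functions that pytest discovers by name. As a minimal sketch (the `--env` option name and the marker text are made up for illustration):

```python
# conftest.py -- a hypothetical minimal local plugin

def pytest_addoption(parser):
    """Add a custom command-line option to pytest."""
    parser.addoption("--env", default="dev", help="target environment name")

def pytest_configure(config):
    """Register a custom marker so pytest doesn't warn about it."""
    config.addinivalue_line(
        "markers", "env(name): run only in the given environment"
    )
```

Drop this file next to your tests and pytest picks it up automatically, with no installation step.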
The Plugin Ecosystem
Pytest has a vibrant ecosystem with hundreds of plugins available. Some popular ones include:
- pytest-cov: coverage reporting
- pytest-xdist: parallel test execution
- pytest-mock: enhanced mocking
- pytest-timeout: test timeout management
- pytest-django: Django integration
Basic Plugin Usage
Installing Plugins
Installing pytest plugins is super easy with pip:
# Installing popular pytest plugins
pip install pytest-cov      # Coverage reports
pip install pytest-xdist    # Parallel testing
pip install pytest-timeout  # Timeout control
pip install pytest-mock     # Better mocking
Using Installed Plugins
Once installed, most plugins work automatically! Let's see them in action:
# test_with_plugins.py
import time
import pytest

def slow_function():
    """A function that takes forever"""
    time.sleep(2)
    return "Finally done!"

@pytest.mark.timeout(1)  # Using pytest-timeout
def test_slow_function_times_out():
    """pytest-timeout fails the test when the limit is exceeded.
    The timeout is not a catchable exception inside the test,
    so this test is expected to FAIL after 1 second."""
    slow_function()

def test_fast_function(mocker):  # Using pytest-mock
    """Mock the slow function to be fast!"""
    # Patch where the function is looked up: this test module
    mock_slow = mocker.patch('test_with_plugins.slow_function')
    mock_slow.return_value = "Instantly done!"
    result = slow_function()
    assert result == "Instantly done!"

# Run with coverage: pytest --cov=.
# Run in parallel:   pytest -n 4
Creating Your First Plugin
Plugin Basics
Let's create a simple plugin that adds extra feedback to test output!
# pytest_emoji.py
"""A pytest plugin that adds friendly feedback to test results!"""
import pytest

def pytest_configure(config):
    """Plugin initialization"""
    print("\nFeedback plugin activated! Let's make testing fun!")

# A hook that yields must be declared as a wrapper,
# otherwise pytest never resumes it after the yield.
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    """Announce each test before it runs"""
    print(f"\nRunning: {item.name}")
    yield  # the actual test runs here

def pytest_runtest_logreport(report):
    """Print a PASSED/FAILED line for the test call phase"""
    if report.when == "call":
        print("PASSED!" if report.passed else "FAILED!")

# To use: pytest -p pytest_emoji
Creating a Fixture Plugin
Let's build a plugin that provides useful testing fixtures:
# pytest_test_data.py
"""Plugin providing test data fixtures"""
import pytest
import random
from datetime import datetime, timedelta

@pytest.fixture
def random_user():
    """Generate random user data"""
    first_names = ["Alice", "Bob", "Charlie", "Diana", "Eve"]
    last_names = ["Smith", "Johnson", "Williams", "Brown", "Jones"]
    return {
        "first_name": random.choice(first_names),
        "last_name": random.choice(last_names),
        "age": random.randint(18, 80),
        "email": f"{random.choice(first_names).lower()}@example.com",
        "is_active": random.choice([True, False])
    }

@pytest.fixture
def sample_products():
    """Generate sample e-commerce products"""
    products = [
        {"name": "Laptop", "price": 999.99, "category": "Electronics"},
        {"name": "Coffee Maker", "price": 79.99, "category": "Kitchen"},
        {"name": "Running Shoes", "price": 129.99, "category": "Sports"},
        {"name": "Book", "price": 14.99, "category": "Education"},
        {"name": "Headphones", "price": 199.99, "category": "Electronics"}
    ]
    return products

@pytest.fixture
def time_machine():
    """Fixture for time-based testing"""
    class TimeMachine:
        def __init__(self):
            self.current_time = datetime.now()

        def travel_days(self, days):
            """Travel forward or backward in time"""
            self.current_time += timedelta(days=days)
            return self.current_time

        def reset(self):
            """Reset to present time"""
            self.current_time = datetime.now()
            return self.current_time

    return TimeMachine()

# Using our fixtures in tests
def test_user_creation(random_user):
    """Test with random user data"""
    assert random_user["age"] >= 18
    assert "@" in random_user["email"]
    print(f"Testing user: {random_user['first_name']} {random_user['last_name']}")

def test_shopping_cart(sample_products):
    """Test e-commerce functionality"""
    total = sum(p["price"] for p in sample_products)
    assert total > 0
    assert len(sample_products) == 5
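The time_machine fixture doesn't appear in the example tests above, so here is a standalone sketch of the same class (duplicated without pytest so it runs on its own) showing how time travel behaves:

```python
from datetime import datetime, timedelta

# Standalone copy of the class the time_machine fixture returns
class TimeMachine:
    def __init__(self):
        self.current_time = datetime.now()

    def travel_days(self, days):
        """Travel forward (positive) or backward (negative) in time"""
        self.current_time += timedelta(days=days)
        return self.current_time

    def reset(self):
        """Snap back to the present"""
        self.current_time = datetime.now()
        return self.current_time

tm = TimeMachine()
start = tm.current_time
next_week = tm.travel_days(7)       # forward one week
assert next_week - start == timedelta(days=7)
tm.travel_days(-14)                 # then back two weeks
assert tm.current_time < start
```

In a real test you would simply take `time_machine` as a parameter and call the same methods on it.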
Advanced Plugin Concepts
Custom Markers Plugin
Create custom test markers for better organization:
# pytest_custom_markers.py
"""Plugin for custom test markers"""
import pytest
import time

def pytest_configure(config):
    """Register custom markers"""
    config.addinivalue_line(
        "markers", "slow: marks tests as slow (deselect with '-m \"not slow\"')"
    )
    config.addinivalue_line(
        "markers", "integration: marks tests as integration tests"
    )
    config.addinivalue_line(
        "markers", "smoke: marks tests for smoke testing"
    )

@pytest.hookimpl(tryfirst=True)
def pytest_runtest_setup(item):
    """Custom setup based on markers"""
    markers = [marker.name for marker in item.iter_markers()]
    if "slow" in markers:
        print(f"\nRunning slow test: {item.name}")
    if "integration" in markers:
        print(f"\nRunning integration test: {item.name}")
    if "smoke" in markers:
        print(f"\nRunning smoke test: {item.name}")

# Example tests using custom markers
@pytest.mark.slow
def test_data_processing():
    """Heavy data processing test"""
    time.sleep(1)  # Simulate slow operation
    data = [i ** 2 for i in range(1000000)]
    assert len(data) == 1000000

@pytest.mark.integration
def test_api_connection():
    """Test external API integration"""
    # Simulate API call
    response = {"status": "success", "data": [1, 2, 3]}
    assert response["status"] == "success"

@pytest.mark.smoke
def test_basic_functionality():
    """Quick smoke test"""
    assert 1 + 1 == 2
    assert "hello".upper() == "HELLO"
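Markers can also be registered declaratively instead of in pytest_configure. The same three markers in a pytest.ini (or the equivalent `[tool.pytest.ini_options]` table of pyproject.toml) look like this:

```ini
[pytest]
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')
    integration: marks tests as integration tests
    smoke: marks tests for smoke testing
```

Use the ini-file form for project-local markers; keep the pytest_configure form when the markers ship inside a distributable plugin.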
Report Enhancement Plugin
Create beautiful test reports:
# pytest_fancy_report.py
"""Plugin for enhanced test reporting"""
from datetime import datetime

class ReportCollector:
    """Custom test report collector"""
    def __init__(self):
        self.passed = []
        self.failed = []
        self.skipped = []
        self.start_time = None
        self.end_time = None

# Module-level instance; the hook parameter below must be
# named `report`, so the collector gets a distinct name.
collector = ReportCollector()

def pytest_sessionstart(session):
    """Test session started"""
    collector.start_time = datetime.now()
    print("\n" + "=" * 50)
    print("TEST SESSION STARTED")
    print(f"Time: {collector.start_time.strftime('%Y-%m-%d %H:%M:%S')}")
    print("=" * 50)

def pytest_runtest_logreport(report):
    """Log individual test results"""
    if report.when == "call":
        if report.passed:
            collector.passed.append(report.nodeid)
        elif report.failed:
            collector.failed.append(report.nodeid)
        elif report.skipped:
            collector.skipped.append(report.nodeid)

def pytest_sessionfinish(session, exitstatus):
    """Test session finished"""
    collector.end_time = datetime.now()
    duration = (collector.end_time - collector.start_time).total_seconds()
    print("\n" + "=" * 50)
    print("TEST RESULTS SUMMARY")
    print("=" * 50)
    print(f"Passed: {len(collector.passed)}")
    print(f"Failed: {len(collector.failed)}")
    print(f"Skipped: {len(collector.skipped)}")
    print(f"Duration: {duration:.2f} seconds")
    print("=" * 50)
    if collector.failed:
        print("\nFailed tests:")
        for test in collector.failed:
            print(f"  - {test}")
    if exitstatus == 0:
        print("\nAll tests passed! Great job!")
    else:
        print("\nKeep going! You'll fix those failures!")
Common Pitfalls and Solutions
Wrong: Plugin Name Conflicts
# BAD: Using common names that might conflict
# my_plugin.py
def pytest_configure(config):
    config.my_data = {"key": "value"}  # Might overwrite another plugin's attribute!

# BAD: Not checking if attribute exists
def pytest_unconfigure(config):
    del config.my_data  # AttributeError if never set!
Right: Safe Plugin Development
# GOOD: Using unique namespaces
# my_awesome_plugin.py
def pytest_configure(config):
    """Safe plugin configuration"""
    # Use unique attribute names
    if not hasattr(config, '_my_awesome_plugin_data'):
        config._my_awesome_plugin_data = {"key": "value"}

def pytest_unconfigure(config):
    """Safe cleanup"""
    # Check before deleting
    if hasattr(config, '_my_awesome_plugin_data'):
        del config._my_awesome_plugin_data
# GOOD: Proper error handling in a wrapper hook
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    """Keep plugin errors from breaking the test run"""
    try:
        # Only guard the plugin's own code; the test itself
        # runs at the yield and reports its result normally.
        print(f"Running: {item.name}")
    except Exception as e:
        print(f"Plugin error: {e}")
    yield
Best Practices
1. Plugin Structure
# my_pytest_plugin/
# ├── __init__.py
# ├── plugin.py      # Main plugin code
# ├── fixtures.py    # Custom fixtures
# ├── hooks.py       # Hook implementations
# └── conftest.py    # Plugin configuration

# plugin.py
"""Main plugin entry point"""
from .fixtures import *
from .hooks import *

def pytest_configure(config):
    """Plugin initialization"""
    print("My awesome plugin loaded!")

# fixtures.py
"""Custom fixtures"""
import pytest

@pytest.fixture(scope="session")
def database_connection():
    """Shared database connection"""
    conn = create_connection()  # placeholder: supply your own connection factory
    yield conn
    conn.close()

# hooks.py
"""Custom hooks"""
import pytest

@pytest.hookimpl
def pytest_collection_modifyitems(items):
    """Modify test collection"""
    # Sort tests by name
    items.sort(key=lambda x: x.name)
2. Plugin Testing
Always test your plugins! The testdir fixture below comes from pytest's bundled pytester plugin, which must be enabled, for example with pytest_plugins = "pytester" in your test suite's conftest.py.
# test_my_plugin.py
"""Testing our custom plugin"""

def test_plugin_loads(testdir):
    """Test that plugin loads correctly"""
    testdir.makepyfile("""
        def test_example():
            assert True
    """)
    result = testdir.runpytest("-p", "my_pytest_plugin")
    result.assert_outcomes(passed=1)
    result.stdout.fnmatch_lines(["*My awesome plugin loaded!*"])

def test_custom_fixture(testdir):
    """Test our custom fixtures work"""
    testdir.makepyfile("""
        def test_with_fixture(random_user):
            assert "email" in random_user
            assert random_user["age"] >= 18
    """)
    result = testdir.runpytest()
    result.assert_outcomes(passed=1)
3. Distribution
Share your plugin with the world!
# setup.py
from setuptools import setup, find_packages

setup(
    name="pytest-awesome",
    version="1.0.0",
    packages=find_packages(),
    entry_points={
        "pytest11": [
            "awesome = pytest_awesome.plugin",
        ],
    },
    install_requires=["pytest>=6.0"],
    classifiers=[
        "Framework :: Pytest",
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
    ],
)

# Publishing
# python setup.py sdist bdist_wheel
# twine upload dist/*
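Newer projects typically declare the same pytest11 entry point in pyproject.toml instead of setup.py. A rough equivalent, assuming the same package layout as above:

```toml
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "pytest-awesome"
version = "1.0.0"
dependencies = ["pytest>=6.0"]
classifiers = [
    "Framework :: Pytest",
    "Programming Language :: Python :: 3",
    "License :: OSI Approved :: MIT License",
]

[project.entry-points.pytest11]
awesome = "pytest_awesome.plugin"
```

With this layout, building becomes `python -m build` followed by `twine upload dist/*`. Either way, the pytest11 entry point is what makes pytest load your plugin automatically after installation.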
Hands-On Exercise
Ready to build your own plugin? Let's create a performance monitoring plugin!
Challenge: Create a pytest plugin that:
- Tracks test execution time
- Warns about slow tests (>1 second)
- Generates a performance report
- Provides a fixture for benchmarking
Try it yourself first! When ready, check the solution below.
Solution:
# pytest_performance.py
"""Performance monitoring plugin for pytest"""
import pytest
import time
from collections import defaultdict

class PerformanceMonitor:
    """Tracks test performance metrics"""
    def __init__(self):
        self.test_times = defaultdict(float)
        self.slow_tests = []
        self.threshold = 1.0  # seconds

    def record_time(self, test_name, duration):
        """Record test execution time"""
        self.test_times[test_name] = duration
        if duration > self.threshold:
            self.slow_tests.append((test_name, duration))

# Global monitor instance
monitor = PerformanceMonitor()

def pytest_configure(config):
    """Plugin configuration"""
    config._performance_monitor = monitor
    print("\nPerformance monitoring enabled!")

# hookwrapper=True is required for a hook that yields
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    """Time each test execution"""
    start_time = time.time()
    yield  # the test runs here
    duration = time.time() - start_time
    monitor.record_time(item.nodeid, duration)
    # Warn about slow tests
    if duration > monitor.threshold:
        print(f"\nSLOW TEST: {item.name} took {duration:.2f}s")
@pytest.fixture
def benchmark():
    """Benchmarking fixture"""
    class Benchmark:
        def __init__(self):
            self.times = []

        def __call__(self, func, *args, **kwargs):
            """Benchmark a function"""
            start = time.time()
            result = func(*args, **kwargs)
            duration = time.time() - start
            self.times.append(duration)
            return result

        @property
        def avg_time(self):
            """Average execution time"""
            return sum(self.times) / len(self.times) if self.times else 0

        @property
        def min_time(self):
            """Fastest execution"""
            return min(self.times) if self.times else 0

        @property
        def max_time(self):
            """Slowest execution"""
            return max(self.times) if self.times else 0

    return Benchmark()

def pytest_sessionfinish(session, exitstatus):
    """Generate performance report"""
    print("\n" + "=" * 60)
    print("PERFORMANCE REPORT")
    print("=" * 60)
    if monitor.test_times:
        # Sort by execution time
        sorted_times = sorted(
            monitor.test_times.items(),
            key=lambda x: x[1],
            reverse=True
        )
        print("\nTest Execution Times:")
        for test, duration in sorted_times[:10]:  # Top 10
            flag = "SLOW" if duration > monitor.threshold else "ok"
            print(f"[{flag}] {test}: {duration:.3f}s")
    if monitor.slow_tests:
        print(f"\nFound {len(monitor.slow_tests)} slow tests!")
        print("Consider optimizing these tests or marking them with @pytest.mark.slow")
    print("=" * 60)

# Example test using the plugin
def test_with_benchmark(benchmark):
    """Test using our benchmark fixture"""
    def slow_calculation(n):
        """Some complex calculation"""
        return sum(i ** 2 for i in range(n))

    # Benchmark the function 5 times
    for _ in range(5):
        result = benchmark(slow_calculation, 10000)

    print("\nBenchmark results:")
    print(f"  Min: {benchmark.min_time:.4f}s")
    print(f"  Avg: {benchmark.avg_time:.4f}s")
    print(f"  Max: {benchmark.max_time:.4f}s")
    assert result > 0
    assert benchmark.avg_time < 1.0  # Should be fast!
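One refinement worth knowing: time.time() can jump when the system clock is adjusted, so time.perf_counter() is the standard choice for timing code. A standalone sketch of the same Benchmark idea built on it (no pytest needed):

```python
import time

class Benchmark:
    """Minimal timing helper on the monotonic perf_counter clock."""
    def __init__(self):
        self.times = []

    def __call__(self, func, *args, **kwargs):
        """Time one call of func and remember the duration."""
        start = time.perf_counter()
        result = func(*args, **kwargs)
        self.times.append(time.perf_counter() - start)
        return result

    @property
    def avg_time(self):
        """Average of all recorded durations (0 if none yet)."""
        return sum(self.times) / len(self.times) if self.times else 0

bench = Benchmark()
for _ in range(5):
    total = bench(sum, range(10000))
print(f"avg over {len(bench.times)} runs: {bench.avg_time:.6f}s")
```

Swapping perf_counter into the fixture above is a one-line change per call site and makes the measurements monotonic.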
Great job! You've created a powerful performance monitoring plugin!
Key Takeaways
You've mastered pytest plugins! Here's what you learned:
- Plugin Power: plugins extend pytest with custom features
- Easy Installation: most plugins work with just pip install
- Hook System: pytest's hooks let you customize every aspect
- Custom Fixtures: share reusable test utilities
- Enhanced Reports: make test output beautiful and informative
- Best Practices: structure, test, and distribute your plugins
Remember:
- Start simple, then add complexity
- Test your plugins thoroughly
- Share useful plugins with the community
- Use existing plugins when possible
Next Steps
Congratulations, plugin architect! You're now equipped to:
- Use popular pytest plugins effectively
- Create custom plugins for your needs
- Share your testing tools with others
Your testing journey continues with:
- Next Tutorial: Coverage reports and metrics
- Practice: Create a plugin for your project
- Explore: Check out pytest's plugin directory
- Share: Publish your awesome plugins!
Keep building amazing testing tools! Your future self (and your team) will thank you!
Happy plugin development!