Prerequisites
- Basic understanding of programming concepts 📝
- Python installation (3.8+) 🐍
- VS Code or preferred IDE 💻
What you'll learn
- Understand the concept fundamentals 🎯
- Apply the concept in real projects 🏗️
- Debug common issues 🐛
- Write clean, Pythonic code ✨
🎯 Introduction
Welcome to the exciting world of continuous testing and test automation in Python! 🎉 In this guide, we’ll explore how to build robust automated testing pipelines that catch bugs before they reach production.
You’ll discover how continuous testing can transform your development workflow. Whether you’re building web applications 🌐, APIs 🔌, or data pipelines 📊, automated testing is your safety net that lets you deploy with confidence.
By the end of this tutorial, you’ll be automating tests like a pro and sleeping better at night knowing your code is well-tested! Let’s dive in! 🏊‍♂️
📚 Understanding Continuous Testing
🤔 What is Continuous Testing?
Continuous testing is like having a diligent robot assistant 🤖 that checks your code every time you make changes. Think of it as a quality control inspector at a factory assembly line, catching defects before products ship to customers.
In Python terms, continuous testing means automatically running your test suite whenever code changes occur. As a result, you can:
- ✨ Catch bugs within minutes of writing them
- 🚀 Deploy with confidence knowing tests pass
- 🛡️ Maintain code quality across team members
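To make the "run on every change" idea concrete, here is a minimal sketch of the polling loop a watcher tool performs under the hood. The `snapshot_mtimes`, `changed_files`, and `watch_and_test` names are purely illustrative, not from any library:

```python
# 🤖 Minimal sketch of a change-detection loop (illustrative names, stdlib only)
import subprocess
import time
from pathlib import Path

def snapshot_mtimes(root="."):
    """Record the last-modified time of every .py file under root."""
    return {p: p.stat().st_mtime for p in Path(root).rglob("*.py")}

def changed_files(before, after):
    """Return files that are new or whose mtime moved since the snapshot."""
    return [p for p, mtime in after.items() if before.get(p) != mtime]

def watch_and_test(root=".", interval=1.0):
    """Poll for changes and re-run the test suite when any .py file changes."""
    before = snapshot_mtimes(root)
    while True:
        time.sleep(interval)
        after = snapshot_mtimes(root)
        if changed_files(before, after):
            subprocess.run(["pytest", "-q"])  # 🧪 re-run the suite
        before = after
```

In practice you’d reach for a ready-made tool such as pytest-watch rather than rolling your own loop, but the principle is exactly this.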
💡 Why Use Test Automation?
Here’s why developers love automated testing:
- Early Bug Detection 🐛: Find issues before they multiply
- Faster Development ⚡: No manual testing for every change
- Documentation 📖: Tests show how code should work
- Refactoring Safety 🔧: Change code without fear
Real-world example: Imagine building an e-commerce API 🛒. With continuous testing, every endpoint is automatically tested whenever you push code, ensuring customers never see broken checkout flows!
🔧 Basic Syntax and Usage
📝 Simple Test Example with pytest
Let’s start with a friendly example:
```python
# 👋 Hello, automated testing!
import pytest

# 🎨 The function we want to test
def calculate_discount(price, discount_percent):
    """Calculate discounted price"""  # 💰
    if discount_percent < 0 or discount_percent > 100:
        raise ValueError("Discount must be between 0 and 100")
    return price * (1 - discount_percent / 100)

# 🧪 Our test function
def test_calculate_discount():
    # ✅ Test normal discount
    assert calculate_discount(100, 20) == 80
    # 🎯 Test edge cases
    assert calculate_discount(100, 0) == 100
    assert calculate_discount(100, 100) == 0
    # 💥 Test error handling
    with pytest.raises(ValueError):
        calculate_discount(100, 150)
```
💡 Explanation: Notice how we test both happy paths and edge cases! The `pytest.raises` context manager checks that our function properly handles invalid inputs.
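If you’re curious what `pytest.raises` is doing for you, here is a tiny stand-in built from the standard library alone. The `assert_raises` name is ours, purely for illustration; the real pytest implementation is richer (it captures the exception, supports `match`, and more):

```python
# 🔬 A tiny, illustrative stand-in for pytest.raises (stdlib only)
from contextlib import contextmanager

@contextmanager
def assert_raises(exc_type):
    """Fail unless the wrapped block raises exc_type."""
    try:
        yield
    except exc_type:
        return  # ✅ expected exception - swallow it
    raise AssertionError(f"{exc_type.__name__} was not raised")

def calculate_discount(price, discount_percent):
    if discount_percent < 0 or discount_percent > 100:
        raise ValueError("Discount must be between 0 and 100")
    return price * (1 - discount_percent / 100)

# Works just like the pytest version:
with assert_raises(ValueError):
    calculate_discount(100, 150)
```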
🎯 Setting Up Continuous Testing
Here’s how to automate your tests:
```ini
# 🏗️ pytest.ini configuration
[pytest]
# 🚀 Auto-discover test files
testpaths = tests
python_files = test_*.py *_test.py
python_classes = Test*
python_functions = test_*
# 🎨 Coverage settings (requires pytest-cov)
addopts =
    --cov=src
    --cov-report=html
    --cov-report=term-missing
    -v
```

🔄 To re-run the suite automatically whenever a file changes, install pytest-watch with `pip install pytest-watch` and run `ptw -- --cov=src`.
📊 A typical run looks like this:

```text
🧪 Running tests...

tests/test_calculator.py::test_calculate_discount ✅ PASSED
tests/test_calculator.py::test_bulk_discount ✅ PASSED

---------- coverage report ----------
Name                Stmts   Miss  Cover
---------------------------------------
src/calculator.py      15      0   100% ✨
---------------------------------------
TOTAL                  15      0   100%
```
💡 Practical Examples
🛒 Example 1: E-commerce API Testing
Let’s build automated tests for a shopping cart API:
```python
# 🛍️ Shopping cart with automated tests
import pytest
from datetime import datetime

class ShoppingCart:
    def __init__(self):
        self.items = []  # 📦
        self.discount_code = None  # 🎟️

    def add_item(self, product_id, name, price, quantity=1):
        """Add item to cart"""  # ➕
        if price < 0:
            raise ValueError("Price cannot be negative! 💰")
        if quantity < 1:
            raise ValueError("Quantity must be at least 1! 📦")
        self.items.append({
            'id': product_id,
            'name': name,
            'price': price,
            'quantity': quantity,
            'added_at': datetime.now()
        })
        return f"Added {name} to cart! 🛒"

    def apply_discount(self, code, percentage):
        """Apply discount code"""  # 🎉
        if self.discount_code:
            raise ValueError("Discount already applied! 🚫")
        if percentage < 0 or percentage > 50:
            raise ValueError("Invalid discount percentage! ⚠️")
        self.discount_code = {'code': code, 'percentage': percentage}
        return f"Discount {code} applied! 🎊"

    def calculate_total(self):
        """Calculate cart total"""  # 💰
        subtotal = sum(item['price'] * item['quantity'] for item in self.items)
        if self.discount_code:
            discount = subtotal * (self.discount_code['percentage'] / 100)
            return subtotal - discount
        return subtotal

# 🧪 Comprehensive test suite
class TestShoppingCart:
    @pytest.fixture
    def cart(self):
        """Create a fresh cart for each test"""  # 🆕
        return ShoppingCart()

    @pytest.fixture
    def sample_products(self):
        """Sample product data"""  # 📊
        return [
            {'id': '1', 'name': 'Python Book', 'price': 39.99, 'emoji': '📘'},
            {'id': '2', 'name': 'Coffee Mug', 'price': 12.99, 'emoji': '☕'},
            {'id': '3', 'name': 'Keyboard', 'price': 89.99, 'emoji': '⌨️'}
        ]

    def test_add_single_item(self, cart, sample_products):
        """Test adding one item"""  # ✅
        product = sample_products[0]
        result = cart.add_item(product['id'], product['name'], product['price'])
        assert len(cart.items) == 1
        assert cart.items[0]['name'] == 'Python Book'
        assert "Added Python Book to cart! 🛒" in result

    def test_add_multiple_items(self, cart, sample_products):
        """Test adding multiple items"""  # 🛍️
        for product in sample_products:
            cart.add_item(product['id'], product['name'], product['price'])
        assert len(cart.items) == 3
        # 💡 approx, because float addition isn't exact
        assert cart.calculate_total() == pytest.approx(142.97)

    def test_quantity_handling(self, cart):
        """Test quantity validation"""  # 📦
        cart.add_item('1', 'Test Item', 10.00, quantity=5)
        assert cart.items[0]['quantity'] == 5
        assert cart.calculate_total() == 50.00
        # ❌ Test invalid quantity
        with pytest.raises(ValueError, match="Quantity must be at least 1"):
            cart.add_item('2', 'Another Item', 10.00, quantity=0)

    def test_discount_application(self, cart, sample_products):
        """Test discount codes"""  # 🎟️
        # Add items first
        for product in sample_products:
            cart.add_item(product['id'], product['name'], product['price'])
        # Apply discount
        result = cart.apply_discount('SAVE20', 20)
        assert "Discount SAVE20 applied! 🎊" in result
        assert cart.calculate_total() == pytest.approx(114.38, 0.01)
        # ❌ Test duplicate discount
        with pytest.raises(ValueError, match="Discount already applied"):
            cart.apply_discount('ANOTHER', 10)

    @pytest.mark.parametrize("price,quantity,expected", [
        (10.00, 1, 10.00),
        (10.00, 5, 50.00),
        (99.99, 2, 199.98),
        (0.01, 100, 1.00),
    ])
    def test_price_calculations(self, cart, price, quantity, expected):
        """Test various price calculations"""  # 💰
        cart.add_item('test', 'Test Item', price, quantity)
        assert cart.calculate_total() == pytest.approx(expected, 0.01)
```
🎯 Try it yourself: Add tests for removing items and tax calculations!
🎮 Example 2: Game Score Testing with Mocking
Let’s test a game with external API calls:
```python
# 🏆 Game leaderboard with mocked API calls
import pytest
from datetime import datetime
from unittest.mock import Mock, patch
import requests

class GameLeaderboard:
    def __init__(self, api_url):
        self.api_url = api_url  # 🌐
        self.local_scores = []  # 📊

    def submit_score(self, player_name, score, level):
        """Submit score to leaderboard"""  # 🎯
        if score < 0:
            raise ValueError("Score cannot be negative! 😱")
        score_data = {
            'player': player_name,
            'score': score,
            'level': level,
            'timestamp': datetime.now().isoformat()
        }
        # 🚀 Submit to API
        try:
            response = requests.post(
                f"{self.api_url}/scores",
                json=score_data
            )
            response.raise_for_status()
            # 💾 Store locally too
            self.local_scores.append(score_data)
            return f"Score submitted! {player_name} scored {score} points! 🎉"
        except requests.RequestException as e:
            return f"Failed to submit score: {e} 😢"

    def get_top_scores(self, limit=10):
        """Get top scores from API"""  # 🏆
        try:
            response = requests.get(
                f"{self.api_url}/scores/top",
                params={'limit': limit}
            )
            response.raise_for_status()
            return response.json()
        except requests.RequestException:
            # 🔄 Fall back to local scores
            return sorted(
                self.local_scores,
                key=lambda x: x['score'],
                reverse=True
            )[:limit]

# 🧪 Test suite with mocking
class TestGameLeaderboard:
    @pytest.fixture
    def leaderboard(self):
        """Create leaderboard instance"""  # 🎮
        return GameLeaderboard("https://api.example.com")

    @patch('requests.post')
    def test_submit_score_success(self, mock_post, leaderboard):
        """Test successful score submission"""  # ✅
        # 🎭 Configure mock response
        mock_response = Mock()
        mock_response.status_code = 201
        mock_response.raise_for_status = Mock()
        mock_post.return_value = mock_response
        result = leaderboard.submit_score("Alice", 1000, 5)
        # 🔍 Verify the API was called correctly
        mock_post.assert_called_once()
        call_args = mock_post.call_args
        assert call_args[0][0] == "https://api.example.com/scores"
        assert call_args[1]['json']['player'] == "Alice"
        assert call_args[1]['json']['score'] == 1000
        # ✨ Check local storage
        assert len(leaderboard.local_scores) == 1
        assert "Score submitted!" in result

    @patch('requests.post')
    def test_submit_score_api_failure(self, mock_post, leaderboard):
        """Test API failure handling"""  # ❌
        # 💥 Simulate API error
        mock_post.side_effect = requests.ConnectionError("API is down!")
        result = leaderboard.submit_score("Bob", 500, 3)
        assert "Failed to submit score" in result
        assert "😢" in result
        # 📦 Local storage only happens on success, so nothing is stored
        assert len(leaderboard.local_scores) == 0

    @patch('requests.get')
    def test_get_top_scores_with_fallback(self, mock_get, leaderboard):
        """Test fallback to local scores"""  # 🔄
        # Add some local scores
        leaderboard.local_scores = [
            {'player': 'Alice', 'score': 1000},
            {'player': 'Bob', 'score': 500},
            {'player': 'Charlie', 'score': 1500}
        ]
        # 💥 Simulate API failure
        mock_get.side_effect = requests.Timeout()
        top_scores = leaderboard.get_top_scores(limit=2)
        # 🏆 Should return local scores, sorted
        assert len(top_scores) == 2
        assert top_scores[0]['player'] == 'Charlie'
        assert top_scores[0]['score'] == 1500
```
🚀 Advanced Concepts
🧙‍♂️ Continuous Integration with GitHub Actions
When you’re ready to level up, automate testing in CI/CD:
```yaml
# 🎯 .github/workflows/python-tests.yml
name: Python Tests 🧪

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.8', '3.9', '3.10', '3.11']

    steps:
      - name: Checkout code 📥
        uses: actions/checkout@v3

      - name: Set up Python ${{ matrix.python-version }} 🐍
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install dependencies 📦
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r requirements-dev.txt

      - name: Run tests with coverage 🧪
        run: |
          pytest --cov=src --cov-report=xml --cov-report=html

      - name: Upload coverage reports 📊
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml
          flags: unittests
          name: codecov-umbrella
```
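CI pipelines often gate merges on a minimum coverage figure. The `coverage.xml` produced above is Cobertura-format XML whose root element carries a `line-rate` attribute, so a hedged sketch of such a gate could look like this (the `check_coverage` name is ours, purely illustrative):

```python
# 📊 Sketch of a coverage gate for CI (check_coverage is an illustrative name)
import xml.etree.ElementTree as ET

def check_coverage(xml_text, minimum=0.90):
    """Parse Cobertura-style coverage XML and compare line-rate to a threshold."""
    root = ET.fromstring(xml_text)
    rate = float(root.get("line-rate", 0))
    return rate >= minimum, rate

# Example with a synthetic report:
report = '<coverage line-rate="0.93" branch-rate="0.88"></coverage>'
passed, rate = check_coverage(report, minimum=0.90)
print(f"coverage {rate:.0%} -> {'PASS ✅' if passed else 'FAIL ❌'}")
```

A script like this can run as an extra CI step and exit non-zero when coverage slips below the bar.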
🏗️ Advanced Testing Patterns
For the brave developers, here’s test-driven development:
```python
# 🚀 TDD approach - write tests first!
import pytest
from datetime import datetime, timedelta

# 🎯 Tests for a feature that doesn't exist yet
class TestUserSubscription:
    def test_subscription_lifecycle(self):
        """Test complete subscription flow"""  # 🔄
        # 1️⃣ Create user
        user = User("[email protected]")
        assert user.is_subscribed() is False
        # 2️⃣ Subscribe user
        subscription = user.subscribe(plan="premium", months=1)
        assert user.is_subscribed() is True
        assert subscription.plan == "premium"
        assert subscription.expires_at > datetime.now()
        # 3️⃣ Check features
        assert user.can_access_feature("advanced_analytics") is True
        assert user.can_access_feature("priority_support") is True
        # 4️⃣ Test expiration
        subscription.expires_at = datetime.now() - timedelta(days=1)
        assert user.is_subscribed() is False
        assert user.can_access_feature("advanced_analytics") is False

# 🏗️ Now implement the feature to make the tests pass!
class User:
    def __init__(self, email):
        self.email = email  # 📧
        self.subscription = None  # 🎟️

    def is_subscribed(self):
        """Check if user has an active subscription"""  # ✅
        if not self.subscription:
            return False
        return self.subscription.expires_at > datetime.now()

    def subscribe(self, plan, months):
        """Create subscription"""  # 🎉
        self.subscription = Subscription(
            plan=plan,
            expires_at=datetime.now() + timedelta(days=30 * months)
        )
        return self.subscription

    def can_access_feature(self, feature):
        """Check feature access"""  # 🔒
        if not self.is_subscribed():
            return False
        premium_features = ["advanced_analytics", "priority_support"]
        return feature in premium_features

class Subscription:
    def __init__(self, plan, expires_at):
        self.plan = plan  # 💎
        self.expires_at = expires_at  # ⏰
```
⚠️ Common Pitfalls and Solutions
😱 Pitfall 1: Testing Implementation, Not Behavior
```python
# ❌ Wrong way - testing internal implementation
def test_bad_approach():
    cart = ShoppingCart()
    cart.add_item("1", "Book", 10.00)
    # 😰 Don't test private attributes!
    assert cart._internal_counter == 1
    assert cart._items_dict.get("1") is not None

# ✅ Correct way - test behavior
def test_good_approach():
    cart = ShoppingCart()
    cart.add_item("1", "Book", 10.00)
    # 🎯 Test what users care about!
    assert cart.get_item_count() == 1
    assert cart.calculate_total() == 10.00
```
🤯 Pitfall 2: Forgetting to Mock External Services
```python
# ❌ Dangerous - real API calls in tests!
def test_weather_service_bad():
    service = WeatherService()
    # 💥 This hits the real API and may fail!
    weather = service.get_current_weather("London")
    assert weather['temperature'] > -50

# ✅ Safe - mock external calls
@patch('requests.get')
def test_weather_service_good(mock_get):
    # 🎭 Control the response
    mock_get.return_value.json.return_value = {
        'temperature': 20,
        'condition': 'sunny'
    }
    service = WeatherService()
    weather = service.get_current_weather("London")
    # ✅ Predictable and fast!
    assert weather['temperature'] == 20
    assert weather['condition'] == 'sunny'
```
🛠️ Best Practices
- 🎯 Test One Thing: Each test should verify one behavior
- 📝 Clear Test Names: `test_user_can_login_with_valid_credentials` beats `test1`
- 🛡️ Independent Tests: Tests shouldn’t depend on each other
- 🎨 Arrange-Act-Assert: Structure tests clearly
- ✨ Fast Tests: Mock external dependencies
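The Arrange-Act-Assert pattern from the list above, sketched on the discount function from earlier (plain asserts, so it runs anywhere):

```python
# 🎨 Arrange-Act-Assert: each test reads as three clear phases
def calculate_discount(price, discount_percent):
    if discount_percent < 0 or discount_percent > 100:
        raise ValueError("Discount must be between 0 and 100")
    return price * (1 - discount_percent / 100)

def test_discount_applies_to_price():
    # 🏗️ Arrange: set up the data the test needs
    price, discount = 200, 25
    # 🎬 Act: perform exactly one behavior
    total = calculate_discount(price, discount)
    # ✅ Assert: verify the outcome users care about
    assert total == 150
```

Keeping the three phases visually separated makes a failing test much easier to diagnose.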
🧪 Hands-On Exercise
🎯 Challenge: Build a Test Suite for a Task Manager
Create comprehensive tests for a task management system:
📋 Requirements:
- ✅ Tasks with title, description, status, and priority
- 🏷️ Labels for categorization
- 👤 User assignment
- 📅 Due dates with reminders
- 🔔 Notification system
🚀 Bonus Points:
- Add integration tests
- Implement test fixtures
- Create performance tests
💡 Solution
```python
# 🎯 Comprehensive task manager test suite!
import pytest
from datetime import datetime, timedelta
from unittest.mock import Mock, patch

class Task:
    def __init__(self, title, description=""):
        self.id = None  # 🆔
        self.title = title  # 📝
        self.description = description  # 📄
        self.status = "todo"  # 📊
        self.priority = "medium"  # 🎯
        self.labels = []  # 🏷️
        self.assignee = None  # 👤
        self.due_date = None  # 📅
        self.created_at = datetime.now()  # ⏰

    def set_priority(self, priority):
        """Set task priority"""  # 🎯
        valid_priorities = ["low", "medium", "high", "urgent"]
        if priority not in valid_priorities:
            raise ValueError(f"Invalid priority: {priority}")
        self.priority = priority

    def is_overdue(self):
        """Check if task is overdue"""  # ⏰
        if not self.due_date:
            return False
        return datetime.now() > self.due_date and self.status != "done"

class TaskManager:
    def __init__(self, notification_service=None):
        self.tasks = []  # 📋
        self.next_id = 1  # 🔢
        self.notification_service = notification_service  # 🔔

    def create_task(self, title, description="", **kwargs):
        """Create new task"""  # ✨
        task = Task(title, description)
        task.id = self.next_id
        self.next_id += 1
        # Set optional attributes
        for key, value in kwargs.items():
            if hasattr(task, key):
                setattr(task, key, value)
        self.tasks.append(task)
        # 📧 Notify if assigned
        if task.assignee and self.notification_service:
            self.notification_service.send(
                task.assignee,
                f"New task assigned: {title} 📋"
            )
        return task

    def get_tasks_by_status(self, status):
        """Filter tasks by status"""  # 🔍
        return [t for t in self.tasks if t.status == status]

    def get_overdue_tasks(self):
        """Get all overdue tasks"""  # ⚠️
        return [t for t in self.tasks if t.is_overdue()]

# 🧪 Comprehensive test suite
class TestTaskManager:
    @pytest.fixture
    def task_manager(self):
        """Create task manager instance"""  # 🏗️
        return TaskManager()

    @pytest.fixture
    def mock_notification_service(self):
        """Create mock notification service"""  # 🎭
        return Mock()

    def test_create_simple_task(self, task_manager):
        """Test basic task creation"""  # ✅
        task = task_manager.create_task(
            "Write tests",
            "Write comprehensive test suite"
        )
        assert task.id == 1
        assert task.title == "Write tests"
        assert task.status == "todo"
        assert task.priority == "medium"

    def test_create_task_with_attributes(self, task_manager):
        """Test task with all attributes"""  # 🎨
        due_date = datetime.now() + timedelta(days=7)
        task = task_manager.create_task(
            "Complete project",
            priority="high",
            assignee="[email protected]",
            due_date=due_date,
            labels=["urgent", "client"]
        )
        assert task.priority == "high"
        assert task.assignee == "[email protected]"
        assert task.due_date == due_date
        assert "urgent" in task.labels

    def test_task_notification(self, mock_notification_service):
        """Test notification on assignment"""  # 🔔
        manager = TaskManager(mock_notification_service)
        task = manager.create_task(
            "Review code",
            assignee="[email protected]"
        )
        mock_notification_service.send.assert_called_once_with(
            "[email protected]",
            "New task assigned: Review code 📋"
        )

    def test_priority_validation(self, task_manager):
        """Test priority validation"""  # 🚫
        task = task_manager.create_task("Test task")
        # ✅ Valid priority
        task.set_priority("urgent")
        assert task.priority == "urgent"
        # ❌ Invalid priority
        with pytest.raises(ValueError, match="Invalid priority"):
            task.set_priority("super-urgent")

    def test_overdue_detection(self, task_manager):
        """Test overdue task detection"""  # ⏰
        # Past due date
        past_date = datetime.now() - timedelta(days=1)
        overdue_task = task_manager.create_task(
            "Overdue task",
            due_date=past_date
        )
        # Future due date
        future_date = datetime.now() + timedelta(days=1)
        future_task = task_manager.create_task(
            "Future task",
            due_date=future_date
        )
        # Completed task with a past due date
        done_task = task_manager.create_task(
            "Done task",
            due_date=past_date
        )
        done_task.status = "done"
        overdue = task_manager.get_overdue_tasks()
        assert len(overdue) == 1
        assert overdue[0].title == "Overdue task"

    @pytest.mark.parametrize("status,expected_count", [
        ("todo", 2),
        ("in_progress", 1),
        ("done", 1),
        ("cancelled", 0),
    ])
    def test_filter_by_status(self, task_manager, status, expected_count):
        """Test status filtering"""  # 🔍
        # Create tasks with different statuses
        task_manager.create_task("Task 1").status = "todo"
        task_manager.create_task("Task 2").status = "todo"
        task_manager.create_task("Task 3").status = "in_progress"
        task_manager.create_task("Task 4").status = "done"
        filtered = task_manager.get_tasks_by_status(status)
        assert len(filtered) == expected_count

    def test_performance_many_tasks(self, task_manager):
        """Test performance with many tasks"""  # 🚀
        import time
        # Create 1000 tasks
        start_time = time.time()
        for i in range(1000):
            task_manager.create_task(f"Task {i}")
        creation_time = time.time() - start_time
        # Searching should still be fast
        start_time = time.time()
        todo_tasks = task_manager.get_tasks_by_status("todo")
        search_time = time.time() - start_time
        assert len(todo_tasks) == 1000
        assert creation_time < 1.0  # 1000 tasks created in under a second
        assert search_time < 0.1  # search completes in under 0.1 seconds
```
🎓 Key Takeaways
You’ve learned so much! Here’s what you can now do:
- ✅ Set up continuous testing with pytest and pytest-watch 💪
- ✅ Write comprehensive test suites that catch bugs early 🛡️
- ✅ Mock external dependencies for reliable tests 🎯
- ✅ Integrate tests into CI/CD pipelines 🐛
- ✅ Apply TDD principles to write better code! 🚀
Remember: Tests are your safety net - write them before you need them! 🤝
🤝 Next Steps
Congratulations! 🎉 You’ve mastered continuous testing and test automation!
Here’s what to do next:
- 💻 Set up pytest-watch in your current project
- 🏗️ Write tests for your most critical features
- 📚 Move on to our next tutorial on Integration Testing
- 🌟 Share your testing success stories with others!
Remember: Every bug caught by tests is a bug that never reaches your users. Keep testing, keep shipping, and most importantly, have fun! 🚀
Happy testing! 🎉🚀✨