6.3 Automated Integration Testing
Introduction
After unit tests, the next level up the testing pyramid is integration tests. These tests ensure that discrete components of the system interact with one another correctly.
In testing, mocks and stubs are techniques for isolating and controlling the behavior of components or dependencies while test cases run, creating a predictable and controlled environment. Stub objects and mock objects are also known as “test doubles” because they stand in for the real implementation for testing purposes.
Integration Testing with Stubs
A stub is a simple object or function that stands in for a real implementation. It provides predetermined responses to method calls and is used to simulate the behavior of a real component.
Example in Python
Suppose you have a class that interacts with an external API, and you want to test the class without actually making API calls. You can use a stub to simulate the API calls and return predefined responses.
test_stub_example.py
# An interface for returning data from a client
class ExternalAPIClientInterface:
    def get_data(self):
        """ Returns data to the client. """
        pass

# A concrete implementation of the ExternalAPIClientInterface which returns canned responses
class ExternalAPIClientStub(ExternalAPIClientInterface):
    def get_data(self):
        # The actual implementation makes an API call;
        # for testing, we'll use a stub that returns a canned response
        return "Stubbed API response"

# Test code using the stub
def test_external_api_interaction():
    external_api_stub = ExternalAPIClientStub()
    # Now, when the test code calls get_data, it gets the stubbed response
    assert external_api_stub.get_data() == "Stubbed API response"
In the example above, an interface called ExternalAPIClientInterface is created. This interface has a concrete implementation called ExternalAPIClientStub, which returns a canned response to the call to get_data. The real implementation (not shown here) might set up other data, connect to a database, and so on. The stub stands in for that real implementation for testing purposes.
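Because the code under test is written against the interface, it can accept any implementation of it. The sketch below illustrates this idea with a hypothetical ReportGenerator class (not part of the example above) that receives the client through its constructor, so the test can inject the stub in place of the real API client.
# A minimal sketch of injecting the stub into code under test.
# ReportGenerator is a hypothetical consumer class used only for illustration.
class ReportGenerator:
    def __init__(self, api_client: ExternalAPIClientInterface):
        # The client is injected, so tests can pass in a stub instead of the real thing
        self.api_client = api_client

    def build_report(self):
        # Build a report string from whatever data the client returns
        return f"Report: {self.api_client.get_data()}"

def test_report_generator_uses_client_data():
    # Inject the stub in place of the real API client
    generator = ReportGenerator(ExternalAPIClientStub())
    assert generator.build_report() == "Report: Stubbed API response"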
Integration Testing with Mock Objects
Similar to a stub object, a mock is an object or function that mimics the behavior of a real component and allows you to set expectations on how it should be called. Mocks are used to verify interactions with dependencies and make assertions about how they were used.
Example in Python:
Consider a method that sends a request to a URL and returns the JSON response. The method uses the requests library to do the work, so a mock allows us to exercise it without actually calling out to the URL.
from unittest.mock import Mock, patch
import requests

class MyAPIClient:
    def make_request(self, url):
        """ Uses the requests library to get information from a URL and returns the JSON response """
        response = requests.get(url)
        return response.json()

def test_my_api_client():
    # Create a mock for the requests module
    requests_mock = Mock()

    # Patch the requests.get method with the mock.
    # This way, whenever get is called, the mock version of get is called instead
    with patch('requests.get', side_effect=requests_mock.get):
        # Create an instance of MyAPIClient
        api_client = MyAPIClient()

        # Define the behavior of the mock for a specific URL
        url = 'https://api.example.com/data'
        expected_response = {'key': 'value'}
        requests_mock.get.return_value.json.return_value = expected_response

        # Make a request using the API client
        result = api_client.make_request(url)

        # Verify that requests.get was called with the correct URL
        requests_mock.get.assert_called_once_with(url)

        # Verify that the API client processed the response correctly
        assert result == expected_response
In this example, the MyAPIClient class has a method make_request that uses requests.get to make an HTTP request. In the test, we create a mock for the requests module and patch the requests.get method with the mock using patch. We then define the behavior of the mock for a specific URL, and when the make_request method is called, the mock is used instead of making a real HTTP request.
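Routing the patch through a separate Mock object via side_effect works, but unittest.mock's patch can also hand the replacement mock to the test directly. The variation below is a sketch of the same test written that way; it assumes the MyAPIClient class defined above.
from unittest.mock import patch

def test_my_api_client_direct_patch():
    url = 'https://api.example.com/data'
    expected_response = {'key': 'value'}

    # patch() replaces requests.get with a MagicMock and yields it as mock_get
    with patch('requests.get') as mock_get:
        # Whatever the patched get returns, its .json() yields the canned payload
        mock_get.return_value.json.return_value = expected_response

        result = MyAPIClient().make_request(url)

        # Verify the URL and the processed response, as before
        mock_get.assert_called_once_with(url)
        assert result == expected_response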
Integration Testing with Drivers
Here’s an example of an integration test using a driver. Let’s say that we have a simple Flask server that replies with a JSON greeting whenever someone hits the /api/greet endpoint. The expected response is {"message": "Hello, Guest!"}.
# backend.py
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api/greet')
def greet():
    greeting = "Hello, Guest!"
    return jsonify({'message': greeting})
In addition to the backend server, we have a front-end client application that requests the greeting for us.
# frontend.py
import requests

def send_greeting_request():
    # Ask the backend for a greeting and return the parsed JSON response
    url = 'http://localhost:5000/api/greet'
    response = requests.get(url)
    return response.json()
To test whether the server responds correctly, we can use a driver in our integration test. The test imports both the server and the client, because it starts the server, calls it through the client, and then evaluates the response.
# test_integration_greeting.py
import unittest
import threading
import time

from backend import app
from frontend import send_greeting_request

class IntegrationTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Start the Flask app in a separate thread for testing
        cls.server_thread = threading.Thread(target=app.run, kwargs={'port': 5000})
        cls.server_thread.daemon = True
        cls.server_thread.start()
        # Allow time for the server to start
        time.sleep(3)

    @classmethod
    def tearDownClass(cls):
        # The daemon thread exits with the test process; just give it a moment
        cls.server_thread.join(1)

    def test_greeting_integration(self):
        # Call the frontend function, which sends a request to the running backend
        result = send_greeting_request()
        # Verify the result from the frontend function
        self.assertEqual(result, {'message': 'Hello, Guest!'})
Notice that in this version, the IntegrationTest class uses class methods (setUpClass and tearDownClass) to start the backend Flask server before the tests run and to clean up afterward. This allows us to perform a real integration test without mocking or stubbing the HTTP requests.
Running an actual server during tests has trade-offs, such as potential port conflicts and longer execution times. Depending on your testing needs, you might choose between using mocks/stubs and running a real server.
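If the overhead of a real server is a concern, one lighter-weight option is Flask's built-in test client, which exercises the real route handlers in-process without binding a port. The sketch below assumes the backend.py module shown earlier; note that it bypasses frontend.py and real HTTP networking, so it sits somewhere between a mocked test and a full end-to-end test.
# test_greeting_in_process.py (a sketch, assuming backend.py from above)
from backend import app

def test_greet_endpoint_in_process():
    # Flask's test client calls the real /api/greet handler without starting a server
    client = app.test_client()
    response = client.get('/api/greet')
    assert response.status_code == 200
    assert response.get_json() == {'message': 'Hello, Guest!'}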
Test Data Management for Integration Tests
Managing test data for integration tests is an important part of ensuring the reliability and repeatability of your tests. Here are some best practices for managing test data in integration testing:
- Use a Separate Test Database: Create a dedicated database for running integration tests. This ensures that test data doesn't interfere with production or development data.
- Automate Data Setup and Teardown: Automate the setup and teardown of test data. Provide scripts or tools that can initialize the test database with the required data before tests and clean up after tests (see the sketch after this list).
- Leverage Database Migrations: Use database migration tools to manage schema changes and versioning. This helps keep the test database schema in sync with your application.
- Use Fixture Data: Create fixture data that represents a known state for your application. This can include predefined users, products, or any other entities relevant to your tests.
- Randomize Data for Variety: When appropriate, randomize data to create diverse test scenarios. This helps uncover potential issues that may not be evident with fixed, predictable data.
- Isolate Test Data: Ensure that each test case is isolated and doesn't rely on the state left by previous tests. This helps avoid dependencies between tests and ensures tests are independent and repeatable.
- Consider Data Privacy and Security: Be mindful of sensitive data. If your application handles sensitive information, ensure that test data doesn't compromise privacy or security. Anonymize or obfuscate data when necessary.
- Include Negative Test Scenarios: Design test data to cover negative scenarios, such as invalid inputs or edge cases. This helps ensure your application behaves correctly in unexpected situations.
- Regularly Refresh Test Data: Periodically refresh test data to avoid data staleness. This is especially important for long-running test suites or tests that involve data changes over time.
- Optimize Data Setup for Performance: Optimize data setup processes for performance, especially if the test suite involves a large amount of data. Consider using database snapshots or efficient data seeding strategies.