6.4 Automated End-To-End Testing of User Interfaces
Introduction
End-to-end tests sit at the top of our test pyramid. They are the hardest to automate and may take the longest to execute. They also tend to be the most brittle, meaning they are subject to frequent change and invalidation as the software evolves. Despite these difficulties, an investment in automating end-to-end tests pays big dividends over the long term.
Smoke Testing and Sanity Testing
Smoke testing and sanity testing are two distinct levels of software testing, each serving its own purpose within the end-to-end testing process. Here's how they are related:
Smoke Testing
- Smoke testing is performed to check if the critical functionalities of the software are working properly after a new build or major changes. It aims to identify major issues that could prevent further testing.
- Tests are broad but shallow. It covers the essential functionalities of the entire application.
- Tests are executed early in the testing process, often after a new build is deployed.
- Smoke testing can be considered a subset of end-to-end testing. It focuses on the most critical paths and components to ensure that the application is stable enough for more comprehensive testing, which may include end-to-end tests.
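The broad-but-shallow character of a smoke test can be sketched as a quick availability pass over an application's critical paths. This is only an illustration: the endpoint list and the `fetch` callable below are hypothetical, not part of any specific framework.

```python
# Sketch: a broad-but-shallow smoke check over an app's critical endpoints.
# CRITICAL_ENDPOINTS and the fetch callable are illustrative assumptions.
CRITICAL_ENDPOINTS = ["/", "/login", "/search", "/checkout"]

def run_smoke_check(fetch, endpoints=CRITICAL_ENDPOINTS):
    """Return the endpoints that failed a basic availability check.

    `fetch` maps a path to an HTTP-style status code; in a real run it
    would wrap something like requests.get(...).status_code.
    """
    failures = []
    for path in endpoints:
        status = fetch(path)
        if status != 200:
            failures.append(path)
    return failures

# Usage with a stub that simulates HTTP responses:
responses = {"/": 200, "/login": 200, "/search": 500, "/checkout": 200}
print(run_smoke_check(responses.get))  # → ['/search']
```

If any critical path fails, the build is flagged before deeper testing begins; the check deliberately says nothing about whether each feature behaves correctly, only that it is reachable.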
Sanity Testing
- Sanity testing is performed to validate specific functionalities or components of the application after changes or bug fixes. It ensures that recent modifications didn't introduce new issues and that targeted areas are still functional.
- Such testing is narrow and focused on specific functionalities or areas impacted by recent changes.
- This testing is typically performed after more comprehensive testing or specific changes.
- Similar to smoke testing, sanity testing can be considered a precursor to end-to-end testing. It checks specific functionalities to ensure they are working before executing broader end-to-end tests.
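By contrast, a sanity test is narrow. As a sketch, suppose a hypothetical `apply_discount` function has just received a bug fix for its rounding behavior; a focused sanity test re-checks only that code path and nothing else. Both the function and the test values here are invented for illustration.

```python
import unittest

# Hypothetical function that was just patched for a rounding bug
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

class DiscountSanityTest(unittest.TestCase):
    """Narrow sanity check: exercise only the recently changed paths."""

    def test_fixed_rounding_path(self):
        self.assertEqual(apply_discount(19.99, 15), 16.99)

    def test_rejects_out_of_range_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Only once this focused check passes does it make sense to spend time on the broader end-to-end suite.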
Smoke testing and sanity testing play a valuable role in shaping the scope and direction of automated end-to-end testing. They help in setting the stage for more comprehensive testing efforts, including automation, by identifying critical issues early and ensuring that the application is stable. Here's how smoke testing and sanity testing impact the extent of automated end-to-end testing:
Filtering Out Unstable Builds
- Smoke testing is often performed on a new build to quickly identify major issues. If the smoke test fails, it indicates that the build is unstable, and further testing, including automation, may be futile.
- Automated end-to-end testing is typically deferred until a stable build passes the smoke test, ensuring that automation efforts are focused on a reliable foundation.
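This gating logic can be sketched as a small pipeline step. The function and message wording below are illustrative, not taken from any particular CI system.

```python
def gate_e2e_suite(smoke_passed, run_e2e, notify):
    """Run the expensive E2E suite only if the smoke suite passed."""
    if not smoke_passed:
        notify("Build unstable: smoke tests failed; skipping E2E automation.")
        return False
    run_e2e()
    return True

# Usage with simple stand-ins for the real suite and notifier:
messages = []
ran = gate_e2e_suite(False, run_e2e=lambda: None, notify=messages.append)
print(ran, messages)  # the unstable build never reaches the E2E suite
```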
Stabilizing Specific Functionalities
- Sanity testing is more focused and is often conducted after specific changes or bug fixes. It verifies that the recent modifications didn't introduce new issues and that targeted areas are still functional.
- Automated end-to-end testing may be limited until sanity tests confirm the stability of specific functionalities affected by recent changes. This prevents automated tests from running on potentially unstable components.
Optimizing Automation Resources
- Automated end-to-end testing can be resource-intensive. By performing smoke and sanity testing first, teams can make informed decisions about where to invest their automation efforts.
- The results of smoke and sanity tests guide teams in selectively automating critical paths or specific functionalities, optimizing automation resources for maximum impact.
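One way to act on these results is to maintain a mapping from application areas to the E2E suites that cover them, and automate only the suites whose areas changed and passed their sanity checks. The area names and suite names below are hypothetical examples.

```python
# Hypothetical mapping from application areas to the E2E suites covering them
AREA_TO_SUITES = {
    "auth": ["login_flow", "password_reset"],
    "cart": ["checkout_flow", "coupon_redemption"],
    "search": ["product_search"],
}

def select_suites(changed_areas, sanity_passed):
    """Pick E2E suites for changed areas whose sanity checks passed."""
    selected = []
    for area in changed_areas:
        if sanity_passed.get(area, False):
            selected.extend(AREA_TO_SUITES.get(area, []))
    return sorted(selected)

# The cart change failed its sanity check, so its suites are held back:
print(select_suites(["auth", "cart"], {"auth": True, "cart": False}))
# → ['login_flow', 'password_reset']
```

This keeps expensive automation focused on areas that are both affected by recent work and known to be stable enough to test.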
Using Selenium WebDriver to Automate Smoke Tests
Selenium WebDriver is a popular choice for automating tests of web applications. Once set up, a test can open a browser and examine a web page for the presence of text or components. WebDriver can also simulate user interactions with the page and evaluate the resulting updates.
In this example, I'll demonstrate a simple smoke test for the login functionality on a web application.
```python
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By

def perform_smoke_test():
    # Specify the path to your ChromeDriver executable
    driver_path = '/path/to/chromedriver'

    # Create a new instance of the Chrome driver
    driver = webdriver.Chrome(service=Service(driver_path))

    try:
        # Navigate to the login page
        driver.get('https://example.com/login')

        # Locate the username and password input fields and the login button
        username_input = driver.find_element(By.NAME, 'username')
        password_input = driver.find_element(By.NAME, 'password')
        login_button = driver.find_element(By.XPATH, '//button[@type="submit"]')

        # Enter valid credentials
        username_input.send_keys('your_username')
        password_input.send_keys('your_password')

        # Click the login button
        login_button.click()

        # Verify the login succeeded by checking for an element on the
        # post-login page; find_element raises if the element is absent
        assert driver.find_element(By.XPATH, '//h1[text()="Welcome"]')

        print("Smoke test passed: Login functionality is working correctly!")
    except Exception as e:
        print(f"Smoke test failed: {str(e)}")
    finally:
        # Close the browser window
        driver.quit()
```
This example navigates to a login page, enters credentials, clicks the login button, and verifies a successful login by checking for a welcome message on the post-login page. Because this is only an example, the code will not run as-is against your application; it merely illustrates how WebDriver automates the actions of a user on a website and checks that the page looks as expected.
Selenium also provides Selenium IDE, a browser extension for Chrome and Firefox that simplifies automating these tests through record-and-playback. Tests can be created by recording the expected actions and adding assertions about items on the page. Later, these recorded scripts can be exported as WebDriver code in the language of your choice. This makes the process of creating automated tests much easier.