The Various Types of Testing in Software Development
In this article, we will explore the different types of testing related to software development. We’ll learn what they consist of and how they differ, including unit testing, integration testing, functional testing, acceptance testing, and more.
There are many types of software testing we can use to ensure our software continues to function correctly after introducing new changes to the codebase. Not all tests are the same, which is why we will explore the differences between the main types of software tests.
Introduction
Often, when working independently on small projects, we may not need (or have the budget) to write automated tests. However, as we move to larger projects and integrate into larger teams, we frequently hear about different types of tests.
It’s surprising, but as developers, we often focus on building new features without fully understanding what the QA team does. We might not ask due to time constraints or not wanting to admit how little we know about testing. As time passes, we remain unaware of the different types of testing. But that ends today.
Although we won’t be writing or running tests in this article, we will review the various types that exist and what they aim to achieve. This will help us understand their differences and not feel lost when these concepts come up in conversation.
People may have slightly different definitions, but the general idea remains the same. Also, teams sometimes organize to run groups of tests known as “test suites,” which include various types of tests. For example, you can run a test suite that encompasses integration tests and regression tests.
It’s also worth noting that some teams develop their own vocabulary and assign names to their test groups.
Returning to our topic, testing should be done at a reasonable level based on:
- The complexity of our application
- The amount of traffic our application receives
- The size of our team
Manual vs. Automated Testing
Firstly, it’s essential to know that there are manual and automated software tests. Manual tests are conducted by people who navigate and interact with the software using appropriate tools for each case. These tests are costly as they require a professional to set up an environment and execute the tests.
These tests are susceptible to human errors, such as typographical errors or missed steps. Automated tests, on the other hand, are performed by machines that execute a pre-written test script. These tests can range in complexity from verifying that a specific class method works correctly to ensuring a sequence of complex actions in the UI performs as expected and returns the expected results.
Automated tests are faster and more reliable than manual ones, but the quality of these tests depends on how well the test scripts are written. Automated testing is a key component of continuous integration and continuous delivery, and it’s an excellent way to scale your QA processes as you add new features to your application. Nevertheless, manual tests are still important for what is known as “exploratory testing” (which we will discuss later).
The Different Types of Tests
Let’s look at the various types of tests that exist (there are more, but these are the most important ones).
Unit Tests
Unit tests are low-level tests close to the source code of our application. This type of testing involves individually testing functions and/or methods of classes, components, and/or modules used by our software. Due to their specificity, they are generally the least expensive automated tests and can be quickly executed by a continuous integration server.
When planning and writing unit tests, we ideally isolate functionality to the point where it cannot be broken down further and then write tests based on that. A unit test calls a function or method with known inputs and verifies that the return value (or side effect) matches what we expect.
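As a minimal sketch of this idea, assuming a hypothetical `apply_discount` function, a unit test in Python can be as simple as calling the function with known inputs and asserting on the result:

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Given known inputs, the unit must return the expected output.
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(19.99, 0) == 19.99

def test_apply_discount_rejects_bad_percent():
    # Invalid input should raise, per the function's contract.
    try:
        apply_discount(50.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

test_apply_discount()
test_apply_discount_rejects_bad_percent()
```

In practice a test runner such as pytest would discover and run the `test_*` functions for us; here they are called directly to keep the sketch self-contained.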
Since unit tests should have no dependencies, API and external service calls are often replaced with mock functionality to ensure no interaction beyond the unit being tested. In many cases, even database queries are replaced so the test focuses on operating from the input values without relying on external sources.
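To illustrate replacing an external call with a mock, here is a sketch using Python's standard `unittest.mock`; the `fetch_username` function and its API client are hypothetical:

```python
from unittest.mock import Mock

def fetch_username(client, user_id):
    """Look up a user via an external API client and return the name."""
    response = client.get(f"/users/{user_id}")
    return response["name"]

def test_fetch_username_with_mock():
    # Replace the real HTTP client with a mock so the test never
    # touches the network and only exercises our own logic.
    client = Mock()
    client.get.return_value = {"id": 42, "name": "Ada"}

    assert fetch_username(client, 42) == "Ada"
    # We can also verify how the dependency was used.
    client.get.assert_called_once_with("/users/42")

test_fetch_username_with_mock()
```

The test stays fast and deterministic because the only thing being exercised is our own code, not the external service.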
If isolating database use in our unit tests isn’t feasible, it’s crucial to consider performance and optimize our queries. This is important because if our unit tests are long-running, it will be inconvenient to execute them and significantly slow down development times. Test Driven Development (TDD) is a workflow built around unit tests: we write a failing test first, then write just enough code to make it pass. In TDD, unit tests serve as executable specifications of what our code should do.
Integration Tests
Integration tests verify that the different modules and/or services our application uses work correctly together. For example, they may test the interaction with one or more databases or ensure that microservices communicate as expected.
Integration tests typically follow unit tests and are generally more expensive to execute as they require more parts of our application to be set up and running.
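As a sketch, an integration test can exercise real SQL against a real database engine; here an in-memory SQLite database stands in for the production database, and the `save_user`/`find_user` helpers are hypothetical:

```python
import sqlite3

def save_user(conn, name):
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

def find_user(conn, name):
    row = conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else None

def test_user_roundtrip():
    # Unlike a unit test, this runs real queries against a real
    # (in-memory) database engine, so SQL errors are caught too.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    save_user(conn, "Grace")
    assert find_user(conn, "Grace") == "Grace"
    assert find_user(conn, "Nobody") is None
    conn.close()

test_user_roundtrip()
```

Using an in-memory database keeps the test fast while still exercising the real integration boundary.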
Functional Tests
Functional tests focus on the business requirements of an application. These tests verify the output of an action without considering the system’s intermediate states during execution. Sometimes, there’s confusion between integration tests and functional tests as both require multiple components to interact.
The difference is that an integration test might simply verify that database queries execute correctly, while a functional test would expect to show a specific value to a user according to product requirements.
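To make that distinction concrete, here is a sketch of a functional test that asserts the user-facing output a business requirement describes (the `cart_total_message` function and the free-shipping rule are hypothetical):

```python
def cart_total_message(items):
    """What the user sees at checkout, per a hypothetical product
    spec: 'Total: $<amount>', with free shipping at $50 and above."""
    total = sum(price for _, price in items)
    shipping = 0.0 if total >= 50 else 4.99
    return f"Total: ${total + shipping:.2f}"

def test_checkout_shows_free_shipping_over_50():
    # Functional test: assert the result the business requirement
    # describes, not the internal steps that produce it.
    items = [("book", 30.0), ("lamp", 25.0)]
    assert cart_total_message(items) == "Total: $55.00"

def test_checkout_adds_shipping_under_50():
    assert cart_total_message([("book", 30.0)]) == "Total: $34.99"

test_checkout_shows_free_shipping_over_50()
test_checkout_adds_shipping_under_50()
```

An integration test of the same feature might only check that the cart rows load from the database; the functional test instead pins down the value the user must see.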
End-to-End Tests
End-to-end tests replicate user behavior with the software in a complete application environment. These tests verify that the user flows work as expected and can be as simple as loading a webpage and logging in, or much more complex, such as verifying email notifications, online payments, etc. End-to-end tests are very useful but expensive to perform and can be challenging to maintain when automated. Therefore, it’s advisable to have a few key end-to-end tests and rely more on low-level tests (such as unit and integration tests) to quickly detect changes that negatively impact our application.
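The shape of an end-to-end test can be sketched as driving the application through the same sequence of steps a user would. Real end-to-end tests use browser automation tools (e.g. Selenium or Playwright) against a running environment; the `FakeApp` below is a hypothetical in-process stand-in used only to show the flow:

```python
class FakeApp:
    """Stand-in for an application driven through its UI; a real
    end-to-end test would automate a browser instead."""
    def __init__(self):
        self.users = {}
        self.session = None

    def register(self, email, password):
        self.users[email] = password

    def login(self, email, password):
        self.session = email if self.users.get(email) == password else None
        return self.session is not None

def test_signup_then_login_flow():
    # End-to-end: walk the same sequence of steps a user would.
    app = FakeApp()
    app.register("dev@example.com", "s3cret")
    assert app.login("dev@example.com", "s3cret")
    assert not app.login("dev@example.com", "wrong")

test_signup_then_login_flow()
```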
Regression Testing
Regression tests verify a set of scenarios that worked correctly in the past to ensure they continue to do so. We shouldn’t add new features to our regression test suite until the current regression tests pass. A failure in a regression test means that new functionality has negatively affected previously correct functionality, causing a “regression.”
A regression test failure could also indicate that we’ve reintroduced a bug that was previously fixed.
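A typical pattern is to pin each fixed bug with a test so it cannot silently return. As a sketch, assuming a hypothetical `slugify` helper that once mishandled double spaces:

```python
def slugify(title):
    """Turn a post title into a URL slug."""
    # str.split() with no argument collapses runs of whitespace,
    # which is what fixed the original bug.
    return "-".join(title.lower().split())

def test_slugify_regression_double_spaces():
    # Regression test for a past bug: double spaces used to produce
    # an empty segment ("hello--world"). This test keeps the fix in place.
    assert slugify("Hello  World") == "hello-world"

test_slugify_regression_double_spaces()
```

If a later change reintroduces the old behavior, this test fails and names the exact regression.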
Smoke Testing
Smoke tests verify the basic functionality of an application. These tests are intended to be quick to execute and aim to ensure that the most critical system features work as expected. Smoke tests can be very useful right after building a new version of our application to decide if we’re ready to run more costly tests or right after deployment to ensure the application is functioning correctly in the new environment.
Smoke tests are a carefully selected, high-level set of automated tests, typically run after integration tests and before regression tests, to verify that the site’s main functionality operates correctly. The term “smoke test” is said to originate from plumbing: if smoke came out of a pipe, there was a leak that needed repair. Smoke tests are not detailed; they are significant checks at a more general level.

Ideally, they should run daily in every environment. If a smoke test fails, there’s a severe problem with our software’s functionality, so we shouldn’t deploy new changes until the issue is addressed. If they fail in production, fixing them becomes the highest priority.
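A smoke test run can be sketched as a short list of shallow checks against the critical pages. The `FakeServer` below is a hypothetical stand-in for an HTTP client hitting the deployed application:

```python
from types import SimpleNamespace

class FakeServer:
    """Hypothetical stand-in for an HTTP client against the app."""
    ROUTES = {"/", "/login", "/health"}

    def get(self, path):
        return SimpleNamespace(status=200 if path in self.ROUTES else 404)

def smoke_test(app):
    # Fast, shallow checks of the most critical pages; any failure
    # should block the more expensive test suites from running.
    checks = {
        "homepage": app.get("/").status == 200,
        "login page": app.get("/login").status == 200,
        "health endpoint": app.get("/health").status == 200,
    }
    return [name for name, ok in checks.items() if not ok]

# An empty list means the build is healthy enough for deeper testing.
assert smoke_test(FakeServer()) == []
```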
Acceptance Testing
Acceptance tests are formal tests performed to verify if a system meets its business requirements. These tests require the software to be operational and focus on replicating user behavior to reject changes if goals aren’t met. These goals may go beyond obtaining a specific response and measure system performance.
Acceptance tests are usually a set of manual tests performed once a development phase has concluded. They verify that the software’s features match the initial specifications and acceptance criteria. They are often run soon after unit or integration tests, so that we find out early, before investing in more expensive testing, whether significant changes are needed and can iterate quickly. For these tests to be carried out correctly, it’s essential that project leaders define the acceptance criteria before work begins. Additionally, any new requirements that arise during the process should be reflected in those acceptance criteria.
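Although acceptance testing is often manual, an individual criterion can sometimes be automated. As a sketch, the criterion, the `AccountService` class, and its methods below are all hypothetical:

```python
# Acceptance criterion (agreed before work started, hypothetical):
#   "A registered user can reset their password, and the new
#    password takes effect immediately."

class AccountService:
    def __init__(self):
        self.passwords = {"user@example.com": "old-pass"}

    def reset_password(self, email, new_password):
        if email in self.passwords:
            self.passwords[email] = new_password

    def authenticate(self, email, password):
        return self.passwords.get(email) == password

def test_acceptance_password_reset():
    # Each assertion maps directly onto a clause of the criterion.
    svc = AccountService()
    svc.reset_password("user@example.com", "new-pass")
    assert svc.authenticate("user@example.com", "new-pass")
    assert not svc.authenticate("user@example.com", "old-pass")

test_acceptance_password_reset()
```

Writing the criteria down first is what makes this possible: the test is a direct translation of the agreed requirement.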
Performance Testing
Performance tests check how the system responds under high load. These tests are non-functional and can take various forms to understand the platform’s reliability, stability, and availability. For example, they can observe response times when a high number of requests are executed or see how the system behaves with a significant amount of data.
Performance tests are inherently expensive to implement and execute but can help us understand if new changes will degrade our system (such as making it slower or increasing resource consumption). Performance tests don’t fail in the same way as other tests. Instead, they aim to collect metrics and set objectives to achieve. It’s generally a good idea to perform these tests for new releases and/or significant refactorings in the code.
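A minimal sketch of this metrics-and-objective style, where the `search` function and the timing target are hypothetical stand-ins for a real endpoint and a real service-level objective:

```python
import time

def search(haystack, needle):
    """Function under measurement (stand-in for a real endpoint)."""
    return [i for i, item in enumerate(haystack) if item == needle]

def measure(fn, runs=5):
    # Performance tests collect metrics and compare them against a
    # target, rather than asserting exact values.
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return min(timings)

data = list(range(100_000))
best = measure(lambda: search(data, 99_999))
# Objective (hypothetical): the search stays under half a second.
assert best < 0.5, f"too slow: {best:.4f}s"
```

Tracking these numbers across releases is what reveals whether a change made the system slower, even when every functional test still passes.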
Why and How to Automate Our Tests
A person can run all the tests mentioned, but it would be very costly and counterproductive. As humans, we have a limited capacity to perform a large number of actions repeatably and reliably. But a machine can easily do this and ensure that our login form works correctly even on the 1000th attempt without complaining.
To automate our tests, we first need to write them using programming code with a testing framework that suits our application. PHPUnit, Mocha, and RSpec are examples of testing frameworks we can use to write automated tests in PHP, JavaScript, and Ruby, respectively. There are many options for each language.
If our tests can be started by running a script from the terminal, we can also run them using a continuous integration server or a cloud service dedicated to it. These tools can monitor our repositories and run our test suite every time new changes are uploaded.
Exploratory Testing
Exploratory tests are manual by nature. The tester navigates the application freely, without a predefined script, relying on experience and curiosity to uncover problems that automated tests (which only check what we thought to check) can miss. This is why manual testing remains valuable even in teams with extensive automation.
Conclusion
As we add more features and improvements to our code, the need to write tests to ensure our system works appropriately grows. Similarly, every time we fix a bug, it’s prudent to check that previously fixed bugs don’t reappear. Automation is key to making this possible, and writing tests will become part of our development workflow sooner or later.