There are many different kinds of tests, so many, in fact, that companies often have a dedicated department, called quality assurance (QA), made up of individuals who spend their day testing the software the company developers produce.
We can divide tests into two broad categories: white-box and black-box tests.
White-box tests are those that exercise the internals of the code; they inspect it down to a very fine level of detail. Black-box tests, on the other hand, treat the software under test as a sealed box whose internals are ignored, checking only its inputs and outputs. There is also an in-between category, called gray-box testing, which involves testing a system in the same way we do with the black-box approach, but with some knowledge of the algorithms and data structures used to write the software, and only partial access to its source code.
There are many different kinds of tests in these categories, each of which serves a different purpose.
Frontend tests: Make sure that the client side of your application exposes the information it should: all the links, the buttons, the advertising, everything that needs to be shown to the client. They may also verify that it is possible to walk a certain path through the user interface.
Scenario tests: Make use of stories (or scenarios) that help the tester work through a complex problem or test a part of the system.
Integration tests: Verify the behavior of the various components of your application when they are working together, sending messages through interfaces.
Smoke tests: Particularly useful when you deploy a new update to your application. They check whether the most essential, vital parts of your application are still working as they should and that they are not on fire. The term comes from hardware engineering, where engineers tested circuits by making sure nothing was smoking.
Acceptance tests, or user acceptance testing (UAT): What a developer does with a product owner (for example, in a Scrum environment) to determine whether the work that was commissioned was carried out correctly.
Functional tests: Verify the features or functionalities of your software.
Destructive tests: Take down parts of your system, simulating a failure, to establish how well the remaining parts of the system perform. These kinds of tests are performed extensively by companies that need to provide an extremely reliable service, such as Amazon and Netflix.
Performance tests: Aim to verify how well the system performs under a specific load of data or traffic so that, for example, engineers can get a
better understanding of the bottlenecks in the system that could bring it to its knees in a heavy-load situation, or those that prevent scalability.
Usability tests, and the closely related user experience (UX) tests: Aim to check whether the user interface is simple and easy to understand and use. Their findings provide input to the designers so that the user experience can be improved.
Security and penetration tests: Aim to verify how well the system is protected against attacks and intrusions.
Unit tests: Help the developer to write the code in a robust and consistent way, providing the first line of feedback and defense against coding
mistakes, refactoring mistakes, and so on.
Regression tests: Provide the developer with useful information about a feature being compromised in the system after an update. Some of the causes for a system being said to have a regression are an old bug coming back to life, an existing feature being compromised, or a new issue being introduced.
A test is typically composed of three sections:
Preparation: This is where you set up the scene. You prepare all the data, the objects, and the services you need in the places you need them so that
they are ready to be used.
Execution: This is where you execute the bit of logic that you're checking against. You perform an action using the data and the interfaces you have
set up in the preparation phase.
Verification: This is where you verify the results and make sure they match your expectations. You check the returned value of a function, or that some data is in the database, some is not, some has changed, a request has been made, something has happened, a method has been called, and so on.
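The three phases can be sketched in a short pytest-style test. This is an illustration only: the `ShoppingCart` class and its methods are hypothetical, defined inline so the example is self-contained, and prices are kept in integer cents to avoid floating-point surprises.

```python
# A hypothetical class under test, defined inline for illustration.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name, price_cents):
        self.items.append((name, price_cents))

    def total(self):
        return sum(price for _, price in self.items)


def test_cart_total():
    # Preparation: set up the data and objects the test needs.
    cart = ShoppingCart()
    cart.add("book", 2990)
    cart.add("pen", 210)

    # Execution: run the bit of logic you are checking against.
    result = cart.total()

    # Verification: check the result against your expectations.
    assert result == 3200
```

This Arrange/Act/Assert layout keeps each test easy to read: a failure in any phase points directly at what went wrong.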
Tests should be kept as simple as possible.
Tests should verify one thing and one thing only.
Tests should not make any unnecessary assumptions when verifying data.
Tests should exercise the what, rather than the how.
Tests should use the minimal set of fixtures needed to do the job.
Tests should run as fast as possible.
Tests should use up the least possible amount of resources.
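A few of these guidelines can be sketched together; the names here (`make_user` and the tests around it) are hypothetical stand-ins, not part of any real codebase.

```python
def make_user():
    # Minimal fixture: only the fields the tests actually need.
    return {"name": "Ada", "active": True}


def test_user_is_active():
    # One test, one verification.
    user = make_user()
    assert user["active"] is True


def test_user_has_a_name():
    # Exercise the "what" (a name is present), not the "how"
    # (its exact internal representation).
    user = make_user()
    assert user["name"]
```

Small, single-purpose tests like these also run fast and use few resources, which keeps the whole suite quick enough to run on every change.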
A Jenkins box is a machine that runs Jenkins, software that is capable of, among many other things, running your tests automatically. Jenkins is frequently used in companies where developers use practices such as continuous integration and extreme programming.
Mock objects and patching
In Python, these fake objects are called mocks. The act of replacing a real object or function (or, in general, any data structure) with a mock is called patching. The mock library provides the patch tool, which can act as a function or class decorator, and even as a context manager that you can use to mock things out. Once you have replaced everything you don't need to run with suitable mocks, you can proceed to the second phase of the test and run the code you are exercising. After the execution, you will be able to check those mocks to verify that your code has worked correctly.
An assertion is a function (or method) that you can use to verify equality between objects, as well as other conditions. When a condition is not met, the assertion will raise an exception that will make your test fail. You can find a list of assertions in the unittest module documentation; however, when using pytest, you will typically use the generic assert statement, which makes things even simpler.
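The two styles can be compared side by side; the test names below are illustrative, and the arithmetic is deliberately trivial.

```python
import unittest


class TestArithmetic(unittest.TestCase):
    def test_equality(self):
        # unittest assertion methods raise an exception on failure.
        self.assertEqual(2 + 2, 4)
        self.assertIn("y", "python")


def test_equality_pytest_style():
    # With pytest, the plain assert statement is enough; pytest
    # rewrites it to produce detailed failure messages.
    assert 2 + 2 == 4
    assert "y" in "python"
```

Both styles fail the test by raising an exception; pytest's rewritten assert simply spares you from memorizing a family of assertion method names.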
Test-driven development (TDD) is a methodology that was rediscovered by Kent Beck, who wrote Test-Driven Development by Example (Addison-Wesley, 2002).
First, the developer writes a test and runs it. The test is supposed to check a feature that is not yet part of the code; maybe it is a new feature to be added, or something to be removed or amended. Running the test will make it fail and, because of this, this phase is called Red.
When the test has failed, the developer writes the minimal amount of code to make it pass. When running the test succeeds, we have the so-called Green phase. In this phase, it is okay to write code that cheats, just to make the test pass; this technique is called fake it 'till you make it. Later, the tests are enriched with different edge cases, and the cheating code then has to be rewritten with proper logic. Adding other test cases is called triangulation.
The last piece of the cycle is where the developer takes care of both the code and the tests (at separate times) and refactors them until they are in the desired state. This last phase is called Refactor.
The TDD mantra therefore is Red-Green-Refactor.
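One turn of the cycle can be sketched with a hypothetical helper, `multiple_of_three`, invented for this example.

```python
# Red: test_multiple_of_three below fails first, because
# multiple_of_three does not exist yet (or returns the wrong result).

# Green: the minimal code that makes the test pass. A first,
# "cheating" version could simply `return True`; triangulating with
# the second test case below forces the real logic.
def multiple_of_three(n):
    return n % 3 == 0


# Refactor: with the tests green, both code and tests can be cleaned
# up safely, because any breakage turns a test red again.
def test_multiple_of_three():
    assert multiple_of_three(9) is True
    assert multiple_of_three(10) is False  # triangulation case
```

Each new failing case pushes the implementation one step past the previous cheat, which is what makes triangulation work.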
When you write your code before the tests, you have to take care of what the code has to do and how it has to do it, both at the same time.
There are several other benefits that come from the adoption of this technique:
You will refactor with much more confidence
The code will be more readable
The code will be more loosely coupled and easier to test and maintain
Writing tests first requires you to have a better understanding of the business requirements
Having everything unit tested means the code will be easier to debug