Pytest for Unit-Testing Python Code
# !pip install pytest
Every line of code is one more reason your software might crash. We often don't realize the importance of testing our code until it becomes part of a production codebase. Pytest is the most widely used framework for unit testing in Python. Here's how you can easily unit test your code with pytest.
Write the code and corresponding tests
As an example, let's write a simple function that checks whether a number is even. The function returns a Boolean: True or False. Let's pass it a test case and see if it returns the result we expect.
%%writefile check_if_even.py
def check_if_even(a):
    """
    Returns True if 'a' is an even number
    """
    return a % 2 == 0

def test_check_if_even():
    """
    Define test cases
    """
    # a = 2. Expected value is True
    a = 2
    is_even = check_if_even(a)
    assert is_even == True
Overwriting check_if_even.py
Run the test
Pytest reads the Python scripts and treats any function whose name starts with 'test_' as a test function.
!pytest check_if_even.py
============================= test session starts ==============================
platform linux -- Python 3.7.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /home/kedardabhadkar/Notes/data_wrangling/data_manipulation
plugins: anyio-2.0.2, Faker-8.5.1, dash-1.20.0
collected 1 item
check_if_even.py . [100%]
============================== 1 passed in 0.04s ===============================
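A plain `assert` inside the test is all pytest needs; outside pytest, the same statement behaves like any ordinary Python assertion. A minimal standalone sketch (not part of the original notebook) of what pytest evaluates:

```python
def check_if_even(a):
    """Returns True if 'a' is an even number."""
    return a % 2 == 0

# A passing assert is silent; a failing one raises AssertionError,
# which pytest catches and reports as a test failure.
assert check_if_even(2) == True

try:
    assert check_if_even(3) == True
except AssertionError:
    print("check_if_even(3) is False, so this assert fails")
```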
Run multiple test cases at once
%%writefile check_if_even.py
import pytest

testdata = [
    (2, True),
    (3, False),
    (4, True),
    (5, True)  # We expect this test to fail
]

def check_if_even(a):
    """
    Returns True if 'a' is an even number
    """
    return a % 2 == 0

@pytest.mark.parametrize('sample, expected_output', testdata)
def test_check_if_even(sample, expected_output):
    """
    Define test cases
    """
    assert check_if_even(sample) == expected_output
Overwriting check_if_even.py
!pytest check_if_even.py
============================= test session starts ==============================
platform linux -- Python 3.7.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /home/kedardabhadkar/Notes/data_wrangling/data_manipulation
plugins: anyio-2.0.2, Faker-8.5.1, dash-1.20.0
collected 4 items
check_if_even.py ...F [100%]
=================================== FAILURES ===================================
__________________________ test_check_if_even[5-True] __________________________
sample = 5, expected_output = True
@pytest.mark.parametrize('sample, expected_output', testdata)
def test_check_if_even(sample, expected_output):
"""
Define test cases
"""
> assert check_if_even(sample) == expected_output
E assert False == True
E + where False = check_if_even(5)
check_if_even.py:23: AssertionError
=========================== short test summary info ============================
FAILED check_if_even.py::test_check_if_even[5-True] - assert False == True
========================= 1 failed, 3 passed in 0.14s ==========================
And as expected, the first 3 test cases passed and the last one failed!
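When a failing case is expected, you can mark it instead of letting the suite go red. This sketch uses `pytest.param` with an `xfail` mark, a pytest feature not used in the original example:

```python
import pytest

def check_if_even(a):
    """Returns True if 'a' is an even number."""
    return a % 2 == 0

testdata = [
    (2, True),
    (3, False),
    (4, True),
    # The xfail mark tells pytest this case is expected to fail,
    # so it is reported as 'xfailed' rather than 'failed'.
    pytest.param(5, True, marks=pytest.mark.xfail(reason="5 is odd")),
]

@pytest.mark.parametrize('sample, expected_output', testdata)
def test_check_if_even(sample, expected_output):
    assert check_if_even(sample) == expected_output
```

Running `pytest` on this file keeps the overall run green while still recording that the fourth case does not pass.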
How do we structure our code after integrating with pytest?
Although there are multiple ways you can structure your code, this is the one I personally prefer. It logically separates your tests from the rest of the source code.
project/
├── src
│   └── check_if_even.py
└── tests
    └── test_code.py
When applied to our toy example, here is what each of the two scripts looks like.
%%writefile project/src/check_if_even.py
def check_if_even(a):
    """
    Returns True if 'a' is an even number
    """
    return a % 2 == 0
Overwriting project/src/check_if_even.py
%%writefile project/tests/test_code.py
import pytest
import sys
import os
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir)))

from src.check_if_even import check_if_even

testdata = [
    (2, True),
    (3, False),
    (4, True),
    (5, True)  # We expect this test to fail
]

@pytest.mark.parametrize('sample, expected_output', testdata)
def test_check_if_even(sample, expected_output):
    """
    Define test cases
    """
    assert check_if_even(sample) == expected_output
Overwriting project/tests/test_code.py
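The `sys.path` line in `test_code.py` simply appends the parent of the tests directory (the project root), which makes `src` importable. A small sketch of the path arithmetic, using a hypothetical file location in place of `__file__`:

```python
import os

# Hypothetical location of the test file (stands in for __file__)
test_file = "/home/user/project/tests/test_code.py"

# Same expression as in test_code.py: the file's directory, then one level up
project_root = os.path.abspath(os.path.join(os.path.dirname(test_file), os.path.pardir))
print(project_root)  # /home/user/project
```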
!pytest project/tests/test_code.py
============================= test session starts ==============================
platform linux -- Python 3.7.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /home/kedardabhadkar/Notes/data_wrangling/data_manipulation
plugins: anyio-2.0.2, Faker-8.5.1, dash-1.20.0
collected 4 items
project/tests/test_code.py ...F [100%]
=================================== FAILURES ===================================
__________________________ test_check_if_even[5-True] __________________________
sample = 5, expected_output = True
@pytest.mark.parametrize('sample, expected_output', testdata)
def test_check_if_even(sample, expected_output):
"""
Define test cases
"""
> assert check_if_even(sample) == expected_output
E assert False == True
E + where False = check_if_even(5)
project/tests/test_code.py:25: AssertionError
=========================== short test summary info ============================
FAILED project/tests/test_code.py::test_check_if_even[5-True] - assert False ...
========================= 1 failed, 3 passed in 0.14s ==========================
Read more
[1] https://towardsdatascience.com/pytest-for-data-scientists-2990319e55e6
[2] https://docs.pytest.org/en/6.2.x/