Chapter 10: Test Reports and Code Coverage

Haiyue

Learning Objectives
  • Master generation of multiple test report formats
  • Learn code coverage measurement and analysis
  • Understand test quality assessment methods
  • Master report configuration in continuous integration

Key Concepts

Test Report Types

Report Type    | Purpose                  | Output Format
---------------|--------------------------|--------------
Console Report | Development debugging    | Text
HTML Report    | Visual viewing           | HTML
XML Report     | CI/CD integration        | XML
JSON Report    | Programmatic processing  | JSON
JUnit Report   | Jenkins integration      | XML

Code Coverage Metrics

  • Line coverage: proportion of executable lines that were run
  • Branch coverage: proportion of decision branches that were taken (must be enabled explicitly; see the commands below)
  • Function coverage: proportion of functions that were called
  • Condition coverage: proportion of boolean sub-conditions evaluated both true and false
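
By default, coverage.py measures line coverage only; branch coverage has to be switched on. A minimal sketch using the pytest-cov plugin (assuming a src/ layout):

# Enable branch coverage for a single run
pytest --cov=src --cov-branch --cov-report=term-missing

# Or enable it permanently in .coveragerc:
# [run]
# branch = True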

Example Code

Basic Report Configuration

# pytest.ini (note: the section header here is [pytest]; [tool:pytest] belongs in setup.cfg)
[pytest]
addopts =
    -v
    --tb=short
    --strict-markers
    --strict-config
    --cov=src
    --cov-report=term-missing
    --cov-report=html:htmlcov
    --cov-report=xml:coverage.xml
    --html=reports/report.html
    --self-contained-html
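
If the project is configured through pyproject.toml instead of pytest.ini, the same options can live under [tool.pytest.ini_options]; a sketch of the equivalent configuration:

# pyproject.toml -- equivalent of the pytest.ini above
[tool.pytest.ini_options]
addopts = "-v --tb=short --strict-markers --cov=src --cov-report=term-missing --cov-report=html:htmlcov"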

Generate HTML Test Report

# Install dependency
pip install pytest-html

# Generate HTML report
pytest --html=reports/report.html --self-contained-html

# Generate report with custom CSS
pytest --html=reports/report.html --css=custom.css

Code Coverage Testing

# Install pytest-cov
# pip install pytest-cov

# Basic coverage commands
pytest --cov=src                          # Basic coverage
pytest --cov=src --cov-report=html        # HTML coverage report
pytest --cov=src --cov-report=term-missing # Show uncovered lines
pytest --cov=src --cov-fail-under=80      # Fail if coverage below 80%

# Multi-module coverage
pytest --cov=src --cov=tests --cov-report=html

# Exclude specific files
pytest --cov=src --cov-report=html --cov-config=.coveragerc

Coverage Configuration File

# .coveragerc
[run]
source = src
omit =
    */tests/*
    */venv/*
    */__pycache__/*
    */migrations/*
    */settings/*
    manage.py

[report]
exclude_lines =
    pragma: no cover
    def __repr__
    raise AssertionError
    raise NotImplementedError
    if __name__ == .__main__.:
    if TYPE_CHECKING:

[html]
directory = htmlcov
title = My Project Coverage Report

[xml]
output = coverage.xml
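
coverage.py can also emit a JSON report, which the analysis script later in this chapter consumes; adding a [json] section makes its output path explicit:

# .coveragerc (continued)
[json]
output = coverage.json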

Custom Test Report Plugin

# conftest.py - Custom report hooks
import pytest
from datetime import datetime

def pytest_html_report_title(report):
    """Customize HTML report title"""
    report.title = "Project Test Report"

def pytest_html_results_summary(prefix, summary, postfix):
    """Customize results summary"""
    prefix.extend([
        "<p>Test execution time: {}</p>".format(datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
    ])

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    """Attach extra data to each test report"""
    outcome = yield
    report = outcome.get_result()

    # Expose the report for each phase (setup/call/teardown) on the item
    setattr(item, "rep_" + report.when, report)

    # Use the test's docstring as the description column in the HTML table
    report.description = str(item.function.__doc__ or "")

    # For failed tests, extra content (screenshots, logs, etc.) could be
    # attached here through pytest-html's extras mechanism
    if report.when == "call" and report.failed:
        report.extra = getattr(report, "extra", [])

def pytest_html_results_table_header(cells):
    """Customize table header"""
    cells.insert(2, "<th>Description</th>")
    cells.insert(3, "<th>Execution Time</th>")

def pytest_html_results_table_row(report, cells):
    """Customize table row"""
    # Add test description
    if hasattr(report, 'description'):
        cells.insert(2, f"<td>{report.description}</td>")
    else:
        cells.insert(2, "<td>No description</td>")

    # Add execution time
    if hasattr(report, 'duration'):
        cells.insert(3, f"<td>{report.duration:.3f}s</td>")
    else:
        cells.insert(3, "<td>N/A</td>")
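
With this conftest.py in place, an ordinary pytest-html run picks up the custom title, summary line, and table columns:

pytest --html=reports/report.html --self-contained-html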

Performance Report Generation

# test_performance_reporting.py
# Requires pytest-benchmark: pip install pytest-benchmark
# (the `benchmark` fixture is injected by the plugin; nothing needs to be imported for it)
import pytest
import time

def fibonacci(n):
    """Calculate Fibonacci sequence"""
    if n < 2:
        return n
    return fibonacci(n-1) + fibonacci(n-2)

def fibonacci_optimized(n, memo={}):
    """Memoized Fibonacci (the mutable default memo is shared across calls,
    so repeated benchmark rounds will mostly hit the cache)"""
    if n in memo:
        return memo[n]
    if n < 2:
        return n
    memo[n] = fibonacci_optimized(n-1, memo) + fibonacci_optimized(n-2, memo)
    return memo[n]

class TestPerformance:
    """Performance test class"""

    def test_fibonacci_performance(self, benchmark):
        """Test Fibonacci performance"""
        result = benchmark(fibonacci, 20)
        assert result == 6765

    def test_fibonacci_optimized_performance(self, benchmark):
        """Test optimized Fibonacci performance"""
        result = benchmark(fibonacci_optimized, 20)
        assert result == 6765

    @pytest.mark.parametrize("n", [10, 15, 20])
    def test_fibonacci_scale(self, benchmark, n):
        """Test performance at different scales"""
        result = benchmark(fibonacci_optimized, n)
        assert result >= 0

    def test_slow_operation(self, benchmark):
        """Test slow operation"""
        def slow_function():
            time.sleep(0.1)  # Simulate slow operation
            return "completed"

        result = benchmark(slow_function)
        assert result == "completed"

# Run performance tests
# pytest --benchmark-only                                   # run only benchmarks
# pytest --benchmark-only --benchmark-json=benchmark.json   # machine-readable results
# pytest --benchmark-only --benchmark-histogram             # SVG histograms (requires pygal)
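
pytest-benchmark can also persist results between runs and diff them, which helps catch performance regressions. A sketch using its built-in save/compare support:

# Save this run's results under .benchmarks/
pytest --benchmark-only --benchmark-autosave

# Compare against previously saved runs (pass run IDs to narrow the set)
pytest-benchmark compare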

Multi-Format Report Generation

# test_multi_format_reports.py
import pytest
import json
import xml.etree.ElementTree as ET

class TestReportFormats:
    """Test multiple report formats"""

    def test_json_output(self):
        """Test generating JSON data"""
        data = {"name": "test", "value": 42}
        assert isinstance(data, dict)
        assert data["name"] == "test"

    def test_xml_parsing(self):
        """XML parsing test"""
        xml_data = "<root><item>test</item></root>"
        root = ET.fromstring(xml_data)
        assert root.tag == "root"
        assert root.find("item").text == "test"

    @pytest.mark.slow
    def test_large_dataset(self):
        """Large dataset test"""
        large_list = list(range(10000))
        assert len(large_list) == 10000
        assert sum(large_list) == 49995000

# Commands to generate multiple format reports
# pytest --html=report.html --junitxml=junit.xml --json-report --json-report-file=report.json

CI/CD Integration Report

# .github/workflows/test-report.yml
name: Test and Report

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v3

    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version: '3.10'

    - name: Install dependencies
      run: |
        pip install -r requirements.txt
        pip install pytest pytest-cov pytest-html pytest-json-report

    - name: Run tests with coverage
      run: |
        pytest \
          --cov=src \
          --cov-report=html:htmlcov \
          --cov-report=xml:coverage.xml \
          --cov-report=term-missing \
          --html=reports/pytest_report.html \
          --self-contained-html \
          --json-report \
          --json-report-file=reports/report.json \
          --junitxml=reports/junit.xml

    - name: Upload coverage to Codecov
      uses: codecov/codecov-action@v3
      with:
        file: ./coverage.xml

    - name: Archive test results
      uses: actions/upload-artifact@v3
      if: always()
      with:
        name: test-results
        path: |
          reports/
          htmlcov/

    - name: Publish test results
      uses: dorny/test-reporter@v1
      if: always()
      with:
        name: PyTest Results
        path: reports/junit.xml
        reporter: java-junit

Coverage Analysis and Optimization

# src/calculator.py - Example code for coverage analysis
class Calculator:
    """Calculator class"""

    def add(self, a, b):
        """Addition"""
        return a + b

    def subtract(self, a, b):
        """Subtraction"""
        return a - b

    def multiply(self, a, b):
        """Multiplication"""
        if a == 0 or b == 0:  # Branch 1
            return 0
        return a * b  # Branch 2

    def divide(self, a, b):
        """Division"""
        if b == 0:  # Branch 1
            raise ValueError("Division by zero")
        return a / b  # Branch 2

    def power(self, base, exponent):
        """Power operation"""
        if exponent == 0:  # Branch 1
            return 1
        elif exponent < 0:  # Branch 2
            return 1 / self.power(base, abs(exponent))
        else:  # Branch 3
            return base * self.power(base, exponent - 1)

    def factorial(self, n):
        """Factorial"""
        if n < 0:  # Branch 1 - Boundary case
            raise ValueError("Factorial of negative number")
        elif n == 0 or n == 1:  # Branch 2 - Base case
            return 1
        else:  # Branch 3 - Recursive case
            return n * self.factorial(n - 1)

# Test file showing different coverage levels
# tests/test_calculator_coverage.py
import pytest

from src.calculator import Calculator  # import path depends on your project layout


class TestCalculatorCoverage:
    """Calculator coverage tests"""

    @pytest.fixture
    def calc(self):
        return Calculator()

    def test_basic_operations(self, calc):
        """Basic operations test - Partial coverage"""
        assert calc.add(2, 3) == 5
        assert calc.subtract(5, 3) == 2
        # Note: multiply and divide not tested here

    def test_multiply_with_zero(self, calc):
        """Multiply with zero test - Cover specific branch"""
        assert calc.multiply(0, 5) == 0
        assert calc.multiply(3, 0) == 0

    def test_multiply_normal(self, calc):
        """Normal multiplication test"""
        assert calc.multiply(3, 4) == 12

    def test_divide_success(self, calc):
        """Successful division test"""
        assert calc.divide(10, 2) == 5

    def test_divide_by_zero(self, calc):
        """Divide by zero test - Exception branch"""
        with pytest.raises(ValueError, match="Division by zero"):
            calc.divide(10, 0)

    def test_power_comprehensive(self, calc):
        """Comprehensive power test - High coverage"""
        # Test all branches
        assert calc.power(2, 0) == 1      # Exponent is 0
        assert calc.power(2, 3) == 8      # Positive exponent
        assert calc.power(2, -2) == 0.25  # Negative exponent

    def test_factorial_comprehensive(self, calc):
        """Comprehensive factorial test"""
        # Test all branches
        assert calc.factorial(0) == 1      # Boundary case
        assert calc.factorial(1) == 1      # Boundary case
        assert calc.factorial(5) == 120    # Normal case

        # Test exception case
        with pytest.raises(ValueError):
            calc.factorial(-1)

# Coverage report analysis script
# analyze_coverage.py
# Expects coverage.py's JSON report, e.g. produced by:
#   pytest --cov=src --cov-report=json   (writes coverage.json by default)
import json
import sys

def analyze_coverage_report(json_file):
    """Analyze coverage report"""
    try:
        with open(json_file, 'r') as f:
            data = json.load(f)

        files = data.get('files', {})

        print("📊 Coverage Analysis Report")
        print("=" * 50)

        total_lines = 0
        covered_lines = 0

        for filename, info in files.items():
            if filename.startswith('src/'):  # Only analyze source code
                lines = info.get('summary', {}).get('num_statements', 0)
                covered = info.get('summary', {}).get('covered_lines', 0)
                missing = info.get('summary', {}).get('missing_lines', 0)

                coverage = (covered / lines * 100) if lines > 0 else 0

                print(f"📁 {filename}")
                print(f"   Total lines: {lines}")
                print(f"   Covered lines: {covered}")
                print(f"   Uncovered lines: {missing}")
                print(f"   Coverage: {coverage:.1f}%")

                if missing > 0:
                    missing_lines = info.get('missing_lines', [])
                    print(f"   Uncovered lines: {missing_lines}")

                print()

                total_lines += lines
                covered_lines += covered

        overall_coverage = (covered_lines / total_lines * 100) if total_lines > 0 else 0
        print(f"🎯 Overall coverage: {overall_coverage:.1f}%")

        # Coverage grade assessment
        if overall_coverage >= 90:
            print("✅ Excellent coverage")
        elif overall_coverage >= 80:
            print("🟡 Good coverage")
        elif overall_coverage >= 70:
            print("🟠 Fair coverage")
        else:
            print("❌ Coverage needs improvement")

    except FileNotFoundError:
        print(f"❌ Coverage report file {json_file} not found")
        sys.exit(1)
    except json.JSONDecodeError:
        print(f"❌ Unable to parse JSON file {json_file}")
        sys.exit(1)

if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python analyze_coverage.py <coverage.json>")
        sys.exit(1)

    analyze_coverage_report(sys.argv[1])
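
A typical invocation, assuming pytest-cov's JSON reporter (which writes coverage.json by default):

# Generate the JSON report, then analyze it
pytest --cov=src --cov-report=json
python analyze_coverage.py coverage.json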

Report Command Summary

# Basic test reports
pytest -v                                    # Verbose output
pytest --tb=short                           # Short error messages
pytest --tb=long                            # Detailed error messages

# HTML reports
pytest --html=report.html                   # Generate HTML report
pytest --html=report.html --self-contained-html  # Self-contained HTML

# Coverage reports
pytest --cov=src                            # Basic coverage
pytest --cov=src --cov-report=html          # HTML coverage
pytest --cov=src --cov-report=term-missing  # Show uncovered lines
pytest --cov=src --cov-report=xml           # XML coverage

# Multiple formats
pytest --html=report.html --junitxml=junit.xml --cov=src --cov-report=html

# Performance reports (pytest-benchmark)
pytest --benchmark-only                     # Only run benchmark tests
pytest --benchmark-json=benchmark.json      # Save benchmark results as JSON

# JSON reports
pytest --json-report --json-report-file=report.json

Report Best Practices

  1. Choose the right format for the audience: HTML for development, XML (JUnit) for CI/CD
  2. Set coverage targets: 80% or above is a common baseline; enforce it with a hard gate (see below)
  3. Focus on uncovered code: pay special attention to critical business logic
  4. Review reports regularly: make them part of the code review process
  5. Automate report generation: build and publish reports automatically in CI/CD
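
One way to enforce the target in item 2 is a coverage gate, set on the command line or in coverage configuration:

# Fail the run if total coverage drops below 80%
pytest --cov=src --cov-fail-under=80

# Equivalent setting in .coveragerc:
# [report]
# fail_under = 80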

Important Notes

  1. Coverage doesn't equal quality: high coverage doesn't guarantee high test quality
  2. Avoid testing for coverage's sake: focus on meaningful test cases and assertions
  3. Performance impact: collecting coverage slows test runs somewhat
  4. Report storage: reports for large projects can be large, so plan storage space accordingly

Test reports and code coverage are important tools for assessing test quality and project health. Proper use can significantly improve software quality.