Logging

Goals

Why Not Just Print?

What is wrong with using print to debug an application?

What does Python's logging module give you that print does not?

Adding a Logger to the Server

Add logging to server_stream.py so that the server records what it is doing. Add a configure_logging function that can be called once at startup.

Add log calls to the handlers. Use INFO when a sighting is inserted or deleted, WARNING when an optional field is left blank, and DEBUG when a detail page loads successfully.

Pass Values as Arguments, Not f-Strings

Write logger.info("count: %d", n) rather than logger.info(f"count: {n}"). The logging module skips string formatting entirely when the message level is below the active threshold, so the f-string approach does unnecessary work on every call even when the output is never shown.
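The difference is easy to observe directly. The sketch below uses an illustrative `Expensive` class that records whether its string conversion ever ran; with `%s` arguments the conversion is skipped for a suppressed message, while an f-string pays the cost unconditionally:

```python
import logging

class Expensive:
    """Records whether its string conversion ever ran."""
    def __init__(self):
        self.rendered = False

    def __str__(self):
        self.rendered = True
        return "expensive value"

logging.basicConfig(level=logging.INFO)  # DEBUG messages are suppressed
logger = logging.getLogger("demo")

value = Expensive()
logger.debug("value: %s", value)   # suppressed: %s is never applied
assert value.rendered is False

logger.debug(f"value: {value}")    # still suppressed, but the f-string formatted anyway
assert value.rendered is True
```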

import csv
import io
import logging
import sqlite3
from dataclasses import dataclass
from pathlib import Path
from typing import Annotated

from htpy import a, body, button, form, h1, head, html
from htpy import input as inp
from htpy import label, li, link, table, td, th, title, tr, ul
from litestar import Litestar, MediaType, get, post
from litestar.datastructures import UploadFile
from litestar.enums import RequestEncodingType
from litestar.exceptions import NotFoundException
from litestar.params import Body
from litestar.response import Redirect

from utils import HEADERS, KEYS, LABELS, fmt

LESSON_DIR = Path(__file__).parent
DB_PATH = LESSON_DIR.parent / "database" / "sightings.db"

logger = logging.getLogger(__name__)

INSERT_ROW = (
    "insert into sightings (species, sex, weight, color, datetime, latitude, longitude) "
    "values (?, ?, ?, ?, ?, ?, ?)"
)


def configure_logging():
    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s %(levelname)-8s %(name)s %(message)s",
        handlers=[logging.StreamHandler()],
    )
    @get("/sighting/{sighting_id:int}", media_type=MediaType.HTML)
    async def detail(sighting_id: int) -> str:
        conn = sqlite3.connect(DB_PATH)
        conn.row_factory = sqlite3.Row
        row = conn.execute(
            "select * from sightings where id = ?", [sighting_id]
        ).fetchone()
        conn.close()
        if row is None:
            logger.warning("sighting %d not found", sighting_id)
            raise NotFoundException(f"No sighting with ID {sighting_id}")
        logger.debug("sighting %d retrieved", sighting_id)
        return str(
            html(lang="en")[
                head[
                    title[f"Sighting {sighting_id}"],
                    link(rel="stylesheet", href="/style.css"),
                ],
                body[
                    table[
                        [
                            tr[
                                td(class_="label")[lbl],
                                td[fmt(row[key])],
                            ]
                            for key, lbl in LABELS.items()
                        ],
                    ],
                    a(href="/")["Back to all sightings"],
                    form(method="post", action=f"/delete/{sighting_id}")[
                        button(type="submit")["Delete this sighting"],
                    ],
                ],
            ]
        )
    @post("/add")
    async def add_sighting(
        data: Annotated[dict, Body(media_type=RequestEncodingType.URL_ENCODED)],
    ) -> Redirect:
        if not data["sex"]:
            logger.warning("new sighting added without sex")
        if not data["weight"]:
            logger.warning("new sighting added without weight")
        conn = sqlite3.connect(DB_PATH)
        conn.execute(
            INSERT_ROW,
            [
                data["species"],
                data["sex"] or None,
                float(data["weight"]) if data["weight"] else None,
                data["color"],
                data["datetime"],
                float(data["latitude"]),
                float(data["longitude"]),
            ],
        )
        conn.commit()
        conn.close()
        logger.info("inserted sighting: species=%s", data["species"])
        return Redirect("/", status_code=303)

Start the server from the logging/ directory:

litestar run --app server_stream:app

Open the app in a browser and add a sighting with the weight field blank. The terminal shows timestamped lines like these:

2026-05-11 10:32:04,812 INFO     server_stream home page: 47 sightings
2026-05-11 10:32:07,091 DEBUG    server_stream sighting 12 retrieved
2026-05-11 10:32:15,334 WARNING  server_stream new sighting added without weight
2026-05-11 10:32:15,335 INFO     server_stream inserted sighting: species=G. canadensis

Writing Logs to a File

Copy server_stream.py to server_rotate.py and update configure_logging to also write to a file alongside the terminal output.

What happens if the log file grows too large, and how do you handle it?

Catching Exceptions in the Log

Modify the CSV upload handler in server_rotate.py to catch exceptions and record the full traceback instead of crashing and returning a 500 response.

What does the log file contain after a CSV upload whose column names do not match what the handler expects?

2026-05-11 10:34:22,019 ERROR    server_rotate failed to insert row: {'Species': 'G. canadensis', ...}
Traceback (most recent call last):
  File "server_rotate.py", line 140, in upload_csv
    row["species"],
KeyError: 'species'
import logging.handlers  # RotatingFileHandler lives in the handlers submodule


def configure_logging():
    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s %(levelname)-8s %(name)s %(message)s",
        handlers=[
            logging.StreamHandler(),
            logging.handlers.RotatingFileHandler(
                LOG_PATH, maxBytes=1_000_000, backupCount=3
            ),
        ],
    )
    @post("/upload")
    async def upload_csv(
        data: Annotated[CsvUpload, Body(media_type=RequestEncodingType.MULTI_PART)],
    ) -> Redirect:
        content = await data.csv_file.read()
        reader = csv.DictReader(io.StringIO(content.decode("utf-8")))
        conn = sqlite3.connect(DB_PATH)
        count = 0
        for row in reader:
            try:
                conn.execute(
                    INSERT_ROW,
                    [
                        row["species"],
                        row["sex"] or None,
                        float(row["weight"]) if row["weight"] else None,
                        row["color"],
                        row["datetime"],
                        float(row["latitude"]),
                        float(row["longitude"]),
                    ],
                )
                count += 1
            except Exception:
                logger.exception("failed to insert row: %s", dict(row))
        conn.commit()
        conn.close()
        logger.info("CSV upload complete: %d rows inserted", count)
        return Redirect("/", status_code=303)

See [python-logging2025] for the full Python logging documentation.

Check Understanding

A classmate adds logger.debug("starting upload") to the upload handler but sees nothing in the terminal. What are the two most likely causes?

Either configure_logging was never called, so no handler is attached to the root logger, or configure_logging was called with level=logging.INFO, which silences DEBUG messages. Both produce the same symptom: complete silence.

Two modules each call logging.getLogger("sightings") instead of logging.getLogger(__name__). When reading the log, how can you tell which module produced a given message?

You cannot. Both loggers share the same name, so every message shows "sightings" as the source regardless of which file it came from. Using logging.getLogger(__name__) in each module gives each logger a distinct name matching its filename, making every message traceable to its origin.
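This can be verified directly: getLogger returns one shared object per name, so identical names are literally the same logger, while distinct names stay traceable. A small sketch (logger names here are illustrative):

```python
import logging

# Two modules asking for the same literal name get the very same logger object,
# so %(name)s in the log format cannot tell them apart
assert logging.getLogger("sightings") is logging.getLogger("sightings")

# getLogger(__name__) in each module yields distinct, traceable names
stream_logger = logging.getLogger("server_stream")
utils_logger = logging.getLogger("utils")
assert stream_logger is not utils_logger
assert stream_logger.name == "server_stream"
```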

Why is logger.warning the right level when a sighting is submitted without a weight, rather than logger.info or logger.error?

INFO is for routine events during normal operation. ERROR is for failures that prevent the operation from completing. A missing optional weight is neither: the sighting was inserted successfully, but the data is incomplete in a way worth noticing. WARNING communicates "something unusual happened, but the application is still working."

After three weeks, the sightings.log file is four gigabytes and the disk is full. What one change to configure_logging prevents this from recurring?

Replace FileHandler with logging.handlers.RotatingFileHandler and set maxBytes and backupCount. The handler then caps the file at maxBytes bytes and keeps at most backupCount old copies, so total disk use stays bounded regardless of how long the server runs.
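A throwaway sketch makes the bound visible. Here maxBytes is shrunk to a few hundred bytes so rotation happens immediately (a real server would use something closer to the 1 MB shown earlier), and the log lands in a temporary directory:

```python
import logging
import logging.handlers
import tempfile
from pathlib import Path

log_dir = Path(tempfile.mkdtemp())
log_path = log_dir / "sightings.log"

# Rotate after ~200 bytes and keep at most 2 old copies
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=200, backupCount=2
)
logger = logging.getLogger("rotate_demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False

for i in range(50):
    logger.info("sighting %d inserted", i)
handler.close()

# Only the live file plus the two backups survive, however much we log
files = sorted(p.name for p in log_dir.iterdir())
assert files == ["sightings.log", "sightings.log.1", "sightings.log.2"]
```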

The code below is meant to log a failure, but the traceback never appears in the log file. What is wrong?

except Exception as e:
    logger.error("upload failed: %s", e)

logger.error logs the message but discards the traceback. Replace it with logger.exception("upload failed: %s", e), which automatically appends the current exception's full traceback to the log entry.
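A self-contained sketch confirms the difference, logging to an in-memory buffer rather than a file so the output can be inspected:

```python
import io
import logging

# Capture log output in memory so the result can be inspected
buffer = io.StringIO()
logger = logging.getLogger("exc_demo")
logger.addHandler(logging.StreamHandler(buffer))
logger.setLevel(logging.ERROR)
logger.propagate = False

try:
    {}["species"]  # KeyError, like a CSV row with the wrong column names
except Exception as e:
    logger.error("upload failed: %s", e)      # one line, no traceback
    logger.exception("upload failed: %s", e)  # same line plus the traceback

output = buffer.getvalue()
assert "Traceback (most recent call last)" in output
assert "KeyError" in output
```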

Exercises

Log Elapsed Time

Add timing to the detail handler. Use time.perf_counter() before and after the database query to measure elapsed time in milliseconds, and log it at the DEBUG level: "sighting %d retrieved in %.1f ms".
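One possible shape for the measurement, with a cheap sum standing in for the real database query and an illustrative sighting ID:

```python
import logging
import time

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("timing_demo")

sighting_id = 12  # illustrative ID

start = time.perf_counter()
total = sum(range(100_000))  # stand-in for the sqlite3 query
elapsed_ms = (time.perf_counter() - start) * 1000

logger.debug("sighting %d retrieved in %.1f ms", sighting_id, elapsed_ms)
assert elapsed_ms >= 0
```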

Filter Levels to Separate Files

Configure configure_logging with two RotatingFileHandler instances: one that records only WARNING and above to errors.log, and one that records everything to all.log. To restrict a handler to a minimum level, call handler.setLevel(logging.WARNING) before passing it into basicConfig. Submit the add form once with weight left blank and confirm the warning appears in both files.
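The filtering mechanism can be tried in isolation, with in-memory streams standing in for errors.log and all.log:

```python
import io
import logging

errors_stream = io.StringIO()  # stands in for errors.log
all_stream = io.StringIO()     # stands in for all.log

errors_handler = logging.StreamHandler(errors_stream)
errors_handler.setLevel(logging.WARNING)  # drop anything below WARNING

all_handler = logging.StreamHandler(all_stream)  # no setLevel: accepts everything

logging.basicConfig(
    level=logging.DEBUG,
    handlers=[errors_handler, all_handler],
    force=True,  # replace any handlers from an earlier basicConfig call
)
logger = logging.getLogger("filter_demo")

logger.debug("routine detail")
logger.warning("missing weight")

assert "missing weight" in errors_stream.getvalue()
assert "routine detail" not in errors_stream.getvalue()
assert "routine detail" in all_stream.getvalue()
```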

Log Events to the Database

Add a table called event_log to the sightings database with columns for the timestamp, level name, logger name, and message. Ask an LLM to generate a logging handler that inserts each log record into this table as a row, then add the handler to configure_logging alongside the existing file handler. Start the server, load a few pages, and confirm the handler is working by running select * from event_log limit 5 in the SQLite shell.

Test That Warnings Are Emitted

Write a pytest test that submits the add form with weight left blank and confirms that a warning was recorded. pytest's caplog fixture captures log records during a test without writing to a file:

def test_missing_weight_logged(caplog, small_db):
    with caplog.at_level(logging.WARNING):
        # submit form and check records
        assert any("without weight" in r.message for r in caplog.records)

Reuse the small_db fixture from the testdatabase lesson to give the test a fresh database.