Logging
Goals

- Understand why structured logging is more useful than `print` statements.
- Use Python's built-in `logging` module to create a named logger for a module.
- Call a `configure_logging` function to set the active level and route messages to the terminal.
- Add a file handler so that log records survive after the server stops.
- Catch exceptions with `logger.exception` to record a full traceback when something goes wrong.
Why Not Just Print?
What is wrong with using `print` for debugging?

- Every `print` call runs unconditionally, whether you want its output or not
- When you are done debugging, you have to find and remove each one by hand
  - Missing even one leaves stray output in every response forever
- All `print` output looks the same: there is no way to tell a routine status message from a serious error
- A server handling dozens of requests per minute produces a wall of text that is hard to read
What does Python's `logging` module give you that `print` does not?

- Every message has a log level: `DEBUG`, `INFO`, `WARNING`, `ERROR`, or `CRITICAL`, in increasing order of severity
- One setting silences everything below a chosen level without touching the code:
  switching from `DEBUG` to `INFO` hides all the verbose diagnostic messages at once
- Every message is stamped automatically with the time, the level, and the name of the source that produced it
- A logger is created once per module with `logging.getLogger(__name__)`
  - `__name__` is Python's built-in name for the current module, so each logger is named after its file
  - Separate names make it easy to trace a message back to its source when reading the log later
- Log messages go to one or more log handlers that decide where the output actually lands
  - Terminal, file, database, or monitoring service
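A few lines of standalone Python make the level ordering and the named-logger behavior concrete. This is a sketch for illustration only; the `WARNING` threshold is chosen arbitrarily:

```python
import logging

# A logger named after the current module; in the server code,
# logging.getLogger(__name__) would yield a name like "server_stream".
logger = logging.getLogger(__name__)

# The five levels are plain integers in increasing order of severity.
levels = [logging.DEBUG, logging.INFO, logging.WARNING,
          logging.ERROR, logging.CRITICAL]
print(levels)  # [10, 20, 30, 40, 50]

# With the threshold set to WARNING, lower-severity calls are skipped.
logger.setLevel(logging.WARNING)
print(logger.isEnabledFor(logging.INFO))   # False: would be silenced
print(logger.isEnabledFor(logging.ERROR))  # True: would be emitted
```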
Adding a Logger to the Server
Add logging to `server_stream.py` so that the server records what it is doing. Add a `configure_logging` function that can be called once at startup.

- `logger = logging.getLogger(__name__)` goes below the imports and above the first function, at module level so every handler in the file shares the same logger
- `configure_logging` calls `logging.basicConfig`, which sets up the root logger:
  - `level=logging.DEBUG` shows every message; raise it to `logging.INFO` in production to silence diagnostic chatter without changing any handler code
  - `format="%(asctime)s %(levelname)-8s %(name)s %(message)s"` stamps each line with the time, the level (padded to eight characters so columns stay aligned), the logger name, and the message
  - `handlers=[logging.StreamHandler()]` sends output to the terminal
- Nothing appears in the log until `configure_logging()` is called; this is a common first-time surprise
- Calling `configure_logging()` at module level, before `app = make_app()`, means logging is ready before any request arrives
Add log calls to the handlers. Use `INFO` when a sighting is inserted or deleted, `WARNING` when an optional field is left blank, and `DEBUG` when a detail page loads successfully.

- `logger.info("home page: %d sightings", len(rows))` records the count each time the index loads
Pass Values as Arguments, Not f-Strings
Write `logger.info("count: %d", n)` rather than `logger.info(f"count: {n}")`.
The logging module skips string formatting entirely when the message level is below the active threshold,
so the f-string approach does unnecessary work on every call even when the output is never shown.
- `logger.warning("new sighting added without weight")` flags incomplete submissions without crashing
- `logger.debug("sighting %d retrieved", sighting_id)` disappears automatically when the level is raised to `INFO`, so it costs nothing in production
- `logger.warning("sighting %d not found", sighting_id)` before raising `NotFoundException` records every 404 before the framework handles it
Here is `server_stream.py` with the logger, `configure_logging`, and the logged handlers in place:

```python
import csv
import io
import logging
import sqlite3
from dataclasses import dataclass
from pathlib import Path
from typing import Annotated

from htpy import a, body, button, form, h1, head, html
from htpy import input as inp
from htpy import label, li, link, table, td, th, title, tr, ul
from litestar import Litestar, MediaType, get, post
from litestar.datastructures import UploadFile
from litestar.enums import RequestEncodingType
from litestar.exceptions import NotFoundException
from litestar.params import Body
from litestar.response import Redirect

from utils import HEADERS, KEYS, LABELS, fmt

LESSON_DIR = Path(__file__).parent
DB_PATH = LESSON_DIR.parent / "database" / "sightings.db"

logger = logging.getLogger(__name__)

INSERT_ROW = (
    "insert into sightings (species, sex, weight, color, datetime, latitude, longitude) "
    "values (?, ?, ?, ?, ?, ?, ?)"
)


def configure_logging():
    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s %(levelname)-8s %(name)s %(message)s",
        handlers=[logging.StreamHandler()],
    )


@get("/sighting/{sighting_id:int}", media_type=MediaType.HTML)
async def detail(sighting_id: int) -> str:
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    row = conn.execute(
        "select * from sightings where id = ?", [sighting_id]
    ).fetchone()
    conn.close()
    if row is None:
        logger.warning("sighting %d not found", sighting_id)
        raise NotFoundException(f"No sighting with ID {sighting_id}")
    logger.debug("sighting %d retrieved", sighting_id)
    return str(
        html(lang="en")[
            head[
                title[f"Sighting {sighting_id}"],
                link(rel="stylesheet", href="/style.css"),
            ],
            body[
                table[
                    [
                        tr[
                            td(class_="label")[lbl],
                            td[fmt(row[key])],
                        ]
                        for key, lbl in LABELS.items()
                    ],
                ],
                a(href="/")["Back to all sightings"],
                form(method="post", action=f"/delete/{sighting_id}")[
                    button(type="submit")["Delete this sighting"],
                ],
            ],
        ]
    )


@post("/add")
async def add_sighting(
    data: Annotated[dict, Body(media_type=RequestEncodingType.URL_ENCODED)],
) -> Redirect:
    if not data["sex"]:
        logger.warning("new sighting added without sex")
    if not data["weight"]:
        logger.warning("new sighting added without weight")
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        INSERT_ROW,
        [
            data["species"],
            data["sex"] or None,
            float(data["weight"]) if data["weight"] else None,
            data["color"],
            data["datetime"],
            float(data["latitude"]),
            float(data["longitude"]),
        ],
    )
    conn.commit()
    conn.close()
    logger.info("inserted sighting: species=%s", data["species"])
    return Redirect("/", status_code=303)
```
Start the server from the `logging/` directory:

```shell
litestar run --app server_stream:app
```
Open the app in a browser and add a sighting with the weight field blank. The terminal shows timestamped lines like these:

```
2026-05-11 10:32:04,812 INFO     server_stream home page: 47 sightings
2026-05-11 10:32:07,091 DEBUG    server_stream sighting 12 retrieved
2026-05-11 10:32:15,334 WARNING  server_stream new sighting added without weight
2026-05-11 10:32:15,335 INFO     server_stream inserted sighting: species=G. canadensis
```
Writing Logs to a File
Copy `server_stream.py` to `server_rotate.py` and update `configure_logging` to also write to a file alongside the terminal output.

- `LOG_PATH = LESSON_DIR / "sightings.log"` puts the file next to the server code
- `logging.FileHandler(LOG_PATH)` opens the file and writes each message to it
- A single logger can have any number of handlers:
  if both a `StreamHandler` and a `FileHandler` are given to `basicConfig` then each message goes to both destinations
- Log files should not be committed to version control
  - Add `*.log` to `.gitignore` so the file never ends up in a repository by accident
What happens if the log file grows too large, and how do you handle it?

- A server running for months can produce a log file large enough to fill the disk
- `logging.handlers.RotatingFileHandler(LOG_PATH, maxBytes=1_000_000, backupCount=3)` creates a file that resets instead of growing without limit
  - When the file reaches `maxBytes` bytes (here, one megabyte), the handler renames it `sightings.log.1` and starts a fresh `sightings.log`
  - Old backups beyond `backupCount` are deleted automatically, so the total disk use never exceeds four megabytes regardless of how long the server runs
- `logging.handlers` is a submodule that must be imported explicitly with `import logging.handlers`; it is not available through `import logging` alone
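Rotation is easiest to see with a deliberately tiny `maxBytes`; a real server would use something like `1_000_000`. This sketch writes to a temporary directory so it does not touch the lesson's own log file:

```python
import logging
import logging.handlers  # must be imported explicitly
import tempfile
from pathlib import Path

log_dir = Path(tempfile.mkdtemp())
log_path = log_dir / "sightings.log"

handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=200, backupCount=2
)
log = logging.getLogger("rotate_demo")
log.setLevel(logging.INFO)
log.addHandler(handler)

# Fifty short messages overflow a 200-byte file several times.
for i in range(50):
    log.info("message %d", i)
handler.close()

# Only the active file plus at most backupCount backups remain.
print(sorted(p.name for p in log_dir.iterdir()))
# ['sightings.log', 'sightings.log.1', 'sightings.log.2']
```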
Catching Exceptions in the Log
Modify the CSV upload handler in `server_rotate.py` to catch exceptions and record the full traceback instead of crashing and returning a 500 response.

- Without a try/except, a malformed CSV row terminates the handler immediately and returns a blank 500 error; the log shows nothing useful
- Wrapping each row insert in `try`/`except Exception` catches any error that might arise: a missing column, a value that cannot be converted to a float, a database constraint violation
- After the except block, the loop continues with the next row rather than aborting the upload
- `logger.exception("failed to insert row: %s", dict(row))` logs the message and then appends the full traceback automatically
- `exception` is shorthand for `error` combined with `traceback.format_exc()`; using `logger.error` alone would log the message but lose the traceback
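A minimal sketch, independent of the server, shows what `logger.exception` captures. An in-memory buffer stands in for the log file so the result can be inspected directly:

```python
import io
import logging

buf = io.StringIO()
log = logging.getLogger("upload_demo")
log.addHandler(logging.StreamHandler(buf))

row = {"Species": "G. canadensis"}  # wrong-case key, as in the lesson
try:
    row["species"]
except Exception:
    # Logs at ERROR level and appends the current traceback automatically.
    log.exception("failed to insert row: %s", row)

output = buf.getvalue()
print("Traceback (most recent call last):" in output)  # True
print("KeyError: 'species'" in output)                 # True
```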
What does the log file contain after a CSV upload whose column names do not match what the handler expects?

- The Python dictionary lookup `row["species"]` raises `KeyError` if the CSV header is `Species` (capital S) instead of `species`
- The log entry looks like this:

```
2026-05-11 10:34:22,019 ERROR    server_rotate failed to insert row: {'Species': 'G. canadensis', ...}
Traceback (most recent call last):
  File "server_rotate.py", line 140, in upload_csv
    row["species"],
KeyError: 'species'
```

- The timestamp, level, and module name appear on the first line; the traceback follows immediately
- The log file now contains a permanent record of the failure even after the server restarts, which makes it possible to diagnose problems that happened hours ago
The updated `configure_logging` and the CSV upload handler in `server_rotate.py`:

```python
def configure_logging():
    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s %(levelname)-8s %(name)s %(message)s",
        handlers=[
            logging.StreamHandler(),
            logging.handlers.RotatingFileHandler(
                LOG_PATH, maxBytes=1_000_000, backupCount=3
            ),
        ],
    )


@post("/upload")
async def upload_csv(
    data: Annotated[CsvUpload, Body(media_type=RequestEncodingType.MULTI_PART)],
) -> Redirect:
    content = await data.csv_file.read()
    reader = csv.DictReader(io.StringIO(content.decode("utf-8")))
    conn = sqlite3.connect(DB_PATH)
    count = 0
    for row in reader:
        try:
            conn.execute(
                INSERT_ROW,
                [
                    row["species"],
                    row["sex"] or None,
                    float(row["weight"]) if row["weight"] else None,
                    row["color"],
                    row["datetime"],
                    float(row["latitude"]),
                    float(row["longitude"]),
                ],
            )
            count += 1
        except Exception:
            logger.exception("failed to insert row: %s", dict(row))
    conn.commit()
    conn.close()
    logger.info("CSV upload complete: %d rows inserted", count)
    return Redirect("/", status_code=303)
```
See [python-logging2025] for the full Python logging documentation.
Check Understanding
A classmate adds `logger.debug("starting upload")` to the upload handler but sees nothing in the terminal. What are the two most likely causes?

Either `configure_logging` was never called, so no handler is attached to the root logger,
or `configure_logging` was called with `level=logging.INFO`, which silences `DEBUG` messages.
Both produce the same symptom: complete silence.
Two modules each call `logging.getLogger("sightings")` instead of `logging.getLogger(__name__)`. When reading the log, how can you tell which module produced a given message?

You cannot.
Both loggers share the same name, so every message shows `sightings` as the source
regardless of which file it came from.
Using `logging.getLogger(__name__)` in each module gives each logger a distinct name
matching its filename, making every message traceable to its origin.
Why is `logger.warning` the right level when a sighting is submitted without a weight, rather than `logger.info` or `logger.error`?

`INFO` is for routine events during normal operation.
`ERROR` is for failures that prevent the operation from completing.
A missing optional weight is neither: the sighting was inserted successfully,
but the data is incomplete in a way worth noticing.
`WARNING` communicates "something unusual happened, but the application is still working."
After three weeks, the `sightings.log` file is four gigabytes and the disk is full. What one change to `configure_logging` prevents this from recurring?

Replace `FileHandler` with `logging.handlers.RotatingFileHandler` and set `maxBytes` and `backupCount`.
The handler then caps the file at `maxBytes` bytes and keeps at most `backupCount` old copies,
so total disk use stays bounded regardless of how long the server runs.
The code below is meant to log a failure, but the traceback never appears in the log file. What is wrong?

```python
except Exception as e:
    logger.error("upload failed: %s", e)
```

`logger.error` logs the message but discards the traceback.
Replace it with `logger.exception("upload failed: %s", e)`,
which automatically appends the current exception's full traceback to the log entry.
Exercises
Log Elapsed Time
Add timing to the detail handler.
Use `time.perf_counter()` before and after the database query to measure elapsed time in milliseconds,
and log it at the `DEBUG` level: `"sighting %d retrieved in %.1f ms"`.
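One possible shape for the timing pattern, with a cheap computation standing in for the database query (logger name and workload invented for the sketch):

```python
import logging
import time

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("timing_demo")

start = time.perf_counter()
total = sum(range(100_000))  # stand-in for the database query
elapsed_ms = (time.perf_counter() - start) * 1000

# %.1f renders the elapsed time with one decimal place.
log.debug("sighting %d retrieved in %.1f ms", 12, elapsed_ms)
```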
Filter Levels to Separate Files
Configure configure_logging with two RotatingFileHandler instances: one that records only
WARNING and above to errors.log, and one that records everything to all.log.
To restrict a handler to a minimum level, call handler.setLevel(logging.WARNING) before passing
it into basicConfig.
Submit the add form once with weight left blank and confirm the warning appears in both files.
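The per-handler filtering works the same way with in-memory streams, which makes it easy to check the mechanism before wiring up the real files (buffer and logger names invented for the demo):

```python
import io
import logging

# Two stream handlers standing in for all.log and errors.log.
all_buf, err_buf = io.StringIO(), io.StringIO()

all_handler = logging.StreamHandler(all_buf)   # records everything
err_handler = logging.StreamHandler(err_buf)
err_handler.setLevel(logging.WARNING)          # WARNING and above only

log = logging.getLogger("filter_demo")
log.setLevel(logging.DEBUG)
log.addHandler(all_handler)
log.addHandler(err_handler)

log.debug("sighting 12 retrieved")
log.warning("new sighting added without weight")

print("retrieved" in all_buf.getvalue())       # True
print("retrieved" in err_buf.getvalue())       # False
print("without weight" in err_buf.getvalue())  # True
```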
Log Events to the Database
Add a table called `event_log` to the sightings database with columns for the timestamp,
level name, logger name, and message.
Ask an LLM to generate a logging handler that inserts each log record into this table as a row,
then add the handler to `configure_logging` alongside the existing file handler.
Start the server, load a few pages, and confirm the handler is working by running
`select * from event_log limit 5` in the SQLite shell.
Test That Warnings Are Emitted
Write a pytest test that submits the add form with weight left blank
and confirms that a warning was recorded.
pytest's `caplog` fixture captures log records during a test without writing to a file:

```python
import logging


def test_missing_weight_logged(caplog, small_db):
    with caplog.at_level(logging.WARNING):
        # submit form and check records
        assert any("without weight" in r.message for r in caplog.records)
```

Reuse the `small_db` fixture from the testdatabase lesson to give the test a fresh database.