I'm looking at http://www.python.org/doc/2.4.1/lib/minimal-example.html; I try a cut-down version of the example:

$ python
>>> import logging
>>> logging.debug('debug')
>>> logging.info('info')
>>> logging.warning('warning')
WARNING:root:warning

Cool. Now I do this:

>>> logging.basicConfig(level=logging.DEBUG)

I believe I've turned on output for debug-level logging, but no, it's still working as before:

>>> logging.debug('debug')
>>> logging.info('info')
>>> logging.warning('warning')
WARNING:root:warning

Hm... OK, try it in a fresh Python session:

$ python
>>> import logging
>>> logging.basicConfig(level=logging.DEBUG)
>>> logging.debug('debug')
DEBUG:root:debug

So it's really only possible to configure logging once??? Bah... Let's try this:

$ python
>>> import logging
>>> logging.getLogger().setLevel(logging.DEBUG)
>>> logging.debug('debug')
DEBUG:root:debug

All right, so I can grab the internal root logger out of the logging framework and change its settings directly, but basicConfig doesn't touch settings that are already in place...???

Hm: http://www.python.org/doc/2.4.1/lib/multiple-destinations.html suggests that what I ought to do is create a new handler object, configure it, and stick it back into the logging framework. This seems clumsy---which might explain why nobody I know ever uses the logging library. I hope someone can tell me that there's an easier way...
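
If I'm reading that page right, the dance goes something like this. This is just a sketch of what I think they mean, not their exact code, and the console name is mine:

import logging

# Build a handler, configure it, then hang it off the root logger.
console = logging.StreamHandler()           # writes to stderr by default
console.setLevel(logging.DEBUG)             # the handler will pass everything through
console.setFormatter(logging.Formatter('%(levelname)s:%(name)s:%(message)s'))

root = logging.getLogger()                  # getLogger() with no name gives the root logger
root.setLevel(logging.DEBUG)                # the logger's own level is checked first
root.addHandler(console)

logging.debug('finally visible')            # prints DEBUG:root:finally visible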


Later: CrankyCoder says, "log4j was a bad idea in Java, it seems like a worse idea in Python." You have to grab the root logger to set the log level for all child loggers. Unless, of course, some of those child loggers have overridden log levels of their own.

$ python
>>> from logging import *
>>> debug('fda')
>>> root
<logging.RootLogger instance at 0x...>
>>> root.level = DEBUG
>>> debug('fda')
DEBUG:root:fda
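
Here's that caveat in action, as a sketch (the some.noisy.module name is made up): once a child logger has an explicit level of its own, cranking the root logger down to DEBUG doesn't help it at all, because each message is filtered by the emitting logger's effective level before it ever propagates to the root's handler.

import logging

logging.warning('prime the pump')            # first call installs the root handler (and prints)
logging.getLogger().setLevel(logging.DEBUG)  # root now lets everything through...

noisy = logging.getLogger('some.noisy.module')
noisy.setLevel(logging.ERROR)                # ...but this child has its own level

noisy.debug('dropped by the child logger')   # filtered before it reaches any handler
noisy.error('this one escapes')              # prints ERROR:some.noisy.module:this one escapes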

The documentation isn't all that clear, but the first time you call basicConfig(), debug(), warn(), info() or any of the other module-level logging functions, you're registering a handler on the root logger. From basicConfig:

    if len(root.handlers) == 0:
        filename = kwargs.get("filename")
        if filename:
            mode = kwargs.get("filemode", 'a')
            hdlr = FileHandler(filename, mode)
        else:
            stream = kwargs.get("stream")
            hdlr = StreamHandler(stream)
        fs = kwargs.get("format", BASIC_FORMAT)
        dfs = kwargs.get("datefmt", None)
        fmt = Formatter(fs, dfs)
        hdlr.setFormatter(fmt)
        root.addHandler(hdlr)
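
So second and later calls to basicConfig() are silent no-ops as soon as the root logger has a handler, which it gets the moment you log anything. A sketch of both the trap and the blunt workaround (ripping the handlers back out by hand):

import logging

logging.warning('logging anything installs a StreamHandler on root')
logging.basicConfig(level=logging.DEBUG)     # root.handlers is non-empty, so: no-op
logging.debug('still invisible')

# Blunt workaround: strip the handlers so basicConfig() will run for real.
root = logging.getLogger()
for handler in list(root.handlers):
    root.removeHandler(handler)
logging.basicConfig(level=logging.DEBUG)     # handlers list is empty again, so it works
logging.debug('now visible')                 # prints DEBUG:root:now visible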

Basically, you're expected to get loggers by calling getLogger('your.named.logger'). You build up a hierarchy of loggers, and the log levels cascade down the dotted namespace:

$ python
>>> from logging import *
>>> root
<logging.RootLogger instance at 0x...>
>>> logA = getLogger('a')
>>> logAB = getLogger('a.b')
>>> logA.level = INFO
>>> debug('from root logger')
>>> warn('from root logger')
WARNING:root:from root logger
>>> logA.debug('from a')
>>> logA.info('from a')
INFO:a:from a
>>> logA.warn('from a')
WARNING:a:from a
>>> logAB.debug('from a.b')
>>> logAB.info('from a.b')
INFO:a.b:from a.b
>>> logAB.warn('from a.b')
WARNING:a.b:from a.b
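
You can watch the cascade directly: a child logger whose own level is NOTSET (0) reports an effective level inherited from the nearest ancestor that has one. A quick check:

$ python
>>> import logging
>>> a = logging.getLogger('a')
>>> ab = logging.getLogger('a.b')
>>> a.setLevel(logging.INFO)
>>> ab.level                                  # no level of its own: NOTSET
0
>>> ab.getEffectiveLevel()                    # inherited from 'a': INFO
20
>>> logging.getLogger().getEffectiveLevel()   # the root defaults to WARNING
30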

Bah---I think that structured logging is essential to good software development, but I'd be embarrassed putting this in front of students and saying, "It's supposed to work this way."