Mirror of https://github.com/python/cpython.git (synced 2025-11-03 03:22:27 +00:00)

commit c63619bcf2 (parent 7ca6d90681): Logging documentation reorganised.

7 changed files with 4101 additions and 4209 deletions

@@ -19,6 +19,8 @@ Currently, the HOWTOs are:

     descriptor.rst
     doanddont.rst
     functional.rst
+    logging.rst
+    logging-cookbook.rst
     regex.rst
     sockets.rst
     sorting.rst

Doc/howto/logging-cookbook.rst (new file, 929 lines)

@@ -0,0 +1,929 @@

.. _logging-cookbook:

================
Logging Cookbook
================

:Author: Vinay Sajip <vinay_sajip at red-dove dot com>

This page contains a number of recipes related to logging, which have been
found useful in the past.

.. Contents::

.. currentmodule:: logging

Using logging in multiple modules
---------------------------------

It was mentioned above that multiple calls to
``logging.getLogger('someLogger')`` return a reference to the same logger
object. This is true not only within the same module, but also across modules
as long as they are in the same Python interpreter process. It is true for
references to the same object; additionally, application code can define and
configure a parent logger in one module and create (but not configure) a child
logger in a separate module, and all logger calls to the child will pass up to
the parent. Here is a main module::

    import logging
    import auxiliary_module

    # create logger with 'spam_application'
    logger = logging.getLogger('spam_application')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(fh)
    logger.addHandler(ch)

    logger.info('creating an instance of auxiliary_module.Auxiliary')
    a = auxiliary_module.Auxiliary()
    logger.info('created an instance of auxiliary_module.Auxiliary')
    logger.info('calling auxiliary_module.Auxiliary.do_something')
    a.do_something()
    logger.info('finished auxiliary_module.Auxiliary.do_something')
    logger.info('calling auxiliary_module.some_function()')
    auxiliary_module.some_function()
    logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module::

    import logging

    # create logger
    module_logger = logging.getLogger('spam_application.auxiliary')

    class Auxiliary:
        def __init__(self):
            self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
            self.logger.info('creating an instance of Auxiliary')

        def do_something(self):
            self.logger.info('doing something')
            a = 1 + 1
            self.logger.info('done doing something')

    def some_function():
        module_logger.info('received a call to "some_function"')

The output looks like this::

    2005-03-23 23:47:11,663 - spam_application - INFO -
       creating an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -
       creating an instance of Auxiliary
    2005-03-23 23:47:11,665 - spam_application - INFO -
       created an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,668 - spam_application - INFO -
       calling auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -
       doing something
    2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -
       done doing something
    2005-03-23 23:47:11,670 - spam_application - INFO -
       finished auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,671 - spam_application - INFO -
       calling auxiliary_module.some_function()
    2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -
       received a call to "some_function"
    2005-03-23 23:47:11,673 - spam_application - INFO -
       done with auxiliary_module.some_function()

Multiple handlers and formatters
--------------------------------

Loggers are plain Python objects. The :func:`addHandler` method has no minimum
or maximum quota for the number of handlers you may add. Sometimes it will be
beneficial for an application to log all messages of all severities to a text
file while simultaneously logging errors or above to the console. To set this
up, simply configure the appropriate handlers. The logging calls in the
application code will remain unchanged. Here is a slight modification to the
previous simple module-based configuration example::

    import logging

    logger = logging.getLogger('simple_example')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ch.setFormatter(formatter)
    fh.setFormatter(formatter)
    # add the handlers to logger
    logger.addHandler(ch)
    logger.addHandler(fh)

    # 'application' code
    logger.debug('debug message')
    logger.info('info message')
    logger.warning('warning message')
    logger.error('error message')
    logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers. All
that changed was the addition and configuration of a new handler named *fh*.

The ability to create new handlers with higher- or lower-severity filters can
be very helpful when writing and testing an application. Instead of using many
``print`` statements for debugging, use ``logger.debug``: unlike the print
statements, which you will have to delete or comment out later, the
``logger.debug`` statements can remain intact in the source code and remain
dormant until you need them again. At that time, the only change that needs to
happen is to modify the severity level of the logger and/or handler to debug.
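To make that concrete, here is a minimal sketch of flipping such a threshold at
runtime (an ``io.StringIO`` stream stands in for the console so the effect is
easy to see; the logger and handler names follow the example above)::

```python
import io
import logging

stream = io.StringIO()  # stands in for the console in this sketch

logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)

ch = logging.StreamHandler(stream)
ch.setLevel(logging.ERROR)        # debug output starts out dormant
logger.addHandler(ch)

logger.debug('suppressed: the handler threshold is ERROR')

ch.setLevel(logging.DEBUG)        # lower one threshold when you need the detail
logger.debug('emitted: no logging calls had to change')
```

Only the ``setLevel`` call changed; every ``logger.debug`` line in the
application stayed exactly as written.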

.. _multiple-destinations:

Logging to multiple destinations
--------------------------------

Let's say you want to log to console and file with different message formats
and in differing circumstances. Say you want to log messages with levels of
DEBUG and higher to file, and those messages at level INFO and higher to the
console. Let's also assume that the file should contain timestamps, but the
console messages should not. Here's how you can achieve this::

    import logging

    # set up logging to file - see previous section for more details
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                        datefmt='%m-%d %H:%M',
                        filename='/tmp/myapp.log',
                        filemode='w')
    # define a Handler which writes INFO messages or higher to sys.stderr
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    # set a format which is simpler for console use
    formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
    # tell the handler to use this format
    console.setFormatter(formatter)
    # add the handler to the root logger
    logging.getLogger('').addHandler(console)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see ::

    root        : INFO     Jackdaws love my big sphinx of quartz.
    myapp.area1 : INFO     How quickly daft jumping zebras vex.
    myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
    myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like ::

    10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
    10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
    10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
    10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
    10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other
messages are sent to both destinations.

This example uses console and file handlers, but you can use any number and
combination of handlers you choose.
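For instance, a size-based rotating file could be added as one more
destination alongside the handlers above. A minimal sketch, where the file
path, size limit and backup count are all illustrative values::

```python
import logging
import logging.handlers
import os
import tempfile

# An illustrative log location; a real application would choose its own path.
logdir = tempfile.mkdtemp()
logfile = os.path.join(logdir, 'rotating.log')

# Keep at most 5 backup files of roughly 1 MB each.
rotating = logging.handlers.RotatingFileHandler(
    logfile, maxBytes=1000000, backupCount=5)
rotating.setLevel(logging.DEBUG)
rotating.setFormatter(
    logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))

logger = logging.getLogger('myapp.rotating_demo')
logger.setLevel(logging.DEBUG)
logger.addHandler(rotating)
logger.debug('this also lands in the rotating file')
rotating.close()
```

The application's logging calls are again untouched; only handler setup grew.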


Configuration server example
----------------------------

Here is an example of a module using the logging configuration server::

    import logging
    import logging.config
    import time

    # read initial config file
    logging.config.fileConfig('logging.conf')

    # create and start listener on port 9999
    t = logging.config.listen(9999)
    t.start()

    logger = logging.getLogger('simpleExample')

    try:
        # loop through logging calls to see the difference
        # new configurations make, until Ctrl+C is pressed
        while True:
            logger.debug('debug message')
            logger.info('info message')
            logger.warning('warning message')
            logger.error('error message')
            logger.critical('critical message')
            time.sleep(5)
    except KeyboardInterrupt:
        # cleanup
        logging.config.stopListening()
        t.join()

And here is a script that takes a filename and sends that file to the server,
properly preceded with the binary-encoded length, as the new logging
configuration::

    #!/usr/bin/env python
    import socket, sys, struct

    with open(sys.argv[1], 'rb') as f:
        data_to_send = f.read()

    HOST = 'localhost'
    PORT = 9999
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('connecting...')
    s.connect((HOST, PORT))
    print('sending config...')
    s.send(struct.pack('>L', len(data_to_send)))
    s.send(data_to_send)
    s.close()
    print('complete')
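The ``logging.conf`` read at startup is in the standard
:func:`~logging.config.fileConfig` format; a minimal sketch might look like
this (the handler and formatter names are just examples):

```ini
[loggers]
keys=root,simpleExample

[handlers]
keys=consoleHandler

[formatters]
keys=simpleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[logger_simpleExample]
level=DEBUG
handlers=consoleHandler
qualname=simpleExample
propagate=0

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)

[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
```

Sending an edited copy of such a file with the script above reconfigures the
running process without restarting it.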


Dealing with handlers that block
--------------------------------

.. currentmodule:: logging.handlers

Sometimes you have to get your logging handlers to do their work without
blocking the thread you're logging from. This is common in Web applications,
though of course it also occurs in other scenarios.

A common culprit which demonstrates sluggish behaviour is the
:class:`SMTPHandler`: sending emails can take a long time, for a
number of reasons outside the developer's control (for example, a poorly
performing mail or network infrastructure). But almost any network-based
handler can block: even a :class:`SocketHandler` operation may do a
DNS query under the hood which is too slow (and this query can be deep in the
socket library code, below the Python layer, and outside your control).

One solution is to use a two-part approach. For the first part, attach only a
:class:`QueueHandler` to those loggers which are accessed from
performance-critical threads. They simply write to their queue, which can be
sized to a large enough capacity or initialized with no upper bound to its
size. The write to the queue will typically be accepted quickly, though you
will probably need to catch the :exc:`queue.Full` exception as a precaution
in your code. If you are a library developer who has performance-critical
threads in your code, be sure to document this (together with a suggestion to
attach only ``QueueHandlers`` to your loggers) for the benefit of other
developers who will use your code.

The second part of the solution is :class:`QueueListener`, which has been
designed as the counterpart to :class:`QueueHandler`. A
:class:`QueueListener` is very simple: it's passed a queue and some handlers,
and it fires up an internal thread which listens to its queue for LogRecords
sent from ``QueueHandlers`` (or any other source of ``LogRecords``, for that
matter). The ``LogRecords`` are removed from the queue and passed to the
handlers for processing.

The advantage of having a separate :class:`QueueListener` class is that you
can use the same instance to service multiple ``QueueHandlers``. This is more
resource-friendly than, say, having threaded versions of the existing handler
classes, which would eat up one thread per handler for no particular benefit.

An example of using these two classes follows (imports omitted)::

    que = queue.Queue(-1)  # no limit on size
    queue_handler = QueueHandler(que)
    handler = logging.StreamHandler()
    listener = QueueListener(que, handler)
    root = logging.getLogger()
    root.addHandler(queue_handler)
    formatter = logging.Formatter('%(threadName)s: %(message)s')
    handler.setFormatter(formatter)
    listener.start()
    # The log output will display the thread which generated
    # the event (the main thread) rather than the internal
    # thread which monitors the internal queue. This is what
    # you want to happen.
    root.warning('Look out!')
    listener.stop()

which, when run, will produce::

    MainThread: Look out!


.. _network-logging:

Sending and receiving logging events across a network
-----------------------------------------------------

Let's say you want to send logging events across a network, and handle them at
the receiving end. A simple way of doing this is attaching a
:class:`SocketHandler` instance to the root logger at the sending end::

    import logging, logging.handlers

    rootLogger = logging.getLogger('')
    rootLogger.setLevel(logging.DEBUG)
    socketHandler = logging.handlers.SocketHandler('localhost',
                        logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    # don't bother with a formatter, since a socket handler sends the event as
    # an unformatted pickle
    rootLogger.addHandler(socketHandler)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the :mod:`socketserver`
module. Here is a basic working example::

    import pickle
    import logging
    import logging.handlers
    import socketserver
    import struct


    class LogRecordStreamHandler(socketserver.StreamRequestHandler):
        """Handler for a streaming logging request.

        This basically logs the record using whatever logging policy is
        configured locally.
        """

        def handle(self):
            """
            Handle multiple requests - each expected to be a 4-byte length,
            followed by the LogRecord in pickle format. Logs the record
            according to whatever policy is configured locally.
            """
            while True:
                chunk = self.connection.recv(4)
                if len(chunk) < 4:
                    break
                slen = struct.unpack('>L', chunk)[0]
                chunk = self.connection.recv(slen)
                while len(chunk) < slen:
                    chunk = chunk + self.connection.recv(slen - len(chunk))
                obj = self.unPickle(chunk)
                record = logging.makeLogRecord(obj)
                self.handleLogRecord(record)

        def unPickle(self, data):
            return pickle.loads(data)

        def handleLogRecord(self, record):
            # if a name is specified, we use the named logger rather than the one
            # implied by the record.
            if self.server.logname is not None:
                name = self.server.logname
            else:
                name = record.name
            logger = logging.getLogger(name)
            # N.B. EVERY record gets logged. This is because Logger.handle
            # is normally called AFTER logger-level filtering. If you want
            # to do filtering, do it at the client end to save wasting
            # cycles and network bandwidth!
            logger.handle(record)


    class LogRecordSocketReceiver(socketserver.ThreadingTCPServer):
        """
        Simple TCP socket-based logging receiver suitable for testing.
        """

        allow_reuse_address = 1

        def __init__(self, host='localhost',
                     port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                     handler=LogRecordStreamHandler):
            socketserver.ThreadingTCPServer.__init__(self, (host, port), handler)
            self.abort = 0
            self.timeout = 1
            self.logname = None

        def serve_until_stopped(self):
            import select
            abort = 0
            while not abort:
                rd, wr, ex = select.select([self.socket.fileno()],
                                           [], [],
                                           self.timeout)
                if rd:
                    self.handle_request()
                abort = self.abort


    def main():
        logging.basicConfig(
            format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
        tcpserver = LogRecordSocketReceiver()
        print('About to start TCP server...')
        tcpserver.serve_until_stopped()


    if __name__ == '__main__':
        main()

First run the server, and then the client. On the client side, nothing is
printed on the console; on the server side, you should see something like::

    About to start TCP server...
       59 root            INFO     Jackdaws love my big sphinx of quartz.
       59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
       69 myapp.area1     INFO     How quickly daft jumping zebras vex.
       69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
       69 myapp.area2     ERROR    The five boxing wizards jump quickly.

Note that there are some security issues with pickle in some scenarios. If
these affect you, you can use an alternative serialization scheme by overriding
the :meth:`makePickle` method and implementing your alternative there, as
well as adapting the above script to use your alternative serialization.
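As a sketch of one such alternative, here is a :class:`SocketHandler` subclass
whose :meth:`makePickle` emits length-prefixed JSON instead of a pickle. The
subset of record attributes serialized here is illustrative, not prescribed;
the receiver's ``unPickle`` would need a matching ``json.loads`` change::

```python
import json
import logging
import logging.handlers
import struct

class JSONSocketHandler(logging.handlers.SocketHandler):
    """Send log records as length-prefixed JSON rather than pickle."""

    def makePickle(self, record):
        # Serialize a safe, illustrative subset of the record's attributes.
        d = {
            'name': record.name,
            'msg': record.getMessage(),
            'levelno': record.levelno,
            'levelname': record.levelname,
            'created': record.created,
        }
        payload = json.dumps(d).encode('utf-8')
        # Same framing as the pickle version: 4-byte big-endian length prefix.
        return struct.pack('>L', len(payload)) + payload
```

On the receiving side, the frame would be decoded with ``json.loads`` and
turned back into a record with :func:`logging.makeLogRecord`, just as the
pickle-based receiver does.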


.. _context-info:

Adding contextual information to your logging output
----------------------------------------------------

Sometimes you want logging output to contain contextual information in
addition to the parameters passed to the logging call. For example, in a
networked application, it may be desirable to log client-specific information
in the log (e.g. the remote client's username, or IP address). Although you
could use the *extra* parameter to achieve this, it's not always convenient to
pass the information in this way. While it might be tempting to create
:class:`Logger` instances on a per-connection basis, this is not a good idea
because these instances are not garbage collected. While this is not a problem
in practice, when the number of :class:`Logger` instances is dependent on the
level of granularity you want to use in logging an application, it could
be hard to manage if the number of :class:`Logger` instances becomes
effectively unbounded.


Using LoggerAdapters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An easy way in which you can pass contextual information to be output along
with logging event information is to use the :class:`LoggerAdapter` class.
This class is designed to look like a :class:`Logger`, so that you can call
:meth:`debug`, :meth:`info`, :meth:`warning`, :meth:`error`,
:meth:`exception`, :meth:`critical` and :meth:`log`. These methods have the
same signatures as their counterparts in :class:`Logger`, so you can use the
two types of instances interchangeably.

When you create an instance of :class:`LoggerAdapter`, you pass it a
:class:`Logger` instance and a dict-like object which contains your contextual
information. When you call one of the logging methods on an instance of
:class:`LoggerAdapter`, it delegates the call to the underlying instance of
:class:`Logger` passed to its constructor, and arranges to pass the contextual
information in the delegated call. Here's a snippet from the code of
:class:`LoggerAdapter`::

    def debug(self, msg, *args, **kwargs):
        """
        Delegate a debug call to the underlying logger, after adding
        contextual information from this adapter instance.
        """
        msg, kwargs = self.process(msg, kwargs)
        self.logger.debug(msg, *args, **kwargs)

The :meth:`process` method of :class:`LoggerAdapter` is where the contextual
information is added to the logging output. It's passed the message and
keyword arguments of the logging call, and it passes back (potentially)
modified versions of these to use in the call to the underlying logger. The
default implementation of this method leaves the message alone, but inserts
an 'extra' key in the keyword arguments whose value is the dict-like object
passed to the constructor. Of course, if you had passed an 'extra' keyword
argument in the call to the adapter, it will be silently overwritten.

The advantage of using 'extra' is that the values in the dict-like object are
merged into the :class:`LogRecord` instance's __dict__, allowing you to use
customized strings with your :class:`Formatter` instances which know about
the keys of the dict-like object. If you need a different method, e.g. if you
want to prepend or append the contextual information to the message string,
you just need to subclass :class:`LoggerAdapter` and override :meth:`process`
to do what you need. Here's an example script which uses this class, which
also illustrates what dict-like behaviour is needed from an arbitrary
'dict-like' object for use in the constructor::

    import logging
    from random import choice

    class ConnInfo:
        """
        An example class which shows how an arbitrary class can be used as
        the 'extra' context information repository passed to a LoggerAdapter.
        """

        def __getitem__(self, name):
            """
            To allow this instance to look like a dict.
            """
            if name == 'ip':
                result = choice(['127.0.0.1', '192.168.0.1'])
            elif name == 'user':
                result = choice(['jim', 'fred', 'sheila'])
            else:
                result = self.__dict__.get(name, '?')
            return result

        def __iter__(self):
            """
            To allow iteration over keys, which will be merged into
            the LogRecord dict before formatting and output.
            """
            keys = ['ip', 'user']
            keys.extend(self.__dict__.keys())
            return keys.__iter__()

    if __name__ == '__main__':
        levels = (logging.DEBUG, logging.INFO, logging.WARNING,
                  logging.ERROR, logging.CRITICAL)
        logging.basicConfig(level=logging.DEBUG,
                            format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
        a1 = logging.LoggerAdapter(logging.getLogger('a.b.c'),
                                   {'ip': '123.231.231.123', 'user': 'sheila'})
        a1.debug('A debug message')
        a1.info('An info message with %s', 'some parameters')
        a2 = logging.LoggerAdapter(logging.getLogger('d.e.f'), ConnInfo())
        for x in range(10):
            lvl = choice(levels)
            lvlname = logging.getLevelName(lvl)
            a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

When this script is run, the output should look something like this::

    2008-01-18 14:49:54,023 a.b.c DEBUG    IP: 123.231.231.123 User: sheila   A debug message
    2008-01-18 14:49:54,023 a.b.c INFO     IP: 123.231.231.123 User: sheila   An info message with some parameters
    2008-01-18 14:49:54,023 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f INFO     IP: 192.168.0.1     User: jim      A message at INFO level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: sheila   A message at WARNING level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f ERROR    IP: 127.0.0.1       User: fred     A message at ERROR level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: sheila   A message at WARNING level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: jim      A message at WARNING level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f INFO     IP: 192.168.0.1     User: fred     A message at INFO level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: sheila   A message at WARNING level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f WARNING  IP: 127.0.0.1       User: jim      A message at WARNING level with 2 parameters
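As noted above, prepending the context to the message itself just needs a
:meth:`process` override. A minimal sketch, where the ``connid`` key is an
illustrative piece of context::

```python
import logging

class CustomAdapter(logging.LoggerAdapter):
    """
    Prepend the contextual information to the message string,
    rather than merging it into the LogRecord via 'extra'.
    """
    def process(self, msg, kwargs):
        # self.extra is the dict-like object passed to the constructor.
        return '[%s] %s' % (self.extra['connid'], msg), kwargs

logging.basicConfig(format='%(name)s %(levelname)s %(message)s')
adapter = CustomAdapter(logging.getLogger('a.b.c'), {'connid': 'conn-42'})
adapter.warning('connection dropped')
```

Here the logged message becomes ``[conn-42] connection dropped``, and no
special format string or custom :class:`Formatter` is needed.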
|
||||||
|
|
||||||
|
|
||||||
|
.. _filters-contextual:
|
||||||
|
|
||||||
|
Using Filters to impart contextual information
|
||||||
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||||
|
|
||||||
|
You can also add contextual information to log output using a user-defined
|
||||||
|
:class:`Filter`. ``Filter`` instances are allowed to modify the ``LogRecords``
|
||||||
|
passed to them, including adding additional attributes which can then be output
|
||||||
|
using a suitable format string, or if needed a custom :class:`Formatter`.
|
||||||
|
|
||||||
|
For example in a web application, the request being processed (or at least,
|
||||||
|
the interesting parts of it) can be stored in a threadlocal
|
||||||
|
(:class:`threading.local`) variable, and then accessed from a ``Filter`` to
|
||||||
|
add, say, information from the request - say, the remote IP address and remote
|
||||||
|
user's username - to the ``LogRecord``, using the attribute names 'ip' and
|
||||||
|
'user' as in the ``LoggerAdapter`` example above. In that case, the same format
|
||||||
|
string can be used to get similar output to that shown above. Here's an example
|
||||||
|
script::
|
||||||
|
|
||||||
|
   import logging
   from random import choice

   class ContextFilter(logging.Filter):
       """
       This is a filter which injects contextual information into the log.

       Rather than use actual contextual information, we just use random
       data in this demo.
       """

       USERS = ['jim', 'fred', 'sheila']
       IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

       def filter(self, record):
           record.ip = choice(ContextFilter.IPS)
           record.user = choice(ContextFilter.USERS)
           return True

   if __name__ == '__main__':
       levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
       logging.basicConfig(level=logging.DEBUG,
                           format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
       a1 = logging.getLogger('a.b.c')
       a2 = logging.getLogger('d.e.f')

       f = ContextFilter()
       a1.addFilter(f)
       a2.addFilter(f)
       a1.debug('A debug message')
       a1.info('An info message with %s', 'some parameters')
       for x in range(10):
           lvl = choice(levels)
           lvlname = logging.getLevelName(lvl)
           a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like::

   2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
   2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
   2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
   2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
   2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
   2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters
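The demo above uses random data for simplicity. As a sketch of the thread-local
approach mentioned earlier, a filter can instead read attributes from a
:class:`threading.local` instance (the names ``request_ctx``, ``ip`` and
``user`` are illustrative, not part of any API):

```python
import logging
import threading

# Hypothetical per-thread request context; a web framework would populate
# this at the start of each request.
request_ctx = threading.local()

class RequestContextFilter(logging.Filter):
    def filter(self, record):
        # Fall back to placeholders when no request is being processed.
        record.ip = getattr(request_ctx, 'ip', '-')
        record.user = getattr(request_ctx, 'user', '-')
        return True

if __name__ == '__main__':
    logging.basicConfig(format='IP: %(ip)-15s User: %(user)-8s %(message)s')
    logger = logging.getLogger('webapp')
    logger.addFilter(RequestContextFilter())
    request_ctx.ip, request_ctx.user = '192.168.0.1', 'sheila'
    logger.warning('A message with request context')
```

Because the context is thread-local, concurrent requests handled in different
threads each see their own ``ip`` and ``user`` values.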

.. _multiple-processes:

Logging to a single file from multiple processes
------------------------------------------------

Although logging is thread-safe, and logging to a single file from multiple
threads in a single process *is* supported, logging to a single file from
*multiple processes* is *not* supported, because there is no standard way to
serialize access to a single file across multiple processes in Python. If you
need to log to a single file from multiple processes, one way of doing this is
to have all the processes log to a :class:`SocketHandler`, and have a separate
process which implements a socket server which reads from the socket and logs
to file. (If you prefer, you can dedicate one thread in one of the existing
processes to perform this function.) The following section documents this
approach in more detail and includes a working socket receiver which can be
used as a starting point for you to adapt in your own applications.

If you are using a recent version of Python which includes the
:mod:`multiprocessing` module, you could write your own handler which uses the
:class:`Lock` class from this module to serialize access to the file from
your processes. The existing :class:`FileHandler` and subclasses do not make
use of :mod:`multiprocessing` at present, though they may do so in the future.
Note that at present, the :mod:`multiprocessing` module does not provide
working lock functionality on all platforms (see
http://bugs.python.org/issue3770).
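A minimal sketch of such a handler might look like the following. The class
name ``LockingFileHandler`` is made up for illustration - it is not an
existing class in the logging package - and it assumes the shared lock is
created before the worker processes are started, so that they all share the
same underlying semaphore:

```python
import logging
import multiprocessing

class LockingFileHandler(logging.FileHandler):
    """Hypothetical handler serializing emits with a multiprocessing lock.

    Create the lock in the parent process and pass it to every process
    that constructs one of these handlers.
    """

    def __init__(self, filename, lock, mode='a'):
        logging.FileHandler.__init__(self, filename, mode)
        self._mp_lock = lock

    def emit(self, record):
        # Only one process at a time may append to the file.
        with self._mp_lock:
            logging.FileHandler.emit(self, record)
```

This is only a sketch: a production version would also need to think about
what happens when the file is rolled over, and about buffering in each
process's file object.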
.. currentmodule:: logging.handlers

Alternatively, you can use a ``Queue`` and a :class:`QueueHandler` to send
all logging events to one of the processes in your multi-process application.
The following example script demonstrates how you can do this; in the example
a separate listener process listens for events sent by other processes and logs
them according to its own logging configuration. Although the example only
demonstrates one way of doing it (for example, you may want to use a listener
thread rather than a separate listener process - the implementation would be
analogous) it does allow for completely different logging configurations for
the listener and the other processes in your application, and can be used as
the basis for code meeting your own specific requirements::

   # You'll need these imports in your own code
   import logging
   import logging.handlers
   import multiprocessing

   # Next two import lines for this demo only
   from random import choice, random
   import time

   #
   # Because you'll want to define the logging configurations for listener and workers, the
   # listener and worker process functions take a configurer parameter which is a callable
   # for configuring logging for that process. These functions are also passed the queue,
   # which they use for communication.
   #
   # In practice, you can configure the listener however you want, but note that in this
   # simple example, the listener does not apply level or filter logic to received records.
   # In practice, you would probably want to do this logic in the worker processes, to avoid
   # sending events which would be filtered out between processes.
   #
   # The size of the rotated files is made small so you can see the results easily.
   def listener_configurer():
       root = logging.getLogger()
       h = logging.handlers.RotatingFileHandler('/tmp/mptest.log', 'a', 300, 10)
       f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
       h.setFormatter(f)
       root.addHandler(h)

   # This is the listener process top-level loop: wait for logging events
   # (LogRecords) on the queue and handle them, quit when you get a None for a
   # LogRecord.
   def listener_process(queue, configurer):
       configurer()
       while True:
           try:
               record = queue.get()
               if record is None:  # We send this as a sentinel to tell the listener to quit.
                   break
               logger = logging.getLogger(record.name)
               logger.handle(record)  # No level or filter logic applied - just do it!
           except (KeyboardInterrupt, SystemExit):
               raise
           except:
               import sys, traceback
               print('Whoops! Problem:', file=sys.stderr)
               traceback.print_exc(file=sys.stderr)

   # Arrays used for random selections in this demo

   LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING,
             logging.ERROR, logging.CRITICAL]

   LOGGERS = ['a.b.c', 'd.e.f']

   MESSAGES = [
       'Random message #1',
       'Random message #2',
       'Random message #3',
   ]

   # The worker configuration is done at the start of the worker process run.
   # Note that on Windows you can't rely on fork semantics, so each process
   # will run the logging configuration code when it starts.
   def worker_configurer(queue):
       h = logging.handlers.QueueHandler(queue)  # Just the one handler needed
       root = logging.getLogger()
       root.addHandler(h)
       root.setLevel(logging.DEBUG)  # send all messages, for demo; no other level or filter logic applied.

   # This is the worker process top-level loop, which just logs ten events with
   # random intervening delays before terminating.
   # The print messages are just so you know it's doing something!
   def worker_process(queue, configurer):
       configurer(queue)
       name = multiprocessing.current_process().name
       print('Worker started: %s' % name)
       for i in range(10):
           time.sleep(random())
           logger = logging.getLogger(choice(LOGGERS))
           level = choice(LEVELS)
           message = choice(MESSAGES)
           logger.log(level, message)
       print('Worker finished: %s' % name)

   # Here's where the demo gets orchestrated. Create the queue, create and start
   # the listener, create ten workers and start them, wait for them to finish,
   # then send a None to the queue to tell the listener to finish.
   def main():
       queue = multiprocessing.Queue(-1)
       listener = multiprocessing.Process(target=listener_process,
                                          args=(queue, listener_configurer))
       listener.start()
       workers = []
       for i in range(10):
           worker = multiprocessing.Process(target=worker_process,
                                            args=(queue, worker_configurer))
           workers.append(worker)
           worker.start()
       for w in workers:
           w.join()
       queue.put_nowait(None)
       listener.join()

   if __name__ == '__main__':
       main()

Using file rotation
-------------------

.. sectionauthor:: Doug Hellmann, Vinay Sajip (changes)
.. (see <http://blog.doughellmann.com/2007/05/pymotw-logging.html>)

Sometimes you want to let a log file grow to a certain size, then open a new
file and log to that. You may want to keep a certain number of these files, and
when that many files have been created, rotate the files so that the number of
files and the size of the files both remain bounded. For this usage pattern, the
logging package provides a :class:`RotatingFileHandler`::

   import glob
   import logging
   import logging.handlers

   LOG_FILENAME = 'logging_rotatingfile_example.out'

   # Set up a specific logger with our desired output level
   my_logger = logging.getLogger('MyLogger')
   my_logger.setLevel(logging.DEBUG)

   # Add the log message handler to the logger
   handler = logging.handlers.RotatingFileHandler(
                 LOG_FILENAME, maxBytes=20, backupCount=5)

   my_logger.addHandler(handler)

   # Log some messages
   for i in range(20):
       my_logger.debug('i = %d' % i)

   # See what files are created
   logfiles = glob.glob('%s*' % LOG_FILENAME)

   for filename in logfiles:
       print(filename)

The result should be 6 separate files, each with part of the log history for the
application::

   logging_rotatingfile_example.out
   logging_rotatingfile_example.out.1
   logging_rotatingfile_example.out.2
   logging_rotatingfile_example.out.3
   logging_rotatingfile_example.out.4
   logging_rotatingfile_example.out.5

The most current file is always :file:`logging_rotatingfile_example.out`,
and each time it reaches the size limit it is renamed with the suffix
``.1``. Each of the existing backup files is renamed to increment the suffix
(``.1`` becomes ``.2``, etc.) and the oldest file (``.5``) is erased.

Obviously this example sets the log length much too small as an extreme
example. You would want to set *maxBytes* to an appropriate value.

.. _zeromq-handlers:

Subclassing QueueHandler
------------------------

You can use a :class:`QueueHandler` subclass to send messages to other kinds
of queues, for example a ZeroMQ 'publish' socket. In the example below, the
socket is created separately and passed to the handler (as its 'queue')::

   import zmq   # using pyzmq, the Python binding for ZeroMQ
   import json  # for serializing records portably

   ctx = zmq.Context()
   sock = zmq.Socket(ctx, zmq.PUB)  # or zmq.PUSH, or other suitable value
   sock.bind('tcp://*:5556')        # or wherever

   class ZeroMQSocketHandler(QueueHandler):
       def enqueue(self, record):
           data = json.dumps(record.__dict__)
           self.queue.send(data)

   handler = ZeroMQSocketHandler(sock)

Of course there are other ways of organizing this, for example passing in the
data needed by the handler to create the socket::

   class ZeroMQSocketHandler(QueueHandler):
       def __init__(self, uri, socktype=zmq.PUB, ctx=None):
           self.ctx = ctx or zmq.Context()
           socket = zmq.Socket(self.ctx, socktype)
           socket.bind(uri)
           QueueHandler.__init__(self, socket)

       def enqueue(self, record):
           data = json.dumps(record.__dict__)
           self.queue.send(data)

       def close(self):
           self.queue.close()

Subclassing QueueListener
-------------------------

You can also subclass :class:`QueueListener` to get messages from other kinds
of queues, for example a ZeroMQ 'subscribe' socket. Here's an example::

   class ZeroMQSocketListener(QueueListener):
       def __init__(self, uri, *handlers, **kwargs):
           self.ctx = kwargs.get('ctx') or zmq.Context()
           socket = zmq.Socket(self.ctx, zmq.SUB)
           socket.setsockopt(zmq.SUBSCRIBE, '')  # subscribe to everything
           socket.connect(uri)
           QueueListener.__init__(self, socket, *handlers)

       def dequeue(self):
           msg = self.queue.recv()
           return logging.makeLogRecord(json.loads(msg))

1016 Doc/howto/logging.rst Normal file
File diff suppressed because it is too large

@@ -19,6 +19,8 @@ but they are available on most other systems as well. Here's an overview:
    optparse.rst
    getopt.rst
    logging.rst
+   logging.config.rst
+   logging.handlers.rst
    getpass.rst
    curses.rst
    curses.ascii.rst

657 Doc/library/logging.config.rst Normal file

@@ -0,0 +1,657 @@
:mod:`logging.config` --- Logging configuration
===============================================

.. module:: logging.config
   :synopsis: Configuration of the logging module.

.. moduleauthor:: Vinay Sajip <vinay_sajip@red-dove.com>
.. sectionauthor:: Vinay Sajip <vinay_sajip@red-dove.com>

.. _logging-config-api:

Configuration functions
^^^^^^^^^^^^^^^^^^^^^^^

The following functions configure the logging module. They are located in the
:mod:`logging.config` module. Their use is optional --- you can configure the
logging module using these functions or by making calls to the main API (defined
in :mod:`logging` itself) and defining handlers which are declared either in
:mod:`logging` or :mod:`logging.handlers`.

.. function:: dictConfig(config)

   Takes the logging configuration from a dictionary. The contents of
   this dictionary are described in :ref:`logging-config-dictschema`
   below.

   If an error is encountered during configuration, this function will
   raise a :exc:`ValueError`, :exc:`TypeError`, :exc:`AttributeError`
   or :exc:`ImportError` with a suitably descriptive message. The
   following is a (possibly incomplete) list of conditions which will
   raise an error:

   * A ``level`` which is not a string or which is a string not
     corresponding to an actual logging level.
   * A ``propagate`` value which is not a boolean.
   * An id which does not have a corresponding destination.
   * A non-existent handler id found during an incremental call.
   * An invalid logger name.
   * Inability to resolve to an internal or external object.

   Parsing is performed by the :class:`DictConfigurator` class, whose
   constructor is passed the dictionary used for configuration, and
   has a :meth:`configure` method. The :mod:`logging.config` module
   has a callable attribute :attr:`dictConfigClass`
   which is initially set to :class:`DictConfigurator`.
   You can replace the value of :attr:`dictConfigClass` with a
   suitable implementation of your own.

   :func:`dictConfig` calls :attr:`dictConfigClass` passing
   the specified dictionary, and then calls the :meth:`configure` method on
   the returned object to put the configuration into effect::

      def dictConfig(config):
          dictConfigClass(config).configure()

   For example, a subclass of :class:`DictConfigurator` could call
   ``DictConfigurator.__init__()`` in its own :meth:`__init__()`, then
   set up custom prefixes which would be usable in the subsequent
   :meth:`configure` call. :attr:`dictConfigClass` would be bound to
   this new subclass, and then :func:`dictConfig` could be called exactly as
   in the default, uncustomized state.

.. function:: fileConfig(fname[, defaults])

   Reads the logging configuration from a :mod:`configparser`\-format file named
   *fname*. This function can be called several times from an application,
   allowing an end user to select from various pre-canned configurations (if the
   developer provides a mechanism to present the choices and load the chosen
   configuration). Defaults to be passed to the ConfigParser can be specified
   in the *defaults* argument.
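As an illustrative sketch (the file name and the particular logger, handler
and formatter settings here are arbitrary), a minimal configuration file in
the format :func:`fileConfig` expects might be written and loaded like this:

```python
import logging
import logging.config

# Minimal configparser-format configuration: one root logger, one console
# handler, one formatter. The file name 'example_logging.ini' is arbitrary.
CONFIG = """\
[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=simple

[logger_root]
level=DEBUG
handlers=console

[handler_console]
class=StreamHandler
level=DEBUG
formatter=simple
args=(sys.stderr,)

[formatter_simple]
format=%(levelname)s:%(name)s:%(message)s
"""

with open('example_logging.ini', 'w') as f:
    f.write(CONFIG)

logging.config.fileConfig('example_logging.ini')
logging.getLogger().debug('configured from a file')
```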
.. function:: listen(port=DEFAULT_LOGGING_CONFIG_PORT)

   Starts up a socket server on the specified port, and listens for new
   configurations. If no port is specified, the module's default
   :const:`DEFAULT_LOGGING_CONFIG_PORT` is used. Logging configurations will be
   sent as a file suitable for processing by :func:`fileConfig`. Returns a
   :class:`Thread` instance on which you can call :meth:`start` to start the
   server, and which you can :meth:`join` when appropriate. To stop the server,
   call :func:`stopListening`.

   To send a configuration to the socket, read in the configuration file and
   send it to the socket as a string of bytes preceded by a four-byte length
   string packed in binary using ``struct.pack('>L', n)``.
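A client along these lines could send a new configuration to the listening
server. This is only a sketch: the helper name ``send_config`` and the
host/port defaults are illustrative, not part of the :mod:`logging.config`
API.

```python
import logging.config
import socket
import struct

def send_config(filename, host='localhost',
                port=logging.config.DEFAULT_LOGGING_CONFIG_PORT):
    """Send a fileConfig-format file to a listen() server.

    The wire format is a four-byte big-endian length prefix
    (struct.pack('>L', n)) followed by the configuration bytes.
    """
    with open(filename, 'rb') as f:
        data = f.read()
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    try:
        s.sendall(struct.pack('>L', len(data)) + data)
    finally:
        s.close()
```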
.. function:: stopListening()

   Stops the listening server which was created with a call to :func:`listen`.
   This is typically called before calling :meth:`join` on the return value from
   :func:`listen`.

.. _logging-config-dictschema:

Configuration dictionary schema
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Describing a logging configuration requires listing the various
objects to create and the connections between them; for example, you
may create a handler named 'console' and then say that the logger
named 'startup' will send its messages to the 'console' handler.
These objects aren't limited to those provided by the :mod:`logging`
module because you might write your own formatter or handler class.
The parameters to these classes may also need to include external
objects such as ``sys.stderr``. The syntax for describing these
objects and connections is defined in :ref:`logging-config-dict-connections`
below.

Dictionary Schema Details
"""""""""""""""""""""""""

The dictionary passed to :func:`dictConfig` must contain the following
keys:

* *version* - to be set to an integer value representing the schema
  version. The only valid value at present is 1, but having this key
  allows the schema to evolve while still preserving backwards
  compatibility.

All other keys are optional, but if present they will be interpreted
as described below. In all cases below where a 'configuring dict' is
mentioned, it will be checked for the special ``'()'`` key to see if a
custom instantiation is required. If so, the mechanism described in
:ref:`logging-config-dict-userdef` below is used to create an instance;
otherwise, the context is used to determine what to instantiate.

* *formatters* - the corresponding value will be a dict in which each
  key is a formatter id and each value is a dict describing how to
  configure the corresponding Formatter instance.

  The configuring dict is searched for keys ``format`` and ``datefmt``
  (with defaults of ``None``) and these are used to construct a
  :class:`logging.Formatter` instance.

* *filters* - the corresponding value will be a dict in which each key
  is a filter id and each value is a dict describing how to configure
  the corresponding Filter instance.

  The configuring dict is searched for the key ``name`` (defaulting to the
  empty string) and this is used to construct a :class:`logging.Filter`
  instance.

* *handlers* - the corresponding value will be a dict in which each
  key is a handler id and each value is a dict describing how to
  configure the corresponding Handler instance.

  The configuring dict is searched for the following keys:

  * ``class`` (mandatory). This is the fully qualified name of the
    handler class.

  * ``level`` (optional). The level of the handler.

  * ``formatter`` (optional). The id of the formatter for this
    handler.

  * ``filters`` (optional). A list of ids of the filters for this
    handler.

  All *other* keys are passed through as keyword arguments to the
  handler's constructor. For example, given the snippet::

      handlers:
        console:
          class : logging.StreamHandler
          formatter: brief
          level   : INFO
          filters: [allow_foo]
          stream  : ext://sys.stdout
        file:
          class : logging.handlers.RotatingFileHandler
          formatter: precise
          filename: logconfig.log
          maxBytes: 1024
          backupCount: 3

  the handler with id ``console`` is instantiated as a
  :class:`logging.StreamHandler`, using ``sys.stdout`` as the underlying
  stream. The handler with id ``file`` is instantiated as a
  :class:`logging.handlers.RotatingFileHandler` with the keyword arguments
  ``filename='logconfig.log', maxBytes=1024, backupCount=3``.

* *loggers* - the corresponding value will be a dict in which each key
  is a logger name and each value is a dict describing how to
  configure the corresponding Logger instance.

  The configuring dict is searched for the following keys:

  * ``level`` (optional). The level of the logger.

  * ``propagate`` (optional). The propagation setting of the logger.

  * ``filters`` (optional). A list of ids of the filters for this
    logger.

  * ``handlers`` (optional). A list of ids of the handlers for this
    logger.

  The specified loggers will be configured according to the level,
  propagation, filters and handlers specified.

* *root* - this will be the configuration for the root logger.
  Processing of the configuration will be as for any logger, except
  that the ``propagate`` setting will not be applicable.

* *incremental* - whether the configuration is to be interpreted as
  incremental to the existing configuration. This value defaults to
  ``False``, which means that the specified configuration replaces the
  existing configuration with the same semantics as used by the
  existing :func:`fileConfig` API.

  If the specified value is ``True``, the configuration is processed
  as described in the section on :ref:`logging-config-dict-incremental`.

* *disable_existing_loggers* - whether any existing loggers are to be
  disabled. This setting mirrors the parameter of the same name in
  :func:`fileConfig`. If absent, this parameter defaults to ``True``.
  This value is ignored if *incremental* is ``True``.
.. _logging-config-dict-incremental:

Incremental Configuration
"""""""""""""""""""""""""

It is difficult to provide complete flexibility for incremental
configuration. For example, because objects such as filters
and formatters are anonymous, once a configuration is set up, it is
not possible to refer to such anonymous objects when augmenting a
configuration.

Furthermore, there is not a compelling case for arbitrarily altering
the object graph of loggers, handlers, filters, formatters at
run-time, once a configuration is set up; the verbosity of loggers and
handlers can be controlled just by setting levels (and, in the case of
loggers, propagation flags). Changing the object graph arbitrarily in
a safe way is problematic in a multi-threaded environment; while not
impossible, the benefits are not worth the complexity it adds to the
implementation.

Thus, when the ``incremental`` key of a configuration dict is present
and is ``True``, the system will completely ignore any ``formatters`` and
``filters`` entries, and process only the ``level``
settings in the ``handlers`` entries, and the ``level`` and
``propagate`` settings in the ``loggers`` and ``root`` entries.

Using a value in the configuration dict allows configurations to be sent
over the wire as pickled dicts to a socket listener. Thus, the logging
verbosity of a long-running application can be altered over time with
no need to stop and restart the application.
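As a sketch, after an initial call to :func:`dictConfig`, a second,
incremental dictionary could raise the thresholds at run time (the handler
id ``console`` is arbitrary, but must match the one used in the initial
configuration):

```python
import logging
import logging.config

# Initial (non-incremental) configuration.
logging.config.dictConfig({
    'version': 1,
    'handlers': {
        'console': {'class': 'logging.StreamHandler', 'level': 'DEBUG'},
    },
    'root': {'level': 'DEBUG', 'handlers': ['console']},
})

# Later: with incremental=True, only the level (and propagate) settings
# are honoured; formatters and filters entries would be ignored.
logging.config.dictConfig({
    'version': 1,
    'incremental': True,
    'handlers': {'console': {'level': 'WARNING'}},
    'root': {'level': 'WARNING'},
})
```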

.. _logging-config-dict-connections:

Object connections
""""""""""""""""""

The schema describes a set of logging objects - loggers,
handlers, formatters, filters - which are connected to each other in
an object graph. Thus, the schema needs to represent connections
between the objects. For example, say that, once configured, a
particular logger has attached to it a particular handler. For the
purposes of this discussion, we can say that the logger represents the
source, and the handler the destination, of a connection between the
two. Of course in the configured objects this is represented by the
logger holding a reference to the handler. In the configuration dict,
this is done by giving each destination object an id which identifies
it unambiguously, and then using the id in the source object's
configuration to indicate that a connection exists between the source
and the destination object with that id.

So, for example, consider the following YAML snippet::

   formatters:
     brief:
       # configuration for formatter with id 'brief' goes here
     precise:
       # configuration for formatter with id 'precise' goes here
   handlers:
     h1: #This is an id
       # configuration of handler with id 'h1' goes here
       formatter: brief
     h2: #This is another id
       # configuration of handler with id 'h2' goes here
       formatter: precise
   loggers:
     foo.bar.baz:
       # other configuration for logger 'foo.bar.baz'
       handlers: [h1, h2]

(Note: YAML is used here because it's a little more readable than the
equivalent Python source form for the dictionary.)

The ids for loggers are the logger names which would be used
programmatically to obtain a reference to those loggers, e.g.
``foo.bar.baz``. The ids for Formatters and Filters can be any string
value (such as ``brief``, ``precise`` above) and they are transient,
in that they are only meaningful for processing the configuration
dictionary and used to determine connections between objects, and are
not persisted anywhere when the configuration call is complete.

The above snippet indicates that the logger named ``foo.bar.baz`` should
have two handlers attached to it, which are described by the handler
ids ``h1`` and ``h2``. The formatter for ``h1`` is that described by id
``brief``, and the formatter for ``h2`` is that described by id
``precise``.

.. _logging-config-dict-userdef:

User-defined objects
""""""""""""""""""""

The schema supports user-defined objects for handlers, filters and
formatters. (Loggers do not need to have different types for
different instances, so there is no support in this configuration
schema for user-defined logger classes.)

Objects to be configured are described by dictionaries
which detail their configuration. In some places, the logging system
will be able to infer from the context how an object is to be
instantiated, but when a user-defined object is to be instantiated,
the system will not know how to do this. In order to provide complete
flexibility for user-defined object instantiation, the user needs
to provide a 'factory' - a callable which is called with a
configuration dictionary and which returns the instantiated object.
This is signalled by an absolute import path to the factory being
made available under the special key ``'()'``. Here's a concrete
example::

formatters:
|
||||||
|
brief:
|
||||||
|
format: '%(message)s'
|
||||||
|
default:
|
||||||
|
format: '%(asctime)s %(levelname)-8s %(name)-15s %(message)s'
|
||||||
|
datefmt: '%Y-%m-%d %H:%M:%S'
|
||||||
|
custom:
|
||||||
|
(): my.package.customFormatterFactory
|
||||||
|
bar: baz
|
||||||
|
spam: 99.9
|
||||||
|
answer: 42
|
||||||
|
|
||||||
|
The above YAML snippet defines three formatters. The first, with id
|
||||||
|
``brief``, is a standard :class:`logging.Formatter` instance with the
|
||||||
|
specified format string. The second, with id ``default``, has a
|
||||||
|
longer format and also defines the time format explicitly, and will
|
||||||
|
result in a :class:`logging.Formatter` initialized with those two format
|
||||||
|
strings. Shown in Python source form, the ``brief`` and ``default``
|
||||||
|
formatters have configuration sub-dictionaries::
|
||||||
|
|
||||||
|
{
|
||||||
|
'format' : '%(message)s'
|
||||||
|
}
|
||||||
|
|
||||||
|
and::
|
||||||
|
|
||||||
|
{
|
||||||
|
'format' : '%(asctime)s %(levelname)-8s %(name)-15s %(message)s',
|
||||||
|
'datefmt' : '%Y-%m-%d %H:%M:%S'
|
||||||
|
}
|
||||||
|
|
||||||
|
respectively, and as these dictionaries do not contain the special key
|
||||||
|
``'()'``, the instantiation is inferred from the context: as a result,
|
||||||
|
standard :class:`logging.Formatter` instances are created. The
|
||||||
|
configuration sub-dictionary for the third formatter, with id
|
||||||
|
``custom``, is::
|
||||||
|
|
||||||
|
{
|
||||||
|
'()' : 'my.package.customFormatterFactory',
|
||||||
|
'bar' : 'baz',
|
||||||
|
'spam' : 99.9,
|
||||||
|
'answer' : 42
|
||||||
|
}
|
||||||
|
|
||||||
|
and this contains the special key ``'()'``, which means that
|
||||||
|
user-defined instantiation is wanted. In this case, the specified
|
||||||
|
factory callable will be used. If it is an actual callable it will be
|
||||||
|
used directly - otherwise, if you specify a string (as in the example)
|
||||||
|
the actual callable will be located using normal import mechanisms.
|
||||||
|
The callable will be called with the **remaining** items in the
|
||||||
|
configuration sub-dictionary as keyword arguments. In the above
|
||||||
|
example, the formatter with id ``custom`` will be assumed to be
|
||||||
|
returned by the call::
|
||||||
|
|
||||||
|
my.package.customFormatterFactory(bar='baz', spam=99.9, answer=42)
|
||||||
|
|
||||||
|
The key ``'()'`` has been used as the special key because it is not a
|
||||||
|
valid keyword parameter name, and so will not clash with the names of
|
||||||
|
the keyword arguments used in the call. The ``'()'`` also serves as a
|
||||||
|
mnemonic that the corresponding value is a callable.
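
As a runnable illustration of the factory mechanism (``my.package.customFormatterFactory`` above is hypothetical), the sketch below defines a factory inline and passes it as an actual callable, which, as noted, is used directly:

```python
import logging
import logging.config

def tagged_formatter_factory(tag='app'):
    # Hypothetical factory: builds a Formatter whose output is prefixed
    # with a fixed tag.  The remaining keys of the sub-dictionary
    # ('tag' here) arrive as keyword arguments.
    return logging.Formatter('[' + tag + '] %(levelname)s %(message)s')

config = {
    'version': 1,
    'formatters': {
        'custom': {
            '()': tagged_formatter_factory,  # an actual callable, used directly
            'tag': 'demo',
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'custom',
        },
    },
    'root': {'handlers': ['console'], 'level': 'INFO'},
}

logging.config.dictConfig(config)
logging.getLogger().info('factory-built formatter in use')
```

In a text-file configuration the ``'()'`` value would instead be the import path string, as in the YAML snippet above.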

.. _logging-config-dict-externalobj:

Access to external objects
""""""""""""""""""""""""""

There are times where a configuration needs to refer to objects
external to the configuration, for example ``sys.stderr``.  If the
configuration dict is constructed using Python code, this is
straightforward, but a problem arises when the configuration is
provided via a text file (e.g. JSON, YAML).  In a text file, there is
no standard way to distinguish ``sys.stderr`` from the literal string
``'sys.stderr'``.  To facilitate this distinction, the configuration
system looks for certain special prefixes in string values and
treats them specially.  For example, if the literal string
``'ext://sys.stderr'`` is provided as a value in the configuration,
then the ``ext://`` will be stripped off and the remainder of the
value processed using normal import mechanisms.

The handling of such prefixes is done in a way analogous to protocol
handling: there is a generic mechanism to look for prefixes which
match the regular expression ``^(?P<prefix>[a-z]+)://(?P<suffix>.*)$``
whereby, if the ``prefix`` is recognised, the ``suffix`` is processed
in a prefix-dependent manner and the result of the processing replaces
the string value.  If the prefix is not recognised, then the string
value will be left as-is.
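
A short runnable sketch: the configuration below uses ``ext://sys.stderr`` so that the handler receives the actual stream object rather than the literal string:

```python
import logging
import logging.config
import sys

config = {
    'version': 1,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            # The ext:// prefix tells the configuration system to import
            # sys.stderr and use the resulting object, not the string.
            'stream': 'ext://sys.stderr',
        },
    },
    'root': {'handlers': ['console'], 'level': 'WARNING'},
}

logging.config.dictConfig(config)
```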

.. _logging-config-dict-internalobj:

Access to internal objects
""""""""""""""""""""""""""

As well as external objects, there is sometimes also a need to refer
to objects in the configuration.  This will be done implicitly by the
configuration system for things that it knows about.  For example, the
string value ``'DEBUG'`` for a ``level`` in a logger or handler will
automatically be converted to the value ``logging.DEBUG``, and the
``handlers``, ``filters`` and ``formatter`` entries will take an
object id and resolve to the appropriate destination object.

However, a more generic mechanism is needed for user-defined
objects which are not known to the :mod:`logging` module.  For
example, consider :class:`logging.handlers.MemoryHandler`, which takes
a ``target`` argument which is another handler to delegate to.  Since
the system already knows about this class, then in the configuration,
the given ``target`` just needs to be the object id of the relevant
target handler, and the system will resolve to the handler from the
id.  If, however, a user defines a ``my.package.MyHandler`` which has
an ``alternate`` handler, the configuration system would not know that
the ``alternate`` referred to a handler.  To cater for this, a generic
resolution system allows the user to specify::

    handlers:
      file:
        # configuration of file handler goes here

      custom:
        (): my.package.MyHandler
        alternate: cfg://handlers.file

The literal string ``'cfg://handlers.file'`` will be resolved in an
analogous way to strings with the ``ext://`` prefix, but looking
in the configuration itself rather than the import namespace.  The
mechanism allows access by dot or by index, in a similar way to
that provided by ``str.format``.  Thus, given the following snippet::

    handlers:
      email:
        class: logging.handlers.SMTPHandler
        mailhost: localhost
        fromaddr: my_app@domain.tld
        toaddrs:
          - support_team@domain.tld
          - dev_team@domain.tld
        subject: Houston, we have a problem.

in the configuration, the string ``'cfg://handlers'`` would resolve to
the dict with key ``handlers``, the string ``'cfg://handlers.email'``
would resolve to the dict with key ``email`` in the ``handlers`` dict,
and so on.  The string ``'cfg://handlers.email.toaddrs[1]'`` would
resolve to ``'dev_team@domain.tld'`` and the string
``'cfg://handlers.email.toaddrs[0]'`` would resolve to the value
``'support_team@domain.tld'``.  The ``subject`` value could be accessed
using either ``'cfg://handlers.email.subject'`` or, equivalently,
``'cfg://handlers.email[subject]'``.  The latter form only needs to be
used if the key contains spaces or non-alphanumeric characters.  If an
index value consists only of decimal digits, access will be attempted
using the corresponding integer value, falling back to the string
value if needed.

Given a string ``cfg://handlers.myhandler.mykey.123``, this will
resolve to ``config_dict['handlers']['myhandler']['mykey']['123']``.
If the string is specified as ``cfg://handlers.myhandler.mykey[123]``,
the system will attempt to retrieve the value from
``config_dict['handlers']['myhandler']['mykey'][123]``, and fall back
to ``config_dict['handlers']['myhandler']['mykey']['123']`` if that
fails.
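
Since ``my.package.MyHandler`` is hypothetical, here is a runnable sketch of the same resolution using an inline handler class: by the time the ``custom`` handler is built, the ``cfg://handlers.console`` string has been replaced by the already-configured handler instance (handlers are configured in sorted-name order, so ``console`` exists first):

```python
import logging
import logging.config

class AlternateHandler(logging.StreamHandler):
    # Hypothetical handler that keeps a reference to another handler.
    def __init__(self, alternate=None):
        super().__init__()
        self.alternate = alternate

config = {
    'version': 1,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
        'custom': {
            '()': AlternateHandler,
            # Resolved to the handler configured under the id 'console'.
            'alternate': 'cfg://handlers.console',
        },
    },
    'root': {'handlers': ['console', 'custom'], 'level': 'INFO'},
}

logging.config.dictConfig(config)
custom = logging.getLogger().handlers[1]
```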

.. _logging-config-fileformat:

Configuration file format
^^^^^^^^^^^^^^^^^^^^^^^^^

The configuration file format understood by :func:`fileConfig` is based on
:mod:`configparser` functionality. The file must contain sections called
``[loggers]``, ``[handlers]`` and ``[formatters]`` which identify by name the
entities of each type which are defined in the file. For each such entity, there
is a separate section which identifies how that entity is configured. Thus, for
a logger named ``log01`` in the ``[loggers]`` section, the relevant
configuration details are held in a section ``[logger_log01]``. Similarly, a
handler called ``hand01`` in the ``[handlers]`` section will have its
configuration held in a section called ``[handler_hand01]``, while a formatter
called ``form01`` in the ``[formatters]`` section will have its configuration
specified in a section called ``[formatter_form01]``. The root logger
configuration must be specified in a section called ``[logger_root]``.

Examples of these sections in the file are given below. ::

   [loggers]
   keys=root,log02,log03,log04,log05,log06,log07

   [handlers]
   keys=hand01,hand02,hand03,hand04,hand05,hand06,hand07,hand08,hand09

   [formatters]
   keys=form01,form02,form03,form04,form05,form06,form07,form08,form09

The root logger must specify a level and a list of handlers. An example of a
root logger section is given below. ::

   [logger_root]
   level=NOTSET
   handlers=hand01

The ``level`` entry can be one of ``DEBUG, INFO, WARNING, ERROR, CRITICAL`` or
``NOTSET``. For the root logger only, ``NOTSET`` means that all messages will be
logged. Level values are :func:`eval`\ uated in the context of the ``logging``
package's namespace.

The ``handlers`` entry is a comma-separated list of handler names, which must
appear in the ``[handlers]`` section and have corresponding sections in the
configuration file.

For loggers other than the root logger, some additional information is required.
This is illustrated by the following example. ::

   [logger_parser]
   level=DEBUG
   handlers=hand01
   propagate=1
   qualname=compiler.parser

The ``level`` and ``handlers`` entries are interpreted as for the root logger,
except that if a non-root logger's level is specified as ``NOTSET``, the system
consults loggers higher up the hierarchy to determine the effective level of the
logger. The ``propagate`` entry is set to 1 to indicate that messages must
propagate to handlers higher up the logger hierarchy from this logger, or 0 to
indicate that messages are **not** propagated to handlers up the hierarchy. The
``qualname`` entry is the hierarchical channel name of the logger, that is to
say the name used by the application to get the logger.

Sections which specify handler configuration are exemplified by the following.
::

   [handler_hand01]
   class=StreamHandler
   level=NOTSET
   formatter=form01
   args=(sys.stdout,)

The ``class`` entry indicates the handler's class (as determined by :func:`eval`
in the ``logging`` package's namespace). The ``level`` is interpreted as for
loggers, and ``NOTSET`` is taken to mean 'log everything'.

The ``formatter`` entry indicates the key name of the formatter for this
handler. If blank, a default formatter (``logging._defaultFormatter``) is used.
If a name is specified, it must appear in the ``[formatters]`` section and have
a corresponding section in the configuration file.

The ``args`` entry, when :func:`eval`\ uated in the context of the ``logging``
package's namespace, is the list of arguments to the constructor for the handler
class. Refer to the constructors for the relevant handlers, or to the examples
below, to see how typical entries are constructed. ::

   [handler_hand02]
   class=FileHandler
   level=DEBUG
   formatter=form02
   args=('python.log', 'w')

   [handler_hand03]
   class=handlers.SocketHandler
   level=INFO
   formatter=form03
   args=('localhost', handlers.DEFAULT_TCP_LOGGING_PORT)

   [handler_hand04]
   class=handlers.DatagramHandler
   level=WARN
   formatter=form04
   args=('localhost', handlers.DEFAULT_UDP_LOGGING_PORT)

   [handler_hand05]
   class=handlers.SysLogHandler
   level=ERROR
   formatter=form05
   args=(('localhost', handlers.SYSLOG_UDP_PORT), handlers.SysLogHandler.LOG_USER)

   [handler_hand06]
   class=handlers.NTEventLogHandler
   level=CRITICAL
   formatter=form06
   args=('Python Application', '', 'Application')

   [handler_hand07]
   class=handlers.SMTPHandler
   level=WARN
   formatter=form07
   args=('localhost', 'from@abc', ['user1@abc', 'user2@xyz'], 'Logger Subject')

   [handler_hand08]
   class=handlers.MemoryHandler
   level=NOTSET
   formatter=form08
   target=
   args=(10, ERROR)

   [handler_hand09]
   class=handlers.HTTPHandler
   level=NOTSET
   formatter=form09
   args=('localhost:9022', '/log', 'GET')

Sections which specify formatter configuration are typified by the following. ::

   [formatter_form01]
   format=F1 %(asctime)s %(levelname)s %(message)s
   datefmt=
   class=logging.Formatter

The ``format`` entry is the overall format string, and the ``datefmt`` entry is
the :func:`strftime`\ -compatible date/time format string. If empty, the
package substitutes ISO8601 format date/times, which is almost equivalent to
specifying the date format string ``'%Y-%m-%d %H:%M:%S'``. The ISO8601 format
also specifies milliseconds, which are appended to the result of using the above
format string, with a comma separator. An example time in ISO8601 format is
``2003-01-23 00:29:50,411``.
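
The default date/time handling can be observed directly; this sketch formats a record with :class:`~logging.Formatter` and checks the ISO8601-style shape of ``asctime``:

```python
import logging
import re

formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
record = logging.LogRecord('demo', logging.INFO, __file__, 1,
                           'timestamped message', None, None)
formatted = formatter.format(record)

# With an empty datefmt, asctime looks like '2003-01-23 00:29:50,411':
# date, time, then milliseconds after a comma separator.
assert re.match(r'\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3} INFO ', formatted)
```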

The ``class`` entry is optional.  It indicates the name of the formatter's class
(as a dotted module and class name).  This option is useful for instantiating a
:class:`Formatter` subclass.  Subclasses of :class:`Formatter` can present
exception tracebacks in an expanded or condensed format.
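
Putting the pieces together, here is a minimal runnable sketch that writes a configuration file in the format described above and loads it with :func:`fileConfig` (section contents are illustrative):

```python
import logging
import logging.config
import os
import tempfile

CONFIG = """\
[loggers]
keys=root

[handlers]
keys=hand01

[formatters]
keys=form01

[logger_root]
level=DEBUG
handlers=hand01

[handler_hand01]
class=StreamHandler
level=NOTSET
formatter=form01
args=(sys.stdout,)

[formatter_form01]
format=%(levelname)s:%(name)s:%(message)s
datefmt=
"""

# Write the configuration to a temporary file, then load it.
fd, path = tempfile.mkstemp(suffix='.ini')
with os.fdopen(fd, 'w') as f:
    f.write(CONFIG)
logging.config.fileConfig(path, disable_existing_loggers=False)
os.unlink(path)

logging.getLogger().debug('configured from file')
```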

.. seealso::

   Module :mod:`logging`
      API reference for the logging module.

   Module :mod:`logging.handlers`
      Useful handlers included with the logging module.

814
Doc/library/logging.handlers.rst
Normal file

@ -0,0 +1,814 @@
:mod:`logging.handlers` --- Logging handlers
============================================

.. module:: logging.handlers
   :synopsis: Handlers for the logging module.


.. moduleauthor:: Vinay Sajip <vinay_sajip@red-dove.com>
.. sectionauthor:: Vinay Sajip <vinay_sajip@red-dove.com>

The following useful handlers are provided in the package.

.. currentmodule:: logging

.. _stream-handler:

StreamHandler
^^^^^^^^^^^^^

The :class:`StreamHandler` class, located in the core :mod:`logging` package,
sends logging output to streams such as *sys.stdout*, *sys.stderr* or any
file-like object (or, more precisely, any object which supports :meth:`write`
and :meth:`flush` methods).


.. class:: StreamHandler(stream=None)

   Returns a new instance of the :class:`StreamHandler` class.  If *stream* is
   specified, the instance will use it for logging output; otherwise,
   *sys.stderr* will be used.


   .. method:: emit(record)

      If a formatter is specified, it is used to format the record.  The record
      is then written to the stream with a trailing newline.  If exception
      information is present, it is formatted using
      :func:`traceback.print_exception` and appended to the stream.


   .. method:: flush()

      Flushes the stream by calling its :meth:`flush` method.  Note that the
      :meth:`close` method is inherited from :class:`Handler` and so does
      no output, so an explicit :meth:`flush` call may be needed at times.

   .. versionchanged:: 3.2
      The ``StreamHandler`` class now has a ``terminator`` attribute, default
      value ``'\n'``, which is used as the terminator when writing a formatted
      record to a stream.  If you don't want this newline termination, you can
      set the handler instance's ``terminator`` attribute to the empty string.
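
A short sketch using an in-memory stream, including the ``terminator`` attribute mentioned above:

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter('%(levelname)s:%(message)s'))

logger = logging.getLogger('demo.stream')
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info('first')     # terminated with '\n' by default
handler.terminator = ''  # suppress the newline termination
logger.info('second')
```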

.. _file-handler:

FileHandler
^^^^^^^^^^^

The :class:`FileHandler` class, located in the core :mod:`logging` package,
sends logging output to a disk file.  It inherits the output functionality from
:class:`StreamHandler`.


.. class:: FileHandler(filename, mode='a', encoding=None, delay=False)

   Returns a new instance of the :class:`FileHandler` class. The specified file
   is opened and used as the stream for logging. If *mode* is not specified,
   :const:`'a'` is used.  If *encoding* is not *None*, it is used to open the
   file with that encoding.  If *delay* is true, then file opening is deferred
   until the first call to :meth:`emit`.  By default, the file grows
   indefinitely.


   .. method:: close()

      Closes the file.


   .. method:: emit(record)

      Outputs the record to the file.
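
A runnable sketch, using a temporary directory and showing the effect of *delay*:

```python
import logging
import os
import tempfile

logdir = tempfile.mkdtemp()
path = os.path.join(logdir, 'app.log')

handler = logging.FileHandler(path, mode='w', encoding='utf-8', delay=True)
# With delay=True the file is not created until the first record arrives.
assert not os.path.exists(path)

logger = logging.getLogger('demo.file')
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info('written to disk')
handler.close()

assert os.path.exists(path)
```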

.. _null-handler:

NullHandler
^^^^^^^^^^^

.. versionadded:: 3.1

The :class:`NullHandler` class, located in the core :mod:`logging` package,
does not do any formatting or output.  It is essentially a 'no-op' handler
for use by library developers.

.. class:: NullHandler()

   Returns a new instance of the :class:`NullHandler` class.

   .. method:: emit(record)

      This method does nothing.

   .. method:: handle(record)

      This method does nothing.

   .. method:: createLock()

      This method returns ``None`` for the lock, since there is no
      underlying I/O to which access needs to be serialized.


See :ref:`library-config` for more information on how to use
:class:`NullHandler`.
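
The typical library-author usage is a one-liner; with a :class:`NullHandler` attached, a library's logging calls are silently discarded unless the application configures handlers of its own (the ``mylib`` names are illustrative):

```python
import logging

# In the library's top-level module:
logging.getLogger('mylib').addHandler(logging.NullHandler())

# Library code can now log freely; without application-side
# configuration these calls produce no output and no warnings.
logging.getLogger('mylib.worker').info('started')
```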

.. _watched-file-handler:

WatchedFileHandler
^^^^^^^^^^^^^^^^^^

.. currentmodule:: logging.handlers

The :class:`WatchedFileHandler` class, located in the :mod:`logging.handlers`
module, is a :class:`FileHandler` which watches the file it is logging to. If
the file changes, it is closed and reopened using the file name.

A file change can happen because of usage of programs such as *newsyslog* and
*logrotate* which perform log file rotation. This handler, intended for use
under Unix/Linux, watches the file to see if it has changed since the last emit.
(A file is deemed to have changed if its device or inode have changed.) If the
file has changed, the old file stream is closed, and the file opened to get a
new stream.

This handler is not appropriate for use under Windows, because under Windows
open log files cannot be moved or renamed - logging opens the files with
exclusive locks - and so there is no need for such a handler. Furthermore,
*ST_INO* is not supported under Windows; :func:`stat` always returns zero for
this value.


.. class:: WatchedFileHandler(filename[, mode[, encoding[, delay]]])

   Returns a new instance of the :class:`WatchedFileHandler` class. The
   specified file is opened and used as the stream for logging. If *mode* is
   not specified, :const:`'a'` is used.  If *encoding* is not *None*, it is
   used to open the file with that encoding.  If *delay* is true, then file
   opening is deferred until the first call to :meth:`emit`.  By default, the
   file grows indefinitely.


   .. method:: emit(record)

      Outputs the record to the file, but first checks to see if the file has
      changed.  If it has, the existing stream is flushed and closed and the
      file opened again, before outputting the record to the file.

.. _rotating-file-handler:

RotatingFileHandler
^^^^^^^^^^^^^^^^^^^

The :class:`RotatingFileHandler` class, located in the :mod:`logging.handlers`
module, supports rotation of disk log files.


.. class:: RotatingFileHandler(filename, mode='a', maxBytes=0, backupCount=0, encoding=None, delay=0)

   Returns a new instance of the :class:`RotatingFileHandler` class. The
   specified file is opened and used as the stream for logging. If *mode* is
   not specified, ``'a'`` is used.  If *encoding* is not *None*, it is used to
   open the file with that encoding.  If *delay* is true, then file opening is
   deferred until the first call to :meth:`emit`.  By default, the file grows
   indefinitely.

   You can use the *maxBytes* and *backupCount* values to allow the file to
   :dfn:`rollover` at a predetermined size. When the size is about to be
   exceeded, the file is closed and a new file is silently opened for output.
   Rollover occurs whenever the current log file is nearly *maxBytes* in
   length; if *maxBytes* is zero, rollover never occurs.  If *backupCount* is
   non-zero, the system will save old log files by appending the extensions
   '.1', '.2' etc., to the filename. For example, with a *backupCount* of 5
   and a base file name of :file:`app.log`, you would get :file:`app.log`,
   :file:`app.log.1`, :file:`app.log.2`, up to :file:`app.log.5`. The file
   being written to is always :file:`app.log`.  When this file is filled, it
   is closed and renamed to :file:`app.log.1`, and if files :file:`app.log.1`,
   :file:`app.log.2`, etc. exist, then they are renamed to :file:`app.log.2`,
   :file:`app.log.3` etc. respectively.


   .. method:: doRollover()

      Does a rollover, as described above.


   .. method:: emit(record)

      Outputs the record to the file, catering for rollover as described
      previously.
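
A runnable sketch of size-based rollover, with deliberately tiny limits so that rotation actually occurs:

```python
import logging
import logging.handlers
import os
import tempfile

logdir = tempfile.mkdtemp()
path = os.path.join(logdir, 'app.log')

handler = logging.handlers.RotatingFileHandler(
    path, maxBytes=64, backupCount=2)
logger = logging.getLogger('demo.rotate')
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

for i in range(20):
    logger.debug('message number %d', i)
handler.close()

# app.log plus at most backupCount backups remain in logdir.
files = sorted(os.listdir(logdir))
```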

.. _timed-rotating-file-handler:

TimedRotatingFileHandler
^^^^^^^^^^^^^^^^^^^^^^^^

The :class:`TimedRotatingFileHandler` class, located in the
:mod:`logging.handlers` module, supports rotation of disk log files at certain
timed intervals.


.. class:: TimedRotatingFileHandler(filename, when='h', interval=1, backupCount=0, encoding=None, delay=False, utc=False)

   Returns a new instance of the :class:`TimedRotatingFileHandler` class. The
   specified file is opened and used as the stream for logging. On rotating it
   also sets the filename suffix. Rotating happens based on the product of
   *when* and *interval*.

   You can use the *when* to specify the type of *interval*. The list of
   possible values is below.  Note that they are not case sensitive.

   +----------------+-----------------------+
   | Value          | Type of interval      |
   +================+=======================+
   | ``'S'``        | Seconds               |
   +----------------+-----------------------+
   | ``'M'``        | Minutes               |
   +----------------+-----------------------+
   | ``'H'``        | Hours                 |
   +----------------+-----------------------+
   | ``'D'``        | Days                  |
   +----------------+-----------------------+
   | ``'W'``        | Week day (0=Monday)   |
   +----------------+-----------------------+
   | ``'midnight'`` | Roll over at midnight |
   +----------------+-----------------------+

   The system will save old log files by appending extensions to the filename.
   The extensions are date-and-time based, using the strftime format
   ``%Y-%m-%d_%H-%M-%S`` or a leading portion thereof, depending on the
   rollover interval.

   When computing the next rollover time for the first time (when the handler
   is created), the last modification time of an existing log file, or else
   the current time, is used to compute when the next rotation will occur.

   If the *utc* argument is true, times in UTC will be used; otherwise
   local time is used.

   If *backupCount* is nonzero, at most *backupCount* files
   will be kept, and if more would be created when rollover occurs, the oldest
   one is deleted. The deletion logic uses the interval to determine which
   files to delete, so changing the interval may leave old files lying around.

   If *delay* is true, then file opening is deferred until the first call to
   :meth:`emit`.


   .. method:: doRollover()

      Does a rollover, as described above.


   .. method:: emit(record)

      Outputs the record to the file, catering for rollover as described above.
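
Creation is straightforward; this sketch sets up midnight rotation keeping a week of backups (no rollover actually occurs in the snippet, since that depends on elapsed time):

```python
import logging
import logging.handlers
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'timed.log')
handler = logging.handlers.TimedRotatingFileHandler(
    path, when='midnight', backupCount=7)

logger = logging.getLogger('demo.timed')
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info('one line per day-file')
handler.close()
```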

.. _socket-handler:

SocketHandler
^^^^^^^^^^^^^

The :class:`SocketHandler` class, located in the :mod:`logging.handlers` module,
sends logging output to a network socket. The base class uses a TCP socket.


.. class:: SocketHandler(host, port)

   Returns a new instance of the :class:`SocketHandler` class intended to
   communicate with a remote machine whose address is given by *host* and
   *port*.


   .. method:: close()

      Closes the socket.


   .. method:: emit()

      Pickles the record's attribute dictionary and writes it to the socket in
      binary format.  If there is an error with the socket, silently drops the
      packet.  If the connection was previously lost, re-establishes the
      connection.  To unpickle the record at the receiving end into a
      :class:`LogRecord`, use the :func:`makeLogRecord` function.


   .. method:: handleError()

      Handles an error which has occurred during :meth:`emit`.  The most likely
      cause is a lost connection.  Closes the socket so that we can retry on
      the next event.


   .. method:: makeSocket()

      This is a factory method which allows subclasses to define the precise
      type of socket they want.  The default implementation creates a TCP
      socket (:const:`socket.SOCK_STREAM`).


   .. method:: makePickle(record)

      Pickles the record's attribute dictionary in binary format with a length
      prefix, and returns it ready for transmission across the socket.

      Note that pickles aren't completely secure.  If you are concerned about
      security, you may want to override this method to implement a more
      secure mechanism.  For example, you can sign pickles using HMAC and then
      verify them on the receiving end, or alternatively you can disable
      unpickling of global objects on the receiving end.

   .. method:: send(packet)

      Send a pickled string *packet* to the socket.  This function allows for
      partial sends, which can happen when the network is busy.
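
An end-to-end sketch: a throwaway TCP server unpickles one record with :func:`makeLogRecord` while a :class:`SocketHandler` sends it (binding to port 0 lets the OS pick a free port):

```python
import logging
import logging.handlers
import pickle
import socketserver
import struct
import threading

received = []

class LogRecordReceiver(socketserver.StreamRequestHandler):
    def handle(self):
        # Wire format produced by makePickle(): a 4-byte big-endian
        # length prefix followed by the pickled attribute dictionary.
        slen = struct.unpack('>L', self.rfile.read(4))[0]
        data = self.rfile.read(slen)
        received.append(logging.makeLogRecord(pickle.loads(data)))

server = socketserver.TCPServer(('localhost', 0), LogRecordReceiver)
port = server.server_address[1]
t = threading.Thread(target=server.handle_request)
t.start()

handler = logging.handlers.SocketHandler('localhost', port)
logger = logging.getLogger('demo.socket')
logger.setLevel(logging.ERROR)
logger.addHandler(handler)
logger.error('over the wire')

handler.close()
t.join(timeout=5)
server.server_close()
```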
|
||||||
|
|
||||||
|
|
||||||
|
.. _datagram-handler:
|
||||||
|
|
||||||
|
DatagramHandler
|
||||||
|
^^^^^^^^^^^^^^^
|
||||||
|
|
||||||
|
The :class:`DatagramHandler` class, located in the :mod:`logging.handlers`
|
||||||
|
module, inherits from :class:`SocketHandler` to support sending logging messages
|
||||||
|
over UDP sockets.
|
||||||
|
|
||||||
|
|
||||||
|
.. class:: DatagramHandler(host, port)
|
||||||
|
|
||||||
|
Returns a new instance of the :class:`DatagramHandler` class intended to
|
||||||
|
communicate with a remote machine whose address is given by *host* and *port*.
|
||||||
|
|
||||||
|
|
||||||
|
.. method:: emit()
|
||||||
|
|
||||||
|
Pickles the record's attribute dictionary and writes it to the socket in
|
||||||
|
binary format. If there is an error with the socket, silently drops the
|
||||||
|
packet. To unpickle the record at the receiving end into a
|
||||||
|
:class:`LogRecord`, use the :func:`makeLogRecord` function.
|
||||||
|
|
||||||
|
|
||||||
|
.. method:: makeSocket()
|
||||||
|
|
||||||
|
The factory method of :class:`SocketHandler` is here overridden to create
|
||||||
|
a UDP socket (:const:`socket.SOCK_DGRAM`).
|
||||||
|
|
||||||
|
|
||||||
|
.. method:: send(s)
|
||||||
|
|
||||||
|
Send a pickled string to a socket.
|
||||||
|
|
||||||
|

.. _syslog-handler:

SysLogHandler
^^^^^^^^^^^^^

The :class:`SysLogHandler` class, located in the :mod:`logging.handlers` module,
supports sending logging messages to a remote or local Unix syslog.


.. class:: SysLogHandler(address=('localhost', SYSLOG_UDP_PORT), facility=LOG_USER, socktype=socket.SOCK_DGRAM)

   Returns a new instance of the :class:`SysLogHandler` class intended to
   communicate with a remote Unix machine whose address is given by *address* in
   the form of a ``(host, port)`` tuple. If *address* is not specified,
   ``('localhost', 514)`` is used. The address is used to open a socket. An
   alternative to providing a ``(host, port)`` tuple is providing an address as
   a string, for example ``'/dev/log'``. In this case, a Unix domain socket is
   used to send the message to the syslog. If *facility* is not specified,
   :const:`LOG_USER` is used. The type of socket opened depends on the
   *socktype* argument, which defaults to :const:`socket.SOCK_DGRAM` and thus
   opens a UDP socket. To open a TCP socket (for use with the newer syslog
   daemons such as rsyslog), specify a value of :const:`socket.SOCK_STREAM`.

   Note that if your server is not listening on UDP port 514,
   :class:`SysLogHandler` may appear not to work. In that case, check what
   address you should be using for a domain socket - it's system dependent.
   For example, on Linux it's usually ``'/dev/log'`` but on OS X it's
   ``'/var/run/syslog'``. You'll need to check your platform and use the
   appropriate address (you may need to do this check at runtime if your
   application needs to run on several platforms). On Windows, you pretty
   much have to use the UDP option.

   .. versionchanged:: 3.2
      *socktype* was added.


   .. method:: close()

      Closes the socket to the remote host.


   .. method:: emit(record)

      The record is formatted, and then sent to the syslog server. If exception
      information is present, it is *not* sent to the server.


   .. method:: encodePriority(facility, priority)

      Encodes the facility and priority into an integer. You can pass in strings
      or integers - if strings are passed, internal mapping dictionaries are
      used to convert them to integers.

      The symbolic ``LOG_`` values are defined in :class:`SysLogHandler` and
      mirror the values defined in the ``sys/syslog.h`` header file.

      **Priorities**

      +--------------------------+---------------+
      | Name (string)            | Symbolic value|
      +==========================+===============+
      | ``alert``                | LOG_ALERT     |
      +--------------------------+---------------+
      | ``crit`` or ``critical`` | LOG_CRIT      |
      +--------------------------+---------------+
      | ``debug``                | LOG_DEBUG     |
      +--------------------------+---------------+
      | ``emerg`` or ``panic``   | LOG_EMERG     |
      +--------------------------+---------------+
      | ``err`` or ``error``     | LOG_ERR       |
      +--------------------------+---------------+
      | ``info``                 | LOG_INFO      |
      +--------------------------+---------------+
      | ``notice``               | LOG_NOTICE    |
      +--------------------------+---------------+
      | ``warn`` or ``warning``  | LOG_WARNING   |
      +--------------------------+---------------+

      **Facilities**

      +---------------+---------------+
      | Name (string) | Symbolic value|
      +===============+===============+
      | ``auth``      | LOG_AUTH      |
      +---------------+---------------+
      | ``authpriv``  | LOG_AUTHPRIV  |
      +---------------+---------------+
      | ``cron``      | LOG_CRON      |
      +---------------+---------------+
      | ``daemon``    | LOG_DAEMON    |
      +---------------+---------------+
      | ``ftp``       | LOG_FTP       |
      +---------------+---------------+
      | ``kern``      | LOG_KERN      |
      +---------------+---------------+
      | ``lpr``       | LOG_LPR       |
      +---------------+---------------+
      | ``mail``      | LOG_MAIL      |
      +---------------+---------------+
      | ``news``      | LOG_NEWS      |
      +---------------+---------------+
      | ``syslog``    | LOG_SYSLOG    |
      +---------------+---------------+
      | ``user``      | LOG_USER      |
      +---------------+---------------+
      | ``uucp``      | LOG_UUCP      |
      +---------------+---------------+
      | ``local0``    | LOG_LOCAL0    |
      +---------------+---------------+
      | ``local1``    | LOG_LOCAL1    |
      +---------------+---------------+
      | ``local2``    | LOG_LOCAL2    |
      +---------------+---------------+
      | ``local3``    | LOG_LOCAL3    |
      +---------------+---------------+
      | ``local4``    | LOG_LOCAL4    |
      +---------------+---------------+
      | ``local5``    | LOG_LOCAL5    |
      +---------------+---------------+
      | ``local6``    | LOG_LOCAL6    |
      +---------------+---------------+
      | ``local7``    | LOG_LOCAL7    |
      +---------------+---------------+


   .. method:: mapPriority(levelname)

      Maps a logging level name to a syslog priority name.
      You may need to override this if you are using custom levels, or
      if the default algorithm is not suitable for your needs. The
      default algorithm maps ``DEBUG``, ``INFO``, ``WARNING``, ``ERROR`` and
      ``CRITICAL`` to the equivalent syslog names, and all other level
      names to 'warning'.

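The encoding described above can be demonstrated without a syslog daemon, since :meth:`encodePriority` and :meth:`mapPriority` are pure computations (the handler's default UDP socket to ``localhost:514`` opens successfully even when nothing is listening, because UDP is connectionless)::

```python
import logging.handlers

handler = logging.handlers.SysLogHandler()

# The facility is shifted left three bits and ORed with the priority:
# LOG_USER (1) << 3 | LOG_ALERT (1) == 9
print(handler.encodePriority('user', 'alert'))  # 9
print(handler.encodePriority(handler.LOG_USER, handler.LOG_ALERT))  # 9

# Standard level names map to their syslog equivalents; anything
# unrecognised falls back to 'warning'.
print(handler.mapPriority('CRITICAL'))  # critical
print(handler.mapPriority('TRACE'))    # warning

handler.close()
```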

.. _nt-eventlog-handler:

NTEventLogHandler
^^^^^^^^^^^^^^^^^

The :class:`NTEventLogHandler` class, located in the :mod:`logging.handlers`
module, supports sending logging messages to a local Windows NT, Windows 2000 or
Windows XP event log. Before you can use it, you need Mark Hammond's Win32
extensions for Python installed.


.. class:: NTEventLogHandler(appname, dllname=None, logtype='Application')

   Returns a new instance of the :class:`NTEventLogHandler` class. The *appname*
   is used to define the application name as it appears in the event log. An
   appropriate registry entry is created using this name. The *dllname* should
   give the fully qualified pathname of a .dll or .exe which contains message
   definitions to hold in the log (if not specified, ``'win32service.pyd'`` is
   used - this is installed with the Win32 extensions and contains some basic
   placeholder message definitions. Note that use of these placeholders will
   make your event logs big, as the entire message source is held in the log. If
   you want slimmer logs, you have to pass in the name of your own .dll or .exe
   which contains the message definitions you want to use in the event log). The
   *logtype* is one of ``'Application'``, ``'System'`` or ``'Security'``, and
   defaults to ``'Application'``.


   .. method:: close()

      At this point, you can remove the application name from the registry as a
      source of event log entries. However, if you do this, you will not be able
      to see the events as you intended in the Event Log Viewer - it needs to be
      able to access the registry to get the .dll name. The current version does
      not do this.


   .. method:: emit(record)

      Determines the message ID, event category and event type, and then logs
      the message in the NT event log.


   .. method:: getEventCategory(record)

      Returns the event category for the record. Override this if you want to
      specify your own categories. This version returns 0.


   .. method:: getEventType(record)

      Returns the event type for the record. Override this if you want to
      specify your own types. This version does a mapping using the handler's
      typemap attribute, which is set up in :meth:`__init__` to a dictionary
      which contains mappings for :const:`DEBUG`, :const:`INFO`,
      :const:`WARNING`, :const:`ERROR` and :const:`CRITICAL`. If you are using
      your own levels, you will either need to override this method or place a
      suitable dictionary in the handler's *typemap* attribute.


   .. method:: getMessageID(record)

      Returns the message ID for the record. If you are using your own messages,
      you could do this by having the *msg* passed to the logger being an ID
      rather than a format string. Then, in here, you could use a dictionary
      lookup to get the message ID. This version returns 1, which is the base
      message ID in :file:`win32service.pyd`.


.. _smtp-handler:

SMTPHandler
^^^^^^^^^^^

The :class:`SMTPHandler` class, located in the :mod:`logging.handlers` module,
supports sending logging messages to an email address via SMTP.


.. class:: SMTPHandler(mailhost, fromaddr, toaddrs, subject, credentials=None)

   Returns a new instance of the :class:`SMTPHandler` class. The instance is
   initialized with the from and to addresses and subject line of the email. The
   *toaddrs* should be a list of strings. To specify a non-standard SMTP port,
   use the (host, port) tuple format for the *mailhost* argument. If you use a
   string, the standard SMTP port is used. If your SMTP server requires
   authentication, you can specify a (username, password) tuple for the
   *credentials* argument.


   .. method:: emit(record)

      Formats the record and sends it to the specified addressees.


   .. method:: getSubject(record)

      If you want to specify a subject line which is record-dependent, override
      this method.

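A :meth:`getSubject` override can be exercised without a mail server, because the handler only connects when a record is emitted. The host and addresses below are placeholders::

```python
import logging
import logging.handlers

class LevelSubjectSMTPHandler(logging.handlers.SMTPHandler):
    """SMTPHandler whose subject line includes the record's level name."""

    def getSubject(self, record):
        # self.subject is the subject string passed to the constructor.
        return '[%s] %s' % (record.levelname, self.subject)

handler = LevelSubjectSMTPHandler(
    mailhost=('smtp.example.com', 25),   # placeholder server
    fromaddr='app@example.com',
    toaddrs=['admin@example.com'],
    subject='Application alert')

record = logging.LogRecord('app', logging.ERROR, __file__, 42,
                           'Disk almost full', None, None)
print(handler.getSubject(record))  # [ERROR] Application alert
```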

.. _memory-handler:

MemoryHandler
^^^^^^^^^^^^^

The :class:`MemoryHandler` class, located in the :mod:`logging.handlers` module,
supports buffering of logging records in memory, periodically flushing them to a
:dfn:`target` handler. Flushing occurs whenever the buffer is full, or when an
event of a certain severity or greater is seen.

:class:`MemoryHandler` is a subclass of the more general
:class:`BufferingHandler`, which is an abstract class. This buffers logging
records in memory. Whenever a record is added to the buffer, a check is made
by calling :meth:`shouldFlush` to see if the buffer should be flushed. If it
should, then :meth:`flush` is expected to flush the buffer.


.. class:: BufferingHandler(capacity)

   Initializes the handler with a buffer of the specified capacity.


   .. method:: emit(record)

      Appends the record to the buffer. If :meth:`shouldFlush` returns true,
      calls :meth:`flush` to process the buffer.


   .. method:: flush()

      You can override this to implement custom flushing behavior. This version
      just empties the buffer.


   .. method:: shouldFlush(record)

      Returns true if the buffer is up to capacity. This method can be
      overridden to implement custom flushing strategies.


.. class:: MemoryHandler(capacity, flushLevel=ERROR, target=None)

   Returns a new instance of the :class:`MemoryHandler` class. The instance is
   initialized with a buffer size of *capacity*. If *flushLevel* is not
   specified, :const:`ERROR` is used. If no *target* is specified, the target
   will need to be set using :meth:`setTarget` before this handler does anything
   useful.


   .. method:: close()

      Calls :meth:`flush`, sets the target to :const:`None` and clears the
      buffer.


   .. method:: flush()

      For a :class:`MemoryHandler`, flushing means just sending the buffered
      records to the target, if there is one. The buffer is also cleared when
      this happens. Override if you want different behavior.


   .. method:: setTarget(target)

      Sets the target handler for this handler.


   .. method:: shouldFlush(record)

      Checks for buffer full or a record at the *flushLevel* or higher.

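The buffering and flush-on-severity behaviour can be sketched with a small target handler that collects records in a list (the list-collecting target is a device for the demo; any handler works as a target)::

```python
import logging
import logging.handlers

class ListHandler(logging.Handler):
    """A target handler that records everything it is asked to emit."""
    def __init__(self):
        super().__init__()
        self.records = []
    def emit(self, record):
        self.records.append(record)

target = ListHandler()
memory = logging.handlers.MemoryHandler(capacity=4,
                                        flushLevel=logging.ERROR,
                                        target=target)
logger = logging.getLogger('buffered_demo')
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(memory)

logger.debug('one')
logger.debug('two')
print(len(target.records))  # 0 - both records are still buffered

logger.error('boom')        # at flushLevel, so the buffer is flushed
print(len(target.records))  # 3

memory.close()
```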

.. _http-handler:

HTTPHandler
^^^^^^^^^^^

The :class:`HTTPHandler` class, located in the :mod:`logging.handlers` module,
supports sending logging messages to a Web server, using either ``GET`` or
``POST`` semantics.


.. class:: HTTPHandler(host, url, method='GET', secure=False, credentials=None)

   Returns a new instance of the :class:`HTTPHandler` class. The *host* can be
   of the form ``host:port``, should you need to use a specific port number.
   If no *method* is specified, ``GET`` is used. If *secure* is True, an HTTPS
   connection will be used. If *credentials* is specified, it should be a
   2-tuple consisting of userid and password, which will be placed in an HTTP
   'Authorization' header using Basic authentication. If you specify
   credentials, you should also specify secure=True so that your userid and
   password are not passed in cleartext across the wire.


   .. method:: emit(record)

      Sends the record to the Web server as a percent-encoded dictionary.

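The ``GET`` semantics can be observed end-to-end with a throwaway :mod:`http.server` instance; the record's attributes arrive as the request's query string. The ``/log`` URL and server setup here are purely for the demo::

```python
import logging
import logging.handlers
import threading
import urllib.parse
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # query dicts captured by the throwaway server

class LogReceiver(BaseHTTPRequestHandler):
    def do_GET(self):
        # For GET, the handler appends the record dict as a query string.
        query = urllib.parse.urlparse(self.path).query
        received.append(dict(urllib.parse.parse_qsl(query)))
        self.send_response(200)
        self.end_headers()
    def log_message(self, *args):
        pass  # keep the demo server quiet

server = HTTPServer(('localhost', 0), LogReceiver)
threading.Thread(target=server.serve_forever, daemon=True).start()

logger = logging.getLogger('http_demo')
logger.setLevel(logging.DEBUG)
logger.propagate = False
handler = logging.handlers.HTTPHandler(
    'localhost:%d' % server.server_address[1], '/log', method='GET')
logger.addHandler(handler)

logger.warning('page not found')

handler.close()
server.shutdown()
print(received[0]['msg'])  # page not found
```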

.. _queue-handler:


QueueHandler
^^^^^^^^^^^^

.. versionadded:: 3.2

The :class:`QueueHandler` class, located in the :mod:`logging.handlers` module,
supports sending logging messages to a queue, such as those implemented in the
:mod:`queue` or :mod:`multiprocessing` modules.

Along with the :class:`QueueListener` class, :class:`QueueHandler` can be used
to let handlers do their work on a separate thread from the one which does the
logging. This is important in Web applications and also other service
applications where threads servicing clients need to respond as quickly as
possible, while any potentially slow operations (such as sending an email via
:class:`SMTPHandler`) are done on a separate thread.

.. class:: QueueHandler(queue)

   Returns a new instance of the :class:`QueueHandler` class. The instance is
   initialized with the queue to send messages to. The queue can be any
   queue-like object; it's used as-is by the :meth:`enqueue` method, which
   needs to know how to send messages to it.


   .. method:: emit(record)

      Enqueues the result of preparing the LogRecord.

   .. method:: prepare(record)

      Prepares a record for queuing. The object returned by this
      method is enqueued.

      The base implementation formats the record to merge the message
      and arguments, and removes unpickleable items from the record
      in-place.

      You might want to override this method if you want to convert
      the record to a dict or JSON string, or send a modified copy
      of the record while leaving the original intact.

   .. method:: enqueue(record)

      Enqueues the record on the queue using ``put_nowait()``; you may
      want to override this if you want to use blocking behaviour, or a
      timeout, or a customised queue implementation.


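A short sketch of the handler on its own, showing that what lands on the queue is a :class:`LogRecord` already processed by :meth:`prepare` (the logger name is arbitrary)::

```python
import logging
import logging.handlers
import queue

log_queue = queue.Queue()
logger = logging.getLogger('queue_demo')
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(logging.handlers.QueueHandler(log_queue))

logger.info('sent %s bytes', 1024)

# prepare() has already merged the message and arguments, so the queued
# record carries the final text.
record = log_queue.get_nowait()
print(record.getMessage())  # sent 1024 bytes
```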

.. _queue-listener:

QueueListener
^^^^^^^^^^^^^

.. versionadded:: 3.2

The :class:`QueueListener` class, located in the :mod:`logging.handlers`
module, supports receiving logging messages from a queue, such as those
implemented in the :mod:`queue` or :mod:`multiprocessing` modules. The
messages are received from a queue in an internal thread and passed, on
the same thread, to one or more handlers for processing. While
:class:`QueueListener` is not itself a handler, it is documented here
because it works hand-in-hand with :class:`QueueHandler`.

Along with the :class:`QueueHandler` class, :class:`QueueListener` can be used
to let handlers do their work on a separate thread from the one which does the
logging. This is important in Web applications and also other service
applications where threads servicing clients need to respond as quickly as
possible, while any potentially slow operations (such as sending an email via
:class:`SMTPHandler`) are done on a separate thread.

.. class:: QueueListener(queue, *handlers)

   Returns a new instance of the :class:`QueueListener` class. The instance is
   initialized with the queue to receive messages from and a list of handlers
   which will handle entries placed on the queue. The queue can be any
   queue-like object; it's passed as-is to the :meth:`dequeue` method, which
   needs to know how to get messages from it.

   .. method:: dequeue(block)

      Dequeues a record and returns it, optionally blocking.

      The base implementation uses ``get()``. You may want to override this
      method if you want to use timeouts or work with custom queue
      implementations.

   .. method:: prepare(record)

      Prepare a record for handling.

      This implementation just returns the passed-in record. You may want to
      override this method if you need to do any custom marshalling or
      manipulation of the record before passing it to the handlers.

   .. method:: handle(record)

      Handle a record.

      This just loops through the handlers offering them the record
      to handle. The actual object passed to the handlers is that which
      is returned from :meth:`prepare`.

   .. method:: start()

      Starts the listener.

      This starts up a background thread to monitor the queue for
      LogRecords to process.

   .. method:: stop()

      Stops the listener.

      This asks the thread to terminate, and then waits for it to do so.
      Note that if you don't call this before your application exits, there
      may be some records still left on the queue, which won't be processed.

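Putting the two classes together: the listener's internal thread drains the queue and passes each record to the attached handlers. The list-collecting handler below is a device to make the off-thread work observable; in a real application you would attach something like a :class:`~logging.FileHandler` or :class:`SMTPHandler`::

```python
import logging
import logging.handlers
import queue

class CollectingHandler(logging.Handler):
    """Collects records in a list so the listener's work is observable."""
    def __init__(self):
        super().__init__()
        self.records = []
    def emit(self, record):
        self.records.append(record)

q = queue.Queue()
collector = CollectingHandler()
listener = logging.handlers.QueueListener(q, collector)
listener.start()  # background thread now monitors the queue

logger = logging.getLogger('listener_demo')
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(logging.handlers.QueueHandler(q))

logger.info('handled off-thread')

listener.stop()  # drains remaining records and joins the thread
print(collector.records[0].getMessage())  # handled off-thread
```

Calling :meth:`stop` before inspecting the collector matters: it guarantees the internal thread has finished processing everything that was enqueued.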

.. seealso::

   Module :mod:`logging`
      API reference for the logging module.

   Module :mod:`logging.config`
      Configuration API for the logging module.