Logging is the process of recording the activities and data of a software program. It is an important aspect of developing, debugging, and running software, as it helps developers track their program, understand its flow, and discover unexpected scenarios and problems. Log records are extremely helpful when a developer has to debug or maintain another developer’s code. In other words, logging is not only about debugging errors more easily, but also about collecting useful data from the right places so you can analyze performance and plan the application’s further development.
In this article, we will explain how logging works in Python applications. We will cover basic Python logging concepts, the best practices you should follow to get the most out of your Python logs, and more.
What Is Logging in Python?
Logging in Python is a way to record information about your Python scripts and keep track of events generated by the scripts as they take place. Logging is essential when creating any complex Python script as it helps the developers to locate an exact region of the script that needs attention. Without logging, finding the root cause of an issue in your code can be a time-consuming task, especially if the script contains hundreds or thousands of lines of code.
Moreover, logging in Python is straightforward and quick. You don’t need to install anything to get started with Python logging because the Python standard library includes a logging module. Simply import the logging module to use the module in your script.
Printing vs. Logging
Python developers often find themselves choosing between printing and logging. Print statements are easy to use, but they don’t come with the rich feature set that the Python logging module provides.
Some developers use the print() function to check that the code works without errors. You can use print() to display variable values and track down a logic error, which might work fine with simple scripts. However, when you need more information, such as the date and time a value was assigned to a variable, you have to write additional code around each print() call.
Python logging, on the other hand, comes with such options and features built in, making it a far better choice:

- print() statements require access to the console. In addition, print messages are momentary: as soon as you close the program, the output is gone. Python logs, by contrast, can be written to various kinds of persistent storage, so you can keep them for later use and share the log information with others.
- Logging provides a lot more information than print() statements. For example, a log message can tell you where it was generated, the error line, and the time the message was generated. You would need several lines of code to reproduce even basic logging output using print statements.
- Logging allows tagging severity levels to the log messages. The standard log levels are DEBUG, INFO, WARNING, ERROR, and CRITICAL. For instance, a general log message can be tagged as INFO, whereas a message that requires immediate attention can be tagged as CRITICAL. Implementing the same functionality with the print() function would take many lines of code and still be an inefficient solution.
Python Logging Module
The Python logging module is a set of classes and functions that implement a flexible event logging and tracking system for Python applications. The main advantage of having the logging API supplied by a standard library module is that all Python modules can participate in logging, allowing you to combine messages from your own modules with those from third-party modules in your application log.
To begin using the logging module, import it into your Python program:
import logging
To insert a log message, call the relevant function that the logging module provides. The module provides five different severity levels. An example of each of them – in order of increasing severity and including their output – is given below.
import logging
...
logging.debug('This is a debug message')
logging.info('This is an info message')
logging.warning('This is a warning message')
logging.error('This is an error message')
logging.critical('This is a critical message')
The default shell output for the above code will be:
WARNING:root:This is a warning message
ERROR:root:This is an error message
CRITICAL:root:This is a critical message
Let’s dissect the above output. You will notice that only LogRecords at WARNING severity and above have been logged. That is because the default logger (root) ignores all severity levels below WARNING. Notice that the severity level appears before the first colon (:) of each line.
Similarly, root is the name of the logger that produced the LogRecord. There are two key things to remember about loggers.
- First, the root logger sits at the top of the logger hierarchy. You can instantiate multiple loggers, but the root logger always remains at the top of the hierarchy.
- Second, you can format the log messages captured by the root logger or use a new custom logger. If you want to format messages captured by the root logger, use the basicConfig() function, but remember that custom loggers propagate their records up to the root logger, so changing the root logger can affect the custom loggers too.
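This relationship can be demonstrated with a short sketch: a named logger with no level of its own falls back to the root logger’s effective level, so reconfiguring root changes its behavior too (the logger name myapp is just an example):

```python
import logging

# A named logger with no level of its own defers to its parent,
# which here is the root logger
child = logging.getLogger('myapp')

logging.getLogger().setLevel(logging.ERROR)
print(child.getEffectiveLevel() == logging.ERROR)   # True

# Reconfiguring root therefore changes the custom logger's behavior too
logging.getLogger().setLevel(logging.DEBUG)
print(child.getEffectiveLevel() == logging.DEBUG)   # True
```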
Basic Logging Configuration
The Python logging module has a default configuration, which displays logs in the console in the following format:
severity:logger name:message
Also, as you can see in the previous example, it doesn’t capture the LogRecords generated by the debug() and info() calls. That is because the default severity level for the root logger is WARNING. However, you can configure the module to capture all severity levels.
To change the basic configuration, use the basicConfig(**kwargs) method, where **kwargs indicates that the function takes a variable number of arguments in key-value format. This means you can set values for all the parameters together or set only a few, leaving the rest at their default values.
The commonly used parameters in the basicConfig() function are:
- level: Sets the severity level of the root logger. Severity levels are discussed later in this article.
- filename: Specifies the file in which you want to store the logs.
- filemode: Specifies the write mode, where the default file mode is a (append).
- format: Specifies an appropriate format for a log message.
For example, if you want to store all of your logs in the example.log file and capture all severity levels from DEBUG upwards, run the following code:
logging.basicConfig(filename='example.log', level=logging.DEBUG)
If you want to discard the logs from earlier runs and start with a fresh log file on each run, pass filemode='w' along with a filename:
logging.basicConfig(filename='example.log', filemode='w')
In addition to the parameters mentioned above, additional parameters for basicConfig() are available in the Python logging documentation.
Note that the basicConfig() function can only configure the root logger once. If you call debug(), info(), warning(), error(), or critical() before calling basicConfig(), they will call basicConfig() internally with default values, and you won’t be able to configure the root logger with basicConfig() any longer.
Here is an example of how you can configure the logger using the basicConfig() function.
import logging
...
logging.basicConfig(filename='basic.log', encoding='utf-8', level=logging.INFO, filemode='w', format='%(process)d-%(levelname)s-%(message)s')
logging.info('This will write in basic.log')
logging.warning('This will also write in basic.log')
logging.debug('This will not write in basic.log')
If you were to execute the code above, basic.log would contain something like:
18471-INFO-This will write in basic.log
18471-WARNING-This will also write in basic.log
Custom Python Logger Configuration
The basicConfig() function initializes the root logger. However, there are certain issues with relying solely on the root logger.
- Once configured, it cannot be changed;
- It will not work with multiple handlers at a time, i.e., you can’t log to both the console and a file simultaneously;
- It doesn’t allow different configurations for different severity levels.
For these reasons, it is recommended to configure the root logger but to avoid using the root logger in your code. Instead, create your own logger for each package by leveraging the logging module.
The Python logging module has four main components.
- Loggers: Refers to the object that manages the logging functionalities;
- Filters: Provide finer-grained control over which log records are processed, beyond severity levels;
- Handlers: Specifies the log destinations, i.e., console, streams, file, etc.;
- Formatters: Formats the log message content.
Python Logging Levels
The Python logging module is shipped with five pre-defined severity levels:

- DEBUG – Used to diagnose issues and provide detailed information.
- INFO – Provides general information on various events in the program.
- WARNING – Used to warn that something unexpected occurred or might occur.
- ERROR – Used to report that the program has failed to execute something.
- CRITICAL – Used to warn that the program is shutting down or can no longer run.
These severity levels are assigned numerical values in Python logging, which determine the logging levels during runtime.
- NOTSET – 0
- DEBUG – 10
- INFO – 20
- WARNING – 30
- ERROR – 40
- CRITICAL – 50
While the above five levels are capable of handling most severity levels, you can still introduce new levels by using numeric values that are greater than 0 and not equal to predefined logging levels. However, it is recommended to limit your severity levels to the predefined levels.
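If you do need a custom level, it can be sketched as follows; the number 25 and the name NOTICE are our own illustrative choices, not part of the standard levels:

```python
import logging

# Register a custom level between INFO (20) and WARNING (30).
NOTICE = 25
logging.addLevelName(NOTICE, 'NOTICE')

logging.basicConfig(level=logging.INFO, format='%(levelname)s:%(message)s')

# logging.log() accepts any numeric level, including custom ones
logging.log(NOTICE, 'something noteworthy happened')
```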
Python Log Handlers
Python log handlers are the components responsible for setting the destination of log messages and dispatching them to the destination. The Python standard library includes several log handlers, each having two important fields: formatter and log level. You can assign a separate log level and format to each handler.
For example, you can create two handlers – one to write to an HTTP destination and the other to write to a stream – as follows:
import logging
import logging.handlers

# Create a custom logger
logger = logging.getLogger(__name__)

# Create stream handler
stream_handle = logging.StreamHandler()

# Create HTTP handler
http_handle = logging.handlers.HTTPHandler('example.com', '/path', method='POST', secure=True, credentials=('username', 'password'), context=None)

# Set severity levels
stream_handle.setLevel(logging.INFO)
http_handle.setLevel(logging.WARNING)

# Create formatters
stream_handle_format = logging.Formatter('%(process)s - %(message)s')
http_handle_format = logging.Formatter('%(levelname)s - %(message)s - %(asctime)s')

# Assign formatters to handlers
stream_handle.setFormatter(stream_handle_format)
http_handle.setFormatter(http_handle_format)

# Add handlers to the logger
logger.addHandler(stream_handle)
logger.addHandler(http_handle)

# Log messages
logger.warning('This is a warning')
logger.error('This is an error')
Once you execute the code above, the shell output produced by the stream handler will look like:
192345 - This is a warning
192345 - This is an error
Some of the standard log handlers are:

- StreamHandler() outputs logs to streams or any file-like object, e.g., to display them in the console
- FileHandler() outputs logs to a disk file
- SMTPHandler() sends log messages to an email address via SMTP
- HTTPHandler() sends log messages to a web server using GET or POST verbs
Python Log Filters
Python log filters are used to perform more advanced filtering than level filtering. They can be used at the logger level or the handler level. Filters attached to handlers are consulted before an event is emitted by the handler, whereas filters attached to loggers are consulted whenever an event is logged before sending an event to handlers.
Log filters help stop certain log events from being logged or inject additional context into a log message. A name-based filter relies on the logger hierarchy: it only records messages originating from the named logger or any of its children, and rejects messages from all other loggers.
Let’s take an example to understand this better. Consider the following loggers:
+ prog.person
|   - prog.person.users
+ prog.items
    - prog.items.products
    - prog.items.images
If you set up a filter with the prog.items logger, the filter will only record the messages coming from the prog.items, prog.items.products, and prog.items.images loggers.
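A minimal sketch of such a name-based filter, reusing the example logger names above:

```python
import logging

# Attach a Filter named 'prog.items' to a handler; only records from
# prog.items and its children pass through
handler = logging.StreamHandler()
handler.addFilter(logging.Filter('prog.items'))

prog_logger = logging.getLogger('prog')
prog_logger.addHandler(handler)
prog_logger.setLevel(logging.INFO)

# Child loggers propagate their records up to 'prog', where the
# handler's filter decides what is emitted
logging.getLogger('prog.items.products').info('recorded')   # passes the filter
logging.getLogger('prog.person.users').info('dropped')      # filtered out
```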
Python Formatters
Formatter objects are used by the Python logging handlers to enhance the information available through LogRecord attributes. The Python standard library provides the logging.Formatter() class, which takes a format string as its first parameter.
exampleFormatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
Consider the code above. The formatter uses the basic elements of the LogRecord object, where:

- the %(asctime)s attribute represents the time the log message was generated
- the %(name)s attribute represents the name of the logger object
- the %(levelname)s attribute represents the severity level of the log message
- the %(message)s attribute represents the log message.
By using the available LogRecord attributes, you can enhance the information in the log message. If you don’t format your log messages with a Formatter, the default output will only include the data represented by the %(message)s attribute.
fileConfig()
You can configure logging in Python using the logger as shown in the previous sections or by using fileConfig(). This method reads the logging configuration from a configparser-format file that follows the guidelines described in the Configuration file format documentation. You can call this function multiple times from an application and select from a variety of predefined setups.
You can call the fileConfig() method with the following parameters:

- fname: The configuration filename, or a file-like object.
- defaults: Default values passed to the ConfigParser.
- encoding: Encoding applied to the file, if fname represents a filename.
- disable_existing_loggers: If True, disables all loggers that are active when this function is called.
logging.config.fileConfig('logging.ini', defaults={'logfilename': '/var/log/mylog.log'}, disable_existing_loggers=True, encoding=None)
Here is a sample of the configuration file:
[loggers]
keys=root,sampleLogger

[handlers]
keys=consoleHandler

[formatters]
keys=sampleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[logger_sampleLogger]
level=DEBUG
handlers=consoleHandler
qualname=sampleLogger
propagate=0

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=sampleFormatter
args=(sys.stdout,)

[formatter_sampleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
Using these components, you can create a custom Python logger. The steps below show you how to implement a simple custom logger.
Step 1: Create a Logger
When creating a logger, we need two methods – getLogger() and setLevel().
getLogger() takes a single argument, which is the name of the custom logger. When executed, this method creates a custom logger with the specified name. setLevel() is responsible for setting the severity level of the log messages that the custom logger generates.
logger = logging.getLogger('example_logger')
logger.setLevel(logging.DEBUG)
In the code above, the getLogger() method creates and references a custom logger named example_logger. setLevel() sets the severity level of messages generated by example_logger to DEBUG.
Step 2: Create a Handler
Once the logger is created, you have to assign the necessary handler. The Python standard library offers several handler classes.
exampleHandler = logging.FileHandler('SPOT.log')
exampleHandler.setLevel(logging.DEBUG)
The code above outputs the log to the SPOT.log file and sets the minimum severity level this handler will process to DEBUG.
Step 3: Create a Formatter and Assign to the Handler
After you have assigned a handler, you can create a formatter object and specify the format of the output this logger generates.
exampleFormatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
Then, you must assign this formatter object to the handler object using the setFormatter() method.
exampleHandler.setFormatter(exampleFormatter)
Step 4: Assign the Handler to the Logger
The next step is to add the handler object to the logger object. You can do it using the addHandler() method, as shown in the example below.
logger.addHandler(exampleHandler)
Step 5: Write the Log Messages
After completing all the above steps correctly, you can write the log messages to the log file. An example code for writing a log message is shown below.
logger.debug('write your message')
Logging Multiple Modules
One of the best practices related to Python logging is to create a logger for each module. This gives you granular control over the handler and formatter for each module. Moreover, creating multiple loggers for your modules is quick, too.
Let’s take a look at the file structure of a simple project.
Project
-- myapp.py
-- pkg01
   -- mylib.py
-- pkg02
   -- mylib2.py
-- logging.conf
If you want to implement a separate logger for each module – mylib.py and mylib2.py – you can simply copy and paste the syntax below near the top of each module.
# mylib.py
import logging

def do_something():
    logging.info('Doing something')
To give a module its own named logger, call logging.getLogger() near the top of the module, passing either __name__ (to use the module’s name) or a name of your choosing. The getLogger() method uses the name to create and reference a logger with the same name. Then, in your main program, copy and paste the following code.
# myapp.py
import logging
import mylib

def main():
    logging.basicConfig(filename='myapp.log', level=logging.INFO)
    logging.info('Started')
    mylib.do_something()
    logging.info('Finished')

if __name__ == '__main__':
    main()
If you run myapp.py, you should see this in myapp.log.
INFO:root:Started
INFO:root:Doing something
INFO:root:Finished
Note that multiple calls to logging.getLogger('loggerName') return a reference to the same logger object, in this case the loggerName object. This holds true both within and between modules as long as they are running in the same Python interpreter session.
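You can verify this identity yourself (the logger name myapp.db is just an example):

```python
import logging

# getLogger() with the same name always returns the same object,
# so configuration done anywhere is visible everywhere
a = logging.getLogger('myapp.db')
b = logging.getLogger('myapp.db')
print(a is b)  # True
```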
Because references resolve to the same object, the application code can define and configure a parent logger in one module and create (but not configure) a child logger in a separate module, and all logger calls to the child will pass up to the parent.
You can also use the getLogger() method to create a child logger under a logger named 'parent_logger_name'. Note that loggers are organized in a hierarchy, so 'parent_logger_name' will be the parent of the newly created child logger, and root will be the parent of 'parent_logger_name'.
To create a child logger under 'parent_logger_name', run:
logger = logging.getLogger("parent_logger_name.child_logger_name")
Using a dot (.) before child_logger_name indicates that parent_logger_name is its parent. Once a child logger is created, it inherits the parent’s properties automatically but can be reconfigured later. Let’s demonstrate with an example.
import logging

# Create parent.child logger
logger = logging.getLogger("parent_logger_name.child_logger_name")

# By default the following LogRecord is not printed to the screen
logger.info("this is info level")

# Create parent logger
parentLogger = logging.getLogger("parent_logger_name")

# Set parent's level to INFO and assign a new handler
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(name)s - %(levelname)s - %(message)s"))
parentLogger.setLevel(logging.INFO)
parentLogger.addHandler(handler)

# Run the LogRecord again
logger.info("INFO Level printed.")
If you run the code above, the first instance of logger.info() won’t print anything to the screen because the INFO severity level is ignored by default. However, after the severity level of the parent is configured, the child automatically picks up the parent’s configuration. The output of the above code will be:
parent_logger_name.child_logger_name - INFO - INFO Level printed.
Logging Variable Data
When you want to add dynamic information to the logs, you can do so by embedding variable values in the log messages instead of hard-coded strings. Variables work with all severity levels.
You can use text to describe the event and then append the variable data to the log as:
import logging
...
fn_Name = 'ReadUser'
...
logging.info('Execution of %s function is complete.', fn_Name)
Note that we use %s as a placeholder. The value of the variable replaces %s in the output, so it will look like this:
INFO:root: Execution of ReadUser function is complete.
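A side benefit of passing the variable as an argument rather than pre-formatting the string yourself is that the %-interpolation is deferred until the record is actually emitted, so messages filtered out by level cost almost nothing. A short sketch:

```python
import logging

logging.basicConfig(level=logging.INFO)

fn_Name = 'ReadUser'

# The %s placeholder is only interpolated if the record passes the
# level check, unlike an f-string, which is always formatted up front
logging.info('Execution of %s function is complete.', fn_Name)
```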
Logging Python Exceptions
Capturing exceptions is a core task of any logging setup. The Python logging module can capture the full stack trace of an exception, which provides in-depth information about an error.
You can capture exceptions by calling the logging.exception() method in your exception handler. This method records a message with the exception information at ERROR severity level, so it is recommended to call logging.exception() only from inside an exception handler, as below:
import logging

try:
    printf('A test message')
except Exception as Argument:
    logging.exception("Error occurred when executing try block")
Because logging.exception() is placed inside the exception handler, executing the above code produces this output:
ERROR:root:Error occurred when executing try block
Traceback (most recent call last):
  File "C:\Users\index.py", line 3, in <module>
    printf("A test message")
NameError: name 'printf' is not defined
Python Logs Location
By default, Python logs are sent to the console. However, you can easily change the destination to fit your use case. Below are the most common log destinations.
Logging to a File
To create a log file in Python, you can use the default Python logging module and specify the file name in your code, as shown below. The following code will create a file called DemoFile.log containing the log messages.
logging.basicConfig(filename='DemoFile.log')
Logging to Syslog
Python offers two methods to write to the Unix syslog:

- Using the syslog module, which interfaces with the Unix syslog library routines
- Using the SysLogHandler in the logging module.
To send a log message through the syslog module, use this syntax:
import syslog
...
syslog.syslog(log_message)
if error:
    syslog.syslog(syslog.LOG_ERR, log_message)
To send a log message through SysLogHandler, run:
logging.handlers.SysLogHandler(address=(host,port))
In the example above, address is a (host, port) tuple, where host indicates the syslog server address and port is the UDP port number.
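A minimal sketch of wiring a SysLogHandler into a logger; the address below assumes a local syslog daemon listening on the default UDP port 514:

```python
import logging
import logging.handlers

logger = logging.getLogger('syslog_example')

# Point the handler at the local syslog daemon (assumed address)
handler = logging.handlers.SysLogHandler(address=('localhost', 514))
handler.setFormatter(logging.Formatter('%(name)s: %(levelname)s %(message)s'))

logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info('message sent to syslog')
```

Since this is UDP, delivery is fire-and-forget; for reliable delivery you can pass socktype=socket.SOCK_STREAM to use TCP instead.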
Logging to systemd-journald
If your Python program runs as a systemd unit, as on most contemporary Linux systems, anything it outputs to the console is written to the Journal, which is controlled by journald. But if you want to make the best use of journald’s abilities, such as faster lookups, enforced structured logging, and automatic log rotation, you will need to install the systemd-python library. To do this, run:
pip install systemd-python
Once installed, you can start logging with systemd using a method similar to creating a custom logger.
import logging
from systemd.journal import JournalHandler

logger = logging.getLogger(__name__)

journald_handler = JournalHandler()
journald_handler.setFormatter(logging.Formatter('[%(levelname)s] %(message)s'))

logger.addHandler(journald_handler)
logger.setLevel(logging.DEBUG)

if __name__ == '__main__':
    logger.info('test log event to systemd!')
Python Logging Best Practices
By now, you should be familiar with the basic concepts of Python logging. But as you move forward with it, you will also need to pay attention to the best practices around it. Below are some of the most important ones.
- Include timestamps in log messages. Timestamps help determine when an error occurred. You can use the default Python logging module to add timestamps to the formatter.
- Have a log rotation mechanism. Log rotation simplifies debugging and helps avoid disk space issues when working on large applications with several modules. You can use the RotatingFileHandler class from the Python logging module to do this.
- Set the right log level. Make sure to choose the proper logging level for your application, since logging too many events can be challenging from a debugging perspective.
- Use log management tools. To automate the logging process you can use log management tools such as Sematext Logs. This is especially helpful for web applications since error logs can be easily misplaced or quite verbose. Utilizing these tools also allows you to learn more about the error’s variable values, the URL to the problem and the users at risk.
Python Logging with Sematext
Sematext Logs is a unified log management and monitoring solution that collects and analyzes log files from across your whole infrastructure. Without any further configuration, Sematext’s log and service auto-discovery allows you to automatically begin monitoring logs and forwarding them from both log files and containers directly through the user interface.
Sematext allows you to create custom dashboards that connect logs, metrics, and infrastructure data for real-time system visibility, as well as set up useful alerts to be warned before your users discover problems.
There are many ways of sending data to Sematext Logs, but by far the easiest way in Python is by using the SysLogHandler, which we already covered. You’d have to point it to the logsene-syslog-receiver.sematext.com endpoint and provide your Logs App’s write token at the beginning of the message in the Formatter, like:
logging.Formatter("TOKEN_GOES_HERE:%(message)s")
If you want data more structured, you can send JSON messages or you can have existing messages parsed server-side via pipelines. JSON logging is easier to maintain down the line, but requires some more code (e.g. using python-json-logger as a Formatter).
Finally, if these logs are critical, we recommend using a log shipper like Logstash to send data to Sematext Logs. This way, the log shipper takes care of retrying, buffering and potentially any enrichment of your log events, as opposed to your application. As always, there’s a trade-off, and you can read more about it in this post on logging libraries vs log shippers.
Start the 14-day free trial and try Sematext Logs yourself!
Conclusion
Python logging is a simple and effective practice to ease the debugging phase and track the events generated by your application. However, if logs are generated in large volume, they will defeat the original purpose of having them – capturing important information about the events to resolve issues quickly.
To avoid the issues related to large log volumes, use Python log levels, implement separate loggers for individual modules, and choose the relevant destination for your logs. Also consider sending your logs to a centralized log management solution like Sematext Logs. This is probably the most efficient way to deal with your logs and get the most out of them. Give it a try!