Where are Docker container logs stored? There’s a short answer and a long answer. The short answer, which will satisfy your needs in the vast majority of cases, is:
/var/lib/docker/containers/<container_id>/<container_id>-json.log
From here you need to ship logs to a central location, and enable log rotation for your Docker containers. Let me elaborate on why with the long answer below.
Where Are Docker Container Logs Stored by Default?
You see, by default, Docker containers emit logs to the stdout and stderr output streams. Containers are stateless, and the logs are stored on the Docker host in JSON files by default.
Why are logs stored in JSON files?
The default logging driver is json-file.
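If you want to verify this on your own host, Docker can tell you which driver is in use. Both commands below are standard Docker CLI calls; <container_id> is whichever container you want to check:

# Show the default logging driver configured on the Docker daemon
docker info --format '{{.LoggingDriver}}'

# Show the logging driver a specific container was started with
docker inspect --format '{{.HostConfig.LogConfig.Type}}' <container_id>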
What’s a logging driver?
A logging driver is a mechanism for getting info from your running containers. Here’s a more elaborate explanation from the Docker docs. Besides the default json-file driver, there are several other log drivers you can use, like syslog, journald, fluentd, or logagent.
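For reference, switching drivers for a single container is done with the --log-driver flag on docker run. Keep in mind the caveat about docker logs further below before you do this; the example is only a sketch, reusing the foo_image name from the docker ps output later in this post:

# Send this container's logs to the local syslog daemon instead of json-file
docker run -d --log-driver syslog foo_image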
How to find the logs?
These logs are emitted from the output streams, annotated with the log origin, either stdout or stderr, and a timestamp. Each log file contains information about only one container and is in JSON format. Remember, one log file per container. You find these JSON log files in the /var/lib/docker/containers/ directory on a Linux Docker host. The <container_id> here is the id of the running container:
/var/lib/docker/containers/<container_id>/<container_id>-json.log
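Each line in that file is a single JSON object holding the message, the stream it came from, and a timestamp. It looks roughly like this (the message and timestamp below are made up for illustration):

{"log":"Listening on port 3000\n","stream":"stdout","time":"2023-01-01T12:00:00.000000000Z"}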
If you’re not sure which id is related to which container, you can run the docker ps command to list all running containers. The container id is shown in the first column.
docker ps

[Output]
CONTAINER_ID   IMAGE       COMMAND         CREATED     STATUS     PORTS      NAMES
cf74b6fce535   foo_image   "node app.js"   X min ago   Up X min   3000/tcp   foo_app
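Once you have the container id, you can also ask Docker for the exact log file path instead of building it by hand:

# Print the full path of the JSON log file for a container
docker inspect --format '{{.LogPath}}' <container_id>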
Now you know where the container logs are stored, and you can continue to troubleshoot and debug any issues that come up. That’s where logging comes into play. You collect the logs with a log aggregator and store them in a place where they’ll still be available after the containers are gone. It’s dangerous to keep logs only on the Docker host because they can build up over time and eat into your disk space. That’s why you should ship your Docker logs to a central location and enable log rotation for your containers.
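As a sketch of what log rotation can look like with the default json-file driver, you can cap the size and number of log files per container with --log-opt flags (the 10m and 3 values here are just illustrative, and the same options can be set for all containers in /etc/docker/daemon.json):

# Keep at most 3 rotated log files of ~10 MB each for this container
docker run -d \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  foo_image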
Debugging Docker Issues with Container Logs
Docker has a dedicated API for working with logs. But, keep in mind, it will only work if you use the json-file log driver. I strongly recommend not changing the log driver! Let’s start debugging. First of all, to list all running containers, use the docker ps command.
docker ps
Then, with the docker logs command, you can list the logs for a particular container.
docker logs <container_id>
Most of the time you’ll end up tailing these logs in real time, or checking the last few log lines. Using the --follow or -f flag will tail -f (follow) the Docker container logs:
docker logs <container_id> -f
The --tail flag will show only the specified number of lines, counted from the end of the logs:
docker logs <container_id> --tail N
The -t or --timestamps flag will show the timestamps of the log lines:
docker logs <container_id> -t
The --details flag will show extra details about the log lines:
docker logs <container_id> --details
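These flags can be combined. For example, to follow the logs in real time with timestamps, starting from the last 100 lines:

docker logs -f -t --tail 100 <container_id>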
But what if you only want to see specific logs? Luckily, grep works with Docker logs as well.
docker logs <container_id> | grep pattern
This command will only show errors:
docker logs <container_id> | grep -i error
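One caveat: docker logs replays the container’s stderr stream on your terminal’s stderr, so a plain pipe only searches what the container wrote to stdout. If you want grep to see both streams, redirect stderr into the pipe:

docker logs <container_id> 2>&1 | grep -i error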
Once an application grows beyond a single container, you’ll probably reach for Docker Compose. Don’t worry, it has a logs command as well.
docker-compose logs
This will display the logs from all services in the application defined in the Docker Compose configuration file.
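You can narrow this down the same way as with docker logs, for example by following a single service and limiting the number of lines (the web service name here is just a placeholder for a service defined in your Compose file):

# Follow the last 100 log lines of one Compose service
docker-compose logs -f --tail=100 web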
Storing Docker Container Logs in a Central Location Using a Log Shipper
With your infrastructure growing, you can’t rely on just using the Docker API to troubleshoot logs. You need to store all logs in a secure place, so you can analyze and troubleshoot any issues after the fact. You need a steady influx of logs so you can get actionable insight into what is happening to your Docker containers. Setting up log rotation is just step one. By storing logs in one place you can also set up alerts that notify you if anything breaks, or whenever you’re experiencing unexpected behavior.

Container logs can be a mix of plain text messages from start scripts and structured logs from applications, making it difficult to tell which log event belongs to what container and application. Although Docker log drivers can ship logs to log management tools, most of them don’t allow you to parse container logs. You need a separate tool called a log shipper, such as Logagent, Logstash, or rsyslog, to structure and enrich the logs before shipping them.

The solution is to have a container dedicated solely to logging and collecting logs. You deploy this dedicated logging container within your Docker environment. It will automatically aggregate logs from all containers, monitor, analyze, and store or forward them to a central location. This makes it easier to move containers between hosts and scale your infrastructure. It also lets you collect logs through various streams, including log events, Docker API data, stats, etc. This is what I’d suggest you use.

By far the most reliable and convenient way of log collection is to use the json-file driver and set up a log shipper to ship the logs. You always have a local copy of logs on your server and you get the advantage of centralized log management. If you were to use Sematext Logagent, there are a few simple steps to follow in order to start sending logs to Sematext. After creating a Logs App, run these commands in a terminal.
docker pull sematext/logagent

docker run -d --restart=always --name st-logagent \
  -e LOGS_TOKEN=YOUR_LOGS_TOKEN \
  -e LOGS_RECEIVER_URL="https://logsene-receiver.sematext.com" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  sematext/logagent
This will start sending all container logs to Sematext.
Conclusion
There we go, both a short and a long answer to where Docker container logs are stored. By default, Docker uses the json-file log driver, which stores logs in a dedicated directory per container on the host:

/var/lib/docker/containers/<container_id>/<container_id>-json.log
The long answer, and what I’d suggest you do, is to set up a dedicated logging container that will structure and enrich your container logs, then send them to a central location. This makes troubleshooting and searching through logs much easier. You also get alerting, which is the main point: you want to know what breaks before your users do.