“Why does the ‘docker logs’ command fail?” is one of our frequently asked questions. The answer is simple and mentioned in the Docker documentation:
“The docker logs command is not available for drivers other than json-file and journald.”
Amazing as it sounds, it’s true, and it’s just one of the Top 10 Docker logging gotchas. With so many issues around Docker log drivers, are there alternatives? It turns out there are: Docker API based log shippers to the rescue! Here are a few good reasons to look at such alternatives (a minimal sketch of API-based log collection follows the list):
- The json-file driver is the default and reliable; a local copy of logs is always available, and both the ‘docker logs’ command and Docker API calls for logs just work
- Ability to filter logs by various dynamic criteria like image name or labels
- Better metadata, thanks to full access to the Docker API
- No risk of crashing the Docker daemon, because such log shippers can run in a container with limited resource usage and disk space consumption (e.g., put the buffer directory in a volume and set sensible limits)
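To make the idea concrete, here is a minimal sketch of what an API-based log shipper does under the hood: it asks the Docker API for containers, their metadata, and their log streams, instead of tailing files on disk. This is only an illustration of the principle, not how any of the tools discussed below is implemented; it assumes the Python Docker SDK is installed (`pip install docker`) and that the Docker socket is accessible.

```python
# Minimal illustration of API-based log collection (not a production shipper).
# Assumes `pip install docker` and access to /var/run/docker.sock.
import docker

client = docker.from_env()

for container in client.containers.list():
    # Metadata comes straight from the Docker API, not from log file paths.
    meta = {
        "container_id": container.short_id,
        "container_name": container.name,
        "image": container.image.tags[0] if container.image.tags else container.image.id,
        "labels": container.labels,
    }
    # Read the last few log lines; a real shipper would use stream=True / follow=True
    # and run one reader per container.
    for line in container.logs(tail=10, timestamps=True).splitlines():
        print(meta["container_name"], line.decode("utf-8", errors="replace"))
```

Because the logs and metadata come from the API, this works regardless of where the log driver stores the data, which is exactly why the ‘docker logs’ command and such shippers can coexist.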
Before we start looking at Docker log collection tools, check out these two useful Docker Cheatsheets.
The first tool worth looking at is Filebeat. Note, however, that Filebeat collects the container log files generated by the json-file log driver; only the enrichment of logs with container metadata is done via Docker API calls. Logspout, in contrast, provides multiple outputs and can route logs from different containers to different destinations without changing the application containers’ logging settings. It also handles ANSI escape sequences (such as color codes in logs), which would otherwise be problematic for full-text search in Elasticsearch. Like Logspout, Sematext Docker Agent (SDA) is API based, supports log routing and handles ANSI escape sequences for full-text search. However, Sematext Docker Agent is more than just a simple log shipper. SDA also takes care of many issues raised by Docker users, such as:
- multi-line logs
- log format detection and log parsing
- complete metadata enrichment for containers (labels, GeoIP, Swarm- and Kubernetes-specific metadata)
- masking of sensitive data in logs (see the sketch after this list)
- disk buffering and reliable shipping via TLS
- and more
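For illustration, here is a rough sketch of two of those steps: stripping ANSI color codes before indexing and masking a sensitive field by hashing it. The regular expression and the field name (`user_email`) are assumptions made for this example, not SDA’s actual configuration or implementation.

```python
import hashlib
import re

# Matches common ANSI escape sequences (e.g. color codes) so they don't end up
# in the full-text index. Simplified pattern, for illustration only.
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(message: str) -> str:
    return ANSI_RE.sub("", message)

def mask_field(event: dict, field: str) -> dict:
    # Replace the sensitive value with a hash so events can still be correlated,
    # but the original value cannot be read back; removing the field entirely
    # is the other option mentioned in the comparison below.
    if field in event:
        event[field] = hashlib.sha256(event[field].encode("utf-8")).hexdigest()
    return event

event = {
    "message": strip_ansi("\x1b[31mERROR\x1b[0m login failed"),
    "user_email": "jane@example.com",  # hypothetical field name
}
print(mask_field(event, "user_email"))
```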
It is open source on GitHub, can be used with the Elastic Stack or Sematext Cloud, and collects not just container logs, but also container events, plus Docker host and container metrics. In other words, it’s a Docker monitoring agent as well as a container events and log collector, parser, and shipper. The following comparison table shows the differences between these three Docker logging solutions that work well with the json-file driver and the Docker Remote API.
| Feature | Elastic Filebeat | Gliderlabs Logspout | Sematext Docker Agent |
|---|---|---|---|
| Collects container logs when the json-file driver is used | Yes | Yes | Yes |
| Collects container logs when the journald driver is used | No | Yes | Yes |
| Enriches logs with container metadata (json-file driver) | Yes | Yes | Yes |
| Enriches logs with container metadata (journald driver) | No | Yes | Yes |
| Log routing by metadata to different destinations | No | Yes | Yes |
| Multiline support | Yes | No | Yes |
| Log filter | Yes | Yes | Yes |
| Disk buffer (when the log destination is not reachable) | Yes | No | Yes |
| Integrated log parser per image type | Yes | No | Yes |
| Automatic log format detection and parsing | No | No | Yes |
| Log enrichment with Geo-IP | Yes | No | Yes |
| Masking of sensitive data fields in parsed logs | No | No | Yes (hash or remove) |
| Container event collection (start, stop, kill, …) | No (part of Metricbeat) | No | Yes |
| Docker Hub image | Yes | Yes | Yes |
| Container metrics collection | No (part of Metricbeat) | No | Yes |
| Docker certified image (Docker Store) | No | No | Yes |
| Red Hat certified image | No | No | Yes |
| Vendor-hosted image | Yes | No | No |
| Setup templates (UI/copy-paste) for cluster-wide installation | Yes (Kubernetes) | No | Yes (Helm, Kubernetes, Swarm, Portainer, Rancher) |
The comparison table above is based on the following details we evaluated for each tool.
| Feature | Elastic Filebeat |
|---|---|
| Log collection | Collects Docker log files generated by the json-file driver. Enrichment with container metadata (name, image, labels) via Docker API. Logs can be forwarded to Elasticsearch, Kafka, Logstash or Redis. |
| Log routing | No log routing (different destination/index for different containers). Limited to a single log destination and a single Elasticsearch index: https://discuss.elastic.co/t/multiple-paths-for-different-indexes/44511 |
| Multiline support | Multi-line support. A regular expression can be specified globally to match multiline messages. Specific multiline handling is implemented by Filebeat modules (see “Log parser” below). |
| Filter | Filters for Docker metadata (container name, image name and container ID) can be defined. |
| Disk buffer | Update: since version 6.3, a queue can be configured. |
| Log parser | Update: Filebeat modules are available and can be configured for container- or image-specific log parsing via the Filebeat “autodiscover” feature. By default, only a JSON log parser in a static configuration is used to read the Docker json-file logs. The message content inside this JSON file is not parsed, so direct output to Elasticsearch results in unparsed logs. Logs must be shipped to a separate Logstash instance or to an Elasticsearch ingest node with a processing pipeline defined for parsing the various container log formats. |
| Image registry | Image on Docker Hub: https://hub.docker.com/r/elastic/filebeat/. Elastic also hosts the Filebeat image in its own registry: docker.elastic.co/beats/filebeat. Various 3rd-party images are available too. |
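Conceptually, both Filebeat and Sematext Docker Agent approach multi-line logs the same way: a regular expression decides whether a line starts a new event or belongs to the previous one (e.g., indented stack-trace lines). The sketch below is a generic illustration of that idea, not the configuration syntax of either tool; the regular expression is an assumed default.

```python
import re

# A line that does NOT start with whitespace is treated as the start of a new
# log event; indented lines (typical for stack traces) are appended to it.
NEW_EVENT_RE = re.compile(r"^\S")

def group_multiline(lines):
    event = []
    for line in lines:
        if NEW_EVENT_RE.match(line) and event:
            yield "\n".join(event)
            event = []
        event.append(line)
    if event:
        yield "\n".join(event)

sample = [
    "2024-01-01 12:00:00 ERROR something broke",
    "    at com.example.Foo.bar(Foo.java:42)",
    "    at com.example.Main.main(Main.java:7)",
    "2024-01-01 12:00:01 INFO recovered",
]
for event in group_multiline(sample):
    print(event, "\n---")
```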
| Feature | Gliderlabs Logspout |
|---|---|
| Log collection | Collects logs via the Docker API, including container metadata. Forwards logs to syslog or HTTP destinations. 3rd-party output modules are available for Apache Kafka, Logstash, Redis-Logstash, and GELF. |
| Log routing | Log routing supported. Multiple destinations can be specified; label filters select the logs for each destination. |
| Multiline support | No multi-line support. |
| Filter | Filtering to match container labels with wildcards. |
| Disk buffer | No support for disk buffering. Logs might be lost when delivery fails. |
| Log parser | No log parser. |
| Image registry | Open source image available on Docker Hub: https://hub.docker.com/r/gliderlabs/logspout/ |
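Label-based log routing of the kind Logspout offers boils down to a simple decision: look at a container’s metadata and pick a destination for its logs. A minimal sketch of that decision follows; the label name (`team`) and destination URLs are made up for this illustration and are not Logspout’s actual configuration keys.

```python
# Hypothetical routing table: container label value -> log destination.
ROUTES = {
    "billing": "syslog://logs-billing.internal:514",
    "web": "https://logs-web.internal/bulk",
}
DEFAULT_ROUTE = "syslog://logs-default.internal:514"

def pick_destination(labels: dict) -> str:
    # "team" is an assumed label name used only for this illustration.
    return ROUTES.get(labels.get("team", ""), DEFAULT_ROUTE)

print(pick_destination({"team": "billing"}))   # -> syslog://logs-billing.internal:514
print(pick_destination({"team": "unknown"}))   # -> syslog://logs-default.internal:514
```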
| Feature | Sematext Docker Agent |
|---|---|
| Log collection | Collects Docker logs, Docker events, and metrics directly from the Docker API. Log enrichment with container metadata, Docker Swarm metadata, Kubernetes metadata, labels, environment variables and GeoIP information. Logs are forwarded via the Elasticsearch bulk API. |
| Log routing | Log routing by container labels or environment variables to specify the Elasticsearch destination index or Sematext Cloud App. Very flexible, with global defaults and individual rules. |
| Multiline support | Out-of-the-box multi-line support, catching most stack traces or any log messages with indentation. The default regular expression is configurable. In addition, custom message separators (e.g., date patterns at the beginning of log messages) can be specified via pattern definitions per log source (matching container image or container name). |
| Filter | Filtering via whitelists and blacklists with regular expressions matching container ID, container name or image name. In addition, containers can be labeled to enable/disable log collection, combined with global defaults (collect all logs, or collect no logs unless the application container has an explicit logging “enabled” label). |
| Disk buffer | Disk buffering supported. SDA stores and retransmits logs in case of failed delivery to the Elasticsearch API. Disk buffer limits can be configured; the oldest logs get dropped when the limits are reached. |
| Log parser | Comprehensive log parser with default log format recognition for JSON and parsing rules for various official images like Nginx, Apache, MongoDB, HBase, Cassandra, Elasticsearch, etc. Individual log parser, filter and transformation rules can be specified in a configuration file or via URL (e.g., a GitHub Gist). IP address fields can be enriched with Geo-IP data. Sensitive data fields can be masked/anonymized by replacing the value with a hash code, or removed from the logs entirely before the data is shipped to the log storage. |
| Image registry | Open source image on Docker Hub: https://hub.docker.com/r/sematext/sematext-agent-docker/. Docker Certified image in the Docker Store: https://store.docker.com/images/sematext-agent-monitoring-and-logging. A Red Hat certified image is available in the Red Hat Container Catalog. |
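Geo-IP enrichment, mentioned in the log parser row above, generally means looking an IP field up in a local GeoIP database and attaching the result to the log event. The sketch below illustrates that step in a generic way; it assumes the `geoip2` package and a downloaded GeoLite2-City.mmdb file, and the `client_ip` field name is hypothetical. It is not SDA’s implementation.

```python
# Assumes `pip install geoip2` and a local GeoLite2-City.mmdb database file.
import geoip2.database
import geoip2.errors

def enrich_with_geoip(event: dict, reader: geoip2.database.Reader) -> dict:
    ip = event.get("client_ip")  # hypothetical field name
    if ip:
        try:
            response = reader.city(ip)
            event["geoip"] = {
                "country": response.country.iso_code,
                "city": response.city.name,
                "location": [response.location.longitude, response.location.latitude],
            }
        except geoip2.errors.AddressNotFoundError:
            pass  # private or unknown addresses simply stay unenriched
    return event

with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    print(enrich_with_geoip({"client_ip": "8.8.8.8"}, reader))
```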
The clear recommendation for API based loggers might change in the future as Docker log drivers improve over time and the new plugin mechanism via Unix socket allows new logging driver implementations to run as separate processes. The release of the new Docker logging plugin architecture is a good sign that Docker takes logging issues seriously. Log management vendors need some time to implement their drivers based on the new plugin architecture. In the meantime, consider Docker API based log collectors like Sematext Docker Agent and Logspout to avoid running into issues with Docker logs, like the 10 Docker logging gotchas.
What’s next?
Don’t forget to download the Docker Cheatsheets you need.
Then, you should think about collecting not only logs, but also host and container metrics, and events. To that end, we’ve prepared a reference architecture document covering all the key Docker metrics to watch. Following that, you will learn how to set up monitoring and logging for a Docker Enterprise Cluster.
Monitoring and Logging for Docker Enterprise Edition
This e-book shows how to collect metrics, events, and logs. Specifically, you’ll learn how to use Sematext Docker Agent for automatic collection and processing of Docker Metrics, Events and Logs for all cluster nodes and all auto-discovered containers.