When it comes to container orchestration systems, Kubernetes is probably the first one that comes to mind. However, while Kubernetes makes it easy to manage large-scale containerized applications, it also introduces new challenges due to its ephemeral and dynamic nature. One of the main challenges is how you centralize Kubernetes logs.
In this post, we are going to show you everything you need to know about logging in Kubernetes: how it works, which best practices to follow, and what log management tools are available. By the end, you will be able to aggregate logs for your own cluster. Much of what we’ll be explaining here is a DIY approach and won’t have the full-blown features of the Sematext Kubernetes Logs integration or other tools on the market. Keep in mind that this post covers how to get started; if you want to go deeper and would prefer a comprehensive Kubernetes observability solution for logs, metrics, and traces, check out Sematext Kubernetes Monitoring.
Before getting started, if you need a refresher on what logging is and how it works, have a quick look at our log management guide.
How Is Logging in Kubernetes Different
Log aggregation in Kubernetes is vastly different from logging on traditional servers or virtual machines, mainly due to how it manages its applications (pods).
When an app dies on a virtual machine, its logs are still available until you delete them. In Kubernetes, when pods are evicted, crash, get deleted, or are rescheduled on a different node, the logs from their containers are gone. The system cleans up after itself. Therefore, you lose any information about why the anomaly occurred. The transient nature of default logging in Kubernetes makes it crucial to implement a centralized log management solution.
Let’s not forget about the highly distributed and dynamic nature of Kubernetes. In production, you’ll more than likely work with several machines, each having multiple containers that can crash at any time. Kubernetes clusters add even more to the complexity by introducing new layers that need to be monitored, each generating their own type of logs.
How Does Logging in Kubernetes Work
There are various ways you can collect logs in Kubernetes:
1. Basic Logging Using Stdout and Stderr
In traditional server environments, application logs are written to a file such as /var/log/app.log. These logs can either be viewed on each server or collected by a logging agent that pushes them to a central location for log analysis and storage.
However, when working with Kubernetes, you need to collect logs for multiple transient pods (applications), across multiple nodes in the cluster, making this log collection method less than optimal. Managing multiple log files for multiple containers across multiple servers in a cluster needs a new and much more reliable approach.
Instead, the default Kubernetes logging framework recommends capturing the standard output (stdout) and standard error output (stderr) from each container on the node to a log file. This file is managed by Kubernetes and is usually restricted to the last 10MB of logs. You can see the logs of a particular pod by running kubectl logs <pod name> (add -c <container name> if the pod runs more than one container).
Here’s an example for Nginx logs generated in a container.
kubectl logs <pod name>

[Output]
100.116.72.129 - - [12/Feb/2020:13:44:12 +0000] "GET /api/user HTTP/1.1" 200 84
127.0.0.1 - - [12/Feb/2020:13:44:17 +0000] "GET /server-status?auto HTTP/1.1" 200 918
10.4.51.204 - - [12/Feb/2020:13:44:19 +0000] "GET / HTTP/1.1" 200 3124
100.116.72.129 - - [12/Feb/2020:13:44:21 +0000] "GET /api/register HTTP/1.1" 200 84
100.105.140.197 - - [12/Feb/2020:13:44:21 +0000] "POST /api/stats HTTP/1.1" 200 122
This method works for clusters with a small number of containers and instances. Otherwise, it’s difficult to manage cluster logs once the number of applications increases and the cluster runs on multiple machines; you need a scalable solution. If you want to access the logs of a crashed instance, you can add the --previous flag.
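For example, assuming a pod whose container has crashed and restarted at least once, you could pull the logs of the previous instance like this:

kubectl logs <pod name> --previous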
Find out about other essential Kubernetes commands from our Kubernetes Tutorial or download our Kubernetes commands cheat sheet.
Kubernetes Cheat Sheet
We’ve prepared a Kubernetes Cheat Sheet which puts all key Kubernetes commands (think kubectl) at your fingertips. It’s organized in logical groups, from resource management (e.g. creating or listing pods, services, daemons) and viewing and finding resources, to monitoring and logging.
2. Using Application-level Logging Configuration
If you don’t want to depend on the cluster configuration, you can use application-level logging. This means each container carries its own logging configuration, which in itself makes logging difficult and error-prone. Furthermore, any change in configuration requires you to redeploy all the containers you want to monitor.
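As a rough, hypothetical sketch of what this looks like in practice (the names, image, and config format below are made up for illustration), each application would mount and parse its own logging configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-logging-config      # hypothetical per-app logging config
data:
  logging.properties: |
    level=INFO
    file=/var/log/app/app.log   # each app decides where and how it logs
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: my-app:1.0           # hypothetical application image
    volumeMounts:
    - name: logging-config
      mountPath: /etc/app
  volumes:
  - name: logging-config
    configMap:
      name: app-logging-config

Every change to such a configuration means updating and redeploying every affected pod, which is exactly the maintenance burden described above.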
3. Using Logging Agents
Lastly, you can use logging agents such as Sematext Agent.
Logging agents are tools that collect Kubernetes logs and send them to a central location. These agents are lightweight containers that have access to a directory with logs from all application containers on a node. This is the easiest and best solution since it doesn’t affect the deployed applications, and it’s completely scalable, no matter how many nodes you add to the cluster. It’s also super simple to set up. Run just one command and you’re done.
Kubernetes Logging Architecture: Types of Kubernetes Logs
As mentioned previously, there are many layers to logging in Kubernetes, each containing different, but equally useful, information depending on your scenario. Within a Kubernetes system, we can name three types of logs: container logs, node logs, and cluster (or system component) logs.
Kubernetes Container Logging
Container logs are logs generated by your containerized applications. The easiest way to capture container logs is to use stdout and stderr.
Let’s say you have a Pod named app, where you are logging something to stdout.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
Apply this configuration file by running:
kubectl apply -f app.yaml
Fetch the logs by running this command:
kubectl logs app
You’ll see output in the terminal window right away.
[Output]
0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
...
Kubernetes Node Logging
Everything that a containerized application writes to stdout or stderr is streamed somewhere by the container engine – in Docker’s case, for example, to a logging driver. These logs are usually located in the /var/log/containers directory on your host.
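For example, you can inspect these files directly on the node. The exact file names depend on your pods, but they typically follow a <pod>_<namespace>_<container>-<container-id>.log pattern:

# list the per-container log files Kubernetes keeps on this node
ls /var/log/containers/
# follow the log file of a specific container (substitute a real file name)
tail -f /var/log/containers/<pod>_<namespace>_<container>-<container-id>.log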
If a container restarts, the kubelet keeps its logs on the node. To prevent logs from filling up all the available space on the node, Kubernetes has a log rotation policy in place. And when a pod is evicted from the node, all corresponding containers are evicted as well, along with their logs.
Depending on your operating system and services, there are various node-level logs you can collect, such as kernel logs or systemd logs.
On nodes with systemd, both the kubelet and the container runtime write to journald. If systemd is not present, they write to .log files in the /var/log directory.
You can access systemd logs with the journalctl command. This will output a list of log lines. If you don’t have systemd on the node, then you manage logs like in traditional server environments.
$ journalctl

[Output]
-- Logs begin at Thu 2020-01-23 09:15:28 CET, end at Thu 2020-01-23 14:43:00 CET. --
Jan 23 09:15:28 raha-pc systemd-journald[267]: Runtime journal (/run/log/journal/) is 8.0M, max 114.2M, 106.2M free.
Jan 23 09:15:28 pc kernel: microcode: microcode updated early to revision 0xd6, date = 2019-10-03
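On a systemd-based node you can also narrow the output down to a single unit. For example, to follow only the kubelet’s logs from the last hour:

# show kubelet logs from the last hour and keep following new entries
journalctl -u kubelet --since "1 hour ago" -f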
For more info check out this tutorial about Logging with Journald.
Kubernetes Cluster Logging
Kubernetes cluster logs refer to Kubernetes itself and all of its system component logs, and we can differentiate between components that run in a container and components that do not. Each has its own role, giving you insight into the health of your Kubernetes system. For example, kube-scheduler, kube-apiserver, etcd, and kube-proxy run inside a container, while kubelet and the container runtime run at the operating system level, usually as a systemd service.
By default, system components that run outside a container write files to journald, while components running in containers write to the /var/log directory. However, you have the option to configure the container engine to stream logs to a preferred location.
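For example, if Docker is your container runtime, the logging driver and its rotation settings live in /etc/docker/daemon.json; the values below are only illustrative:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}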
Kubernetes doesn’t provide a native solution for logging at cluster level. However, there are other approaches available to you:
- Use a node-level logging agent that runs on every node
- Add a sidecar container for logging within the application pod (a minimal sketch follows this list)
- Expose logs directly from the application.
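Here’s a minimal sketch of the sidecar approach, assuming an application that writes its logs to a file rather than stdout; the image name and paths are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  containers:
  - name: app
    image: my-app:1.0                  # hypothetical app that logs to a file
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-sidecar
    image: busybox
    # re-emit the app's log file to stdout so node-level log collection can pick it up
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app/app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs
    emptyDir: {}

As discussed in the best practices section further down, this doubles the number of containers you run just for logging, so it’s best reserved for apps whose logging you can’t change.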
Besides system component logs, you can also collect Kubernetes events and Kubernetes audit logs. This is explained in the Audit Logs section below.
The easiest way of setting up a node-level logging agent is to configure a DaemonSet to run the agent on each node. Here’s an example of setting up the Sematext Agent with Helm.
helm install --name st-agent \
  --set infraToken=xxxx-xxxx \
  --set containerToken=xxxx-xxxx \
  --set logsToken=xxxx-xxxx \
  --set region=US \
  stable/sematext-agent
This setup will, by default, send all cluster and container logs to a central location for easy management and troubleshooting. With a tiny bit of added configuration, you can configure it to collect node-level logs and audit logs as well.
Apart from this, Sematext Agent will also collect cluster-wide metrics and Kubernetes Events mentioned in the section below, giving you a nice dashboard and clear overview of your system health.
Check out our documentation for configuration options with kubectl, Docker, and Docker Swarm.
Kubernetes Events Logging
Kubernetes events hold information about resource state changes and errors, such as why pods were evicted or what decisions the scheduler made, as well as other informational messages that provide insight into what’s happening inside your cluster.
Events are API objects stored in the apiserver on the master node. Similar to node logging, there is a removal mechanism in place to avoid using up all of the master’s disk space: Kubernetes removes events an hour after their last occurrence. If you want to capture events over a longer period, you need to install a third-party solution, as explained above in the node-level logging section.
You can quickly check the events in a namespace by running:
kubectl get events -n <namespace>
[Output]
LAST SEEN   TYPE     REASON    OBJECT     MESSAGE
3m58s       Normal   NoPods    kube-dns   No matching pods found
9m31s       Normal   Killing   pod/app    Stopping container app
9m32s       Normal   DELETE    pod/app    Starting container app
...
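If you’re only interested in problems, you can narrow the output down, for example to warnings:

kubectl get events -n <namespace> --field-selector type=Warning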
If you want to check only a single Pod rather than the whole namespace, you can do that as well.
kubectl describe pod <pod-name>

[Output]
Events:
  Type    Reason     Age   From                Message
  ----    ------     ----  ----                -------
  Normal  Scheduled  41m   default-scheduler   Successfully assigned app to ip-xx-xx-xx-xx
  Normal  Pulled     41m   kubelet, ip-xx      Container image "app-image" already present on machine
  Normal  Created    41m   kubelet, ip-xx      Created container app
  Normal  Started    41m   kubelet, ip-xx      Started container app
Ideally, you’d never need to run these commands in the terminal; instead, you’d use a cluster-level logging setup to send these events to a central location where you can view them alongside any logs you have.
Kubernetes Audit Logging
Kubernetes audit logs are detailed descriptions of each call made to the kube-apiserver. They provide a chronological sequence of activities that lead to the state of the system at a specific moment. They are extremely useful for security and compliance purposes, telling you exactly who did what, when, and how.
Kubernetes Audit Log Backend
You need to enable audit logs only on the master nodes. First of all, you need to create a policy to specify what will be recorded. A good example of the audit-policy.yaml file is the audit profile used by GCE, or the one from the official Kubernetes docs.
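For reference, here’s a minimal policy along the lines of the example in the official Kubernetes docs; it logs pod requests in full and everything else at the metadata level, and you’d adapt the rules to your own compliance needs:

apiVersion: audit.k8s.io/v1
kind: Policy
# skip the RequestReceived stage to reduce noise
omitStages:
  - "RequestReceived"
rules:
  # log pod changes with full request and response bodies
  - level: RequestResponse
    resources:
    - group: ""
      resources: ["pods"]
  # log everything else with metadata only (user, timestamp, resource, verb)
  - level: Metadata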
To enable this policy, you need to edit the definition of the Kubernetes API Server. If you use Kops for cluster management, you can run kops edit cluster <cluster> to open the configuration.
spec:
  ...
  kubeAPIServer:
    auditPolicyFile: /etc/kubernetes/policies/audit-policy.yaml
    auditLogPath: -          # log to stdout
    auditLogMaxAge: 10       # num days
    auditLogMaxBackups: 1    # the num of audit logs to retain
    auditLogMaxSize: 100     # the max size in MB to retain
Otherwise, if you’re using Kubeadm, this configuration will be in the /etc/kubernetes/manifests/kube-apiserver.yaml file, on the master node.
...
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --audit-policy-file=/etc/kubernetes/policies/audit-policy.yaml
    - --audit-log-path=-          # log to stdout
    - --audit-log-format=json
    ...
    volumeMounts:
    - mountPath: /etc/kubernetes/policies
      name: policies
      readOnly: true
  ...
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/policies
      type: DirectoryOrCreate
    name: policies
Apply these changes by restarting the Kubelet.
sudo systemctl restart kubelet
Once you’ve configured the audit logs to be written to stdout, you can use cluster-level logging to store them in a central location, as explained in the section above. Finally, don’t forget to correctly configure ClusterRoleBindings so the agent has the appropriate permissions to access the Kubernetes system component logs.
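A minimal sketch of such a binding could look like the one below; the service account name is a placeholder, and the ClusterRole you reference depends on what your agent actually needs to read:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logging-agent
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                    # or a custom ClusterRole with the permissions your agent requires
subjects:
- kind: ServiceAccount
  name: logging-agent           # hypothetical service account the agent runs under
  namespace: kube-system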
Kubernetes Audit Dynamic Backend
Configuring the dynamic backend is even simpler than the log backend. Start by editing the configuration of your Kubernetes API server, found at /etc/kubernetes/manifests/kube-apiserver.yaml, and add three new flags to the end of the command list.
...
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --audit-dynamic-configuration
    - --feature-gates=DynamicAuditing=true
    - --runtime-config=auditregistration.k8s.io/v1alpha1=true
Save the changes and exit the file. This will trigger a restart of the Kubernetes API server. If you have issues, you can review the kube-apiserver logs by running this command.
kubectl -n kube-system logs -f pod/kube-apiserver-minikube
Next, you have to add an AuditSink resource. Create a file named auditsink.yaml.
apiVersion: auditregistration.k8s.io/v1alpha1
kind: AuditSink
metadata:
  name: k8sauditsink2
policy:
  level: Metadata
  stages:
  - ResponseComplete
webhook:
  throttle:
    qps: 10
    burst: 15
  clientConfig:
    url: "https://logsene-k8s-audit-receiver.sematext.com/<LOGS_TOKEN>/"
Now apply the auditsink.yaml to the Kubernetes cluster.
kubectl apply -f auditsink.yaml
That’s it. You’re now shipping Kubernetes Audit logs to Sematext Logs for safekeeping.
Kubernetes Ingress Logging
Kubernetes Ingress is becoming the de-facto standard for exposing HTTP and HTTPS routes from outside the cluster to services within the cluster. This makes ingress logs incredibly important for tracking the performance of your services, issues, bugs, and even the security of your cluster. Read more about optimal Kubernetes Nginx ingress logs.
Kubernetes Logging Best Practices
As you’ve probably figured out by now, logging in Kubernetes is a challenge. However, there’s enough literature on the topic to compile a list of best practices you should follow to make sure you capture the logs that you need.
Set Up a Logging System
Accessing Kubernetes logs is fairly easy. The challenge lies in where to send them and how to store them so that they’re available for future use. Whatever your use case, the logging pipeline you choose – whether it’s run internally or as a managed service – needs to ship logs to a central location that sits in a separate, isolated environment.
Establish a Retention Mechanism
Whether you choose to handle logging internally or use a third-party service, logs still eat up a lot of space. Make sure you have a clear retention policy that keeps logs around for as long as you actually need them. Longer retention usually costs more, so you may need to estimate the required disk space based on your Kubernetes environment. Managed logging services can reduce the infrastructure costs associated with Kubernetes logs and often provide volume-based discounts.
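As a rough, back-of-the-envelope example: 50 pods each writing around 20 MB of logs per day, kept for 30 days, already adds up to roughly 30 GB of raw log data, before you account for replication or indexing overhead.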
Write Logs to Stdout and Stderr
Although writing to stdout and stderr is standard practice when moving to a containerized environment, some companies still write apps that log to files. Redirecting logs to stdout and stderr allows the Kubernetes logging framework to come into play and automatically stream the logs to any desired location.
Also, separating errors into the error stream helps with log collection and makes it easier for you to filter logs.
Use Separate Clusters for Development and Production
By creating separate clusters for dev and prod, you can avoid accidents such as deleting a pod that’s critical in production. You can easily switch between the two with the kubectl config use-context command. You should use the same approach for keeping dev and prod logs in separate locations. This would mean using different Logs Apps in Sematext, or different Elasticsearch indices, for each environment.
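For example, assuming your kubeconfig defines contexts named dev and prod (the names here are just placeholders):

# list the contexts available in your kubeconfig
kubectl config get-contexts
# switch to the production cluster
kubectl config use-context prod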
Avoid Using Sidecar Containers for Logging
Kubernetes best practices state that you should always aim for one container instance per pod. This means a pod should only run instances of the same container, defined by the same image. That container can then be replicated, so you may end up with multiple containers in the pod, but they are all built from the same image.
“Each Pod is meant to run a single instance of a given application”
– Kubernetes Documentation
A sidecar is a second container within the same pod that captures the output of the first container, where your actual app runs. It therefore consumes resources on a per-pod basis. For example, if you have 5 pods running on your node, each with a logging sidecar, you’re actually running 10 containers, half of them just for logging.
There are several cases where sidecar containers can’t be avoided, such as when you don’t have control over the app and it writes logs to files, or when you want to hide output from the Kubernetes logging framework.
The solution, in this case, is to collect all logs at once from the entire node with a single container, instead of collecting them at the pod level. This is what we explain in the section below.
Kubernetes Logging Tools: Collecting and Analyzing Logs
Now that you have a better overview of how logging works in Kubernetes, let’s see some of the best tools you can choose from to build your logging pipeline.
Kubernetes Logging with Sematext
Sematext Logs is compatible with a large number of log shippers – including Fluentd, Filebeat, and Logstash – logging libraries, platforms, frameworks, and our own agents, enabling you to aggregate, alert, and analyze log data from any layer within Kubernetes, in real-time.
Sematext is a fully-managed ELK solution. It helps you avoid the hassle of handling Elasticsearch yourself, while still offering the full benefits of the Elasticsearch API and Kibana. It will also collect Kubernetes and container metrics and events for all the containers running in your Kubernetes cluster including system-component containers from the kube-system namespace.
The process of getting Kubernetes logging configured with Sematext is as simple as running one command.
helm install --name st-agent \
  --set infraToken=xxxx-xxxx \
  --set containerToken=xxxx-xxxx \
  --set logsToken=xxxx-xxxx \
  --set region=US \
  stable/sematext-agent
With this simple Helm command, you’ll install cluster-wide logging with DaemonSets running on each node, collecting logs from all containers as well as the Kubernetes system components.
As an added bonus this configuration will also collect Kubernetes and container metrics and events, giving you full observability into your Kubernetes cluster.
Kubernetes Logging with Fluentd
Fluentd is an open-source log aggregator that allows you to collect logs from your Kubernetes cluster, parse them from various formats like MySQL, Apache2, and many more, and ship them to the desired location – such as Elasticsearch, Amazon S3 or a third-party log management solution – where they can be stored and analyzed.
The most common way to deploy Fluentd is as a DaemonSet, which runs a Fluentd pod on each node. That pod collects logs from the kube-apiserver, kubelet, and all running pods on the node, and Fluentd then enriches them with metadata such as the pod name or namespace to give them more context.
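A trimmed-down DaemonSet based on the publicly available fluentd-kubernetes-daemonset images might look roughly like this; the image tag, output plugin, and any required environment variables depend on where you ship the logs, and a real setup also needs a ServiceAccount with read access to pod metadata:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        # pick the image variant that matches your output (Elasticsearch, S3, etc.)
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: dockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: dockercontainers
        hostPath:
          path: /var/lib/docker/containers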
Fluentd is often the tool of choice when you’re just getting started with Kubernetes. It’s Kubernetes-native, it integrates seamlessly with Kubernetes deployments, and there are plenty of resources available to learn from. However, it has its limitations.
Fluentd works well at low volume, but once you scale up the number of nodes and applications, it becomes problematic. Fluentd is written in Ruby, which is not considered a particularly performant language. Performance matters for log shipping tools, which is why more and more of them are written in Go, Rust, and Node.js. To get the most out of Fluentd, you need to fiddle with performance settings, enable multi-worker mode, tune flush thread counts, reduce memory usage, and a bunch of other things you most definitely don’t want to do.
In the end, Fluentd is just a log shipping tool. You still need to handle log storage, alerting, analysis, archiving, dashboarding, and so on. With Sematext you get all of that out of the box, along with built-in support for events and dashboards with infrastructure metrics.
Kubernetes Logging with ELK
The ELK stack is by now the most popular free and open-source log management solution, including for Kubernetes. It’s a collection of four tools that together form an end-to-end logging pipeline.
Elasticsearch is a full-text search and analytics engine where you can store Kubernetes logs. Logstash is a log aggregator similar to Fluentd that collects and parses logs before shipping them to Elasticsearch. Kibana is the visualization layer that allows users to visualize, query, and analyze their data via graphs and charts. And finally, Beats are lightweight data shippers used to send logs and metrics to Elasticsearch.
If you’re interested in finding out more about Elasticsearch on Kubernetes, check out our blog post series where you can learn how to run and deploy Elasticsearch to Kubernetes and how to run and deploy the Elasticsearch Operator.
However, no matter how popular the ELK stack is, it’s not that easy to manage and deploy, especially if you run large-scale applications. Scaling Elasticsearch is quite challenging; you’d need to become an Elasticsearch expert and master how to architect your shards and indices.
Most people accept reality and settle for a managed SaaS solution. It’s often cheaper because you don’t pay for the infrastructure or for the engineers that maintain it.
Kubernetes Logging with Google Stackdriver
Google Stackdriver is a free Kubernetes-native logging and monitoring solution for applications running on Google Cloud Platform (GCP) and Amazon Web Services (AWS).
Stackdriver offers native logging for both providers. Once you create a GCP account and configure the integration with AWS, Stackdriver will automatically discover your cloud resources and provide an initial set of dashboards. From there, you can deploy Fluentd to collect logs and get deeper visibility into your virtual machines, databases, web servers and other components.
And since Stackdriver is a hosted service, Google takes care of the operational overhead associated with monitoring and maintaining the service for you. By comparison, Sematext is not tied to a cloud provider. You can use it with any cloud provider, and even your own hardware. You are not bound by any limitation. Apart from that, Sematext supports creating custom dashboards and charts from all your collected log data.
Wrap up
By now, you have the basic knowledge to build a logging strategy for your Kubernetes cluster. And you should definitely put some effort into it, whether you choose an internal logging system or a third-party solution. It’s crucial to have all logs in one place where you can visualize and analyze them as they enable you to monitor and troubleshoot your Kubernetes environment easily. That way, you can reduce the odds of anomalies occurring on the customer’s end.
Implementing a logging infrastructure is neither an easy nor a quick process. But keep in mind that once you have a proper system in place, instead of losing time scaling your logging infrastructure, you can focus on monitoring the key metrics that help you scale your product and, ultimately, grow your revenue. Check out our Kubernetes monitoring tutorial to learn how to track cluster performance, and our guide about monitoring and alerting if you need a refresher on why you need to monitor your applications in the first place.