Tutorial: Logging with journald

April 28, 2020

I’m sure you’ve bumped into journald: it’s what most Linux distros use by default for system logging. Most applications running as a service will also log to the journal. So how do you make use of these logs to:

  • find the error or debug message that you’re looking for?
  • make sure logs don’t fill your disk?
  • centralize journals so you don’t have to ssh to each box?

In this post, we’ll answer all the above and more. We will dive into the following topics:

  • what journald is, how it came to be, and what its benefits are
  • main configuration options, like when to remove old logs so you don’t run out of disk
  • journald and containers: can/should containers log to the journal?
  • journald vs syslog: advantages and disadvantages of both, how they integrate
  • ways to centralize journals: the advantages and disadvantages of each method, and which is the best. Spoiler alert: you can configure journald to send logs directly to Sematext Cloud; you can use the open-source Logagent as a journald aggregator; or – and this is the easiest approach of all – you can use Journald Discovery. Whichever you choose, you’ll have one place to search and analyze your journal events:

[Figure: why use journald for logging]

There are lots of other options to centralize journal entries, and lots of tools to help. We’ll explore them in detail, but before that, let’s zoom in to journald itself.

What is journald?

journald is the part of systemd that deals with logging. systemd, at its core, is in charge of managing services: it starts them up and keeps them alive.

All services and systemd itself need to log: “ssh started” or “user root logged in”, they might say. That’s where journald comes in: to capture these logs, record them, make them easy to find, and remove them when they pass a certain age.

Why use journald?

In short, because syslog sucks 🙂 Jokes aside, the paper announcing journald explained that systemd needed functionality that was hard to get through existing syslog implementations. Examples include structured logging, indexing logs for fast search, access control and signed messages.

As you might expect, not everyone agrees with these statements or with the general approach systemd took with journald. But by now, most Linux distributions have adopted systemd, which includes journald as well. journald happily coexists with syslog daemons, as:

  • some syslog daemons can both read from and write to the journal
  • journald exposes the syslog API

journald benefits

Think of journald as your mini-command-line-ELK that lives on virtually every Linux box. It provides lots of features, most importantly:

  • Indexing. journald uses a binary storage for logs, where data is indexed. Lookups are much faster than with plain text files
  • Structured logging. Though it’s possible with syslog, too, it’s enforced here. Combined with indexing, it means you can easily filter specific logs (e.g. with a set priority, in a set timeframe)
  • Access control. By default, storage files are split by user, with different permissions to each. As a regular user, you won’t see everything root sees, but you’ll see your own logs
  • Automatic log rotation. You can configure journald (see below) to keep logs only up to a space limit, or based on free space

Configuring journald

To tweak how journald behaves, you’ll edit /etc/systemd/journald.conf and then reload the journal service:

systemctl reload systemd-journald.service

Though earlier versions of journald need to be restarted:

systemctl restart systemd-journald.service

The most important settings revolve around storage: whether the journal should be kept in memory or on disk, when to remove old logs, and how aggressively to rate-limit. We’ll focus on some of those next, but you can see all the configuration options in journald.conf’s man page.

journald storage

The Storage option controls whether the journal is stored in memory (under /run/log/journal) or on disk (under /var/log/journal). Setting Storage=volatile will store the journal in memory, while Storage=persistent will store it on disk. Most distributions have it set to auto, which means it will store the journal on disk if /var/log/journal exists, otherwise it will be stored in memory.

Once you’ve decided where to store the journal, you may want to set some limits. For example, SystemMaxUse=4G will limit /var/log/journal to about 4GB. Similarly, SystemKeepFree=10G will try to keep 10GB of disk space free. If you choose to keep the journal in memory, the equivalent options are RuntimeMaxUse and RuntimeKeepFree.
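
Putting these together, a minimal /etc/systemd/journald.conf for persistent, size-limited storage might look like this (the values are illustrative, not recommendations):

[Journal]
Storage=persistent
SystemMaxUse=4G
SystemKeepFree=10G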

You can check the current disk usage of the journal via journalctl --disk-usage. If you need to, you can clean it up on demand via journalctl --vacuum-size=4G (i.e. to reduce the journal to about 4GB).

Compression is enabled by default on log entries larger than 512 bytes. If you want to change this threshold to, say, 1KB, you’d add Compress=1K.

Also by default, journald will drop log messages from a service if it exceeds certain limits. These limits can be configured via RateLimitBurst and RateLimitIntervalSec, which default to 10000 and 30s respectively. The effective burst value also depends on the available free disk space: for example, if you have more than 64GB of free space, the multiplier is 6, meaning logs from a service are dropped after 60K messages sent within 30 seconds.

The rate limit defaults are sensible, unless you have a specific service that generates lots of logs (e.g. a web server). In that case, it might be better to set LogRateLimitBurst and LogRateLimitIntervalSec in that application’s service definition.
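
For example, you could raise the limits through a drop-in override for a hypothetical nginx.service (LogRateLimitBurst and LogRateLimitIntervalSec are [Service] options in newer systemd versions):

# /etc/systemd/system/nginx.service.d/override.conf
[Service]
LogRateLimitBurst=100000
LogRateLimitIntervalSec=30s

Then reload systemd and restart the service via systemctl daemon-reload and systemctl restart nginx.service.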

journald commands via journalctl

journalctl is your main tool for interacting with the journal. If you just run it, you’ll see:

  • all entries, from oldest to newest
  • paged by less
  • lines go past the edge of your screen if they have to (use left and right arrow keys to navigate)
  • format is similar to the syslog output, as it is configured in most Linux distributions: syslog timestamp + hostname + program and its PID + message

Here’s an example snippet:

Apr 09 10:22:49 localhost.localdomain su[866]: pam_unix(su-l:session): session opened for user solr by (uid=0)
Apr 09 10:22:49 localhost.localdomain systemd[1]: Started Session c1 of user solr.
Apr 09 10:22:49 localhost.localdomain systemd[1]: Created slice User Slice of solr.
Apr 09 10:22:49 localhost.localdomain su[866]: (to solr) root on none

This is rarely what you want. More common scenarios are:

  • last N lines (e.g. the equivalent of tail -n 20 for N=20): journalctl -n 20
  • follow (tail -f equivalent): journalctl -f
  • page from newest to oldest: journalctl --reverse
  • skip paging and just grep for something (e.g. “solr”): journalctl --no-pager | grep solr

If you often find yourself using --no-pager, you can change the default pager through the SYSTEMD_PAGER variable. export SYSTEMD_PAGER=cat will disable paging. That said, you might want to look into journalctl’s own options for displaying and filtering – described below – before using text processing tools.

journalctl display settings

The main option here is --output, which can take many values. As an ELK consultant, I want my timestamps in ISO 8601 format, and --output=short-iso will do just that. Now this is more like it:

2020-04-09T10:23:01+0000 localhost.localdomain solr[860]: Started Solr server on port 8983 (pid=999). Happy searching!
2020-04-09T10:23:01+0000 localhost.localdomain su[866]: pam_unix(su-l:session): session closed for user solr

journald keeps more information than the short/short-iso output shows. With --output=json-pretty (or just json if you want it compact), a single event can look like this:

{
 "__CURSOR" : "s=83694dffb084461ea30a168e6cef1e6c;i=103f;b=f0bbba1703cb43229559a8fcb4cb08b9;m=c2c9508c;t=5a2d9c22f07ed;x=c5fe854a514cef39",
 "__REALTIME_TIMESTAMP" : "1586431033018349",
 "__MONOTONIC_TIMESTAMP" : "3267973260",
 "_BOOT_ID" : "f0bbba1703cb43229559a8fcb4cb08b9",
 "PRIORITY" : "6",
 "_UID" : "0",
 "_GID" : "0",
 "_MACHINE_ID" : "13e3a06d01d54447a683822d7e0b4dc9",
 "_HOSTNAME" : "localhost.localdomain",
 "SYSLOG_FACILITY" : "3",
 "SYSLOG_IDENTIFIER" : "systemd",
 "_TRANSPORT" : "journal",
 "_PID" : "1",
 "_COMM" : "systemd",
 "_EXE" : "/usr/lib/systemd/systemd",
 "_CAP_EFFECTIVE" : "1fffffffff",
 "_SYSTEMD_CGROUP" : "/",
 "CODE_FILE" : "src/core/job.c",
 "CODE_FUNCTION" : "job_log_status_message",
 "RESULT" : "done",
 "MESSAGE_ID" : "9d1aaa27d60140bd96365438aad20286",
 "_SELINUX_CONTEXT" : "system_u:system_r:init_t:s0",
 "UNIT" : "user-0.slice",
 "MESSAGE" : "Removed slice User Slice of root.",
 "CODE_LINE" : "781",
 "_CMDLINE" : "/usr/lib/systemd/systemd --switched-root --system --deserialize 22",
 "_SOURCE_REALTIME_TIMESTAMP" : "1586431033018103"
}

This is where you can use structured logging to filter events. Next up, we’ll look closer at the most important options for filtering.

journald log filtering

You can filter by any field (see the JSON output above) by specifying key=value arguments, like:

journalctl _SYSTEMD_UNIT=sshd.service

There are shortcuts; for example, the _SYSTEMD_UNIT filter above can be expressed as -u. The above command is the equivalent of:

journalctl -u sshd.service

Other useful shortcuts:

  • severity (here called priority). journalctl -p warning will show logs with at least a severity of warning
  • show only kernel messages: journalctl --dmesg

You can also filter by time, of course. Here, you have multiple options:

  • --since/--until as a full timestamp. For example: journalctl --since="2020-04-09 11:30:00"
  • date only (00:00:00 is assumed as the time): journalctl --since=2020-04-09
  • abbreviations: journalctl --since=yesterday --until=now
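
These filters can be combined. For example, to see messages from sshd with a severity of warning or higher, starting from yesterday:

journalctl -u sshd.service -p warning --since=yesterday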

In general, you have to specify the exact value you’re looking for. The exception is _SYSTEMD_UNIT (i.e. -u), where patterns also work:

journalctl -u "sshd*"

Newer versions of systemd also support a --grep flag, which filters the MESSAGE field by regular expression. But you can always pipe the journalctl output through grep itself.
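
For example, to look for failed SSH logins (the pattern is just illustrative, and --grep requires a reasonably recent systemd built with PCRE2 support):

journalctl -u sshd.service --grep "Failed password"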

journald and boots

Besides messages logged by applications, journald remembers significant events, such as system reboots. Here’s an example:

# journalctl MESSAGE="Server listening on 0.0.0.0 port 22."
-- Logs begin at Wed 2020-04-08 11:53:18 UTC, end at Thu 2020-04-09 12:01:01 UTC. --
Apr 08 11:53:23 localhost.localdomain sshd[822]: Server listening on 0.0.0.0 port 22.
Apr 08 13:23:42 localhost.localdomain sshd[7425]: Server listening on 0.0.0.0 port 22.
-- Reboot --
Apr 09 10:22:49 localhost.localdomain sshd[857]: Server listening on 0.0.0.0 port 22.

You can suppress these special messages via -q. Use -b to show only messages from a specific boot. For example, to show messages since the last boot:

# journalctl MESSAGE="Server listening on 0.0.0.0 port 22." -b
-- Logs begin at Wed 2020-04-08 11:53:18 UTC, end at Thu 2020-04-09 12:01:01 UTC. --
Apr 09 10:22:49 localhost.localdomain sshd[857]: Server listening on 0.0.0.0 port 22.

You can specify a boot as an offset to the current one (e.g. -b -1 is the boot before the last). You can also specify a boot ID, but for that you need to know which boot IDs are available:

# journalctl --list-boots
-1 d26652f008ef4020b15a3d510bbcb381 Wed 2020-04-08 11:53:18 UTC—Wed 2020-04-08 14:31:16 UTC
 0 f0bbba1703cb43229559a8fcb4cb08b9 Thu 2020-04-09 10:22:43 UTC—Thu 2020-04-09 12:01:01 UTC

And then:

# journalctl MESSAGE="Server listening on 0.0.0.0 port 22." -b d26652f008ef4020b15a3d510bbcb381
-- Logs begin at Wed 2020-04-08 11:53:18 UTC, end at Thu 2020-04-09 12:01:01 UTC. --
Apr 08 11:53:23 localhost.localdomain sshd[822]: Server listening on 0.0.0.0 port 22.
Apr 08 13:23:42 localhost.localdomain sshd[7425]: Server listening on 0.0.0.0 port 22.

This is all useful if you configure journald for persistent storage (see the configuration section above).

journald centralized logging

As you probably noticed, journald is quite host-centric. In practice, you’ll want to access these logs in a central location, without having to SSH into each machine.

There are multiple ways of centralizing journald logs, and we’ll detail each below:

  • systemd-journal-upload uploads journal entries. Either directly to Sematext Cloud or to a log shipper that can read its output, such as the open-source Logagent
  • systemd-journal-remote as a “centralizer”. The idea is to have all journals on one host, so you can use journalctl to search (see above). This can work in “pull” or “push” mode
  • a syslog daemon or another log shipper reads from the local journal. Then, it forwards logs to a central store like ELK or Sematext Cloud
  • journald forwards entries to a local syslog socket. Then, a log shipper (typically a syslog daemon) picks messages up and forwards them to the central store

While all these tools will work, the easiest approach by far is journald auto-discovery in Sematext Cloud. Journald Discovery brings all your systemd service logs under one roof, where you can granularly define log shipping rules by including/excluding specific services. This is what it looks like:

[Figure: journald logging tutorial]

systemd-journal-upload to ELK or Sematext Cloud

systemd-journal-upload is a service that pushes new journal entries over HTTP/HTTPS. That destination can be the Sematext Cloud Journald Receiver – the easiest way to centralize journald logs. And probably the best, as we’ll discuss below.

Although it’s part of journald/systemd, systemd-journal-upload isn’t installed by default on most distros. So you have to add it via something like:

apt-get install systemd-journal-remote

Then, uploading journal entries is as easy as:

systemd-journal-upload --url=http://logsene-journald-receiver.sematext.com:80/YOUR_LOGS_TOKEN

Though most likely you’ll want to configure it as a service:

$ cat /etc/systemd/journal-upload.conf
[Upload]
URL=http://logsene-journald-receiver.sematext.com:80/YOUR_LOGS_TOKEN
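
Then enable and start the service, so uploads survive reboots (the unit name below is the one shipped with the package):

systemctl enable --now systemd-journal-upload.service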

If you need more control, or if you want to send journal entries to your local Elasticsearch, you can use the open-source Logagent with its journald input plugin as a journald centralizer:

[Figure: how to use journald for logging]

Here’s the relevant part of logagent.conf:

input:
  journal-upload:
    module: input-journald-upload
    port: 9090
    worker: 0
    systemdUnitFilter:
      include: !!js/regexp /.*/i

Using Logagent and Elasticsearch or Sematext Cloud (i.e. we host Logagent and Elasticsearch for you) is probably the best option to centralize journald logs. That’s because you get all journald’s structured data over a reliable protocol (HTTP/HTTPS) with minimal overhead. The catch? Initial import is tricky, because it can generate a massive HTTP payload. For this, you might want to do the initial import by streaming journalctl output through Logagent, like:

journalctl --output=json --no-pager | logagent --index SEMATEXT-LOGS-TOKEN

systemd-journal-remote

Journald comes with its own “log centralizer”: systemd-journal-remote. You don’t get anywhere near the flexibility of ELK/Sematext Cloud, but it’s already there and it might be enough for small environments.

systemd-journal-remote can either pull journals from remote systems or listen for journal entries on HTTP/HTTPS. The push model – where systemd-journal-upload is in charge of pushing logs – is typically better because:

  • it can continuously tail the journal and remembers where it left off (i.e. maintains a cursor)
  • you don’t need to open access to the journal of every system

systemd-journal-remote typically comes in the same package as systemd-journal-upload. Once it’s installed, you can make it listen to HTTP/HTTPS traffic:

host2# systemd-journal-remote --listen-http=0.0.0.0:19532 --output=/var/log/journal/remote

Now you can push the journal of a remote host like this:

host1# systemd-journal-upload --url=http://host2:19532
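
For anything beyond a quick test, you’d run both sides as services. A minimal sketch, assuming the default units shipped with the package (the socket unit listens on port 19532 by default):

host2# systemctl enable --now systemd-journal-remote.socket

host1# cat /etc/systemd/journal-upload.conf
[Upload]
URL=http://host2:19532
host1# systemctl enable --now systemd-journal-upload.service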

systemd-journal-remote and systemd-journal-gatewayd

systemd-journal-remote can also pull journal entries from remote hosts. These hosts would normally serve their journal via systemd-journal-gatewayd (which is often provided by the same package). Once you have systemd-journal-gatewayd, you can start it via:

host1# systemctl start systemd-journal-gatewayd.socket

You can verify that it works like this:

curl host1:19531/entries

Then, from the “central” host, you can use systemd-journal-remote to fetch journal entries:

host2# systemd-journal-remote --url http://host1:19531

By default, systemd-journal-remote will write the imported journal to /var/log/journal/remote/ (you might have to create this directory first!), so you can search it via journalctl:

journalctl -D /var/log/journal/remote/

Tools that read directly from the journal

Another approach for centralizing journald logs is to have a log shipper read from the journal, much like journalctl does. Then, it can process logs and send them to destinations like Elasticsearch or Sematext Cloud (which exposes the Elasticsearch API).

For this approach, there’s a PoC journald input plugin for Logstash. As you probably know, Logstash is easy to use, so reading from the journal is as easy as:

input {
  journald {
  # you may add other options here, but of course the defaults are sensible :)
  }
}

Journalbeat is also available. It’s as easy to install and use as Filebeat, except that it reads from the journal. But it’s marked as experimental.

Why PoC and experimental? Because of potential journal corruption, which might lead to nasty results. Check the comments in rsyslog’s journal input documentation for details.

Syslog daemons are also log shippers. Some of them can also read from the journal, or even write to it. There’s a lot to say about syslog and the journal, so we’ll dissect the topic in a section of its own.

journald vs syslog

Journald provides a good out-of-the-box logging experience for systemd. The trade-off is that journald is a bit of a monolith, handling everything from log storage and rotation to log transport and search. Some would argue that syslog is more UNIX-y: more lenient and easier to integrate with other tools. That monolithic design was journald’s main criticism to begin with.

Flame wars aside, there’s good integration between the two. Journald provides a syslog API and can forward to syslog (see below). On the other hand, syslog daemons have journal integrations. For example, rsyslog provides plugins to both read from and write to the journal. In fact, the rsyslog documentation recommends two architectures:

  • A small setup (e.g. N embedded devices and one server) could work by centralizing journald logs (see above). If embedded devices don’t have systemd/journald but have syslog, they can centralize via syslog to the server and finally write to the server’s journal. This journal will act like a mini-ELK
  • A larger setup can work by aggregating journal entries through a syslog daemon. We’ll concentrate on this scenario in the rest of this section

There are two ways of centralizing journal entries via syslog:

  1. syslog daemon acts as a journald client (like journalctl or Logstash or Journalbeat)
  2. journald forwards messages to syslog (via socket)

Option 1) is slower, because reading from the journal is slower than reading from a socket, but it captures all the fields from the journal. Option 2) is safer (e.g. no issues with journal corruption), but the journal will only forward traditional syslog fields (like severity, hostname, and message). Typically, you’d go for 2) unless you need the structured info.

Here’s an example configuration for implementing 1) with rsyslog, and writing all messages to Elasticsearch or Sematext Cloud:

# module that reads from journal
module(load="imjournal"
 StateFile="/var/run/journal.state" # we write here where we left off
 PersistStateInterval="100" # update the state file every 100 messages
)
# journal entries are read as JSON, we'll need this to parse them
module(load="mmjsonparse")
# Elasticsearch or Sematext Cloud HTTP output
module(load="omelasticsearch")

# this is done on every message (i.e. parses the JSON)
action(type="mmjsonparse")

# output template that simply writes the parsed JSON
template(name="all-json" type="list"){
 property(name="$!all-json")
}

action(type="omelasticsearch"
 template="all-json" # use the template defined earlier
 searchIndex="SEMATEXT-LOGS-APP-TOKEN-GOES-HERE"
 server="logsene-receiver.sematext.com"
 serverport="80"
 bulkmode="on" # use the bulk API
 action.resumeretrycount="-1" # retry indefinitely if Logsene/Elasticsearch is unreachable
)

For option 2), we’ll need to configure journald to forward to a socket. It’s as easy as adding this to /etc/systemd/journald.conf:

ForwardToSyslog=yes

And it will write messages, in syslog format, to /run/systemd/journal/syslog. On the rsyslog side, you’ll have to configure its socket input module to listen to that socket. Here’s a similar example of sending logs to Elasticsearch or Sematext Cloud:

module(load="imuxsock"
 SysSock.Name="/run/systemd/journal/syslog")

# template to write traditional syslog fields as JSON
template(name="plain-syslog"
 type="list") {
 constant(value="{")
 constant(value="\"timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
 constant(value="\",\"host\":\"") property(name="hostname")
 constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
 constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
 constant(value="\",\"tag\":\"") property(name="syslogtag" format="json")
 constant(value="\",\"message\":\"") property(name="msg" format="json")
 constant(value="\"}")
}

action(type="omelasticsearch"
 template="plain-syslog" # use the template defined earlier
 searchIndex="SEMATEXT-LOGS-APP-TOKEN-GOES-HERE"
 server="logsene-receiver.sematext.com"
 serverport="80"
 bulkmode="on" # use the bulk API
 action.resumeretrycount="-1" # retry indefinitely if Logsene/Elasticsearch is unreachable
)

Whether you read the journal through syslog, systemd-journal-upload or through a log shipper, all the above methods assume that you’re dealing with Linux running on bare metal or VMs. But what if you’re using containers? Let’s explore your options in the next section.

journald and containers

In this context, I think it’s worth making a distinction between Docker containers and systemd containers. Let’s take them one at a time.

journald and Docker

Typically, a Docker container won’t have systemd, because it would make it too “heavy”. As a consequence, it won’t have journald, either. That said, you probably have journald on the host, if the host is running Linux. This means you can use the journald logging driver to send all the logs of a host’s containers to that host’s journal. It’s as easy as:

docker run --log-driver=journald my_container

And that container’s logs will be in the journal:

# journalctl CONTAINER_NAME=my_container --all
Apr 09 13:03:28 localhost.localdomain dockerd-current[25558]: hello journal
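
The journald logging driver also accepts options. For example, to control how container logs are tagged in the journal (a sketch using Docker’s log-opt support; the tag template is illustrative):

docker run --log-driver=journald --log-opt tag="{{.Name}}" my_container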

If you want to use journald by default, you can make the change in daemon.json and restart Docker:

# cat /etc/docker/daemon.json
{
 "log-driver": "journald"
}
systemctl restart docker
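
You can verify which logging driver is now the default (assuming a Docker version that supports the --format flag):

docker info --format '{{.LoggingDriver}}'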

If you have more than one host, you’re back to the centralizing problem that we explored in the previous section: getting all journals in one place. This makes journald an intermediate step that may not be necessary.

A better approach is to centralize container logs via Logagent, which can run as a container. Here, Logagent picks up logs and forwards them to a central place, like Elasticsearch or Sematext Cloud. But it’s not the only way. In fact, we explore different approaches, with their pros and cons, in our Docker logging guide.

journald and systemd containers

systemd provides containers as well (called machines) via systemd-nspawn. Unlike Docker containers, systemd-nspawn machines can log to the journal directly. You can read the logs of a specific machine like this:

journalctl --machine $MACHINE_NAME

Where $MACHINE_NAME is one of the running machines. You’d use machinectl list to see all of them.
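
For example, assuming a running machine named mymachine (-M is the short form of --machine):

machinectl list
journalctl -M mymachine -f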

As with Docker’s journald logging driver, this setup gets challenging when you have multiple hosts. You’ll either want to centralize your journals, as described in the previous section, or send logs from your systemd containers directly to the central location, either via a log shipper or a logging library.

Conclusions

Did you read all the way to the end? You’re a hero! And you probably figured out that journald is good for structured logging, quick local searches, and tight integration with systemd. Its design shows its weaknesses when it comes to centralizing log events: here we have many options, but none is perfect. That said, Logagent’s journald input and Sematext Cloud’s journald receiver (the hosted equivalent) come pretty close.
