5 Logstash Alternatives

When it comes to centralizing logs to Elasticsearch, the first log shipper that comes to mind is Logstash. People hear about it even if it’s not clear what it does:
– Bob: I’m looking to aggregate logs
– Alice: you mean… like… Logstash?

When you get into it, you realize centralizing logs often implies a bunch of things, and Logstash isn’t the only log shipper that fits the bill:

  • fetching data from a source: a file, a UNIX socket, TCP, UDP…
  • processing it: appending a timestamp, parsing unstructured data, adding Geo information based on IP
  • shipping it to a destination. In this case, Elasticsearch. And because Elasticsearch can be down or struggling, or the network can be down, the shipper would ideally be able to buffer and retry

In this post, we’ll describe Logstash and five alternative log shippers (Filebeat, Fluentd, rsyslog, syslog-ng and Logagent), so you know which one fits which use-case.


Logstash

Though it’s not the oldest shipper in this list (that would be syslog-ng, ironically the only one with “new” in its name), it’s certainly the best known. That’s because it has lots of plugins: inputs, codecs, filters and outputs. Basically, you can take pretty much any kind of data, enrich it as you wish, then push it to lots of destinations.


Pros

Logstash’s main strong point is flexibility, due to the number of plugins. Also, its clear documentation and straightforward configuration format mean it’s used in a variety of use-cases. This leads to a virtuous cycle: you can find online recipes for doing pretty much anything. Here are a few examples from us: 5 minute intro, reindexing data in Elasticsearch, parsing Elasticsearch logs, rewriting Elasticsearch slowlogs so you can replay them with JMeter.
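To give a feel for that configuration format, here is a minimal pipeline sketch along the lines described above; the file path, grok pattern and Elasticsearch address are placeholders you’d adapt to your setup:

```conf
# Minimal Logstash pipeline sketch: tail a file, parse it, enrich with GeoIP,
# ship to Elasticsearch. Paths and hosts are placeholders.
input {
  file {
    path => "/var/log/apache2/access.log"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    source => "clientip"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```

The same three-section structure (input, filter, output) holds for any Logstash pipeline, which is part of why recipes are so easy to share.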


Cons

Logstash’s Achilles’ heel has always been performance and resource consumption (the default heap size is 1GB). Though performance improved a lot over the years, it’s still a lot slower than the alternatives. We’ve done some benchmarks comparing Logstash to rsyslog and to Filebeat and Elasticsearch’s Ingest node. This can be a problem for high traffic deployments, when Logstash servers would need to be comparable with the Elasticsearch ones.

Another problem is that Logstash doesn’t buffer yet. A typical workaround is to use Redis or Kafka as a central buffer:

Logstash - Kafka - Elasticsearch
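The consuming side of such a setup could look like the sketch below (syntax is for the 5.x kafka input plugin; broker addresses and topic name are placeholders):

```conf
# Logstash instance that consumes from the central Kafka buffer
# and indexes into Elasticsearch.
input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"
    topics => ["logs"]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```

With Kafka holding the backlog, a struggling Elasticsearch only delays consumption instead of losing logs.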

Typical use-case

Because of the flexibility and abundance of recipes, Logstash is a great tool for prototyping, especially for more complex parsing. If you have big servers, you might as well install Logstash on each. You won’t need buffering if you’re tailing files, because the file itself can act as a buffer (i.e. Logstash remembers where it left off):

Logstash - Elasticsearch

If you have small servers, installing Logstash on each is a no-go, so you’ll need a lightweight log shipper on them that can push data to Elasticsearch through one (or more) central Logstash servers:

Light shipper - Logstash - Elasticsearch
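On the central Logstash servers in this setup, assuming the light shipper is Filebeat or another Beat, the receiving end is just a beats input; a minimal sketch:

```conf
# Central Logstash: receive from lightweight shippers over the Beats protocol,
# do the heavy parsing here, then index into Elasticsearch.
input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```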

As your logging project moves forward, you may or may not need to change your log shipper because of performance/cost. When choosing whether Logstash performs well enough, it’s important to have a good estimation of throughput needs – which would predict how much you’d spend on Logstash hardware.


Filebeat

As part of the Beats “family”, Filebeat is a lightweight log shipper that came to life precisely to address Logstash’s main weakness: Filebeat was made to be the lightweight shipper that tails files and pushes logs to Logstash.

With version 5.x, Elasticsearch has some parsing capabilities (like Logstash’s filters) called Ingest. This means you can push directly from Filebeat to Elasticsearch, and have Elasticsearch do both parsing and storing. You shouldn’t need a buffer when tailing files because, just as Logstash, Filebeat remembers where it left off:

Filebeat - Ingest - Elasticsearch
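A filebeat.yml sketch for this pipeline might look like the following (5.x syntax; paths are placeholders, and the pipeline name refers to a hypothetical Ingest pipeline you’d define in Elasticsearch first):

```yaml
# filebeat.yml sketch: tail files and ship straight to Elasticsearch,
# letting a named Ingest pipeline do the parsing server-side.
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/app/*.log

output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: "apache-logs"   # hypothetical Ingest pipeline name
```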

If you need buffering (e.g. because you don’t want to fill up the file system on logging servers), you can use Redis/Kafka, because Filebeat can talk to them:

Filebeat - Kafka - Elasticsearch


Pros

Filebeat is just a tiny binary with no dependencies. It takes very little resources and, though it’s young, I find it quite reliable – mainly because it’s simple and there are few things that can go wrong. That said, you still have lots of knobs: for example, how aggressively it should search for new files to tail, and when to close file handles for files that haven’t changed in a while.


Cons

Filebeat’s scope is very limited, so you’ll usually have a problem to solve somewhere else in the pipeline. For example, if you use Logstash down the pipeline, you get about the same performance issue. Because of this, Filebeat’s scope is growing. Initially it could only send logs to Logstash and Elasticsearch, but now it can send to Kafka and Redis, and in 5.x it also gains filtering capabilities.

Typical use-cases

Filebeat is great for solving a specific problem: you log to files, and you want to either:

  • ship directly to Elasticsearch. This works if you want to just “grep” them or if you log in JSON (Filebeat can parse JSON). Or, if you want to use Elasticsearch’s Ingest for parsing and enriching (assuming the performance and functionality of Ingest fits your needs)
  • put them in Kafka/Redis, so another shipper (e.g. Logstash, or a custom Kafka consumer) can do the enriching and shipping. This assumes that the chosen shipper fits your functionality and performance needs


Logagent

Logagent is our own log shipper, born out of the need to make it easy for someone who has never used a log shipper before to send logs to Logsene (our logging SaaS, which exposes the Elasticsearch API). And because Logsene exposes the Elasticsearch API, Logagent can just as easily be used to push data to Elasticsearch.


Pros

The main one is ease of use: where Logstash is easy (though you still need a bit of learning if you’ve never used it, which is natural), Logagent really gets you started in a minute. It tails everything in /var/log out of the box and parses various logging formats out of the box (Elasticsearch, Solr, MongoDB, Apache HTTPD…). It can mask sensitive data like PII, dates of birth, credit card numbers, etc. It will also do GeoIP enriching based on IPs (e.g. for access logs) and update the GeoIP database automatically. It’s also light and fast – you’ll be able to put it on most logging boxes (unless you have very small ones, like appliances). The new 2.x version added support for pluggable inputs and outputs in the form of 3rd party Node.js modules. Very importantly, Logagent has local buffering so, unlike Logstash, it will not lose your logs when the destination is not available.


Cons

Logagent is still young, although it is developing and maturing quickly. It has some interesting functionality (e.g. it accepts Heroku or CloudFoundry logs), but it is not yet as flexible as Logstash.

Typical use-cases

Logagent is a good choice for a shipper that can do everything (tail, parse, buffer – yes, it can buffer on disk – and ship) and that you can install on each logging server, especially if you want to get started quickly. Logagent is also embedded in Sematext Docker Agent to parse and ship Docker container logs. Sematext Docker Agent works with Docker Swarm, Docker Datacenter, Docker Cloud, as well as Amazon EC2, Google Container Engine, Kubernetes, Mesos, RancherOS, and CoreOS, so for Docker log shipping, this is the tool to use.


rsyslog

The default syslog daemon on most Linux distros, rsyslog can do so much more than just picking logs from the syslog socket and writing to /var/log/messages. It can tail files, parse them, buffer (on disk and in memory) and ship to a number of destinations, including Elasticsearch. You can find a howto for processing Apache and system logs here.
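A sketch of the tail-and-ship part, using the imfile input and the omelasticsearch output (file path, tag, template fields and index name are all placeholders):

```conf
# rsyslog sketch: tail a log file and ship it to Elasticsearch in bulk.
module(load="imfile")
module(load="omelasticsearch")

input(type="imfile"
      File="/var/log/app.log"
      Tag="app:")

# Shape of the JSON documents sent to Elasticsearch
template(name="logs-json" type="list") {
  constant(value="{\"timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
  constant(value="\",\"message\":\"") property(name="msg" format="json")
  constant(value="\"}")
}

action(type="omelasticsearch"
       server="localhost"
       serverport="9200"
       template="logs-json"
       searchIndex="logs"
       bulkmode="on")
```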


Pros

rsyslog is the fastest shipper we’ve tested so far. If you use it as a simple router/shipper, any decent machine will be limited by network bandwidth, but it really shines when you parse with multiple rules. Its grammar-based parsing module (mmnormalize) works at constant speed no matter the number of rules (we tested this claim). This means that with 20-30 rules, like you have when parsing Cisco logs, it can outperform regex-based parsers like grok by a factor of 100 (it can be more or less, depending on the grok implementation and liblognorm version).
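Hooking mmnormalize in is a couple of lines; the rules themselves live in a liblognorm rulebase file. A sketch, with hypothetical field names:

```conf
# Grammar-based parsing with mmnormalize: rules are compiled from a
# liblognorm rulebase, so matching cost stays constant as rules are added.
module(load="mmnormalize")
action(type="mmnormalize" rulebase="/etc/rsyslog.d/app.rb")

# /etc/rsyslog.d/app.rb would contain rule lines such as (hypothetical):
#   rule=:%client:ipv4% %method:word% %path:word%
```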

It’s also one of the lightest parsers you can find, depending on the configured memory buffers.


Cons

rsyslog requires more work to get the configuration right (you can find some sample configuration snippets here on our blog) and this is made more difficult by two things:

  • documentation is hard to navigate, especially for somebody new to the terminology
  • versions up to 5.x had a different configuration format (expanded from the syslogd config format, which it still supports). Newer versions can still work with the old format, but most newer features (like the Elasticsearch output) only work with the new configuration format; then again, there are older plugins (for example, the Postgres output) which only support the old format

Though rsyslog tends to be reliable once you get to a stable configuration (and it’s rich enough that there are usually multiple ways of getting the same result), you’re likely to find some interesting bugs along the way. Not all features are tested as part of the testbench.

Typical use-cases

rsyslog fits well in scenarios where you need something very light yet capable (an appliance, a small VM, collecting syslog from within a Docker container). If you need to do processing in another shipper (e.g. Logstash), you can forward JSON over TCP, for example, or connect them via a Kafka/Redis buffer.

rsyslog also works well when you need that ultimate performance. Especially if you have multiple parsing rules. Then it makes sense to invest time in getting that configuration working.


syslog-ng

You can think of syslog-ng as an alternative to rsyslog (though historically it was actually the other way around). It’s also a modular syslog daemon that can do much more than just syslog. It recently received disk buffers and an Elasticsearch HTTP output. Equipped with a grammar-based parser (PatternDB), it has all you probably need to be a good log shipper to Elasticsearch.
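Put together, a receive–parse–ship configuration could be sketched like this (the elasticsearch2() options shown are placeholders – check your version’s documentation – and the PatternDB file is assumed to exist):

```conf
# syslog-ng sketch: receive syslog, parse with PatternDB, ship to Elasticsearch.
source s_net {
  syslog(port(514));
};

parser p_db {
  db-parser(file("/etc/syslog-ng/patterndb.xml"));
};

destination d_es {
  elasticsearch2(
    client-mode("http")
    cluster-url("http://localhost:9200")
    index("syslog-${YEAR}.${MONTH}.${DAY}")
    type("syslog")
  );
};

log {
  source(s_net);
  parser(p_db);
  destination(d_es);
};
```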


Pros

Like rsyslog, it’s a light log shipper and it also performs well. It used to be a lot slower than rsyslog, and I haven’t benchmarked the two recently, but 570K logs/s two years ago isn’t bad at all. Unlike rsyslog, it features a clear, consistent configuration format and has nice documentation.


Cons

The main reason distros switched to rsyslog was syslog-ng Premium Edition, which used to be much more feature-rich than the Open Source Edition, which was somewhat restricted back then. We’re concentrating on the Open Source Edition here; all the log shippers in this post are open source. Things have changed in the meantime – for example, disk buffers, which used to be a PE feature, landed in OSE. Still, some features, like the reliable delivery protocol (with application-level acknowledgements), have not made it to OSE yet.

Typical use-cases

Similarly to rsyslog, you’d probably want to deploy syslog-ng on boxes where resources are tight, yet you do want to perform potentially complex processing. As with rsyslog, there’s a Kafka output that allows you to use Kafka as a central queue and potentially do more processing in Logstash or a custom consumer:

syslog-ng - Kafka - Elasticsearch

The difference is, syslog-ng has an easier, more polished feel than rsyslog, but likely not that ultimate performance: for example, only outputs are buffered, so processing is done before buffering – meaning that a processing spike would put back-pressure further up the logging stream.


Fluentd

Fluentd was built on the idea of logging in JSON wherever possible (a practice we totally agree with) so that log shippers down the line don’t have to guess which substring is which field of which type. As a result, there are libraries for virtually every language, meaning you can easily plug in your custom applications to your logging pipeline.


Pros

Like most Logstash plugins, Fluentd plugins are written in Ruby and are very easy to write. So there are lots of them: pretty much every source and destination has a plugin (with varying degrees of maturity, of course). This, coupled with the “fluent libraries”, means you can easily hook almost anything to anything using Fluentd.
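A typical tail-to-Elasticsearch setup (via the third-party fluent-plugin-elasticsearch output) can be sketched as below; paths and the tag are placeholders:

```conf
# Fluentd sketch: tail a JSON log file and ship it to Elasticsearch.
<source>
  @type tail
  path /var/log/app/app.log
  pos_file /var/log/td-agent/app.log.pos
  format json
  tag app.access
</source>

<match app.**>
  @type elasticsearch
  host localhost
  port 9200
  logstash_format true
</match>
```

With JSON input, the source needs no parsing rules at all, which is exactly the point of the log-in-JSON philosophy.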


Cons

Because in most cases you’ll get structured data through Fluentd, it’s not made to have the flexibility of other shippers on this list (Filebeat excluded). You can still parse unstructured data via regular expressions and filter logs using tags, for example, but you don’t get features such as local variables or full-blown conditionals. Also, while performance is fine for most use-cases, it’s not at the top of this list: buffers exist only for outputs (as in syslog-ng), and the single-threaded core plus the Ruby GIL for plugins mean that ultimate performance on big boxes is limited, though resource consumption is acceptable for most use-cases. For small/embedded devices, you might want to look at Fluent Bit, which is to Fluentd what Filebeat is to Logstash.

Typical use-cases

Fluentd is a good fit when you have diverse or exotic sources and destinations for your logs, because of the number of plugins. Also, if most of the sources are custom applications, you may find it easier to work with fluent libraries than coupling a logging library with a log shipper. Especially if your applications are written in multiple languages – meaning you’d use multiple logging libraries, which may behave differently.

The conclusion?

First of all, the conclusion is that you’re awesome for reading all the way to this point. If you did that, you get the nuances of an “it depends on your use-case” kind of answer. All these shippers have their pros and cons, and ultimately it’s down to your specifications (and in practice, also to your personal preferences) to choose the one that works best for you. If you need help deciding, integrating, or really any help with logging don’t be afraid to reach out – we offer Logging Consulting. Similarly, if you are looking for a place to ship your logs and avoid costs/headaches associated with running the full ELK/Elastic Stack on your own servers, check out Logsene – it exposes Elasticsearch API, so you can use it with all shippers we covered here.

25 thoughts on “5 Logstash Alternatives”

  1. Nice posting but little bit off the topic. Logstash alternatives really mean living without Logstash and using something other.

    You could have mentioned logstash is unable to easily work in multitenant world. You need to tag incoming feed instead of really working as properly streaming. For me after 35 years of various streaming Logstash semantic is really weird – personally as an architect would have done it differently meaning when you need to aggregate streams then create a virtual channel.

    Also Logstash, even the latest one, has lots of bugs. It also does not really tell what’s wrong with it. Try having a DNS problem and watch what Logstash does .. nothing. Also really hard to find out how to do things like drop log4j – if it is configured improperly Logstash fails but does not tell you that either.

    Have the latest version and noticed at least 5 to 10 various annoying bugs.

    Is the only way to code own *proper* version of Logstash? Personally do not understand why it needs 1GB+ while have done mjpeg proxy and had 200 clients needed less than 100MB and was real time when used early Xeons decade+ ago.

    1. Thanks for sharing your experience with Logstash. For sure we couldn’t have covered everything, any additions here in the comments section is welcome.

      The feature overlap isn’t perfect between Logstash and the others but then that’s never the case for competitors 🙂 I think for most use-cases you can pick any of the other five. Which I think answers your question on whether the only way is to code your own Logstash. Of course you can do that, it’s just not trivial.

      The 1GB+ memory requirement is mostly because Logstash runs in a JVM. High-traffic installations of any shipper will likely need significant memory for buffering.

    1. @metadaddy – yes, we’ve looked at it in its early days. We wanted to make sure people can use it to ship data to Logsene ( http://sematext.com/logsene ) but if I recall correctly StreamSets used only ES TransportClient. I seem to recall trying to explain using the HTTP API would be better. Maybe that is the case now that ES 5.x is out with a nice Java client that uses the HTTP API?

    1. Thanks, Mathew! It was an interesting read. Indeed there are so many options, Mozilla Heka is another one… We’d love it if someone (or we, in the future) will add more (hopefully as objective as possible) reviews of shippers not included in this post.

  2. Very thorough list, when working with rsyslogd recently I was pleasantly surprised by all the possibilities, although the configuration syntax is anything but self explanatory.

    Now if only there were an alternative to ElasticSearch that doesn’t suffer the Garbage Collection death spiral 😉

    1. We’ve had relatively good experience with rsyslog, though we hit a couple of rough bugs. Re ES … it’s not *so* bad :). If you need help with ES, there are lots of ES resources on this very blog, there are classes, etc. And if you like ES for logs, but don’t want to manage ES, there is always Logsene – http://sematext.com/logsene

  3. Another important point worth mentioning is that all those tools are OSS. Therefore, it’s of paramount importance to compare the communities behind each product. I’d be happy to share my experiences with some of the mentioned products in order to enrich this post.

    1. Hi Fabien,

      I actually did mention that all these tools are OSS – though there are so many important things it’s hard to figure out which should be emphasized 🙂

      I’d love to hear your experiences – I obviously have mine though I’ve avoided to share them because they are subjective by definition. Even though if we can collect enough such experiences, we can extract some “more objective” ideas 🙂

  4. There is a typo: “[syslog-ng] recently received memory buffers”
    That should read “disk buffers”.
    Memory buffers have been there for ages

  5. The biggest weakness of Logagent in my opinion is it’s written in Node. This makes it very difficult to support operationally. Much prefer things written in Go, I can drop the binary anywhere, even on appliances, and not have to worry about dependencies.

    1. That’s interesting to hear. Here at Sematext we rarely hear this sort of comment and we see a lot of people loving Logagent because it just works. But yes, the good point about runtime vs. binary.

  6. This is a great post. Very helpful as usual. I use rsyslog, logstash, python, kafka, and redis in the care and feeding of my 40TB elasticsearch cluster. One tool I really appreciate and take for granted is nxlog. I use it to scrape windows events off a central event collector and forward to my rsyslog frontend at about 500 msgs/s. While I can attest to the high performance of rsyslog, nxlog is no slouch and may be more approachable and more easily configurable for newbies. I do not have experience with direct nxlog -> elasticsearch but here’s some info: http://nxlog.org/using-nxlog-elasticsearch-and-kibana

    Hope that helps. Thanks so much for the post and your previous writings! We do similar things and I steal your ideas where we don’t. 🙂

    1. Thanks for your very informative comment! I must admit I haven’t been working with nxlog enough so it didn’t get to the top 5, but it sounds like that should change 🙂 Windows event log collection is definitely a big plus in a mixed environment (like many deployments have).

  7. First of all, thank you for mentioning syslog-ng! Also, based on many discussions with syslog-ng users, I’d like to say that the figure you show about syslog-ng is not a typical use case.

    Generally people use syslog-ng clients to collect log messages and to send it to a central syslog-ng server. This central server runs message parsing using patterndb or any of the other parsers and sends the results directly to the Elasticsearch servers using the Java-based Elasticsearch driver of syslog-ng.

    In larger installations relays do the parsing and send the results to a cluster of syslog-ng servers which send logs to Elasticsearch.

    What I hear most often from our users is that they replace Logstash with syslog-ng due to resource usage. Even if the actual Elasticsearch driver is in Java, all of the collection and processing is done in efficient C code. It’s also an advantage that the same application can be used everywhere: on the client, relay and server. There is no need for external queuing / buffering, like Redis or Kafka, which makes the whole architecture a lot simpler and easier to manage.

    1. Thanks a lot for your valuable comment, Peter! The typical use-case I was pointing to was actually the resource-tight, yet complex processing one. The figure is merely pointing to how I’d use syslog-ng in a more complex pipeline that involves Kafka (since syslog-ng recently gained a Kafka destination). I thought “that requires a figure” 🙂

      That said, it does indeed make sense that processing is typically done on central syslog-ng boxes (or relays – for large installs). Because it’s often that you can’t make significant changes (e.g. upgrade syslog-ng) on the logging boxes. Especially if those are routers, now I realize it’s a common case for our clients as well.
