5 Logstash Alternatives

Radu Gheorghe

When it comes to centralizing logs to Elasticsearch, the first log shipper that comes to mind is Logstash. People hear about it even if it’s not clear what it does:
– Bob: I’m looking to aggregate logs
– Alice: you mean… like… Logstash?

When you get into it, you realize centralizing logs often implies a bunch of things, and Logstash isn’t the only log shipper that fits the bill:

  • fetching data from a source: a file, a UNIX socket, TCP, UDP…
  • processing it: appending a timestamp, parsing unstructured data, adding Geo information based on IP
  • shipping it to a destination. In this case, Elasticsearch. And because Elasticsearch can be down or struggling, or the network can be down, the shipper would ideally be able to buffer and retry

In this post, we’ll describe Logstash and 5 of the best “alternative” log shippers (Filebeat, Fluentd, rsyslog, syslog-ng and Logagent), so you know which fits which use-case depending on their advantages.



Logstash is not the oldest shipper of this list (that would be syslog-ng, ironically the only one with “new” in its name), but it’s certainly the best known. That’s because it has lots of plugins: inputs, codecs, filters and outputs. Basically, you can take pretty much any kind of data, enrich it as you wish, then push it to lots of destinations.
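That input–filter–output pipeline maps directly onto Logstash’s configuration format. Here’s a minimal sketch; the file path, grok pattern and hosts are illustrative, not from any particular deployment:

```conf
# Minimal Logstash pipeline: one input, two filters, one output.
input {
  file {
    path => "/var/log/myapp/*.log"
  }
}

filter {
  grok {
    # parse unstructured lines into fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    # add Geo information based on the client IP
    source => "clientip"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```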

Typical use cases: What is Logstash used for?

Logstash is typically used for collecting, parsing, and storing logs for future use as part of log management.

Logstash Advantages

Logstash’s main strong point is flexibility, owing to its large number of plugins.

Also, its clear documentation and straightforward configuration format means it’s used in a variety of use-cases. This leads to a virtuous cycle: you can find online recipes for doing pretty much anything.

Here are a few Logstash recipe examples from us: “5 minute tutorial intro”, “How to reindex data in Elasticsearch”, “How to parse Elasticsearch logs”, “How to rewrite Elasticsearch slowlogs so you can replay them with JMeter”.

Logstash Disadvantages

Logstash’s biggest con, or “Achilles’ heel”, has always been performance and resource consumption (the default heap size is 1GB).

Though performance improved a lot over the years, it’s still a lot slower than the alternatives. We’ve done some benchmarks comparing Logstash to rsyslog, and to Filebeat combined with Elasticsearch’s Ingest node.

This can be a problem for high traffic deployments, when Logstash servers would need to be comparable with the Elasticsearch ones. That said, you can delegate the heavy processing to one or more central Logstash boxes, while keeping the logging servers with a simpler – and thus less resource-consuming – configuration.

This works best with versions 5 and later, which come with configurable in-memory or on-disk buffers.
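For example, in Logstash 5.x+ you can switch the default in-memory queue to an on-disk one in `logstash.yml` (the size and path below are illustrative):

```yaml
# logstash.yml – enable the persistent (on-disk) queue
queue.type: persisted          # default is "memory"
queue.max_bytes: 1gb           # cap on the disk buffer (illustrative)
path.queue: /var/lib/logstash/queue
```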

Because of the flexibility and abundance of recipes, Logstash is a great tool for prototyping, especially for more complex parsing.

If you have big servers, you might as well install Logstash on each. You won’t need much buffering if you’re tailing files, because the file itself can act as a buffer (i.e. Logstash remembers where it left off):

(Figure: Logstash → Elasticsearch)

If you have small servers, installing Logstash on each is a no-go, so you’ll need a lightweight log shipper on them that can push data to Elasticsearch through one (or more) central Logstash servers:

(Figure: light shipper → Logstash → Elasticsearch)

As your logging project moves forward, you may or may not need to change your log shipper because of performance/cost.

When choosing whether Logstash performs well enough, it’s important to have a good estimation of throughput needs – which would predict how much you’d spend on Logstash hardware.



Logstash vs Filebeat

As part of the Beats “family”, Filebeat is a lightweight log shipper that came to life precisely to address Logstash’s main weakness: it was made to be the lightweight shipper that tails logs and pushes them to Logstash or Elasticsearch.

So the main differences between Logstash and Filebeat are that Logstash has more functionality, while Filebeat uses fewer resources. The same goes when you compare Logstash vs Beats in general: while Logstash has a lot of inputs, there are specialized beats (most notably MetricBeat) that do the job of collecting data with very little CPU and RAM.

Filebeat Advantages

Filebeat is just a tiny binary with no dependencies. It uses very few resources and, though it’s young, I find it quite reliable – mainly because it’s simple and there are few things that can go wrong. That said, you still have plenty of knobs for what it does do: for example, how aggressively it should search for new files to tail, and when to close file handles for files that haven’t changed in a while.

Another great thing about Filebeat is that, since 5.2, it comes with modules for specific log types. For example, the apache module will point Filebeat to default access.log and error.log paths, configure Elasticsearch’s Ingest node to parse them, configure Elasticsearch’s mappings and settings as well as deploy Kibana dashboards for analyzing things like response time and response code breakdown.
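Enabling a module is a few lines in `filebeat.yml` – a sketch assuming default log paths (note the module was named `apache2` in early versions and `apache` later; dashboard-loading options also vary by version):

```yaml
# filebeat.yml – use the Apache module and let Filebeat set up
# the Ingest pipeline, mappings and Kibana dashboards
filebeat.modules:
  - module: apache2        # "apache" on newer versions
    access:
      enabled: true
    error:
      enabled: true

setup.dashboards.enabled: true   # option name varies by version

output.elasticsearch:
  hosts: ["localhost:9200"]
```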

Filebeat Disadvantages

Filebeat’s scope is very limited, so you’ll have a problem left to solve somewhere else. For example, if you use Logstash down the pipeline, you get about the same performance issue. Because of this, Filebeat’s scope is growing: initially it could only send logs to Logstash and Elasticsearch, but now it can send to Kafka and Redis, and with 5.x it also gained filtering capabilities.

Filebeat Typical use-cases

Filebeat is great for solving a specific problem: you log to files, and you want to either:

  • ship directly to Elasticsearch. This works if you want to just “grep” them or if you log in JSON (Filebeat can parse JSON). Or, if you want to use Elasticsearch’s Ingest for parsing and enriching (assuming the performance and functionality of Ingest fits your needs)
  • put them in Kafka/Redis, so another shipper (e.g. Logstash, or a custom Kafka consumer) can do the enriching and shipping. This assumes that the chosen shipper fits your functionality and performance needs

Filebeat to Elasticsearch’s Ingest

Since version 5.x, Elasticsearch has some parsing capabilities (like Logstash’s filters) called Ingest. This means you can push directly from Filebeat to Elasticsearch, and have Elasticsearch do both parsing and storing.

You shouldn’t need a buffer when tailing files because, just like Logstash, Filebeat remembers where it left off:

(Figure: Filebeat → Ingest → Elasticsearch)
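A sketch of such an Ingest pipeline (the pipeline name and grok pattern are illustrative); Filebeat’s Elasticsearch output can then reference it through its `pipeline` setting:

```
PUT _ingest/pipeline/access-logs
{
  "description": "parse Apache-style access logs",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{COMBINEDAPACHELOG}"]
      }
    }
  ]
}
```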

Filebeat to Kafka

If you need buffering (e.g. because you don’t want to fill up the file system on logging servers), you can use a central Logstash for that.

However, Logstash’s queue doesn’t have built-in sharding or replication. For larger deployments, you’d typically use Kafka as a queue instead, because Filebeat can talk to Kafka as well:

(Figure: Filebeat → Kafka → Elasticsearch)
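The Filebeat side of that setup can be sketched like this (broker addresses, paths and the topic name are illustrative; on Filebeat 6.3+ `filebeat.prospectors` became `filebeat.inputs`):

```yaml
# filebeat.yml – tail files and ship to Kafka instead of Elasticsearch
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/myapp/*.log

output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092"]
  topic: "logs"
```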

To summarize the differences between Logstash and Filebeat:


|  | Logstash | Filebeat |
| --- | --- | --- |
| Resource usage | heavy | light |
| Input options | many | fewer: files, TCP/UDP (including syslog), stdin |
| Output options | many | fewer: Logstash, Elasticsearch, console, file |
| Buffering | disk, memory | small memory buffer for performance |

Logstash vs Logagent

This is our log shipper, born out of the need to make it easy for someone who never used a log shipper before to send logs to Logsene (our logging SaaS, which exposes the Elasticsearch API). Because of that, Logagent can just as easily be used to push data to your own Elasticsearch cluster.

Logagent Advantages

The main one is ease of use: if Logstash is easy (granted, you still need a bit of learning if you’ve never used it, which is natural), Logagent really gets you started in a minute. It tails everything in /var/log out of the box and parses various logging formats out of the box (Elasticsearch, Solr, MongoDB, Apache HTTPD…).

It can mask sensitive data like PII, date of birth, credit card numbers, etc. It will also do GeoIP enriching based on IPs (e.g. for access logs) and update the GeoIP database automatically. It’s also light and fast, you’ll be able to put it on most logging boxes (unless you have very small ones, like appliances).

The new 2.x version added support for pluggable inputs and outputs in the form of 3rd-party Node.js modules. Like Logstash, it can have persistent buffers, and it can write to and read from Kafka.

Logagent Disadvantages

Logagent is still young, although it is developing and maturing quickly. It has some interesting functionality (e.g. it accepts Heroku or CloudFoundry logs), but it is not yet as flexible as Logstash.

To summarize, the main differences between Logstash and Logagent are that Logstash is more mature and has more out-of-the-box functionality, while Logagent is lighter and easier to use.

Logagent Typical use-cases

Logagent is a good choice for a shipper that can do everything (tail, parse, buffer – yes, it can buffer on disk – and ship) and that you can install on each logging server, especially if you want to get started quickly.

Logagent is embedded in Sematext Docker Agent to parse and ship Docker containers logs. Sematext Docker Agent works with Docker Swarm, Docker Datacenter, Docker Cloud, as well as Amazon EC2, Google Container Engine, Kubernetes, Mesos, RancherOS, and CoreOS, so for Docker log shipping, this is the tool to use.

Logstash vs rsyslog

The default syslog daemon on most Linux distros, rsyslog can do so much more than just picking logs from the syslog socket and writing to /var/log/messages. It can tail files, parse them, buffer (on disk and in memory) and ship to a number of destinations, including Elasticsearch.

You can find more info on how to use it for processing Apache and system logs here.

Rsyslog Advantages

rsyslog is the fastest shipper that we tested so far.

If you use it as a simple router/shipper, any decent machine will be limited by network bandwidth, but it really shines when parsing involves multiple rules. Its grammar-based parsing module (mmnormalize) works at constant speed no matter the number of rules (we tested this claim). This means that with 20-30 rules, like you have when parsing Cisco logs, it can outperform regex-based parsers like grok by a factor of 100 (it can be more or less, depending on the grok implementation and liblognorm version).
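A hedged sketch of what that looks like in rsyslog’s newer config format (the rulebase path and the rule itself are illustrative):

```conf
# rsyslog – grammar-based parsing with mmnormalize
module(load="imfile")
module(load="mmnormalize")

input(type="imfile" file="/var/log/myapp/app.log" tag="myapp")

# parse each message against a liblognorm rulebase
action(type="mmnormalize" rulebase="/etc/rsyslog.d/myapp.rb")

# /etc/rsyslog.d/myapp.rb would hold rules such as:
#   rule=:%client:ipv4% %method:word% %path:word% took %latency:number%ms
```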

It’s also one of the lightest parsers you can find, depending on the configured memory buffers.

Rsyslog Disadvantages

rsyslog requires more work to get the configuration right (you can find some sample configuration snippets here on our blog) and this is made more difficult by two things:

  • documentation is hard to navigate, especially for somebody new to the terminology
  • versions up to 5.x had a different configuration format (expanded from the syslogd config format, which it still supports). Newer versions can still work with the old format, but most newer features (like the Elasticsearch output, or the Kafka input and output) only work with the new configuration format. Then again, some older plugins (for example, the Postgres output) only support the old format

Though rsyslog tends to be reliable once you get to a stable configuration (and it’s rich enough that there are usually multiple ways of getting the same result), you’re likely to find some interesting bugs along the way. Automatic testing constantly improves in rsyslog, but it’s not yet as good as something like Logstash or Filebeat.

To summarize, the main difference between Logstash and rsyslog is that Logstash is easier to use while rsyslog is lighter.

Rsyslog Typical use-cases

rsyslog fits well in scenarios where you need something very light yet capable (an appliance, a small VM, collecting syslog from within a Docker container). If you need to do the processing in another shipper (e.g. Logstash), you can forward JSON over TCP, for example, or connect the two via a Kafka/Redis buffer.
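Forwarding JSON over TCP might look like this (the target host/port are illustrative; the `jsonmesg` property requires a reasonably recent rsyslog):

```conf
# rsyslog – send each event as one JSON line over TCP
template(name="json-lines" type="list") {
  property(name="jsonmesg")    # the whole message as JSON
  constant(value="\n")
}

action(type="omfwd" target="logstash.example.com" port="5514"
       protocol="tcp" template="json-lines")
```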

rsyslog also works well when you need that ultimate performance, especially if you have multiple parsing rules; then it makes sense to invest time in getting its configuration right.

To summarize the differences between Logstash and rsyslog:


|  | Logstash | rsyslog |
| --- | --- | --- |
| Resource usage | heavy | light |
| Inputs | many | fewer: files, all syslog flavors, Kafka |
| Filters | many | fewer: GeoIP, anonymizing, etc., though events can be manipulated through variables and templates |
| Outputs | many | many (Elasticsearch, Kafka, SQL…), though still fewer than Logstash |
| Regex parsing | grok | grok (less mature) |
| Grammar-based parsing | dissect (less mature) | liblognorm (powerful, fast) |
| Multiple processing pipelines | yes | yes |
| Exposes internal metrics | yes, pull (HTTP API) | yes, push (input module) |
| Queues | memory, disk | memory, disk, hybrid; outputs can have their own queues |
| Variables | event-specific (metadata) | event-specific and global |

Logstash vs syslog-ng

You can think of syslog-ng as an alternative to rsyslog (though historically it was actually the other way around). It’s also a modular syslog daemon, that can do much more than just syslog. It recently received disk buffers and an Elasticsearch HTTP output. Equipped with a grammar-based parser (PatternDB), it has all you probably need to be a good log shipper to Elasticsearch.

Syslog-ng Advantages

Like rsyslog, it’s a light log shipper and it also performs well. It used to be a lot slower than rsyslog, and I haven’t benchmarked the two recently, but 570K logs/s years ago isn’t bad at all.

Unlike rsyslog, it features a clear, consistent configuration format and has nice documentation. Packaging support for various distros is also very good.

Syslog-ng Disadvantages

The main reason why distros switched to rsyslog was syslog-ng Premium Edition, which used to be much more feature-rich than the Open Source Edition, which was somewhat restricted back then. We’re concentrating on the Open Source Edition here, since all the log shippers in this post are open source.

Things have changed in the meantime: disk buffers, for example, used to be a PE feature and have since landed in OSE. Still, some features, like the reliable delivery protocol (with application-level acknowledgements), have not made it to OSE yet.

Syslog-ng Typical use-cases

Similarly to rsyslog, you’d probably want to deploy syslog-ng on boxes where resources are tight, yet you do want to perform potentially complex processing.

As with rsyslog, there’s a Kafka output that allows you to use Kafka as a central queue and potentially do more processing in Logstash or a custom consumer:

(Figure: syslog-ng → Kafka → Elasticsearch)
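A sketch of that pipeline in syslog-ng’s config format (the broker address and topic are illustrative; the exact options of the `kafka()` destination depend on your syslog-ng version):

```conf
# syslog-ng – local syslog into a Kafka topic
source s_local {
  system();
  internal();
};

destination d_kafka {
  kafka(
    bootstrap-servers("kafka1:9092")
    topic("logs")
  );
};

log { source(s_local); destination(d_kafka); };
```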

The difference is that syslog-ng has an easier, more polished feel than rsyslog, but likely not the same ultimate performance: for example, only outputs are buffered, so processing is done before buffering – meaning a processing spike would put pressure up the logging stream.

Logstash vs Fluentd

Fluentd was built on the idea of logging in JSON wherever possible (which is a practice we totally agree with) so that log shippers down the line don’t have to guess which substring is which field of which type.

As a result, there are libraries for virtually every language, meaning you can easily plug in your custom applications to your logging pipeline.

Fluentd Advantages

Like most Logstash plugins, Fluentd plugins are in Ruby and very easy to write. So there are lots of them, pretty much any source and destination has a plugin (with varying degrees of maturity, of course). This, coupled with the “fluent libraries” means you can easily hook almost anything to anything using Fluentd.
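As an example of how the pieces fit together, a hedged Fluentd sketch (paths are illustrative; the Elasticsearch output comes from the third-party `fluent-plugin-elasticsearch` plugin):

```conf
# Fluentd – tail a JSON log file and ship it to Elasticsearch
<source>
  @type tail
  path /var/log/myapp/app.log
  pos_file /var/lib/fluentd/myapp.pos
  tag myapp.access
  format json
</source>

<match myapp.**>
  @type elasticsearch
  host localhost
  port 9200
  logstash_format true
</match>
```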

Fluentd Disadvantages

Because in most cases you’ll get structured data through Fluentd, it’s not made to have the flexibility of other shippers on this list (Filebeat excluded). You can still parse unstructured data via regular expressions and filter events using tags, for example, but you don’t get features such as local variables or full-blown conditionals.

Also, while performance is fine for most use-cases, it’s not at the top of this list: buffers exist only for outputs (like in syslog-ng), and the single-threaded core plus the Ruby GIL for plugins mean ultimate performance on big boxes is limited. That said, resource consumption is acceptable for most use-cases.

For small/embedded devices, you might want to look at Fluent Bit, which is to Fluentd what Filebeat is to Logstash.

Fluentd Typical use-cases

Fluentd is a good fit when you have diverse or exotic sources and destinations for your logs, because of the number of plugins.

Also, if most of the sources are custom applications, you may find it easier to work with fluent libraries than coupling a logging library with a log shipper. Especially if your applications are written in multiple languages – meaning you’d use multiple logging libraries, which may behave differently.

To summarize the differences between Logstash and Fluentd:

|  | Logstash | Fluentd |
| --- | --- | --- |
| Resource usage | high | low |
| Variables | yes | no |
| Inputs | many | many |
| Outputs | many | many |
| Queues | memory, disk; for filters and outputs | memory, disk; for outputs |
| Libraries | nothing specific | many |

Some honorable mentions

There are some technologies that are definitely worth mentioning in this conversation.

Without trying to be exhaustive, we’ll try to address the most important ones.

Logstash vs Apache Flume

Apache Flume’s architecture is different than that of most shippers described here. You have sources (inputs), channels (buffers) and sinks (outputs). Processing, such as parsing unstructured data, would be done preferably in outputs, to avoid pipeline backpressure.

The most interesting output is based on Morphlines, which can do processing like Logstash’s grok, but also send data to the likes of Solr and Elasticsearch. Unfortunately, the Morphlines Elasticsearch plugin hasn’t gotten much attention since its initial contribution (by our colleague Paweł, 4 years ago).

Logstash vs Splunk

Splunk isn’t a log shipper, it’s a commercial logging solution, so it doesn’t compare directly to Logstash.

To compare Logstash with Splunk, you’ll need to add at least Elasticsearch and Kibana to the mix, so you can have the complete ELK stack. Alternatively, you can point Logstash to Logsene, our logging service.

That said, there are two main differences between Splunk and ELK: one is that ELK is open-source, and the other is that Splunk tends to do a lot of query-time parsing. By contrast, in ELK you’d typically parse logs with Logstash to make them structured, and index them in Elasticsearch.

This trades disk space and indexing performance for query performance which, for large datasets, is usually a good trade-off.

Logstash vs Graylog

Graylog is another complete logging solution, an open-source alternative to Splunk.

It uses Elasticsearch as its storage backend. Its graylog-server component aims to do what Logstash does and more: everything goes through graylog-server, from authentication to queries. graylog-server also has pipeline definitions and buffering parameters, like Logstash and other log shippers mentioned here. Graylog is nice because you have a complete logging solution, but it’s going to be harder to customize than an ELK stack.

Conclusion: How does Logstash compare to these alternatives?

First of all, the conclusion is that you’re awesome for reading all the way to this point. If you did that, you get the nuances of an “it depends on your use-case” kind of answer.

All these shippers have their pros and cons, and ultimately it’s down to your specifications (and in practice, also to your personal preferences) to choose the one that works best for you.

If you need help deciding, integrating, or really any help with logging don’t be afraid to reach out – we offer Logging Consulting.


25 thoughts on “5 Logstash Alternatives”

  1. Nice posting but little bit off the topic. Logstash alternatives really mean living without Logstash and using something other.

    You could have mentioned logstash is unable to easily work in multitenant world. You need to tag incoming feed instead of really working as properly streaming. For me after 35 years of various streaming Logstash semantic is really weird – personally as an architect would have done it differently meaning when you need to aggregate streams then create a virtual channel.

    Also Logstash, even the latest one, has lots of bugs. It also does not really tell you what’s wrong with it. Try having a DNS problem and watch what Logstash does .. nothing. It’s also really hard to find out how to do things like dropping log4j – if it is configured improperly Logstash fails but does not tell you that either.

    Have the latest version and noticed at least 5 to 10 various annoying bugs.

    Is the only way to code own *proper* version of Logstash? Personally do not understand why it needs 1GB+ while have done mjpeg proxy and had 200 clients needed less than 100MB and was real time when used early Xeons decade+ ago.

    1. Thanks for sharing your experience with Logstash. For sure we couldn’t have covered everything; any additions here in the comments section are welcome.

      The feature overlap isn’t perfect between Logstash and the others but then that’s never the case for competitors 🙂 I think for most use-cases you can pick any of the other five. Which I think answers your question on whether the only way is to code your own Logstash. Of course you can do that, it’s just not trivial.

      The 1GB+ memory requirement is mostly because Logstash runs in a JVM. High-traffic installations of any shipper will likely need significant memory for buffering.

    1. @metadaddy – yes, we’ve looked at it in its early days. We wanted to make sure people can use it to ship data to Logsene, but if I recall correctly StreamSets used only ES TransportClient. I seem to recall trying to explain using the HTTP API would be better. Maybe that is the case now that ES 5.x is out with a nice Java client that uses the HTTP API?

    1. Thanks, Mathew! It was an interesting read. Indeed there are so many options, Mozilla Heka is another one… We’d love it if someone (or we, in the future) will add more (hopefully as objective as possible) reviews of shippers not included in this post.

  2. Very thorough list, when working with rsyslogd recently I was presently surprised by all the possibilities, although the configuration syntax is anything but self explanatory.

    Now if only there were an alternative to ElasticSearch that doesn’t suffer the Garbage Collection death spiral 😉

    1. We’ve had relatively good experience with rsyslog, though we hit a couple of rough bugs. Re ES … it’s not *so* bad :). If you need help with ES, there are lots of ES resources on this very blog, there are classes, etc. And if you like ES for logs, but don’t want to manage ES, there is always Logsene –

  3. Another important point worth mentioning is that all those tools are OSS. Therefore, it’s of paramount importance to compare the communities behind each product. I’d be happy to share my experiences with some of the mentioned products in order to enrich this post.

    1. Hi Fabien,

      I actually did mention that all these tools are OSS – though there are so many important things it’s hard to figure out which should be emphasized 🙂

      I’d love to hear your experiences – I obviously have mine though I’ve avoided to share them because they are subjective by definition. Even though if we can collect enough such experiences, we can extract some “more objective” ideas 🙂

  4. There is a typo: “[syslog-ng] recently received memory buffers”
    That should read “disk buffers”.
    Memory buffers have been there for ages

  5. The biggest weakness of Logagent in my opinion is it’s written in Node. This makes it very difficult to support operationally. Much prefer things written in Go, I can drop the binary anywhere, even on appliances, and not have to worry about dependencies.

    1. That’s interesting to hear. Here at Sematext we rarely hear this sort of comment and we see a lot of people loving Logagent because it just works. But yes, the good point about runtime vs. binary.

  6. This is a great post. Very helpful as usual. I use rsyslog, logstash, python, kafka, and redis in the care and feeding of my 40TB elasticsearch cluster. One tool I really appreciate and take for granted is nxlog. I use it to scrape windows events off a central event collector and forward to my rsyslog frontend at about 500 msgs/s. While I can attest to the high performance of rsyslog, nxlog is no slouch and may be more approachable and more easily configurable for newbies. I do not have experience with direct nxlog -> elasticsearch but here’s some info:

    Hope that helps. Thanks so much for the post and your previous writings! We do similar things and I steal your ideas where we don’t. 🙂

    1. Thanks for your very informative comment! I must admit I haven’t been working with nxlog enough so it didn’t get to the top 5, but it sounds like that should change 🙂 Windows event log collection is definitely a big plus in a mixed environment (like many deployments have).

  7. First of all, thank you for mentioning syslog-ng! Also, based on many discussions with syslog-ng users, I’d like to say that the figure you show about syslog-ng is not a typical use case.

    Generally people use syslog-ng clients to collect log messages and to send it to a central syslog-ng server. This central server runs message parsing using patterndb or any of the other parsers and sends the results directly to the Elasticsearch servers using the Java-based Elasticsearch driver of syslog-ng.

    In larger installations relays do the parsing and send the results to a cluster of syslog-ng servers which send logs to Elasticsearch.

    What I hear most often from our users, that they replace Logstash with syslog-ng due to resource usage. Even if the actual Elasticsearch driver is in Java, all of the collection and processing is done in efficient C code. It’s also an advantage, that the same application can be used everywhere: on the client, relay and server. There is no
    need for external queuing / buffering, like redis or kafka, which makes the whole architecture a lot more simple and easier to manage.

    1. Thanks a lot for your valuable comment, Peter! The typical use-case I was pointing to was actually the resource-tight, yet complex processing one. The figure is merely pointing to how I’d use syslog-ng in a more complex pipeline that involves Kafka (since syslog-ng recently gained a Kafka destination). I thought “that requires a figure” 🙂

      That said, it does indeed make sense that processing is typically done on central syslog-ng boxes (or relays – for large installs). Because it’s often that you can’t make significant changes (e.g. upgrade syslog-ng) on the logging boxes. Especially if those are routers, now I realize it’s a common case for our clients as well.
