Centralizing logs to Elasticsearch? Of course, the first log shipper that comes to mind is Logstash. Once you get into it, though, you realize that centralizing logs often implies a bunch of things, and Logstash isn’t the only log shipper that fits the bill. We’ve listed and explained 5 “alternative” log shippers (Filebeat, Fluentd, rsyslog, syslog-ng and Logagent), so you know which one fits which use case.
Today we’ll talk about rsyslog. We’ve just published a free rsyslog eBook that tackles data collection and parsing using rsyslog, aimed mostly at the centralizing-logs-to-Elasticsearch use case. The eBook is divided into eight sections:
- Rsyslog install/upgrade and rsyslog.conf. Here we make sure you’re all set up with a recent version of rsyslog, and explain why the rsyslog.conf shipped with your distro can look confusing 🙂
- Rsyslog plugins: main input modules and their configurations. This describes the general flow of data through rsyslog and the available data sources.
- Message modifiers: using mmnormalize to parse unstructured data in a scalable way. If you’ve used Logstash’s grok, mmnormalize is less flexible, but a whole lot faster (rsyslog has a grok module as well).
- Output modules. Mainly how to write data to Elasticsearch, but we cover formatting messages in general, so you can write them to files or other network-based destinations.
- Tuning queues, workers and batch sizes. Ever heard of rsyslog boxes processing millions of messages per second? It’s not a myth or voodoo magic. You can do that, too, with some proper tuning. Here you’ll also learn how rsyslog can persist queues on disk, with or without memory buffers.
- Using rulesets to manage multiple data flows. For example, you may want local syslog processed separately from application logs, with different performance, priority and/or delivery guarantees.
- RainerScript: variables, conditionals, loops and lookup tables. Besides parsing, you can modify and enrich data in many ways. Here we’ll explore how.
- Pipeline patterns when sending data to Elasticsearch. We’re wrapping up the eBook by showing various architecture options for a logging pipeline that includes rsyslog, along with the likes of Kafka and Logstash.
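To give a flavor of what the input-modules section covers, here’s a minimal sketch of wiring up rsyslog’s main data sources (the TCP port is an example, not from the eBook):

```
# Load input modules: local syslog socket, kernel logs, and a TCP listener
module(load="imuxsock")   # local system logs via /dev/log
module(load="imklog")     # kernel logs
module(load="imtcp")      # syslog over TCP

# Listen for remote syslog messages (port is an example)
input(type="imtcp" port="514")
```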
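For the parsing section, a typical mmnormalize setup looks like this. The rulebase path and the sample rule are illustrative assumptions, not taken from the eBook:

```
module(load="mmnormalize")

# Parse incoming messages against a liblognorm rulebase.
# /etc/rsyslog.d/app.rb (example path) might contain a rule like:
#   rule=:%user:word% logged in from %ip:ipv4%
action(type="mmnormalize" rulebase="/etc/rsyslog.d/app.rb")

# mmnormalize sets $parsesuccess; route unparsed messages separately
if $parsesuccess == "FAIL" then {
    action(type="omfile" file="/var/log/unparsed.log")
}
```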
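The output and tuning sections come together in an omelasticsearch action. The sketch below is a rough example (server address, index name, field names and queue sizes are assumptions you’d tune for your own setup); it shows bulk indexing plus a disk-assisted queue with multiple workers:

```
module(load="omelasticsearch")

# JSON template for the Elasticsearch document (field names are examples)
template(name="es-json" type="list") {
    constant(value="{\"@timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"host\":\"")     property(name="hostname")
    constant(value="\",\"message\":\"")  property(name="msg" format="json")
    constant(value="\"}")
}

# Bulk-index into Elasticsearch; the in-memory queue spills to disk
# (queue.filename) so messages survive restarts and backpressure
action(type="omelasticsearch"
       server="localhost" serverport="9200"
       searchIndex="logs" template="es-json"
       bulkmode="on"
       queue.type="linkedlist"
       queue.filename="es-queue"
       queue.dequeuebatchsize="1000"
       queue.workerthreads="4")
```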
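Finally, rulesets and RainerScript can be combined to keep application logs on a separate flow from local syslog. The port, variable name and file paths below are examples for illustration:

```
module(load="imtcp")

# Application logs arriving on TCP 10514 get their own ruleset,
# independent of the default flow for local syslog
input(type="imtcp" port="10514" ruleset="apps")

ruleset(name="apps") {
    # RainerScript: set a variable and branch on message content
    set $!environment = "production";
    if $msg contains "error" then {
        action(type="omfile" file="/var/log/app-errors.log")
    } else {
        action(type="omfile" file="/var/log/app.log")
    }
}
```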
Free eBook: Centralized Logging with Rsyslog
Evaluating rsyslog for a log management project? This eBook covers all you need to know about collecting and parsing data using rsyslog. You’ll find how-to instructions, code samples, structured logging with rsyslog and Elasticsearch, and more.
We hope you’ll find this eBook useful. If you want to learn more, you’d be more than welcome in our Elasticsearch training classes (we have a module on logging, where you’ll get in-depth knowledge of Kibana, Logstash and Beats). We can also assist you in building your logging project via logging consulting, and help you put out fires once it’s deployed via production support.
Alternatively, if you don’t actually want to maintain your own ELK stack, or want to get off Splunk, Sematext Cloud is free to play with, and it frees you from managing your own Elasticsearch cluster. And yes, of course, you can use rsyslog to ship logs to Sematext Cloud.