Elastic Stack Training

Elasticsearch Training, San Francisco & New York, October

If you are using Elasticsearch and are looking for Elasticsearch training to quickly improve your Elastic Stack skills, we’re running several Elasticsearch classes this October in San Francisco and New York.

All classes are also available virtually. This means you get to participate in the class, see the whiteboard, see and hear the instructor as well as other attendees, and they get to see and hear you – all without you having to travel.

Have two people attend the training from the same company? The second one gets 25% off.

To see the full course details, outline, and information about the instructor, click on the class names below.

San Francisco:
[class schedule: solr-elasticsearch-training-fall-2016]

New York City:
[class schedule: ny-solr-elasticsearch-training-fall-2016]

All classes include breakfast and lunch. If you have special dietary needs, please let us know ahead of time. If you have any questions about any of the classes you can use our live chat (see bottom-right of the page), email us at training@sematext.com or call us at 1-347-480-1610.

Elasticsearch / Elastic Stack Training – NYC June 13-16

Next month, June 13-16, 2016, we will be running three Elastic Stack (aka ELK Stack) classes in New York City:

  1. June 13 & 14: Elasticsearch for Developers Training Workshop
  2. June 15: Elasticsearch Operations Training Workshop
  3. June 16: Elasticsearch for Logging Training Workshop

All classes cover Elasticsearch 2.x as well as Elasticsearch 5.x!

You can see the complete course outlines under Training Overview.  All three classes include lots of valuable hands-on exercises.  Be prepared to learn a lot!

Cost:

  • 2-day course: $1,200 early bird rate (valid through June 1) and $1,500 afterwards.
  • 1-day course: $700 early bird rate (valid through June 1) and $800 afterwards.

There’s also a 50% discount for the purchase of a 2nd seat!

Location:
462 7th Avenue, New York, NY 10018 – see map

If you have any questions please get in touch.

Monitoring rsyslog with Kibana and SPM

A while ago we published this post where we explained how you can get stats about rsyslog, such as the number of messages enqueued, the number of output errors and so on. The point was to send them to Elasticsearch (or Logsene, our logging SaaS, which exposes the Elasticsearch API) in order to analyze them.

This is part 2 of that story, where we share how we process these stats in production. We’ll cover:

  • an updated config, working with Elasticsearch 2.x
  • what Kibana dashboards we have in Logsene to get an overview of what rsyslog is doing
  • how we send some of these metrics to SPM as well, in order to set up alerts on their values: both threshold-based alerts and anomaly detection


AWS CloudWatch / VPC Logs to Logsene


Use-case: you’re using AWS VPC and want visibility into the connections to your VPC: which IPs are allowed or denied connections to certain ports, how much traffic goes through each connection, and so on.

Solution: send AWS VPC logs (one type of CloudWatch logs) to a Logsene application. There, you can search these logs, visualize them and set up alerts. This post will show you how to forward VPC logs (any CloudWatch logs, for that matter) to Logsene using an AWS Lambda function.

The main steps for implementing the solution are:

  1. Create a Flow Log for your VPC, if there isn’t one already. This will send your AWS VPC logs to CloudWatch
  2. Create a new Lambda Function, which will parse and forward the CloudWatch/VPC logs
  3. Clone this GitHub repo and fill in your Logsene Application Token, create a ZIP file with the contents of the cloned repository, and configure the new Lambda function to use the created ZIP file as code
  4. Decide on the maximum memory to allocate for this function and the timeout for its execution
  5. Explore your logs in Logsene 🙂

Create a Flow Log

To start, log in to your AWS Console, then go to Services -> VPC. There, select your VPC, right-click it and select Create Flow Log:
[screenshot: creating a Flow Log]

Then you’ll need to set up an IAM role that’s able to push VPC logs to your CloudWatch account (if you don’t have one already) and then choose a name for this flow. You’ll use this name later on in the Lambda function.
[screenshot: Flow Log settings]
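If you’d rather script this step than click through the console, the AWS CLI can create the same Flow Log (a sketch; the VPC ID, log group name, and role ARN below are placeholders):

# create a Flow Log for an existing VPC; logs go to the given CloudWatch log group
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-1a2b3c4d \
  --traffic-type ALL \
  --log-group-name my-vpc-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-role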

Create a new AWS Lambda function

Now go to Services -> Lambda and get started with a new function. The first step is to select a blueprint for your function; take cloudwatch-logs-process-data:

[screenshot: selecting the cloudwatch-logs-process-data blueprint]

The next step is to select a source. Make sure the source type is CloudWatch Logs and select the flow you just created. You can filter for specific logs, but you’d normally leave the Filter Pattern empty to process all of them. Either way, you need to give this filter a name:

[screenshot: configuring the CloudWatch Logs source]

At the next step, you’d configure the function itself. First you give it a name:

[screenshot: naming the function]

Then you have to specify the code:

Add the code to your Lambda function

First you’d need to clone the GitHub repository:

git clone git@github.com:sematext/logsene-aws-lambda-cloudwatch.git

Then, open index.js and fill in your Logsene application token in the logseneToken variable. To find the Logsene Application Token, go to your Sematext Account, then in the Services menu select Logsene, and then the Logsene application you want to send your logs to. Once you’re in that application, click the Integration button and you’ll see the application token:
[screenshot: finding the Logsene application token]

Now your code is ready, so you need to make a zip file out of it. Note: make sure you zip only the contents of the repository, not the directory containing the repository. For example:

pwd                   # /tmp/cloned-repos/logsene-aws-lambda-cloudwatch
zip -r logsene.zip *

Finally, you’d upload the zip to AWS Lambda as the function code:
[screenshot: uploading the zip as the function code]

Finalize the function configuration

After uploading the code, leave the handler at the default index.handler and select a role that allows this function to execute. You can create a new Basic execution role to do that (from the drop-down) or select a basic execution role that you’ve already created:
[screenshot: selecting the execution role]

Then, you need to decide how much memory to allow the function and how long to let it run. This depends on the log throughput (more logs need more processing resources) and will influence costs (roughly like keeping an equivalent general-purpose instance up for that time). Normally, runtime is very short, so even large resource allocations shouldn’t generate significant costs. 256MB of memory and a 30-second timeout should be enough for most use-cases:
[screenshot: memory and timeout settings]
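The same settings can also be applied from the AWS CLI (a sketch; the function name is whatever you chose earlier):

# set the memory ceiling and execution timeout for the forwarder function
aws lambda update-function-configuration \
  --function-name my-logsene-forwarder \
  --memory-size 256 \
  --timeout 30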

To enable the function to run when new logs come in, you’d need to enable the source with your Flow Log name at the last step.
enable

Exploring VPC logs with Logsene

As logs get generated by VPC, the function should upload their contents to Logsene. You can use the native UI to explore those logs:

[screenshot: exploring VPC logs in the Logsene UI]

And because VPC logs get parsed out of the box, you can also use Kibana 4 to generate visualizations. Like breaking down connections by the number of bytes:

[screenshot: Kibana 4 visualization of connections by bytes]

Happy Logsene-ing! 🙂

1-Click ELK Stack: Hosted Kibana 4

We just pushed a new release of Logsene to production, including 1-Click Access to Kibana 4!

Did you know that Logsene provides a complete ELK Stack? Logsene’s indexing and search API is compatible with the Elasticsearch API.  That’s why it is very easy to use Logsene – you can use the existing Logstash Elasticsearch output, point it to Logsene for indexing, and then you can use Kibana and point it to Logsene like it’s your local Elasticsearch cluster.  And not only is this process easy, but Logsene actually adds more functionality to the bare “ELK” stack!  In fact, here is a long list of features the open-source ELK stack just doesn’t have, such as:

  • User Authentication and User Roles
  • Secured communication (TLS/HTTPS)
  • App Sharing: access control for each Logsene App, aka Index
  • Account Sharing: share resources, not passwords
  • Syslog receiver – no need to run Logstash just for forwarding server logs
  • Anomaly detection and Alerts for logs or any indexed data!

Let’s take a look at the Kibana 4 integration. You’ll find the “Kibana 4” button in the Logsene App Overview. Simply click on it and Kibana 4 will load the data from your Logsene App.

[screenshot: the Kibana 4 button in the Logsene App Overview]

Kibana 4 automatically shows the “Discover” view and doesn’t require any setup – Logsene does everything for you! This means you can immediately start to build Queries, Visualizations, and Dashboards!

[screenshot: Kibana 4 Discover View – displaying data stored in Logsene]
[screenshot: Simple Demo Dashboard – try it here!]

If you prefer to run your own Kibana and point it to Logsene, you can still do that; we show how in How to use Kibana 4 with Logsene.

If you don’t want to run and manage your own Elasticsearch cluster but would like to use Kibana for log and data analysis, then give Logsene a quick try by registering here – we do all the backend heavy lifting so you can focus on what you want to get out of your data and not on infrastructure.  There’s no commitment and no credit card required.  And, if you are a young startup, a small or non-profit organization, or an educational institution, ask us for a discount (see special pricing)!

We are happy to answer questions or receive feedback – please drop us a line or get us @sematext.

Monitoring Kibana 4's Node.js App

The release of Kibana 4.x has had an impact on monitoring and other related activities.  In this post we’re going to get specific and show you how to add Node.js monitoring to the Kibana 4 server app.  Why Node.js?  Because Kibana 4 now comes with a little Node.js server app that sits between the Kibana UI and the Elasticsearch backend.  Conveniently, you can monitor Node.js apps with SPM, which means SPM can monitor Kibana in addition to monitoring Elasticsearch.  Furthermore, Logstash can also be monitored with SPM, which means you can use SPM to monitor your whole ELK Stack!  But, I digress…

A few important things to note first:

  • the Kibana 4 project moved from Ruby, to a pure browser app, to Node.js on the server side, as mentioned above
  • it now uses the popular Express Web Framework
  • the server component has a built-in proxy to Elasticsearch, just like it did with the Ruby app
  • when monitoring Kibana 4, the proxy requests to Elasticsearch are monitored at the same time

OK, here’s how to add Node.js monitoring to the Kibana 4 server-side app.

1) Preparation

Get an App Token for SPM by creating a new Node.js SPM App in SPM.

Kibana 4 currently ships with Node.js version 0.10.35 in a subdirectory – so please make sure your Node.js is on 0.10 while installing SPM Agent for Node.js (it compiles native modules, which need to match Kibana’s 0.10 runtime).

  npm install -g n
  n 0.10.35

After finishing the installation described below, you can easily switch back to 0.12 or io.js 2.0 by using “n 0.12” or “n io 2.0” – Kibana will keep using its own Node.js from its subdirectory.

2) Install SPM Agent for Node.js

Switch over to your Kibana 4 installation directory.  It has a “src” folder where the Node.js modules are installed.

  cd src
  npm install spm-agent-nodejs

Add the following line to ./src/app.js

  var spmAgent = require('spm-agent-nodejs')

Add the following line at the beginning of the bin/kibana shell script:

export spmagent_tokens__spm=YOUR-SPM-APP-TOKEN

3) Run Kibana

bin/kibana

4) Check results in SPM

After a minute you should see performance metrics such as EventLoop latencies, memory usage, garbage collection details and HTTP statistics of your Kibana 4 server app in SPM.

[screenshot: Kibana 4 – monitored with SPM for Node.js]

SPM for Node.js Monitoring – Details, Screenshots and more

For more specific details about SPM’s Node.js monitoring integration, check out this blog post.

That’s all there is to it!  If you’ve got questions or feedback on this post, please let us know!

How to use Kibana 4 with Logsene Log Management

Did you know that Logsene provides a complete ELK Stack; i.e., a complete log management, analytics, exploration, and visualization solution? Logsene currently supports Kibana 3, with complete Kibana 4 support coming soon.

Can’t wait to use Kibana 4 with Logsene? No problem – part of the integration is already done and we’ve prepared instructions to run your own Kibana 4 with Logsene:

  • Open the Kibana 4 configuration file config/kibana.yml and add the Logsene server and the Kibana index:
    elasticsearch_url: "https://logsene-receiver.sematext.com"
    kibana_index: "LOGSENE_TOKEN_kibana"
  • Start Kibana 4 (./bin/kibana) and open http://localhost:5601 in your web browser – Kibana 4 asks for an index pattern. Here you need to enter the Logsene token and a daily date pattern separated by an underscore:
    [YOUR-LOGSENE-TOKEN_]YYYY-MM-DD
  • Now you are ready to set up your visualizations and dashboards in Kibana 4:
[screenshot: Kibana 4 Dashboard for data stored in Logsene]

Perhaps you prefer to automate these tasks? We have prepared that for you, too.

That’s all there is to it.  Like what you see here?  Sound like something that could benefit your organization?  Then try Logsene for Free by registering here.  There’s no commitment and no credit card required.  And, if you are a young startup, a small or non-profit organization, or an educational institution, ask us for a discount (see special pricing)!

We are happy to answer questions or receive feedback – please drop us a line or get us @sematext.

Monitoring rsyslog's Performance with impstats and Elasticsearch

If you’re using rsyslog for processing lots of logs (and, as we’ve shown before, rsyslog is good at processing lots of logs), you’re probably interested in monitoring it. To do that, you can use impstats, whose name comes from input module for periodic statistics. impstats produces information like:
– input stats, like how many events went through each input
– queue stats, like the maximum size of a queue
– action (output or message modification) stats, like how many events were forwarded by each action
– general stats, like CPU time or memory usage
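Loading the module is a single directive in the rsyslog config; a minimal sketch (the interval and format values here are our choices, with parameter names as in recent rsyslog versions):

# emit rsyslog's own counters every 60 seconds, as deltas, in JSON
module(load="impstats" interval="60" resetCounters="on" format="json")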

In this post, we’ll show you how to send those stats to Elasticsearch (or Logsene — our log analytics service, essentially hosted ELK, which exposes the Elasticsearch API), where you can explore them with a nice UI, like Kibana. For example, you can get the number of logs going through each input/output per hour:
[screenshot: Kibana graph of logs per input/output per hour]
More precisely, we’ll look at:
– the useful options around impstats
– how to use those stats and what they’re about
– how to ship stats to Elasticsearch/Logsene by using rsyslog’s Elasticsearch output
– how to do this shipping in a fast and reliable way. This will apply to most rsyslog use-cases, not only impstats


Using Elasticsearch Mapping Types to Handle Different JSON Logs

By default, Elasticsearch does a good job of figuring out the type of data in each field of your logs. But if you like your logs structured like we do, you probably want more control over how they’re indexed: is time_elapsed an integer or a float? Do you want your tags analyzed so you can search for big in big data? Or do you need it not_analyzed, so you can show top tags via the terms aggregation? Or maybe both?
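If the answer is “both”, Elasticsearch multi-fields let you index the same value in two ways. A sketch of the relevant piece of a mapping (the raw sub-field name is just a common convention, not required):

"tags" : {
 "type" : "string",
 "fields" : {
  "raw" : { "type" : "string", "index" : "not_analyzed" }
 }
}

You would then search against tags and run the terms aggregation on tags.raw.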

In this post, we’ll look at how to use index templates to manage multiple types of logs across multiple indices. Also, we’ll explain how to use logging tools (such as Logstash and rsyslog) to handle JSON logging and specify types.

Elasticsearch Mapping and Logs

As you may already know, to control these things in Elasticsearch you’ll need to define a mapping. This works similarly in Logsene, our log analytics SaaS, because it uses Elasticsearch and exposes its API.

With logs you’ll probably use time-based indices, because they scale better (in Logsene, for instance, you get daily indices). That said, to make sure the mapping you define today applies to the index you create tomorrow, you need to define it in an index template.

Managing Multiple Types

Mappings provide a nice abstraction when you have to deal with multiple types of structured data. Let’s say you have two apps generating logs of different structures: both have a timestamp field, but one recording logins has a user field, and another one recording purchases has an amount field.

To deal with this, you can define the timestamp field in the _default_ mapping which applies to all types. Then, in each type’s own mapping we’ll define fields unique to that mapping. The following snippet is an example that works with Logsene, provided that aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee is your Logsene app token. If you roll your own Elasticsearch, you can use whichever name you want, and make sure the template applies to your index pattern.

curl -XPUT 'logsene-receiver.sematext.com/_template/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee_MyTemplate' -d '{
 "template" : "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee*",
 "order" : 21,
 "mappings" : {
  "_default_" : {
   "properties" : {
    "timestamp" : { "type" : "date" }
   }
  },
  "firstapp" : {
   "properties" : {
    "user" : { "type" : "string" }
   }
  },
  "secondapp" : {
   "properties" : {
    "amount" : { "type" : "long" }
   }
  }
 }
}'

Sending JSON Logs to Specific Types

When you send a document to Elasticsearch by using the API, you have to provide an index and a type. You can use an Elasticsearch client for your preferred language to log directly to Elasticsearch or Logsene this way. But I wouldn’t recommend this, because then you’d have to manage things like buffering if the destination is unreachable.
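For illustration, such a direct call might look like this (a sketch reusing the aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee token and the firstapp type from the template above; the document body is made up):

curl -XPOST 'logsene-receiver.sematext.com/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee/firstapp' -d '{
 "timestamp" : "2015-01-20T10:00:00Z",
 "user" : "jane"
}'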

Instead, I’d keep my logging simple and use a specialized logging tool, such as Logstash or rsyslog to do the hard work for me. Logging to a file is usually the easiest option. It’s local, and you can have your logging tool tail the file and send contents over the network. I usually prefer sockets (like syslog) because they let me configure Logstash/rsyslog to:
– write events in a human format to a local file I can tail if I need to (usually in development)
– forward logs without hitting disk if I need to (usually in production)
Whatever you prefer, I think writing to local files or sockets is better than sending logs over the network from your application. Unless you’re willing to do a reliability trade-off and use UDP, which gets rid of most complexities.

Opinions aside, here’s a Logstash configuration that tails a file of newline-separated JSON logs and sends those documents to Logsene via the Elasticsearch API:

input {
  file {
    path => "/var/log/test"
    codec => "json"
  }
}

output {
  elasticsearch {
    host => "logsene-receiver.sematext.com"
    port => 80
    index => "LOGSENE-APP-TOKEN-GOES-HERE"
    index_type => "fileapp"
    protocol => "http"
    manage_template => false
  }
}

Note how the JSON codec does the parsing here, instead of the more expensive and maintenance-heavy approach with grok that we’ve shown in an earlier post on getting started with Logstash. Some applications let you configure the log format, so you can make them write JSON (Apache httpd, for example).
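For example, Apache httpd can write its access log as JSON directly (a sketch; the JSON field names and file locations here are our own choice, not a standard):

# each access log line becomes a JSON document; %D is the request
# duration in microseconds, so time_elapsed gets indexed as a number
LogFormat "{ \"timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \"client\": \"%a\", \"request\": \"%r\", \"status\": %>s, \"time_elapsed\": %D }" json_log
CustomLog "logs/access_json.log" json_log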

If you want to send JSON over syslog, there’s the JSON-over-syslog (CEE) format that we detailed in a previous post. You can use rsyslog’s JSON parser module to take your structured logs and forward them to Logsene:

module(load="imuxsock")        # can listen to local syslog socket
module(load="omelasticsearch") # can forward to Elasticsearch
module(load="mmjsonparse")     # can parse JSON

action(type="mmjsonparse")  # parse CEE-formatted messages

template(name="syslog-cee" type="list") {  # Elasticsearch documents will contain
  property(name="$!all-json")              # all JSON fields that were parsed
}

action(
  type="omelasticsearch"
  template="syslog-cee"                     # use the template defined earlier
  server="logsene-receiver.sematext.com"
  serverport="80"
  searchType="syslogapp"
  searchIndex="LOGSENE-APP-TOKEN-GOES-HERE"
  bulkmode="on"                                # send logs in batches
  queue.dequeuebatchsize="1000"                # of up to 1000
  action.resumeretrycount="-1"    # retry indefinitely (buffer) if destination is unreachable
)

To send a CEE-formatted syslog message, you can run logger '@cee: {"amount": 50}' for example. Rsyslog would forward this JSON to Elasticsearch or Logsene via HTTP. Note that Logsene also supports CEE-formatted JSON over syslog out of the box if you want to use a syslog protocol instead of the Elasticsearch API.

Filtering by Type

Once your logs are in, you can filter them by type (via the _type field) in Kibana:
[screenshot: Type Filtering with Kibana]
However, if you want more refined filtering by source, we suggest using a separate field for storing the application name. This can be useful when you have different applications using the same logging format. For example, both crond and postfix use plain syslog.
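With Logstash, for instance, such a field could be added with a mutate filter (a sketch; the application field name and its value are our own choice):

filter {
  mutate {
    # tag every event with the name of the application that produced it
    add_field => { "application" => "crond" }
  }
}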

If you’re looking for a place to send your logs to, check out Logsene!

Parsing and Centralizing Elasticsearch Logs with Logstash

NOTE: this configuration was tested with Logstash 2.2 on logs generated by Elasticsearch 2.2

No, it’s not an endless loop waiting to happen; the plan here is to use Logstash to parse Elasticsearch logs and send them to another Elasticsearch cluster or to a log analytics service like Logsene (which conveniently exposes the Elasticsearch API, so you can use it without having to run and manage your own Elasticsearch cluster).

If you’re looking for some ELK stack intro and you think you’re in the wrong place, try our 5-minute Logstash tutorial. Still, if you have non-trivial amounts of data, you might end up here again. Because you’ll probably need to centralize Elasticsearch logs for the same reasons you centralize other logs:

  • to avoid SSH-ing into each server to figure out why something went wrong
  • to better understand issues such as slow indexing or searching (via slowlogs, for instance)
  • to search quickly in big logs

In this post, we’ll describe how to use Logstash’s file input to tail the main Elasticsearch log and the slowlogs. We’ll use grok and other filters to parse different parts of those logs into their own fields and we’ll send the resulting structured events to Logsene/Elasticsearch via the elasticsearch output. In the end, you’ll be able to do things like slowlog slicing and dicing with Kibana:

[screenshot: slicing and dicing slowlogs with Kibana]

TL;DR note: scroll down to the FAQ section for the whole config with comments.
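As a taste of the approach, Logstash’s file input can use the multiline codec to stitch stack traces back into a single event before grok parses it (a sketch; the path and pattern are assumptions, and the full config differs in details):

input {
  file {
    path => "/var/log/elasticsearch/*.log"
    # a new event starts with a "[timestamp]"; any other line (e.g. part
    # of a Java stack trace) gets appended to the previous event
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}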
