Elasticsearch security: Authentication, Encryption, Backup

July 16, 2019

There’s no need to look outside the ELK Stack for tools to ensure data protection.

Basic Elasticsearch Security features are free and include a lot of functionality to help you prevent unauthorized access, preserve data integrity by encrypting communication between nodes, and maintain an audit trail on who did what to your stack and with the data it stores. From authentication to encryption and backup, Elasticsearch security covers everything that’s needed to safeguard your cluster.

When dealing with security breaches, there is a general plan of action. In this post we’re going to show you how to work your way through it and secure your Elastic Stack by using a few simple and free prevention methods:

Let’s start with the general attack ⇒ countermeasures:

  • Port scanning ⇒ minimize exposure:
    • Don’t use the default port 9200
    • Don’t expose Elasticsearch to the public Internet – put Elasticsearch behind a firewall
    • Don’t forget the Kibana port
  • Data theft ⇒ secure access:
    • Lock down the HTTP API with authentication
    • Encrypt communication with SSL/TLS
  • Data deletion ⇒ set up backup:
    • Back up your data
  • Log file manipulation ⇒ log auditing and alerting:
    • Hackers might manipulate or delete system log files to cover their tracks. Sending logs to a remote destination increases the chances of discovering intrusion early.

6 Steps to secure Elasticsearch:

From port scanning to log file manipulation, let’s drill into each of the above items with step-by-step actions to secure Elasticsearch:

1. Lock Down Open Ports

Firewall: Close the public ports

The first action should be to close the relevant ports to the Internet:

iptables -A INPUT -i eth0 -p tcp --destination-port 9200 -s {PUBLIC-IP-ADDRESS-HERE} -j DROP

iptables -A INPUT -i eth0 -p tcp --destination-port 9300 -s {PUBLIC-IP-ADDRESS-HERE} -j DROP

If you run Kibana, note that the Kibana server acts as a proxy to Elasticsearch and thus needs its port closed as well:

iptables -A INPUT -i eth0 -p tcp --destination-port 5601 -s {PUBLIC-IP-ADDRESS-HERE} -j DROP

After this you can relax a bit! Elasticsearch won’t be reachable from the Internet anymore.
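
To confirm the rules are actually in place, you can list the INPUT chain:

iptables -L INPUT -n --line-numbers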

Bind Elasticsearch ports only to private IP addresses

Change the configuration in elasticsearch.yml to bind only to private IP addresses or, for single-node instances, to the loopback interface:

network.bind_host: 127.0.0.1
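
For a multi-node cluster, the loopback interface is not enough, because the nodes need to reach each other over the network. A minimal sketch binding to the node’s private address instead (10.0.0.5 is a placeholder for your private IP):

network.bind_host: 10.0.0.5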

2. Add private networking between Elasticsearch and client services

If Elasticsearch is bound to the loopback interface or a private IP address, client services running on other machines still need a way to reach it. A quick and simple option is an SSH tunnel that forwards a local port to the remote Elasticsearch server:

ssh -Nf -L 9200:localhost:9200 user@remote-elasticsearch-server

You can then access Elasticsearch through the SSH tunnel from client machines, e.g.

curl http://localhost:9200/_search

3. Set up authentication and SSL/TLS with Nginx

There are several open-source and free solutions that provide Elasticsearch access authentication, but if you want something quick and simple, here is how to do it yourself with just Nginx:

Generate password file

printf "esuser:$(openssl passwd -crypt MySecret)\n" > /etc/nginx/passwords

Generate a self-signed SSL certificate if you don’t have an official one:

sudo mkdir /etc/nginx/ssl

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt

Add the proxy configuration with SSL to /etc/nginx/nginx.conf and activate basic authentication (note that we expect the SSL certificate and key file in /etc/nginx/ssl/). Example:

# define the proxy upstream to Elasticsearch via the loopback interface
http {
  upstream elasticsearch {
    server 127.0.0.1:9200;
  }

  server {
    # enable TLS
    listen 0.0.0.0:443 ssl;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_session_timeout 5m;
    ssl_ciphers "HIGH:!aNULL:!MD5:!3DES";

    # proxy for Elasticsearch
    location / {
      auth_basic "Login";
      auth_basic_user_file /etc/nginx/passwords;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_set_header X-NginX-Proxy true;
      # use the upstream defined above with the name "elasticsearch"
      proxy_pass http://elasticsearch/;
      proxy_redirect off;
      # answer CORS preflight requests directly
      if ($request_method = OPTIONS) {
        add_header Access-Control-Allow-Origin "*";
        add_header Access-Control-Allow-Methods "GET, POST, PUT, OPTIONS";
        add_header Access-Control-Allow-Headers "Content-Type,Accept,Authorization,x-requested-with";
        add_header Access-Control-Allow-Credentials "true";
        add_header Content-Length 0;
        add_header Content-Type application/json;
        return 200;
      }
    }
  }
}

Restart Nginx and try to access Elasticsearch via https://localhost/_search.
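
For example, with the self-signed certificate and the esuser account created above, a quick test from the Nginx host could look like this (-k skips certificate verification and should only be used with self-signed certificates):

curl -k -u esuser:MySecret https://localhost/_search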

4. Install Free Security Plugins for Elasticsearch

Alternatively, you could install and configure one of the several free security plugins for Elasticsearch to enable authentication:

  • ReadonlyREST plugin for Elasticsearch is available on Github. It provides different types of authentication, from basic to LDAP, as well as index- and operation-level access control.
  • SearchGuard is a free security plugin for Elasticsearch including role-based access control and SSL/TLS encrypted node-to-node communication. Additional enterprise features like LDAP authentication or JSON Web Token authentication are available and licensed per Elasticsearch cluster. See how simple it is to install and configure SearchGuard to secure an Elasticsearch and Kibana setup. Note that SearchGuard support is also included in some Sematext Elasticsearch Support Subscriptions.
  • Open Distro for Elasticsearch from AWS has free security plugins for Elasticsearch and Kibana. Open Distro provides a rich set of features to ensure data security and compliance with regulations such as GDPR, HIPAA, PCI, and ISO. Some of its main capabilities include using Kerberos or JSON tokens for single sign-on (SSO), encrypting data-in-transit, authenticating users against Active Directory, and monitoring and logging any malicious attempts. Read our complete guide to Open Distro for Elasticsearch to learn more.

5. Maintain an audit trail and set up alerts

As with any type of system holding sensitive data, you have to monitor it very closely.  This means not only monitoring its various metrics (whose sudden changes could be an early sign of trouble), but also watching its logs.  Concretely, in the recent Elasticsearch attacks, anyone who had alert rules that trigger when the number of documents in an index suddenly drops would have immediately been notified that something was going on. A number of monitoring vendors have Elasticsearch support, including Sematext (see Elasticsearch monitoring integration). 
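
Such an alert rule only needs to watch the per-index document count, which you can poll with the _cat API (myindex is a placeholder for your index name):

curl 'localhost:9200/_cat/count/myindex?v'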

Logs should be collected and shipped to a log management tool in real time, where alerting needs to be set up to watch for any anomalous or suspicious activity, among other things. The log management service can be on premises or it can be a third-party SaaS, like Sematext Cloud. Shipping logs off site has the advantage of preventing attackers from covering their tracks by changing the logs. Once logs are off site, attackers won’t be able to get to them. Alerting on metrics and logs means you will become aware of a security compromise early and can take appropriate actions to, hopefully, prevent further damage.
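
For example, assuming your servers use rsyslog, a single forwarding rule is enough to ship all syslog messages to a remote collector over TCP (logs.example.com:514 is a placeholder destination; put the line in a file such as /etc/rsyslog.d/90-forward.conf and restart rsyslog):

*.* @@logs.example.com:514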

6. Backup and restore data

A very handy tool to backup/restore or re-index data based on Elasticsearch queries is Elasticdump.

To back up complete indices, the Elasticsearch snapshot API is the right tool. The snapshot API provides operations to create and restore snapshots of whole indices, stored on a filesystem or in Amazon S3 buckets.

Let’s have a look at a few examples for Elasticdump and snapshot backups and recovery.

  1. Install elasticdump with the node package manager
    npm i elasticdump -g
  2. Back up by query to a gzipped file:
    elasticdump --input='http://username:password@localhost:9200/myindex' --searchBody '{"query" : {"range" :{"timestamp" : {"lte": 1483228800000}}}}' --output=$ --limit=1000 | gzip > /backups/myindex.gz
  3. Restore from a gzipped file:
    zcat /backups/myindex.gz | elasticdump --input=$ --output=http://username:password@localhost:9200/index_name
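
Elasticdump can also copy an index directly from one cluster to another, which is handy for ad-hoc re-indexing. A minimal sketch, assuming both clusters are reachable and the URLs and index name are placeholders; copy the mapping first, then the documents:

elasticdump --input=http://username:password@old-cluster:9200/myindex --output=http://username:password@new-cluster:9200/myindex --type=mapping
elasticdump --input=http://username:password@old-cluster:9200/myindex --output=http://username:password@new-cluster:9200/myindex --type=data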

Examples for backing up and restoring data with snapshots to Amazon S3 or files

First, configure the snapshot destination:

  1. S3 example
 curl 'localhost:9200/_snapshot/my_repository?pretty' -XPUT -H 'Content-Type: application/json' -d '{
   "type" : "s3",
   "settings" : {
     "bucket" : "test-bucket",
     "base_path" : "backup-2017-01",
     "max_restore_bytes_per_sec" : "1gb",
     "max_snapshot_bytes_per_sec" : "1gb",
     "compress" : "true",
     "access_key" : "<ACCESS_KEY_HERE>",
     "secret_key" : "<SECRET_KEY_HERE>"
   }
 }'
  2. Local disk or mounted NFS example
 curl 'localhost:9200/_snapshot/my_repository?pretty' -XPUT -H 'Content-Type: application/json' -d '{
   "type" : "fs",
   "settings" : {
     "location": "<PATH … for example /mnt/storage/backup>"
   }
 }'
  3. Trigger snapshot
 curl -XPUT 'localhost:9200/_snapshot/my_repository/<snapshot_name>'
  4. Show all backups
 curl 'localhost:9200/_snapshot/my_repository/_all'
  5. Restore – the most important part of backup is verifying that backup restore actually works!
curl -XPOST 'localhost:9200/_snapshot/my_repository/<snapshot_name>/_restore'
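
To verify that the restore actually worked, you can watch the recovery progress and then check that the cluster is healthy again, for example:

curl 'localhost:9200/_cat/recovery?v'
curl 'localhost:9200/_cluster/health?pretty'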

What about Hosted Elasticsearch?

There are several hosted Elasticsearch services, with Sematext Cloud being a great alternative for time series data like logs.  Each hosted Elasticsearch service is a little different. The list below shows a few relevant aspects of Sematext:

  • Sematext API for logs is compatible with the Elasticsearch API, except for a few security-related exceptions
  • Sematext does not expose Elasticsearch management APIs like index listing or global search via /_search
  • Scripting and index deletion operations are blocked
  • Users can define and revoke access tokens for read and write access
  • Sematext provides role-based access control and SSL/TLS
  • Daily snapshots for all customers are automatic and are stored securely
  • Sematext supports raw data archiving to Amazon S3

If you have any questions, feel free to contact us. We provide professional Elasticsearch support, training, and consulting.

Or, if you are interested in trying Sematext Cloud, our hassle-free managed ELK that you don’t need to maintain and scale, here is a short path to a free trial:

SIGN UP – FREE TRIAL
