
Solr on Docker – the Good, the Bad and the Ugly – Video & Slides

Another Lucene/Solr Revolution happened on September 12-15, 2017 in Las Vegas. Sematext was there, exhibiting AND giving two talks! Thanks to everyone who stopped by our booth and attended our two talks.

This blog post is about the second presentation, which puts Solr and Docker together, discussing the good, the bad and the ugly parts of that combination. The talk has two main goals:

  • First, to discuss the tradeoffs for running Solr on Docker. For example, you get dynamic allocation of operating system caches, but you also get some CPU overhead. We’ll keep in mind that Solr nodes tend to be different than your average container: Solr is usually long running, takes quite some RSS and a lot of virtual memory. This will imply, for example, that it makes more sense to use Docker on big physical boxes than on configurable-size VMs (like Amazon EC2).
  • Second, to discuss issues with deploying Solr on Docker and how to work around them. For example, many older (and some of the newer) combinations of Docker, Linux Kernel and JVM have memory leaks. The presentation below goes over Docker operations best practices, such as using container limits to cap memory usage and prevent the host OOM killer from terminating a memory-consuming process – usually a Solr node – or running Docker in Swarm mode over multiple smaller boxes to limit the spread of a single issue.
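To make the container-limits idea from the second bullet concrete, here is a minimal sketch using the official `solr` image. The 8g/2g figures are illustrative assumptions, not recommendations from the talk: `--memory` sets a hard cgroup limit, setting `--memory-swap` equal to it disables swap for the container, and `SOLR_HEAP` keeps the JVM heap well below the container limit so off-heap memory and the OS page cache (which Lucene relies on) have headroom.

```shell
# Cap the container's memory so the host OOM killer doesn't
# pick off the Solr process (limits below are illustrative).
docker run -d --name solr1 \
  --memory=8g \
  --memory-swap=8g \
  -e SOLR_HEAP=2g \
  -p 8983:8983 \
  solr:7
```

The gap between the JVM heap (2g) and the container limit (8g) is deliberate: Solr's performance depends heavily on the OS caching index files, so squeezing the limit down to the heap size tends to hurt more than it saves.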

Interested in listening to the 40-minute talk? Check it out below.

Don’t have time to watch the video? You can check Solr on Docker – the Good, the Bad and the Ugly slides instead.

What’s Next

Want to learn more about Solr? Subscribe to our blog or follow @sematext. If you need any help with Solr / SolrCloud, don’t forget that we provide Solr Consulting, Solr Production Support, and Solr Training.

Need a Solr monitoring solution? Try SPM for Solr. Monitor all key Solr metrics, from Request Rate & Latency to Warmup Time, and more.

