Solr is widely adopted by startups and enterprises alike. It’s powerful and open-source, so it’s very appealing to just about everyone looking for a search platform to build off of.
Because it is so accessible, many people overlook the importance of monitoring Solr. And even those who do consider it often follow the trend and reach for an open-source tool for their monitoring needs.
Although that is a perfectly viable path, open-source isn't the only answer, and it's often not the best choice.
Today, we’re going to go over some of the most popular methods of monitoring Solr, listing tools that are both open-source and paid, so that you can get a good idea of what sort of offers are out there.
For the sake of the open-source tools, we will also give you a quick tutorial to show you how to set them up.
Before we get started, in case you’re undecided, take a quick look at the key differences between Apache Solr and Elasticsearch:
The Importance of Solr Monitoring
Operating, managing, and maintaining distributed systems is not easy. As we explored in the first part of our Solr monitoring series, there are more than forty metrics we need to track to have full visibility into our Solr instances and the cluster as a whole.
Without any kind of monitoring tool, it is close to impossible to have a full view of all the needed pieces to be sure that the cluster is healthy or to react properly when things are not going the right way.
When searching for a tool to help you track Solr metrics, look at the following qualities:
- The ability to monitor and manage multiple clusters
- An easy, at-a-glance overview of the whole cluster and its state
- Clear information about the crucial performance metrics
- Ability to provide historical metrics for post-mortem analysis
- Combines low-level OS metrics, JVM metrics, and Solr-specific metrics
- Ability to set up alerts
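Several of the checks above can be scripted directly against Solr's Metrics API (`/solr/admin/metrics`). Below is a minimal sketch, assuming Solr runs on localhost:8983; the extraction helper assumes the JSON shape that endpoint returns for the `solr.jvm` registry:

```python
import json
import urllib.request

def heap_usage_percent(metrics_json):
    """Extract JVM heap usage (%) from a Solr /admin/metrics response dict."""
    jvm = metrics_json["metrics"]["solr.jvm"]
    used = jvm["memory.heap.used"]
    maximum = jvm["memory.heap.max"]
    return 100.0 * used / maximum

def fetch_jvm_metrics(base_url="http://localhost:8983/solr"):
    # Solr's Metrics API; group=jvm limits the response to JVM metrics.
    url = f"{base_url}/admin/metrics?group=jvm&wt=json"
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

# Usage (requires a running Solr instance):
#   print(f"heap used: {heap_usage_percent(fetch_jvm_metrics()):.1f}%")
```

A script like this is no substitute for a real monitoring tool, but it illustrates that the raw data for all of the qualities listed above is readily available from Solr itself.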
Let’s now explore some of the available options.
Best Solr Monitoring Tools
1. Sematext

We have to say this first: Sematext has been in the business of providing consulting, training, and support for Apache Solr since around 2010. As such, Sematext has a ton of experience troubleshooting Solr, the JVM, and the infrastructure running Solr, whether on bare metal, in VMs, or in containers on Kubernetes. So how good do you think Sematext's Solr monitoring is? We suspect it might be the best in the industry.
Infrastructure Monitoring from Sematext gives you insights into the usage of your servers, Kubernetes nodes, pods, cloud instances, containers, Solr, and more. Keep tabs on search engines, databases, queues, and more when operating within your infrastructure.
One of the things that makes Sematext stand out from the crowd are its out-of-the-box Solr monitoring dashboards and pre-built alert rules. These dashboards are Solr-specific, meaning they are optimized and ready to go for Solr, among many other things. Check Solr monitoring docs or Solr Cloud monitoring docs for a few visuals. Moreover, these dashboards contain not only observability data, but also inline tips that provide information about individual metrics, what they mean, and how you might want to tweak your Solr configuration to address issues.
What’s even better, Logs Pipelines allow you to eliminate log events that you don’t want. As a result, you have better control of your costs, allowing you to save data usage for the log events that are important to you, and only those events. Trim unwanted fields, enhance your logs, and transform them as needed.
Sematext is not limited to Solr monitoring – it’s a full-stack Monitoring solution. In addition to Infrastructure and Log monitoring, there’s also Synthetic and Real User Monitoring available to you from the same UI.
- Log Monitoring
- Full-stack observability
- Infrastructure Monitoring
- Real User Monitoring
- Synthetic Monitoring
- SSL Certificate Monitoring
- Status Pages
- Alerting with anomaly detection
- 100+ integrations
- Pre-built Solr dashboards and alerts created by Solr experts based on 15+ years of working with Solr
- Ability to monitor not just Solr metrics, but also Solr logs, JVM metrics, JVM garbage collection logs, and underlying infrastructure
- Super simple setup due to auto-discovery of Solr instances and logs
- Fully customizable alert rules and dashboards
- Seamless setup process with accommodating support staff according to a number of reviews on G2
- Internal and external monitoring capabilities
- Flexible per-App pricing
- Logs Pipelines for granular log monitoring cost control
- No support for transaction tracing
Sematext’s pricing options are easy to scale, depending on what you need, and come with zero obligations. You can cancel, upgrade, or downgrade at any time.
Infrastructure Monitoring has a free plan, but the paid plans start at only $3.60 per host per month. The $3.60 price tag comes with 7 days of retention, but you can scale this up if you need to.
Log Monitoring has a free plan, too, and the paid options start at $50 per month. This plan starts with 1GB of ingested data per day and 7 days of retention, but again, you can scale this up to meet your needs.
2. Prometheus

Prometheus is an open-source monitoring and alerting system originally developed at SoundCloud. It is now a standalone open-source project, maintained independently of the company that initially created it. In 2016, the Prometheus project joined the Cloud Native Computing Foundation as its second hosted project, right after Kubernetes.
Out of the box, Prometheus offers a flexible query language on top of a multi-dimensional data model backed by its time-series database (TSDB), with data pulled over an HTTP-based protocol.
The Prometheus Solr Exporter is shipped with Solr as a contrib module located in the contrib/prometheus-exporter directory. To start working with it we need the solr-exporter-config.xml file located in the contrib/prometheus-exporter/conf directory. It is already preconfigured to work with Solr, and we will not modify it now. However, if you are interested in additional metrics, shipping additional facet results, or sending less data to Prometheus, this is the file to review and modify.
Once we have the exporter configured we need to start it. It is very simple. Just go to the contrib/prometheus-exporter directory (or the one where you copied it in your production system) and run the appropriate command, depending on the architecture of Solr you are running.
For Solr master-slave deployments you would run:
./bin/solr-exporter -p 9854 -b http://localhost:8983/solr -f ./conf/solr-exporter-config.xml -n 8
For SolrCloud you would run:
./bin/solr-exporter -p 9854 -z localhost:2181/solr -f ./conf/solr-exporter-config.xml -n 16
The above commands run the Solr exporter on port 9854, with 8 threads for Solr master-slave and 16 for SolrCloud. In the SolrCloud case, we also point the exporter to the Zookeeper ensemble accessible on port 2181 on the local host. Of course, you should adjust the commands to match your environment.
After the command runs successfully, you should see output like the following:
INFO - 2019-04-29 16:36:21.476; org.apache.solr.prometheus.exporter.SolrExporter; Start server
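With the exporter running, you can sanity-check it before wiring up Prometheus: fetching http://localhost:9854/metrics should return metrics in the Prometheus text exposition format. A minimal sketch of parsing that format follows; the sample metric names are illustrative, not the exporter's exact output:

```python
def parse_prometheus_text(payload):
    """Parse Prometheus text exposition format into {metric: value}."""
    samples = {}
    for line in payload.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blank, HELP, and TYPE lines
            continue
        # The value is everything after the last space; the metric name
        # (including any {labels}) is everything before it.
        name, _, value = line.rpartition(" ")
        samples[name] = float(value)
    return samples

sample = """\
# HELP solr_ping whether Solr answered a ping request
# TYPE solr_ping gauge
solr_ping{base_url="http://localhost:8983/solr",} 1.0
solr_metrics_jvm_memory_heap_bytes{item="used",} 1.234567E8
"""
parsed = parse_prometheus_text(sample)
```

In practice you would fetch the payload with curl or urllib instead of hardcoding it; the point is simply that the exporter speaks the standard Prometheus format, so any Prometheus-compatible tooling can consume it.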
We have Solr master-slave/SolrCloud running and we have our Solr Exporter running, this means we are ready to take the next step and configure our Prometheus instance to fetch data from our Solr Exporter. To do that we need to adjust the prometheus.yml file and add the following:
scrape_configs:
  - job_name: 'solr'
    static_configs:
      - targets: ['localhost:9854']
Of course, in the production system, our Prometheus will run on a different host compared to our Solr and Solr Exporter – we can even run multiple exporters. That means that we will need to adjust the target property to match our environment.
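For example, with two exporters running on separate hosts (the hostnames below are placeholders), the scrape configuration might look like this:

```yaml
scrape_configs:
  - job_name: 'solr'
    scrape_interval: 15s
    static_configs:
      - targets:
          - 'solr-exporter-1.example.com:9854'
          - 'solr-exporter-2.example.com:9854'
```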
After all the preparations we can finally look into what Prometheus gives us. We can start with the main Prometheus UI.
It allows for choosing the metrics that we are interested in, graphing them, alerting on them, and so on. The beautiful thing about it is that the UI supports the full Prometheus Query Language allowing the use of operators, functions, subqueries, and many, many more.
When using the visualization functionality of Prometheus we get a full view of the available metrics via a simple dropdown menu, so we don't need to know each and every metric that Solr ships.
The nice thing about Prometheus is that we are not limited to the default UI; we can also use Grafana for dashboarding, alerting, and team management. Defining a new Prometheus data source is very, very simple:
Once that is done we can start visualizing the data:
However, all of that requires us to build rich dashboards ourselves. Luckily, Solr comes with an example pre-built Grafana dashboard that can be used with the metrics scraped into Prometheus. The example dashboard definition is stored in the contrib/prometheus-exporter/conf/grafana-solr-dashboard.json file and can be loaded into Grafana, giving a basic view of our Solr cluster.
We are able to set up teams and users, assign roles to them, set up alerts on the metrics, and include multiple data sources within a single Grafana installation. This allows us to have everything in one place: metrics from multiple sources, logs, signals, traces, and whatever else we need.
3. Graphite & Graphite with Grafana
Graphite is free, open-source monitoring software that can monitor and graph numeric time-series data. It can collect, store, and display data in real time, allowing for fine-grained metrics monitoring.
It is composed of three main parts: Carbon, the daemon listening for time-series data; Whisper, the database for storing time-series data; and the Graphite web app, which is used for on-demand metrics rendering.
To start monitoring Solr with Graphite as the platform of choice, we assume that you already have Graphite up and running. If you don't, you can start by using the provided Docker container:
docker run -d --name graphite --restart=always -p 80:80 -p 2003-2004:2003-2004 -p 2023-2024:2023-2024 -p 8125:8125/udp -p 8126:8126 graphiteapp/graphite-statsd
To be able to get the data from Solr we will use Solr metrics registry along with the Graphite reporter. To configure that we need to adjust the solr.xml file and add the metrics part to it. For example, to monitor information about the JVM and the Solr node the metrics section would look as follows:
<metrics>
  <reporter name="graphite" group="node, jvm" class="org.apache.solr.metrics.reporters.SolrGraphiteReporter">
    <str name="host">localhost</str>
    <int name="port">2003</int>
    <int name="period">60</int>
  </reporter>
</metrics>
So we pointed Solr to the Graphite server running on localhost on port 2003, and we set the reporting period to 60, which means that Solr will push the JVM and Solr node metrics once every 60 seconds.
Keep in mind that by default Solr writes using the plain-text protocol, which is less efficient than the pickled protocol. If you are configuring Solr and Graphite for production, we suggest setting the pickled property to true in the reporter configuration and using the pickled-protocol port, which in the case of our Docker container is 2004.
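Assuming the Docker container above, a production-oriented reporter section using the pickled protocol might look like this (note the pickled flag and port 2004 replacing the defaults):

```xml
<metrics>
  <reporter name="graphite" group="node, jvm" class="org.apache.solr.metrics.reporters.SolrGraphiteReporter">
    <str name="host">localhost</str>
    <int name="port">2004</int>
    <bool name="pickled">true</bool>
    <int name="period">60</int>
  </reporter>
</metrics>
```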
We can now easily navigate to our Graphite server, available at 127.0.0.1 on port 80 with our container, and graph our data:
All the metrics are sorted out and easily accessible in the left menu allowing for rich dashboarding capabilities.
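Besides the web UI, Graphite exposes a render API that returns series as JSON, which is handy for ad-hoc scripting. A minimal sketch, assuming the Docker setup above; the metric path in the example is illustrative, so check the metric tree in the left menu for the real names:

```python
import json
import urllib.parse
import urllib.request

def render_url(base, target, frm="-1h", fmt="json"):
    """Build a Graphite render API URL for the given metric target."""
    query = urllib.parse.urlencode({"target": target, "from": frm, "format": fmt})
    return f"{base}/render?{query}"

def fetch_series(base, target):
    # Each entry in the response: {"target": ..., "datapoints": [[value, timestamp], ...]}
    with urllib.request.urlopen(render_url(base, target)) as resp:
        return json.loads(resp.read())

# Example (illustrative metric path, run against a live Graphite instance):
url = render_url("http://127.0.0.1", "solr.localhost.node.QUERY.requests")
```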
If you are using Grafana, it is easy to set up Graphite as yet another data source and use its graphing and dashboarding capabilities to correlate multiple metrics together, even ones coming from different data sources.
Next, we need to configure Graphite as the data source. It is as easy as providing the proper Graphite URL and setting the version:
And we are ready to create our visualizations and dashboards, which is easy and powerful. With autocomplete available for metrics, we don't need to recall exact metric names; Grafana will suggest them as we type. An example single-metric dashboard can look as follows:
4. Splunk

Splunk is well known in the log analytics space, but it is aimed at organizations with large budgets. Splunk was acquired by Cisco for roughly $28 billion, and that price tag alone hints at how expensive the product itself can be.
That being said, Splunk is a powerful APM tool, which is what it's primarily known for. It does, however, offer a lot in terms of log management and Solr monitoring.
- Log monitoring
- Application performance monitoring
- Infrastructure monitoring
- Real user monitoring
- Synthetic Monitoring
- Add-ons available
- Automated anomaly detection
- On-premise or cloud-based options
- Supports multiple data formats
- Expensive licensing
- Splunk Processing Language (SPL) is complicated
- Limited data modeling
- Limited machine learning capabilities
For some reason, Splunk's prices are difficult to find. Even when you dig through their website long enough to find them, they are still vague and don't cover all of their solutions.
This approach is frustrating, but it makes sense when you look at the biggest complaints on G2: Splunk is expensive, and they want you to reach out!
The prices that you can find online are as follows:
Synthetic monitoring starts at just $1, but you only get 10,000 Uptime requests.
Real user monitoring (RUM) starts at $14, but it only covers 10,000 sessions.
Infrastructure monitoring starts at $15 per month, but that’s for every single host, which is very pricey.
Incident response starts at $5 per user per month, and APM starts at $55 per month per host.
Unfortunately for those looking to pair Solr monitoring with log monitoring, Splunk does not publish log monitoring prices at this time.
Not a big fan of Splunk? You should see how Sematext stacks up. Check out our page on Sematext vs Splunk.
5. Ganglia

Ganglia is an open-source, scalable distributed monitoring system. It is based on a hierarchical design targeted at a large number of clusters and nodes. It uses XML for data representation, XDR for data transport, and RRD for data storage and visualization. It has been used to connect clusters across university campuses and is proven to handle clusters with 2000 nodes.
To start monitoring Solr master-slave or SolrCloud clusters with Ganglia we will start with setting up the metrics reporter in the solr.xml configuration file. To do that we add the following section to the mentioned file:
<metrics>
  <reporter name="ganglia" group="node, jvm" class="org.apache.solr.metrics.reporters.SolrGangliaReporter">
    <str name="host">localhost</str>
    <int name="port">8649</int>
  </reporter>
</metrics>
The next thing that we need to do is allow Solr to understand the XDR protocol used for data transport. We need to download the oncrpc-1.0.7.jar jar file and place it either in your Solr classpath or include the path to it in your solrconfig.xml file using the lib directive.
Once all of the above is done, and assuming Ganglia is running on localhost on port 8649, that is all we need to do to start shipping Solr node and JVM metrics.
By visiting Ganglia and choosing the Solr node we can start looking into the metrics:
We can jump to the graphs right away, choose which group of metrics we are interested in, and see most of the data we care about at a glance.
Ganglia provides us with all the visibility we need for our metrics, but out of the box it doesn't support one of the crucial features we are looking for: alerting. There is, however, a project called ganglia-alert, a user-contributed extension to Ganglia.
6. Dynatrace

For the next and final entry on this list, we have Dynatrace. Dynatrace is a paid solution that, you guessed it, allows you to monitor Solr performance metrics and log events.
It has a particular focus on Application Performance Monitoring (APM), but it offers all the same log management capabilities that the others on this list do.
Dynatrace is aimed at enterprise-level monitoring, and it is priced accordingly. It's great for providing business metrics across multiple digital platforms and applies causal AI to help automate workflows.
- Log management and Analytics
- Full-stack Monitoring
- Infrastructure Monitoring
- Application Security
- Real User Monitoring
- Synthetic Monitoring
- Lots of observability options
- Priced based on data that you use
- Powerful alerting
- Powered by AI
- Very expensive
- User reviews report that it is complex to use
- User reviews report bad customer service experiences
- User reviews report poor documentation
Dynatrace seems cheap at first, but we have to do some math in order to understand how much you’ll really be paying.
Let's take log management as an example. Prices start at just $0.20 per ingested and processed GiB. Retaining that GiB costs only about $0.0007 per month, but querying that log data costs $0.0035 per GiB queried!
Think about it like this: with 1GiB ingested per day and 7 days of retention, that's $6 for ingestion plus $0.0049 for retention, or $6.0049 per month. However, Dynatrace also charges $0.0035 per GiB for queries. With 7GiB stored and a query every 10 minutes, that's an extra $3.50 or so per day. You end up paying roughly $111 per month in total!
With Infrastructure Monitoring, they charge $0.04 per host per hour. That means the monthly charge per host is 0.04 * 24 * 30 = $28.80, where 24 is the number of hours in a day and 30 is the number of days in a month. That's $28.80 for a single host!
Synthetic requests cost $0.001 each. That might sound cheap, but let's think it through. If you set up one HTTP monitor from a single location at 60-second intervals, the math looks like this: 0.001 * 1440 * 30 = $43.20, where 1440 is the number of runs per day and 30 is the number of days in the month. That means you're paying about $43 per month for a single HTTP monitor.
Real User Monitoring is charged based on the number of sessions. Each session will cost you $0.00225. Quick math shows us that 100,000 sessions would cost you $225 per month.
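The back-of-the-envelope math above is easy to reproduce. Here is a quick sketch using the list prices quoted in this section (all figures are per month; 144 queries per day corresponds to one query every 10 minutes):

```python
def log_cost(gib_per_day, retention_gib, queries_per_day, days=30,
             ingest=0.20, retain=0.0007, query=0.0035):
    """Dynatrace-style log pricing: ingestion + retention + query charges."""
    ingestion = gib_per_day * days * ingest                     # $0.20 per ingested GiB
    retention = retention_gib * retain                          # ~$0.0007 per GiB-month retained
    querying = retention_gib * query * queries_per_day * days   # $0.0035 per GiB queried
    return ingestion + retention + querying

infra = 0.04 * 24 * 30          # $/host-hour * hours per day * days
synthetic = 0.001 * 1440 * 30   # $/request * runs per day (60s interval) * days
rum = 0.00225 * 100_000         # $/session * sessions

logs = log_cost(gib_per_day=1, retention_gib=7, queries_per_day=144)
```

Running the numbers gives roughly $28.80 for one host, $43.20 for one HTTP monitor, $225 for 100,000 sessions, and about $111 for the log scenario, matching the figures above.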
Want to see how Sematext stacks up? Check out our page on Sematext vs Dynatrace.
As you can see, there is a wide variety of tools that can help you monitor Solr. What you have to keep in mind is that each requires setup, configuration, and manual dashboard building in order to produce meaningful information. All of that may require deep knowledge across the whole ecosystem. Learn how to pick the best monitoring tool for your use case from our guide to alerting and monitoring.
If you are looking for a Solr monitoring tool, whether it’s pre-configured or open-source, then this list is a fantastic head start.
If you’re thinking about going with something open-source, please keep in mind that it’s not 100% free. Sure, you don’t have to pay any sort of subscription, but there certainly is a cost of ownership.
It can be just as expensive as, if not more expensive than, pre-configured solutions like Sematext, Splunk, and Dynatrace on this list; the difference is that with those, you don't have the headache of managing the solution on your own.
We also offer a one-stop shop for Solr services like consulting, training, and even support.
Continue reading about Solr: