Seems to be a mixed bag in terms of tlog size across all of our indexes,
but currently the index with the performance issues has 4 tlog files
totaling ~200 MB. This still seems high to me since the collections are in
sync, and we hard commit every minute, but it's less than the ~8GB it was
before we cleaned them up. Spot-checking some other indexes shows that some
have tlogs >3GB, but none of those indexes have performance issues (on the
same solr node), so I'm not sure it's related. We have 13 collections of
various sizes running on our solr cloud cluster, and none of them seem to
have this issue except for this one index, which is not our largest index
in terms of size on disk or number of documents.
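For reference, the hard-commit behavior described above would correspond to a solrconfig.xml fragment roughly like this (a sketch; the exact values beyond the 60-second interval are assumptions on my part). A hard commit with openSearcher=false rolls over the transaction log, which is why the tlogs should stay small when the replicas are in sync:

```xml
<!-- Sketch of the hard-commit settings described above. A hard commit
     with openSearcher=false fsyncs the index and starts a new tlog,
     so per-file tlog size should be bounded by one minute of updates. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>60000</maxTime>           <!-- hard commit every 60 s -->
    <openSearcher>false</openSearcher> <!-- don't open a new searcher -->
  </autoCommit>
</updateHandler>
```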
As for the response times: running a default *:* search sorted on our id
field (so that we get consistent results across environments) and returning
200 results (our max page size in the app) with ~20 fields, we see
times of ~3.5 seconds in production, compared to ~1 second on one of our
lower environments with an exact copy of the index. Both have CDCR enabled
and have identical clusters.
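For concreteness, the benchmark query looks roughly like the following (collection name, field list, and Solr URL are placeholders, not our actual values):

```shell
# Hypothetical reproduction of the benchmark query described above:
# match all docs, sort on the id field for deterministic ordering,
# return 200 rows with an explicit field list (~20 fields in reality).
SOLR_URL="http://localhost:8983/solr"
COLLECTION="mycollection"   # placeholder collection name
QUERY="${SOLR_URL}/${COLLECTION}/select?q=*:*&sort=id+asc&rows=200&fl=id,field1,field2"
echo "$QUERY"

# Timing the request against a live cluster (commented out here):
# curl -s -o /dev/null -w "total: %{time_total}s\n" "$QUERY"
```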
Unfortunately, currently the only instance we are seeing the issue on is
production, so we are limited in the tests that we can run. I did confirm
in the lower environment that the doc cache is large enough to hold all of
the results, and that both the doc and query caches should be serving the
results. Obviously in production we have much more indexing going on, but
we do use autowarming for our caches, so our response times stay stable
across new searchers.
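The cache setup is along these lines (a sketch with placeholder sizes, not our actual production values; note that Solr's documentCache cannot be autowarmed since internal doc ids change between searchers, so the autowarming applies to the query result cache):

```xml
<!-- Sketch of the cache configuration described above; sizes and
     autowarmCount are placeholders. -->
<documentCache class="solr.CaffeineCache"
               size="16384" initialSize="4096"/>
<queryResultCache class="solr.CaffeineCache"
                  size="4096" initialSize="1024"
                  autowarmCount="256"/>
```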
We did move the lower environment to the same ESX host as our production
cluster, so that it is getting resources from the same pool (CPU, RAM,
etc). The only thing that differs is the disks, and the lower environment
is actually running on slower disks than production. And if it were a disk
issue you would expect it to affect all of the collections, not just this
one.
It's a mystery!
On Wed, Jun 13, 2018 at 10:38 AM, Erick Erickson <[EMAIL PROTECTED]>