Collectors, as their name implies, collect things. In X-Pack monitoring for Elasticsearch, collectors have a few rules that define them.
There is only one collector per data type gathered. In other words, any monitoring document that is created comes from a single collector rather than being merged together from multiple collectors. X-Pack monitoring for Elasticsearch currently has a few collectors because the goal is to minimize overlap between them for optimal performance.
Each collector can create zero or more monitoring documents. For example, the index_stats collector collects all index statistics at the same time to avoid many unnecessary calls. The following collectors exist:
Cluster Stats: Gathers details about the cluster state, including parts of the actual cluster state (for example, GET /_cluster/state).

Index Stats: Gathers details about the indices in the cluster, both in summary and individually. This creates many documents that represent parts of the index statistics output (for example, GET /_stats).

Index Recovery: Gathers details about index recovery in the cluster. Index recovery represents the assignment of shards at the cluster level. If an index is not recovered, it is not usable. This also corresponds to shard restoration via snapshots. This information only needs to be collected once, so it is collected on the elected master node. The most common failure for this collector is a timeout caused by an extreme number of shards, and therefore the time required to gather them. By default, this creates a single document that contains all recoveries, which can be quite large but gives the most accurate picture of recovery in the production cluster.

Shards: Gathers details about all allocated shards for all indices, particularly including the node to which each shard is allocated. This information only needs to be collected once, so it is collected on the elected master node. Unlike most other collectors, this collector uses the local cluster state to get the routing table, so it is not subject to network timeout issues. Each shard is represented by a separate monitoring document.

Jobs: Gathers details about all machine learning job statistics (for example, GET /_xpack/ml/anomaly_detectors/_stats).

Node Stats: Gathers details about the running node, such as memory utilization and CPU usage (for example, GET /_nodes/_local/stats).
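Since each of these collectors reads from public Elasticsearch APIs, a quick way to see the raw data they summarize is to call those APIs yourself. The sketch below uses the official Python client (elasticsearch-py); the client methods are real, but the pairing with collectors and the local URL are illustrative assumptions, not the collectors' actual implementation:

```python
from elasticsearch import Elasticsearch

# Illustrative only: assumes a locally reachable cluster.
es = Elasticsearch("http://localhost:9200")

# Cluster Stats: parts of the actual cluster state plus statistics about it.
cluster_state = es.cluster.state()
cluster_stats = es.cluster.stats()

# Index Stats: a single call returns both the cluster-wide summary and the
# per-index breakdown, which is why one collection pass can emit many
# monitoring documents.
stats = es.indices.stats()
summary = stats["_all"]
per_index = stats["indices"]  # roughly one monitoring document per index

# Index Recovery: all recoveries in one (potentially very large) response.
recoveries = es.indices.recovery()

# Shards: the routing table lives in the cluster state, which the collector
# reads locally on the elected master node.
routing_table = es.cluster.state(metric="routing_table")

# Jobs: machine learning job statistics (es.ml.get_job_stats() in recent
# Python clients; the endpoint prefix differs across X-Pack versions).

# Node Stats: statistics for the local node only, such as CPU and memory.
node_stats = es.nodes.stats(node_id="_local")
```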
Fundamentally, each collector works on the same principle. Per collection interval, which defaults to 10 seconds (10s), each collector is checked to see whether it should run, and then the appropriate collectors run. The failure of an individual collector does not impact any other collector.
Once collection has completed, all of the monitoring data is passed to the exporters, which route it to the monitoring clusters.
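That control flow can be pictured with a small sketch. Everything here (the collector objects and method names such as should_collect and export) is hypothetical rather than the actual X-Pack implementation; it only mirrors the behavior described above: a fixed interval, a per-collector eligibility check, failure isolation, and a bulk hand-off to the exporters.

```python
import logging
import time

log = logging.getLogger("monitoring")

COLLECTION_INTERVAL = 10.0  # seconds; mirrors the 10s default


def run_collection(collectors, exporters):
    """One collection pass: run eligible collectors, then export in bulk."""
    documents = []
    for collector in collectors:
        # Each collector decides for itself whether it should run now
        # (for example, only on the elected master node).
        if not collector.should_collect():
            continue
        try:
            documents.extend(collector.collect())
        except Exception:
            # A failing collector is logged on this node but does not
            # impact any other collector in the same pass.
            log.exception("collector [%s] failed", collector.name)
    # Only after collection completes is the batch routed to the exporters.
    for exporter in exporters:
        exporter.export(documents)


def collection_loop(collectors, exporters):
    while True:
        run_collection(collectors, exporters)
        time.sleep(COLLECTION_INTERVAL)
```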
The collection interval can be configured dynamically and you can also disable data collection. Disabling data collection can be very useful when you are using a separate monitoring cluster to automatically take advantage of the cleaner service.
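For example, assuming the setting names used in recent 6.x and 7.x versions (xpack.monitoring.collection.interval, and xpack.monitoring.collection.enabled for disabling collection), both can be changed at runtime through the cluster update settings API. A sketch with the 7.x-style Python client:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Raise the collection interval from the default 10s to 30s.
es.cluster.put_settings(
    body={"persistent": {"xpack.monitoring.collection.interval": "30s"}}
)

# Disable data collection entirely, for example while a dedicated
# monitoring cluster handles retention via the cleaner service.
es.cluster.put_settings(
    body={"persistent": {"xpack.monitoring.collection.enabled": False}}
)
```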
If gaps exist in the monitoring charts in Kibana, it is typically because either a collector failed or the monitoring cluster did not receive the data (for example, it was being restarted). In the event that a collector fails, a logged error should exist on the node that attempted to perform the collection.
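One way to confirm such a gap from the monitoring cluster side is to aggregate document timestamps and look for empty buckets. A hedged sketch follows; the .monitoring-es-* index pattern and the timestamp field are the defaults for Elasticsearch monitoring data, but verify them for your version (and note that fixed_interval requires Elasticsearch 7.2+; older versions use interval):

```python
from elasticsearch import Elasticsearch

mon = Elasticsearch("http://localhost:9200")  # the monitoring cluster

resp = mon.search(
    index=".monitoring-es-*",
    body={
        "size": 0,
        "aggs": {
            "per_interval": {
                "date_histogram": {
                    "field": "timestamp",
                    "fixed_interval": "10s",  # match the collection interval
                    "min_doc_count": 0,       # keep empty buckets visible
                }
            }
        },
    },
)

# Any empty bucket is an interval with no monitoring data at all.
for bucket in resp["aggregations"]["per_interval"]["buckets"]:
    if bucket["doc_count"] == 0:
        print("gap at", bucket["key_as_string"])
```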
Collection is currently done serially, rather than in parallel, to avoid extra overhead on the elected master node. The downside to this approach is that collectors might observe a different version of the cluster state within the same collection period. In practice, this does not make a significant difference and running the collectors in parallel would not prevent such a possibility.
For more information about the configuration options for the collectors, see Monitoring Collection Settings.