Elasticsearch circuit breakers and limiting the fielddata cache

Problem description

Our online Elasticsearch cluster runs on a host with 28 GB of memory, 14 GB of which is allocated to the ES heap. We use Kibana for some basic visualizations, mainly aggregation and sorting analysis. When a large time range is selected in Kibana, the ES service often OOMs and then goes down. To experiment with parameter settings, I built a single-instance Elasticsearch on this machine with -Xms500m and -Xmx500m. Following the method in the official guide on limiting memory usage (https://www.elastic.co/guide/cn/elasticsearch/guide/current/_limiting_memory_usage.html), I added this setting:
indices.fielddata.cache.size:  20% 

Because indices.breaker.fielddata.limit defaults to 60%, I did not set it explicitly. I then simulated the same kind of aggregation over a large time range in Kibana, and the node again OOMed and died.
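For reference, here is a minimal sketch of what the relevant elasticsearch.yml settings could look like with both the cache cap and an explicit fielddata breaker. The 20%/40% values are illustrative assumptions, not recommendations; the key constraint is that indices.breaker.fielddata.limit must stay above indices.fielddata.cache.size, otherwise the breaker trips before the cache ever evicts anything:

```yaml
# elasticsearch.yml -- illustrative values, tune for your own heap size

# Cap the fielddata cache: old entries are evicted once it reaches 20% of heap.
indices.fielddata.cache.size: 20%

# Fielddata circuit breaker: reject a request whose fielddata load would push
# usage past 40% of heap (default 60%). Must be higher than the cache size above.
indices.breaker.fielddata.limit: 40%
```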

What I hope to achieve:
1. Requests over large time ranges, or any behavior that may exhaust memory, should be rejected: ideally the ES server returns an error instead of results, and does not crash.
2. The ES cluster stays stable and is never brought down.
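One direction for goal 1, as a sketch with assumed percentages: besides the fielddata breaker, Elasticsearch has a request-level breaker and a parent (total) breaker that can reject an expensive aggregation up front with an exception, instead of letting it OOM the node. Exact defaults vary by ES version, so the values below are illustrative only:

```yaml
# elasticsearch.yml -- illustrative, hedged values

# Reject a single request (e.g. a huge aggregation) if the memory it needs
# in flight would exceed this fraction of the heap.
indices.breaker.request.limit: 40%

# Parent breaker: an overall cap across all circuit breakers combined.
indices.breaker.total.limit: 70%
```

With these in place, an over-large request should fail with a CircuitBreakingException rather than taking the node down, which is exactly the "return no results but stay up" behavior described above.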

In addition:
(1) I have read some blog posts (e.g. on Elasticsearch memory); the advice boils down to adding more memory, but that does not completely solve the problem.
(2) For aggregations, I can optimize as much as possible: split the bucket aggregation, slice the data, and query multiple smaller time periods separately. This works, but it makes viewing the results more cumbersome.
(3) https://www.elastic.co/guide/cn/elasticsearch/guide/current/heap-sizing.html gives some performance recommendations, but I only need stability; I have no performance requirements.
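As a sketch of the splitting approach in point (2), assuming a Logstash-style index pattern and an @timestamp field (both names are assumptions about the data), one slice of the split query sent to the _search endpoint might look like this: query one day at a time with an hourly histogram, then repeat for each day instead of aggregating the whole range at once:

```json
{
  "size": 0,
  "query": {
    "range": {
      "@timestamp": { "gte": "2017-01-01", "lt": "2017-01-02" }
    }
  },
  "aggs": {
    "per_hour": {
      "date_histogram": { "field": "@timestamp", "interval": "1h" }
    }
  }
}
```

Each slice touches far less fielddata at a time, so it is much less likely to trip a breaker or exhaust the heap, at the cost of stitching the per-day results together yourself.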
