Elasticsearch memory leak

Our production Elasticsearch cluster, which mainly serves post search, has been going down a lot recently.
I analyzed the heap dump after the ES memory blow-up and found about 1 GB of memory allocated inside Netty, but I don't know much about ES internals. My initial guess is that a single document is too large and crushes ES when the search results are returned.

[heap dump screenshots showing the suspect objects]
These objects each hold a byte[] of roughly 500 MB. Is that byte array the search result being returned?
The only solution I can think of is to reduce the size of a single document, or at least stop returning the full document in search results.
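
For what it's worth, the response payload can also be trimmed so that huge fields are never pulled back at all. A minimal sketch, assuming a hypothetical index named `posts`, a large `content` field, and the elasticsearch-py 7.x client (none of these names come from the original post):

```python
from elasticsearch import Elasticsearch

# Assumed connection details; adjust to the real cluster.
es = Elasticsearch("http://localhost:9200")

# Exclude the large field from _source so huge documents are not
# serialized back in the search response.
resp = es.search(
    index="posts",
    body={
        "size": 10,
        "_source": {"excludes": ["content"]},  # return only the small fields
        "query": {"match": {"title": "elasticsearch"}},
    },
)

for hit in resp["hits"]["hits"]:
    print(hit["_id"], hit["_source"].get("title"))
```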

Sep.16,2021

ES tokenization (word segmentation) takes up a lot of memory, so you usually need to optimize the JVM memory allocation. You can refer to the heap-sizing chapter of the ES guide: https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html

In addition, the documents themselves need to be optimized. For example, by default ES analyzes (tokenizes) every field, which is usually unnecessary; you should define a custom mapping to cut down ES's resource usage. By default, I believe strings longer than about 255 characters are not indexed as keywords, but I haven't used ES in a long time, so I don't remember exactly. I'd suggest reading through Elasticsearch: The Definitive Guide, which I went through back when I was using ES.
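
For illustration, one hedged way such a custom mapping could look with the Python client, assuming hypothetical fields `title` (needs full-text search), `tag` (exact match only), and `raw_payload` (a large blob that never needs to be searched); none of these names come from the original question:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Define an explicit mapping instead of relying on dynamic mapping,
# so only the fields that really need full-text search are analyzed.
es.indices.create(
    index="posts_v2",
    body={
        "mappings": {
            "properties": {
                "title": {"type": "text"},                        # analyzed, searchable
                "tag": {"type": "keyword", "ignore_above": 256},  # exact match; overly long values skipped
                "raw_payload": {"type": "text", "index": False},  # kept in _source but not indexed
            }
        }
    },
)
```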

The problem has been found. The index contains a very large document, about 245 MB, which goes straight into the old generation of the heap. When several such results are returned at the same time, they simply crush ES.
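
For anyone hitting the same issue: one hedged way to locate such oversized documents is the official `mapper-size` plugin, which records the byte size of each document's `_source` in a sortable `_size` field. A sketch with the Python client, assuming the plugin is installed on every node and documents are (re)indexed after it is enabled; the index and field names are made up for the example:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# The mapper-size plugin must be installed on every node first:
#   bin/elasticsearch-plugin install mapper-size
# _size is only recorded for documents indexed after it is enabled.
es.indices.create(
    index="posts_sized",
    body={
        "mappings": {
            "_size": {"enabled": True},
            "properties": {"title": {"type": "text"}},
        }
    },
)

# Find the biggest documents: sort by _size (bytes of _source) descending,
# returning no _source so the huge documents are not fetched again.
resp = es.search(
    index="posts_sized",
    body={"size": 5, "_source": False, "sort": [{"_size": "desc"}]},
)
for hit in resp["hits"]["hits"]:
    print(hit["_id"], hit["sort"][0], "bytes")
```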