How to perform GROUP BY-like statistics on JSON arrays in Elasticsearch

Problem description

Suppose there are 2 documents, each with a field (content) holding a JSON array, for example:
document 1, content: [{"k": "1", "v": "2016"}, {"k": "2", "v": "2017"}, {"k": "3", "v": "2018"}, {"k": "4", "v": "2016"}, {"k": "5", "v": "2018"}]
document 2, content: [{"k": "1", "v": "2016"}, {"k": "2", "v": "2018"}, {"k": "3", "v": "2019"}]

My goals:
1. Group all documents by v value and count the occurrences of each distinct v, similar to SQL: select v, count(*) from ... group by v;
2. For a specified v value, count the related k values, similar to SQL: select k, count(*) from ... where v = '2018' group by k;

Environment background and methods tried

I tried this in Postgres, but performance is poor once the data volume exceeds one million rows.

Related code


Code in postgres:
1. v value statistics

select elem->>'v' as v, count(*)
from A, json_array_elements(A.content::json) as elem
group by elem->>'v';

2. Count the related k for a specified v value

select k, count(*) from (
  select elem->>'k' as k, elem->>'v' as v
  from A, json_array_elements(A.content::json) as elem
  where A.content::jsonb @> '[{"v": "2018"}]'
) B
where v = '2018'
group by k;

What result do you expect? What error message did you actually see?

How can I compute these statistics in Elasticsearch, and what is the right approach? Any help is appreciated.
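For reference, a commonly used approach in Elasticsearch for counting objects inside a JSON array is to map the array as a nested field and run nested terms aggregations. The sketch below uses the Kibana console request format; the index name my_index is a hypothetical placeholder, and the field names follow the content/k/v structure from the question. This is a sketch under those assumptions, not a tested answer for your cluster.

```json
PUT my_index
{
  "mappings": {
    "properties": {
      "content": {
        "type": "nested",
        "properties": {
          "k": { "type": "keyword" },
          "v": { "type": "keyword" }
        }
      }
    }
  }
}

POST my_index/_search
{
  "size": 0,
  "aggs": {
    "into_content": {
      "nested": { "path": "content" },
      "aggs": {
        "by_v": {
          "terms": { "field": "content.v" }
        }
      }
    }
  }
}

POST my_index/_search
{
  "size": 0,
  "aggs": {
    "into_content": {
      "nested": { "path": "content" },
      "aggs": {
        "only_2018": {
          "filter": { "term": { "content.v": "2018" } },
          "aggs": {
            "by_k": {
              "terms": { "field": "content.k" }
            }
          }
        }
      }
    }
  }
}
```

The first search corresponds to goal 1 (counts per v across all array elements); the second corresponds to goal 2 (a filter sub-aggregation restricts the nested elements to v = "2018" before counting k). The nested type matters: with a plain object mapping, Elasticsearch flattens the array and loses the pairing between k and v within the same element.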

Jun.02,2022