Logstash is configured with grok, but it does not take effect in the Kibana view

Hello, experts!

    log:

100.97.73.229 - - [19/Feb/2019:17:43:11 +0800] "GET /news-spread_index-138.html HTTP/1.1" 7920 "-" "Mozilla/5.0 (Linux; Android 8.1; MI 6X Build/OPM1.171019.011; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/57.0.2987.132 MQQBrowser/6.2 TBS/044307 Mobile Safari/537.36 Imou"

    
    grok pattern:

%{IPORHOST:client_ip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version})?|-)" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QUOTEDSTRING:domain} %{QUOTEDSTRING:data}

    
    logstash:

input {
    kafka {
        bootstrap_servers => "172.31.0.84:9092"    # kafka ip
        topics => ["lcshop-log","lcshop-errorlog"]    # topics
        decorate_events => "true"
        codec => plain
    }
}

filter {
    if [@metadata][kafka][topic] == "lcshop-log" {
        mutate {
            add_field => { "[@metadata][index]" => "lcshop-log-%{+YYYY-MM}" }
        }
        grok {
            match => { "message" => "%{IPORHOST:client_ip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version})?|-)\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QUOTEDSTRING:domain} %{QUOTEDSTRING:data}" }
            remove_field => ["message"]
        }
    } else if [@metadata][kafka][topic] == "lcshop-errorlog" {
        mutate {
            add_field => { "[@metadata][index]" => "lcshop-errorlog-%{+YYYY-MM}" }
        }
    }
}

output {
    elasticsearch {
        hosts => ["172.31.0.76:9200"]        # es ip
        index => "%{[@metadata][index]}"     # index per topic
    }
}
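To see whether grok matched at all before looking at the index, a temporary debug output can help (a sketch, not part of the original config): events whose message fails the pattern are tagged `_grokparsefailure` by the grok filter, which is visible in the console output.

```
output {
    # temporary debug output: events that failed the grok match
    # carry "_grokparsefailure" in their tags field
    stdout { codec => rubydebug }
}
```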

But in the Kibana view, while the index has been generated successfully, the grok part has not taken effect. Where is the problem?


Problem solved~

It turns out the grok pattern should not be written against the raw log, but against the message as Kibana actually shows it.

Kibana shows an escape character added before the double quotation marks, so some fields cannot use QS; match them with DATA plus \" instead.
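For example, if the stored message really contains literal \" around the last two fields, QUOTEDSTRING/QS (which expect plain quotes) will not match there. A sketch of the adjusted pattern under that assumption, matching the escaped quotes explicitly and capturing the inside with DATA:

```
grok {
    # assumption: the event text contains backslash-escaped quotes, e.g. \"example.com\"
    # inside a double-quoted config string, \\ is a literal backslash and \" a quote,
    # so \\\" matches the two characters \" in the message
    match => { "message" => "%{IPORHOST:client_ip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version})?|-)\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) \\\"%{DATA:domain}\\\" \\\"%{DATA:data}\\\"" }
}
```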


Hello, expert, could you tell me how you used it? I have the same problem as you and am asking for advice.


input {
    kafka {
        bootstrap_servers => "192.168.193.133:9092,192.168.193.134:9092,192.168.193.135:9092"
        topics => "nginx_access_log"
        auto_offset_reset => "latest"
        type => "nginx_access_log"
    }
}

filter {
    grok {
        match => {
            "message" => "%{IPORHOST:client_ip} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request} (?:HTTP/%{NUMBER:http_version})?|-)\" %{NUMBER:status} %{NUMBER:upstream_status} (?:%{NUMBER:bytes}|-) %{QS:http_referer} %{QS:user_agent} %{QS:http_x_forwarded_for} %{QS:upstream_addr} %{NUMBER:request_time}"
        }
    }
}

output {
    elasticsearch {
        hosts => ["http://192.168.193.130:9200"]
        index => "nginx_access_log-%{+YYYY.MM.dd}"
        # user => "elastic"
        # password => "changeme"
    }
    stdout {
        codec => rubydebug
    }
}
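One way to isolate whether the transported message (rather than the pattern) is at fault is to bypass Kafka and paste one raw line into a stdin pipeline with the same filter. A minimal sketch, shown here with a deliberately shortened pattern for illustration; substitute the full pattern from the config above:

```
input { stdin {} }    # paste one raw log line at the console

filter {
    grok {
        # shortened illustrative pattern; use the full one from the Kafka pipeline
        match => { "message" => "%{IPORHOST:client_ip} \[%{HTTPDATE:timestamp}\] %{GREEDYDATA:rest}" }
    }
}

output { stdout { codec => rubydebug } }    # shows parsed fields, or _grokparsefailure
```

If the line parses here but not when it arrives via Kafka, the message is being altered in transit (e.g. extra escaping), which matches the symptom described above.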

It just feels like there is an extra escape: the pattern runs well in the grok verification tool, but Logstash just doesn't work.
