I'm using the image gcr.io/google-containers/fluentd-elasticsearch (v2.3.1) to have fluentd collect logs and ship them to Elasticsearch. This is my fluentd configuration:
<source>
  type forward
  port {{.Values.fluentd.forward.port}}
  bind 0.0.0.0
</source>

<filter kube.**>
  @type parser
  @log_level debug
  key_name log
  reserve_data true
  remove_key_name_field true
  <parse>
    @type json
    time_key time
    time_type string
    time_format %iso8601
  </parse>
</filter>

<filter kube.**>
  @type record_transformer
  @log_level debug
  enable_ruby
  <record>
    kubernetes ${record["kubernetes"]["cluster_name"] = "{{.Values.clusterName}}"; record["kubernetes"] }
    logtrail {"host": "${record['kubernetes']['pod_name']}", "program":"${record['kubernetes']['container_name']}"}
  </record>
</filter>

<filter kube.**>
  @type concat
  key log
  stream_identity_key kubernetes["docker_id"]
  multiline_end_regexp /\n$/
  separator ""
</filter>
The parser filter above is supposed to parse the JSON string stored under the log key and merge its fields into the record (reserve_data true keeps the other fields, and remove_key_name_field true drops the original log key once parsing succeeds). But the JSON is not getting parsed at all. Below is the record I get after fluentd applies the filters; the log value is still an escaped JSON string instead of being expanded into fields.
{"kubernetes":{"pod_name":"api-dummy-dummy-vcpqr","namespace_name":"dummy","pod_id":"dummy","labels":{"name":"api-dummy","pod-template-hash":"dummy","tier":"dummy"},"host":"dummy","container_name":"api-dummy","docker_id":"dummy","cluster_name":"dummy Dev"},"log":"{\"name\":\"dummy\",\"json\":false,\"hostname\":\"api-dummy-dummy-vcpqr\",\"pid\":24,\"component\":\"dummy\",\"level\":30,\"version\":\"1.0\",\"timestamp\":1539645856126}","stream":"stdout","logtrail":{"host":"api-dummy-dummy-vcpqr","program":"api-dummy"}}
I have spent more than three days trying to solve this. I even tried https://github.com/edsiper/fluent-plugin-docker, but it did not help: although that plugin did parse the JSON, the parsed log messages were then rejected by my Elasticsearch.
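For anyone who wants to reproduce this in isolation, the parser filter can be exercised with a minimal configuration along these lines (a sketch, not my real chart: the fixed port 24224 is fluentd's default forward port standing in for the templated value, and the stdout match block replaces the Elasticsearch output purely for debugging):

<source>
  # Same forward input as my real setup, with a fixed port for testing
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<filter kube.**>
  # The parser filter under question, unchanged
  @type parser
  key_name log
  reserve_data true
  remove_key_name_field true
  <parse>
    @type json
    time_key time
    time_type string
    time_format %iso8601
  </parse>
</filter>

<match kube.**>
  # Print every record instead of shipping it to Elasticsearch,
  # so the effect of the parser filter is visible directly
  @type stdout
</match>

With fluentd running on that config, a single test event can be injected with fluent-cat (bundled with fluentd), for example echo '{"log":"{\"level\":30}"}' | fluent-cat kube.test, and the stdout output shows whether the log JSON gets expanded.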