The Fluentd Docker container comes up without a single error message, which makes this hard to debug.
Running curl http://elasticsearch:9200/_cat/indices from inside the Fluentd container lists the existing indices, but the fluentd index is not among them.
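For reference, this is the check, run from inside the Fluentd container (the hostname elasticsearch assumes both containers share a Docker network):

# List all indices with headers; no fluentd index appears.
# (Note: with logstash_format true the plugin would create
# logstash-YYYY.MM.DD indices rather than the configured index_name,
# but none of those show up either.)
curl 'http://elasticsearch:9200/_cat/indices?v'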
docker logs 7b
2018-06-29 13:56:41 +0000 [info]: reading config file path="/fluentd/etc/fluent.conf"
2018-06-29 13:56:41 +0000 [info]: starting fluentd-0.12.19
2018-06-29 13:56:41 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.4.0'
2018-06-29 13:56:41 +0000 [info]: gem 'fluent-plugin-rename-key' version '0.1.3'
2018-06-29 13:56:41 +0000 [info]: gem 'fluentd' version '0.12.19'
2018-06-29 13:56:41 +0000 [info]: gem 'fluentd' version '0.10.61'
2018-06-29 13:56:41 +0000 [info]: adding filter pattern="**" type="record_transformer"
2018-06-29 13:56:41 +0000 [info]: adding match pattern="docker.*" type="rename_key"
2018-06-29 13:56:41 +0000 [info]: Added rename key rule: rename_rule1 {:key_regexp=>/^log$/, :new_key=>"message"}
2018-06-29 13:56:41 +0000 [info]: adding match pattern="**" type="elasticsearch"
2018-06-29 13:56:41 +0000 [info]: adding source type="forward"
2018-06-29 13:56:41 +0000 [info]: adding source type="monitor_agent"
2018-06-29 13:56:41 +0000 [info]: using configuration file: <ROOT>
<source>
  @type forward
</source>
<source>
  @type monitor_agent
  bind 0.0.0.0
  port 24220
</source>
<filter **>
  type record_transformer
  <record>
    node /
    role app
    environment dev
    tenant xxx
    tag ${tag}
  </record>
</filter>
<match docker.*>
  type rename_key
  rename_rule1 ^log$ message
  append_tag message
</match>
<match **>
  type elasticsearch
  host elasticsearch
  port 9200
  index_name fluentd
  type_name fluentd
  include_tag_key true
  logstash_format true
</match>
</ROOT>
2018-06-29 13:56:41 +0000 [info]: listening fluent socket on 0.0.0.0:24224
...
2018-06-29 14:16:38 +0000 [info]: listening fluent socket on 0.0.0.0:24224
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=49
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=50
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=51
... many repeats
2018-07-01 06:21:52 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 08:39:07 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
2018-07-01 06:21:52 +0000 [warn]: suppressed same stacktrace
2018-07-01 08:39:07 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 13:02:17 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
2018-07-01 08:39:07 +0000 [warn]: suppressed same stacktrace
2018-07-01 13:02:17 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 21:04:48 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
2018-07-01 13:02:17 +0000 [warn]: suppressed same stacktrace
2018-07-01 21:04:48 +0000 [warn]: failed to flush the buffer. error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
2018-07-01 21:04:48 +0000 [warn]: retry count exceededs limit.
2018-07-01 21:04:48 +0000 [warn]: suppressed same stacktrace
2018-07-01 21:04:48 +0000 [error]: throwing away old logs.
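The "incoming chunk is broken" warnings mean the forward input received data it could not decode, so one suspect is whatever is connecting from 172.18.42.1. To isolate that, I would hand-feed a known-good event into the forward input and see whether it reaches Elasticsearch (a minimal sketch, assuming fluent-cat from the fluentd gem is available inside the container; docker.test is an arbitrary tag chosen so the event matches the docker.* rules):

# Send one well-formed JSON event to the forward input on localhost:24224
echo '{"log":"hello from fluent-cat"}' | fluent-cat docker.test

If that event shows up in Elasticsearch, the pipeline itself works and the broken chunks come from the client at 172.18.42.1.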
I can successfully insert documents into a test index in Elasticsearch with curl from the same container, so the connection to Elasticsearch itself works. How do I troubleshoot where Fluentd fails?
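For completeness: since monitor_agent is already configured on port 24220, its metrics endpoint can also be queried for the state of the elasticsearch output's buffer (this is the standard monitor_agent API; the retry counts there match the retries in the log above):

# Query monitor_agent for per-plugin metrics; the elasticsearch output's
# retry_count and buffer_queue_length show how many chunks are stuck
curl http://localhost:24220/api/plugins.json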