Fluentd error: “buffer space has too many data”
I'm using Fluentd in my Kubernetes cluster to collect logs from the pods and send them to Elasticsearch. Every day or two, Fluentd hits this error:

[warn]: #0 emit transaction failed: error_class=Fluent::Plugin::Buffer::BufferOverflowError error="buffer space has too many data" location="/fluentd/vendor/bundle/ruby/2.6.0/gems/fluentd-1.7.4/lib/fluent/plugin/buffer.rb:265:in `write'"

Fluentd then stops sending logs entirely until I restart the Fluentd pod.

How can I avoid this error?

Do I need to change something in my configuration?

<match filter.Logs.**.System**>
  @type elasticsearch
  host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
  port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
  scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME']}"
  user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
  password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"

  logstash_format true
  logstash_prefix system
  type_name systemlog
  time_key_format %Y-%m-%dT%H:%M:%S.%NZ
  time_key time
  log_es_400_reason true
  <buffer>
    flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
    flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
    chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '8M'}"
    queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
    retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
    retry_forever true
  </buffer>
</match>
Giselegisella answered 20/2, 2020 at 9:16 Comment(1)
Have you done the steps at docs.fluentd.org/installation/before-install? – Nationalism

The default buffer type is memory; see https://github.com/uken/fluent-plugin-elasticsearch/blob/master/lib/fluent/plugin/out_elasticsearch.rb#L63

This buffer type has two disadvantages:

- If the pod or its containers are restarted, any logs still in the buffer are lost.
- If all the RAM allocated to Fluentd is consumed, no more logs are sent.

Try file-based buffering instead, with a configuration like this:

<buffer>
  @type file
  path /fluentd/log/elastic-buffer
  flush_thread_count 8
  flush_interval 1s
  chunk_limit_size 32M
  queue_limit_length 4
  flush_mode interval
  retry_max_interval 30
  retry_forever true
</buffer>
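If the buffer still fills up faster than Elasticsearch drains it, you can also cap the buffer's total size and tell Fluentd what to do on overflow instead of raising BufferOverflowError. This is a minimal sketch using the standard Fluentd buffer parameters total_limit_size and overflow_action; the values shown are illustrative assumptions, not tuned recommendations:

<buffer>
  @type file
  path /fluentd/log/elastic-buffer
  # Cap the combined size of all buffered chunks (assumed value).
  total_limit_size 2G
  # On overflow, discard the oldest chunk instead of raising
  # BufferOverflowError; the other valid values are throw_exception
  # (the default) and block.
  overflow_action drop_oldest_chunk
  flush_thread_count 8
  flush_interval 1s
  chunk_limit_size 32M
</buffer>

Two things to keep in mind: retry_forever true means failed chunks are never discarded, so a prolonged Elasticsearch outage will keep growing the queue until it overflows; and in Kubernetes the buffer path should sit on a volume that survives container restarts (e.g. a hostPath mount), otherwise the file buffer loses its restart-survival advantage.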
Lipography answered 21/2, 2020 at 21:14 Comment(0)
