Fluentd Elasticsearch target index
I'm using Fluentd to transfer the data into Elasticsearch.

td-agent.conf

## ElasticSearch
<match es.**>
  type elasticsearch
  target_index_key @target_index  
  logstash_format true
  flush_interval 5s
</match>

Elasticsearch index:

"logstash-2016.02.24" : {
    "aliases" : { },
    "mappings" : {
      "fluentd" : {
        "dynamic" : "strict",
        "properties" : {
          "@timestamp" : {
            "type" : "date",
            "format" : "strict_date_optional_time||epoch_millis"
          },
          "dummy" : {
            "type" : "string"
          }
        }
      }
    },

Transmit JSON data:

$ curl -X POST -d 'json={"@target_index": "logstash-2016.02.24","dummy":"test"}' http://localhost:8888/es.test

It should write the data to the given index, but instead it creates a new index (logstash-2016.02.25) and writes the data there. I want the data written to the index I specified.
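Per the fluent-plugin-elasticsearch README, when the record contains the key named by target_index_key, its value is used as the target index and that key is removed from the record before writing. A minimal sketch of the intended setup (the HTTP input and the host/port values are assumptions, not taken from the original config):

```
<source>
  @type http
  port 8888
</source>

## ElasticSearch
<match es.**>
  type elasticsearch
  host localhost
  port 9200
  target_index_key @target_index
  logstash_format true  # fallback to logstash-%Y.%m.%d when @target_index is absent
  flush_interval 5s
</match>
```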

Here is the fluent-plugin-elasticsearch GitHub repository: https://github.com/uken/fluent-plugin-elasticsearch

Please correct me if I'm missing something.

Kalie answered 25/2, 2016 at 12:51 Comment(2)
I think you have a typo: type elasticsearch should read @type elasticsearch.Stipe
I'm using td-agent v2 on CentOS; it works properly without the @.Kalie
This may be old, but I ran into the same problem and solved it with:

logstash_format false
index_name fluentd

This creates only fluentd as the index. From the official Fluentd documentation, https://docs.fluentd.org/output/elasticsearch:

logstash_format (optional): With this option set true, Fluentd uses the conventional index name format logstash-%Y.%m.%d (default: false). This option supersedes the index_name option.
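As a side note on the naming convention quoted above: the conventional index name embeds the event date, and the plugin computes it in UTC by default (via its utc_index option, which defaults to true), which can make the created index date differ from your local date. The convention itself can be sketched in shell (this mirrors the documented format, not the plugin's actual code):

```shell
# Conventional logstash index name for "now", computed in UTC,
# matching the documented logstash-%Y.%m.%d format.
date -u +"logstash-%Y.%m.%d"
```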

For cleanup of old indices, please consider using Curator: https://github.com/elastic/curator

I hope it helps someone.

Cranwell answered 16/6, 2019 at 21:44 Comment(2)
Thanks, this little sliver of information was exactly what I needed to get unblocked.Fredfreda
Hi, I am still running into the same problem. When I change index_name, the new indices don't appear in Kibana, but when I set it back to the default logstash index they do appear. Any suggestions are welcome!Migratory
Try this. It's due to logstash_format true; please enter your index name in the index_name field below (the default value is fluentd):

<match es.**>
  @type elasticsearch
  host localhost
  port 9200
  index_name <.....your_index_name_here.....>
  type_name fluentd
  flush_interval 5s
</match>

After running this, please check whether the index was created by loading the URL below in your browser:

http://localhost:9200/_plugin/head/

Good luck!

Doura answered 26/2, 2016 at 22:20 Comment(4)
Thanks for the answer. I tried changing logstash_format to false, but I'm getting the same issue; now it creates a duplicate index type.Kalie
Did you declare index_name?Doura
Yes, in my case it's test_index.Kalie
Does it always take index_name in lowercase? I tried giving it in uppercase, but it converted the index_name to lowercase.Siobhansion
It's because you set logstash_format true, so you have to set the logstash_prefix.

Apache httpd example:

  logstash_prefix fluentd.httpd # defaults to "logstash"
  logstash_prefix_separator _   # defaults to "-"
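Putting these options together, a full match block might look like the sketch below (host/port are assumptions). With these settings the plugin would create indices named like fluentd.httpd_2019.03.24, i.e. prefix, separator, then the conventional date suffix:

```
<match es.**>
  @type elasticsearch
  host localhost
  port 9200
  logstash_format true
  logstash_prefix fluentd.httpd
  logstash_prefix_separator _
  flush_interval 5s
</match>
```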
Vulcanology answered 24/3, 2019 at 11:10 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.