Tags index with filebeat and logstash
I use logstash-forwarder and Logstash to create a dynamic, tag-based index with this configuration:

/etc/logstash/conf.d/10-output.conf

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "logstash-%{tags}-%{+YYYY.MM.dd}"
  }
}

/etc/logstash-forwarder.conf

"files": [
    {
      "paths": [
        "/var/log/httpd/ssl_access_log",
        "/var/log/httpd/ssl_error_log"
       ],
      "fields": { "type": "apache", "tags": "mytag" }
    },

The associated filebeat configuration is:

/etc/filebeat/filebeat.yml

filebeat:
  prospectors:
    -
     paths:
       - /var/log/httpd/access_log
     input_type: log
     document_type: apache
     fields:
       tags: mytag

In Kibana, instead of mytag I see beats_input_codec_plain_applied on all of my indices.

Proximity answered 17/3, 2016 at 9:0 Comment(0)
I resolved it by inserting a filter in Logstash:

filter {
    if "beats_input_codec_plain_applied" in [tags] {
        mutate {
            remove_tag => ["beats_input_codec_plain_applied"]
        }
    }
}
Proximity answered 14/6, 2016 at 15:10 Comment(0)
I can see two problems mentioned in this topic. Let me summarize, for my own benefit and hopefully for other visitors struggling with this problem too.

  1. The format for adding tag(s) in the filebeat prospector configuration (per-prospector tags have been available since 5.0, or 1.2.3, as a-j noticed)

bad:

 fields:
       tags: mytag

good:

 fields:
       tags: ["mytag"]

However, there's a more important issue:

  2. Tags get concatenated. We want tags to be an array, but if we ship the newly added tags to Logstash, we'll see them as a concatenated string in ES.

If you are adding only one tag, the workaround (as per hellb0y77) would be to remove the automatic tag that filebeat adds, in logstash (central server side):

filter {
    if "beats_input_codec_plain_applied" in [tags] {
        mutate {
            remove_tag => ["beats_input_codec_plain_applied"]
        }
    }
}

This would not work if one wanted to add multiple tags in filebeat.

One would have to make Logstash split the concatenated string and add each item to tags. Perhaps it would be better, in this case, to put the tags on the filebeat end into some custom field rather than the "tags" field, and extract them from that custom field on the Logstash side.
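A sketch of that custom-field approach in a Logstash filter (the field name [fields][mytags] and the comma separator are assumptions for illustration, not part of the original configurations):

```
filter {
  # assumes filebeat shipped the tags as a comma-separated string
  # in a custom field, e.g. fields: { "mytags": "tag1,tag2" }
  if [fields][mytags] {
    mutate {
      split => { "[fields][mytags]" => "," }     # string -> array
    }
    mutate {
      merge => { "tags" => "[fields][mytags]" }  # append each item to tags
      remove_field => ["[fields][mytags]"]
    }
  }
}
```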

Anyway, there seems to be no way to make it work by changing filebeat configuration. The only way is by doing some parsing on receiving logstash filter chain. See also https://github.com/elastic/filebeat/issues/220

If you can remove Logstash, then this could also be a solution for you: when sending logs from filebeat directly to Elasticsearch, the tags appear in ES as expected.
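For reference, a minimal sketch of a filebeat output section shipping straight to Elasticsearch (the host and port are assumptions; adjust to your cluster):

```
output:
  elasticsearch:
    hosts: ["localhost:9200"]
    # the default index pattern is filebeat-YYYY.MM.dd
```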

Gapin answered 30/8, 2016 at 14:30 Comment(1)
This works, but without using "fields". See the documentation: elastic.co/guide/en/beats/filebeat/5.0/… — Selfoperating
By default in Filebeat, the fields you define are added to the event under a key named fields. To change this behavior and add the fields to the root of the event, you must set fields_under_root: true.
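To illustrate the difference, a sketch of the resulting event shape (the message value is made up):

```
# default (fields_under_root: false) -- custom fields end up nested:
{ "message": "GET / 200", "fields": { "tags": ["mytag"] } }

# with fields_under_root: true -- they land at the event root:
{ "message": "GET / 200", "tags": ["mytag"] }
```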

Additionally, in Filebeat 5.X, tags is a configuration option under the prospector, and this list of tags merges with the global tags configuration. This pull request contains several examples using fields, fields_under_root, and tags for Beats 5.X.

Here is how you should change your configuration for Filebeat 1.X:

filebeat:
  prospectors:
    - paths:
        - /var/log/httpd/access_log
      input_type: log
      document_type: apache
      fields:
        tags: ["mytag"]
      fields_under_root: true
Curvy answered 26/5, 2016 at 20:31 Comment(1)
Thanks, now my tag is present in the index name, but it comes out as logstash-mytagbeats_input_codec_plain_applied-2016.05.27; I also tried using { instead of [. — Proximity
[xxxx@yyyy init.d]# cat /etc/filebeat/filebeat.yml

### Filebeat configuration managed by Puppet (Ruby 1.8 version) ###

filebeat:
  spool_size: 1024
  publish_async: false
  idle_timeout: 10s
  registry_file: .filebeat
  config_dir: /etc/filebeat/conf.d

output:
  logstash:
    hosts:
      - 1.1.1.1:5033

shipper:
  tags:
    - foo-beta

The above way of specifying a tag works, but in Logstash you will still see the default "beats_input_codec_plain_applied" tag. Not sure how to get rid of it.

Postprandial answered 26/5, 2016 at 18:20 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.