How to define separate indexes for different logs in Filebeat/ELK?
I am wondering how to create separate indexes for different logs fetched into Logstash (and later passed on to Elasticsearch), so that in Kibana I can define two index patterns for them and discover them.

In my case, I have a few client servers (each with Filebeat installed) and a centralized log server (ELK). Each client server produces different kinds of logs, e.g. redis.log, Python logs, MongoDB logs, which I would like to sort into different indexes and store in Elasticsearch.

Each client server also serves a different purpose, e.g. databases, UIs, applications. Hence I would also like to give them different index names (by changing the output index in filebeat.yml?).

Sivas answered 8/8, 2016 at 13:35 Comment(0)
7

In your Filebeat configuration you can use document_type to identify the different logs that you have. Then inside of Logstash you can set the value of the type field to control the destination index.

However before you separate your logs into different indices you should consider leaving them in a single index and using either type or some custom field to distinguish between log types. See index vs type.

Example Filebeat prospector config:

filebeat:
  prospectors:
    - paths:
        - /var/log/redis/*.log
      document_type: redis

    - paths:
        - /var/log/python/*.log
      document_type: python

    - paths:
        - /var/log/mongodb/*.log
      document_type: mongodb

Example Logstash config:

input {
  beats {
    port => 5044
  }
}

output {
  # Customize elasticsearch output for Filebeat.
  if [@metadata][beat] == "filebeat" {
    elasticsearch {
      hosts => "localhost:9200"
      manage_template => false
      # Use the Filebeat document_type value for the Elasticsearch index name.
      index => "%{[@metadata][type]}-%{+YYYY.MM.dd}"
      document_type => "log"
    }
  }
}
Franchot answered 8/8, 2016 at 21:58 Comment(3)
Are you sure that document_type in Filebeat will create a [@metadata][type] field in the Logstash event and not a [type] field? I think it should read index => "%{type}-%{+YYYY.MM.dd}" instead. – Runyan
The document_type value is used for both [@metadata][type] and [type], so either field can be used for the index. – Franchot
document_type is not working in ES 6+, but works well with ES 4, ES 5. I guess that's my problem. – Sexennial
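
As the last comment notes, document_type was removed in Filebeat 6.0. On newer versions, the same per-log routing can also be sketched directly in filebeat.yml with a custom field and conditional output indices, no Logstash in between (a minimal sketch; the paths, field names, and index names here are illustrative, not from the original answer):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/redis/*.log
    # Custom field used below to pick the destination index.
    fields:
      log_type: redis

output.elasticsearch:
  hosts: ["localhost:9200"]
  indices:
    # Route events whose custom field matches into a per-log index.
    - index: "redis-%{+yyyy.MM.dd}"
      when.equals:
        fields.log_type: "redis"
```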
2

filebeat.yml

filebeat.prospectors:

- input_type: log
  paths:
    - /var/log/*.log
  fields: {log_type: toolsmessage}

- input_type: log
  paths:
    - /etc/httpd/logs/ssl_access_*
  fields: {log_type: toolsaccess}

And in logstash.conf:

input {
  beats {
    port => "5043"
  }
}

filter {
  if ([fields][log_type] == "toolsmessage") {
    mutate {
      replace => {
        "[type]" => "toolsmessage"
      }
    }
  }
  else if ([fields][log_type] == "toolsaccess") {
    mutate {
      replace => {
        "[type]" => "toolsaccess"
      }
    }
  }
}

output {
  elasticsearch {
    hosts => ["10.111.119.211:9200"]
    index => "%{type}_index"
  }
 #stdout { codec => rubydebug }
}
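
As a side note, the mutate/replace filter above can be avoided entirely: Logstash's sprintf syntax can reference the nested Filebeat field directly in the index name (a minimal sketch, assuming the same fields.log_type values as above):

```
output {
  elasticsearch {
    hosts => ["10.111.119.211:9200"]
    # Interpolate the custom Filebeat field straight into the index name,
    # yielding e.g. "toolsmessage_index" or "toolsaccess_index".
    index => "%{[fields][log_type]}_index"
  }
}
```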
Sexennial answered 4/1, 2019 at 6:28 Comment(1)
discuss.elastic.co/t/… may work too, but I haven't tested this. – Sexennial
1

In Logstash you can define multiple input, filter, or output plugins and distinguish events with the help of the type field:

input {
    file {
            type => "redis"
            path => "/home/redis/log"
    }
    file {
            type => "python"
            path => "/home/python/log"
    }
} 
filter {
    if [type] == "redis" {
            # processing .......
    }
    if [type] == "python" {
            # processing .......
    }
}
output {
    if [type] == "redis" {
            # output to the redis index in Elasticsearch
            elasticsearch {
                    hosts => ["localhost:9200"]
                    index => "redis"
            }
    }
    if [type] == "python" {
            # output to the python index in Elasticsearch
            elasticsearch {
                    hosts => ["localhost:9200"]
                    index => "python"
            }
    }
}
Po answered 8/8, 2016 at 18:57 Comment(1)
This actually works, but I guess discuss.elastic.co/t/… contains more details. – Sexennial
0

I have read all of the above and figured out my own way:

input {
    stdin {
    }
    jdbc {
      type => "jdbc"
      ....
    }
    http_poller {
type => "api"
      ....
    }

}
filter {
....
}
output {
    elasticsearch {
        hosts => ["jlkjkljljkljk"]
        index => "%{type}_index"
        document_id => "%{id}"
    }
    stdout {
        codec => json_lines
    }
}
Menken answered 9/7, 2019 at 3:23 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.