Sending JSON-format logs to Kibana using Filebeat, Logstash and Elasticsearch?
I have logs like this:

{"logId":"57aaf6c8d32fb","clientIp":"127.0.0.1","time":"03:11:29 pm","uniqueSubId":"57aaf6c98963b","channelName":"JSPC","apiVersion":"v1","modulName":null,"actionName":"apiRequest","typeOfError":"","statusCode":"","message":"In Auth","exception":"In Auth","logType":"Info"}

{"logId":"57aaf6c8d32fb","clientIp":"127.0.0.1","time":"03:11:29 pm","uniqueSubId":"57aaf6c987206","channelName":"JSPC","apiVersion":"v2","modulName":null,"actionName":"performV2","typeOfError":"","statusCode":"","message":"in inbox api v2 5","exception":"in inbox api v2 5","logType":"Info"}

I want to push them to Kibana. I am using Filebeat to send data to Logstash, with the following configuration:

filebeat.yml

### Logstash as output
logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Number of workers per Logstash host.
  #worker: 1

Now, using the following configuration, I want to change the codec type:

input {
  beats {
    port => 5000
    tags => "beats"
    codec => "json_lines"
    #ssl => true
    #ssl_certificate => "/opt/filebeats/logs.example.com.crt"
    #ssl_key => "/opt/filebeats/logs.example.com.key"
  }

  syslog {
    type => "syslog"
    port => "5514"
  }
}

But I still get the logs as an escaped JSON string inside the message field:

"message": "{\"logId\":\"57aaf6c96224b\",\"clientIp\":\"127.0.0.1\",\"time\":\"03:11:29 pm\",\"channelName\":\"JSPC\",\"apiVersion\":null,\"modulName\":null,\"actionName\":\"404\",\"typeOfError\":\"EXCEPTION\",\"statusCode\":0,\"message\":\"404 page encountered http:\/\/localjs.com\/uploads\/NonScreenedImages\/profilePic120\/16\/29\/15997002iicee52ad041fed55e952d4e4e163d5972ii4c41f8845105429abbd11cc184d0e330.jpeg\",\"logType\":\"Error\"}",

Please help me solve this.

Ignoramus answered 10/8, 2016 at 9:50 Comment(0)

To parse JSON log lines in Logstash that were sent from Filebeat, you need to use a json filter instead of a codec. This is because Filebeat sends its data as JSON, and the contents of your log line end up inside the message field.

Logstash config:

input {
  beats {
    port => 5044
  }
}

filter {
  if [tags][json] {
    json {
      source => "message"
    }
  }
}

output {
  stdout { codec => rubydebug { metadata => true } }
}

Filebeat config:

filebeat:
  prospectors:
    - paths:
        - my_json.log
      fields_under_root: true
      fields:
        tags: ['json']
output:
  logstash:
    hosts: ['localhost:5044']

In the Filebeat config, I added a "json" tag to the event so that the json filter can be conditionally applied to the data.
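For reference, once the json filter fires, the first sample log line should come out of the rubydebug output with its JSON keys promoted to top-level fields, roughly like this (a sketch only; the Beats metadata, field order, and @timestamp will differ on your machine, and the message string is truncated here):

{
        "message" => "{\"logId\":\"57aaf6c8d32fb\",\"clientIp\":\"127.0.0.1\", ...}",
          "logId" => "57aaf6c8d32fb",
       "clientIp" => "127.0.0.1",
           "time" => "03:11:29 pm",
    "channelName" => "JSPC",
     "actionName" => "apiRequest",
        "logType" => "Info",
           "tags" => ["json"],
     "@timestamp" => 2016-08-10T09:50:00.000Z
}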

Filebeat 5.0 is able to parse the JSON without the use of Logstash, but it is still an alpha release at the moment. This blog post titled Structured logging with Filebeat demonstrates how to parse JSON with Filebeat 5.0.
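A minimal sketch of what that 5.0-style prospector config looks like, assuming a placeholder log path (see the linked post and the 5.x docs for the full set of json.* options):

filebeat.prospectors:
- input_type: log
  paths: ["/var/log/myapp/*.log"]   # placeholder path
  json.keys_under_root: true        # put decoded JSON keys at the top level
  json.add_error_key: true          # add an error field if JSON decoding fails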

Guileful answered 10/8, 2016 at 13:45 Comment(2)
Making the changes you mentioned in the filebeat.yml file, the following Logstash configuration works: input { beats { port => 5044 } } filter { if [tags][json] { json { source => "message" } } } output { elasticsearch { hosts => "localhost:9200" manage_template => false index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}" document_type => "%{[@metadata][type]}" } }, not your configuration. Thanks for the help.Ignoramus
Hi @A J, #53045758 is my question. I am having the same issue. My Filebeat is above 5.0 and I cannot find a resolution. I have tried your solution too.Girgenti

From Filebeat 5.x you can do it without using Logstash.

Filebeat config:

filebeat.prospectors:
- input_type: log
  paths: ["YOUR_LOG_FILE_DIR/*"]
  json.message_key: logId
  json.keys_under_root: true

output.elasticsearch:
  hosts: ["<HOSTNAME:PORT>"]
  template.name: filebeat
  template.path: filebeat.template.json

Filebeat is more lightweight than Logstash. Also, even if you need to index into Elasticsearch version 2.x, you can use this feature of Filebeat 5.x. A real example can be found here.

Marji answered 27/12, 2016 at 13:10 Comment(2)
what are the tradeoffs of going direct to Elasticsearch over Logstash?Cacie
@Cacie you lose the ability to control access to Elasticsearch centrally. If you have lots of Filebeat nodes that all hold the auth keys for Elasticsearch, then when your Elasticsearch auth changes you have to update all of them, versus just the one Logstash.Ester

I've scoured the internet for this exact problem and tried various suggestions, including those above. None helped, so I did it the old-fashioned way: I went to the Elasticsearch documentation on Filebeat configuration, and this was all that was required (no need for a filter config in Logstash):

Filebeat config:

filebeat.prospectors:
- input_type: log
  document_type: #whatever your type is, this is optional
  json.keys_under_root: true
  paths:
    - #your path goes here

json.keys_under_root copies the decoded JSON keys to the top level of the output document (instead of nesting them under a json key).
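As an illustration (hypothetical events, not from my logs), here is the difference the option makes:

# json.keys_under_root: false (the default): decoded fields grouped under "json"
{"json": {"logId": "57aaf6c8d32fb", "logType": "Info"}, "source": "/path/to/my.log"}

# json.keys_under_root: true: decoded fields sit at the top level
{"logId": "57aaf6c8d32fb", "logType": "Info", "source": "/path/to/my.log"}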

My Filebeat version is 5.2.2.

Penoyer answered 8/3, 2017 at 14:25 Comment(1)
You are right. But I am using a previous version of Filebeat, and in that case only @aj's answer works.Ignoramus
