I've been asked to consolidate our log4j log files (NOT using socket calls for now) into a Logstash JSON file that I'll then feed to Elasticsearch. Our code uses the RollingFileAppender. Here's an example log entry:
2016-04-22 16:43:25,172 ERROR :SomeUser : 2 [com.mycompany.SomeClass] AttributeSchema 'Customer |Customer |Individual|Individual|Quarter|Date' : 17.203 The Log Message.
Here's the ConversionPattern value in our log4j XML config file:
<param name="ConversionPattern" value="%d{ISO8601} %p %x %X{username}:%t [%c] %m %n" />
Can someone please help me write a Logstash grok filter that will parse the line? I have the following so far:

filter {
  if [type] == "log4j" {
    grok {
      match => ["message", "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:loglevel} %{GREEDYDATA:logmessage}"]
    }
    date {
      match => ["logdate", "yyyy-MM-dd HH:mm:ss,SSS", "ISO8601"]
    }
  }
}
But of course that just captures everything after the priority as the message. I want to further extract AT LEAST the following fields (as defined in the log4j PatternLayout); my rough attempt at a fuller pattern follows the list.
- User (%X{username})
- Class/category name ([%c])
- Thread (%t)
- Nested Diagnostic Context (%x)
- The message itself (%m)
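Here's my rough attempt, matched against the sample line rather than the pattern spec, since the two don't quite line up (the NDC looks empty and the colon placement differs). The field boundaries are guesses, and I've left the NDC out because I can't tell where it lands:

filter {
  if [type] == "log4j" {
    grok {
      # Guessed layout from the sample line:
      # timestamp LEVEL :username : thread [class] message
      match => ["message", "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:loglevel} :%{DATA:username} : %{DATA:thread} \[%{JAVACLASS:class}\] %{GREEDYDATA:logmessage}"]
    }
    date {
      match => ["logdate", "yyyy-MM-dd HH:mm:ss,SSS", "ISO8601"]
    }
  }
}

This seems to work on the sample entry, but I don't know whether usernames or thread names can contain spaces or colons, which would break the DATA boundaries. If someone can confirm where the NDC actually appears in the output, I'd like to capture that too.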