How do I wrap some of the fields of the logstash logback encoder into an inner field?
I want some of the fields generated by the logstash logback encoder to be wrapped within another field. Can this be done via the XML configuration inside logback-spring.xml, or do I have to implement some class and then refer to it in the configuration?

I tried reading about implementing the Factory and Decorator methods but it didn't seem to get me anywhere.

<appender name="FILE"
    class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/Users/name/dev/test.log
    </file>
    <rollingPolicy
        class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <!-- daily rollover -->
        <fileNamePattern>/Users/name/dev/log/test.%d{yyyy-MM-dd}.log
        </fileNamePattern>
        <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
        <customFields>{"component":"webserver","datacenter":"ord"}
        </customFields>

    </encoder>
</appender>

The current JSON I get when something logs is:

{
  "@timestamp": "2019-07-18T18:12:49.431-07:00",
  "@version": "1",
  "message": "Application shutdown requested.",
  "logger_name":     "org.springframework.boot.admin.SpringApplicationAdminMXBeanRegistrar$SpringApplicationAdmin",
  "thread_name": "RMI TCP Connection(2)-127.0.0.1",
  "level": "INFO",
  "level_value": 20000,
  "component": "webserver",
  "datacenter": "ord"
}

What I want it to be is:

{
  "@timestamp": "2019-07-18T18:12:49.431-07:00",
  "@version": "1",
  "component": "webserver",
  "datacenter": "ord",
  "data": {
    "message": "Application shutdown requested.",
    "logger_name": "org.springframework.boot.admin.SpringApplicationAdminMXBeanRegistrar$SpringApplicationAdmin",
    "thread_name": "RMI TCP Connection(2)-127.0.0.1",
    "level": "INFO",
    "level_value": 20000
  }
}

As you can see, a select set of fields is wrapped within 'data' instead of sitting at the outer level.

Labe asked 19/7, 2019 at 1:37 Comment(1)
A doubt: if you are using the LogstashEncoder inside a RollingFileAppender, where do you let Logstash know that these logs have to be indexed into ELK? I have learnt that to index the logs into ELK I must use the LogstashTcpSocketAppender, where I can specify the Logstash destination ip/port. So in your approach above, how would Logstash know it has to index these logs? Through a separate <some-configuration.conf> file? – Buddhi
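
The TCP-based setup that the comment describes uses net.logstash.logback.appender.LogstashTcpSocketAppender. A minimal sketch, assuming a Logstash TCP input is listening at a placeholder host and port (not taken from the original post):

<appender name="LOGSTASH"
    class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- Placeholder destination: point this at your Logstash TCP input -->
    <destination>logstash.example.com:5000</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>

With the file-based appender from the question, shipping the log file to Logstash/Elasticsearch is handled outside of logback, so logback itself does not need to know anything about indexing.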
Instead of using net.logstash.logback.encoder.LogstashEncoder, you'll need to use a net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder and configure its set of providers. Use the nestedField provider to create the nested data field.

Configuring LoggingEventCompositeJsonEncoder is more complex than configuring LogstashEncoder, because LoggingEventCompositeJsonEncoder starts with no providers configured, and you have to build it up with all the providers you want. LogstashEncoder is just a subclass of LoggingEventCompositeJsonEncoder with a pre-configured set of providers.

<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
  <providers>
    <timestamp/>
    <version/>
    <pattern>
      <pattern>
        {
          "component": "webserver",
          "datacenter":"ord"
        }
      </pattern>
    </pattern>
    <nestedField>
      <fieldName>data</fieldName>
      <providers>
        <message/>
        <loggerName/>
        <threadName/>
        <logLevel/>
        <callerData/>
        <stackTrace/>
        <context/>
        <mdc/>
        <tags/>
        <logstashMarkers/>
        <arguments/>
      </providers>
    </nestedField>
  </providers>
</encoder>

Be sure to check out the provider configuration documentation for the various configuration options for each provider.
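
For instance, provider options are nested directly inside the provider's element, wherever that provider appears (at the top level or under nestedField). A minimal sketch with illustrative values, not required settings:

<providers>
  <!-- Write @timestamp in UTC rather than the JVM default time zone -->
  <timestamp>
    <timeZone>UTC</timeZone>
  </timestamp>
  <!-- Abbreviate long logger names to roughly 35 characters -->
  <loggerName>
    <shortenedLoggerNameLength>35</shortenedLoggerNameLength>
  </loggerName>
</providers>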

Edenedens answered 20/7, 2019 at 2:32 Comment(5)
Wow, thanks @Phil. This is exactly what I was looking for. – Labe
Getting lower level with this CompositeJsonEncoder: earlier I was able to rename the fields with <fieldNames><timestamp>custom_name</timestamp></fieldNames>. Is this feature gone now? – Labe
I guess the question is: does 'LoggingEventCompositeJsonEncoder' support 'Customizing Standard Field Names'? – Labe
Yes, but the field names are configured under the provider that emits the field, similar to how the nestedField's fieldName is configured in the example; see the sketch after these comments. – Edenedens
Thanks for all of the help, @Phil Clay. I was able to achieve what I wanted with your help. I appreciate everything. – Labe
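
Following up on the field-name discussion above, a minimal sketch of renaming standard fields per provider with LoggingEventCompositeJsonEncoder (the custom names here are made up for illustration):

<providers>
  <!-- Rename @timestamp to "time" -->
  <timestamp>
    <fieldName>time</fieldName>
  </timestamp>
  <!-- Rename level to "severity" -->
  <logLevel>
    <fieldName>severity</fieldName>
  </logLevel>
  <!-- Rename message to "msg" -->
  <message>
    <fieldName>msg</fieldName>
  </message>
</providers>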