OpenShift Aggregated Logging: Parse Apache access log
When using OpenShift Aggregated Logging, logs are nicely fed into Elasticsearch. However, the line as logged by Apache ends up in a single message field.

I'd like to create queries in Kibana where I can access the URL, the status code, and other fields individually. For that, the Apache access log line needs to be parsed into separate fields.

How can I do that?

This is an example entry as seen in Kibana:

{
  "_index": "42-steinbruchsteiner-staging.3af0bedd-eebc-11e6-af4b-005056a62fa6.2017.03.29",
  "_type": "fluentd",
  "_id": "AVsY3aSK190OXhxv4GIF",
  "_score": null,
  "_source": {
    "time": "2017-03-29T07:00:25.595959397Z",
    "docker_container_id": "9f4fa85a626d2f5197f0028c05e8e42271db7a4c674cc145204b67b6578f3378",
    "kubernetes_namespace_name": "42-steinbruchsteiner-staging",
    "kubernetes_pod_id": "56c61b65-0b0e-11e7-82e9-005056a62fa6",
    "kubernetes_pod_name": "php-app-3-weice",
    "kubernetes_container_name": "php-app",
    "kubernetes_labels_deployment": "php-app-3",
    "kubernetes_labels_deploymentconfig": "php-app",
    "kubernetes_labels_name": "php-app",
    "kubernetes_host": "itsrv1564.esrv.local",
    "kubernetes_namespace_id": "3af0bedd-eebc-11e6-af4b-005056a62fa6",
    "hostname": "itsrv1564.esrv.local",
    "message": "10.1.3.1 - - [29/Mar/2017:01:59:21 +0200] "GET /kwf/status/health HTTP/1.1" 200 2 "-" "Go-http-client/1.1"\n",
    "version": "1.3.0"
  },
  "fields": {
    "time": [
      1490770825595
    ]
  },
  "sort": [
    1490770825595
  ]
}
Diarthrosis answered 20/3, 2017 at 8:32 Comment(1)
"For that, the Apache access log line needs to be parsed into separate fields. How can I do that?" Is this your problem?Hyaluronidase

Disclaimer: I did not test this out in OpenShift, and I don't know which tech stack you are using for your microservice.

This is how I do it in a Spring Boot application (with Logback) deployed on Kubernetes.

1. Use the Logstash encoder for Logback (this writes logs in JSON format, which is more ELK-stack friendly)

I have a Gradle dependency to enable this:

compile "net.logstash.logback:logstash-logback-encoder:3.5"

Then configure LogstashEncoder as the encoder in the appender, in logback-spring.groovy/logback-spring.xml (or logback.xml).
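
For example, a minimal logback-spring.xml could look like the following (the appender name and root level here are just illustrative):

    <configuration>
        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
            <!-- LogstashEncoder ships with logstash-logback-encoder and emits one JSON object per log event -->
            <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
        </appender>
        <root level="INFO">
            <appender-ref ref="STDOUT"/>
        </root>
    </configuration>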

2. Have a filter or library write the access log

For 2, use one of the following:

A. The "net.rakugakibox.springbootext:spring-boot-ext-logback-access:1.6" library

(This is what I am using)

It logs the access events in a nice JSON format, as follows:

{  
   "@timestamp":"2017-03-29T09:43:09.536-05:00",
   "@version":1,
   "@message":"0:0:0:0:0:0:0:1 - - [2017-03-29T09:43:09.536-05:00] \"GET /orders/v1/items/42 HTTP/1.1\" 200 991",
   "@fields.method":"GET",
   "@fields.protocol":"HTTP/1.1",
   "@fields.status_code":200,
   "@fields.requested_url":"GET /orders/v1/items/42 HTTP/1.1",
   "@fields.requested_uri":"/orders/v1/items/42",
   "@fields.remote_host":"0:0:0:0:0:0:0:1",
   "@fields.HOSTNAME":"0:0:0:0:0:0:0:1",
   "@fields.content_length":991,
   "@fields.elapsed_time":48,
   "HOSTNAME":"ABCD"
}
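
To wire this up, add the library as a dependency:

    compile "net.rakugakibox.springbootext:spring-boot-ext-logback-access:1.6"

and provide a logback-access configuration. A minimal sketch, assuming the library picks the file up from the classpath as logback-access.xml (check the library's README for the exact lookup convention), using the access-event encoder that ships with logstash-logback-encoder:

    <configuration>
        <appender name="ACCESS" class="ch.qos.logback.core.ConsoleAppender">
            <!-- LogstashAccessEncoder is the access-log counterpart of LogstashEncoder -->
            <encoder class="net.logstash.logback.encoder.LogstashAccessEncoder"/>
        </appender>
        <appender-ref ref="ACCESS"/>
    </configuration>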

OR

B. Use Logback's Tee Filter (a registration sketch follows)
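
A minimal sketch of registering it in a Spring Boot app (untested; note that the FilterRegistrationBean package has moved between Boot versions, so adjust the import to your version):

    import ch.qos.logback.access.servlet.TeeFilter;
    import org.springframework.boot.web.servlet.FilterRegistrationBean;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class TeeFilterConfig {

        // TeeFilter duplicates the request/response streams so that
        // logback-access appenders can log payloads without consuming them.
        @Bean
        public FilterRegistrationBean teeFilter() {
            FilterRegistrationBean registration = new FilterRegistrationBean(new TeeFilter());
            registration.addUrlPatterns("/*");
            return registration;
        }
    }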

OR

C. Spring's CommonsRequestLoggingFilter (did not really test this out)

Add a bean definition:

    @Bean
    public CommonsRequestLoggingFilter requestLoggingFilter() {
        CommonsRequestLoggingFilter crlf = new CommonsRequestLoggingFilter();
        crlf.setIncludeClientInfo(true);  // log client address and session id
        crlf.setIncludeQueryString(true); // log the query string
        crlf.setIncludePayload(true);     // log the request body
        return crlf;
    }

Then set the log level of org.springframework.web.filter.CommonsRequestLoggingFilter to DEBUG. With Spring Boot this can be done in application.properties by adding:

logging.level.org.springframework.web.filter.CommonsRequestLoggingFilter=DEBUG
Garibold answered 29/3, 2017 at 18:28 Comment(2)
Does fluentd detect and parse JSON log messages automatically?Diarthrosis
It should (take this with a grain of salt, as I haven't tested it myself). I don't see why fluentd wouldn't respect the structured log.Garibold
