Structured logging to journald from within a docker container

What is the best way to write structured logs to journald from within a docker container?

For example, I have an application that writes using sd_journal_send. Rather than change the app, I have tried passing through:

-v /var/log/systemd/journal:/var/log/systemd/journal

It works on my Ubuntu 16.04 desktop, but not on the CoreOS instances where the app runs (which use the Ubuntu 16.04 base image). I don't quite understand why. Perhaps there is a better way to send to the journal?

What limitations does docker's journald logging option have? It doesn't appear to support apps writing more than just the message field.

--

So I have found I need -v /dev/log:/dev/log

But there is also another problem: there is no association with the service file that starts the docker container. Manually adding UNIT: servicename.service didn't solve it. So when looking at and shipping logs for the service, they're associated with the executable but not with the container or the service. Who has encountered these problems, and how have you solved them?

-- OK, let me expand on this a bit.

A C program can write to the systemd journal like this:

#include <systemd/sd-journal.h>
#include <unistd.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
        sd_journal_send("MESSAGE=Hello World!",
                        "MESSAGE_ID=52fb62f99e2c49d89cfbf9d6de5e3555",
                        "PRIORITY=5",
                        "HOME=%s", getenv("HOME"),
                        "TERM=%s", getenv("TERM"),
                        "PAGE_SIZE=%li", sysconf(_SC_PAGESIZE),
                        "N_CPUS=%li", sysconf(_SC_NPROCESSORS_ONLN),
                        NULL);
        return 0;
}
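
(As noted in the comments below, sd_journal_send lives in libsystemd, so the sample has to be linked against it: gcc test.c -l systemd.)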

This writes to the journal and adds custom fields like HOME, TERM, PAGE_SIZE, etc. When I use journalbeat to ship them to the ELK stack, those fields end up nicely in Elasticsearch and I can search on them directly.

However, it seems that docker simply takes the stdout of apps and feeds it to journald, adding only a few fields of its own, e.g. CONTAINER_ID.

Using programs like this inside a docker container, started from a systemd service file, creates a few problems:

1) I have to pass through some directories and device files to get it to write with sd_journal_send.

2) If you start the container from a systemd .service file and expect journalctl -u servicename to show the messages, they don't appear, because they entered the journal by a different route and aren't associated with the service that ran them.

3) You can add some arbitrary fields/tags using docker's journald logging driver, but they are fixed, one-time additions that appear unchanged on every message sent. They are not dynamic, per-message fields like those from the C code above (see the sketch just below).
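
For reference, a sketch of how those fixed tags get attached (image and tag names hypothetical): the tag is set once when the container starts and then appears, unchanged, as CONTAINER_TAG on every record.

docker run --log-driver=journald --log-opt tag=myservice myimage

journalctl CONTAINER_TAG=myservice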

Essentially, the journald log driver is insufficient in my case.

Any suggestions on how to link the service name so that journalctl -u shows the log messages written with sd_journal_send? That would fix it.

-- I've found a solution. I'll put the answer below in case others are interested in how I've solved it.

Declaim answered 14/8, 2017 at 20:58 Comment(8)
Will the Docker journald logging driver do what you want? In addition to the log message, it also stores other metadata like CONTAINER_ID, CONTAINER_NAME, CONTAINER_TAG etc. – Propitiatory
I may not have made it clear. I need to add my own fields and tie it all together within the same unit. What's happening is basically that journalctl -u sees it as another unit when the docker container app is started from a systemd service file. Docker is the issue. I'll have to dive into the source and see what the systemd logging driver in docker supports. Might have to extend it. – Declaim
Does this help: unix.stackexchange.com/questions/332274/… ? – Acuate
@TarunLalwani - nope. – Declaim
How do I compile this on Ubuntu 16.04? I am getting: vagrant@vagrant:~/journald$ gcc test.c /tmp/ccLHPQPH.o: In function `main': test.c:(.text+0x8b): undefined reference to `sd_journal_send_with_location' collect2: error: ld returned 1 exit status – Acuate
gcc test.c -l systemd – Declaim
I'm exploring another option. It seems logstash can take JSON data in an arbitrary field (e.g. MESSAGE) and explode it into fields before submitting it into Elasticsearch. That would solve it for me. – Declaim
@Matt, thanks for that command, figured out a solution to the original problem. – Acuate

The eventual solution turned out to be really simple.

I switched to writing my messages as pure JSON on stdout. journalctl -u now works and shows the MESSAGE field containing the JSON data.
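
For illustration, a minimal sketch of the app side after the switch (field names are just examples): each log line is one JSON object printed to stdout, which docker's journald driver records as the MESSAGE field.

#include <stdio.h>
#include <unistd.h>

int main(void) {
        /* One JSON object per line on stdout; docker's journald
           logging driver stores the line as MESSAGE, and the
           logstash json filter below expands it into fields. */
        printf("{\"msg\":\"Hello World!\",\"priority\":5,"
               "\"page_size\":%ld,\"n_cpus\":%ld}\n",
               sysconf(_SC_PAGESIZE), sysconf(_SC_NPROCESSORS_ONLN));
        return 0;
}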

I then used journalbeat to send that to logstash.

To logstash.conf I added:

filter {
  json {
    source => "message"
  }
}

This expands the JSON data from the message field into separate top-level fields before the events are sent on to Elasticsearch.
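
For example (hypothetical field names), a journal entry whose message field contains {"msg":"Hello World!","n_cpus":4} comes out of the filter as an event with top-level fields msg and n_cpus, both directly searchable in Elasticsearch.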

Details about the json filter can be found in the Logstash documentation.

Declaim answered 15/8, 2017 at 21:36 Comment(0)

You need to mount the socket on which journald listens. On Ubuntu it is /run/systemd/journal/socket. Map this inside your docker container and it works fine.

I figured that out by running strace on your sample code:

sendmsg(3, {msg_name(29)={sa_family=AF_LOCAL, sun_path="/run/systemd/journal/socket"}, 
msg_iov(23)=[{"CODE_FILE=test.c", 16}, {"\n", 1}, {"CODE_LINE=13", 12}, {"\n", 1}, {"CODE_FUNC=main", 14}, {"\n", 1}, 
{"MESSAGE=Hello World!", 20}, {"\n", 1}, {"MESSAGE_ID=52fb62f99e2c49d89cfbf"..., 43}, {"\n", 1}, {"PRIORITY=5", 10}, {"\n", 1}, 
{"HOME=/home/vagrant", 18}, {"\n", 1}, {"TERM=xterm-256color", 19}, {"\n", 1}, {"PAGE_SIZE=4096", 14}, {"\n", 1}, 
{"N_CPUS=1", 8}, {"\n", 1}, {"SYSLOG_IDENTIFIER=", 18}, {"a.out", 5}, {"\n", 1}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 208

And tested it in an Ubuntu docker container using the command below:

docker run -v /run/systemd/journal/socket:/run/systemd/journal/socket -v $PWD:/jd -it -w /jd ubuntu:16.04 ./a.out

And I get an entry in journalctl -f (on the host):

Aug 15 21:40:33 vagrant a.out[11263]: Hello World!
Acuate answered 15/8, 2017 at 21:41 Comment(1)
Yes, but it loses the association with the unit file when started as a systemd service, and so the journalbeat unit filter drops those messages. Anyway, I've solved it by using the json filter in logstash, see my answer. – Declaim