How does Spring Cloud Sleuth propagate the MDC context in WebFlux out of the box so that its content can be logged in different threads?

I'd like to know how Spring Cloud Sleuth propagates the MDC context between threads, making the MDC params available in each of them.

I've read this article https://simonbasle.github.io/2018/02/contextual-logging-with-reactor-context-and-mdc/ that suggests using the subscriber context to propagate MDC state between threads.
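
Roughly, the idea from that article is to copy values from the Reactor Context into the MDC of whichever thread happens to be executing, right around the log statement. Below is a minimal sketch of that idea, not the article's exact code (the MdcLogging class name is made up; the "principal" key matches the one used in my filter further down):

import java.util.function.Consumer;

import org.slf4j.MDC;
import reactor.core.publisher.Signal;

public final class MdcLogging {

    // Usage: .doOnEach(MdcLogging.logOnNext(dep -> log.info("found department {}", dep)))
    public static <T> Consumer<Signal<T>> logOnNext(Consumer<T> logStatement) {
        return signal -> {
            if (!signal.isOnNext()) {
                return;
            }
            // "principal" is whatever key was stored in the subscriber context upstream
            String principal = signal.getContext().getOrDefault("principal", null);
            if (principal == null) {
                logStatement.accept(signal.get());
                return;
            }
            // Populate the MDC of the thread delivering this signal, log, then clean up
            try (MDC.MDCCloseable ignored = MDC.putCloseable("principal", principal)) {
                logStatement.accept(signal.get());
            }
        };
    }
}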

I need to do something similar with other params from some request headers, so I created a filter that puts them into the MDC for the current thread and stores them in the subscriber context. However, the next thread that resumes the execution doesn't have them available. Somehow I should populate the MDC again in the new thread using the value from the context, but how?

@Slf4j
@RequiredArgsConstructor
@Order(Ordered.HIGHEST_PRECEDENCE)
public class ReactiveRequestCorrelationFilter implements WebFilter {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        String principal = exchange.getRequest().getHeaders().getFirst("principal-header");
        // Populates the MDC of the current thread only (the Netty event-loop thread here)
        MDC.put("principal", principal);
        return chain.filter(exchange)
                // Stored in the subscriber context, but nothing copies it back into the MDC downstream
                .subscriberContext(Context.of("principal", principal))
                .doOnTerminate(MDC::clear);
    }

}

Have a look at this controller mapping:

    @GetMapping(value = "/v1/departments", produces = MediaType.APPLICATION_JSON_VALUE)
    public Flux<Department> getDepartments() {
        log.info("getDepartaments");
        return webClient.get().uri("http://someService:8080/api/v1/departamentosFoo").retrieve().bodyToFlux(Department.class)
                .doOnNext(dep -> log.info("found department {}", dep));
    }

With this console logging pattern:

logging.pattern.console="%magenta([%thread]) [%X{traceId}] [%X{spanId}] [%X{principal}]  - %m%n"

it generates this log:
[boundedElastic-2] [d0a916db8be0cbf7] [d0a916db8be0cbf7] []  - getDepartments
[boundedElastic-2] [d0a916db8be0cbf7] [fb2367685db70201] []  - found department Department(id=1, nombre=dep, fechaAlta=Fri Apr 24 14:16:20 CEST 2020, staff=1, tag=, empresa=true)
[reactor-http-nio-3] [d0a916db8be0cbf7] [d0a916db8be0cbf7] []  - found department Department(id=1, nombre=dep, fechaAlta=Fri Apr 24 14:16:20 CEST 2020, staff=1, tag=, empresa=true)

The request started in a reactor-http-nio thread and then switched to boundedElastic-2. As you can see, the traceId and spanId values are shown, but not the one I put in my filter. How did Sleuth manage to propagate its context (traceId, spanId) into the other threads?

Crepuscule answered 24/4, 2020 at 13:4 Comment(0)

UPDATE 2023: a new article series was published about the current state of context propagation in reactive systems: https://spring.io/blog/2023/03/28/context-propagation-with-project-reactor-1-the-basics


UPDATE 2021: note that there have been quite a few changes to this; be sure to check out the latest implementation, as the link below points to an old commit.


They use Reactor Context along with Reactor Hooks. Relevant code from spring-cloud-sleuth: link
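
Conceptually, the mechanism looks something like the sketch below. This is a much-simplified illustration and not Sleuth's actual implementation (the class names are made up): a Reactor hook wraps every operator's subscriber so that, on whichever thread a signal is delivered, values from the Reactor Context are copied into that thread's MDC before the signal is passed on.

import org.reactivestreams.Subscription;
import org.slf4j.MDC;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Hooks;
import reactor.core.publisher.Operators;
import reactor.util.context.Context;

public class MdcContextLifterHook {

    // Registers the hook for every operator in every reactive chain of the application
    public static void install() {
        Hooks.onEachOperator("mdcContextLifter",
                Operators.lift((scannable, subscriber) -> new MdcContextLifter<>(subscriber)));
    }

    public static void uninstall() {
        Hooks.resetOnEachOperator("mdcContextLifter");
    }

    static final class MdcContextLifter<T> implements CoreSubscriber<T> {

        private final CoreSubscriber<? super T> delegate;

        MdcContextLifter(CoreSubscriber<? super T> delegate) {
            this.delegate = delegate;
        }

        @Override
        public void onSubscribe(Subscription subscription) {
            copyContextToMdc();
            delegate.onSubscribe(subscription);
        }

        @Override
        public void onNext(T value) {
            copyContextToMdc();
            delegate.onNext(value);
        }

        @Override
        public void onError(Throwable throwable) {
            copyContextToMdc();
            delegate.onError(throwable);
        }

        @Override
        public void onComplete() {
            copyContextToMdc();
            delegate.onComplete();
        }

        @Override
        public Context currentContext() {
            return delegate.currentContext();
        }

        // Copies String entries of the downstream Context into the MDC of the
        // thread that is currently delivering the signal.
        private void copyContextToMdc() {
            currentContext().stream()
                    .filter(entry -> entry.getKey() instanceof String && entry.getValue() instanceof String)
                    .forEach(entry -> MDC.put((String) entry.getKey(), (String) entry.getValue()));
        }
    }
}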

Couple of other links which might be useful:

Darbie answered 24/4, 2020 at 22:15 Comment(4)
Hello Martin, I am using the same code as given in the example here, but when I use .subscribeOn(Schedulers.boundedElastic()) the context is propagated to some threads but not to others. Can you please help?Indoaryan
@Indoaryan I updated my answer with a new link. Hope it helps.Syllabogram
I have read this article. The way it uses Schedulers.onScheduleHook("context.propagation", WrappedRunnable::new); to wrap every Runnable of the Scheduler and provide context via ThreadLocal is complex and maybe overkill. Anyway, thanks for the help.Indoaryan
I was trying to get MDC propagation to work. From the linked article in "UPDATE 2023", there were a few missing pieces I needed to get this to work. If you're here to propagate the MDC across the reactive stack, make sure to check out ttddyy.github.io/mdc-with-webclient-in-webmvc and ultimately gist.github.com/chemicL/0e0d8e95e28414f0ecb769a5b8ca326ePasargadae

You don't need any customization to get this to work. All that is required to propagate custom HTTP headers is to tell Sleuth about them:

# This one is on by default
spring.sleuth.log.slf4j.enabled=true

spring.sleuth.propagation-keys=principal-header
spring.sleuth.log.slf4j.whitelisted-mdc-keys=principal-header

(I'm using the properties version, but the same thing works with yaml)

Then in your logback configuration use

[%X{principal-header}]
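
For example, the console pattern from the question could be extended with it (just a sketch; adapt it to your own pattern):

logging.pattern.console=%magenta([%thread]) [%X{traceId}] [%X{spanId}] [%X{principal-header}] - %m%n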

That's it!

N.B. Personally I would prefer removing the -header bit from the header name.

Barilla answered 7/5, 2020 at 5:13 Comment(1)
These have now been changed to spring.sleuth.baggage.remote-fields and spring.sleuth.baggage.correlation-fields respectively. Also, in case someone else is wondering: only setting these properties and using the header in the logging pattern is required; nothing else like setting a tag is necessary, Sleuth handles all that automagically.Aubarta

After doing some research with the info provided by Martin in his answer, I worked it out as follows:

Updated filter:

@Slf4j
@RequiredArgsConstructor
@Order(Ordered.HIGHEST_PRECEDENCE + 6)
public class ReactiveRequestCorrelationFilter implements WebFilter {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        return chain.filter(exchange)
                .doOnSubscribe(s -> {
                    String principal = exchange.getRequest().getHeaders().getFirst("principal-header");
                    // Stores the value as a Brave extra (baggage) field; with the whitelist below,
                    // Sleuth copies it into the MDC on whichever thread handles the request
                    ExtraFieldPropagation.set("principal", principal);
                });
    }

}
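
If you also need to read the value back programmatically somewhere within the same trace (not just log it), here is a small sketch using the same Brave API as above (PrincipalLookup is just a hypothetical helper):

import brave.propagation.ExtraFieldPropagation;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class PrincipalLookup {

    // Reads the extra/baggage field from the current trace context;
    // returns null if there is no current trace or the field was never set.
    public String currentPrincipal() {
        String principal = ExtraFieldPropagation.get("principal");
        log.info("current principal: {}", principal);
        return principal;
    }
}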

application.yml

spring:
  sleuth:
    log:
      slf4j:
        whitelisted-mdc-keys: principal
    baggage-keys: principal

With this my custom field started showing up in the logs, with Sleuth taking care of setting it and clearing it from MDC:

[boundedElastic-2] [fadba73bf6447d02] [fadba73bf6447d02] [myprincipal] getDepartaments
Crepuscule answered 25/4, 2020 at 12:12 Comment(0)

To add custom request headers so that they are printed in the logs and propagated, follow these steps (from Sleuth 2.2.6.RELEASE):

For example, my HTTP headers contain a header with key 'correlationID', and I want it printed in the logs and also tagged and propagated to other services. I don't need to update anything in the Java code; like this you can get access to any header directly by updating the application.yml file.

application.yml

spring:
  application:
    name: my-app-api
  sleuth:
    propagation:
      tag:
        whitelisted-keys:
          - correlationID
      enabled: true
    baggage:
      correlation-enabled: true
      correlation-fields:
        - correlationID
      remote-fields:
        - correlationID
      tag-fields:
        - correlationID
    trace-id128: true
    log:
      slf4j:
        enabled: true
logging:
  pattern:
    level: "%5p [${spring.zipkin.service.name:${spring.application.name:-}},%X{X-Span-Export:-},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-},%X{correlationID:-}]"
  level:
    org:
      springframework: info

Make sure the logging pattern is updated with the new value so that it is printed in the logs (see the logging.pattern.level field).

In the request header:

 correlationID : sample-correlation-Id-123

and in the logs:

2021-01-15 12:02:52.938  INFO [my-app-api,true,6001846c68912f933a714816b62e04a0,3a714816b62e04a0,sample-correlation-Id-123]
Isley answered 15/1, 2021 at 12:26 Comment(2)
Is it possible to just put values into the MDC the old way, e.g. MDC.put(CLIENT_OS_VERSION, getHeader(httpHeaders, OS_VERSION))? I don't want to change the logback.xml file.Marthmartha
I have not tried it, but I believe it is possible. I have no code changes, just this configuration in my yml file. This is just one way of doing it.Isley
