How to log multiple threads in different log files?

I have a Java class that starts several threads, each with a unique ID. Each thread should log to its own log file, named after its ID (i.e. ID.log).

Because I only get the unique ID at runtime, I have to configure Log4j programmatically:

// Get the jobID
myJobID = aJobID;
// Initialize the logger
myLogger = Logger.getLogger(myJobID);
FileAppender myFileAppender;
try {
    myFileAppender = new FileAppender(new SimpleLayout(), myJobID + ".log", false);
    BasicConfigurator.resetConfiguration();
    BasicConfigurator.configure(myFileAppender);
} catch (IOException e1) {
    // TODO Auto-generated catch block
    e1.printStackTrace();
}

Now this works fine if I start jobs sequentially, but when I start two threads (of the same class) simultaneously, both log files are created but their contents get mixed up: the second thread logs into the first log as well as the second.

How can I make sure that each instance stays unique? I already tried giving each logger instance a unique name, but it did not change anything.

Phalanx answered 2/12, 2011 at 11:44 Comment(2)
Is there a really good reason to make this distinction in the Java code? General practice is to keep the log in one file and then run some post-processing to extract/aggregate. Also refer to logging.apache.org/log4j/1.2/faq.html#a3.1 Swore
Duplicate of How do I setup log4j properties so that each thread outputs to its own log file?Boroughenglish

Logback has a special appender called SiftingAppender which provides a very nice solution to the kind of problem you describe. A SiftingAppender can be used to separate (or sift) logging according to any runtime attribute, including the thread id.
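
For illustration, here is a minimal logback.xml sketch (not from the original answer) that sifts on an MDC key, assumed here to be named jobId:

<configuration>
    <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
        <!-- The MDC key that selects the target log file; "unknown" is used if the key is absent -->
        <discriminator>
            <key>jobId</key>
            <defaultValue>unknown</defaultValue>
        </discriminator>
        <sift>
            <!-- One FileAppender is created per distinct jobId value -->
            <appender name="FILE-${jobId}" class="ch.qos.logback.core.FileAppender">
                <file>${jobId}.log</file>
                <append>false</append>
                <encoder>
                    <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{35} - %msg%n</pattern>
                </encoder>
            </appender>
        </sift>
    </appender>

    <root level="DEBUG">
        <appender-ref ref="SIFT" />
    </root>
</configuration>

Each thread would then call MDC.put("jobId", myJobID) (org.slf4j.MDC) before logging and MDC.remove("jobId") when it is done.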

Springwood answered 4/12, 2011 at 16:35 Comment(6)
Is there any Java implementation that does the same as this: mkyong.com/logging/logback-different-log-file-for-each-thread, i.e. a sifting appender that does what is described in that blog?Misshapen
The link you show is entirely based on logback's SiftingAppender.Springwood
Hmm, got it. Can we have multiple SiftingAppenders? Basically I need two loggers writing to two different locations. Both will be used in separate threads, so I think I have to use sifting. But when I have two, it won't work. I'm using 2 MDC values as well.Misshapen
Hmm, I'm doing this: LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory(); context.reset(); before creating each appender, and I use that context to set up the appenders. But then only one appender works. If I share a common context for both, then both work. Any idea why that happens?Misshapen
Hint: What do you think context.reset() does?Springwood
It clears any previous configuration. Hmm, is that the problem here?Misshapen

For log4j 2 you can use the RoutingAppender to dynamically route messages. You can put a value for the key 'threadId' into the ThreadContext map and then use that id as part of the file name. There is an example which I easily adapted for the same purpose as yours: see http://logging.apache.org/log4j/2.x/faq.html#separate_log_files

Be aware when putting values into the ThreadContext map: "A child thread automatically inherits a copy of the mapped diagnostic context of its parent." So if you put a value for the key 'threadId' into the parent thread and later create multiple threads from it, all child threads inherit that 'threadId' value. I was not able to simply override this value by calling put() again - you need to use ThreadContext.clear() or explicitly remove() the value from the thread context map.
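
As a minimal sketch (the class and field names are made up), each worker thread could clear the inherited map and set its own 'threadId' like this:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class Job implements Runnable {

    private static final Logger LOG = LogManager.getLogger(Job.class);

    private final String jobId;

    public Job(String jobId) {
        this.jobId = jobId;
    }

    @Override
    public void run() {
        // Drop whatever was inherited from the parent thread, then set this thread's own id
        ThreadContext.clearMap();
        ThreadContext.put("threadId", jobId);
        try {
            LOG.info("job started");  // routed to ${logDir}/last-<threadId>.log by the config below
        } finally {
            ThreadContext.remove("threadId");
        }
    }
}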

Here is my working log4j.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration status="WARN">
    <properties>
        <property name="logMsgPattern">%d{HH:mm:ss} %-5level - %msg%n</property>
        <property name="logDir">test logs</property><!-- ${sys:testLogDir} -->
    </properties>
    <appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="${logMsgPattern}"/>
        </Console>

        <Routing name="Routing">
            <Routes pattern="$${ctx:threadId}">
                <Route>
                    <RollingFile name="RollingFile-${ctx:threadId}"
                                 fileName="${logDir}/last-${ctx:threadId}.log"
                                 filePattern="${logDir}/%d{yyyy-MM-dd}/archived_%d{HH-mm}-${ctx:threadId}.log">
                        <PatternLayout pattern="${logMsgPattern}"/>
                        <Policies>
                            <OnStartupTriggeringPolicy />
                        </Policies>
                    </RollingFile>
                </Route>
            </Routes>
        </Routing>
    </appenders>

    <loggers>
        <root level="debug">
            <appender-ref ref="Console" level="debug" />
            <appender-ref ref="Routing" level="debug"/>
        </root>
    </loggers>
</configuration>
Sniperscope answered 3/3, 2014 at 15:24 Comment(0)

@havexz's approach is quite good: write everything to the same log file and use nested diagnostic contexts.
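
As a rough log4j 1.x sketch of that approach (class names are made up; this is not the referenced answer's code), each thread pushes its job id onto the NDC so that a %x in the layout pattern tags every line:

import org.apache.log4j.Logger;
import org.apache.log4j.NDC;

public class Worker implements Runnable {

    private static final Logger LOG = Logger.getLogger(Worker.class);

    private final String jobId;

    public Worker(String jobId) {
        this.jobId = jobId;
    }

    @Override
    public void run() {
        NDC.push(jobId);              // every message from this thread now carries the job id
        try {
            LOG.info("job started");  // printed with the job id when the layout contains %x
        } finally {
            NDC.pop();
            NDC.remove();             // release the NDC storage for this thread
        }
    }
}

The per-job lines can then be extracted from the single log file afterwards, e.g. by grepping for the job id.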

If your concern is about several JVMs writing to the same FileAppender, then I'd suggest looking at Logback's prudent mode:

In prudent mode, FileAppender will safely write to the specified file, even in the presence of other FileAppender instances running in different JVMs, potentially running on different hosts.
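
A minimal logback.xml sketch of such an appender (the file name and pattern are placeholders):

<appender name="SHARED_FILE" class="ch.qos.logback.core.FileAppender">
    <file>shared.log</file>
    <!-- prudent mode serializes writes so that several JVMs can share this file safely -->
    <prudent>true</prudent>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{35} - %msg%n</pattern>
    </encoder>
</appender>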

Arctic answered 2/12, 2011 at 12:34 Comment(0)

Here is a snippet of the Routing configuration from a working log4j.xml file.

<Appenders>
    <Console name="ConsoleAppender" target="SYSTEM_OUT">
        <PatternLayout>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %-50logger{4}: %msg%n</pattern>
        </PatternLayout>
    </Console>

    <Routing name="RoutingAppender">
        <Routes pattern="${ctx:logFileName}">

            <!-- This route is chosen if ThreadContext has a value for logFileName.
                 The value dynamically determines the name of the log file. -->
            <Route>
                <RollingFile name="Rolling-${ctx:logFileName}"
                             fileName="${sys:log.path}/${ctx:logFileName}.javalog"
                             filePattern="./logs/${date:yyyy-MM}/${ctx:logFileName}_%d{yyyy-MM-dd}-%i.log.gz">
                    <PatternLayout>
                        <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %-50logger{4}: %msg%n</pattern>
                    </PatternLayout>
                    <Policies>
                        <TimeBasedTriggeringPolicy interval="6" modulate="true" />
                        <SizeBasedTriggeringPolicy size="10 MB" />
                    </Policies>
                </RollingFile>
            </Route>

            <!-- This route is chosen if ThreadContext has no value for key logFileName. -->
            <Route key="${ctx:logFileName}" ref="ConsoleAppender" />

        </Routes>
    </Routing>
</Appenders>

<loggers>
    <root level="debug">
        <appender-ref ref="RoutingAppender" level="debug" />
    </root>
</loggers>

The key 'logFileName' can be added to the thread context map in the run() method of the Runnable class as follows:

public class SomeClass implements Runnable {

    private final int threadID;

    public SomeClass(int threadID) {
        this.threadID = threadID;
    }

    @Override
    public void run() {
        String logFileName = "thread_log_" + threadID;
        // Route this thread's output to ${sys:log.path}/thread_log_<threadID>.javalog
        ThreadContext.put("logFileName", logFileName);
        // Some code
        ThreadContext.remove("logFileName");
    }
}

In addition, the correct log4j packages must be imported, as shown below.

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

Please note that the following mix of imports will not work: LogManager and Logger must also come from org.apache.logging.log4j, not from the old org.apache.log4j package.

import org.apache.log4j.LogManager;
import org.apache.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;
Reba answered 1/6, 2017 at 6:25 Comment(0)

As far as I can tell, the ThreadLocal API was designed for exactly what you describe.

Code like the following would establish per-thread loggers, each using its own (per-thread) FileAppender:

import java.io.IOException;

import org.apache.log4j.FileAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.SimpleLayout;

/**
 * usage: threadLocalLogger.get().info("hello thread local logger")
 */
static ThreadLocal<Logger> threadLocalLogger = newThreadLocalLogger("myJobId");

private static ThreadLocal<Logger> newThreadLocalLogger(final String myJobID) {
    return new ThreadLocal<Logger>() {
        @Override
        protected Logger initialValue() {
            return logger(myJobID, Thread.currentThread().getId());
        }
    };
}

private static Logger logger(String myJobID, long threadId) {
    // One logger and one log file per job/thread combination
    String loggerId = myJobID + "-" + threadId;
    Logger myLogger = Logger.getLogger(loggerId);
    try {
        FileAppender myFileAppender =
                new FileAppender(new SimpleLayout(), loggerId + ".log", false);
        // Attach the appender to this logger only (instead of reconfiguring the
        // root logger) and disable additivity, so output is not mixed into
        // other threads' appenders.
        myLogger.addAppender(myFileAppender);
        myLogger.setAdditivity(false);
    } catch (IOException e1) {
        e1.printStackTrace();
    }
    return myLogger;
}
Forewent answered 2/12, 2011 at 13:1 Comment(0)

What about adding a static instance counter to your class? You would then need a synchronized method that increments the counter for each object created and derives the log file name from that value. Something like this:

class YourClass {

    private static int cnt = 0;

    private int myJobId;

    public YourClass() {
        // ...
        initLogger();
    }

    private void initLogger() {
        // Synchronize on the class so the static counter is incremented safely
        // even when several instances are created concurrently.
        synchronized (YourClass.class) {
            myJobId = ++YourClass.cnt;
        }

        // include your logging code here, using myJobId in the log file name
    }
}
Recognizance answered 2/12, 2011 at 11:55 Comment(2)
My JobID is a simple UUID. The problem is not the name of the log file, but the appenders, which get mixed up.Phalanx
I found this: manniwood.com/log4j_stuff/index.html but it implies that I have to know how many jobs (threads) will be started.Phalanx
