#StackBounty: #files #logging #compression #logrotate Why do only some of my logs get rotated?

Bounty: 50

I’m using Ubuntu 14.04. I have the following in my /etc/logrotate.conf file …

/home/rails/myproject/log {
        daily
        rotate 3
        compress
        delaycompress
        missingok
        notifempty
        create 644 rails rails
}

/var/log/postgresql {
        daily
        rotate 3
        compress
        delaycompress
        missingok
        notifempty
        create 644 root root
}

Every night I would look at my rails logs, and they would always be bigger, i.e. it didn’t seem like the logs were getting rotated …

myuser@myproject:~$ ls -al /home/rails/myproject/log
total 4574368
drwxr-xr-x  2 rails rails       4096 May 30 12:04 .
drwxr-xr-x 15 rails rails       4096 May 30 12:03 ..
-rw-rw-r--  1 rails rails      14960 Jun  1 22:39 development.log
-rw-rw-r--  1 rails rails          0 Oct 22  2016 .keep
-rw-r--r--  1 rails rails 4523480004 Jun 22 10:19 production.log
-rw-rw-r--  1 rails rails  156358087 Jun 22 10:19 sidekiq.log
-rw-rw-r--  1 rails rails      54246 Apr 10 14:34 test.log

When I run the command manually, I see that some of the logs seem to get rotated …

myuser@myproject:~$ sudo logrotate /etc/logrotate.conf
myuser@myproject:~$ ls -al /home/rails/myproject/log
total 4570288
drwxr-xr-x  2 rails rails       4096 Jun 22 10:22 .
drwxr-xr-x 15 rails rails       4096 May 30 12:03 ..
-rw-rw-r--  1 rails rails          0 Jun 22 10:22 development.log
-rw-rw-r--  1 rails rails      14960 Jun  1 22:39 development.log.1
-rw-rw-r--  1 rails rails          0 Oct 22  2016 .keep
-rw-r--r--  1 rails rails          0 Jun 22 10:22 production.log
-rw-r--r--  1 rails rails 4523505906 Jun 22 10:23 production.log.1
-rw-rw-r--  1 rails rails  156369048 Jun 22 10:23 sidekiq.log
-rw-rw-r--  1 rails rails      54246 Apr 10 14:34 test.log

How do I figure out why my rails logs are not rotated nightly? Note that other logs on the system seem to be. Above, I included my postgres configuration, and when I look at the logs there, they seem to be rotating normally …

myuser@myproject:~$ ls -al /var/log/postgresql
total 1832
drwxrwxr-t  2 root     postgres    4096 May  2 20:42 .
drwxr-xr-x 13 root     root        4096 Jun 22 10:22 ..
-rw-r-----  1 postgres adm      1861361 Jun 22 10:14 postgresql-9.6-main.log

Thanks, – Dave

Edit: Putting the configuration in a separate file didn’t seem to do anything. Below is my configuration and also the logs that didn’t appear to get rotated …

myuser@myapp:~$ sudo cat /etc/logrotate.d/myapp
[sudo] password for myuser:
/home/rails/myapp/log/*.log {
   daily
   missingok
   compress
   notifempty
   rotate 12
   create
   delaycompress
   missingok
   su rails rails
}

Here are the logs. It doesn’t appear anything happened …

myuser@myapp:~$ ls -al /home/rails/myapp/log
total 4635956
drwxr-xr-x  2 rails rails       4096 Jun 22 10:22 .
drwxr-xr-x 15 rails rails       4096 May 30 12:03 ..
-rw-rw-r--  1 rails rails          0 Jun 22 10:22 development.log
-rw-rw-r--  1 rails rails      14960 Jun  1 22:39 development.log.1
-rw-rw-r--  1 rails rails          0 Oct 22  2016 .keep
-rw-r--r--  1 rails rails          0 Jun 22 10:22 production.log
-rw-r--r--  1 rails rails 4546785231 Jun 24 12:12 production.log.1
-rw-rw-r--  1 rails rails  200336693 Jun 24 12:51 sidekiq.log
-rw-rw-r--  1 rails rails      54246 Apr 10 14:34 test.log


Get this bounty!!!

#StackBounty: #java #file #http #logging Scanning through logs (tail -f fashion) parsing and sending to a remote server

Bounty: 100

I have a task at hand to build a utility which

  1. Scans through a log file.

  2. Rolls over if a log file is reset.

  3. Scans through each line of the log file.

  4. Each line is sent to an executor service and checks are performed, which include looking for a particular word in the line; if a match is found, I forward the line for further processing, which includes splitting up the line and forming JSON.

  5. This JSON is sent across to a server using a CloseableHttpClient with connection keep-alive and ServiceUnavailableRetryStrategy patterns.

Entry point FileTailReader (started from Main):

public class FileTailReader implements Runnable {

    private final File file;
    private long filePointer;
    private String url;
    private static volatile boolean keepLooping = true; // TODO move to main class
    private static final Logger logger = LogManager.getLogger(Main.class);
    private ExecutorService executor;
    private List<Future<?>> futures;


    public FileTailReader(File file, String url, ExecutorService executor, List<Future<?>> futures) {
        this.file = file;
        this.url = url;
        this.executor = executor;
        this.futures = futures;

    }

    private HttpPost getPost() {
        HttpPost httpPost = new HttpPost(url);
        httpPost.setHeader("Accept", "application/json");
        httpPost.setHeader("Content-type", "application/json");
        return httpPost;
    }

    @Override
    public void run() {
        long updateInterval = 100;
        try {
            ArrayList<String> batchArray = new ArrayList<>();
            HttpPost httpPost = getPost();
            CloseableHttpAsyncClient closeableHttpClient = getCloseableClient();
            Path path = Paths.get(file.toURI());
            BasicFileAttributes basicFileAttributes = Files.readAttributes(path, BasicFileAttributes.class);
            Object fileKey = basicFileAttributes.fileKey();
            String iNode = fileKey.toString();  // iNode is common during file roll
            long startTime = System.nanoTime();
            while (keepLooping) {

                Thread.sleep(updateInterval);
                long len = file.length();

                if (len < filePointer) {

                    // Log must have been rolled
                    // We can spawn a new thread here to read the remaining part of the rolled file.
                    // Compare the iNode of the file in tail with every file in the dir, if a match is found
                    // - we have the rolled file
                    // This scenario will occur only if our reader lags behind the writer - No worry

                    RolledFileReader rolledFileReader = new RolledFileReader(iNode, file, filePointer, executor,
                            closeableHttpClient, httpPost, futures);
                    new Thread(rolledFileReader).start();

                    logger.info("Log file was reset. Restarting logging from start of file.");
                    this.appendMessage("Log file was reset. Restarting logging from start of file.");
                    filePointer = len;
                } else if (len > filePointer) {
                    // File must have had something added to it!
                    RandomAccessFile randomAccessFile = new RandomAccessFile(file, "r");
                    randomAccessFile.seek(filePointer);
                    FileInputStream fileInputStream = new FileInputStream(randomAccessFile.getFD());
                    BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(fileInputStream));
                    String bLine;
                    while ((bLine = bufferedReader.readLine()) != null) {
                        // We will use an array to hold 1000 lines, so that we
                        // can batch-process in a single thread
                        batchArray.add(bLine);
                        switch (batchArray.size()) {

                            case 1000:
                                appendLine((ArrayList<String>) batchArray.clone(), closeableHttpClient, httpPost);
                                batchArray.clear();
                                break;
                        }
                    }

                    if (batchArray.size() > 0) {
                        appendLine((ArrayList<String>) batchArray.clone(), closeableHttpClient, httpPost);
                    }

                    filePointer = randomAccessFile.getFilePointer();
                    randomAccessFile.close();
                    fileInputStream.close();
                    bufferedReader.close();
                   // logger.info("Total time taken: " + ((System.nanoTime() - startTime) / 1e9));

                }

                //boolean allDone = checkIfAllExecuted();
               // logger.info("isAllDone" + allDone + futures.size());

            }
            executor.shutdown();
        } catch (Exception e) {
            e.printStackTrace();
            this.appendMessage("Fatal error reading log file, log tailing has stopped.");
        }
    }

    private void appendMessage(String line) {
        System.out.println(line.trim());
    }

    private void appendLine(ArrayList<String> batchArray, CloseableHttpAsyncClient client, HttpPost httpPost) {
        Future<?> future = executor.submit(new LocalThreadPoolExecutor(batchArray, client, httpPost));
        futures.add(future);

    }

    private boolean checkIfAllExecuted() {
        boolean allDone = true;
        for (Future<?> future : futures) {
            allDone &= future.isDone(); // check if future is done
        }
        return allDone;
    }

    //Reusable connection
    private RequestConfig getConnConfig() {
        return RequestConfig.custom()
                .setConnectionRequestTimeout(5 * 1000)
                .setConnectTimeout(5 * 1000)
                .setSocketTimeout(5 * 1000).build();
    }

    private PoolingNHttpClientConnectionManager getPoolingConnManager() throws IOReactorException {
        ConnectingIOReactor ioReactor = new DefaultConnectingIOReactor();
        PoolingNHttpClientConnectionManager cm = new PoolingNHttpClientConnectionManager(ioReactor);
        cm.setMaxTotal(1000);
        cm.setDefaultMaxPerRoute(1000);

        return cm;
    }

    private CloseableHttpAsyncClient getCloseableClient() throws IOReactorException {
        CloseableHttpAsyncClient httpAsyncClient = HttpAsyncClientBuilder.create()
                .setDefaultRequestConfig(getConnConfig())
                .setConnectionManager(getPoolingConnManager()).build();

        httpAsyncClient.start();

        return httpAsyncClient;


                /*.setServiceUnavailableRetryStrategy(new ServiceUnavailableRetryStrategy() {
                    @Override
                    public boolean retryRequest(
                            final HttpResponse response, final int executionCount, final HttpContext context) {
                        int statusCode = response.getStatusLine().getStatusCode();
                        return statusCode != HttpURLConnection.HTTP_OK && executionCount < 5;
                    }

                    @Override
                    public long getRetryInterval() {
                        return 0;
                    }
                }).build();*/
    }


}
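As an aside, the clone-and-clear batching inside run() can be reduced to a small standalone sketch (class and method names here are illustrative, not part of the original code); a plain if also reads more naturally than a switch on the list size:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchDemo {
    static int flushes = 0;

    // Mirror of the pattern in FileTailReader.run(): submit a copy of the
    // batch once it reaches the threshold, then clear the live list.
    static void addLine(List<String> batch, String line, int threshold) {
        batch.add(line);
        if (batch.size() == threshold) {
            flush(new ArrayList<>(batch)); // copy, so the worker owns its own list
            batch.clear();
        }
    }

    static void flush(List<String> copy) { flushes++; } // stand-in for executor.submit

    public static void main(String[] args) {
        List<String> batch = new ArrayList<>();
        for (int i = 0; i < 2500; i++) addLine(batch, "line" + i, 1000);
        if (!batch.isEmpty()) flush(new ArrayList<>(batch)); // final partial batch
        System.out.println(flushes); // 2 full batches + 1 partial = 3
    }
}
```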

I am using an implementation of Rabin Karp for string find:

public class RabinKarp {
    private final String pat;      // the pattern  // needed only for Las Vegas
    private long patHash;    // pattern hash value
    private int m;           // pattern length
    private long q;          // a large prime, small enough to avoid long overflow
    private final int R;           // radix
    private long RM;         // R^(M-1) % Q

    /**
     * Preprocesses the pattern string.
     *
     * @param pattern the pattern string
     * @param R       the alphabet size
     */
    public RabinKarp(char[] pattern, int R) {
        this.pat = String.valueOf(pattern);
        this.R = R;
        throw new UnsupportedOperationException("Operation not supported yet");
    }

    /**
     * Preprocesses the pattern string.
     *
     * @param pat the pattern string
     */
    public RabinKarp(String pat) {
        this.pat = pat;      // save pattern (needed only for Las Vegas)
        R = 256;
        m = pat.length();
        q = longRandomPrime();

        // precompute R^(m-1) % q for use in removing leading digit
        RM = 1;
        for (int i = 1; i <= m - 1; i++)
            RM = (R * RM) % q;
        patHash = hash(pat, m);
    }

    // Compute hash for key[0..m-1].
    private long hash(String key, int m) {
        long h = 0;
        for (int j = 0; j < m; j++)
            h = (R * h + key.charAt(j)) % q;
        return h;
    }

    // Las Vegas version: does pat[] match txt[i..i-m+1] ?
    private boolean check(String txt, int i) {
        for (int j = 0; j < m; j++)
            if (pat.charAt(j) != txt.charAt(i + j))
                return false;
        return true;
    }

    // Monte Carlo version: always return true
    // private boolean check(int i) {
    //    return true;
    //}

    /**
     * Returns the index of the first occurrence of the pattern string
     * in the text string.
     *
     * @param txt the text string
     * @return the index of the first occurrence of the pattern string
     * in the text string; -1 if no such match
     */
    public int search(String txt) {
        int n = txt.length();
        if (n < m) return -1;  // pattern longer than text: no match possible
        long txtHash = hash(txt, m);

        // check for match at offset 0
        if ((patHash == txtHash) && check(txt, 0))
            return 0;

        // check for hash match; if hash match, check for exact match
        for (int i = m; i < n; i++) {
            // Remove leading digit, add trailing digit, check for match.
            txtHash = (txtHash + q - RM * txt.charAt(i - m) % q) % q;
            txtHash = (txtHash * R + txt.charAt(i)) % q;

            // match
            int offset = i - m + 1;
            if ((patHash == txtHash) && check(txt, offset))
                return offset;
        }

        // no match
        return -1;
    }


    // a random 31-bit prime
    private static long longRandomPrime() {
        BigInteger prime = BigInteger.probablePrime(31, new Random());
        return prime.longValue();
    }
}
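For reference, the rolling-hash idea above can be condensed into a standalone sketch (RabinKarpDemo is an illustrative name, not part of the original code); like the Las Vegas variant, it verifies every hash match before reporting an offset:

```java
import java.math.BigInteger;
import java.util.Random;

public class RabinKarpDemo {
    // Minimal Rabin-Karp substring search: index of first match, or -1.
    static int search(String pat, String txt) {
        int R = 256, m = pat.length(), n = txt.length();
        if (m == 0 || n < m) return -1;
        long q = BigInteger.probablePrime(31, new Random()).longValue();
        long RM = 1;
        for (int i = 1; i < m; i++) RM = (R * RM) % q;   // R^(m-1) % q
        long patHash = hash(pat, m, R, q);
        long txtHash = hash(txt, m, R, q);
        if (patHash == txtHash && txt.startsWith(pat)) return 0;
        for (int i = m; i < n; i++) {
            txtHash = (txtHash + q - RM * txt.charAt(i - m) % q) % q; // drop leading char
            txtHash = (txtHash * R + txt.charAt(i)) % q;              // add trailing char
            int off = i - m + 1;
            // Verify the candidate, so hash collisions can't give false positives.
            if (patHash == txtHash && txt.regionMatches(off, pat, 0, m)) return off;
        }
        return -1;
    }

    static long hash(String key, int m, int R, long q) {
        long h = 0;
        for (int j = 0; j < m; j++) h = (R * h + key.charAt(j)) % q;
        return h;
    }

    public static void main(String[] args) {
        System.out.println(search("JioEvents", "2021|abc|JioEvents|xyz")); // prints 9
        System.out.println(search("missing", "2021|abc|JioEvents|xyz"));   // prints -1
    }
}
```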

Here is my RolledFileReader

public class RolledFileReader implements Runnable {

    private static final Logger logger = LogManager.getLogger(RolledFileReader.class);

    private String iNode;
    private File tailedFile;
    private long filePointer;
    private ExecutorService executor;
    private CloseableHttpAsyncClient client;
    private HttpPost httpPost;
    List<Future<?>> futures;

    public RolledFileReader(String iNode, File tailedFile, long filePointer, ExecutorService executor,
                            CloseableHttpAsyncClient client, HttpPost httpPost, List<Future<?>> futures) {
        this.iNode = iNode;
        this.tailedFile = tailedFile;
        this.filePointer = filePointer;
        this.executor = executor;
        this.client = client;
        this.httpPost = httpPost;
        this.futures = futures;
    }

    @Override
    public void run() {
        try {
            inodeReader();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }


    public void inodeReader() throws Exception {
        String fParent = tailedFile.getParentFile().toString();
        File[] files = new File(fParent).listFiles();
        if (files != null) {
            Arrays.sort(files, Collections.reverseOrder()); // Probability of finding the file at top increases
            for (File file : files) {
                if (file.isFile()) {
                    Path path = Paths.get(file.toURI());
                    BasicFileAttributes basicFileAttributes = Files.readAttributes(path, BasicFileAttributes.class);
                    Object fileKey = basicFileAttributes.fileKey();
                    String matchInode = fileKey.toString();
                    if (matchInode.equalsIgnoreCase(iNode) && file.length() > filePointer) {
                        //We found a match - now process the remaining file - we are in a separate thread
                        readRolledFile(file, filePointer);

                    }
                }
            }

        }
    }


    public void readRolledFile(File rolledFile, long filePointer) throws Exception {
        ArrayList<String> batchArray = new ArrayList<>();
        RandomAccessFile randomAccessFile = new RandomAccessFile(rolledFile, "r");
        randomAccessFile.seek(filePointer);
        FileInputStream fileInputStream = new FileInputStream(randomAccessFile.getFD());
        BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(fileInputStream));
        String bLine;
        while ((bLine = bufferedReader.readLine()) != null) {

            batchArray.add(bLine);
            switch (batchArray.size()) {
                case 1000:
                    executor.execute(new LocalThreadPoolExecutor((ArrayList<String>) batchArray.clone(), client, httpPost));
                    batchArray.clear(); // clear after submitting, as in FileTailReader, so the batch doesn't keep growing
                    break;
            }
        }

        if (batchArray.size() > 0) {
            executor.execute(new LocalThreadPoolExecutor((ArrayList<String>) batchArray.clone(), client, httpPost));
        }
    }


}

And my executor service LocalThreadPoolExecutor:

public class LocalThreadPoolExecutor implements Runnable {
    private static final Logger logger = LogManager.getLogger(Main.class);

    private final ArrayList<String> payload;
    private final CloseableHttpAsyncClient client;
    private final HttpPost httpPost;
    private HttpContext context;
    private final RabinKarp searcher = new RabinKarp("JioEvents");

    public LocalThreadPoolExecutor(ArrayList<String> payload, CloseableHttpAsyncClient client,
                                   HttpPost httpPost) {
        this.payload = payload;
        this.client = client;
        this.httpPost = httpPost;
    }

    @Override
    public void run() {
        try {
            for (String line : payload) {
                int offset = searcher.search(line);
                switch (offset) {
                    case -1:
                        break;
                    default:
                        String zeroIn = line.substring(offset).toLowerCase();
                        String postPayload = processLogs(zeroIn);
                        if (null != postPayload) {
                            postData(postPayload, client, httpPost);
                        }
                }
            }
       // logger.info("Processed a batch of: "+payload.size());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    private String processLogs(String line) {
        String[] jsonElements = line.split("\\|"); // the pipe must be escaped: split() takes a regex
        switch (jsonElements.length) {
            case 15:
                JSONObject jsonObject = new JSONObject();
                jsonObject.put("customerID", jsonElements[1]);
                jsonObject.put("mobileNumber", jsonElements[2]);
                jsonObject.put("eventID", jsonElements[3]);
                jsonObject.put("eventType", jsonElements[4]);
                jsonObject.put("eventDateTime", jsonElements[5]);
                jsonObject.put("eventResponseCode", jsonElements[6]);
                jsonObject.put("sourceSystem", jsonElements[7]);
                jsonObject.put("clientID", jsonElements[8]);
                jsonObject.put("serverHostName", jsonElements[9]);
                jsonObject.put("serverIPAddress", jsonElements[10]);
                jsonObject.put("serverSessionID", jsonElements[11]);
                jsonObject.put("softwareVersion", jsonElements[12]);
                jsonObject.put("deviceInfo", jsonElements[13]);
                jsonObject.put("userAgent", jsonElements[14]);
                return jsonObject.toString();
        }
        return null;
    }

    private void postData(String data, CloseableHttpAsyncClient client, HttpPost httpPost) throws Exception {

        StringEntity entity = new StringEntity(data);
        httpPost.setEntity(entity);
        Future<HttpResponse> future = client.execute(httpPost, context, null);
     //   HttpResponse response = future.get();
     //   logger.info("Resp is: "+response.getStatusLine().getStatusCode());

    }

}
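One detail worth calling out from processLogs: String.split takes a regular expression, so a literal pipe must be escaped as "\\|" (an unescaped | is the regex alternation operator, and "\|" alone is not even a valid Java string literal). A minimal sketch:

```java
public class SplitDemo {
    public static void main(String[] args) {
        String line = "prefix|cust1|9999|ev1";
        // split() takes a regex; "\\|" matches a literal pipe character.
        String[] parts = line.split("\\|");
        System.out.println(parts.length); // prints 4
    }
}
```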

And finally the Main class:

public class Main {
    private static final Logger logger = LogManager.getLogger(Main.class);
    private static final ExecutorService executor = Executors.newFixedThreadPool(25);
    private static final List<Future<?>> futures = new ArrayList<>();

    private static void usage() {
        System.out.println("Invalid usage");
    }

    public static void main(String[] args) {

        if (args.length < 2) {
            usage();
            System.exit(0);
        }
        String url = args[0];
        String fPath = args[1];

        File log = new File(fPath);
        FileTailReader fileTailReader = new FileTailReader(log, url, executor, futures);

        new Thread(fileTailReader).start(); // Can issue multiple threads with an executor like so, for multiple files


    }

}

The purpose of declaring member variables in Main is that I can later on add ShutdownHooks.

I am interested in knowing how I can make this code faster. Right now I am getting a throughput of 300000 lines per 8876 millis, which is not going over well with my peers.

Edit:

I changed the way RandomAccessFile reads from the file and I have observed a considerable increase in speed; however, I am still looking for fresh pointers to enhance and optimize this utility:

else if (len > filePointer) {
                    // File must have had something added to it!
                    long startTime = System.nanoTime();
                    RandomAccessFile randomAccessFile = new RandomAccessFile(file, "r");
                    randomAccessFile.seek(filePointer);
                    FileInputStream fileInputStream = new FileInputStream(randomAccessFile.getFD());
                    BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(fileInputStream));
                    String bLine;
                    logger.info("Pointer: "+filePointer+" fileLength: "+len);
                    while ((bLine = bufferedReader.readLine()) != null) {
                        this.appendLine(bLine, httpclient, httpPost);
                    }
                    logger.info("Total time taken: " + ((System.nanoTime() - startTime) / 1e9));
                    filePointer = randomAccessFile.getFilePointer();
                    logger.info("FilePointer reset to: "+filePointer);
                    randomAccessFile.close();
                    fileInputStream.close();
                    bufferedReader.close();
                }

I also added a bit of batch processing in the above snippet (the code from FileTailReader was edited to demonstrate this, in particular the addition of batchArray, which is a list). I see an improvement of 10 seconds: the program now executes in 21-point-something seconds.
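The seek-then-buffer pattern in that snippet can be isolated into a small sketch (names here are illustrative). One caveat worth noting: after wrapping the descriptor in a BufferedReader, getFilePointer() reflects the reader's read-ahead, so it is only safe to use as the next offset because the loop always reads to end-of-file:

```java
import java.io.*;
import java.nio.file.*;

public class TailDemo {
    // Read all lines added after `offset`, returning the new offset.
    // A condensed sketch of the seek-then-buffer pattern from the edit above.
    static long readFrom(File f, long offset, java.util.List<String> out) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(f, "r")) {
            raf.seek(offset);
            BufferedReader br = new BufferedReader(
                    new InputStreamReader(new FileInputStream(raf.getFD())));
            String line;
            while ((line = br.readLine()) != null) out.add(line);
            // Safe only because we read to EOF: the buffered read-ahead has
            // already advanced the descriptor to the end of the file.
            return raf.getFilePointer();
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("tail", ".log");
        Files.write(p, "one\ntwo\n".getBytes());
        java.util.List<String> lines = new java.util.ArrayList<>();
        long off = readFrom(p.toFile(), 0, lines);
        Files.write(p, "three\n".getBytes(), StandardOpenOption.APPEND);
        off = readFrom(p.toFile(), off, lines);
        System.out.println(lines); // prints [one, two, three]
    }
}
```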



#StackBounty: #macos #router #logging #syslog #asl Running OSX as a syslog server

Bounty: 50

I want to receive the logs from my router (an ASUS RT68U) on my laptop (OSX 10.9). It supports syslog and OSX has ASL (a superset of syslog, apparently). I’ve followed the instructions in OS X Lion as a syslog server, but the Console shows nothing under /var/log/network (though the directory itself does show up).

The steps I’ve taken:

  • Set the IP of my laptop in the router’s admin page for syslogging.
  • Updated the syslog plist to listen on the network.
  • Created the directory (/var/log/network) to log into.

This is where I diverge slightly from the instructions: as with many things in /etc on OSX, if there is also a subfolder structure, you’re better off adding your conf in there and leaving the main file alone. So,

  • Added an ASL conf. This is where I think the problem lies.

/etc/asl/asus-router

# Asus router logs
? [A= Host router.asus.com] store_directory /var/log/network uid=0 gid=20 mode=0644 format=bsd rotate=seq compress file_max=5M all_max=50M
# I've also tried:
#? [= Host 192.168.1.1] …
#? [A= Host 192.168.1.1] …
#? [= Host router.asus.com] …
#? [= Sender router.asus.com] …
#? [A= Sender router.asus.com] …
#? [= IP router.asus.com] …
#? [A= IP router.asus.com] …
  • Unloaded and loaded the syslog plist to pick up the new conf.
  • Logged in to the router via SSH. This helpfully adds a log entry and I got the following info:

ssh’d into the router

nvram show | grep log_level
size: 50509 bytes (15027 left)
log_level=6

ps | grep syslog
 9358 iain  1488 S    /sbin/syslogd -m 0 -S -O /tmp/syslog.log -s 256 -l 6 -R 192.168.1.140:514 -L

Finally, I turned off the firewall and ran sudo tcpdump udp port 514. I can see logs coming in but nothing shows up in the Console even if I reload the plist.

06:21:38.983497 IP router.asus.com.40420 > iains-air.syslog: SYSLOG authpriv.info, length: 86 

I’ve even taken a look at RFC 5424 to see if I could glean how I might match on the hostname, but as ever with RFCs, they’re pretty abstract. The only thing I can think to do is edit /etc/syslog.conf, but I wouldn’t know with what.

Any suggestions or insights would be gratefully accepted.



#StackBounty: echo_read_request_body doesn't work. How to log POST request body (nginx)?

Bounty: 100

I need to log POST data.
I have added this to my config:

 location / {
     echo_read_request_body;
     access_log     /var/log/nginx/domains/site.post.log postdata;
 }

And I also added this to the http section:

log_format  postdata '$time_local $request_body';

But in my log I can see only the local time and a dash:

23/Jul/2016:16:07:49 +0000 - 

What’s the problem?




See exact query execution in Oracle ADF

Challenge

You want to be able to see all the sql-statements and DML-statements that your ADF application is executing.

Solution

  1. Just add the following Java option to the run configuration: -Djbo.debugoutput=console
    While this will give you everything you need, you get a lot of clutter and it is difficult to find the information you are looking for.
  2. ADF Logger: you can change the log level of the different ADF components while simply running your application; no special run configuration or debug setting is needed.  The change is immediately active; no rerun or stopping of the JVM is needed.
    In case you want to see the SQL and DML statements, you need to set oracle.jbo.common.ADFLoggerDiagnosticImpl to FINEST; anything lower will not show the statements.

How to set this logger level?

Follow this procedure to set the logger level:

  1. Just run or debug your application.
  2. In the Log-pane you should see your application starting up.
  3. In this pane there is an Action menu.  Choose the “Oracle Diagnostic Logging” option.
  4. This will open a new tab, called “logging.xml”.
  5. Now walk through the tree until you find the

    oracle.jbo.common.ADFLoggerDiagnosticImpl

  6. Then select “Finest” in the Level column, or type “TRACE:32”.
  7. This change is active immediately. You should see the SQL statements appearing in the Log pane as you walk through your application.

(Screenshots ADFLogger_1, ADFLogger_2 and ADFLogger_3 omitted.)

Ref: blog

How to use logger in your Java application

How to log Messages in your application?

Java’s logging facility has two parts: a configuration file, and an API for using logging services. It is suitable for simple and moderate logging needs. Log entries can be sent to the following destinations, as either simple text or as XML:
· The console
· A file
· A stream
· Memory
· TCP socket on a remote host

The Level class defines seven levels of logging enlightenment:
FINEST, FINER, FINE, CONFIG, INFO, WARNING, SEVERE. ALL and OFF are defined values as well.

The levels in code may be modified as required:
· Upon startup, by using CONFIG to log configuration parameters
· During normal operation, by using INFO to log high-level “heartbeat” information
· When bugs or critical conditions occur, by using SEVERE.
· Debugging information might default to FINE, with FINER and FINEST used occasionally, according to user need.

There is flexibility in how logging levels can be changed at runtime, without the need for a restart:
· By simply changing the configuration file and calling LogManager.readConfiguration.
· By changing the level in the body of the code, using the logging API;
for example, one might automatically increase the logging level in response to unexpected events.

The logging levels, in descending order, are SEVERE, WARNING, INFO, CONFIG, FINE, FINER and FINEST. If we specify the log level as INFO, then all log messages at INFO and greater (WARNING, SEVERE) levels will be logged.
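This threshold behavior can be observed directly with Logger.isLoggable (the logger name here is arbitrary):

```java
import java.util.logging.*;

public class LevelDemo {
    public static void main(String[] args) {
        Logger log = Logger.getLogger("demo");
        log.setLevel(Level.INFO);
        // Messages below the configured level are filtered out;
        // messages at or above it pass through.
        System.out.println(log.isLoggable(Level.FINE));   // prints false
        System.out.println(log.isLoggable(Level.INFO));   // prints true
        System.out.println(log.isLoggable(Level.SEVERE)); // prints true
    }
}
```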

Levels are attached to the following items:
· An originating logging request (from a single line of code)
· A Logger (usually attached to the package containing the above line of code)
· A Handler (attached to an application)
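A Handler can also be attached programmatically rather than through the config file; here is a minimal sketch with a capturing handler (class and logger names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.*;

public class HandlerDemo {
    // Attach a capturing Handler, log two messages, and return what it saw.
    static List<String> capture() {
        Logger log = Logger.getLogger("handler.demo");
        log.setUseParentHandlers(false); // suppress the default console output
        log.setLevel(Level.ALL);
        List<String> seen = new ArrayList<>();
        Handler h = new Handler() {
            // A Handler receives every record the logger lets through.
            @Override public void publish(LogRecord r) {
                seen.add(r.getLevel() + ":" + r.getMessage());
            }
            @Override public void flush() {}
            @Override public void close() {}
        };
        log.addHandler(h);
        log.info("hello");
        log.fine("details");
        log.removeHandler(h);
        return seen;
    }

    public static void main(String[] args) {
        System.out.println(capture()); // prints [INFO:hello, FINE:details]
    }
}
```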

Here is an example of a logging configuration file:

# Properties file which configures the operation of the JDK logging facility.
# The system will look for this config file, first using a System property
# specified at startup:
#   >java -Djava.util.logging.config.file=myLoggingConfigFilePath
# If this property is not specified, then the config file is retrieved from
# its default location at:
#   JDK_HOME/jre/lib/logging.properties

# Global logging properties.
# ------------------------------------------
# The set of handlers to be loaded upon startup.
# Comma-separated list of class names.
# (? LogManager docs say no comma here, but JDK example has comma.)
handlers=java.util.logging.FileHandler, java.util.logging.ConsoleHandler

# Default global logging level.
# Loggers and Handlers may override this level.
.level=INFO

# Loggers
# ------------------------------------------
# Loggers are usually attached to packages.
# Here, the level for each package is specified.
# The global level is used by default, so levels
# specified here simply act as an override.
myapp.ui.level=ALL
myapp.business.level=CONFIG
myapp.data.level=SEVERE

# Handlers
# ------------------------------------------

# --- ConsoleHandler ---
# Override of global logging level.
java.util.logging.ConsoleHandler.level=SEVERE
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter

# --- FileHandler ---
# Override of global logging level.
java.util.logging.FileHandler.level=ALL

# Naming style for the output file:
# (The output file is placed in the directory
# defined by the "user.home" System property.)
java.util.logging.FileHandler.pattern=%h/java%u.log

# Limiting size of output file in bytes:
java.util.logging.FileHandler.limit=50000

# Number of output files to cycle through, by appending an
# integer to the base file name:
java.util.logging.FileHandler.count=1

# Style of output (Simple or XML):
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter

Here is an example of using the logging API:

package myapp.business;
import java.util.logging.*;
/**
* Demonstrate Java's logging facilities, in conjunction
* with a logging config file.
*/
public final class SimpleLogger {

  public static void main(String argv[]) {
    SimpleLogger thing = new SimpleLogger();
    thing.doSomething();
  }
  public void doSomething() {
    //Log messages, one for each level.
    //The actual logging output depends on the configured
    //level for this package. Calls to "inapplicable"
    //messages are inexpensive.
    fLogger.finest("this is finest");
    fLogger.finer("this is finer");
    fLogger.fine("this is fine");
    fLogger.config("this is config");
    fLogger.info("this is info");
    fLogger.warning("this is a warning");
    fLogger.severe("this is severe");

    //In the above style, the name of the class and
    //method which has generated a message is placed
    //in the output on a best-efforts basis only.
    //To ensure that this information is always
    //included, use the following "precise log"
    //style instead :
    fLogger.logp(Level.INFO, this.getClass().toString(), "doSomething", "blah");

    //For the very common task of logging exceptions, there is a
    //method which takes a Throwable :
    Throwable ex = new IllegalArgumentException("Some exception text");
    fLogger.log(Level.SEVERE, "Some message", ex);

    //There are convenience methods for exiting and
    //entering a method, which are at Level.FINER :
    fLogger.exiting(this.getClass().toString(), "doSomething");

    //Display user.home directory, if desired.
    //(This is the directory where the log files are generated.)
    //System.out.println("user.home dir: " + System.getProperty("user.home") );
  }

  // PRIVATE //

  //This logger will inherit the config of its parent, and add
  //any further config as an override. A simple style is to use
  //all config of the parent except, perhaps, for logging level.

  //This style uses a hard-coded literal and should likely be avoided:
  //private static final Logger fLogger = Logger.getLogger("myapp.business");

  //This style has no hard-coded literals, but forces the logger
  //to be non-static.
  //private final Logger fLogger=Logger.getLogger(this.getClass().getPackage().getName());

  //This style uses a static field, but hard-codes a class literal.
  //This is probably acceptable.
  private static final Logger fLogger = Logger.getLogger(SimpleLogger.class.getPackage().getName());

}