You rotate your logs and then find the log files are empty. It appears no more data is getting written to them.
Having done server maintenance for several years, I can say this is a common issue, and it is not specific to the app writing to the log file, such as
forever. The culprit is usually this:
While you appear to be logging to a file, you are really logging to a file descriptor. After log rotation by an external application, the application continues to log to the same file descriptor, but that file descriptor is no longer connected to the file: log rotation re-created the file. So while the log file may appear empty, your disk usage may well continue to grow.
logrotate and copytruncate
The copytruncate option to
logrotate is one solution. It is designed to work around the file-descriptor-vs-file issue described above by leaving the descriptor-to-file relationship intact: instead of renaming the file, the contents are first copied to the rotated location and then the original file is truncated back to an empty state. This works, but feels like a workaround.
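As a sketch, a logrotate stanza using copytruncate might look like this (the log path and rotation schedule here are assumptions, not from the original post):

```
# /etc/logrotate.d/myapp (hypothetical path and app name)
/var/log/myapp.log {
    daily
    rotate 7
    compress
    # copy the file, then truncate it in place, so the app's file
    # descriptor keeps pointing at the (now empty) original file
    copytruncate
}
```

Note that logrotate's own documentation warns there is a small window between the copy and the truncate during which log lines can be lost, which is part of why this feels like a workaround.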
Restart the app
logrotate and similar tools can send a command to restart the app during log rotation so that the filename-vs-file-descriptor relationship gets refreshed. This works too. But if, like me, you are also on-call to respond to problems with apps restarting at midnight, you would probably prefer a solution that doesn't touch your application in the middle of the night. (What could go wrong with simply restarting an app in the middle of the night?)
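For reference, this restart is typically done in a postrotate block; a minimal sketch, where the service name is hypothetical:

```
# /etc/logrotate.d/myapp (hypothetical path and service name)
/var/log/myapp.log {
    daily
    rotate 7
    postrotate
        # refresh the file descriptor by restarting the service
        systemctl restart myapp >/dev/null 2>&1 || true
    endscript
}
```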
Build log rotation into your app
You could build log rotation into whatever app is doing the logging, but this is a general problem. Does it make sense for every single server or process supervisor to roll its own log rotation solution? Surely there's a more general solution to this.
Log directly from your app over the network to syslog or a 3rd-party service
Logging directly from the app over the network avoids the direct use of log files, but most of the options I've looked at for this in Node.js share the same design flaw: they don't (or didn't recently) handle the "sad path" of the remote logging server being unavailable. If they coped with it at all, the solution was to put buffered records into an in-memory queue of unlimited size. Given enough logging or a long enough outage, memory would eventually fill up and things would crash. Limiting the buffer queue size would address that issue, but it illustrates a point: designing robust network services is hard. You are likely busy building and maintaining your main application. Do you want to also be responsible for the memory, latency and CPU concerns of a network logging client embedded in your application?
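To illustrate the fix, here is a minimal Node.js sketch of a size-capped log buffer. The names are hypothetical and this is not the API of any of the modules discussed below; it only shows the principle: during a long outage, old records get dropped instead of memory growing without bound.

```javascript
// BoundedLogBuffer: holds records while the remote logging service is
// unreachable. Unlike an unbounded queue, it caps memory use by
// discarding the oldest record once maxSize is reached.
class BoundedLogBuffer {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.records = [];
    this.dropped = 0; // how many records were discarded under pressure
  }

  push(record) {
    if (this.records.length >= this.maxSize) {
      this.records.shift(); // drop the oldest record, not the newest
      this.dropped += 1;
    }
    this.records.push(record);
  }

  // Called when the connection recovers: hand back buffered records.
  drain() {
    const out = this.records;
    this.records = [];
    return out;
  }
}

module.exports = { BoundedLogBuffer };
```

Dropping logs is not great, but it is a deliberate, bounded trade-off rather than an eventual crash of the whole application.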
For reference, here are the related bug reports I've opened about this issue:
- in node-bunyan-syslog: "Lack of queue size management could lead to memory exhaustion, crash" (#24)
- in logentries-stream: "If Logentries connection fails, unbounded memory growth occurs, causing potential crash and loss of logs" (#1); reported to be fixed
- in winston-logentries-simple: "Logs lost if LogEntries is unreachable" (#2); the author responded that "I used this in a project where losing logs was OK"
- in logentries-client: "Bad DNS or Bad API tokens cause silent failure, infinite queue growth" (#4); a fix was reported
If you are using a module that logs over the network directly, you might wish to check how it handles the possibility that the network or logging service is down.
Log to STDOUT and STDERR, use syslog
If your application simply logs to STDOUT and STDERR instead of a log file, then you've eliminated the problematic direct-use of logging files and created a foundation for something that specializes in logging to handle the logs.
I recommend reading the post Logs are Streams, Not Files, which makes a good case for why you should log to STDOUT and shows how you can then pipe the logs to
rsyslog (or another syslog server) from there. Syslog servers specialize in being good at logging: they can do things like forward your logs to a third-party service like Logentries, and handle potential network issues there, outside your application.
Logging to STDOUT and STDERR is also considered a best practice in the App Container Spec. I expect to see more of this logging pattern as containerization catches on.
Log to STDOUT, use systemd
systemd can do process supervision (like
pm2 in the Node.js ecosystem), including user-owned services, not just root. It is also designed to capture logging that services send to STDOUT and STDERR, and has a powerful
journalctl tool built in. There's no requirement that your process supervisor be written in the same language as your app.
systemd is included with Ubuntu starting with 16.04 and is already standard in Fedora. CoreOS uses systemd inside its containers to handle process supervision and logging, in part because it starts in under a second.
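As a sketch, a user-level unit for a Node.js app might look like this (the paths and names are assumptions; adjust them to your setup):

```
# ~/.config/systemd/user/myapp.service (hypothetical path and name)
[Unit]
Description=My Node.js app

[Service]
ExecStart=/usr/bin/node /home/me/apps/myapp.js
Restart=on-failure
# stdout/stderr are captured by the journal by default;
# shown explicitly here for clarity
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=default.target
```

You would then start it with systemctl --user start myapp and follow its logs with journalctl --user -u myapp -f, with no log file handling in the app itself.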
How to Log to STDOUT when your app is designed to log to a file?
If you aren't using systemd as your process manager, logging to STDOUT may be tricky. Combining pipes with backgrounded services can be hard, and some apps and process monitors only support options for logging to a file, not to STDOUT.
Bash process substitution may help you in that case. The syntax of process substitution in bash is a greater-than or less-than symbol immediately followed by a parenthesis. Within the parens you put the command that you'd like to pipe STDOUT to.
Bash will then substitute a temporary path to a file descriptor that pipes the content on to your command, like a named pipe. Here's an example using forever:
forever -a -l >(logger -t myapp) start ./myapp.js
When you run
forever list, you'll see an entry in the
logfile column that looks like
/dev/fd/63. Just like with a regular log file, this syntax works even when
start runs the app in the background.
logger is a
syslogd client that forwards the logs on to
syslogd in this case.
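To see process substitution in action without forever or syslog, here's a self-contained bash sketch; tr stands in for logger, and all the names are illustrative:

```shell
#!/usr/bin/env bash
# Bash replaces >(cmd) with a path like /dev/fd/63; anything written
# to that path is piped to cmd's stdin, much like a named pipe.
tmpfile=$(mktemp)

# `tr` stands in for `logger -t myapp`: it receives whatever the
# "app" (echo, here) writes, and records it, uppercased, to a file.
echo "rotate me" > >(tr 'a-z' 'A-Z' > "$tmpfile")

sleep 1            # give the substituted process a moment to finish
cat "$tmpfile"     # prints: ROTATE ME
rm -f "$tmpfile"
```

Note that the substituted process runs asynchronously, hence the sleep; in the forever/logger case that doesn't matter, because the pipe stays open for the life of the app.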