From: Rahul on
I have a log file to which multiple processes append single-line
status messages at intervals.

echo "foo long line" >> logfile

I guess the writing of a line takes a finite time. Is there a chance of a
corrupted line if another process tries to write its line to the log at the
same time?

echo "bar long line" >> logfile

If so then I'd have to implement some kind of lock.

Also, what about operations that read from the log? Is there ever a
chance they'll get a partial line while it is still being written?

Does it matter if logfile is on an NFS-mounted filesystem?

--
Rahul
From: Keith Keller on
On 2010-06-08, Rahul <nospam(a)nospam.invalid> wrote:
> I have a log file to which multiple processes append single-line
> status messages at intervals.
>
> echo "foo long line" >> logfile
>
> I guess the writing of a line takes a finite time. Is there a chance of a
> corrupted line if another process tries to write its line to the log at the
> same time?

Yes. This is why utilities like syslogd exist, but you could certainly
write your own program to accept logging events and output them to a
logfile (which is then not subject to corruption, since only one
process is writing to it). But I strongly believe that you should use
something that already exists.

> If so then I'd have to implement some kind of lock.

If your programs are doing frequent logging, locking the file is going
to be a pain, as one will be held up waiting for the other to release
the lock. And I believe you mentioned that these are existing programs
that you may or may not be able to modify; if not, then you won't be
able to implement locking anyway. It will be a lot easier to rig
something up to log the output to syslog than to implement locking
outside your running programs.
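
If you did end up locking anyway, here's a rough sketch of the idea
using the util-linux flock(1) utility (the filename and message are
made up, and every writer would have to go through the same wrapper):

```shell
# Open fd 9 on the logfile in append mode, take an exclusive lock on
# it, and only then write the line; the lock is released automatically
# when fd 9 is closed at the end of the subshell.
logfile=./demo.logfile
(
    flock -x 9                  # blocks until the exclusive lock is held
    echo "foo long line" >&9    # fd 9 was opened in append mode below
) 9>>"$logfile"
```

Readers could take a shared lock (flock -s) on the same file if they
wanted a consistent view while writers are active.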

> Also, what about operations that read from the log? Is there ever a
> chance they'll get a partial line while it is still being written?

Yes. In practice this is rare unless there is a lot of writing
occurring. If you're writing one line per second, you probably won't
see this very often, but if you are writing a program to read and take
action based on what it sees you will need to account for not receiving
a full line. (You could probably just toss a line that doesn't have a
newline.)
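
In shell that tossing comes almost for free, since read fails at EOF
before returning an unterminated fragment. A small sketch (the
filename is made up):

```shell
# Simulate a log whose last line is still being written (no newline
# yet on the final fragment).
printf 'first line\nsecond line\npartial' > sample.log

# read returns nonzero at EOF, so the trailing unterminated fragment
# never reaches the loop body and is silently dropped.
while IFS= read -r line; do
    printf 'complete: %s\n' "$line"
done < sample.log
```

This prints only the two complete lines; the "partial" fragment is
skipped.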

> Does it matter if logfile is on an NFS-mounted filesystem?

No, but locking on NFS is more complicated than on a local filesystem.

--keith

--
kkeller-usenet(a)wombat.san-francisco.ca.us
(try just my userid to email me)
AOLSFAQ=http://www.therockgarden.ca/aolsfaq.txt
see X- headers for PGP signature information

From: Robert Heller on
At Tue, 8 Jun 2010 21:36:29 -0700 Keith Keller <kkeller-usenet(a)wombat.san-francisco.ca.us> wrote:

>
> On 2010-06-08, Rahul <nospam(a)nospam.invalid> wrote:
> > I have a log file to which multiple processes append single-line
> > status messages at intervals.
> >
> > echo "foo long line" >> logfile
> >
> > I guess the writing of a line takes a finite time. Is there a chance of a
> > corrupted line if another process tries to write its line to the log at the
> > same time?
>
> Yes. This is why utilities like syslogd exist, but you could certainly
> write your own program to accept logging events and output them to a
> logfile (which is then not subject to corruption, since only one
> process is writing to it). But I strongly believe that you should use
> something that already exists.
>
> > If so then I'd have to implement some kind of lock.
>
> If your programs are doing frequent logging, locking the file is going
> to be a pain, as one will be held up waiting for the other to release
> the lock. And I believe you mentioned that these are existing programs
> that you may or may not be able to modify; if not, then you won't be
> able to implement locking anyway. It will be a lot easier to rig
> something up to log the output to syslog than to implement locking
> outside your running programs.

man 1 logger

logger -f logfile "foo long line"
program | logger -f logfile

>
> > Also, what about operations that read from the log? Is there ever a
> > chance they'll get a partial line while it is still being written?
>
> Yes. In practice this is rare unless there is a lot of writing
> occurring. If you're writing one line per second, you probably won't
> see this very often, but if you are writing a program to read and take
> action based on what it sees you will need to account for not receiving
> a full line. (You could probably just toss a line that doesn't have a
> newline.)
>
> > Does it matter if logfile is on an NFS-mounted filesystem?
>
> No, but locking on NFS is more complicated than on a local filesystem.
>
> --keith
>

--
Robert Heller -- Get the Deepwoods Software FireFox Toolbar!
Deepwoods Software -- Linux Installation and Administration
http://www.deepsoft.com/ -- Web Hosting, with CGI and Database
heller(a)deepsoft.com -- Contract Programming: C/C++, Tcl/Tk


From: pk on
Rahul wrote:

> I have a log file to which multiple processes append single-line
> status messages at intervals.
>
> echo "foo long line" >> logfile
>
> I guess the writing of a line takes a finite time. Is there a chance of a
> corrupted line if another process tries to write its line to the log at
> the same time?
>
> echo "bar long line" >> logfile
>
> If so then I'd have to implement some kind of lock.
>
> Also, what about operations that read from the log? Is there ever a
> chance they'll get a partial line while it is still being written?
>
> Does it matter if logfile is on an NFS-mounted filesystem?

In short, appends opened with O_APPEND are atomic for small writes, so
each echo should land as one intact line on a local filesystem. Over
NFS, however, I wouldn't bet that everything works as expected, since
O_APPEND is not reliably atomic there. See this link:

http://stackoverflow.com/questions/1154446/is-file-append-atomic-in-unix
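
A rough way to see this on a local filesystem (not a proof; it assumes
the shell emits each short echo in a single write(), which common
shells do for lines well under PIPE_BUF):

```shell
# 50 concurrent writers, each appending one ~110-byte line via >>,
# which opens the file with O_APPEND. On a local filesystem every
# line should arrive whole and unmixed with the others.
log=./append_test.log
: > "$log"
for i in $(seq 1 50); do
    ( echo "writer $i $(printf 'x%.0s' $(seq 1 100))" >> "$log" ) &
done
wait
wc -l < "$log"    # 50 complete lines if nothing interleaved
```

If appends were not atomic you would expect occasional split or
mangled lines here; in practice every line comes out whole.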

From: Rahul on
Robert Heller <heller(a)deepsoft.com> wrote in
news:efGdna24vP4Py5LRnZ2dnUVZ_gCdnZ2d(a)posted.localnet:

> man 1 logger
>
> logger -f logfile "foo long line"
> program | logger -f logfile

Something's strange. I get no output in logfile if I use these logger
commands. logger does return 0 in $?, so the exit status seems OK.

Could it be because I have moved to rsyslogd instead of syslogd?

[root(a)euclid ~]# service rsyslog status
rsyslogd (pid 10415) is running...
rklogd (pid 10419) is running...
[root(a)euclid ~]# service syslog status
syslogd is stopped
klogd is stopped

I wonder if rsyslogd has a different logger command.



--
Rahul