From: Andreas Moroder on 5 Aug 2010 06:03

Hello,

today we had a big problem with our server. We discovered that the cause was a shortage of file handles. I looked at /proc/sys/fs/file-nr and found that the limit of 131070 allocated file handles had been reached.

Can anyone please tell me how such a large number of file handles can be in use while lsof | wc -l tells me there are "only" 48110 open files?

I changed file-max to 180000, but that cannot be the real solution. Is there a way to get back the "lost" file handles?

Thanks
Andreas
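[Editor's note, not part of the thread: per proc(5), /proc/sys/fs/file-nr holds three numbers: allocated file handles, allocated-but-unused handles, and the fs.file-max ceiling. A minimal C sketch that reads and prints them:]

#include <stdio.h>

int main(void)
{
    unsigned long allocated, unused, max;
    FILE *f = fopen("/proc/sys/fs/file-nr", "r");

    if (!f) {
        perror("fopen /proc/sys/fs/file-nr");
        return 1;
    }
    /* The file contains three whitespace-separated counters. */
    if (fscanf(f, "%lu %lu %lu", &allocated, &unused, &max) != 3) {
        fprintf(stderr, "unexpected /proc/sys/fs/file-nr format\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    printf("allocated=%lu free=%lu max=%lu in-use=%lu\n",
           allocated, unused, max, allocated - unused);
    return 0;
}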
From: John Reiser on 5 Aug 2010 12:07

> the limit of 131070 used filehandles was reached.
>
> Can anyone please tell me how such a big number of used filehandles can
> be reached while lsof | wc -l tells me that there are "only" 48110 open
> files ?

fork() increases the number of open handles [file descriptors] but not the number of open files [inodes]. pipe() creates 2 descriptors but only one file. open() on the same existing file N times [for read only: O_RDONLY] creates N descriptors for just one file.

> I changed file-max to 180000, but this can not be the solution. Is there
> a way to get back the "lost" filehandles ?

close() them. Find the process, nest of processes, or even unrelated processes that have many open descriptors, and kill those processes. See also O_CLOEXEC in <fcntl.h>, and the "<&-" shell syntax. Many shell commands (especially user-written programs) do not use stdin, yet inherit an open stdin from their parent, thus creating another file descriptor for the same stdin (which may be /dev/ttyN, etc.)
--
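[Editor's note, a sketch of my own rather than John's code, illustrating the descriptor-vs-file distinction he describes: pipe() yields two descriptors for one underlying file, fork() duplicates the whole descriptor table, and O_CLOEXEC marks a descriptor to be closed automatically across exec(). The path /etc/hostname is just a placeholder.]

#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];

    /* One pipe (one open file in the kernel), but two descriptors. */
    if (pipe(fds) == -1) {
        perror("pipe");
        return 1;
    }

    /* O_CLOEXEC: this descriptor closes automatically on exec(),
     * so children that exec() another program do not inherit it. */
    int fd = open("/etc/hostname", O_RDONLY | O_CLOEXEC);
    if (fd == -1)
        perror("open");

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: the inherited copies of fds[0], fds[1], and fd all
         * refer to the same underlying open files as the parent's,
         * so the descriptor count doubled while the file count did
         * not. Close what this process does not use, as John advises. */
        close(fds[0]);
        close(fds[1]);
        if (fd != -1)
            close(fd);
        _exit(0);
    }

    close(fds[0]);
    close(fds[1]);
    if (fd != -1)
        close(fd);
    if (pid > 0)
        waitpid(pid, NULL, 0);
    return 0;
}

[On the shell side, the "<&-" syntax John mentions closes stdin for a command, e.g. "somecmd <&-", so the command does not inherit yet another descriptor for the parent's terminal.]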