
Welcome to Gyazo: Seriously Instant Screen-Grabbing

Gyazo lets you instantly grab the screen and upload the image to the web. You can easily share the images in chat, on Twitter, on a blog, on Tumblr, etc. Available for Windows, Mac, and Linux.

Comments (1)

Patrice Neff May 16, 2011

Finally, a no-frills, simple screenshot tool. Works great on Windows.

How Many Open Files?

Topic: GNU/Linux   Date: 2003-07-30

We knew the answer to this question once, back when the world was young and full of truth. Without hesitation, we'd have spouted "Just take the output of lsof | wc -l!" And it's true enough, in a general sort of way. But if you asked me the same question now, my answer would be: "Why do you want to know?"

Are you, for instance, attempting to determine whether the number of open files is exceeding the limit set in the kernel? Then the output of lsof will be practically useless.

Let's back up...

On *nix systems, there is a limit set in the kernel on how many open file descriptors are allowed on the system. This may be compiled in, or it may be tunable on the fly. In Linux, the value of this parameter can be read from and written to the proc filesystem.

[root@srv-4 proc]# cat /proc/sys/fs/file-max 
52427

On this system, 52,427 open file descriptors are permitted. We are unlikely to run out. If we wanted to change it, we'd do something like this:

[root@srv-4 proc]# echo "104854" > /proc/sys/fs/file-max 
[root@srv-4 proc]# cat /proc/sys/fs/file-max 
104854

(Warning: change kernel parameters on the fly only with great caution and good cause.)

But how do we know how many file descriptors are being used?

[root@srv-4 proc]# cat /proc/sys/fs/file-nr
3391    969     52427
column 1: total allocated file descriptors (i.e., the number allocated since boot)
column 2: total free allocated file descriptors
column 3: maximum open file descriptors (the file-max value)

The number of open file descriptors is column 1 minus column 2; 3391 - 969 = 2422 in this case. (Note: we have read contradictory definitions of the second column in newsgroups. Some people say it is the number of used allocated file descriptors - just the opposite of what we've stated here. Luckily, we can prove that the second number is free descriptors. Just launch an xterm or any new process and watch the second number go down.)
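
If you just want that subtraction done for you, awk can do it in one line (a minimal sketch, assuming the three-column file-nr format shown above; with the numbers above it prints 2422):

[root@srv-4 proc]# awk '{print $1 - $2}' /proc/sys/fs/file-nr
2422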

What about the method we mentioned earlier, running lsof to get the number of open files?

[root@srv-4 proc]# lsof | wc -l
4140

Oh my. There is quite a discrepancy here. The lsof utility reports 4140 open files, while /proc/sys/fs/file-nr says, if our math is correct, 2422. So how many open files are there?

What is an open file?

Is an open file a file that is being used, or is it an open file descriptor? A file descriptor is a data structure used by a program to get a handle on a file, the best known being 0, 1, and 2 for standard input, standard output, and standard error. The file-max kernel parameter refers to open file descriptors, and file-nr gives us the current number of open file descriptors. But lsof lists all open files, including files which are not using file descriptors - such as current working directories, memory-mapped library files, and executable text files. To illustrate, let's examine the difference between the output of lsof for a given pid and the file descriptors listed for that pid in /proc.

Pick a PID, any PID

Let's look at this process:

usr-4    2381  0.0  0.5  5168 2748 pts/14   S    14:42   0:01 vim openfiles.html
[root@srv-4 usr-4]# lsof | grep 2381
vim        2381 usr-4  cwd    DIR        3,8    4096   2621517 /n
vim        2381 usr-4  rtd    DIR        3,5    4096         2 /
vim        2381 usr-4  txt    REG        3,2 2160520     34239 /usr/bin/vim
vim        2381 usr-4  mem    REG        3,5   85420    144496 /lib/ld-2.2.5.so
vim        2381 usr-4  mem    REG        3,2     371     20974 /usr/lib/locale/LC_IDENTIFICATION
vim        2381 usr-4  mem    REG        3,2   20666    192622 /usr/lib/gconv/gconv-modules.cache
vim        2381 usr-4  mem    REG        3,2      29     20975 /usr/lib/locale/LC_MEASUREMENT
vim        2381 usr-4  mem    REG        3,2      65     20979 /usr/lib/locale/LC_TELEPHONE
vim        2381 usr-4  mem    REG        3,2     161     19742 /usr/lib/locale/LC_ADDRESS
vim        2381 usr-4  mem    REG        3,2      83     20977 /usr/lib/locale/LC_NAME
vim        2381 usr-4  mem    REG        3,2      40     20978 /usr/lib/locale/LC_PAPER
vim        2381 usr-4  mem    REG        3,2      58     51851 /usr/lib/locale/LC_MESSAGES/SYSL
vim        2381 usr-4  mem    REG        3,2     292     20976 /usr/lib/locale/LC_MONETARY
vim        2381 usr-4  mem    REG        3,2   22592     99819 /usr/lib/locale/LC_COLLATE
vim        2381 usr-4  mem    REG        3,2    2457     20980 /usr/lib/locale/LC_TIME
vim        2381 usr-4  mem    REG        3,2      60     35062 /usr/lib/locale/LC_NUMERIC
vim        2381 usr-4  mem    REG        3,2  290511     64237 /usr/lib/libncurses.so.5.2
vim        2381 usr-4  mem    REG        3,2   24565     64273 /usr/lib/libgpm.so.1.18.0
vim        2381 usr-4  mem    REG        3,5   11728    144511 /lib/libdl-2.2.5.so
vim        2381 usr-4  mem    REG        3,5   22645    144299 /lib/libcrypt-2.2.5.so
vim        2381 usr-4  mem    REG        3,5   10982    144339 /lib/libutil-2.2.5.so
vim        2381 usr-4  mem    REG        3,5  105945    144516 /lib/libpthread-0.9.so
vim        2381 usr-4  mem    REG        3,5  169581    144512 /lib/libm-2.2.5.so
vim        2381 usr-4  mem    REG        3,5 1344152    144297 /lib/libc-2.2.5.so
vim        2381 usr-4  mem    REG        3,2  173680    112269 /usr/lib/locale/LC_CTYPE
vim        2381 usr-4  mem    REG        3,5   42897    144321 /lib/libnss_files-2.2.5.so
vim        2381 usr-4    0u   CHR     136,14                16 /dev/pts/14
vim        2381 usr-4    1u   CHR     136,14                16 /dev/pts/14
vim        2381 usr-4    2u   CHR     136,14                16 /dev/pts/14
vim        2381 usr-4    4u   REG        3,8   12288   2621444 /n/.openfiles.html.swp
[root@srv-4 usr-4]# lsof | grep 2381 | wc -l
30
[root@srv-4 fd]# ls -l /proc/2381/fd/
total 0
lrwx------    1 usr-4   usr-4         64 Jul 30 15:16 0 -> /dev/pts/14
lrwx------    1 usr-4   usr-4         64 Jul 30 15:16 1 -> /dev/pts/14
lrwx------    1 usr-4   usr-4         64 Jul 30 15:16 2 -> /dev/pts/14
lrwx------    1 usr-4   usr-4         64 Jul 30 15:16 4 -> /n/.openfiles.html.swp



Quite a difference. This process has only four open file descriptors, but there are thirty open files associated with it. Among the open files not using file descriptors are the library files, the program itself (executable text), the working directories, and so on, as listed above. These files are accounted for elsewhere in the kernel data structures (cat /proc/PID/maps to see the libraries, for instance), but they are not using file descriptors and therefore do not count against the kernel's file descriptor maximum.
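
If what you are really after is the number of file descriptors in use system-wide, counting the symlinks under each process's /proc/PID/fd directory comes closer than lsof | wc -l. A rough sketch (run it as root so every process is visible, and expect it to differ slightly from file-nr, since descriptors duplicated with dup() or inherited across fork() share a single kernel file handle):

[root@srv-4 proc]# find /proc/[0-9]*/fd -type l 2>/dev/null | wc -l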

Linux Increase The Maximum Number Of Open Files / File Descriptors (FD)

How do I increase the maximum number of open files under CentOS Linux? How do I open more file descriptors under Linux?

The ulimit command provides control over the resources available to the shell and/or to processes started by it, on systems that allow such control. The system-wide maximum number of open file descriptors can be displayed with the following command (log in as the root user).

Command To List Number Of Open File Descriptors

Use the following command to display the maximum number of open file descriptors:
# cat /proc/sys/fs/file-max
Output:

75000

That is the system-wide limit on open files, not what a single user can open in one login session; per-user limits are controlled with ulimit, as shown below. To see the hard and soft values, issue the commands as follows:
# ulimit -Hn
# ulimit -Sn

To see the hard and soft values for the httpd or oracle user, issue the command as follows:
# su - username
In this example, su to the oracle user, enter:
# su - oracle
$ ulimit -Hn
$ ulimit -Sn
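
Within a session, a user can raise their own soft limit up to (but not beyond) the hard limit using ulimit itself. A hedged example, assuming the hard limit is at least 4096:

$ ulimit -Sn          # show the current soft limit
$ ulimit -Sn 4096     # raise only the soft limit, up to the hard limit
$ ulimit -Sn
4096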

System-wide File Descriptors (FD) Limits

The number of concurrently open file descriptors allowed throughout the system can be changed via the /etc/sysctl.conf file under Linux operating systems.

The Maximum Number Of Files Was Reached, How Do I Fix This Problem?

Many applications, such as the Oracle database or the Apache web server, need this limit to be quite a bit higher. You can increase the maximum number of open files by setting a new value in the kernel variable /proc/sys/fs/file-max as follows (log in as root):
# sysctl -w fs.file-max=100000
The above command forces the limit to 100,000 files. To make the setting persist across reboots, edit the /etc/sysctl.conf file and add the following line:
# vi /etc/sysctl.conf
Append a config directive as follows:
fs.file-max = 100000
Save and close the file. Users need to log out and log back in for the changes to take effect, or just type the following command:
# sysctl -p
Verify your settings with the command:
# cat /proc/sys/fs/file-max
OR
# sysctl fs.file-max

User Level FD Limits

The above procedure sets the system-wide file descriptor (FD) limit. However, you can apply specific limits to the httpd user (or any other user) by editing the /etc/security/limits.conf file, enter:
# vi /etc/security/limits.conf
Set httpd user soft and hard limits as follows:
httpd soft nofile 4096
httpd hard nofile 10240

Save and close the file. To see limits, enter:
# su - httpd
$ ulimit -Hn
$ ulimit -Sn
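
One caveat: on many distributions the values in /etc/security/limits.conf only take effect if the pam_limits module is enabled for the relevant login path (console login, su, sshd). A typical line, shown here only as an illustration of what to look for in /etc/pam.d/login or /etc/pam.d/sshd, is:

session required pam_limits.so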

Bug#600586: open-vm-tools: vmxnet_init_ring alloc_page failed with kernel 2.6.32-25

Guido Bozzetto
Mon, 18 Oct 2010 05:09:21 -0700

Package: open-vm-tools
Version: 2010.06.16-268169-3
Severity: normal

After the last upgrade of the system, the network does not start. The error that appears during execution of the networking startup script is:

eth0: vmxnet_init_ring alloc_page failed.
SIOCSIFFLAGS: Cannot allocate memory

With the old kernel, 2.6.32-23 (linux-image-2.6.32-5-686), the vmxnet module works correctly. I have the "VMXNET 2 (Enhanced)" network adapter configured in the virtual guest.

The "Flexible" network adapter, which uses the pcnet32 module, works correctly.

-- System Information:
Debian Release: squeeze/sid
  APT prefers testing
  APT policy: (560, 'testing'), (545, 'testing-proposed-updates'), (540, 'testing'), (460, 'stable'), (445, 'proposed-updates'), (440, 'stable'), (50, 'unstable')
Architecture: i386 (i686)

Kernel: Linux 2.6.32-5-686 (SMP w/1 CPU core)
Locale: LANG=C, LC_CTYPE=C (charmap=ANSI_X3.4-1968)
Shell: /bin/sh linked to /bin/bash

Versions of packages open-vm-tools depends on:
ii  libc6                         2.11.2-6   Embedded GNU C Library: Shared lib
ii  libfuse2                      2.8.4-1.1  Filesystem in USErspace library
ii  libgcc1                       1:4.4.5-2  GCC support library
ii  libglib2.0-0                  2.24.2-1   The GLib library of C routines
ii  libicu44                      4.4.1-6    International Components for Unico
ii  libstdc++6                    4.4.5-2    The GNU Standard C++ Library v3

Versions of packages open-vm-tools recommends:
ii  ethtool              1:2.6.34-3          display or change Ethernet device 
ii  open-vm-source       2010.06.16-268169-3 Source for VMware guest systems dr
ii  zerofree             1.0.1-2             zero free blocks from ext2/3 file-

Versions of packages open-vm-tools suggests:
ii  open-vm-toolbox      2010.06.16-268169-3 tools and components for VMware gu

-- Configuration Files:
/etc/vmware-tools/tools.conf changed:


-- no debconf information

Comments (1)

Patrice Neff Nov 1, 2010

I was able to fix this issue by running vmware-tools-upgrader.