lsof shows lots of the same file with a device called 0,0

We have begun experiencing outages on a Java application with 'Too many open files' against a soft limit of 2,000. On closer inspection we see hundreds of open files that have a device name of 0,0 and are all roughly the same size.
I suspect the device name is significant but can't find anything in the documentation. Any ideas?
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 20381818 vteam 965r VREG 0,0 459374 0 /PRDdbcls_share (dbcls:/u09)
java 20381818 vteam 966r VREG 0,0 458866 0 /PRDdbcls_share (dbcls:/u09)
java 20381818 vteam 967r VREG 0,0 459180 0 /PRDdbcls_share (dbcls:/u09)
Thanks,
EddieK

For future reference.
A device number of 0,0 indicates that this is a remote mount, so it is not significant in itself. What we did in the end was run 'procfiles -n <pid>' against the Java process, and that gave us the open file names. One caveat: the command was very slow. If you run it on the host where the files are local it runs much faster.
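For anyone hitting the same thing, this is roughly the pair of commands involved (the PID is the one from the lsof output above; lsof's -N option restricts the listing to NFS files and may not be available in every build):
lsof -N -p 20381818        # list only the NFS files the Java process holds open
procfiles -n 20381818      # AIX: resolve each descriptor to its file name (slow when the files are remote)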
I hope this helps someone if they come across the same issue.
EddieK

Related

fsck finds Multiply-claimed block(s) and files are shared with badblock inode #1

I have an LVM volume on a hard drive that holds all my media for use by Kodi. Occasionally (about once a week) the media becomes inaccessible, and attempting to remount the device with sudo mount -a results in an Input/Output error.
The suggestion from various sources was that the drive contains bad blocks, so I ran fsck -cc /dev/icybox/media to do a non-destructive read-write badblocks check.
It took 5 days, but it finally finished: good news, no read or write errors, but a couple of hundred corrupted blocks.
Here is some of the output:
# fsck -cc -V /dev/icybox/media
fsck from util-linux 2.34
[/usr/sbin/fsck.ext4 (1) -- /mnt/icybox] fsck.ext4 -cc /dev/mapper/icybox-media
e2fsck 1.45.5 (07-Jan-2020)
Checking for bad blocks (non-destructive read-write test)
Testing with random pattern: done
/dev/mapper/icybox-media: Updating bad block inode.
Pass 1: Checking inodes, blocks, and sizes
Running additional passes to resolve blocks claimed by more than one inode...
Pass 1B: Rescanning for multiply-claimed blocks
Multiply-claimed block(s) in inode 55640069: 849596509
Multiply-claimed block(s) in inode 55640261: 448514694
Multiply-claimed block(s) in inode 55641058: 465144485
Multiply-claimed block(s) in inode 55641147: 470406248
...and lots more Multiply-claimed block(s)
Then this:
Pass 1C: Scanning directories for inodes with multiply-claimed blocks
Pass 1D: Reconciling multiply-claimed blocks
(There are 190 inodes containing multiply-claimed blocks.)
File /TV Shows/Arrested Development/Arrested Development - Season 1/Arrested Development - 119 - Best Man for the Gob.mkv (inode #55640069, mod time Sat May 5 11:19:03 2018)
has 1 multiply-claimed block(s), shared with 1 file(s):
<The bad blocks inode> (inode #1, mod time Thu May 20 22:36:40 2021)
Clone multiply-claimed blocks<y>? yes
There are a bunch more files saying they have 1 multiply-claimed block shared with 1 file on inode #1. Should I say yes to the clone question?
All the files shown are shared with the bad-block inode #1; according to https://unix.stackexchange.com/questions/198673/why-does-have-the-inode-2, inode #1 stores the bad blocks.
So I have a bunch of questions:
How can these files be sharing blocks with the badblocks list?
Is the badblocks list incorrect/corrupted?
Is there a way to clear the badblocks list and do another scan, to start over and fill it correctly? (I sketch my best guess at the bottom of this post.)
I am not too bothered about losing the data of individual media files so long as I can get a list of them to re-download.
P.S. Not sure if it is relevant, but I had run the same fsck command before this and it was interrupted by a power outage, so I don't know whether that could have corrupted the bad-block inode #1.
I ran it another time and it got to about 70% before something went wrong and every block started returning a read error (I think it became an Input/Output error again), so I am worried all those blocks were added to the badblocks list. I cancelled the process when I noticed, so that run didn't finish.
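In case it helps with question 3, the closest I have found so far is the following (an untested sketch based on the dumpe2fs and e2fsck man pages; the filesystem has to be unmounted first):
umount /mnt/icybox                                 # take the filesystem offline
dumpe2fs -b /dev/mapper/icybox-media               # list the blocks currently recorded in the bad-block inode
e2fsck -f -L /dev/null /dev/mapper/icybox-media    # -L replaces the bad-block list with the (empty) file given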
Thanks for any help and answers

understanding TYPE in lsof output

I opened a file through Python, then ran lsof on the Python process. The output of lsof has the following line:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python 15855 inaflash 3w REG 0,25 0 4150810088 /home/inaflash/he.txt
The thing is, it shows 3w, which means the file is open for writing. But I actually opened the file as follows:
a = open('he.txt','r')
I read that w means the file is open for writing. Can anyone help me understand why it's w instead of r?
I tried the same code in Python 3 and my file is opened in read mode.
Are you sure you are looking at the same file and the same Python process?
Maybe you opened the file in write mode somewhere else in your code and forgot to close it.
Edit: I also tried it in Python 2, same result (read mode).
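A quick way to reproduce the check (assumes a file named he.txt exists in the current directory):
python3 -c "import time; a = open('he.txt', 'r'); time.sleep(60)" &   # hold the file open read-only
lsof -p $! | grep he.txt                                              # the FD column should end in r, e.g. 3r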

Systrace - error truncating /sys/kernel/debug/tracing/set_ftrace_filter: No such device (19) unable to start

I am currently working on a project which aims to find out what the system is doing behind a series of user interactions on the Android UI. For example, if a user clicks the send button in Facebook Messenger, the measured response time for that action is 1.2 seconds. My goal is to figure out what those 1.2 seconds consist of. My friend suggested that I take a look at 'Systrace'.
However, when I tried systrace on my HTC One M8, I encountered some problems.
First, error opening /sys/kernel/debug/tracing/options/overwrite - no such file or directory. I solved this by building ftrace support into the kernel, following http://opensourceforu.com/2010/11/kernel-tracing-with-ftrace-part-1/, and running mount -t debugfs none /sys/kernel/debug. After that I could find the tracing directory. I also set ro.debuggable=1 in the default.prop file within the ramdisk and flashed the boot.img to my phone.
Now I have another problem: when I run python systrace.py --time=10 -o mynewtrace.html sched gfx view wm, the following error (19) pops up: error truncating /sys/kernel/debug/tracing/set_ftrace_filter: No such device (19). I don't know whether the way I built kernel support for systrace is incorrect or whether something is missing.
Could anyone help me out with this problem, please?
I think I have worked out the solution. My environment is Ubuntu 16.04 + HTC One M8. The steps are as follows:
Open a terminal and enter: adb shell
Then run (1) su and (2) mount -t debugfs none /sys/kernel/debug. Now you should be able to see many directories under /sys/kernel/debug/. (You may cd into /sys/kernel/debug to confirm this.)
Open a new terminal and enter dd if=/dev/block/platform/msm_sdcc.1/by-name/boot of=/sdcard/boot.img to dump the boot.img kernel image from your device.
Use Android Image Kitchen to unpack the boot.img and find default.prop within the ramdisk folder. Change ro.debuggable=0 to ro.debuggable=1, then repack the boot.img and flash it to the boot partition of your device.
Once the device boots, enter adb root in a terminal; a message like restarting adbd as root may pop up. Disconnect the USB cable and connect it again.
cd to the systrace folder, e.g. ~/androidSDK/platform-tools/systrace, and run:
python systrace.py --time=10 -o mynewtrace.html sched gfx view wm
Now you should be able to generate your own systrace files.
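If the set_ftrace_filter error persists, it may be worth verifying that the kernel was really built with dynamic function tracing (set_ftrace_filter only exists when CONFIG_DYNAMIC_FTRACE is enabled); a quick check, assuming the debugfs mount from step 1:
adb shell cat /sys/kernel/debug/tracing/available_tracers   # "function" should be listed here
adb shell zcat /proc/config.gz | grep -i ftrace             # only works if the kernel exposes its config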

How to solve "Device 0 (vif) could not be connected. Hotplug scripts not working."?

When starting a virtual machine, xm shows:
Device 0 (vif) could not be connected. Hotplug scripts not working.
Why does xm show this, and how can I solve it?
From the Xen wiki:
Error: Device 0 (vif) could not be connected. Hotplug scripts not working.
This problem is often caused by not having the "xen-netback" driver loaded in the dom0 kernel.
The hotplug scripts are located in /etc/xen/scripts by default, and are labeled with the prefix vif-*. Those scripts log to /var/log/xen/xen-hotplug.log, and more detailed information can be found there.
http://wiki.xen.org/wiki/Xen_Common_Problems
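A quick way to check both the driver and the hotplug scripts (assuming a typical dom0 layout):
lsmod | grep xen_netback                 # is the backend driver loaded in dom0?
modprobe xen-netback                     # load it if it is missing
tail -f /var/log/xen/xen-hotplug.log     # watch the hotplug scripts while starting the guest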
As weird as it sounds, I encountered this error in a situation where the sum of the VM memory I had assigned left dom0 with too little memory to complete the addition of a virtual interface. Sizing down the virtual machines was the solution.
I agree with PypeBros. I once put a new entry in /etc/fstab to mount /tmp as tmpfs and allocated 10G of memory to it. After that the Xen guest wouldn't start and gave me this error:
Error: Device 0 (vif) could not be connected. Hotplug scripts not working.
It worked fine once I removed the /tmp tmpfs entry, so I think this error can also be caused by a memory shortage.

What are file descriptors, explained in simple terms?

What would be a more simplified description of file descriptors compared to Wikipedia's? Why are they required? Say, take shell processes as an example: how does it apply to them?
Does a process table contain more than one file descriptor? If so, why?
In simple words, when you open a file, the operating system creates an entry to represent that file and stores information about that opened file. So if there are 100 files opened in your OS, there will be 100 entries somewhere in the kernel. These entries are represented by integers like (...100, 101, 102...). This entry number is the file descriptor.
So it is just an integer that uniquely represents an opened file for the process.
If your process opens 10 files, then its process table will have 10 entries for file descriptors.
Similarly, when you open a network socket, it is also represented by an integer, and it is called a socket descriptor.
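Sockets really do get the same kind of number; here is a small shell illustration (assumes bash with /dev/tcp support and network access to example.com):
exec 3<>/dev/tcp/example.com/80            # descriptor 3 is now a TCP connection
printf 'HEAD / HTTP/1.0\r\n\r\n' >&3       # write the request through the descriptor
head -n 1 <&3                              # read the status line back through it
exec 3<&-                                  # close the descriptor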
I hope you understand.
A file descriptor is an opaque handle that is used in the interface between user and kernel space to identify file/socket resources. Therefore, when you use open() or socket() (system calls to interface with the kernel), you are given a file descriptor, which is an integer (it is actually an index into the process's u structure - but that is not important). Therefore, if you want to interface directly with the kernel, using system calls such as read(), write() and close(), the handle you use is a file descriptor.
There is a layer of abstraction overlaid on the system calls: the stdio interface. It provides more functionality/features than the basic system calls do. For this interface, the opaque handle you get is a FILE*, which is returned by the fopen() call. There are many functions that use the stdio interface (fprintf(), fscanf(), fclose(), and so on), which are there to make your life easier. In C, stdin, stdout, and stderr are FILE*, which in UNIX respectively map to file descriptors 0, 1 and 2.
Hear it from the horse's mouth: APUE (Richard Stevens).
To the kernel, all open files are referred to by file descriptors. A file descriptor is a non-negative integer.
When we open an existing file or create a new file, the kernel returns a file descriptor to the process. The kernel maintains a table of all open file descriptors that are in use. File descriptors are generally allotted sequentially: a file gets the next free descriptor from the pool of free descriptors. When we close the file, the descriptor is freed and becomes available for allotment again.
When we want to read or write a file, we identify the file with the file descriptor that was returned by the open() or creat() call, and use it as an argument to either read() or write().
By convention, UNIX system shells associate file descriptor 0 with the standard input of a process, file descriptor 1 with standard output, and file descriptor 2 with standard error.
File descriptors range from 0 to OPEN_MAX. The maximum value can be obtained with ulimit -n. For more information, go through the 3rd chapter of the APUE book.
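A small shell sketch of the sequential allotment described above (assumes a Linux shell; /etc/hostname is just an arbitrary readable file):
ulimit -n                  # the per-process limit on open descriptors
exec 3< /etc/hostname      # the next free descriptor (3) now refers to /etc/hostname
exec 4< /etc/hostname      # then 4: allotment is sequential
exec 3<&- 4<&-             # closing them returns 3 and 4 to the free pool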
Other answers added great stuff. I will add just my 2 cents.
According to Wikipedia we know for sure: a file descriptor is a non-negative integer. The most important thing I think is missing would be to say:
File descriptors are bound to a process ID.
We know the most famous file descriptors are 0, 1 and 2.
0 corresponds to STDIN, 1 to STDOUT, and 2 to STDERR.
Say, take shell processes as an example: how does it apply to them?
Check out this code
#>sleep 1000 &
[12] 14726
We created a process with the ID 14726 (its PID).
Using lsof -p 14726 we can get output like this:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sleep 14726 root cwd DIR 8,1 4096 1201140 /home/x
sleep 14726 root rtd DIR 8,1 4096 2 /
sleep 14726 root txt REG 8,1 35000 786587 /bin/sleep
sleep 14726 root mem REG 8,1 11864720 1186503 /usr/lib/locale/locale-archive
sleep 14726 root mem REG 8,1 2030544 137184 /lib/x86_64-linux-gnu/libc-2.27.so
sleep 14726 root mem REG 8,1 170960 137156 /lib/x86_64-linux-gnu/ld-2.27.so
sleep 14726 root 0u CHR 136,6 0t0 9 /dev/pts/6
sleep 14726 root 1u CHR 136,6 0t0 9 /dev/pts/6
sleep 14726 root 2u CHR 136,6 0t0 9 /dev/pts/6
The 4th column, FD, is the file descriptor, and the very next column, TYPE, is the type of the file it refers to.
Some of the values in the FD column can be:
cwd – Current Working Directory
txt – Program text (code and data)
mem – Memory-mapped file
mmap – Memory-mapped device
But the real (numeric) file descriptors appear as:
NUMBER – the actual file descriptor.
The character after the number, e.g. the u in "1u", represents the mode in which the file is opened: r for read, w for write, u for read and write.
TYPE specifies the type of the file. Some of the values of TYPE are:
REG – Regular File
DIR – Directory
FIFO – First In First Out
In this example all three numeric file descriptors are of type:
CHR – Character special file (or character device file)
Now we can easily identify the file descriptors for STDIN, STDOUT and STDERR with lsof -p PID, or we can see the same thing with ls /proc/PID/fd.
Note also that the file descriptor table the kernel keeps track of is not the same thing as the open file table or the inode table. Those are separate, as other answers explain.
You may ask yourself where these file descriptors physically live and what is stored in /dev/pts/6, for instance:
sleep 14726 root 0u CHR 136,6 0t0 9 /dev/pts/6
sleep 14726 root 1u CHR 136,6 0t0 9 /dev/pts/6
sleep 14726 root 2u CHR 136,6 0t0 9 /dev/pts/6
Well, /dev/pts/6 lives purely in memory. It is not a regular file but a so-called character device file. You can check this with ls -l /dev/pts/6: the mode will start with c, in my case crw--w----.
Just to recall, most Linux-like OSes define seven types of files:
Regular files
Directories
Character device files
Block device files
Local domain sockets
Named pipes (FIFOs) and
Symbolic links
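stat can tell you which of these types a given path is; a quick illustration (the paths are only examples, present on most Linux systems):
stat -c '%n: %F' /etc/passwd     # regular file
stat -c '%n: %F' /tmp            # directory
stat -c '%n: %F' /dev/pts/0      # character special file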
File Descriptors (FD):
In Linux/Unix, everything is a file: regular files, directories, and even devices are files. Every open file has an associated number called a file descriptor (FD).
Your screen also has a file descriptor. When a program is executed, its output is sent to the file descriptor of the screen, and you see the program output on your monitor. If the output were instead sent to the file descriptor of the printer, the program output would be printed.
Error Redirection:
Whenever you execute a program/command at the terminal, 3 files are always open: standard input, standard output, and standard error. These files are present whenever a program is run, and as explained before, a file descriptor is associated with each of them.

File               Name     File Descriptor
Standard Input     STDIN    0
Standard Output    STDOUT   1
Standard Error     STDERR   2
For instance, while searching for files one typically gets permission-denied errors or other kinds of errors. These errors can be saved to a particular file.
Example 1
$ ls mydir 2>errorsfile.txt
The file descriptor for standard error is 2.
If there is no directory named mydir, the error output of the command will be saved to the file errorsfile.txt.
Using "2>" we redirect the error output to a file named "errorsfile.txt".
Thus, the program output is not cluttered with errors.
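Building on that example, stdout (descriptor 1) and stderr (descriptor 2) can be redirected independently, or merged (mydir is still assumed not to exist):
ls mydir . > out.txt 2> err.txt    # the listing of . goes to out.txt, the "mydir" error to err.txt
ls mydir . > all.txt 2>&1          # 2>&1 duplicates stderr onto stdout, so both land in all.txt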
I hope you got your answer.
More points regarding file descriptors:
File descriptors (FD) are non-negative integers (0, 1, 2, ...) that are associated with files that are opened.
0, 1 and 2 are standard FDs that correspond to STDIN_FILENO, STDOUT_FILENO and STDERR_FILENO (defined in unistd.h), opened by default on behalf of the shell when a program starts.
FDs are allocated in sequential order, meaning the lowest possible unallocated integer value.
FDs for a particular process can be seen in /proc/$pid/fd (on Unix-based systems).
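For example, the descriptors of a background process can be listed straight from procfs (assumes Linux; $! is the PID of the last background job):
sleep 1000 &
ls -l /proc/$!/fd    # symlinks named 0, 1, 2 pointing at the terminal device
kill $!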
As an addition to other answers, Unix considers everything to be a file. Your keyboard is a file that is read-only from the perspective of the kernel; the screen is a write-only file. Similarly, folders, input/output devices, etc. are also considered to be files. Whenever a file is opened, say when a device driver (for device files) requests an open(), or when a process opens a user file, the kernel allocates a file descriptor, an integer that identifies that open file and the access allowed to it, such as read-only or write-only. [For reference: https://en.wikipedia.org/wiki/Everything_is_a_file ]
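A tiny illustration of the devices-are-files idea (assumes an interactive terminal on Linux):
echo "hello" > /dev/tty                      # the terminal is just a writable (character device) file
ls -l /dev/stdin /dev/stdout /dev/stderr     # even the standard streams show up as files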
File descriptors
To the kernel, all open files are referred to by file descriptors.
A file descriptor is a non-negative integer.
When we open an existing file or create a new file, the kernel returns a file descriptor to the process.
When we want to read or write a file, we identify the file with the file descriptor that was returned by open or create, passing it as an argument to either read or write.
Historically, each UNIX process had 20 file descriptors at its disposal, numbered 0 through 19, but that was extended to 63 by many systems, and modern systems allow far more.
The first three are already open when the process begins:
0: The standard input
1: The standard output
2: The standard error output
When a parent process forks a child process, the child inherits the file descriptors of the parent.
All the answers provided here are great; here is my version --
File descriptors are non-negative integers that act as an abstract handle to "files" or I/O resources (like pipes, sockets, or data streams). These descriptors help us interact with those I/O resources and make working with them very easy. The I/O system is visible to a user process as a stream of bytes (an I/O stream). A Unix process uses descriptors (small unsigned integers) to refer to I/O streams, and the system calls related to I/O operations take a descriptor as an argument.
Valid file descriptors range from 0 to a maximum descriptor number that is configurable (ulimit -n per process, /proc/sys/fs/file-max system-wide). The kernel assigns descriptors 0, 1 and 2 in the FD table to standard input, standard output and standard error. If a file open is not successful, the call returns -1.
When a process makes a successful request to open a file, the kernel returns a file descriptor which points to an entry in the kernel's global file table. The file table entry contains information such as the inode of the file, byte offset, and the access restrictions for that data stream (read-only, write-only, etc.).
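One detail worth illustrating: a duplicated descriptor shares the same file table entry, including the byte offset, with the original. A small bash sketch (any multi-line text file works; /etc/passwd is only an example):
exec 3< /etc/passwd
exec 4<&3              # fd 4 is a duplicate of fd 3: same file table entry, same offset
read -u 3 first        # reads line 1 via fd 3 and advances the shared offset
read -u 4 second       # reads line 2 via fd 4, because the offset has moved
echo "$first"
echo "$second"
exec 3<&- 4<&-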
Any operating system has processes (p's) running, say p1, p2, p3 and so forth. Each process usually makes ongoing use of files.
Each process keeps its own table of the files it has open (part of the process table, in one phrasing).
Usually, operating systems represent each open file in each process by a number (that is to say, in each process's table).
The first file used by the process is file 0, the second is file 1, the third is file 2, and so forth.
Any such number is a file descriptor.
File descriptors are plain integers (0, 1, 2 and not 0.5, 1.5, 2.5).
Given that we often describe processes with process tables, and given that tables have rows (entries), we can say that the file descriptor cell in each entry is used to represent the whole entry.
In a similar way, when you open a network socket, it has a socket descriptor.
In some operating systems you can run out of file descriptors, but such cases are rare, and the average computer user shouldn't worry about that.
File descriptors could in principle be global (process A getting, say, 0 and 1; process B getting 2 and 3; and so forth), but as far as I know, in modern operating systems file descriptors are not global; they are per-process (process A starts at 0 and ends at, say, 5, while process B also starts at 0 and ends at, say, 10).
File descriptors are nothing but references to open resources. As soon as you open a resource, the kernel assumes you will be doing some operations on it. All communication between your program and the resource happens over an interface, and this interface is provided by the file descriptor.
Since a process can open more than one resource, a process can have more than one file descriptor.
You can view all file descriptors linked to a process by simply running:
ls -li /proc/<pid>/fd/   (where <pid> is the process ID of your process)
As an addition to all the simplified responses above:
If you are working with files in a bash script, it can be handy to use a file descriptor.
For example:
If you want to read from and write to the file "test.txt", use a file descriptor as shown below:
FILE=$1                        # name of the file, given on the command line
exec 5<>"$FILE"                # open it for reading and writing on file descriptor 5
# Read from the file line by line through the descriptor
while read -r LINE; do
    echo "$LINE"
done <&5
# Write to the file through the descriptor (the offset is now at the end, so this appends)
echo "Adding the date: $(date)" >&5
exec 5<&-                      # close the file descriptor
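Run as, for instance, ./fd-example.sh test.txt (hypothetical script name): it prints the current contents of test.txt and then appends a date line to it.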
I don't know the kernel code, but I'll add my two cents here since I've been thinking about this for some time, and I think it'll be useful.
When you open a file, the kernel returns a file descriptor to interact with that file.
A file descriptor is an implementation of an API for the file you're opening. The kernel creates this file descriptor, stores it in an array, and gives it to you.
This API requires an implementation that allows you to read and write to the file, for example.
Now, think about what I said again, remembering that everything is a file: printers, monitors, HTTP connections, etc.
That's my summary after reading https://www.bottomupcs.com/file_descriptors.xhtml.
