What are file descriptors, explained in simple terms? - unix

What would be a more simplified description of file descriptors than Wikipedia's? Why are they required? Take shell processes as an example: how do file descriptors apply to them?
Does a process table contain more than one file descriptor? If yes, why?

In simple words, when you open a file, the operating system creates an entry to represent that file and stores information about it. So if there are 100 files open in your OS, there will be 100 entries somewhere in the kernel. Each entry is represented by an integer (..., 100, 101, 102, ...), and this entry number is the file descriptor.
So it is just an integer that uniquely represents an opened file within a process.
If your process opens 10 files, then your process table will have 10 entries for file descriptors.
Similarly, when you open a network socket, it is also represented by an integer, and it is called a socket descriptor.
I hope you understand.

A file descriptor is an opaque handle that is used in the interface between user and kernel space to identify file/socket resources. Therefore, when you use open() or socket() (system calls that interface with the kernel), you are given a file descriptor, which is an integer (historically it was an index into the process's u structure, but that is not important). So if you want to interface directly with the kernel, using system calls such as read(), write() and close(), the handle you use is a file descriptor.
There is a layer of abstraction overlaid on the system calls, which is the stdio interface. This provides more functionality/features than the basic system calls do. For this interface, the opaque handle you get is a FILE*, which is returned by the fopen() call. There are many functions that use the stdio interface, such as fprintf(), fscanf() and fclose(), which are there to make your life easier. In C, stdin, stdout, and stderr are FILE*, which in UNIX map to file descriptors 0, 1 and 2 respectively.

Hear it from the Horse's Mouth: APUE (Richard Stevens).
To the kernel, all open files are referred to by file descriptors. A file descriptor is a non-negative integer.
When we open an existing file or create a new file, the kernel returns a file descriptor to the process. The kernel maintains a table of all open file descriptors that are in use. File descriptors are generally allotted sequentially: a file gets the next free descriptor from the pool. When we close the file, the file descriptor is freed and becomes available for allotment again.
When we want to read or write a file, we identify the file with the file descriptor that was returned by the open() or creat() call, and use it as an argument to either read() or write().
By convention, UNIX system shells associate file descriptor 0 with the standard input of a process, file descriptor 1 with standard output, and file descriptor 2 with standard error.
File descriptors range from 0 through OPEN_MAX - 1. The per-process limit can be obtained with ulimit -n. For more information, go through the 3rd chapter of the APUE book.

Other answers added great stuff. I will add just my 2 cents.
According to Wikipedia, we know for sure that a file descriptor is a non-negative integer. The most important thing I think is missing from that definition is:
File descriptors are bound to a process ID.
We know most famous file descriptors are 0, 1 and 2.
0 corresponds to STDIN, 1 to STDOUT, and 2 to STDERR.
Say, take shell processes as an example and how does it apply for it?
Check out this code
#>sleep 1000 &
[12] 14726
We created a process with the id 14726 (PID).
Using lsof -p 14726 we get something like this:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sleep 14726 root cwd DIR 8,1 4096 1201140 /home/x
sleep 14726 root rtd DIR 8,1 4096 2 /
sleep 14726 root txt REG 8,1 35000 786587 /bin/sleep
sleep 14726 root mem REG 8,1 11864720 1186503 /usr/lib/locale/locale-archive
sleep 14726 root mem REG 8,1 2030544 137184 /lib/x86_64-linux-gnu/libc-2.27.so
sleep 14726 root mem REG 8,1 170960 137156 /lib/x86_64-linux-gnu/ld-2.27.so
sleep 14726 root 0u CHR 136,6 0t0 9 /dev/pts/6
sleep 14726 root 1u CHR 136,6 0t0 9 /dev/pts/6
sleep 14726 root 2u CHR 136,6 0t0 9 /dev/pts/6
The 4th column, FD, and the very next column, TYPE, correspond to the file descriptor and the file descriptor type.
Some of the values for the FD can be:
cwd – Current Working Directory
txt – Text file
mem – Memory mapped file
mmap – Memory mapped device
But the real file descriptors appear where FD is a number:
NUMBER – represents the actual file descriptor.
The character after the number, i.e. the u in "1u", represents the mode in which the file is opened: r for read, w for write, u for read and write.
TYPE specifies the type of the file. Some of the values of TYPEs are:
REG – Regular File
DIR – Directory
FIFO – First In First Out
In this example, the three standard file descriptors all happen to be:
CHR – Character special file (or character device file)
Now we can identify the file descriptors for STDIN, STDOUT and STDERR easily with lsof -p PID, or we can see the same with ls /proc/PID/fd.
Note also that the file descriptor table the kernel keeps track of is not the same as the open file table or the inode table. These are separate, as some other answers explain.
You may ask yourself where these file descriptors physically live and what is stored in /dev/pts/6, for instance:
sleep 14726 root 0u CHR 136,6 0t0 9 /dev/pts/6
sleep 14726 root 1u CHR 136,6 0t0 9 /dev/pts/6
sleep 14726 root 2u CHR 136,6 0t0 9 /dev/pts/6
Well, /dev/pts/6 lives purely in memory. It is not a regular file, but a so-called character device file. You can check this with ls -l /dev/pts/6: the listing will start with c, in my case crw--w----.
Just to recall, most Linux-like OSes define seven types of files:
Regular files
Directories
Character device files
Block device files
Local domain sockets
Named pipes (FIFOs) and
Symbolic links

File Descriptors (FD) :
In Linux/Unix, everything is a file. Regular files, directories,
and even devices are files. Every file has an associated number called its File Descriptor (FD).
Your screen also has a file descriptor. When a program is executed,
its output is sent to the screen's file descriptor, and you see the
program output on your monitor. If the output were sent to the file
descriptor of the printer, the program output would have been
printed.
Error Redirection :
Whenever you execute a program/command at the terminal, 3 files are always open:
standard input
standard output
standard error
These files are always present whenever a program is run. As explained before, a file descriptor is associated with each of these files.
File              Name     File Descriptor
Standard Input    STDIN    0
Standard Output   STDOUT   1
Standard Error    STDERR   2
For instance, while searching for files, one typically gets permission-denied errors or other kinds of errors. These errors can be saved to a particular file.
Example 1
$ ls mydir 2>errorsfile.txt
The file descriptor for standard error is 2.
If there is no directory named mydir, then the error output of the command will be saved to the file errorsfile.txt.
Using "2>" we redirect the error output to a file named "errorsfile.txt".
Thus, normal program output is not cluttered with errors.
I hope you got your answer.

More points regarding File Descriptor:
File Descriptors (FD) are non-negative integers (0, 1, 2, ...) that are associated with files that are opened.
0, 1 and 2 are standard FDs that correspond to STDIN_FILENO, STDOUT_FILENO and STDERR_FILENO (defined in unistd.h), opened by default on behalf of the shell when the program starts.
FDs are allocated in sequential order, meaning the lowest possible unallocated integer value.
FD's for a particular process can be seen in /proc/$pid/fd (on Unix based systems).

As an addition to the other answers: Unix considers everything a file. Your keyboard is a file that is read-only from the perspective of the kernel; the screen is a write-only file. Similarly, folders, input/output devices and so on are also considered files. Whenever a file is opened, say when a device driver (for device files) requests an open(), or when a process opens a user file, the kernel allocates a file descriptor, an integer that identifies that open file and its access mode (read-only, write-only, etc.). [For reference: https://en.wikipedia.org/wiki/Everything_is_a_file ]

File descriptors
To the kernel, all open files are referred to by file descriptors.
A file descriptor is a non-negative integer.
When we open an existing file or create a new file, the kernel returns a file descriptor to the process.
When we want to read or write a file, we identify it with the file descriptor that was returned by open or creat, passing it as an argument to either read or write.
Historically, each UNIX process had 20 file descriptors at its disposal, numbered 0 through 19, but many systems extended that to 63, and modern systems allow far more.
The first three are already open when the process begins:
0: the standard input
1: the standard output
2: the standard error output
When the parent process forks a process, the child process inherits the file descriptors of the parent

All the answers provided here are great; here is my version:
File descriptors are non-negative integers that act as an abstract handle to "files" or I/O resources (like pipes, sockets, or data streams). These descriptors help us interact with those I/O resources and make working with them very easy. The I/O system is visible to a user process as a stream of bytes (an I/O stream). A Unix process uses descriptors (small unsigned integers) to refer to I/O streams. The system calls related to I/O operations take a descriptor as an argument.
Valid file descriptors range from 0 to a maximum descriptor number that is configurable (ulimit -n, /proc/sys/fs/file-max). The kernel assigns descriptors 0, 1 and 2 of the FD table to standard input, standard output and standard error. If an open is not successful, -1 is returned instead of a descriptor.
When a process makes a successful request to open a file, the kernel returns a file descriptor which points to an entry in the kernel's global file table. The file table entry contains information such as the inode of the file, byte offset, and the access restrictions for that data stream (read-only, write-only, etc.).

Any operating system has processes running, say p1, p2, p3 and so forth. Each process usually makes ongoing use of files.
Each process has its own file descriptor table.
Operating systems represent each open file in each process by a number in that table.
The first file opened by the process gets the lowest free number, the next one the number after that, and so forth.
Any such number is a file descriptor.
File descriptors are integers (0, 1, 2 and not 0.5, 1.5, 2.5).
Given that we often describe processes with per-process tables, and given that tables have rows (entries), we can say that the file descriptor in each entry is used to refer to the whole entry.
In a similar way, when you open a network socket, it has a socket descriptor.
In some operating systems you can run out of file descriptors, but such a case is extremely rare, and the average computer user shouldn't worry about it.
File descriptors could in principle be global (process A starting at, say, 0 and ending at 1; process B starting at 2 and ending at 3, and so forth), but as far as I know, in modern operating systems file descriptors are not global: they are process-specific (process A may have descriptors 0 through 5 while process B has 0 through 10).

File descriptors are nothing but references to open resources. As soon as you open a resource, the kernel assumes you will be doing some operations on it. All communication between your program and the resource happens over an interface, and this interface is provided by the file descriptor.
Since a process can open more than one resource, a process can have more than one file descriptor (and a single resource can even be referred to by several descriptors).
You can view all file-descriptors linked to the process by simply running,
ls -li /proc/<pid>/fd/ where pid is the process ID of your process

In addition to all the simplified responses above:
If you are working with files in a bash script, it can be convenient to use a file descriptor.
For example:
If you want to read from and write to the file "test.txt", use a file descriptor as shown below:
FILE=$1           # give the name of the file on the command line
exec 5<>"$FILE"   # '5' here acts as the file descriptor

# Reading from the file line by line using the file descriptor
while read -r LINE; do
    echo "$LINE"
done <&5

# Writing to the file using the descriptor
echo "Adding the date: $(date)" >&5

exec 5<&-         # closing the file descriptor

I don't know the kernel code, but I'll add my two cents here since I've been thinking about this for some time, and I think it'll be useful.
When you open a file, the kernel returns a file descriptor to interact with that file.
A file descriptor is a handle to a kernel object implementing a common API for the thing you're opening. The kernel creates this entry, stores it in a per-process array, and gives you its index: the file descriptor.
That API provides operations such as reading from and writing to the file.
Now, think about what I said again, remembering that everything is a file — printers, monitors, HTTP connections etc.
That's my summary after reading https://www.bottomupcs.com/file_descriptors.xhtml.

Related

fsck finds Multiply-claimed block(s) and files are shared with badblock inode #1

I have an LVM hard drive. It holds all my media for use by Kodi. It occasionally (about once a week) cannot access the media. Attempting to remount the device with sudo mount -a resulted in Input/Output error.
The solution from various sources was that it contains badblocks, so I ran fsck -cc /dev/icybox/media to do a non-destructive read-write badblocks check.
It took 5 days, but it finally finished. Good news: no read or write errors, but a couple of hundred corrupted blocks.
Here is some of the output:
# fsck -cc -V /dev/icybox/media
fsck from util-linux 2.34
[/usr/sbin/fsck.ext4 (1) -- /mnt/icybox] fsck.ext4 -cc /dev/mapper/icybox-media
e2fsck 1.45.5 (07-Jan-2020)
Checking for bad blocks (non-destructive read-write test)
Testing with random pattern: done
/dev/mapper/icybox-media: Updating bad block inode.
Pass 1: Checking inodes, blocks, and sizes
Running additional passes to resolve blocks claimed by more than one inode...
Pass 1B: Rescanning for multiply-claimed blocks
Multiply-claimed block(s) in inode 55640069: 849596509
Multiply-claimed block(s) in inode 55640261: 448514694
Multiply-claimed block(s) in inode 55641058: 465144485
Multiply-claimed block(s) in inode 55641147: 470406248
...and lots more Multiply-claimed block(s)
Then this:
Pass 1C: Scanning directories for inodes with multiply-claimed blocks
Pass 1D: Reconciling multiply-claimed blocks
(There are 190 inodes containing multiply-claimed blocks.)
File /TV Shows/Arrested Development/Arrested Development - Season 1/Arrested Development - 119 - Best Man for the Gob.mkv (inode #55640069, mod time Sat May 5 11:19:03 2018)
has 1 multiply-claimed block(s), shared with 1 file(s):
<The bad blocks inode> (inode #1, mod time Thu May 20 22:36:40 2021)
Clone multiply-claimed blocks<y>? yes
There are a bunch more files saying they have 1 multiply-claimed block shared with 1 file on inode #1. Should I say yes to the clone question?
All the files shown are shared with the bad blocks inode #1; according to https://unix.stackexchange.com/questions/198673/why-does-have-the-inode-2, inode #1 stores the badblocks list.
So I have a bunch of questions:
How can this file be shared with badblocks?
Is the badblocks list incorrect/corrupted?
Is there a way to clear the badblocks list and do another scan to start over to fill it correctly?
I am not too bothered about losing the data of individual media files so long as I can take a list to re-download them.
P.S. Not sure if it is relevant: I had run the same fsck command before this, and it was interrupted by a power outage, so I don't know if that could have caused a corrupted badblock inode #1.
I ran it another time; it got to about 70% and then something went wrong and every block was producing a read error (I think it became Input/Output error again), so I am worried all those blocks were listed as badblocks. I cancelled the process when I noticed this at about 70%, so it didn't finish.
Thanks for any help and answers

understanding TYPE in lsof output

I opened a file through Python, so I ran lsof on the python process. The output of lsof has the following line:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python 15855 inaflash 3w REG 0,25 0 4150810088 /home/inaflash/he.txt
The thing is, it has 3w, which means the file is open for writing. But I actually opened the file as follows:
a = open('he.txt','r')
I read that w means the file is open for writing. Can anyone help me understand why it's w instead of r?
I tried the same code in Python 3 and my file is opened in read mode.
Are you sure the file you inspected is the same one opened with Python, and that it is the same python process?
Maybe you forgot to close the file somewhere in your code after opening it in write mode.
Edit: I also tried in Python 2, same result (read mode).

How does execlp work exactly?

So I am looking at the code my professor handed out to give us an idea of how to implement >, <, and | support in our Unix shell. I ran his code and was amazed at what actually happened.
if (pid == 0)
{
    close(1);                       // close
    fd = creat("userlist", 0644);   // then open
    execlp("who", "who", NULL);     // and run
    perror("execlp");
    exit(1);
}
This created a userlist file in the directory I was currently in, with the "who" data inside that file. I don't see where any connection between fd, and execlp are being made. How did execlp manage to put the information into userlist? How did execlp even know userlist existed?
Read Advanced Linux Programming. It has several chapters related to the issue. And we cannot explain all this in a few sentences. See also the standard stream and process wikipages.
First, all the system calls your program makes (see syscalls(2) for a list, and read the documentation of every individual system call that you are using) should be tested against failure. But assume they all succeed. After close(1); the file descriptor 1 (STDOUT_FILENO) is free. So creat("userlist", 0644) is likely to re-use it, hence fd is 1; you have redirected your stdout to the newly created userlist file.
At last, you are calling execlp(3) which will call execve(2). When successful, your entire process is restarted with the new executable (so a fresh virtual address space is given to it), and its stdout is still the userlist file descriptor. In particular (unless execve fails) the perror call is not reached.
So your code is a bit what a shell running who > userlist is doing; it does a redirection of stdout to userlist and runs the who command.
If you are coding a shell, use strace(1) -notably with -f option- to understand what system calls are done. Try also strace -f /bin/sh -c ls to look into the behavior of a shell. Study also the source code of existing free software shells (e.g. bash and sash).
See also this and the references I gave there.
execlp knows nothing. Before execing, stdout was closed and a file was opened, so the new descriptor is the one corresponding to stdout (open always returns the lowest free descriptor). At that point the process has a "stdout" plugged into the file. Then exec is called; this replaces the whole address space, but some properties remain, such as the descriptors, so now the code of who executes with a stdout that corresponds to the file. This is the way redirections are managed by shells.
Remember that when you use printf (for example) you never specify what stdout exactly is... It can be a file, a terminal, etc.
Basile Starynkevitch correctly explained:
After close(1); the file descriptor 1 (STDOUT_FILENO) is free. So creat("userlist",0644) is likely to re-use it…
This is because, as Jean-Baptiste Yunès wrote, "opens always returns the lowest free descriptor".
It should be stressed that the professor's code only likely works; it fails if file descriptor 0 is closed.

Command : exec echo "this is for test" > test ; how does shell arrange for output re-direction?

I am trying to understand how, for the above command, the shell arranges for output redirection when the process associated with the shell itself is replaced by that of echo. I would very much appreciate any help.
Regards
Rupam
Output redirection of an external command is achieved by manipulating file descriptors, typically involving dup2, a system call that makes a chosen file descriptor refer to an existing open file. In this case, the standard output of the process that executes echo is made to point to the target output file.
The shell does a close equivalent of the following steps:
create a copy of the current process by invoking fork(); the subsequent steps happen in the child process. (This step is omitted when the command is invoked with exec, as in your example.)
open test for writing and remember the file descriptor
make file descriptor 1 equal to the descriptor, by calling dup2(fd, 1)
call execlp or similar to replace the current process image with that of echo
The echo command simply writes to standard output, a fancy name for file descriptor number 1. It doesn't care if that ends up writing to a TTY, to a file, or to another program. The above steps make sure that file descriptor 1 really points to the open file test.

Unix : Memory mapped files , constraints applicable?

This question is about understanding the kinds of constraints applicable to a memory-mapped file in a Unix environment.
We have an app running in a Unix environment that hosts and serves key-value files via read-only memory mappings; it is also capable of refreshing at runtime when a new version of a file is copied in (probably with more key-value pairs).
What I observe is that, since the file is memory-mapped, as we refresh the file with more key-value pairs, VIRT memory consumption increases without much RES memory consumption.
PID PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
12948 16 0 43240 9936 2996 S 0.0 0.1 0:00.00 lookup_server
12951 16 0 562m 16m 9972 S 0.0 0.1 0:00.09 lookup_server
As I understand it, this is because the whole file is mapped into the process's virtual address space, and only the few pages that are in demand are in RES memory.
Are my assumptions right that:
with memory-mapped files, the file size is not limited by available physical RAM, as the pages are paged in/out by the OS on demand, and
the only limiting factor could be the disk space configured for virtual memory?
In that case, how can I identify the disk space set aside by the OS for virtual memory? Where does the virtual memory footprint of the file get stored on the hard disk?
I think 2) applies only if you map the file with MAP_PRIVATE, and then only if you modify the pages in memory. If you map the pages without MAP_PRIVATE, the file is already on disk and does not need to be copied into the swap file.
1) is correct: you can map files larger than available memory.
But remember that the OS still has to allocate page tables, so don't casually map a 1 TB file.
