Unix: What constraints apply to memory-mapped files?

This question is about understanding the kinds of constraints that apply to a memory-mapped file in a Unix environment.
We have an application running in a Unix environment that hosts and serves key-value files through read-only memory mappings; it can also refresh at runtime when a new version of a file is copied in (probably with more key-value pairs).
What I observe is that, since the file is memory-mapped, as we refresh the file with more key-value pairs the VIRT memory consumption increases with not much growth in RES memory.
PID PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
12948 16 0 43240 9936 2996 S 0.0 0.1 0:00.00 lookup_server
12951 16 0 562m 16m 9972 S 0.0 0.1 0:00.09 lookup_server
As I understand it, this is because the whole file is mapped into the process's virtual address space, and only the few pages that are actually in demand are counted in RES memory.
Are my assumptions right, that:
1) with memory-mapped files, the file size is not limited by the available physical RAM, as the pages are paged in/out by the OS on demand, and
2) the only limiting factor would be the disk space configured for virtual memory?
3) If so, how can I identify the disk space the OS has set aside for virtual memory? Where does the virtual-memory footprint of the file get stored on the hard disk?

I think 2) applies only if you map the file with MAP_PRIVATE, and then only if you modify the pages in memory. If you map the pages without MAP_PRIVATE, the file is already on disk and does not need to be copied into the swap file.
1) is correct -- you can map files larger than the available memory.
But remember that the OS still has to allocate page tables -- so don't try to map a 1 TB file.
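To address 3): for a shared, read-only mapping the pages are backed by the mapped file itself rather than by swap, so the configured swap mostly matters for MAP_PRIVATE modifications. A quick sketch of how you might inspect this on Linux (the PID is taken from the top output above):
$ cat /proc/swaps       # how much swap is configured and which devices/files back it
$ free -h               # overall RAM and swap usage
$ pmap -x 12951         # per-mapping breakdown for the lookup_server process;
                        # file-backed mappings show the path of the mapped file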

Related

fsck finds Multiply-claimed block(s) and files are shared with badblock inode #1

I have an LVM hard drive. It holds all my media for use by Kodi. It occasionally (about once a week) cannot access the media. Attempting to remount the device with sudo mount -a resulted in an Input/Output error.
The suggestion from various sources was that it contains bad blocks, so I ran fsck -cc /dev/icybox/media to do a non-destructive read-write badblocks check.
It took 5 days, but it finally finished. Good news: no read or write errors, but a couple of hundred corrupted blocks.
Here is some of the output:
# fsck -cc -V /dev/icybox/media
fsck from util-linux 2.34
[/usr/sbin/fsck.ext4 (1) -- /mnt/icybox] fsck.ext4 -cc /dev/mapper/icybox-media
e2fsck 1.45.5 (07-Jan-2020)
Checking for bad blocks (non-destructive read-write test)
Testing with random pattern: done
/dev/mapper/icybox-media: Updating bad block inode.
Pass 1: Checking inodes, blocks, and sizes
Running additional passes to resolve blocks claimed by more than one inode...
Pass 1B: Rescanning for multiply-claimed blocks
Multiply-claimed block(s) in inode 55640069: 849596509
Multiply-claimed block(s) in inode 55640261: 448514694
Multiply-claimed block(s) in inode 55641058: 465144485
Multiply-claimed block(s) in inode 55641147: 470406248
...and lots more Multiply-claimed block(s)
Then this:
Pass 1C: Scanning directories for inodes with multiply-claimed blocks
Pass 1D: Reconciling multiply-claimed blocks
(There are 190 inodes containing multiply-claimed blocks.)
File /TV Shows/Arrested Development/Arrested Development - Season 1/Arrested Development - 119 - Best Man for the Gob.mkv (inode #55640069, mod time Sat May 5 11:19:03 2018)
has 1 multiply-claimed block(s), shared with 1 file(s):
<The bad blocks inode> (inode #1, mod time Thu May 20 22:36:40 2021)
Clone multiply-claimed blocks<y>? yes
There are a bunch more files saying they have 1 multiply-claimed block shared with 1 file, inode #1. Should I say yes to the clone question?
All the files shown are shared with the bad-block inode #1; according to https://unix.stackexchange.com/questions/198673/why-does-have-the-inode-2, inode #1 stores the bad-blocks list.
So I have a bunch of questions:
How can this file be shared with badblocks?
Is the badblocks list incorrect/corrupted?
Is there a way to clear the badblocks list and do another scan to start over to fill it correctly?
I am not too bothered about losing the data of individual media files so long as I can take a list to re-download them.
P.S. Not sure if it is relevant, but I had run the same fsck command before this and it was interrupted by a power outage, so I don't know if that could have corrupted bad-block inode #1.
I ran it another time; it got to about 70% and then something went wrong and every block started coming back as a read error (I think it became an Input/Output error again). I am worried all those blocks were added to the bad-blocks list, so I cancelled the process when I noticed it at about 70%, and it didn't finish.
Thanks for any help and answers

How to free memory used by big.matrix objects from crashed R sessions

I use the bigmemory package to access big matrix objects in parallel, e.g. like this
a <- bigmemory::big.matrix(nrow = 200, ncol = 100, shared = TRUE) # shared = TRUE is the default
However, working with the resulting objects sometimes causes R to crash. This means that the memory used by the matrix objects is not released. The bigmemory manual warns of such a case but presents no solution:
Abruptly closed R (using e.g. task manager) will not have a chance to
finalize the big.matrix objects, which will result in a memory leak, as
the big.matrices will remain in the memory (perhaps under obfuscated names)
with no easy way to reconnect R to them
After a few crashes and restarts of my R process, I get the following error:
No space left on device
Error in CreateSharedMatrix(as.double(nrow),
as.double(ncol), as.character(colnames), :
The shared matrix could not be created
Obviously, my memory is blocked by orphaned big matrices. I tried the command ipcs, which is advertised to list shared memory blocks, but the sizes of the segments listed there are much too small compared to my matrix objects. This also means that ipcrm is of no use here for removing my orphaned objects.
Where does bigmemory store its objects on different operating systems and how do I delete orphaned ones?
Linux
A call to df -h solved the mystery for my operating system (Linux/CentOS).
$ df -h
Filesystem Size Used Avail Use% Mounted on
...
tmpfs 1008G 1008G 0 100% /dev/shm
...
There is a temporary file system in the folder /dev/shm. Files therein exist only in RAM. This file system is used to share data between processes. In this folder were several files with random strings as names, and multiple files with the same prefix, which seem to be related to the same big.matrix object:
$ ls -l /dev/shm
-rw-r--r-- 1 user grp 320000 Apr 26 13:42 gBDEDtvwNegvocUQpYNRMRWP
-rw-r--r-- 1 user grp 8 Apr 26 13:42 gBDEDtvwNegvocUQpYNRMRWP_counter
-rw-r--r-- 1 user grp 32 Apr 26 13:42 sem.gBDEDtvwNegvocUQpYNRMRWP_bigmemory_counter_mutex
Unfortunately, I don't know which matrix belongs to which file, but if you have no R processes running at the time, deleting files with this name pattern should remove the orphaned objects.
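A sketch of the cleanup, assuming no R process is running; the random prefix below is just the example from the listing above, so substitute your own:
$ pgrep -x R                                 # make sure this prints nothing, i.e. no R processes are running
$ ls -l /dev/shm                             # inspect the candidates first
$ rm -i /dev/shm/sem.*bigmemory*             # semaphore files carry 'bigmemory' in their name
$ rm -i /dev/shm/gBDEDtvwNegvocUQpYNRMRWP*   # data and counter files share the random prefix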
Windows
I don't know how other OSes do this, so feel free to add it to this community wiki if you know.

Uncompress a big .gz file

I need to uncompress a transactions.gz file downloaded from Kaggle; it is approximately 2.86 GB, with 350 million rows and 11 columns.
I tried this in RStudio on Windows Vista (32-bit, 3 GB RAM):
transactions <- read.table(gzfile("E:/2014/Proyectos/Kaggle/transactions.gz"))
write.table(transactions, file="E:/2014/Proyectos/Kaggle/transactions.csv")
But I receive this error message in the console:
> transactions <- read.table(gzfile("E:/2014/Proyectos/Kaggle/transactions.gz"))
Error: cannot allocate vector of size 64.0 Mb
> write.table(transactions, file="E:/2014/Proyectos/Kaggle/transactions.csv")
Error: cannot allocate vector of size 64.0 Mb
I checked this case, but it didn't work for me: Decompress gz file using R
I would appreciate any suggestions.
This file decompresses to a 22 GB .csv file. You can't process it all at once in R on a machine with only 3 GB of RAM, because R needs to read everything into memory. It would be best to process it in an RDBMS like PostgreSQL. If you are intent on using R, you could process it in chunks, reading a manageable number of rows at a time: read a chunk, process it, then overwrite it with the next chunk. For this, data.table::fread would be better than the standard read.table.
Oh, and don't decompress in R; just run gunzip from the command line and then process the CSV. If you're on Windows you can use WinZip or 7-Zip.
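For example, in a Unix-like shell (or Git Bash/Cygwin on Windows), a rough sketch; the chunk size is just an illustrative choice:
$ gunzip -k transactions.gz                                       # -k keeps the original .gz
$ zcat transactions.gz | head -n 5                                # or preview it without decompressing to disk
$ zcat transactions.gz | split -l 1000000 - transactions_chunk_   # split into ~1M-row pieces for piecewise processing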

How to find which process generates the most disk read/write operations

My cloud server has started generating heavy disk read/write activity. I want a script that produces a top-style report per process (process name | TOTAL | READ | WRITE).
You can use iotop to see the reads and writes of each process using a top-like interface.
Another way is to look at the /proc/[PID]/io files.
Example:
$ cat /proc/1944/io
read_bytes: 17961091072
write_bytes: 8192000
cancelled_write_bytes: 32768
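If you want exactly the "process name | TOTAL | READ | WRITE" report, here is a rough sketch that sums these counters over all processes (they are cumulative bytes since each process started; run it as root to see every process):
#!/bin/bash
# Summarise per-process I/O from /proc/*/io, sorted by total bytes
printf "%-20s %15s %15s %15s\n" COMMAND TOTAL READ WRITE
for dir in /proc/[0-9]*; do
    [ -r "$dir/io" ] || continue
    r=$(awk '/^read_bytes/ {print $2}' "$dir/io")
    w=$(awk '/^write_bytes/ {print $2}' "$dir/io")
    c=$(cat "$dir/comm" 2>/dev/null) || continue
    printf "%-20s %15d %15d %15d\n" "$c" $(( ${r:-0} + ${w:-0} )) "${r:-0}" "${w:-0}"
done | sort -k2 -rn | head -n 20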
There's a monitor much like top available: Iotop.
If you're using Debian Linux, you can simply install it via APT:
apt-get install iotop
Done.

What are file descriptors, explained in simple terms?

What would be a simpler description of file descriptors than Wikipedia's? Why are they required? Take shell processes as an example: how does it apply to them?
Does a process table contain more than one file descriptor? If yes, why?
In simple words, when you open a file, the operating system creates an entry to represent that file and stores the information about that opened file. So if there are 100 files opened in your OS, there will be 100 entries in the OS (somewhere in the kernel). These entries are represented by integers like (... 100, 101, 102 ...). This entry number is the file descriptor.
So it is just an integer that uniquely represents an opened file for the process.
If your process opens 10 files, then its process table will have 10 entries for file descriptors.
Similarly, when you open a network socket, it is also represented by an integer, and it is called a socket descriptor.
I hope you understand.
A file descriptor is an opaque handle that is used in the interface between user and kernel space to identify file/socket resources. Therefore, when you use open() or socket() (system calls to interface with the kernel), you are given a file descriptor, which is an integer (it is actually an index into the process's u structure, but that is not important). So if you want to interface directly with the kernel, using system calls such as read(), write(), close() etc., the handle you use is a file descriptor.
There is a layer of abstraction overlaid on the system calls, which is the stdio interface. This provides more functionality/features than the basic system calls do. For this interface, the opaque handle you get is a FILE*, which is returned by the fopen() call. There are many, many functions that use the stdio interface: fprintf(), fscanf(), fclose(), which are there to make your life easier. In C, stdin, stdout, and stderr are FILE*, which in UNIX respectively map to file descriptors 0, 1 and 2.
Hear it from the horse's mouth: APUE (Richard Stevens).
To the kernel, all open files are referred to by file descriptors. A file descriptor is a non-negative integer.
When we open an existing file or create a new file, the kernel returns a file descriptor to the process. The kernel maintains a table of all open file descriptors that are in use. The allotment of file descriptors is generally sequential: the file gets the next free file descriptor from the pool. When we close the file, its file descriptor is freed and becomes available for further allotment.
When we want to read or write a file, we identify the file with the file descriptor that was returned by the open() or creat() call, and use it as an argument to either read() or write().
By convention, UNIX system shells associate file descriptor 0 with the standard input of a process, file descriptor 1 with the standard output, and file descriptor 2 with the standard error.
File descriptors range from 0 to OPEN_MAX. The maximum value can be obtained with ulimit -n. For more information, go through the 3rd chapter of the APUE book.
Other answers added great stuff. I will add just my 2 cents.
According to Wikipedia, we know for sure that a file descriptor is a non-negative integer. The most important thing, which I think is missing, is this:
File descriptors are bound to a process ID.
We know most famous file descriptors are 0, 1 and 2.
0 corresponds to STDIN, 1 to STDOUT, and 2 to STDERR.
Say, take shell processes as an example and how does it apply for it?
Check out this code
#>sleep 1000 &
[12] 14726
We created a process with the id 14726 (PID).
Using lsof -p 14726 we get something like this:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sleep 14726 root cwd DIR 8,1 4096 1201140 /home/x
sleep 14726 root rtd DIR 8,1 4096 2 /
sleep 14726 root txt REG 8,1 35000 786587 /bin/sleep
sleep 14726 root mem REG 8,1 11864720 1186503 /usr/lib/locale/locale-archive
sleep 14726 root mem REG 8,1 2030544 137184 /lib/x86_64-linux-gnu/libc-2.27.so
sleep 14726 root mem REG 8,1 170960 137156 /lib/x86_64-linux-gnu/ld-2.27.so
sleep 14726 root 0u CHR 136,6 0t0 9 /dev/pts/6
sleep 14726 root 1u CHR 136,6 0t0 9 /dev/pts/6
sleep 14726 root 2u CHR 136,6 0t0 9 /dev/pts/6
The 4th column, FD, and the very next column, TYPE, correspond to the file descriptor and the file descriptor type.
Some of the values for the FD can be:
cwd – Current Working Directory
txt – Text file
mem – Memory mapped file
mmap – Memory mapped device
But the real file descriptors are the numeric entries:
NUMBER – represents the actual file descriptor.
The character after the number, e.g. the "u" in "1u", represents the mode in which the file is opened: r for read, w for write, u for read and write.
TYPE specifies the type of the file. Some of the values of TYPE are:
REG – Regular File
DIR – Directory
FIFO – First In First Out
In this example, all of the actual file descriptors (0u, 1u, 2u) are of type:
CHR – Character special file (or character device file)
Now, we can easily identify the file descriptors for STDIN, STDOUT and STDERR with lsof -p PID, or we can see the same with ls /proc/PID/fd.
Note also that the file descriptor table the kernel keeps track of is not the same as the file table or the inode table. These are separate, as some other answers explained.
You may ask yourself where these file descriptors physically live and what is stored in /dev/pts/6, for instance:
sleep 14726 root 0u CHR 136,6 0t0 9 /dev/pts/6
sleep 14726 root 1u CHR 136,6 0t0 9 /dev/pts/6
sleep 14726 root 2u CHR 136,6 0t0 9 /dev/pts/6
Well, /dev/pts/6 lives purely in memory. These are not regular files, but so-called character device files. You can check this with ls -l /dev/pts/6; the mode bits will start with c, in my case crw--w----.
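You can poke at this yourself; a quick sketch (the pts number depends on which terminal you are in):
$ tty                    # prints your terminal device, e.g. /dev/pts/6
$ ls -l "$(tty)"         # mode bits start with 'c' (character device)
$ echo hello > "$(tty)"  # writing to it prints straight to your own terminal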
Just to recall, most Linux-like OSes define seven types of files:
Regular files
Directories
Character device files
Block device files
Local domain sockets
Named pipes (FIFOs) and
Symbolic links
File descriptors (FD):
In Linux/Unix, everything is a file. Regular files, directories, and even devices are files. Every file has an associated number called a file descriptor (FD).
Your screen also has a file descriptor. When a program is executed, the output is sent to the file descriptor of the screen, and you see the program output on your monitor. If the output were sent to the file descriptor of the printer, the program output would have been printed.
Error redirection:
Whenever you execute a program/command at the terminal, 3 files are always open: standard input, standard output, and standard error. These files are always present whenever a program is run, and as explained before, a file descriptor is associated with each of them.
File               File descriptor
Standard input     STDIN    0
Standard output    STDOUT   1
Standard error     STDERR   2
For instance, while searching for files, one typically gets permission-denied errors or some other kind of error. These errors can be saved to a particular file.
Example 1
$ ls mydir 2>errorfile.txt
The file descriptor for standard error is 2.
If there is no directory named mydir, the error output of the command will be saved to the file errorfile.txt.
Using "2>" we redirect the error output to a file named "errorfile.txt".
Thus, program output is not cluttered with errors.
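A few more common redirection patterns along the same lines (a quick sketch):
$ ls mydir > out.txt 2> err.txt    # stdout to one file, stderr to another
$ ls mydir > all.txt 2>&1          # both streams into the same file
$ ls mydir 2> /dev/null            # discard the errors entirely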
I hope you got your answer.
More points regarding file descriptors:
File descriptors (FDs) are non-negative integers (0, 1, 2, ...) that are associated with opened files.
0, 1, 2 are standard FDs that correspond to STDIN_FILENO, STDOUT_FILENO and STDERR_FILENO (defined in unistd.h), opened by default on behalf of the shell when the program starts.
FDs are allocated in sequential order, meaning the lowest possible unallocated integer value is used.
FDs for a particular process can be seen in /proc/$pid/fd (on Unix-based systems), as shown below.
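For example, a quick way to look at the current shell's own descriptors (a sketch):
$ ls -l /proc/$$/fd     # $$ is the PID of the current shell; 0, 1 and 2 point at your terminal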
As an addition to the other answers: Unix considers everything to be a file. Your keyboard is a file that is read-only from the perspective of the kernel; the screen is a write-only file. Similarly, folders, input/output devices etc. are also considered to be files. Whenever a file is opened, say when a device driver (for device files) requests an open(), or a process opens a user file, the kernel allocates a file descriptor, an integer that specifies the access to that file, such as it being read-only, write-only etc. (For reference: https://en.wikipedia.org/wiki/Everything_is_a_file)
File descriptors
To the kernel, all open files are referred to by file descriptors.
A file descriptor is a non-negative integer.
When we open an existing file or create a new file, the kernel returns a file descriptor to the process.
When we want to read from or write to a file, we identify the file with the file descriptor that was returned by open or creat, passing it as an argument to either read or write.
Historically, each UNIX process had 20 file descriptors at its disposal, numbered 0 through 19, but that was extended to 63 by many systems (and is now configurable to much larger values).
The first three are already opened when the process begins:
0: the standard input
1: the standard output
2: the standard error output
When the parent process forks, the child process inherits the file descriptors of the parent.
All the answers provided are great; here is my version --
File descriptors are non-negative integers that act as an abstract handle to "files" or I/O resources (like pipes, sockets, or data streams). These descriptors help us interact with those I/O resources and make working with them very easy. The I/O system is visible to a user process as a stream of bytes (an I/O stream). A Unix process uses descriptors (small unsigned integers) to refer to I/O streams. The system calls related to I/O operations take a descriptor as an argument.
Valid file descriptors range from 0 to a maximum descriptor number that is configurable (ulimit -n per process, /proc/sys/fs/file-max system-wide). The kernel assigns descriptors 0, 1 and 2 of the FD table to standard input, standard output and standard error. If a file open is not successful, it returns -1.
When a process makes a successful request to open a file, the kernel returns a file descriptor which points to an entry in the kernel's global file table. The file table entry contains information such as the inode of the file, the byte offset, and the access restrictions for that data stream (read-only, write-only, etc.).
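For reference, checking those limits on a Linux box is straightforward (a quick sketch):
$ ulimit -n                    # per-process limit on open file descriptors
$ cat /proc/sys/fs/file-max    # system-wide limit on open file handles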
Any operating system has processes running, say p1, p2, p3 and so forth. Each process usually makes ongoing use of files.
Each process has a process tree (or a process table, in another phrasing).
Usually, operating systems represent each file in each process by a number (that is to say, in each process tree/table).
The first file used in the process is file 0, the second is file 1, the third is file 2, and so forth.
Any such number is a file descriptor.
File descriptors are usually integers (0, 1, 2 and not 0.5, 1.5, 2.5).
Given that we often describe processes as "process tables", and given that tables have rows (entries), we can say that the file descriptor cell in each entry is used to represent the whole entry.
In a similar way, when you open a network socket, it has a socket descriptor.
In some operating systems you can run out of file descriptors, but such a case is extremely rare, and the average computer user shouldn't worry about it.
File descriptors might be global (process A starts at, say, 0 and ends at, say, 1; process B starts at 2 and ends at 3) and so forth, but as far as I know, in modern operating systems file descriptors are usually not global and are actually process-specific (process A starts at 0 and ends at, say, 5, while process B starts at 0 and ends at, say, 10).
File descriptors are nothing but references to open resources. As soon as you open a resource, the kernel assumes you will be doing some operations on it. All the communication between your program and the resource happens over an interface, and this interface is provided by the file descriptor.
Since a process can open more than one resource, it is possible for a process to have more than one file descriptor.
You can view all the file descriptors linked to a process by simply running
ls -li /proc/<pid>/fd/    (where <pid> is the process ID of your process)
In addition to all the simplified responses above:
If you are working with files in a bash script, it's better to use a file descriptor.
For example:
If you want to read from and write to the file "test.txt", use a file descriptor as shown below:
FILE=$1            # name of the file, given on the command line
exec 5<>"$FILE"    # open it for reading and writing on file descriptor 5
# Read from the file line by line via the descriptor
while read -r LINE; do
    echo "$LINE"
done <&5
# Write to the file via the descriptor (appends, since reading moved the offset to EOF)
echo "Adding the date: `date`" >&5
exec 5<&-          # close the file descriptor
I don't know the kernel code, but I'll add my two cents here since I've been thinking about this for some time, and I think it'll be useful.
When you open a file, the kernel returns a file descriptor to interact with that file.
A file descriptor is your handle to an implementation of an API for the file you're opening. The kernel creates this entry, stores it in an array, and gives you the index as the file descriptor.
This API requires an implementation that allows you to read from and write to the file, for example.
Now, think about what I said again, remembering that everything is a file: printers, monitors, HTTP connections, etc.
That's my summary after reading https://www.bottomupcs.com/file_descriptors.xhtml.
