How to write files on a read-only filesystem? - qt

I'm writing a Qt/QML app for a Raspberry Pi 3B+, and I need to save information every time an event is triggered.
In production the Raspberry Pi's file system will be mounted read-only, and I'd like to find the best way to write a file to a non-tmpfs partition, so it can be recovered after a power loss.
Only three ways come to mind:
1. In the same thread, remount the partition read-write before writing and remount it read-only afterwards; but this slows down my program, because both operations together take around 600 ms.
2. Do the same as option 1, but in a separate worker thread (see the sketch below).
3. Write temporary files to a tmpfs partition, and have a background thread that checks for them and appends them to their final location.
I don't want to change the default Raspberry Pi filesystem layout (small FAT partition + expandable ext4) and I don't want to add an external drive.
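For what it's worth, a minimal sketch of option 2 could look roughly like the following, assuming a hypothetical writable data partition mounted at /data, a hypothetical events.log file, and an app with the privileges needed to remount the partition (none of these names come from the question):

#include <QtConcurrent/QtConcurrent>
#include <QProcess>
#include <QFile>
#include <unistd.h>   // fsync

// Append one event record without blocking the GUI thread.
void saveEventAsync(const QByteArray &payload)
{
    // QtConcurrent needs QT += concurrent in the .pro file.
    auto future = QtConcurrent::run([payload] {
        // Remount the (hypothetical) data partition read-write.
        QProcess::execute("mount", {"-o", "remount,rw", "/data"});

        QFile file("/data/events.log");            // hypothetical file name
        if (file.open(QIODevice::WriteOnly | QIODevice::Append)) {
            file.write(payload);
            file.flush();
            ::fsync(file.handle());                // make sure the data reaches flash
            file.close();
        }

        // Back to read-only so an unclean power-off cannot corrupt the partition.
        QProcess::execute("mount", {"-o", "remount,ro", "/data"});
    });
    Q_UNUSED(future);
}

Note that remounting read-only fails if anything else still holds the partition open for writing, so in practice the two remount calls probably need error handling and retry logic.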

Related

Creating an SQLite DB in memory vs using a tmpfs

Is there any functional difference or advantage to using the SQLite :memory: db vs defining a db on a memory disk, like a tmpfs mount? Is there some use case where I should prefer one over the other?
Putting the database on a disk, even if it is a RAM disk, allows you to access the resulting file.
On the other hand, going through the OS to manage the file might be slower. (This is unlikely to be noticeable in practice.)
And you don't need to bother about inventing a file name.
If you want to access database from multiple processes => go with tmpfs.
If you want to keep data until operating system restarts => go with tmpfs.
If you want to keep data in a single process and flush data when process is killed => go with memory db.
SQLite is fast, and even faster when it runs in RAM (memory db / tmpfs). You really don't need to worry about the performance of a memory db vs tmpfs.
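As a rough illustration (not from the answer above), the application code barely changes between the two options; only the path passed to sqlite3_open() differs. The /dev/shm path below is just a stand-in for whatever tmpfs mount you use:

#include <sqlite3.h>

int main()
{
    // Private in-memory database: visible only to this process,
    // gone when the connection is closed.
    sqlite3 *mem_db = nullptr;
    sqlite3_open(":memory:", &mem_db);

    // Database file on a tmpfs mount: still RAM-backed, but it is a real
    // file that other processes can open and that outlives this process.
    sqlite3 *tmpfs_db = nullptr;
    sqlite3_open("/dev/shm/cache.sqlite", &tmpfs_db);   // hypothetical path

    // ... identical SQL from here on ...

    sqlite3_close(tmpfs_db);
    sqlite3_close(mem_db);
    return 0;
}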

MPI one-sided file I/O

I have some questions about performing file I/O using MPI.
A set of files is distributed across different processes.
I want each process to be able to read the files held by the other processes.
For example, in one-sided communication, each process exposes a window that is visible to the other processes. I need exactly the same functionality. (Create 'windows' for all files and share them so that any process can read any file from any offset.)
Is this possible in MPI? I have read a lot of MPI documentation, but couldn't find exactly this.
The simple answer is that you can't do that automatically with MPI.
You can convince yourself by noting that MPI_File_open() is a collective call that takes an intra-communicator as its first argument and returns a handle to the opened file as its last argument. All processes in this communicator open the file, and therefore all processes must see the file. So unless a process sees a file, it cannot get an MPI_File handle to access it.
Now, that doesn't mean there's no solution. A possibility could be to do by hand exactly what you described, namely:
Each MPI process individually opens the file it sees and is responsible for; then
Each of these processes reads its local file into a buffer;
These individual buffers are all exposed, using either one global MPI_Win memory window or several individual ones, ready for one-sided read access; and finally
All read accesses to data previously stored in these individual local files are now done through MPI_Get() calls using the memory window(s).
The true limitation of this approach is that it requires fully reading all of the individual files, so you need enough memory per node to store each of them. I'm well aware that this is a very big caveat that could make the solution completely impractical; however, if the memory is sufficient, it is an easy approach.
Another, even simpler, solution would be to store the files on a shared file system, or to copy them all onto every local file system. I imagine this isn't an option, since otherwise the question wouldn't have been asked...
Finally, as a last resort, a possibility I see would be to dedicate one MPI process (or an OpenMP thread of an MPI process) per node to serve the files. This process would just act as a "file server", answering "read" requests coming from the other MPI processes and serving them by reading the requested data from the file and sending it back via MPI. It's a bit lengthy to write, but it should work.
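A hedged sketch of the buffer-plus-window idea from the list above, written against the MPI C API; the file path, file size and offsets are invented for the example and error handling is omitted:

#include <mpi.h>
#include <cstdio>
#include <string>
#include <vector>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // 1. Each rank reads the one local file it is responsible for.
    std::vector<char> buffer(1 << 20);                          // assume 1 MiB per file
    std::string path = "/local/data." + std::to_string(rank);   // hypothetical path
    if (FILE *f = std::fopen(path.c_str(), "rb")) {
        std::fread(buffer.data(), 1, buffer.size(), f);
        std::fclose(f);
    }

    // 2. Expose the buffer through a one-sided memory window.
    MPI_Win win;
    MPI_Win_create(buffer.data(), buffer.size(), 1,
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    // 3. Any rank can now read any "file": e.g. 256 bytes starting at
    //    offset 4096 of the file owned by rank 0.
    std::vector<char> chunk(256);
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    MPI_Get(chunk.data(), 256, MPI_BYTE, 0, 4096, 256, MPI_BYTE, win);
    MPI_Win_unlock(0, win);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}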

Upgrading an Amazon EC2 instance from t1.micro to medium, instance storage remains same

We have been using a micro instance throughout our development phase. But now, as we are about to go live, we want to upgrade the instance to a medium type.
I followed these simple steps: stop the running instance, change the instance type to medium, and start the instance again. I can see that the instance has been upgraded in terms of memory, but the storage still shows as 8 GB. According to the configuration listed, an m1.medium instance should have 1 x 410 GB of instance storage.
Am I doing anything wrong or missing something? Please help!
Keep in mind that EBS storage (which you are currently using) and instance storage (which is what you are looking for) are two different things in EC2.
EBS storage is similar to a SAN volume. It exists outside of the host. You can create multiple EBS volumes of up to 1 TB each and attach them to any instance size. Smaller instances have less available bandwidth to EBS volumes, so they will not be able to take effective advantage of that many volumes.
Instance storage is essentially hard drives attached to the host. While it's included in the instance cost, it comes with some caveats: it is not persistent. If you stop your instance, or the host fails for any reason, the data stored on the instance store will be lost. For this reason, it has to be explicitly enabled when the instance is first launched.
Generally, it's not recommended to use instance storage unless you are comfortable with, and have designed your infrastructure around, its non-persistence.
The sizes mentioned for the instance types are just the defaults. If you create an image from a running micro instance, it will keep that storage size as its default, even if the image is later started as a medium instance.
But you can change the storage size when launching the instance, and you can also change the default storage size when creating an image.
WARNING: this resizes the storage volume. It will not necessarily resize the partition on it, nor the file system on that partition. On Linux it resized everything automagically (IIRC); on a Windows instance you will have to resize things yourself. For other OSes I have no idea.
I had a similar situation. I created an m2.medium instance with 400 GB of storage, but when I log into the shell and issue the command
df -h
... it shows an 8 GB partition.
However, the command
sudo fdisk -l
showed that the device was indeed 400 GB. The problem is that Amazon created a default 8 GB partition on it, and that partition needs to be expanded to the full size of the device. The command to do this is:
sudo resize2fs -f /dev/xvda1
where /dev/xvda1 is the mounted root volume. Use the 'df -h' command to be sure you have the right volume name.
Then simply reboot the instance, log in again, and you'll see the fdisk command now says there's nearly 400 GB available space. Problem solved.

MPI, NFS File Writing

I'm having an issue with an MPI program running across a group of Linux nodes. The group is currently set up with NFS, with /home/mpi mounted on all nodes. The problem is that the program requires all of the nodes to open a file on the file system in write mode (fopen on /home/mpi/file) and write to it while doing calculations. One node will be able to open it, and the others won't and will throw an error. Instead, I want each node to have its own file to write to.
I was wondering if there is a way to get around this. I was thinking about creating a separate file for each node, with the node's rank appended to the filename, but wondered whether there is a simpler way around this issue. Is there a way to set up the group so that all the worker nodes have their own copy of the /home/mpi directory that is automatically updated with any changes the master node makes to its copy?
Thanks.
As far as I know, the standard way of doing things is to open one file per node, indexed by rank as you described. Depending on what these files are used for (e.g. logging), you then have to write a script to re-combine them at the end of the computation.
If you really need all processes to write to the same file on the filesystem, you'll have to somehow coordinate concurrent outputs from all processes wanting to write to the file.
There is no way to do this at the filesystem level as far as I know, but you can do it within your MPI code. The standard, historical way of doing this is to have all MPI processes send messages to rank 0, which is in charge of actually writing them to the filesystem.
Another option would be to look at the I/O features introduced in MPI-2 (MPI-IO), which allow all processes to work on different parts of the same file.
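If the shared NFS directory can cope with it, a minimal MPI-IO sketch of that second option might look like the following; the file name and the per-rank block size are made up for the example:

#include <mpi.h>
#include <cstring>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // All ranks collectively open one shared file.
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "/home/mpi/results.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    // Each rank owns a fixed-size, non-overlapping region of the file.
    const int block = 128;                        // bytes per rank (assumption)
    char data[block];
    std::memset(data, 'A' + (rank % 26), block);
    MPI_File_write_at_all(fh, (MPI_Offset)rank * block,
                          data, block, MPI_CHAR, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}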

0-copy inter-process communication on Unix without using the filesystem

If I have to move a moderate amount of memory between two processes, I can do the following:
create a file for writing
ftruncate to desired size
mmap and unlink it
use as desired
When another process requires that data, it:
connects to the first process through a unix socket
the first process sends the fd of the file through a unix socket message
mmap the fd
use as desired
This allows us to move memory between processes without any copying, but the file created must live on a memory-backed filesystem, otherwise we might take a disk hit, which would degrade performance. Is there a way to do something like this without using a filesystem? A malloc-like function that returned an fd along with a pointer would do it.
[Edit] Having a file descriptor also provides a reference-counting mechanism that is maintained by the kernel.
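For reference, the fd-passing step described above usually boils down to sendmsg()/recvmsg() with SCM_RIGHTS ancillary data; a rough sketch (error handling trimmed, the connected AF_UNIX socket assumed to exist elsewhere) could look like this:

#include <sys/socket.h>
#include <sys/uio.h>
#include <cstring>

// Send one file descriptor over a connected AF_UNIX socket.
int send_fd(int sock, int fd)
{
    char dummy = 'x';
    iovec iov{&dummy, 1};

    char ctrl[CMSG_SPACE(sizeof(int))] = {};
    msghdr msg{};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    std::memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}

// Receive the descriptor on the other side; the caller can then mmap() it.
int recv_fd(int sock)
{
    char dummy;
    iovec iov{&dummy, 1};

    char ctrl[CMSG_SPACE(sizeof(int))] = {};
    msghdr msg{};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    if (recvmsg(sock, &msg, 0) < 0)
        return -1;

    int fd = -1;
    cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    if (cmsg && cmsg->cmsg_type == SCM_RIGHTS)
        std::memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;
}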
Is there anything wrong with System V or POSIX shared memory (which are somewhat different, but end up with the same result)? With any such system, you have to worry about coordination between the processes as they access the memory, but that is true with memory-mapped files too.
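A minimal sketch of the POSIX shared-memory route the answer suggests: shm_open() hands back a descriptor that is never backed by a disk filesystem, and it can be ftruncate'd, mmap'ed and even passed over a Unix socket just like the unlinked file in the question. The segment name and size below are invented for the example (older glibc may need -lrt):

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

int main()
{
    const size_t size = 1 << 20;                 // 1 MiB, for the example

    // Producer side: create, size and map the segment.
    int fd = shm_open("/demo_segment", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, size);
    void *p = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    std::memcpy(p, "hello", 6);

    // A consumer would call shm_open("/demo_segment", O_RDWR, 0) and mmap it
    // the same way; shm_unlink() removes the name once everyone is done.
    munmap(p, size);
    close(fd);
    shm_unlink("/demo_segment");
    return 0;
}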
