Running different versions of the same binary, same file - unix

I have a single binary that can run in server or client mode. It can be used like this:
$ ./a.out --server &
$ ./a.out --client &
They talk to each other, and this is working well. My question is: what is the expected behavior when I launch the server:
$ ./a.out --server &
But then I forget to kill it, and go about my development work, editing and building, and running the client:
$ edit client.c
$ make
$ ./a.out --client
^C
<repeat>
Now without the sticky bit set, is my OS (Ubuntu) running two different versions of my binary? Or is it taking a shortcut and using the in-memory instance and therefore ignoring my latest build? Are there any other side effects to this mistake?

make replaces the executable by unlinking (deleting) the original file and creating a new one under the same name. However, since the old binary is still executing in the background, the kernel keeps a reference to it. The file isn't actually removed until that reference is released, even though its directory entry is cleared to make way for the new executable.
So, in your example there are two versions of the program running. One side effect is that if you make changes which cause a major incompatibility between your server and client code, such as changes in packet structures, you'll probably see weird, inexplicable behavior, crashes, etc. It's always a good idea to kill the background server and re-run your entire test.
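A quick way to watch this happen on Linux, while the old server is still running in the background (assuming it was your most recent background job, so $! holds its PID):
$ ls -i a.out          # note the inode number
$ make
$ ls -i a.out          # a different inode: the old file was unlinked
$ ls -l /proc/$!/exe   # the running copy now shows ".../a.out (deleted)"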

If you do not change the server code, then just copy your a.out to, say, 'my_server', and run it as ./my_server --server. make will replace a.out, but not my_server.
Another way is to tell make to kill all running a.out-s whenever it recompiles: add a target 'all' (it must be the first target in the makefile) which depends on a.out and executes 'killall a.out', as sketched below.
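A minimal sketch of that makefile (source file names assumed; recipe lines must start with a tab, and killall comes from the psmisc package):
all: a.out
	-killall a.out   # leading '-' makes make ignore the error when no a.out is running
a.out: server.c client.c
	cc -o a.out server.c client.c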

Related

rsync with --fake-super not preserving owner after restore - Monterey/Synology DS920+/rsync 3

Working through debugging a backup/restore script on:
macStudio M1 / macOS Monterey <-> Synology DS920+
On the Mac, I've installed Homebrew rsync 3.2.4.
On the Synology, I'm running what it shipped with, rsync 3.1.2.
For debugging, I used /Volumes/Recovery, which has files with owner set to root and group set to wheel.
src="/Volumes/Recovery/"
dest="$userID@$remoteIP::NetBackup/MacStudio1/Volumes/Recovery/"
restore="/tmp/RestoreBackup/"
userID has admin privileges on the NAS.
rsync services are enabled on the NAS.
user directories are enabled on the NAS.
Backup:
rsync -ahX --delete -M--fake-super $src $dest
Restore:
rsync -ahX --delete -M--fake-super $dest $restore
It all seems to work without error, and the restored files are as expected, except that the files have their owner set to my ID.
For example, ls -laR shows (abridged):
/Volumes/Recovery/E4A28DF2-7007-4ED8-A427-320FCCA8AC36/usr/standalone/firmware:
-rw-rw-rw- 1 root wheel 1821914899 Jun 4 11:42 arm64eBaseSystem.dmg
/tmp/RestoreBackup//E4A28DF2-7007-4ED8-A427-320FCCA8AC36/usr/standalone/firmware:
-rw-rw-rw- 1 myID wheel 1821914899 Jun 4 11:42 arm64eBaseSystem.dmg
I've looked at the rsync man page (more than once) and I see words like "To affect the remote side of a remote-shell connection...".
However, I'm not sure how to apply that to a backup or a restore.
Do I want to affect the remote side on the backup?
Do I want to affect the remote side on the restore?
Any guidance on what I should have set the options to?
So it looks like I'm not getting any responses. I guess I'll wrap this up with my observations.
In testing I've done on a user directory (with test data files), rsync is saving and restoring files with extended attributes (I verified they got set and that they matched on restore). So I think the overall switches on the rsync commands are correct.
Backing up and restoring the "Recovery" volume has the following issues:
(1) All regular files have the wrong owner. The groups look correct.
(2) The one linked directory has the wrong owner and the wrong group.
I believe problem (1) is caused by my needing to use sudo rsync on the restore. I'm guessing that the backed-up files have the correct owner/group in their metadata, but the restore doesn't have the authority to set the owner to 'root'. I tried using sudo briefly and it died with some errors I didn't quite understand; I believe I need to set up the /etc/sudoers file with some information. Problem (2) may partially go away if I fix (1), or it may need some additional rsync flags to do with linked files and directories.
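If that diagnosis is right, the restore presumably needs to run as root, so that the receiving side is allowed to chown files back to root:wheel while -M--fake-super still applies --fake-super only on the NAS side. An untested sketch:
sudo rsync -ahX --delete -M--fake-super "$dest" "$restore"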
Overall, my backup script is working, but I'm now starting to question whether I know enough to know what to back up on macOS. A rather lengthy article by the CCC folks seems to explain this, but it leaves me feeling I don't know enough about macOS data structures, and it seems some of this may change over time as new versions are released. I had started with the idea of just backing up everything under /* (Macintosh HD), and perhaps this would work, though at least some things need to be excluded (like /Volumes/* and perhaps /tmp/*). I also noticed that there is a /System tree that doesn't show up with ls /* that the CCC folks say to leave alone. So I don't exactly have a good feeling that I understand what I need to know.
So for the moment I'm going to sideline this effort. I've got Time Machine running to my NAS, and I need to get the NAS backed up to a cloud first. My fallback positions are either (1) to be dependent on Time Machine only, (2) to buy and use CCC as a secondary backup, or (3) to create a backup of just my user directories as a secondary backup, which would require reinstalling any 3rd-party software in the event that I can't recover with Time Machine.

Fedora 30: GTK fails to load appmenu-gtk-module

I am learning to program in Python and Rust. On different versions of Ubuntu these programs compiled and ran perfectly. Now that I have a dedicated Fedora 30 KDE system, every time I try to build a program I get a warning: Failed to load module "appmenu-gtk-module"
I have tried looking this up and have reinstalled everything GTK-related on my system. The programs otherwise function well, but no menus are drawn. I also tried things in GNOME and hit the same issue.
I am also using Qt. Those programs also build and run fine, but again, no menu.
I'm going bonkers with this. Any help is appreciated.
The appmenu-gtk module is not packaged on Fedora. (GNOME doesn't support global app menus anyway.)
The real questions are:
Why is it configured to load? Did you copy or share GTK config files from an Ubuntu system? You should remove this module from your settings.
Even with improper configs I don't believe this should result in menus not appearing. It should just fail to load and work as normal. How is your application using menus?
I finally got so fed up with this error that I went full nerd-diagnosis and ran the following command to find out which file contained the errant reference to appmenu-gtk (the package that would provide it is not installable on my system either).
(Replace "dolphin" with the command that is giving you the error.)
strace -e openat,access dolphin 2>&1 \
  | grep -v ENOENT \
  | awk '/appmenu-gtk/ {exit} !/appmenu-gtk/ {print}' \
  | cut -d '"' -f2 | sort | uniq \
  | xargs grep appmenu-gtk 2>/dev/null
This will give you a list of files that contain the string appmenu-gtk; in my case it was ~/.config/gtk-3.0/settings.ini. From there I just commented out the offending line, and that gets rid of the error message (not sure whether this will fix your problem of missing menus, but you might be able to edit that line to fix it another way if commenting it out doesn't work).
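If you'd rather script the fix, something like this should comment out the offending line (an untested sketch; it assumes the reference lives in settings.ini, as it did for me):
sed -i.bak '/appmenu-gtk/ s/^/# /' ~/.config/gtk-3.0/settings.ini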

using MPI: What on earth is "execvp error on file" error?

I am using my own laptop locally, with a Windows 10 system and Intel Parallel Studio.
After I compiled my MPI code with mpiifort and ran it with mpiexec for the first time, it asked me to input my account and password.
I am sure I put in the correct password, but it just didn't work. What does "execvp error" mean? I never encountered this problem on my old Windows 8 system. I just installed this new Windows 10 system on my laptop; everything is new. Could somebody please help me instead of casting a close vote without any comment? At least say something.
"execvp error on file" is the error from the execvp system call, a variant of the exec call that is used to start programs. In your case, mpiexec tries to start the mpi-learning-pack.exe file on the target hosts (chosen according to settings, probably environment settings). The error says that it can't start your program on the target hosts, either because it is not an executable file or because it cannot be found (not copied to the target hosts, or no full path given).
mpiexec does not copy the file to the targets; you should copy it to every target host yourself.
You can also check whether it is executable by starting it manually on the target host: just log in to the target host and type mpi-learning-pack.exe without mpiexec.
The program may not start if any of the required libraries are missing on the target.
Or your account may not have enough privileges, as in https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/607844 and https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/624054
Or you may just need to use a relative path (mpiexec [options] .\mpi-learning-pack.exe) or the full path (mpiexec [options] e:\w\work\fortran\_test_and_learning\mpi-learning-pack.exe) to the target executable, as in https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/624054

When using mpirun with R script, should I copy manually file/script on clusters?

I'm trying to understand how Open MPI's mpirun handles a script file associated with an external program, here an R process (doMPI/Rmpi).
I can't imagine that I have to copy my script onto each host before running something like:
mpirun --prefix /home/randy/openmpi -H clust1,clust2 -n 32 R --slave -f file.R
But apparently it doesn't work until I copy the script 'file.R' onto the cluster nodes and then run mpirun. And when I do this, the results are written on the cluster nodes, whereas I expected them to be returned to the working directory of localhost.
Is there another way to send an R job from localhost to multiple hosts, including the script to be evaluated?
Thanks!
I don't think it's surprising that mpirun doesn't know the details of how scripts are passed to commands such as "R", but the Open MPI version of mpirun does include the --preload-files option to help in such situations:
--preload-files <files>
Preload the comma separated list of files to the current working
directory of the remote machines where processes will be
launched prior to starting those processes.
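Applied to your command line, that would look something like this (untested):
mpirun --prefix /home/randy/openmpi -H clust1,clust2 -n 32 --preload-files file.R R --slave -f file.R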
Unfortunately, I couldn't get it to work, which may be because I misunderstood something, but I suspect it isn't well tested, because very few people use that option: it is quite painful to do parallel computing without a distributed file system.
If --preload-files doesn't work for you either, I suggest that you write a little script that calls scp repeatedly to copy the script to the cluster nodes, as sketched below. There are some utilities that do that, but none seem to be very common or popular, which I again think is because most people prefer to use a distributed file system. Another option is to set up an sshfs file system.
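A minimal sketch of such a script (it assumes passwordless ssh to the nodes and that the same working directory path exists on each of them):
for host in clust1 clust2; do scp file.R "$host:$PWD/"; done
mpirun --prefix /home/randy/openmpi -H clust1,clust2 -n 32 R --slave -f file.R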

Does changing the order of compiling with GCC in unix delete files?

So I just messed up real bad. I'm hoping someone can tell me I didn't just ruin everything I did for the last 4 weeks with this simple typo.
I kept making changes to my C program and would recompile to test the changes using this in terminal:
gcc -o server server.c
After programming for the past 5 hours more or less straight, I accidentally typed this the last time I tried compiling:
gcc -o server.c server
I got some long message and realized my mistake. I tried recompiling using the first way I listed, and it says "no such file server.c".
I typed "ls" and sure enough, my program isn't there.
Please tell me everything I did hasn't vanished? :((
Unfortunately, you told the compiler to read your executable, and write its output to your source file. The file is gone. If you are on a Windows system, perhaps it could be undeleted with something like Norton Utilities. If not, you're probably out of luck.
Next time, consider using a Makefile to contain the compiler commands, so you can just type "make" to build your program. Other strategies include keeping the file open in a single editor session the whole time you're working, and using a source control system like git or subversion (which would also let you go back to previous versions of the file).
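For example, putting the source under git before a long session takes only a few seconds:
git init
git add server.c
git commit -m "working version of server"
After that, a mistake like this one costs you only the changes made since the last commit.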
