udev mounted drive unmounts after script is done - udev

I have created a Fedora 23 live CD spin in which I added a udev script.
The udev rule states:
SUBSYSTEMS=="scsi", KERNEL=="sd[a-z]", GOTO="mount_through_script"
# Else
GOTO="script_end"
LABEL="mount_through_script"
ACTION=="add", RUN+="/usr/bin/mount_usb.sh %N"
ACTION=="remove", RUN="/usr/bin/rmdir %N"
# Exit
LABEL="script_end"
The mount_usb.sh script does multiple things, like doing some work when a specific USB drive is inserted, but the most important command it executes is:
mount -o user,umask=0000 "${mount_source}" "/media/mountpoint"
where mount_source is the path provided by the ADD action.
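For context, the full script isn't shown; a minimal sketch of the relevant part, assuming the device node is passed as the first argument (the %N from the rule), might look like this:
#!/bin/sh
# Minimal sketch, not the original script: $1 is the device node passed via %N in the rule.
mount_source="$1"
mkdir -p /media/mountpoint
mount -o user,umask=0000 "${mount_source}" /media/mountpoint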
Up to the last line of the script the mounted drive seems fine: it is auto-mounted and the scripts are executed, but when the script exits, the freshly mounted drive is unmounted.
When I run the script with the same parameters as root in a console, everything works just fine.
Formerly, with Fedora 19, everything seemed to work, but now we are upgrading to Fedora 23 and it starts failing.
I cannot find any logs stating a reason why it is unmounted, and besides the occasional "not correctly unmounted" warning everything looks OK.
Does anyone have a hint what might be going on?

Related

(Dagster) Schedule my_hourly_schedule was started from a location that can no longer be found

I'm getting the following Warning message when trying to start the dagster-daemon:
Schedule my_hourly_schedule was started from a location Scheduler that can no longer be found in the workspace, or has metadata that has changed since the schedule was started. You can turn off this schedule in the Dagit UI from the Status tab.
I'm trying to automate some pipelines with dagster and created a new project using dagster new-project Scheduler where "Scheduler" is my project.
This command, as expected, created a directory with some hello_world files. Inside of it I put the dagster.yaml file with configuration for a Postgres DB to which I want to write the logs. The whole thing looks like this:
However, whenever I run dagster-daemon run from the directory where the workspace.yaml file is located, I get the message above. I tried running the daemon from other folders, but it then complains that it can't find any workspace.yaml files.
I guess I'm running into a "beginner mistake", but could anyone help me with this?
I appreciate any counsel.
One thing to note is that the dagster.yaml file will not do anything unless you've set your DAGSTER_HOME environment variable to point at the directory that this file lives in.
That being said, I think what's going on here is that you don't have the Scheduler package installed into the python environment that you're running your dagster-daemon in.
To fix this, you can run pip install -e . in the Scheduler directory, although the README.md inside that directory has more specific instructions for working with virtualenvs.
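Putting both points together, the fix might look something like this (a sketch only; the paths are assumptions based on the project name above):
# Assumed paths for illustration only.
export DAGSTER_HOME=/path/to/Scheduler   # the directory containing dagster.yaml
cd /path/to/Scheduler
pip install -e .                         # install the Scheduler package into this environment
dagster-daemon run                       # run from the directory containing workspace.yaml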

Udev does not have permissions to write to the file system

I have been fighting with udev all afternoon. Basically I have created a rule that detects when a mass storage device is plugged into the system. This rule works and I can get it to execute a script without any issues, here it is for review purposes:
ACTION=="add", KERNEL=="sd?*", SUBSYSTEM=="block", RUN+="/usr/local/bin/udevhelper.sh"
The problem I am running into is that the script is executed as some sort of strange user that has read-only permissions to the entire system. The script I am executing is quite simple:
#!/bin/sh
cd /usr/local/bin
touch .drivedetect
echo "1" > .drivedetect
exit
Basically I would like udev to run this script and simply output a 1 to a file named .drivedetect within the /usr/local/bin folder. But, as I mentioned before, it sees the rule and executes it when I plug in a drive; however, when it tries to run the script, it comes back with "file system is read-only" and the script quits with error code 1.
I am currently running this on a Raspberry Pi Zero and the latest Debian image. udev is still being run from init.d from what I can tell, because there is no systemd service registered for it. Any help would be great, and if you need any more information just ask.
Things I've tried:
MODE="0660"
GROUP="plugdev"
Various combinations of RUN+="/bin/sh -c '/path/to/script'" and /bin/bash
OPTIONS="last_rule"
And last but not least I tried running the script under the main username as well
#!/bin/sh
su pi drivedetect
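For reference, a single rule combining several of those options might look like this (a sketch only; it did not solve the problem here):
ACTION=="add", KERNEL=="sd?*", SUBSYSTEM=="block", MODE="0660", GROUP="plugdev", RUN+="/bin/sh -c '/usr/local/bin/udevhelper.sh'"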
I had the same issue when I just used
udevadm control --reload-rules
after editing a udev rule. But, if I do:
sudo /etc/init.d/udev restart
The script can edit a file.
It's not enough to reboot. I have to do the restart after booting. It then works as expected until the next reboot.
This is on an RPi with Stretch Lite.

"Homestead Improved" Vagrant VM - Failed to restart php7.0-fpm.service: Unit php7.0-fpm.service not found

I'm trying to get a Homestead Improved Vagrant VM instance running on Windows.
See Homestead Improved on Github. I'm following this easy introduction:
https://www.sitepoint.com/quick-tip-get-homestead-vagrant-vm-running/
My steps are:
git clone https://github.com/swader/homestead_improved my_project
cd my_project
bin/folderfix.sh
vagrant up
The machine boots and is ready. Then the provisioner runs. Then I get the following error message:
==> default: Failed to restart php7.0-fpm.service: Unit php7.0-fpm.service not found.
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
Any hints what to do?
This has been fixed at the repo level, and should never happen again if you run git pull inside your cloned Homestead Improved folder (but outside of the VM, not SSH-ed into it). If your machine is already running, you might have to apply the steps below, too. But new machines (so new clones of Homestead Improved) will not have this happen any more. Explanation of what happened is here.
@daniel-sixl please try to re-download/re-clone and start from scratch; everything should be working just fine now.
Old solution:
Try to change php7.0-fpm to php7.1-fpm - the box was auto-updated to the new version.
You can do this by going into /etc/nginx/sites-available and changing the required file - its name will match the site you defined, as per that post you linked. So probably /etc/nginx/sites-available/homestead.app.
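The exact line varies, but in a typical nginx + PHP-FPM setup it is the fastcgi_pass directive; something along these lines (the socket path here is an assumption, not taken from the actual box):
# before (assumed default socket path):
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
# after:
fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;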
--
Edit: added more detailed instructions for people very new to it all.
OK, so what you need to do is, once you're in the sites-available folder, edit the homestead.app file. Something like sudo vim homestead.app will do just fine; it'll open a basic text editor (one that's quite nightmarish to use when you're new to it, so just be patient :)). Sudo is important, because you are editing a file that only an admin has access to.
Once you're "in", do the following:
press / (this activates "search") and input php7.0-fpm. This should take you to the line which contains that phrase. If you press / again and press Enter, that works like "find next", so it'll go to the next line having the phrase, or restart from the top if no other lines contain it.
when your cursor is on the line with php7.0-fpm (you can move it around with the arrow keys, of course), press i. This activates "insert" mode. Now you can edit the file.
change the 7.0 to 7.1.
press ESC to exit edit mode, and go back into read-only mode.
repeat for each line with 7.0
once done, while in read-only mode (ESC to make sure), type :x. Yeah, like an emoticon with cross lips. Press Enter. That's short for "Save and Exit".
you will now be in the folder again, from where you should execute sudo service nginx restart.
The new configuration should now take effect, and everything should start working.
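If you'd rather not use vim, the same edit can be done non-interactively with sed (not part of the original answer; it assumes the file path above):
sudo sed -i 's/php7\.0-fpm/php7.1-fpm/g' /etc/nginx/sites-available/homestead.app
sudo service nginx restart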

What is the condition for iexpress restart

I use iexpress.exe to quickly create a simple installer based on a batch file. The IExpress wizard provides the option "Only restart if needed".
But how can I tell from the batch file that a restart is required? I tried using exit code 3016 as in Windows Update, but that doesn't work.
BTW: I call the batch file with
cmd.exe /c my.bat
The contents of my.bat:
exit /b 3010
I tried to get IExpress to recognize the return code. I think you want 3010, not 3016, though. Also the command would be:
exit 3010
[No /b – we want to return an exit code from cmd, not set errorlevel].
But it didn’t work, which makes me wonder if IExpress even bothers to check that.
Anyhow, I did a bit of investigation with Process Monitor. Immediately after the install process runs, it seems IExpress checks the PendingFileRenameOperations registry value to see whether files have been marked for rename (or deletion). If there are any, it determines that a reboot is needed, and takes the action you specified in your SED file (eg prompt the user for a reboot; or just reboot; or nothing).
In case you’re not familiar with it, the PendingFileRenameOperations registry value is a list of files to be moved or deleted on the next system boot.
You can use Sysinternals MoveFile to simulate one of these scheduled-at-next-startup renames. Add movefile.exe to your IExpress archive, and add a line like this in your batch file:
movefile.exe -accepteula foo bar
The actual filenames aren’t important – just use a file that you know is certain to not exist. (As long as you didn’t change directory in the batch file, that’ll still be a file in, eg, %temp%\IXP000.TMP.)
Note that you need to be running elevated for that (Run as administrator).
Worked well here. IExpress pops up after each run, prompting the user to reboot.

Automounting riofs in ubuntu

I have several buckets mounted using the awesome riofs and they work great, however I'm at a loss trying to get them to mount after a reboot. I have tried entering in the following to my /etc/fstab with no luck:
riofs#bucket-name /mnt/bucket-name fuse _netdev,allow_other,nonempty,config=/path/to/riofs.conf.xml 0 0
I have also tried adding the riofs commands to my rc.local file as a startup script, but that too fails to mount them.
Any ideas or recommendations?
Currently RioFS does not support fstab. In order to mount a remote bucket at startup time, consider adding the corresponding command line to your startup script (rc.local, as you mentioned).
If for some reason RioFS fails to start from the startup script, please feel free to contact the developers and/or file an issue report.
If you enter your access key and secret access key in the riofs config XML file, then you should be able to mount this via fstab or an init.d or rc.local script.
See this thread
EDIT:
I tested this myself and this is what I found. Even with the AWS access details specified in the config file, there is no automatic mounting at boot. But to access the system, all one needs to do is issue mount /mount/point/in-fstab, and the fstab directive then works and persists like a standard fstab-mounted filesystem.
So it seems the riofs system is not ready at the stage of the boot process when filesystems are mounted. That's the only logical reason I can find so far. This can be solved with an rc.local or init.d script that just issues a mount command (at worst).
But riofs does work well, even if the documentation seems sparse. It is certainly more reliable and less buggy than s3fs.
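For example, with the fstab entry above in place, a single line in rc.local would be enough (a sketch, using the mount point from the question):
# In /etc/rc.local, before the final "exit 0": mount the fstab-defined entry once riofs is available.
mount /mnt/bucket-name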
Thanks all,
I was able to get them auto-mounting from rc.local with syntax similar to:
sudo riofs --uid=33 --gid=33 --fmode=0777 --dmode=0777 -o "allow_other" -c ~/.config/riofs/riofs.conf.xml Bucket-Name /mnt/mountpoint
Thanks again!
