I have been fighting with udev all afternoon. Basically, I have created a rule that detects when a mass storage device is plugged into the system. This rule works and I can get it to execute a script without any issues. Here it is for review:
ACTION=="add", KERNEL=="sd?*", SUBSYSTEM=="block", RUN+="/usr/local/bin/udevhelper.sh"
The problem I am running into is that the script is executed as some sort of strange user that has read-only permissions to the entire system. The script I am executing is quite simple:
#!/bin/sh
cd /usr/local/bin
touch .drivedetect
echo "1" > .drivedetect
exit
Basically I would like udev to run this script and simply output a 1 to a file named .drivedetect within the /usr/local/bin folder. But as I mentioned before, udev sees the rule and executes it when I plug in a drive; however, when it tries to run the script it comes back with "file system is read-only" and the script quits with error code 1.
I am currently running this on a Raspberry Pi Zero with the latest Debian image. From what I can tell, udev is still being run from init.d, because there is no systemd service registered for it. Any help would be great, and if you need any more information just ask.
Things I've tried:
MODE="0660"
GROUP="plugdev"
Various combinations of RUN+="/bin/sh -c '/path/to/script'" and /bin/bash
OPTIONS="last_rule"
And last but not least, I tried running the script under the main username as well:
#!/bin/sh
su pi drivedetect
I had the same issue when I just used
udevadm control --reload-rules
after editing a udev rule. But, if I do:
sudo /etc/init.d/udev restart
The script can edit a file.
It's not enough to reboot. I have to do the restart after booting. It then works as expected until the next reboot.
This is on an rpi with stretch-lite.
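Putting that together, the sequence after editing a rule looks something like this (udevadm trigger and the systemctl variant are additions here; the systemctl line only applies if systemd is actually managing udev on your image):
sudo udevadm control --reload-rules
sudo udevadm trigger
# reloading alone was not enough in my case; a full daemon restart was needed:
sudo /etc/init.d/udev restart
# or, on images where systemd manages udev:
sudo systemctl restart systemd-udevd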
I'm trying to get a Homestead Improved Vagrant VM instance running on Windows.
See Homestead Improved on Github. I'm following this easy introduction:
https://www.sitepoint.com/quick-tip-get-homestead-vagrant-vm-running/
My steps are:
git clone https://github.com/swader/homestead_improved my_project
cd my_project
bin/folderfix.sh
vagrant up
The machine boots and is ready. Then the provisioner runs, and I get the following error message:
==> default: Failed to restart php7.0-fpm.service: Unit php7.0-fpm.service not found.
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
Any hints what to do?
This has been fixed at the repo level, and should never happen again if you run git pull inside your cloned Homestead Improved folder (but outside of the VM, not SSH-ed into it). If your machine is already running, you might have to apply the steps below too, but new machines (i.e. new clones of Homestead Improved) will not have this happen any more. An explanation of what happened is here.
@daniel-sixl please try to re-download/re-clone and start from scratch; everything should be working just fine now.
Old solution:
Try to change php7.0-fpm to php7.1-fpm - the box was auto-updated to the new version.
You can do this by going into /etc/nginx/sites-available and changing the required file - its name will match the site you defined, as per that post you linked. So probably /etc/nginx/sites-available/homestead.app.
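If you are comfortable on the command line, a one-liner along these lines should make the same change (assuming the vhost file really is /etc/nginx/sites-available/homestead.app):
sudo sed -i 's/php7\.0-fpm/php7.1-fpm/g' /etc/nginx/sites-available/homestead.app
sudo service nginx restart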
--
Edit: added more detailed instructions for people very new to it all.
Ok, so what you need to do, once you're in the sites-available folder, is edit the homestead.app file. Something like sudo vim homestead.app will do just fine; it'll open a basic text editor (that's quite nightmarish to use when you're new to it, so just be patient :) ). Sudo is important, because you are editing a file that only an admin has access to.
Once you're "in", do the following:
press / (this activates "search") and input php7.0-fpm. This should take you to the line which contains that phrase. If you press / again and press Enter, that works like "find next", so it'll go to the next line having the phrase, or restart from the top if no other lines contain it.
when your cursor is on the line with php7.0-fpm (you can move it around with the arrow keys, of course), press i. This activates "insert" mode. Now you can edit the file.
change the 7.0 to 7.1.
press ESC to exit insert mode, and go back into normal (command) mode.
repeat for each line with 7.0
once done, while in normal mode (ESC to make sure), type :x. Yeah, like an emoticon with cross lips. Press Enter. That's short for "Save and Exit".
you will now be in the folder again, from where you should execute sudo service nginx restart.
The new configuration should now take effect, and everything should start working.
I have created a Fedora 23 live CD spin in which I added a udev rule and helper script.
The udev rule states:
SUBSYSTEMS=="scsi", KERNEL=="sd[a-z]", GOTO="mount_through_script"
# Else
GOTO="script_end"
LABEL="mount_through_script"
ACTION=="add", RUN+="/usr/bin/mount_usb.sh %N"
ACTION=="remove", RUN="/usr/bin/rmdir %N"
# Exit
LABEL="script_end"
The mount_usb.sh script does multiple things, such as doing some extra work when a specific USB stick is inserted, but the most important command it executes is:
mount -o user,umask=0000 ${mount_source} "/media/mountpoint";
where mount_source is the path provided by the ADD action.
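For context, a minimal sketch of what such a mount_usb.sh might look like (the actual script isn't shown here; the argument handling and mount point below are assumptions based on the description):
#!/bin/sh
# hypothetical mount_usb.sh: $1 is the device node passed by udev via %N
mount_source="$1"
mkdir -p /media/mountpoint
mount -o user,umask=0000 "${mount_source}" /media/mountpoint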
Until the last line of the script, the mounted drive seems fine: it is auto-mounted and the scripts execute. But when the script exits, the freshly mounted drive is unmounted.
When I run the script with the same parameters as root in a console everything works just fine.
Formerly, with Fedora 19, everything seemed to work, but now we are upgrading to Fedora 23 and it has started failing.
I cannot find any logs stating a reason why it is unmounted, and besides the occasional "not correctly unmounted" warning everything looks OK.
Does anyone have a hint as to what might be going on?
I have a Google Compute Engine VM instance with an Asterisk server running on it. I get this message when I try to run sudo:
sudo: parse error in /etc/sudoers near line 21
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
Is there a password for root so I can try to change it there? Any suggestions on this?
It looks like you have manually edited the /etc/sudoers file, so while you would normally have sudo access, the parse error means you won't be able to use sudo to fix this directly.
Here's how to fix this situation.
1. Save the current boot disk
go to the instance view in the Developers Console
find your VM instance and click on its name; you should now be looking at a URL such as
https://console.cloud.google.com/project/[PROJECT]/compute/instancesDetail/zones/[ZONE]/instances/[VM-NAME]
stop the instance
detach the boot disk from the instance
2. Fix the /etc/sudoers on the boot disk
create a new VM instance with its own boot disk; you should have sudo access here
attach the disk saved above as a separate persistent disk
mount the disk you just attached
fix the /etc/sudoers file on the disk
unmount the second disk
detach the second disk from the VM
delete the new VM instance (let it delete its boot disk, you won't need it)
3. Restore the original VM instance
re-attach the boot disk to the original VM
restart the original VM with its original boot disk, with fixed config
How to avoid this in the future
Always use the visudo command, rather than just any text editor, to edit the /etc/sudoers file; it validates the contents of the file prior to saving it.
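For example (the sudoers.d filename below is just an illustration):
sudo visudo                               # safely edit /etc/sudoers; changes are validated before saving
sudo visudo -c                            # check the syntax of /etc/sudoers and its included files
sudo visudo -c -f /etc/sudoers.d/myrule   # check a specific drop-in file without editing it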
I ran into this issue as well and had the same issue Nakilon was reporting when trying the gcloud workaround.
What we ended up doing was configuring a startup script that removed the broken sudoers file.
So in your metadata put something like:
#!/bin/sh
rm "/etc/sudoers.d/broken-config-file"
echo "ok" > /tmp/ok.log
https://cloud.google.com/compute/docs/startupscript
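If you prefer the command line to the web console, something like the following should attach such a startup script (the instance name, zone, and script filename are placeholders):
# save the script above as fix-sudoers.sh, then:
gcloud compute instances add-metadata <instance-name> --zone <zone> \
    --metadata-from-file startup-script=fix-sudoers.sh
# reset the VM so the startup script runs on the next boot
gcloud compute instances reset <instance-name> --zone <zone>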
As you probably figured out this requires the /etc/sudoers file to be fixed. As nobody has root access to the instance, you will not be able to do this from inside the instance.
The best way to solve this is to edit the disk from another instance. The basic steps to do this are:
Take a snapshot of your disk as a backup (!)
Shutdown your instance, taking care not to delete the boot disk.
Start a new "debugger" instance from one of the stock GCE images.
Attach the old boot disk to the new instance.
In the debugger instance, mount the disk.
In the debugger instance, fix the sudoers file on the mounted disk.
In the debugger instance, unmount the disk.
Shutdown the debugger instance.
Create a new instance with the same specs as your original instance using the fixed disk as the boot disk.
The new disk will then have the fixed sudoers file.
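Roughly, the gcloud equivalents of those steps look something like this (instance, disk, and zone names are placeholders, and the device name on the debugger instance is an assumption; check it with lsblk):
gcloud compute disks snapshot old-boot-disk --zone <zone> --snapshot-names sudoers-backup
gcloud compute instances stop broken-vm --zone <zone>
gcloud compute instances detach-disk broken-vm --disk old-boot-disk --zone <zone>
gcloud compute instances attach-disk debugger-vm --disk old-boot-disk --zone <zone>
# then, on the debugger instance:
sudo mkdir -p /mnt/oldboot
sudo mount /dev/sdb1 /mnt/oldboot
sudo visudo -f /mnt/oldboot/etc/sudoers   # edit and validate the broken file
sudo umount /mnt/oldboot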
Since I bumped into this issue too: if you have another instance, or any place where you can run gcloud with the right privileges, you can run:
gcloud compute --project "<project id>" ssh --zone "europe-west1-b" "<servername>"
I ran this on a server where gcloud runs as root, so you log in to the other box as root too! Then fix your issue. (If you don't have such a box, just spin up a micro instance with the correct gcloud privileges.) This saves the hassle of the disk shuffling, etc.
As mentioned in the comments above, I am getting the same error as below in a GCP VM.
sudo: parse error in /etc/sudoers near line 21
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
To solve this, I SSHed to another VM and became root, then ran the gcloud ssh command to our main VM (the one where you are getting the sudo error):
gcloud compute --project "<project id>" ssh --zone "europe-west1-b" "<servername>"
And boom, you are now logged in as root on the VM.
Now you can access/change the /etc/sudoers file accordingly.
I found this hack better than recreating VMs/disks.
Hope this helps someone!
It is possible to connect to a VM as root from the Google Cloud Shell in your Developers Console. Make sure the VM is running, start the shell and use this command:
gcloud compute ssh root@<instance-name> --zone <zone> [--project <project-id>]
where instance-name is found in the Compute Engine VM Instances screen. project-id is optional but required if you are connecting to an instance in a different project from the project where you started the shell.
You can then fix this and other issues that may prevent you from using sudo.
I got a Permission denied error when trying to ssh to the problem instance via gcloud. Using a startup script as mentioned above by @Jorick works. Instructions for it are here. You will have to stop and restart the VM instance for the startup script to get executed. I modified the script slightly:
rm -f /etc/sudoers.d/google_sudoers >& /tmp/startup.log
After the restart, launch an SSH session from the cloud console and check that you are able to view the file contents (with sudo more /etc/sudoers.d/google_sudoers for example). If that works your problem has been solved.
I have several buckets mounted using the awesome riofs and they work great; however, I'm at a loss trying to get them to mount after a reboot. I have tried entering the following in my /etc/fstab with no luck:
riofs#bucket-name /mnt/bucket-name fuse _netdev,allow_other,nonempty,config=/path/to/riofs.conf.xml 0 0
I have also tried adding a startup script with the riofs commands to my rc.local file, but that too fails to mount them.
Any ideas or recommendations?
Currently RioFS does not support fstab. In order to mount a remote bucket at startup time, consider adding the corresponding command line to your startup script (rc.local, as you mentioned).
If for some reason it fails to start RioFS from the startup script, please feel free to contact the developers and/or file an issue report.
If you enter your access key and secret access key in the riofs config xml file, then you should be able to mount this via fstab or an init.d or rc.local script ..
See this thread
EDIT:
I tested this myself and this is what I found. Even with the AWS access details specified in the config file, there is no automatic mounting at boot. But to access the filesystem, all one needs to do is issue mount /mount/point/in-fstab and the fstab entry works and persists like a standard fstab-mounted filesystem.
So, it seems the riofs system is not ready at the stage of the boot process when filesystems are mounted. That's the only logical reason I can find so far. This can be solved with an rc.local or init.d script that just issues a mount command (at worst).
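For example, a minimal rc.local addition along those lines might be (it assumes the bucket is already listed in /etc/fstab as in the question):
# in /etc/rc.local, before the final "exit 0":
mount /mnt/bucket-name || logger "riofs: mount of /mnt/bucket-name failed"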
But riofs does work well, even if the documentation seems sparse. It is certainly more reliable and less buggy than s3fs.
Thanks all,
I was able to get them auto-mounting from rc.local with syntax similar to:
sudo riofs --uid=33 --gid=33 --fmode=0777 --dmode=0777 -o "allow_other" -c ~/.config/riofs/riofs.conf.xml Bucket-Name /mnt/mountpoint
Thanks again!
I run my meteor app on EC2 like this: node main.js (in a tmux session)
Here are the steps I use to update my meteor app:
1) meteor bundle app.tgz
2) scp app.tgz EC2-server:/path
3) ssh EC2-server and attach to tmux
4) kill the current meteor-node process by C-c
5) extract app.tgz
6) run "node main.js" of the extracted app.tgz
Is this the standard practice?
I realize forever can be used too, but do you still have to kill the old node process and start a new one every time you update the app? Can the upgrade be more seamless, without killing the Node process?
You can't do this without killing the node process, but I haven't found that really matters. What's actually more annoying is the browser refresh on the client, but there isn't much you can do about that.
First, let's assume the application is already running. We start our app via forever with a script like the one in my answer here. I'd show you my whole upgrade script but it contains all kinds of Edthena-specific stuff, so I'll outline the steps we take below:
Build a new bundle. We do this on the server itself, which avoids any missing fibers issues. The bundle file is written to /home/ubuntu/apps/edthena/edthena.tar.gz.
We cd into the /home/ubuntu/apps/edthena directory and rm -rf bundle. That will blow away the files used by the currently running process. Because the server is still running in memory, it will keep executing. However, this step is problematic if your app regularly does uncached disk operations, like reading from the private directory after startup. We don't, and all of the static assets are served by nginx, so I feel safe in doing this. Alternatively, you can move the old bundle directory to something like bundle.old and it should work.
tar xzf edthena.tar.gz
cd bundle/programs/server && npm install
forever restart /home/ubuntu/apps/edthena/bundle/main.js
There really isn't any downtime with this approach - it just restarts the app in the same way it would if the server threw an exception. Forever also keeps the environment from your original script, so you don't need to specify your environment variables again.
Finally, you can have a look at the log files in your ~/.forever directory. The exact path can be found via forever list.
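For reference, the start script mentioned at the top isn't reproduced here, but a minimal sketch of such a forever start script might look like this (the port, URLs, and database name are placeholders):
#!/bin/bash
# hypothetical start script: export the app's environment, then hand main.js to forever
export PORT=3000
export ROOT_URL=https://app.example.com
export MONGO_URL=mongodb://localhost:27017/app
forever start /home/ubuntu/apps/edthena/bundle/main.js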
David's method is better than this one, because there's less downtime when using forever restart compared to forever stop; ...; forever start.
Here's the deploy script spelled out, using the latter technique. In ~/MyApp, I run this bash script:
echo "Meteor bundling..."
meteor bundle myapp.tgz
mkdir ~/myapp.prod 2> /dev/null
cd ~/myapp.prod
forever stop myapp.js
rm -rf bundle
echo "Unpacking bundle"
tar xzf ~/MyApp/myapp.tgz
mv bundle/main.js bundle/myapp.js
# `pwd` is there because ./myapp.log would create the log in ~/.forever/myapp.log actually
PORT=3030 ROOT_URL=http://myapp.example.com MONGO_URL=mongodb://localhost:27017/myapp forever -a -l `pwd`/myapp.log start myapp.js
You're asking about best practices.
I'd recommend mup and cluster.
They allow for horizontal scaling, and a bunch of other nice features, while using simple commands and configuration.
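As a rough idea of the mup workflow (the exact commands can differ between mup versions, so treat this as a sketch):
npm install -g mup
mup init      # generates the configuration files in the current directory
mup setup     # provisions the server
mup deploy    # bundles and deploys the app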