I am running my application on a Colibri VF50 Toradex board (running the Angstrom distribution) with an SQLite database that resides on an SD card. The SD card becomes read-only whenever I remove the power supply just after the board boots up (at the same moment my app starts making a connection to the database). When this happens I get an error in dmesg like "FAT-fs (mmcblk0p1): error, clusters badly computed".
After this, SQLite can no longer write to the database. I also tried to repair the card with fsck using the command below, but without success.
$ fsck.msdos -r -v /dev/mmcblk
Currently, only 1 or 2 FATs are supported, not 251.
I also tried various SQLite journal_mode settings but could not prevent the corruption. So how can I prevent my SD card from becoming read-only, and how can I repair it on the board?
Thanks in advance
Since dosfstools is not available in the Angstrom image the board is running, I was unable to repair the SD card on the board itself. But it is possible to repair it on a laptop using the following command:
sudo dosfsck -r -a /dev/sdb1
So to prevent the issue, I changed the filesystem from FAT32 to ext4, where fsck performs recovery on boot.
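For anyone doing the same, here is a rough sketch of the reformat and the fstab entry I ended up with. The device name /dev/mmcblk0p1 and the mount point /media/sdcard are assumptions for illustration; adjust them for your setup, and back up the card first since mkfs erases it.
# on a laptop, with the card partition unmounted
umount /dev/mmcblk0p1
mkfs.ext4 -L sdcard /dev/mmcblk0p1
And the /etc/fstab entry on the board; the final "2" tells fsck to check (and recover) the filesystem at boot:
/dev/mmcblk0p1  /media/sdcard  ext4  defaults,noatime  0  2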
I used kparted: unmount the device first, then select the partition, then at the top, under the Device menu, create a new FAT. Then I made a new partition and everything worked again.
Related
I need my application to run from a USB stick and perform the installation from there.
The application is eventually installed on a Linux/Debian.
For the application installation I need a DB to be installed on that USB. I also need the DB data (tables, etc.) to be kept on that USB stick.
I read that SQLite is a good candidate for such a purpose. However, I could not find the steps needed to install it on the USB stick.
I downloaded sqlite-snapshot-202002271621.tar.gz from the sqlite.org site, placed it in one of my Debian directories and used the three commands to install it (./configure, make, make install).
That installed SQLite on my hard disk.
What should I do in order to achieve the same on the USB stick?
Mount the USB to the Debian box, place the tar.gz file there, and run the commands from there?
Will that install SQLite on the USB?
Thanks
So the answer is indeed:
Mount the USB to the Debian box
Place the sqlite-snapshot-202002271621.tar.gz file there
Run the commands from there
./configure
make
make install
Note that I had to extract the sqlite-snapshot-202002271621.tar.gz file using:
tar -zvxf sqlite-snapshot-202002271621.tar.gz -C /media/usbstick/ --no-same-owner
In order to avoid the error:
"Cannot change ownership to uid 1000, gid 1000: Operation not permitted"
I have been fighting with udev all afternoon. Basically I have created a rule that detects when a mass storage device is plugged into the system. This rule works and I can get it to execute a script without any issues, here it is for review purposes:
ACTION=="add", KERNEL=="sd?*", SUBSYSTEM=="block", RUN+="/usr/local/bin/udevhelper.sh"
The problem I am running into is that the script is executed as some sort of strange user that has read-only permissions to the entire system. The script I am executing is quite simple:
#!/bin/sh
cd /usr/local/bin
touch .drivedetect
echo "1" > .drivedetect
exit
Basically I would like udev to run this script and simply output a 1 to a file named .drivedetect within the /usr/local/bin folder. But as I mentioned before, udev sees the rule and executes it when I plug in a drive; however, when it tries to run the script, it comes back with "file system is read-only" and the script quits with error code 1.
I am currently running this on a Raspberry Pi Zero with the latest Debian image. udev is still being run from init.d from what I can tell, because there is no systemd service registered for it. Any help would be great, and if you need any more information just ask.
Things I've tried:
MODE="0660"
GROUP="plugdev"
Various combinations of RUN+="/bin/sh -c '/path/to/script'" and /bin/bash
OPTIONS="last_rule"
And last but not least, I tried running the script as the main user as well:
#!/bin/sh
su pi drivedetect
I had the same issue when I just used
udevadm control --reload-rules
after editing a udev rule. But if I do:
sudo /etc/init.d/udev restart
The script can edit a file.
It's not enough to reboot. I have to do the restart after booting. It then works as expected until the next reboot.
This is on an rpi with stretch-lite.
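In other words, the sequence that worked after editing the rule was the one below (this is on sysvinit-style udev as shipped with stretch-lite; on a systemd-managed system the equivalent restart would presumably be systemctl restart udev, which I have not verified here):
sudo udevadm control --reload-rules    # picks up the edited rule, but was not enough on its own
sudo /etc/init.d/udev restart          # after this, the RUN+= script can write to the filesystem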
I have a Google Compute Engine VM instance with an Asterisk server running on it. I get this message when I try to run sudo:
sudo: parse error in /etc/sudoers near line 21
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
Is there a password for root so I can try to change it there? Any suggestions on this?
It looks like you have manually edited the /etc/sudoers file, so while you would normally have sudo access, the parse error means you won't be able to use it directly.
Here's how to fix this situation (a rough gcloud sketch follows the steps below).
1. Save the current boot disk
go to the instance view in the Developers Console
find your VM instance and click on its name; you should now be looking at a URL such as
https://console.cloud.google.com/project/[PROJECT]/compute/instancesDetail/zones/[ZONE]/instances/[VM-NAME]
stop the instance
detach the boot disk from the instance
2. Fix the /etc/sudoers on the boot disk
create a new VM instance with its own boot disk; you should have sudo access here
attach the disk saved above as a separate persistent disk
mount the disk you just attached
fix the /etc/sudoers file on the disk
unmount the second disk
detach the second disk from the VM
delete the new VM instance (let it delete its boot disk, you won't need it)
3. Restore the original VM instance
re-attach the boot disk to the original VM
restart the original VM with its original boot disk, with fixed config
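If you prefer doing this from the command line, here is a rough gcloud sketch of the disk swap above. The instance name broken-vm, the helper name debugger-vm, the disk name broken-boot-disk and the zone are placeholders; check the real names with gcloud compute instances describe and gcloud compute disks list.
# 1. stop the broken VM and detach its boot disk
gcloud compute instances stop broken-vm --zone us-central1-a
gcloud compute instances detach-disk broken-vm --disk broken-boot-disk --zone us-central1-a
# 2. create a helper VM, attach the disk, ssh in and fix /etc/sudoers (mount + visudo -f)
gcloud compute instances create debugger-vm --zone us-central1-a
gcloud compute instances attach-disk debugger-vm --disk broken-boot-disk --zone us-central1-a
gcloud compute ssh debugger-vm --zone us-central1-a
# 3. detach the disk, delete the helper, re-attach the fixed disk as boot disk and restart
gcloud compute instances detach-disk debugger-vm --disk broken-boot-disk --zone us-central1-a
gcloud compute instances delete debugger-vm --zone us-central1-a
gcloud compute instances attach-disk broken-vm --disk broken-boot-disk --boot --zone us-central1-a
gcloud compute instances start broken-vm --zone us-central1-a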
How to avoid this in the future
Always use visudo rather than a plain text editor to edit the /etc/sudoers file; visudo validates the contents of the file before saving it.
I ran into this issue as well and had the same issue Nakilon was reporting when trying the gcloud workaround.
What we ended up doing was configuring a startup script that removed the broken sudoers file.
So in your metadata put something like:
#!/bin/sh
rm "/etc/sudoers.d/broken-config-file"
echo "ok" > /tmp/ok.log
https://cloud.google.com/compute/docs/startupscript
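For reference, this is roughly how the startup script can be attached from the command line (my-instance and the zone are placeholders; you can equally paste the script into the instance's custom metadata in the console):
# save the one-off fix as a local file
cat > fix-sudoers.sh <<'EOF'
#!/bin/sh
rm "/etc/sudoers.d/broken-config-file"
echo "ok" > /tmp/ok.log
EOF
# attach it as the startup script and reset the VM so it runs on the next boot
gcloud compute instances add-metadata my-instance --zone us-central1-a \
    --metadata-from-file startup-script=fix-sudoers.sh
gcloud compute instances reset my-instance --zone us-central1-a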
As you probably figured out, this requires the /etc/sudoers file to be fixed. As nobody has root access to the instance, you will not be able to do this from inside the instance.
The best way to solve this is to edit the disk from another instance (a rough mount-and-fix sketch follows below). The basic steps to do this are:
Take a snapshot of your disk as a backup (!)
Shut down your instance, taking care not to delete the boot disk.
Start a new "debugger" instance from one of the stock GCE images.
Attach the old boot disk to the new instance.
In the debugger instance, mount the disk.
In the debugger instance, fix the sudoers file on the mounted disk.
In the debugger instance, unmount the disk.
Shut down the debugger instance.
Create a new instance with the same specs as your original instance using the fixed disk as the boot disk.
The new disk will then have the fixed sudoers file.
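On the debugger instance, the mount-and-fix part looks roughly like this; the device name /dev/sdb1 is an assumption, so check lsblk for where the attached disk actually shows up:
lsblk                                    # find the attached disk, e.g. sdb with partition sdb1
sudo mount /dev/sdb1 /mnt                # mount the old boot disk's root partition
sudo visudo -c -f /mnt/etc/sudoers       # report which line fails to parse
sudo visudo -f /mnt/etc/sudoers          # edit it with syntax validation
sudo umount /mnt                         # clean up before detaching the disk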
Since I bumped into this issue too: if you have another instance, or any place where you can run gcloud with the right privileges, you can run:
gcloud compute --project "<project id>" ssh --zone "europe-west1-b" "<servername>"
I ran this on a server which had gcloud set up as root, so you log in to the other box as root too! Then fix your issue. (If you don't have such a box, just spin up a micro instance with the correct gcloud privileges.) This saves the hassle of the disk juggling, etc.
As mentioned in the comments above, I was getting the same error as below on a GCP VM:
sudo: parse error in /etc/sudoers near line 21
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
To solve this, I SSHed to another VM, became root, and then ran the gcloud ssh command to our main VM (the one where you are getting the sudo error):
gcloud compute --project "<project id>" ssh --zone "europe-west1-b" "<servername>"
And boom, you are now logged in as root on the VM.
Now you can access/change the /etc/sudoers file accordingly.
I found this hack easier than recreating the VM or its disks.
Hope this helps someone!
It is possible to connect to a VM as root from the Google Cloud Shell in the Developers Console. Make sure the VM is running, start the shell and use this command:
gcloud compute ssh root@<instance-name> --zone <zone> [--project <project-id>]
where instance-name is found in the Compute Engine VM Instances screen. project-id is optional but required if you are connecting to an instance in a different project from the project where you started the shell.
You can then fix this and other issues that may prevent you from using sudo.
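Once you are in as root, a quick way to locate and repair the bad entry (visudo refuses to save a file that does not parse, so you cannot make things worse):
visudo -c                   # reports which file and line fail to parse
visudo -f /etc/sudoers      # edit the offending file with validation; adjust the path if -c points elsewhere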
I got a Permission denied error when trying to ssh to the problem instance via gcloud. Using a startup script as mentioned above by @Jorick works. Instructions for it are here. You will have to stop and restart the VM instance for the startup script to get executed. I modified the script slightly:
rm -f /etc/sudoers.d/google_sudoers >& /tmp/startup.log
After the restart, launch an SSH session from the cloud console and check that you are able to view the file contents (with sudo more /etc/sudoers.d/google_sudoers for example). If that works your problem has been solved.
I have several buckets mounted using the awesome riofs and they work great; however, I'm at a loss trying to get them to mount after a reboot. I have tried adding the following to my /etc/fstab with no luck:
riofs#bucket-name /mnt/bucket-name fuse _netdev,allow_other,nonempty,config=/path/to/riofs.conf.xml 0 0
I have also tried adding a startup script to run the riofs commands to my rc.local file but that too fails to mount them.
Any ideas or recommendations?
Currently RioFS does not support fstab. To mount a remote bucket at startup, add the corresponding command line to your startup script (rc.local, as you mentioned).
If RioFS fails to start from the startup script for some reason, feel free to contact the developers and/or file an issue report.
If you enter your access key and secret access key in the riofs config XML file, then you should be able to mount this via fstab or an init.d or rc.local script.
See this thread
EDIT:
I tested this myself and this is what I found. Even with the AWS access details specified in the config file, there is no automatic mounting at boot. But to access the filesystem, all one needs to do is issue mount /mount/point/in-fstab, and the fstab directive works and persists like a standard fstab-mounted filesystem.
So it seems the riofs system is not ready at the stage of the boot process when filesystems are mounted; that's the only logical explanation I can find so far. This can be solved with an rc.local or init.d script that simply issues a mount command (at worst).
But riofs does work well, even if the documentation is sparse. It is certainly more reliable and less buggy than s3fs.
Thanks all,
I was able to get them auto-mounting from rc.local with syntax similar to:
sudo riofs --uid=33 --gid=33 --fmode=0777 --dmode=0777 -o "allow_other" -c ~/.config/riofs/riofs.conf.xml Bucket-Name /mnt/mountpoint
Thanks again!
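For anyone copying this, the /etc/rc.local I ended up with looks roughly like the sketch below. The sleep is my own assumption to give networking time to come up, and note that ~ is not your user's home when rc.local runs as root, so use an absolute path to the config file (the path below is a placeholder):
#!/bin/sh -e
sleep 10    # crude wait for the network before talking to S3
riofs --uid=33 --gid=33 --fmode=0777 --dmode=0777 -o "allow_other" \
    -c /home/youruser/.config/riofs/riofs.conf.xml Bucket-Name /mnt/mountpoint
exit 0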
Intermittently, I run into an issue where my rsync script simply freezes mid-transfer. The freeze may occur while downloading a file, or while listing up-to-date files.
I'm running this on my Mac; here's the command:
rsync -vvhrtplHP -e "ssh" --rsync-path="sudo rsync" --filter=". $FILTER" --delete --delete-excluded --log-file="$BACKUP/log" --link-dest="$BACKUP/current/" $CONNECT:$BASE $BACKUP/$DATE/
For example, the console will output the download progress of a file, and stop at an arbitrary percentage and speed. The log doesn't even list the file (probably because it's incomplete).
I'll try numerous attempts and it'll freeze on different files or steps with no rhyme or reason. Terminal will show the loading icon while it's working, the output will freeze, and after a few seconds the loading icon vanishes.
Any ideas what could be causing this? I'm using rsync 3.1.0 on Mavericks. Could it be a connectivity issue or a system max execution time issue?
I have had rsync freezes in the past, and I recall reading somewhere that it may have to do with rsync having to look for files to link, something that gets increasingly expensive as you accumulate backup after backup. I suggest you skip --link-dest for the next backup if your disk space allows it (to break the chain, so to speak).
As mentioned in https://serverfault.com/a/207693 you could use the hardlink command afterwards; I haven't tried it yet.
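To test whether --link-dest is the culprit, the same command with only that option dropped would look like the sketch below (expect the run to use noticeably more disk space, since nothing is hard-linked against the previous snapshot):
rsync -vvhrtplHP -e "ssh" --rsync-path="sudo rsync" --filter=". $FILTER" \
    --delete --delete-excluded --log-file="$BACKUP/log" \
    $CONNECT:$BASE $BACKUP/$DATE/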
Just had a similar problem while doing an rsync from a hard disk to a FAT32 USB drive. In my case rsync froze in less than a second and did not respond at all after that.
It turned out the problem was a combination of hard links on the hard disk and the FAT32 filesystem on the USB drive, which does not support hard links.
Formatting the USB drive with ext4 solved the problem for me.
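If you want to do the same, it is a one-liner, but double-check the device name first since mkfs wipes the drive; /dev/sdX1 below is a placeholder:
sudo umount /dev/sdX1       # make sure the stick is not mounted
sudo mkfs.ext4 /dev/sdX1    # reformat as ext4 so hard links are supported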