I'm running Linux (Debian Buster).
This morning I was working as usual when my PC crashed. I forced it to shut down, and when I restarted, it dropped me into an initramfs prompt (if I'm not mistaken) inviting me to run fsck.
This is not the first time this has happened to me. I usually run fsck -y /dev/sda1 and then fsck -y /dev/sda3 for my root and home partitions.
But this morning, after the crash, when I did that, it scrolled through a lot of messages very quickly, which worried me. When it finished I restarted my PC and, voilà, I could no longer find my work folder. I had a folder containing two other folders; now only one of them is visible. All of my shortcuts to the missing folder no longer work.
When I run df -h, the reported usage suggests the files are still there, but I can't see them. They are not in /lost+found.
I did a global search of my home directory and found nothing.
I can no longer work; everything was in there. I have a month-old backup, but still.
If you have any kind of solution, please share it; I'm desperate.
My disk has four partitions: three for Linux and one NTFS.
Thank you
I'm overjoyed: I found my data.
What gave me 0.5% of hope (I must admit I was nearly out the window, PC in hand, as I searched for my data one last time) was the size of my partition. When I right-click on home and look at the size, I see 31 GB, yet
df -h
returns:
/dev/sda3 192G 95G 87G 53% /home
That is 95 GB used, compared to the 31 GB above, so where were the other ~60 GB?
Before the problem, I was using around 95 GB.
It is true that several messages containing the word inode (or node, I don't remember) along with numbers scrolled by quickly during the fsck -y.
Someone suggested I take a look at /home/lost+found, and when I did, I saw nothing. But when I logged in as root in a terminal, ran cd /home and then ls lost+found, I saw entries with numbers like #13032, #13036, #1181667, and a folder named #4703. So I ran chmod -R 777 lost+found in order to access it from my regular (non-root) user account. Once the command finished, after a few minutes, I opened /home/lost+found with the nemo file manager and, TADAM, all my data was there.
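For anyone hitting the same thing, the recovery above boils down to these commands (run as root; chmod -R 777 is the quick-and-dirty route I took, a more careful fix would be to chown the recovered entries to your own user):
cd /home
ls lost+found            # recovered entries appear as #13032, #13036, ... plus folders like #4703
chmod -R 777 lost+found  # lets a regular user browse the recovered files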
I've since made SEVERAL BACKUPS and vowed never to blindly trust fsck -y again; it's a great tool, but I'll use it with caution.
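Part of that caution, for me, will be doing a dry run first so I can see what fsck wants to change before letting it loose (this is my plan going forward, not something I did here):
fsck -n /dev/sda3   # report problems but answer "no" to every repair prompt
fsck -y /dev/sda3   # only run this once the dry-run output looks sane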
I'm working through debugging backup/restore in a backup script between:
macStudio M1 / macOS Monterey <-> Synology DS920+
On the Mac, I've installed Homebrew's rsync 3.2.4.
On the Synology, I'm running what it shipped with: rsync 3.1.2.
For debugging, I used /Volumes/Recovery, which has files with owner set to root and group set to wheel.
src="/Volumes/Recovery/"
dest="$userID@$remoteIP::NetBackup/MacStudio1/Volumes/Recovery/"
restore="/tmp/RestoreBackup/"
userID has admin privileges on the NAS.
rsync services are enabled on the NAS.
user directories are enabled on the NAS.
Backup:
rsync -ahX --delete -M--fake-super $src $dest
Restore:
rsync -ahX --delete -M--fake-super $dest $restore
It all seems to work without error, and the restored files are as expected, except that their owner is set to my ID.
For example, ls -laR shows (abridged):
/Volumes/Recovery/E4A28DF2-7007-4ED8-A427-320FCCA8AC36/usr/standalone/firmware:
-rw-rw-rw- 1 root wheel 1821914899 Jun 4 11:42 arm64eBaseSystem.dmg
/tmp/RestoreBackup//E4A28DF2-7007-4ED8-A427-320FCCA8AC36/usr/standalone/firmware:
-rw-rw-rw- 1 myID wheel 1821914899 Jun 4 11:42 arm64eBaseSystem.dmg
I've looked at the rsync man page (more than once) and I see phrases like "To affect the remote side of a remote-shell connection...".
However, I'm not sure how to apply that to a backup or a restore.
Do I want to affect the remote side on the backup?
Do I want to affect the remote side on the restore?
Any guidance on what I should set the options to?
It looks like I'm not getting any responses, so I'll wrap this up with my observations.
In testing on a user directory (with test data files), rsync saves and restores files with extended attributes correctly (I verified they were set and that they matched on restore). So I think the overall switches on the rsync commands are correct.
The backup and restore of the "Recovery" volume have the following issues:
(1) All regular files have the wrong owner. The groups look correct.
(2) The one linked directory has the wrong owner and the wrong group.
I believe problem (1) is caused by my needing to use sudo rsync on the restore. I'm guessing the files that are backed up have the correct owner/group in metadata, but the restore doesn't have the authority to set the owner to root. I tried using sudo briefly and it died with some errors I didn't quite understand; I believe I need to add something to the /etc/sudoers file. Problem (2) may partially go away if I fix (1), or it may need some additional rsync flags to do with linked files and directories.
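For what it's worth, here is a sketch of what I think the corrected restore would be (unverified, since my sudo attempt errored out): sudo gives the local, receiving side the authority to actually re-apply root ownership, while -M--fake-super still tells the NAS daemon to read back the ownership it stashed in user.rsync.%stat extended attributes during the backup.
sudo rsync -ahX --delete -M--fake-super $dest $restore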
Overall, my backup script is working, but I'm now starting to question whether I know enough to know what to back up on macOS. A rather lengthy article by the CCC folks seems to explain this, but it leaves me feeling I don't know enough about macOS data structures, and it seems some of this may change over time as new versions are released. I had started with the idea of just backing up everything under /* (Macintosh HD), and perhaps this would work, though at least some things need to be excluded (like /Volumes/* and perhaps /tmp/*). I also noticed there is a /System tree that doesn't show up with ls /* that the CCC folks say to leave alone. So I don't exactly have a good feeling that I understand what I need to know.
So for the moment I'm going to sideline this effort. I've got Time Machine running to my NAS, and I need to get the NAS backed up to a cloud first. My fallback positions are either (1) to depend on Time Machine only, (2) to buy and use CCC as a secondary backup, or (3) to create a secondary backup of just my user directories, which would require reinstalling any third-party software if I can't recover with Time Machine.
I'm trying to get a Homestead Improved Vagrant VM instance running on Windows.
See Homestead Improved on Github. I'm following this easy introduction:
https://www.sitepoint.com/quick-tip-get-homestead-vagrant-vm-running/
My steps are:
git clone https://github.com/swader/homestead_improved my_project
cd my_project
bin/folderfix.sh
vagrant up
The machine boots and is ready. Then the provisioner runs. Then I get the following error message:
==> default: Failed to restart php7.0-fpm.service: Unit php7.0-fpm.service not found.
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
Any hints what to do?
This has been fixed at the repo level, and should never happen again if you run git pull inside your cloned Homestead Improved folder (but outside of the VM, not SSH-ed into it). If your machine is already running, you might have to apply the steps below, too. But new machines (i.e. fresh clones of Homestead Improved) will not have this happen any more. An explanation of what happened is here.
@daniel-sixl please try to re-download/re-clone and start from scratch; everything should be working just fine now.
Old solution:
Try to change php7.0-fpm to php7.1-fpm - the box was auto-updated to the new version.
You can do this by going into /etc/nginx/sites-available and changing the required file - its name will match the site you defined, as per that post you linked. So probably /etc/nginx/sites-available/homestead.app.
--
Edit: added more detailed instructions for people very new to it all.
OK, so what you need to do, once you're in the sites-available folder, is edit the homestead.app file. Something like sudo vim homestead.app will do just fine; it'll open a basic text editor (one that's quite nightmarish to use when you're new to it, so just be patient :) ). The sudo is important, because you are editing a file that only an admin has access to.
Once you're "in", do the following:
press / (this activates "search") and input php7.0-fpm. This should take you to the line which contains that phrase. If you press / again and press Enter, that works like "find next", so it'll go to the next line having the phrase, or restart from the top if no other lines contain it.
when your cursor is on the line with php7.0-fpm (you can move it around with the arrow keys, of course), press i. This activates "insert" mode. Now you can edit the file.
change the 7.0 to 7.1.
press ESC to exit insert mode and go back into normal (command) mode.
repeat for each line with 7.0
once done, while in normal mode (press ESC to make sure), type :x. Yeah, like an emoticon with crossed lips. Press Enter. That's short for "save and exit".
you will now be in the folder again, from where you should execute sudo service nginx restart.
The new configuration should now take effect, and everything should start working.
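If vim feels like too much, the same edit can be done non-interactively with sed (assuming your config file really is /etc/nginx/sites-available/homestead.app, as above):
sudo sed -i 's/php7\.0-fpm/php7.1-fpm/g' /etc/nginx/sites-available/homestead.app   # swap every 7.0 reference for 7.1
sudo service nginx restart                                                          # reload nginx with the new config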
My Problem: I am trying to get hold of the official Chrome WideVine CDM plugin for an ARM architecture.
My Understanding So Far: Given that ARM-based Chromebooks can stream Netflix (and Netflix uses the WideVine CDM plugin), I am led to believe a Chrome OS installation should contain the files I'm after. As I don't have access to an ARM-based Chromebook, my next best option is a Chromebook recovery image.
Where I'm up to: I have downloaded an HP Chromebook 11 recovery image, chromeos_6812.88.0_daisy-skate_recovery_stable-channel_skate-mp.bin, from here (the HP Chromebook 11 is ARM-based).
What I'd like to do next: Extract two files from the recovery image.
Note: I don't have access to an ARM based Chromebook to just copy the files from :/
Does anyone know how I could do such a thing?
The .bin file is just a disk image that contains many partitions. You can "load" the image by running sudo kpartx -av chromeos_6812.88.0_daisy-skate_recovery_stable-channel_skate-mp.bin (the -v is for verbose mode). This will load 12 partitions (from /dev/mapper/loop0p1 to /dev/mapper/loop0p12) and make them available for mounting, and you should see some additional drives in your file manager.
In this case, the partition you're looking for is labelled ROOT-A, and corresponds to the third partition (/dev/mapper/loop0p3). For some reason, opening it in my file manager directly didn't work, so I had to mount it manually by running sudo mount -t ext2 /dev/mapper/loop0p3 -o ro /media/saikrishna/chromeos/. This will mount the ext2 partition in read-only mode in the /media/saikrishna/chromeos directory (change the last part to an existing empty directory on your system).
To remove the mappings, run sudo kpartx -dv chromeos_6812.88.0_daisy-skate_recovery_stable-channel_skate-mp.bin. If that doesn't print out anything (which was the case for me), run sudo kpartx -dv /dev/loop0.
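Putting it all together, the whole procedure looks something like this (assuming the image maps to loop0, and using /mnt/chromeos as the mount point; adjust both for your system):
sudo kpartx -av chromeos_6812.88.0_daisy-skate_recovery_stable-channel_skate-mp.bin   # map the partitions onto loop devices
sudo mkdir -p /mnt/chromeos
sudo mount -t ext2 -o ro /dev/mapper/loop0p3 /mnt/chromeos                            # mount ROOT-A read-only
# ...copy out the WideVine CDM files you need...
sudo umount /mnt/chromeos
sudo kpartx -dv chromeos_6812.88.0_daisy-skate_recovery_stable-channel_skate-mp.bin   # remove the mappings again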
Intermittently, I run into an issue where my rsync script simply freezes mid-transfer. The freeze may occur while downloading a file, or while listing up-to-date files.
I'm running this on my Mac; here's the command:
rsync -vvhrtplHP -e "ssh" --rsync-path="sudo rsync" --filter=". $FILTER" --delete --delete-excluded --log-file="$BACKUP/log" --link-dest="$BACKUP/current/" $CONNECT:$BASE $BACKUP/$DATE/
For example, the console will output the download progress of a file and then stop at an arbitrary percentage and speed. The log doesn't even list the file (probably because the transfer is incomplete).
I've tried numerous times, and it freezes on different files or steps with no rhyme or reason. Terminal shows the loading icon while it's working; then the output freezes, and after a few seconds the loading icon vanishes.
Any ideas what could be causing this? I'm using rsync 3.1.0 on Mavericks. Could it be a connectivity issue or a system max execution time issue?
I have had rsync freezes in the past, and I recall reading somewhere that it may have to do with rsync having to look for files to hard-link, something that gets increasingly expensive as you accumulate backup upon backup. I suggest you skip --link-dest in the next backup if your disk space allows it (to break the chain, so to speak); see the sketch below.
As mentioned in https://serverfault.com/a/207693, you could use the hardlink command afterwards to recover the space; I haven't tried it yet.
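Concretely, that just means running the command from the question once without the --link-dest option, e.g.:
rsync -vvhrtplHP -e "ssh" --rsync-path="sudo rsync" --filter=". $FILTER" --delete --delete-excluded --log-file="$BACKUP/log" $CONNECT:$BASE $BACKUP/$DATE/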
I just had a similar problem while running rsync from a hard disk to a FAT32 USB drive. In my case rsync froze in less than a second and did not react at all after that.
It turned out the problem was a combination of hard links being used on the hard disk and the FAT32 filesystem on the USB drive, which does not support hard links.
Formatting the USB drive with ext4 solved the problem for me.
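If you want to check up front whether hard links are involved before reformatting, something like this should do (note that mkfs wipes the drive; /dev/sdX1 is a placeholder, substitute your actual USB partition):
find /path/to/source -type f -links +1   # list files that have more than one hard link
sudo mkfs.ext4 /dev/sdX1                 # reformat the USB partition as ext4 (destroys its contents!)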
For some odd reason, the whatis command in my Unix shell (Cygwin) is not working. It constantly returns "ls: nothing appropriate" or "cd: nothing appropriate". I'm wondering if something is set up incorrectly. Does anyone have any light to shed? Thanks!
I ran into a similar issue using the 64-bit Red Hat Cygwin installation.
In my case, /usr/sbin/makewhatis did not exist. Running man with a command worked, but neither apropos nor whatis returned anything other than "nothing appropriate".
After searching for a missing package and binging a bunch, I Read The Friendly Manual page for man and found out about mandb.
Running mandb solved my problem.
From the Cygwin FAQ:
Why doesn't man -k (or apropos) work?
Before you can use man -k or apropos, you must create the whatis database. Just run the command
mandb
(it may take a minute to complete).
(Note: It used to say /usr/sbin/makewhatis instead of mandb in older versions of that FAQ.)
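In short, the whole fix is just the following (plain mandb was enough on my Cygwin setup; on most Linux systems you will need sudo mandb, as the answers below note):
mandb        # build/refresh the whatis database
whatis ls    # should now print a one-line description instead of "nothing appropriate"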
Run sudo mandb once
Not sure if this helps, but when I ran mandb, I got this (over several attempts):
mandb
0 man subdirectories contained newer manual pages.
0 manual pages were added.
0 stray cats were added.
0 old database entries were purged.
However,
sudo mandb
75 man subdirectories contained newer manual pages.
7235 manual pages were added.
0 stray cats were added.
0 old database entries were purged.
worked for real.
I faced the same issue, and fixed it by running the mandb command as root:
[root@localhost log]# whatis last
last (1) - show a listing of last logged in users
[root@localhost log]#
sudo mandb solved the problem for me. It regenerates the apropos database, but you have to make sure you run it with sudo.