Writing to Tape Drive Multiple Times using tar command - unix

I am using the command
tar -cvfE $TAPE_DRIVE $BACKUP_FILE
to write to tape for the first time. It works like a charm.
BUT, when there is already a file in the tape (older backup) I use the command
tar -rvfE $TAPE_DRIVE $BACKUP_FILE
which disappoints every time.
There is enough space on the tape (1.3TB).
I am only writing 80-90GB files at a time.
The tape is mounted locally.
After failing to write to the tape, if I try to list the files on it, I get only the old (first) file that I wrote to it.
Is there any other command I should be using?

Apparently the native tar command is not perfect and has bugs. It is often recommended that the -i (ignore directory checksum errors) flag will resolve this, but it did not in my case.
Using GNU tar solved my problem. Simply use gtar instead of tar and it works like a charm.
So the commands become
gtar -cvf $TAPE_DRIVE $BACKUP_FILE
and
gtar -rvf $TAPE_DRIVE $BACKUP_FILE
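For reference, the whole cycle then looks like this (the device path is a placeholder; adjust it to your system):
TAPE_DRIVE=/dev/rmt/0   # hypothetical tape device path
gtar -cvf "$TAPE_DRIVE" "$BACKUP_FILE"   # first write: create the archive
gtar -rvf "$TAPE_DRIVE" "$BACKUP_FILE"   # later writes: append to it
gtar -tvf "$TAPE_DRIVE"                  # list the archive to verify all backups are present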

mv command created an executable rather than a directory

I recently attempted to move some files by running an -exec mv command with find (the command is shown below). When I did this, I mistyped the destination directory path (so the directory did not yet exist), and mv created what appears to be an executable instead of a directory.
When I run "Get Info", one image renders and the file size is about right for a single image, but hundreds of files were supposed to be moved. Have I lost this data for good? Is there any way to get macOS to recognize this "executable" as a directory?
This is the command I used:
find . -type f -name "*.JPG" -exec mv {} ../../DestinationFolderName \;
Here's an image showing a successful mv into an existing directory, and what happened when I put a path to a directory that did not yet exist.
Unfortunately "mv" to a name that doesn't exist is interpreted as a filename rather than a directory. So the OS has, one-by-one, copied your JPG file on top of each other. The resulting file is most likely whatever JPG happened to be the one it moved last (if you rename it to JPG extension you can check which one).
So, very unfortunately, you probably need to investigate a data recovery tool for macOS quickly (and do so before you do anything that creates more files on your disk, as much as possible). The "ghosts" of the files are, for now at least, mostly still present on your hard drive as deallocated segments, but they are back in the pool to be overwritten as you create new files (even when your browser creates temporary cache files, and things like that).
If you don't have a backup/Time Machine copy of the files, the best thing to do is get a macOS data recovery program QUICKLY.
VERY sorry not to have a happier answer.
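For the future, a defensive variant of the original command: create the destination first and add a trailing slash, so mv errors out instead of clobbering files when the path is wrong (paths are the ones from the question):
mkdir -p ../../DestinationFolderName   # make sure the destination actually exists
find . -type f -name "*.JPG" -exec mv {} ../../DestinationFolderName/ \;   # the trailing / makes mv fail if that's not a directory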

Unzipping Multiple zip files using 7zip command line

I have a number of zip files located in a single folder eg:
file1.gz
file2.gz
file3.gz
file4.gz
I'm looking for a way of automatically unzipping these in a batch job, into a similarly named folder structure: for example, the contents of file1.gz should drop into a folder named file1.
I have been told that 7zip would address my issue but can't figure out how to go about it.
Any help is greatly appreciated.
Which OS are you using? This is something you'd do using the shell's capabilities; you could write
for A in *.gz ; do gunzip "$A" ; done
I'm using gunzip here because .gz is actually gzip, but you can use the 7zip CLI tool as well, of course. If you're on Windows, then I recommend installing a real shell (the standard cmd.exe cannot really be considered a shell, IMHO).
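If you do want 7-Zip specifically, its command line can handle the folder-per-file requirement directly; a sketch, assuming the 7z binary (p7zip on unix-likes, 7z.exe on Windows) is on your PATH:
for A in *.gz ; do 7z x "$A" -o"${A%.gz}" ; done   # -oDIR sets the output folder (no space after -o); ${A%.gz} drops the extension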

What are #file# and file~ and how can I get rid of them?

I originally had three files: makefile, readme.txt, and hashtable.c in my directory, where I am writing my code in emacs. I noticed that some new files: #hashtable.c#, #readme.txt#, hashtable.c~, and makefile~ have been created. I was wondering what these files were. Are these important, and if not, how do I tell emacs to stop making them? I'm also curious why readme.txt doesn't get a tilde file and makefile doesn't get a sharp file.
The file with the ~ is a backup file that automatically gets created when you save a file. The #readme.txt# is the file being currently edited/in use (i.e., the autosave version). That will usually go away (unlike the ~ file) when you exit emacs normally (if it crashes or gets killed the # files may stay around).
You might find this page about emacs backup files of interest, and this SO question: How do I control how Emacs makes backup files?
You can prevent backup files from being created with this:
(setq make-backup-files nil)
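If you'd rather keep the backups but collect them in one directory instead, a minimal sketch using a built-in variable (the path is arbitrary):
(setq backup-directory-alist '(("." . "~/.emacs.d/backups/")))  ; back up every file ("." matches all) into one directory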
I recommend installing no-littering. It automatically puts backup files (file~) in ~/.emacs.d/var/backup/. It doesn't do anything about autosaves (#file#), but there is a note about putting those files in a specified directory in the README:
(setq auto-save-file-name-transforms
`((".*" ,(no-littering-expand-var-file-name "auto-save/") t)))
Neither of these things actually prevents Emacs from creating these files, but I'm assuming most people actually want these files (in case of a crash), but don't want them strewn all over the filesystem.
For #files# you have to run rm "#file#" from the terminal, because an unquoted # starts a shell comment, so rm #file# does nothing.
For file~ you can simply type rm file~.
Maybe you could try:
find . -name '#*#' | xargs rm
Warning: this will also remove matching files in subdirectories.

Untar a UNIX-based operating system

I am trying to untar UNIX-based operating system from a .tar.gz file. In order to do so I use the following command:
tar -xvf rootfs.tar.gz -o
The -o flag is there so as not to preserve the ownership of the files (preserving it gave me some problems). The problem is that when a symbolic link is untarred, the following message shows up:
Cannot create symlink to `toto': Operation not permitted
Moreover, mknod also gives problems:
dev/tty0: Cannot mknod: Operation not permitted
I am on a FAT filesystem. Does anyone know how to untar that file?
Thanks in advance
If the file is a tar.gz you must use:
tar -xvzf rootfs.tar.gz
And notice that a FAT filesystem doesn't support symbolic links, so tar doesn't know how to create them on that FS, which explains the "Operation not permitted" error.
+1 for Ivan's answer.
Please note that:
flags always go right after the name of the command!
You will need to study "man tar" to see what other options you want, e.g. to preserve owner, permissions, creation time, etc.
The correct answer is that if you're trying to untar a UNIX root file system, that's going to include special files such as device nodes (which is why tar is invoking mknod).
To create those successfully, tar must be allowed to run as root. Therefore, the correct answer is to use sudo, like so:
sudo tar -xvzf rootfs.tar.gz
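Since FAT cannot represent symlinks or device nodes at all, the other half of the fix is to extract onto a filesystem that can; a sketch, assuming /mnt/ext4 is a mounted POSIX-capable volume:
sudo mkdir -p /mnt/ext4/rootfs   # hypothetical target on ext4 (or any POSIX filesystem)
sudo tar -xvzf rootfs.tar.gz -C /mnt/ext4/rootfs   # -C changes into that directory before extracting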
Try this to untar a plain tar file; hopefully it will work without any problems, as it solved my issue:
tar -xvvf foo.tar

Add last n lines of files to tar/zip

I need to regularly send a collection of log files that can grow quite large, so I would like to only send the last n lines of each of the files.
for example:
/usr/local/data_store1/file.txt (500 lines)
/usr/local/data_store2/file.txt (800 lines)
Given a file with a list of needed files named files.txt, I would like to create an archive (tar or zip) with the last 100 lines of each of those files.
I can do this by creating a separate directory structure with the tail-ed files, but that seems like a waste of resources when there's probably some piping magic that can accomplish it. The full directory structure must also be preserved, since files can have the same names in different directories.
I would like the solution to be a shell script if possible, but perl (without added modules) is also acceptable (this is for Solaris machines that don't have ruby/python/etc. installed on them).
You could try
tail -n 10 your_file.txt | while read -r line; do zip /tmp/a.zip "$line"; done
where a.zip is the zip file and 10 is n or
tail -n 10 your_file.txt | xargs tar -czvf test.tar.gz --
for tar.gz
You are focusing on a specific implementation instead of looking at the bigger picture.
If the final goal is to have an exact copy of the files on the target machine while minimizing the amount of data transferred, what you should use is rsync, which automatically sends only the parts of the files that have changed, and can also compress while sending and decompress while receiving.
Running rsync doesn't need any more daemons on the target machine than the standard sshd, and to set up automatic transfers without passwords you just need to use public key authentication.
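A minimal sketch of such a transfer (host and paths are placeholders):
rsync -az /usr/local/data_store1/ user@backuphost:/backups/data_store1/   # -a preserves the tree, -z compresses in transit; only changed parts are sent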
There is no piping magic for that; you will have to create the folder structure you want and zip that.
mkdir tmp
for i in /usr/local/*/file.txt; do
  mkdir -p "$(dirname "tmp/${i:1}")"   # recreate the directory tree under tmp/ (${i:1} strips the leading /)
  tail -n 100 "$i" > "tmp/${i:1}"      # keep only the last 100 lines
done
zip -r zipfile tmp/*
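If the list really does live in files.txt (one absolute path per line), the same idea can be driven by the list; a sketch along those lines:
mkdir tmp
while read -r f; do
  mkdir -p "$(dirname "tmp/${f#/}")"   # rebuild the directory tree under tmp/ (${f#/} strips the leading /)
  tail -n 100 "$f" > "tmp/${f#/}"      # last 100 lines only
done < files.txt
zip -r zipfile tmp/*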
Use logrotate.
Have a look inside /etc/logrotate.d for examples.
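For illustration, a minimal stanza (paths and counts are placeholders, and Solaris may need logrotate installed separately):
/usr/local/data_store*/file.txt {
    weekly
    rotate 4
    compress
}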
Why not put your log files in SCM?
Your receiver creates a repository on his machine, from which he retrieves the files by checking them out.
You send the files just by committing them; only the diff will be transmitted.
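For example, with git (repositories on both ends are assumed to be set up already):
git add /usr/local/data_store1/file.txt   # stage the changed log
git commit -m "log snapshot"
git push   # only the delta-compressed changes travel over the wire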
