tar command extracts nothing - Unix

I want to restore a previously backed-up directory (the spect folder, into /opt).
Environment (Solaris 10):
root@sms01sxdg /opt> ls -ltr
total 22
[...]
drwxr-xr-x 2 specadm nms 1024 Dec 24 13:40 spect
root@sms01sxdg /opt>
root@sms01sxdg /opt> df -kh
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d0 9.8G 4.2G 5.6G 43% /
[...]
/dev/md/dsk/d30 7.9G 94M 7.7G 2% /opt/spect
root@sms01sxdg /opt>
I previously backed up the folder with the tar command: tar cvf spect.tar spect
That worked successfully, and when I run tar -tf spect.tar it lists the sub-folders and files inside the archive.
When I try to restore the backup it doesn't work; more precisely, it returns nothing and no files are extracted.
root@sms01sxdg /opt> tar -xvf /export/specbackup_db/spect.tar .
root@sms01sxdg /opt> ls -l spect/
total 0
root@sms01sxdg /opt>
I suspect that the folder I backed up is a mount point and that this is the cause of the problem, but the mount point is still mounted.
I have run this kind of command many times before, yet this is the first time I have encountered this behavior.

Try again after removing the dot at the end of the command. Any names listed after the tar file are treated as the members to extract; for example, tar -xvf tarfile.tar file1 file2 extracts only file1 and file2 from tarfile.tar. Your archive's members are named spect/..., so the trailing . matches none of them and nothing is extracted.
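For example, assuming the archive was created from /opt with relative member names (which tar cvf spect.tar spect would produce), a restore could look like this sketch:
cd /opt
tar -tf /export/specbackup_db/spect.tar | head   # check the member names: they should start with spect/
tar -xvf /export/specbackup_db/spect.tar         # no name arguments, so every member is extracted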

Related

backup command on HP-UX fails

I have two big files and I am trying to back them up to a tape drive.
The operating system is HP-UX and the tape drive device is /dev/rmt/2m.
The command I used for the backup was tar cvf /dev/rmt/2m file1, and after that the same for file2.
But when I use the command to view the tape,
tar tvf /dev/rmt/2m
it shows me only one file in the backup (the last one, file2).
Can you please help with this?
Where is the problem: in the backup command or in the command used to view the tape?
Thanks in advance
I'm not sure about HP-UX specifically, but the device you are using is probably auto-rewinding: the tape rewinds every time the device is closed, so the second tar overwrites the first. Either write both files in a single archive:
tar cvf /dev/rmt/2m file1 file2
or use the no-rewind device (note the trailing n), which leaves the tape positioned after the first archive:
tar cvf /dev/rmt/2mn file1
tar cvf /dev/rmt/2mn file2
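With the no-rewind device both archives end up on the tape one after the other, and you step over file marks with mt to read them back. A rough sketch; the exact device names and mt options are assumptions to verify against your HP-UX man pages:
mt -f /dev/rmt/2mn rew     # rewind to the beginning of the tape
tar tvf /dev/rmt/2mn       # list the first archive (file1)
mt -f /dev/rmt/2mn rew     # rewind again ...
mt -f /dev/rmt/2mn fsf 1   # ... and skip one file mark to reach the second archive
tar tvf /dev/rmt/2mn       # list the second archive (file2)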

Source file size increases during rsync

I back up a directory with rsync. I checked the directory size before starting the rsync with du -s, which reported a directory size of ~1 TB.
Then I started the rsync, and during the sync I looked at the size of the backup directory to estimate the end time. When the backup grew much larger than 1 TB I got curious: it seems that the size of many files in the source directory increases. I ran du -s on a file in the source before and after the rsync process copied that file:
## du on source file **before** it was rsynced
# du -s file.dat
2 file.dat
## du on source file **after** it was rsynced
# du -s file.dat
4096 file.dat
The rsync command:
rsync -av -s --relative --stats --human-readable --delete --log-file someDir/rsync.log sourceDir destinationDir/
The file system on both sides (source, destination) is BeeGFS 6.16 on RHEL 7.4, kernel 3.10.0-693
Any ideas what is happening here?
file.dat may be a sparse file. Use the --sparse option:
-S, --sparse
Try to handle sparse files efficiently so they take up less
space on the destination. Conflicts with --inplace because it’s
not possible to overwrite data in a sparse fashion.
Wikipedia about sparse files:
a sparse file is a type of computer file that attempts to use file system space more efficiently when the file itself is partially empty. This is achieved by writing brief information (metadata) representing the empty blocks to disk instead of the actual "empty" space which makes up the block, using less disk space.
A sparse file can be created as follows:
$ dd if=/dev/zero of=file.dat bs=1 count=0 seek=1M
Now let's examine and copy it:
$ ls -l file.dat
.... 1048576 Nov 1 20:59 file.dat
$ rsync file.dat file.dat.rs1
$ rsync --sparse file.dat file.dat.rs2
$ du -sh file.dat*
0 file.dat
1.0M file.dat.rs1
0 file.dat.rs2
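To check which source files are sparse, compare a file's apparent size with the allocated size that du reports by default; --apparent-size is a GNU coreutils option and should be available on RHEL 7:
$ du -sh --apparent-size file.dat
1.0M file.dat
$ du -sh file.dat
0 file.dat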

How to rsync web repo

I want to rsync everything in mirrors.kernel.org/centos/6/updates/x86_64/Packages/ to a directory on my server. I do NOT want to wind up with a directory structure like ~/mirrors.kernel.org/centos/6/updates/x86_64/Packages/
[joliver@lake ~]$ rsync mirrors.kernel.org/centos/6/updates/x86_64/Packages/ CentOS6/
rsync: change_dir "/home/joliver/mirrors.kernel.org/centos/6/updates/x86_64/Packages" failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1039) [sender=3.0.6]
[joliver@lake ~]$ mkdir -p mirrors.kernel.org/centos/6/updates/x86_64/Packages
[joliver@lake ~]$ rsync mirrors.kernel.org/centos/6/updates/x86_64/Packages/ mirrors.kernel.org/centos/6/updates/x86_64/Packages
skipping directory .
[joliver@lake ~]$ rsync mirrors.kernel.org/centos/6/updates/x86_64/Packages/ mirrors.kernel.org/centos/6/updates/x86_64/Packages/
skipping directory .
After I have that, I'd like to be able to rsync just the deltas. I suppose rerunning the rsync and then finding all files with a ctime newer than the last run would suffice, but it would be nice if there were a neater way to grab new/changed files.
rsync -avSHP --delete --exclude "local*" --exclude "isos" \
mirrors.kernel.org::centos/6/updates/x86_64/Packages/ CentOS6/
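This pulls from the kernel.org rsync daemon (note the :: syntax) straight into CentOS6/, so no mirrors.kernel.org/... tree is created locally. It also covers the delta question: re-running the same command transfers only new or changed packages, and --delete removes local copies of packages that disappeared upstream. A nightly cron entry is one way to automate it; the schedule and the -q (quiet) flag below are just an illustration:
# run every night at 02:30
30 2 * * * rsync -aqSH --delete --exclude "local*" --exclude "isos" mirrors.kernel.org::centos/6/updates/x86_64/Packages/ /home/joliver/CentOS6/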

Unix: how to tar only the first N files of each folder?

I have a folder containing 2 GB of images, with sub-folders several levels deep.
I'd like to archive only N files from each (sub-)folder into a tar file. I tried to use find, then tail, then tar, but couldn't get it to work. Here is what I tried (assuming N = 10):
find . | tail -n 10 | tar -czvf backup.tar.gz
… which outputs this error:
Cannot stat: File name too long
What's wrong here? Thinking about it, even if it worked, I believe it would tar only the first 10 files overall, not the first 10 files of each folder.
How can I get the first N files of each folder?
A proposal with some quirks: the order is determined only by the order in which find emits names, so "first" isn't well-defined, and the pipeline relies on GNU awk (for the three-argument match()) and GNU xargs (for -d).
find . -type f |
awk -v N=10 -F / 'match($0, /.*\//, m) && a[m[0]]++ < N' |
xargs -r -d '\n' tar -rvf /tmp/backup.tar
gzip /tmp/backup.tar
Comments:
use find . -type f to ensure that files have a leading directory-name prefix, so the next step can work
the awk command tracks such leading directory names, and emits full path names until N (10, here) files with the same leading directory have been emitted
use xargs to invoke tar - we're gathering regular file names, and they need to be arguments to that archiving command
xargs may invoke tar more than once, so we'll append (-r option) to a plain archive, then compress it after it's all written
Also, you may not want to write a backup file into the current directory, since you're scanning that - that's why this suggestion writes into /tmp.
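If "first" should mean first in lexicographic order, sorting the list before awk makes the selection deterministic; the same GNU-tool assumptions apply, and, like the original, this breaks on file names containing newlines:
find . -type f | sort |
awk -v N=10 -F / 'match($0, /.*\//, m) && a[m[0]]++ < N' |
xargs -r -d '\n' tar -rvf /tmp/backup.tar
gzip /tmp/backup.tar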

rsync - transfer only files whose file name starts with today's date

I want to transfer files using rsync to an FTP server at the end of every day.
My current rsync script:
rsync -avz /var/spool/asterisk/monitorDONE/MP3 pbciftp:/home/voicefiles/ftp/`date +%Y.%m.%d`
The issue
I want rsync to transfer files that have today's date in their file name; a file might for example be called 20130527_agent_number_campaign.mp3.
So I need rsync to find all files whose file name starts with 20130527 and transfer them.
The most flexible way is probably with find. One caveat: paths read via --files-from are interpreted relative to the source directory (rsync strips any leading slashes), so feed it paths relative to the MP3 directory rather than absolute ones, and anchor the pattern at the front since the names start with the date. Something like:
cd /var/spool/asterisk/monitorDONE/MP3 && \
find . -name "`date +%Y%m%d`*" -print0 | \
rsync -avz --files-from=- --from0 \
. pbciftp:/home/voicefiles/ftp/`date +%Y.%m.%d`
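Alternatively, rsync's own filters can do the selection without find; this sketch assumes the MP3 directory is flat (with nested sub-directories you would also need --include="*/"), and note the trailing slash on the source so the files land directly in the dated directory:
rsync -avz --include="`date +%Y%m%d`*" --exclude="*" \
/var/spool/asterisk/monitorDONE/MP3/ pbciftp:/home/voicefiles/ftp/`date +%Y.%m.%d`/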
