Unix Copy Recursive Including All Directories

I have the following two directories:
~/A
    drawable/
        imageb.png
    new/
        newimage.png
~/B
    drawable/
        imagec.png
When I run cp -r ~/A/* ~/B, newimage.png and its new/ folder are copied across to ~/B; however, imageb.png is not copied into ~/B/drawable.
Could you explain why this is the case and how I can get around this?

Use tar instead of cp:
(cd A ; tar cf - *) | (cd B ; tar xf -)
or more compactly (if you're using GNU tar):
tar cC A -f - . | tar xC B -f -
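Note that the glob * in the first form skips dot files; if hidden files matter, using . instead archives everything in A:
(cd A ; tar cf - .) | (cd B ; tar xf -)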

If you are on Linux you can use the -r option, e.g.:
cp -r ~/A/. ~/B/
If you are on BSD you could use the -R option, e.g.:
cp -R ~/A/. ~/B/
For more information on exactly which option to pass, refer to man cp.
Also note that files you do not have read permission on will not be copied.
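The trailing /. is what does the trick: ~/A/. names the contents of A itself (dot files included), and cp merges them into the existing tree under ~/B. A sketch of the expected result with the layout from the question:
$ cp -r ~/A/. ~/B/
$ ls ~/B/drawable
imageb.png  imagec.png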

Related

Delete a folder named ~ in unix

Under my home directory I see a directory named ~. I guess I must have accidentally copied my home directory somehow.
Anyway, it's eaten up all my space and I'd like to remove it but obviously just running rm -r ~ will delete the entire contents of my home directory.
Any idea how to delete that ~ directory without any damage?
Just add a \ before it: rm -rf \~.
Escape it so the shell doesn't expand the tilde. Any of these will do:
rm -r '~'
rm -r \~
rm -r ~/'~'
rm -r ~/\~
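To see why the quoting matters, compare what the shell does with a bare tilde versus a quoted one:
echo ~     # expands to your home directory, e.g. /home/you
echo '~'   # quoted: prints a literal ~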
I would use rm -rf \~
The backslash escapes the tilde and should stop you from deleting your home directory.
You can build an ls | grep -v <other files> pipeline that filters out all the other files, so that only the file with the weird name is listed.
Then you do:
rm $(ls | grep -v <other files>)
Obviously, you need to test the ls | grep part thoroughly before wrapping it in rm.
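A less fragile sketch of the same idea, assuming your find supports -maxdepth (GNU find does): match the odd name directly instead of inverting a grep:
find . -maxdepth 1 -name '~' -exec rm -r {} +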

Find and tar files on Solaris

I've got a little problem with my bash script. I'm a newbie in the Unix world, so I'm finding this exercise difficult. What I have to do is find files on a Solaris server with a specific name, modified within a specific time, and archive them in one .tar file. The first two points are easy, but the archiving is a nightmare: I keep archiving the whole directory tree leading to the file (with the file at the end) into the .tar file, when I need just the file. My code looks like this:
find ~ -name "$maska" -mtime -$dni | xargs -t -L 1 tar -cvf $3 -C
where $maska is the name of the file, $dni refers to the modification time, and $3 is just the archive name. I found out about the -C switch, which lets me jump into the folder where the desired file is, but when I use it with xargs, tar seems just to jump there and do nothing else.
So my question is:
1) is there any possibility of achieving my goal this way?
Please remember, I don't work with GNU tar. And I HAVE TO use the commands tar and find.
Edit: I'd like to clarify my problem. When I use the script for, say, file a, it should search from the starting point given in the script (here ~), and everything it finds should end up in one tar file.
What I got right now is (I'm in /home/me/Scripts):
-bash-3.2$ ./Script.sh a 1000 backup
a /home/me/Program/Test/a/ 0K
a /home/me/Program/Test/a/a.c 1K
a /home/me/Program/Test/a/a.out 8K
So script has done some packing. Next I want to see my packed file, so:
-bash-3.2$ tar -tf backup
/home/me/Program/Test/a/
/home/me/Program/Test/a/a.c
/home/me/Program/Test/a/a.out
And that's the problem. The tar file has all the paths in it, so if I untar it, instead of getting just the files I wanted to archive, they are put back in their old places. For visualisation:
-bash-3.2$ ls
Script.sh* Script.sh~* backup
-bash-3.2$ tar -xvf backup
x /home/me/Program/Test/a, 0 bytes, 0 tape blocks
x /home/me/Program/Test/a/a.c, 39 bytes, 1 tape blocks
x /home/me/Program/Test/a/a.out, 7928 bytes, 16 tape blocks
-bash-3.2$ ls
Script.sh* Script.sh~* backup
That's the problem.
So all I want is to pack the desired files (a in the example above) into one tar file without those paths, so that it simply untars into the directory where I run Script.sh.
I'm not sure I understand what you want, but this might be it:
find ~ -name "$maska" -mtime -$dni -exec tar cvf $3 {} +
Edit: second attempt, after you wrote that the main issue is the absolute paths:
( cd ~; find . -name "$maska" -type f -mtime -$dni -exec tar cvf $3 {} + )
Edit: third attempt, after you wrote that you want no path at all in the archive, that $maska is a directory name, and that $3 needs to be in the current directory:
mkdir ~/foo && \
find ~ -name "$maska" -type d -mtime -$dni -exec sh -c 'ln -s $1/* ~/foo/' sh {} \; && \
( cd ~/foo ; tar chf - * ) > $3 && \
rm -rf ~/foo
Replace ~/foo by ~/somethingElse if ~/foo already exists for some reason.
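Note the h in tar chf: it makes tar follow the symlinks that ln -s created, so the archive contains the real file contents rather than the links themselves.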
Maybe you can do something like this:
#!/bin/bash
find ~ -name "$maska" -mtime -$dni -print0 | while read -r -d $'\0' file; do
    d=$(dirname "$file")
    f=$(basename "$file")
    echo "$d: $f"                      # show directory and file for debug purposes
    tar -rvf tarball.tar -C "$d" "$f"  # append just the bare filename to the archive
done
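Because each file is appended with -C "$d", tar stores only the bare filename, so extracting the archive recreates the files directly in the current directory.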
I don't have a Solaris box at hand for testing :-)
First of all, my assumptions:
1. "one tar file", like you said, and
2. no absolute paths, i.e. if you back up ~/dir/file, you should be able to test extracting it in /tmp, obtaining /tmp/dir/file.
If the problem is the full paths, you should replace
find ~ # etc
with
cd ~ || exit
find . # etc
If the tar archive's name isn't absolute, instead, it should be something like
(
    cd ~ || exit
    find . etc etc | xargs tar cf - etc etc
) > $3
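Filled in with the variables from your script, it might look like this (a sketch only; I haven't tested it on Solaris):
(
    cd ~ || exit
    find . -name "$maska" -mtime -$dni | xargs tar cf -
) > $3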
Explanation
"(...)" runs a subshell, meaning some of the tings you change in there have no effects outside of the parens; the current directory is one of them, so "(cd whatever; foo)" means you run another shell, change its current directory, run foo from there, and then you're back in your script which never changed directory.
"cd ~ || exit" is paranoia, it means "cd ~; if that fails, exit".
"." is an alias meaning "the current directory, whatever that is"; play with "find ." vs "find ~" if you don't know what it means, you'll understand it better than if I explained it here.
"tar cf -" means that you create the tar archive on standard output; I think the syntax is portable enough, you may have to replace "-" with "/dev/stdout" or whatever works on solaris (the simplest solution is simply "tar", without the "c" command, but it's ugly to read).
The final "> $3", outside of the parens, is output redirection: rather than writing the output to the terminal, you save it into a file.
So the whole script reads like this:
- open a subshell
- change the subshell's current directory to ~
- in the subshell, find the files newer than requested, archive them, and write the contents of the resulting tar archive to standard output
- the subshell's stdout is saved to $3; because the redirection is outside the parens, relative paths are resolved relative to your script's $PWD, meaning that e.g. if you run the script from the /tmp directory you'll get the tar archive in /tmp (it would be in ~ if the redirection happened in the subshell).
If I misunderstood your question, the solution doesn't work, or the explanation isn't clear, let me know (the answer is too long, but I already know that :).
The pax command will output tar-compatible archives and has the flexibility you need to rewrite pathnames.
find ~ -name "$maska" -mtime -$dni | pax -w -x ustar -f "$3" -s '!.*/!!'
Here is what the options mean, paraphrasing from the man page:
-w writes the contents of the file operands to standard output (or to the pathname specified by the -f option) in an archive format.
-x ustar makes the output archive format the extended tar interchange format specified in the IEEE POSIX standard.
-s '!.*/!!' modifies file operands according to the substitution expression, using regular-expression syntax. Here, it deletes all characters in each file name from the beginning through the final /.
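With the paths from your earlier output, that substitution rewrites each archived name like this:
/home/me/Program/Test/a/a.c   ->  a.c
/home/me/Program/Test/a/a.out ->  a.out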

unix ls directory with sub-directory exists

In Unix, is it possible to use one command ONLY to list the directory if a sub-directory exists?
For example, I would like to list the directory name if it contains a sub-directory called "division_A"
/data/data_file/form_100/division_A
/data/data_file/form_101/division_A
/data/data_file/form_102/division_A
The desired result would be
form_100
form_101
form_102
So far I can only achieve the goal with two commands:
cd /data/data_files
echo `ls -d */division_A 2> /dev/null | sed 's,/division_A,,g'`
So I would like to ask if anyone can do it with one command.
Many Thanks!
Using find:
find /data/data_file -type d -name division_A -exec sh -c 'basename "$(dirname "$1")"' sh {} \; 2> /dev/null
If you don't mind the weird .., you can just do:
$ ls -d /data/data_file/*/division_A/..
It will output entries like /data/data_file/form_100/division_A/.. and you can access them like normal folders.
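If you'd rather print just the clean names, here is a variant sketch looping over the same glob (it assumes at least one division_A exists, otherwise the unexpanded pattern is printed):
for d in /data/data_file/*/division_A; do basename "$(dirname "$d")"; done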

Tar only the Directory structure

I want to copy my directory structure excluding the files. Is there any option in tar to ignore all files and copy only the directories recursively?
You can use find to get the directories and then tar them:
find .. -type d -print0 | xargs -0 tar cf dirstructure.tar --no-recursion
If you have more than about 10000 directories use the following to work around xargs limits:
find . -type d -print0 | tar cf dirstructure.tar --no-recursion --null --files-from -
Directory names that contain spaces or other special characters may require extra attention. For example:
$ mkdir -p "backup/My Documents/stuff"
$ find backup/ -type d | xargs tar cf directory-structure.tar --no-recursion
tar: backup/My: Cannot stat: No such file or directory
tar: Documents: Cannot stat: No such file or directory
tar: backup/My: Cannot stat: No such file or directory
tar: Documents/stuff: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
Here are some variations to handle these cases of "unusual" directory names:
$ find backup/ -type d -print0 | xargs -0 tar cf directory-structure.tar --no-recursion
Using -print0 with find will emit filenames as null-terminated strings; with -0 xargs will interpret arguments that same way. Using null as a terminator helps ensure that even filenames with spaces and newlines will be interpreted correctly.
It's also possible to pipe results straight from find to tar:
$ find backup/ -type d | tar cf directory-structure.tar -T - --no-recursion
Invoking tar with -T - (or --files-from -) will cause it to read filenames from stdin, expecting each filename to be separated by a line break.
For maximum effect this can be combined with options for null-terminated strings:
$ find . -type d -print0 | tar cf directory-structure.tar --null --files-from - --no-recursion
Of these I consider this last version to be the most robust, because it supports both unusual filenames and (unlike xargs) is not inherently limited by system command-line sizes (see xargs --show-limits).
for i in `find . -type d`; do mkdir -p "/tmp/tar_root/$(echo "$i" | sed 's,^\./,,')"; done
pushd /tmp/tar_root
tar cf tarfile.tar *
popd
# rm -fr /tmp/tar_root
Go into the folder you want to start at (that's why we use find .).
Save the tar file somewhere else; I think I got an error leaving it right there.
Use tar with r, not c: with cf, each xargs batch recreates the archive, so you only keep the last set of subdirectories, whereas tar r appends to the tar file.
Use --no-recursion: find is already giving you the whole list of directories, so you don't want tar to recurse into them.
find . -type d | xargs tar rf /somewhereelse/whatever-dirsonly.tar --no-recursion
Then check what you got:
tar tvf /somewhereelse/whatever-dirsonly.tar | more
For AIX:
tar cvfD some-tarball.tar `find /dir_to_start_from -type d -print`

Unix cp argument list too long

I am using AIX.
When I try to copy all the files in a folder to another folder with the following command:
cp ./00012524/*.PDF ./dummy01
The shell complains:
ksh: /usr/bin/cp: 0403-027 The parameter list is too long.
How do I deal with this? My folder contains 8xxxx files, each 4x kb to 1xx kb in size. How can I copy them quickly?
Use the find command on *nix:
find ./00012524 -type f -name "*.PDF" -exec cp {} ./dummy01/ \; -print
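This runs one cp per file, which sidesteps the argument-list limit at the cost of speed. If GNU findutils and coreutils happen to be installed (stock AIX tools are not GNU), -exec ... + together with cp -t batches many files per cp invocation, as a sketch:
find ./00012524 -type f -name "*.PDF" -exec cp -t ./dummy01/ {} +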
The error comes from the shell rather than cp: the wildcard expands to more arguments than fit on one command line.
One possibility is to run cp multiple times with narrower file patterns, like:
cp ./00012524/A*.PDF ./dummy01
cp ./00012524/B*.PDF ./dummy01
cp ./00012524/C*.PDF ./dummy01
...
cp ./00012524/Z*.PDF ./dummy01
You can also copy through the find command:
find ./00012524 -name "*.PDF" -exec cp {} ./dummy01/ \;
$ ( cd 00012524; ls | grep '\.PDF$' | xargs -I{} cp {} ../dummy01/ )
The -t flag to cp is useful here, since it lets the destination directory come first so xargs can append as many source files as fit on each command line (note that -t is a GNU coreutils extension and may not exist on AIX):
find ./00012524 -name \*.PDF -print | xargs cp -t ./dummy01
Another command that copies a large number of files from one directory to another:
find /path/to/source/ -name "*" -exec cp -ruf "{}" /path/to/destination/ \;
This helped me a lot.
You should be able to use a for loop, e.g.:
for f in ./00012524/*.PDF
do
    cp "$f" ./dummy01
done
I have no way of testing this, but it should work in theory.
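The reason it avoids the error is that the shell expands the glob internally and runs one cp per file, so no single command line ever holds the whole list.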
You can do something like this, copying each entry of the directory in turn:
# use -rv to check the status of the command verbosely
for i in /from_dir/*; do cp -rv "$i" /to_dir/; done
